\section{Introduction}
Pouring is one of the most commonly executed tasks in humans' daily lives, especially for food preparation. In cooking scenarios, it is the most frequently executed motion \cite{pauliusiros2019}. Pouring is a challenging task given the variety of containers and materials we can find in a kitchen. Humans excel at pouring liquids and solid materials, a skill that a cooking robot needs to master. This skill becomes particularly tricky when there is the need to execute it with precision and speed and for different setups and conditions.
Accurate pouring is not a trivial task, and it is affected by many factors, including the property of the material, the geometry of the source and receiving containers, the manipulation of the source container, to name a few. It is a problem that cannot be solved using traditional control policies for two reasons:
\begin{enumerate}
\item Lack of precise dynamics models: modeling fluid or granular motion precisely is either impossible or unfeasible because there are many unobservable parameters and those parameters vary with many factors such as the material and the shape of the pouring device.
\item Irreversibility of the task: the poured material cannot return to the pouring device once it has been poured out. Therefore, the system's response cannot tolerate any overshoot.
\end{enumerate}
The two difficulties go hand-in-hand. The irreversibility of pouring calls for an approach that can predict. Figure \ref{fig-human-pouring} shows an example of velocity and volume sequences collected from a person pouring water. It can be seen that after the backward rotation starts, water still comes out of the source container and the volume in the receiving container keeps increasing for a while. Therefore, the approach needs to predict when to start the backward rotation to reach the goal. However, prediction requires a precise model. In this paper, we propose a self-supervised learning approach that learns from demonstrations that are either unsupervised or performed by unskilled demonstrators. The approach self-supervises the learning process by taking in all demonstrations without checking their performance or labeling them as successful or unsuccessful. Instead, it uses the real outcomes of the demonstrations as the desired goals. It is drastically different from traditional learning from demonstration (LfD) approaches \cite{Billard08chapter} that learn optimal motion trajectories from skilled demonstrators.
We designed a data collection system and collected 284 human pouring demonstrations. This new data collection approach extends the Daily Interactive Manipulation (DIM) dataset \cite{huang2019dataset}. To learn water pouring dynamics, pouring motion, and outcomes, we have developed a peephole long short-term memory (LSTM) learning structure that uses the previous step's outcome as the current input. The cell unit in the peephole LSTM learns, memorizes, and updates the liquid or granular material's movement dynamics over time. It allows the peephole LSTM model to learn the relationship between the manipulation motion and the disparity between the current outcome and the desired outcome based on the material's movement dynamics.
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{Figures/Velocity_Volume_Human.png}
\caption{An example of a human pouring water. It can be seen that while the rotation is going backward, the volume is still increasing. Velocity: angular velocity of the source container measured counterclockwise; negative values (in radians per second) mean the container is rotating forward, and positive values mean backward. Volume: volume of water in the receiving container.}
\label{fig-human-pouring}
\end{figure}
We have evaluated the proposed approach with five containers: one from the training set and four new containers that do not differ significantly from the training set in size. The mean volume error for those five containers is as small as 4.12 milliliters (mL) and no higher than 12.35mL, a result that is better than the pouring accuracy of a regular person. The pouring speed of the proposed approach is on par with that of a regular person, which is about 5 to 10 times faster than state-of-the-art approaches \cite{7989307, 8653969}.
The learned model exhibits either much higher pouring error or much greater pouring standard deviation when evaluated on unaccustomed containers that are far different from the ones in the training set. To generalize the learned model to unaccustomed containers, we propose a self-supervised practicing approach. It uses the learned model to practice with the unaccustomed containers, collects the motions and outcomes from the practices, and uses the real outcomes as the desired outcomes to fine-tune the model. We refer to this approach as {\it generalization by self-supervised practicing (GSSP)}, with which we achieved a reduction in mean volume error from values of more than 50\,mL to values lower than state-of-the-art works.
Our contributions in this work include
\begin{enumerate}
\item Data collection system and pouring motion dataset. We designed a data collection system that captures the motion signals of human pouring employing a motion tracker and a force sensor (for volume measurement). We have collected 284 pouring motions for nine different sizes of source containers.
\item Self-supervised learning from demonstrations and outcomes. We present a motion model for accurate pouring learned from human demonstrations. The pouring target in the input of the model during training is set to be the \textbf{actual poured volume} instead of the desired target. It allows the learning to be self-supervised since the approach does not have to specify or know the desired outcomes.
\item The learned pouring skill achieves human-like pouring accuracy and speed. The proposed approach models the complicated spatial-temporal patterns of human pouring and, as a result, pours smoothly and as fast as humans. In our dataset of human demonstrations, the pouring time ranges from 3.2 to 8.7 seconds, while the pours executed by our model range from 2.8 to 7.6 seconds. It pours faster than related methods \cite{7989307, 8653969}, which report 25 seconds and 20--45 seconds per pour, respectively. It also achieves lower pouring error than existing methods that use a single modality to monitor the poured volume \cite{7989307, do2018accurate, chaudo2018, tianze2019}.
\item Generalizing pouring skills by self-supervised practicing. We present the generalization by self-supervised practicing (GSSP) approach, which fine-tunes the model using the actual pouring outcomes of a robot. It allows the robot to pour accurately with unaccustomed containers and materials.
\end{enumerate}
\subsection{Related Work on LfD}
One popular LfD approach is Gaussian mixture regression (GMR) \cite{4650593}. The approach first learns the spatial-temporal relation of a motion. Then, it produces a novel sequence of the motion by generating the position or state of the motion corresponding to each time step. Functional principal component analysis (fPCA) is another approach for spatial-temporal motion learning. Traditional PCA can be applied on both the temporal and spatial axes of a motion to find how the motion varies at different time steps \cite{lim2005, min2009}. fPCA extends PCA by representing a motion in a continuous-time format instead of as a collection of points \cite{ramsay_etal2009, dai2013functional, huang2015, paulius2016}.
By nature, the spatial-temporal GMR and fPCA consider the motion as a whole rather than as a dynamical system, which makes it inconvenient to incorporate timely feedback while executing the motion.
In comparison, GMR can be configured to learn the relationship between states and actions and thus behave as a dynamical system \cite{Hersch08TRO}.
Alternatively, one can use movement primitives (MP). The first MPs, the dynamic movement primitives (DMP), consist of three components: 1) a critically damped spring model that guarantees the convergence of the motion state to a goal state, 2) a forcing function that encodes the shape that the motion is expected to follow, and 3) a canonical system that modulates the temporal profile of the motion \cite{Ijspeert:2013:DMP:2432779.2432781}. Its variants include but are not limited to interactive primitives (IP), which enable the interaction between two agents \cite{6907265}, and probabilistic movement primitives (ProMP), which allow more flexibility in the forcing function \cite{NIPS2013_5177}.
The general GMR, MP, and fPCA involve the usage of a temporal alignment algorithm of the motion data, such as dynamic time warping (DTW) \cite{sakoe_etal1978}, which may damage certain spatial-temporal patterns in the data in unclear ways.
In comparison, a recurrent neural network (RNN) does not require aligning the motion data in time. It is designed to process time sequences and is capable of representing dynamical systems \cite{han2004, trischler2016}. RNNs have been successfully applied to text generation as well as motion generation \cite{sutskever2014, graves2013, huang2017learning_pour}.
\subsection{Related Work on Pouring} \label{sec:related_works}
In \cite{Pan:2016:RMP:3038594.3038659}, the authors propose trajectory planning algorithms for liquid body transfer that use fluid simulation, while \cite{Pan2017FeedbackMP} learns to predict the state of the fluid using neural networks. However, a fluid model of the liquid is in general difficult to obtain, for which reason \cite{TAMOSIUNAITE2011910} proposes adding goal learning to shape learning with MPs for liquid transfer. \cite{6614613} considers the amount of liquid in the source container while pouring and proposes a liquid transfer algorithm based on a parametric hidden Markov model.
The difficulty of generating pouring motion increases when pouring accuracy is essential. The demand for accurate pouring is observed in casting factories where molten metal is poured into molds. \cite{7068564} proposes predicting the residual pouring amount of the liquid to increase the accuracy of pouring. \cite{4758180} introduces predictive sequence control, which suppresses the increase of error when the pouring amount increases.
Humans also control the poured amount, for which they combine pouring with shaking and tapping \cite{doi:10.1142/S0219843615500309}. Efforts have been made to estimate the volume or height of the poured amount in the receiving container and to use the estimate as real-time feedback to a simple PID controller. \cite{7989307} uses a deep neural network to estimate the volume of liquid in a cup from visual data and uses PID controllers to control the rotation of a robot arm. In 30 pours, the algorithm achieves an average error of 38mL at 25 seconds per pour. \cite{do2018accurate} uses an RGB-D point cloud of the receiving cup to determine the liquid height and a PID controller to control the rotating angle of the source cup. They achieve mean errors of 23.9mL, 13.2mL, and 30.5mL for three different receiving cups. In both algorithms, the PID controller stops and rotates the source container back to its original angle when the estimated volume/height reaches the target. However, this technique may lead to over-pouring, since liquid is still coming out of the source container when the backward rotation starts, as shown in Figure \ref{fig-human-pouring}. Moreover, the generalization of those vision-based approaches is limited when there is variation in the color of the receiving container, the lighting conditions, the background, or the type of pouring material.
Instead of using PID, \cite{chaudo2018} learns a policy using reinforcement learning in simulation and transfers the policy to actual robots. The policy performs pouring to the same target heights for which it was trained. It reaches an average error of 19.96mL over 40 pouring trials. However, the authors also use a vision-based system to detect the height of the liquid in the receiving container, leading to the same limitations discussed above.
In \cite{hong2019pouring_audio}, the authors rely on an audio spectrogram to determine the volume poured by the robot. The mean volume errors reported for different receiving containers range from 6.42mL to 13.79mL. Such small errors are achieved by attaching a spout to the opening of the source containers, which reduces the pouring speed.
\cite{8202301, 8653969} derive analytical pouring models for source containers with known geometry and extend the models to source containers with similar geometry. The most recent work \cite{8653969} uses both vision and weight during pouring and achieves a pouring error of less than 5mL. However, the proposed system's pouring time ranges from 20 to 45 seconds. Humans can also achieve small pouring errors if asked to pour slowly; in our dataset, humans took 3.2 to 8.7 seconds to pour water. The authors in \cite{tianze2019} apply model predictive control (MPC) based on a recurrent neural network that estimates the poured volume. The approach achieves average errors of 14.25mL, 18.25mL, and 26.13mL for three unseen source containers. Another work \cite{huang2017learning_pour} presented an LSTM model trained on demonstration data; however, the learned model was only evaluated in simulation.
In summary, the accurate pouring approach presented in this paper has a significant improvement over our previous works. The proposed peephole LSTM approach drastically outperforms the MPC algorithm presented in \cite{tianze2019}. The proposed approach was applied to a real robot, whereas in \cite{huang2017learning_pour} the pouring motion velocities were generated in simulation. In \cite{huang2019accurate}, we presented limited preliminary results on pouring as an abstract report. A generalization in practice (GiP) approach was proposed in \cite{wilches2020generalizing} with limited experiments on pouring water. This paper presents a comprehensive description of the peephole LSTM pouring motion generation approach and the generalization by self-supervised practicing (GSSP), both thoroughly evaluated with numerous experiments in real-world scenarios with a real robotic system.
\section{Problem Description \& Approach}
In this work, we do not consider the transfer of the source container, which is essentially pick-and-place, and only focus on controlling the flow of the liquid by manipulating the source container. We consider the receiving container to be large enough to prevent spilling.
The main movement of the source container is its rotation, which resides mostly in a 2-dimensional plane. This allows the motion to be simplified to the rotation alone. The anchor of the rotation is fixed approximately at the middle point of the height of each container.
This simplification is also applied in \cite{7989307, 8653969, do2018accurate, 8202301}, which makes our assumptions reasonable.
Volume can be perceived visually and is intuitive for measuring liquid. Therefore, in this work, we use volume to represent the amount of liquid.
We describe the pouring process as a result of rotating the source container. Initially, a certain volume of liquid exists in the source container. If the source container is full, then the liquid flows out as the source container starts rotating.
If the source container is not full, then there will be a delay of the liquid flowing out after the source container has started rotating.
The liquid flows into and stays in the receiving container, and therefore the poured volume only increases and never decreases. When the source container stops rotating, the liquid may either instantly stop flowing out or keep flowing for a short time until the surface of the liquid inside the source container is level. The pouring process is sequential, and the poured volume is determined by the trajectory of the rotation velocities of the source container.
We model the pouring process as a discrete-time series:
\begin{algorithmic}[1]
\For {$i$ in ($1, 2, \dots$) }
\State $t = t_1 + (i-1)\Delta t$
\State $\theta(t + \Delta t) = \theta(t) + \omega(t)\Delta t$
\State $vol(t + \Delta t) = F(\omega(t),\theta(t),vol(t))$
\EndFor
\end{algorithmic}
where $t_1$ is the initial time instant, $\Delta t$ is the time interval, $\theta(t)$ and $\omega(t)$ are the rotation angle and angular velocity of the source container, respectively, $vol(t)$ is the poured volume, $F(\cdot)$ denotes the pouring system. We also illustrate the process in Figure \ref{fig-system_illus}. We do not impose a strict restriction on the initial angle $\theta(t_1)$ but assume that it is close to zero.
The effect of the velocity $\omega(t)$ executed at time $t$ is observed at the next time step, $t + \Delta t$, and the effects are the next rotation angle $\theta(t + \Delta t)$ and the next poured volume $vol(t + \Delta t)$. Other factors that affect the pouring behavior considered in this paper are the shape of the source container, the initial volume in the source container, and the target volume in the receiving container.
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{Figures/system.png}
\caption{The sequential pouring process with the input being the current angular velocity and the current poured volume and the output being the poured volume for the next time step}
\label{fig-system_illus}
\end{figure}
The angular velocity, $\omega(t)$, is the action that pushes the pouring process forward. To perform pouring, we can use a motion model that takes the target volume as input and generates the velocity as output. At any time step during pouring, the model takes the current poured volume as input, compares it with the target volume, and adjusts the velocity accordingly.
The model is represented as
\begin{equation} \label{eq-vel_gen}
\omega(t) = G(\omega(t - \Delta t), \theta(t), vol(t), vol_{2pour}),
\end{equation}
where $G(\cdot)$ denotes the function that relates the previous velocity $\omega(t-\Delta t)$, current angle $\theta(t)$ and volume $vol(t)$, and target volume $vol_{2pour}$ with the current velocity. The pouring process is written again as:
\begin{algorithmic}[1]
\For {$i$ in ($1, 2, \dots$) }
\State $t = t_1 + (i-1)\Delta t$
\State $\omega(t) = G(\omega(t - \Delta t), \theta(t), vol(t), vol_{2pour})$
\State $\theta(t + \Delta t) = \theta(t) + \omega(t)\Delta t$
\State $vol(t + \Delta t) = F(\omega(t),\theta(t),vol(t))$
\EndFor
\end{algorithmic}
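As a concrete illustration, the closed loop above can be rolled out in a few lines of Python. This is only a sketch: \texttt{G} and \texttt{F} are hypothetical stand-ins for the learned motion model and the physical pouring system, and the loop is truncated after a fixed number of steps rather than running indefinitely.

```python
import numpy as np

def closed_loop_pour(G, F, vol_2pour, theta_1=0.0, vol_1=0.0,
                     dt=1.0 / 60.0, max_steps=600):
    """Roll out the closed-loop pouring process of the pseudocode above.

    G: motion model, (omega_prev, theta, vol, vol_2pour) -> omega
    F: pouring system, (omega, theta, vol) -> poured volume at t + dt
    Returns the angle, velocity, and poured-volume trajectories.
    """
    theta, vol, omega = theta_1, vol_1, 0.0
    thetas, omegas, vols = [], [], []
    for _ in range(max_steps):
        omega = G(omega, theta, vol, vol_2pour)  # Eq. (eq-vel_gen)
        thetas.append(theta)
        omegas.append(omega)
        vols.append(vol)
        theta = theta + omega * dt               # integrate the rotation
        vol = F(omega, theta, vol)               # plant response
    return np.array(thetas), np.array(omegas), np.array(vols)
```

Any model $G(\cdot)$ and plant $F(\cdot)$ with the signatures shown can be plugged in; the rollout also serves as the data-generation loop for practicing.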
\subsection{Self-Supervised Learning from Demonstration and Outcomes}
The learned pouring model should have the targeted volume as one of the inputs since the pouring behavior should change based on the target. However, as one of the critical innovations in our approach, we replace the targets with the actual results in training. For example, we ask subjects to pour 150mL, but the subject pours 165mL. In this case, the input to the model is 165mL, not 150mL. This change enables the proposed self-supervised learning since the subjects in the demonstration do not have to be supervised, and the desired target does not have to be labeled. The robot can observe the actual outcomes and use them as training inputs. This approach allows the model to pour with a human-like pace but achieves better accuracy.
In the model, although it takes $vol_{2pour}$ as input, $G(\cdot)$ is not guaranteed to generate the exact $\omega(t)$'s that will lead to $vol_{2pour}$. In reality, given $vol_{2pour}$, $G(\cdot)$ generates $\omega(t)$'s whose execution leads to a certain final volume $vol_{final}$ in the receiving container. Assuming the receiving container initially holds a volume $vol_{init}$, $vol_d = |vol_{final} - vol_{init} - vol_{2pour}|$ reflects the ability of $G(\cdot)$ to fulfill $vol_{2pour}$. If $vol_d = 0$, then $G(\cdot)$ is perfect in fulfilling the goal. Conversely, $vol_{final}$ can be considered the result of executing a \emph{perfect} model $G^*(\cdot)$ whose goal is set to $vol_{2pour} = vol_{final} - vol_{init}$.
To learn $G(\cdot)$, if we use the instructed $vol_{2pour}$, i.e., the volume we \emph{intend} to reach, then the learned model will approximate the one that is given $vol_{2pour}$ but ends with $vol_{final}$. If we instead set $vol_{2pour} = vol_{final} - vol_{init}$, the learned model will approximate the perfect model $G^*(\cdot)$. In the hope of learning a more accurate motion model, we set the motion goal $vol_{2pour}$ using the actual outcome $vol_{final} - vol_{init}$.
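The relabeling amounts to one line of bookkeeping per trial. The sketch below assumes a hypothetical list of trial records, each storing the instructed target together with the volumes measured in the receiving container before and after the pour:

```python
def relabel_with_outcome(trials):
    """Replace the instructed target with the realized outcome.

    `trials` is a hypothetical list of dicts, each with keys
    `vol_2pour` (instructed target), `vol_init`, and `vol_final`
    (receiving-container volumes before/after the pour). Setting the
    training target to the actual poured amount means a demonstration
    never has to be graded or discarded.
    """
    for trial in trials:
        trial["vol_2pour"] = trial["vol_final"] - trial["vol_init"]
    return trials
```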
\subsection{RNN-Based Pouring Skill Model}
A straightforward design choice for accurate pouring is a simple PID controller. \cite{7989307, do2018accurate} have applied PID for accurate pouring, in which they both used PIDs with fixed gains. While pouring, as the liquid flows out of the source container, the plant changes, which accordingly requires adjustment to the gains of the PID. Performing pouring thus requires an adaptive PID whose gains change their values throughout the pouring process. We speculate that this partly limits the achievable accuracy in \cite{7989307, do2018accurate}. An adaptive PID, however, is no longer a simple controller. That justifies our quest for a more complicated motion model.
We aim to learn a motion model from pouring demonstrations that performs properly in new settings after being trained on a finite number of settings. In this work, we explore the generalization of the pouring skill model to different shapes of source containers and to different liquids and granular materials. Generalization of neural networks has been observed in practice, and active research tries to identify possible causes such as the norm of the network parameters \cite{NIPS2017_7176}, the specialty of the network structure and the landscape of the cost function \cite{DBLP:journals/corr/WuZE17}, and the sharpness/flatness of the minima \cite{Dinh:2017:SMG:3305381.3305487}.
Apart from generalization, we seek two other properties from the candidate model:
\begin{enumerate}
\item Since all demonstrations are sequences, the model should be inherently capable of dealing with sequences and capturing the spatial-temporal patterns in the sequences.
\item Since demonstrations vary in length, the model should be able to learn effectively from sequences with different lengths.
\end{enumerate}
Due to the successful records of the generalizability of neural networks and our need for a sequential model, we use RNN to represent the motion model.
RNN is a class of neural networks that is designed to process its inputs in order. It feeds its output from one time step into its input at the next time step, shown specifically in Eq. \eqref{eq-rnn}, where $x(t)$ is the given input, $h(t-1)$ and $h(t)$ are outputs from the previous and the current step, respectively. The weight $W$ and bias $b$ are learned using Backpropagation Through Time \cite{werbos1990}.
\begin{equation} \label{eq-rnn}
h(t) = \text{tanh}\left(W[h(t-1), x(t)]^\top + b\right)
\end{equation}
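For reference, Eq. \eqref{eq-rnn} translates directly into a few lines of NumPy (an illustrative sketch, not the implementation used in this paper):

```python
import numpy as np

def rnn_step(h_prev, x, W, b):
    """One step of the plain RNN in Eq. (eq-rnn): the previous output
    h(t-1) is concatenated with the input x(t), passed through a single
    affine map, and squashed element-wise by tanh."""
    return np.tanh(W @ np.concatenate([h_prev, x]) + b)
```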
We need to decide the input features to the RNN at any time step. Each feature corresponds to a type of data. We write Eq. \eqref{eq-vel_gen} again below for convenience:
\begin{equation}
\omega(t) = G(\omega(t - \Delta t), \theta(t), vol(t), vol_{2pour})
\end{equation}
The first feature is the previous angular velocity that can be encoded as the hidden state of the RNN represented by Eq. \eqref{eq-rnn}. We also use $\theta(t)$ as a feature.
The next two features are $vol(t)$ and $vol_{2pour}$, respectively.
In addition to $vol_{2pour}$, the initial volume of liquid in the source container, $vol_{total}$, can be set as a feature. We can also include features that describe the shape of the source container. We model the source container as a cylinder and set both the height $H$ and the body diameter $D$ as features.
The four static features $vol_{2pour}$, $vol_{total}$, $H$, and $D$ describe a pouring task and distinguish one task from another. The two sequential features $\theta(t)$ and $vol(t)$ represent the feedback from the rotation on the source container and the volume change on the receiving container. Figure \ref{fig-pouring_scene} illustrates the six input features. Therefore, the input and output of the RNN from Eq. \eqref{eq-rnn} become:
\begin{equation} \label{input-rnn}
x(t)=[\theta(t), vol(t), vol_{total}, vol_{2pour}, H, D]
\end{equation}
\begin{equation} \label{output-rnn}
\omega(t) = K(h(t))
\end{equation}
where the function $K(\cdot)$ relates the hidden state of the RNN to the scalar angular velocity. The plain RNN shown in Eq. \eqref{eq-rnn} suffers from the problem of vanishing and exploding gradients \cite{bengio1994, hochreiter1997}, which prevents it from learning long-term dependencies effectively. The problem was solved by long short-term memory (LSTM), which introduces gates and memory cells \cite{hochreiter1997}. Later, peepholes were introduced to the LSTM to give all gates access to the memory cell \cite{Gers:2003:LPT:944919.944925}. The mechanism of the peephole LSTM is illustrated in Figure \ref{fig-lstm} and is written as:
\begin{align}
i &= \text{sigm}\left(W_i[h(t-1), x(t)]^\top + b_i + p_i\odot c(t-1) \right) \label{lstm-input} \\
f &= \text{sigm}\left(W_f[h(t-1), x(t)]^\top + b_f + p_f\odot c(t-1) \right) \label{lstm-forget} \\
g &= \text{tanh}\left(W_g[h(t-1), x(t)]^\top + b_g \right) \label{lstm-input1}\\
c(t) &= f \odot c(t-1) + i \odot g \label{lstm-cell}\\
o &= \text{sigm}\left(W_o[h(t-1), x(t)]^\top + b_o + p_o\odot c(t) \right) \label{lstm-output}\\
h(t) &= o \odot \text{tanh}(c(t)) \label{lstm-hidden}
\end{align}
where $i$, $o$, and $f$ are the input, output, and forget gates, respectively. $W_i$, $W_f$, $W_g$, and $W_o$ are the LSTM cell weights, and $b_i$, $b_f$, $b_g$, and $b_o$ are the biases to learn. $p_i$, $p_o$, and $p_f$ are the peephole connection weights to be learned for gates $i$, $o$, and $f$, respectively. $c(t)$ is the long-term memory, $h(t)$ the output, and $x(t)$ the input of the LSTM block. ``sigm'' represents the sigmoid function, applied element-wise, and is used to implement gates. ``tanh'' represents the hyperbolic tangent function, applied element-wise, and is used to avoid vanishing or exploding gradients. $\odot$ represents element-wise multiplication. In this work, we use peephole LSTMs.
Taking into account \cref{input-rnn,output-rnn,lstm-input,lstm-forget,lstm-input1,lstm-cell,lstm-output,lstm-hidden}, we can see that the dynamic model of the pouring motion is encoded in the long-term memory $c(t)$ of the LSTM. The combination of Eq. \eqref{lstm-cell} and Eq. \eqref{lstm-input1} gives:
\begin{equation}
c(t) = f \odot c(t-1) + i \odot \text{tanh}\left(W_g[h(t-1), x(t)]^\top + b_g \right)
\end{equation}
where the dynamic model $c(t)$ depends on both the previous dynamics and the previous input and output. The gate $i$ decides which part of the current input and past output contributes to the current dynamic model, and the gate $f$ decides which part of the past dynamic model contributes to the current dynamic model. Eq. \eqref{lstm-hidden} shows that gate $o$ decides which part of the dynamic model will be used as the current output. The previous mechanism allows the LSTM network to predict when to start the backward rotation of the source container based on the information provided by its current input (feedback signals and static features), its past output (past angular velocity) and its long-term memory (past dynamic model).
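Eqs. \eqref{lstm-input}--\eqref{lstm-hidden} can be transcribed directly into NumPy. The sketch below is illustrative only; the parameter names mirror the symbols in the equations:

```python
import numpy as np

def sigm(z):
    """Element-wise sigmoid, used to implement the gates."""
    return 1.0 / (1.0 + np.exp(-z))

def peephole_lstm_step(h_prev, c_prev, x, p):
    """One step of the peephole LSTM, Eqs. (lstm-input)-(lstm-hidden).
    `p` maps parameter names to arrays: gate weights W_i, W_f, W_g, W_o,
    biases b_i, b_f, b_g, b_o, and peephole weights p_i, p_f, p_o."""
    z = np.concatenate([h_prev, x])
    i = sigm(p["W_i"] @ z + p["b_i"] + p["p_i"] * c_prev)  # input gate
    f = sigm(p["W_f"] @ z + p["b_f"] + p["p_f"] * c_prev)  # forget gate
    g = np.tanh(p["W_g"] @ z + p["b_g"])                   # candidate memory
    c = f * c_prev + i * g                                 # cell update
    o = sigm(p["W_o"] @ z + p["b_o"] + p["p_o"] * c)       # output gate
    h = o * np.tanh(c)                                     # block output
    return h, c
```

Note that the input and forget gates peek at $c(t-1)$ while the output gate peeks at the freshly updated $c(t)$, exactly as in the equations above.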
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{Figures/lstm_peephole.png}
\caption{Mechanism of a peephole LSTM block}
\label{fig-lstm}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{Figures/pouring_scene.png}
\caption{An illustrative pouring scene that shows the six physical quantities to obtain. $vol_{2pour}$ and $vol_{total}$ are the target and initial volumes, respectively. $D$ and $H$ are the diameter and height of the source container. $\theta(t)$ and $vol(t)$ are the sequences of the rotation angle and the poured volume.}
\label{fig-pouring_scene}
\end{figure}
\section{Training} \label{sec:training}
\subsection{Data Collection} \label{sec:dataset}
We want to collect all the input features that we have identified for RNN and we need to decide how to measure volume. Intuitively, the volumes $vol_{total}$ and $vol_{2pour}$ can be measured using a measuring cup. However, obtaining $vol(t)$ using a measuring cup requires a real-time video stream of the measuring cup and a computer vision algorithm that extracts the volume from the video stream.
To simplify the problem that we have to solve, we decide that we will not include the above vision problem in our solution, and instead, we compute the volume from other quantities.
The volume can be computed as the mass $m$ divided by the density $\rho$, i.e., $v = m / \rho$. The weight is the gravitational force acting on an object; the weight $f$ is the product of the mass $m$ and the gravitational acceleration $g$, i.e., $f = mg$. Therefore, the volume can be calculated from the weight:
\begin{equation} \label{eq-volume_weight}
v = \frac{f}{\rho g}.
\end{equation}
We represent $vol_{total}$ by its corresponding weight $f_{total}$, $vol_{2pour}$ by weight $f_{2pour}$, and similarly the current poured volume $vol(t)$ by weight $f(t)$.
Figure \ref{fig-data_collect_setup} illustrates the setup for our data collection. We collect data of pouring water from 9 different source containers into the same receiving container. The 9 source containers are shown as the left half of Figure \ref{fig-cups}. We measure $H$ and $D$ of each source container in millimeters (mm) using a ruler. We 3D-print a handle where the source container is mounted on one end, and a Polhemus Patriot motion tracker is mounted on the other end. The motion tracker records the rotating angles $\theta(t)$'s of the source container in degrees. We place an ATI Mini40 force/torque sensor under the receiving container to record the raw force reading $f_{raw}(t)$ in pound-force (lbf).
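For water, Eq. \eqref{eq-volume_weight} combined with the unit conversions our setup requires (the force sensor reports pound-force, and we want milliliters) can be sketched as follows; the constants are standard physical values, and the helper name is ours:

```python
LBF_TO_N = 4.448222    # pound-force to newton
RHO_WATER = 1000.0     # density of water, kg/m^3 (assumed)
G_ACCEL = 9.80665      # standard gravitational acceleration, m/s^2

def weight_to_volume_ml(f_lbf):
    """Eq. (eq-volume_weight) for water: convert a raw force reading in
    pound-force to a volume in milliliters via v = f / (rho * g)."""
    f_newton = f_lbf * LBF_TO_N
    v_m3 = f_newton / (RHO_WATER * G_ACCEL)
    return v_m3 * 1e6  # cubic meters -> milliliters
```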
We obtain $f_{total}$ and $f_{2pour}$ from $f_{raw}(t)$. $f_{2pour}$ is calculated by $f_{2pour} = f_{final} - f_{init}$, where $f_{init}$ and $f_{final}$ are the weights read from the receiving container before and after a trial, respectively. Thus, we set $f_{2pour}$ using the actual poured outcome.
In each trial, $f_{total} > f_{2pour}$, that is, there is water left in the source container after pouring. Various $f_{total}$ and $f_{2pour}$ are recorded to aid the generalizability of the prospective motion model.
$\theta(t)$'s are recorded at 60\,Hz and $f_{raw}(t)$'s are recorded at 1\,kHz. The collected pouring data is part of the RPAL Daily Interactive Manipulation (DIM) dataset \cite{huang2019dataset}, which is publicly available. More manipulation datasets can be found in \cite{huang2016recent}.
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{Figures/data_collection_setup.png}
\caption{Illustration of the data collection setup. The source container is connected to the motion tracker through a 3-D printed adapter. The force sensor is placed underneath the receiving container.}
\label{fig-data_collect_setup}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{Figures/fig_cups_all.png}
\caption{Source containers used for (left) training and for (right) evaluation. The red cup was used both for training and for evaluation.}
\label{fig-cups}
\end{figure}
\subsection{Implementation} \label{sec:implementation}
The neural network can have multiple layers and each layer can contain multiple peephole LSTM units. By units we mean the size of the vectors $\mathbf{c}_t$ and $\mathbf{h}_t$ of Figure \ref{fig-lstm} formalized by Eqs. \eqref{lstm-cell} and \eqref{lstm-hidden}, respectively. Dropout \cite{zaremba2014} is applied between layers to avoid memorizing the data and aid generalizability.
The final layer is a fully connected layer with linear activation which generates the angular velocity. The mechanism of the network with $L$ layers at time $t$ is represented as
\begin{algorithmic}[1]
\State $h_0(t) = x(t)$
\For {$i=(1, 2, \dots, L)$}
\State $h_i(t) = \text{LSTM}\left(h_i(t-1), h_{i-1}(t); n_{unit}\right)$
\State $h_i(t) = \text{Dropout}\left(h_i(t); p_{keep}\right)$
\EndFor
\State $\hat{y}(t) = W_yh_L(t) + b_y$
\end{algorithmic}
where $\text{LSTM}(\cdot; n_{unit})$ means using an LSTM block from Figure \ref{fig-lstm} with $n_{unit}$ units. Dropout$(\cdot; p_{keep})$ means dropout with a keep probability of $p_{keep}$. $\hat{y}(t)$ corresponds to a linear layer that converts the hidden state $h_L(t)$ to the final output.
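As a concrete illustration, the stacked peephole-LSTM forward pass described above can be sketched in plain NumPy as follows. This is a sketch, not our actual TensorFlow implementation: the weights are random placeholders, and dropout is only noted in a comment since it applies during training, not inference.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class PeepholeLSTM:
    """One peephole LSTM layer; weights are random placeholders."""
    def __init__(self, n_in, n_unit, rng):
        s = 0.1
        self.W = rng.normal(0, s, (4, n_unit, n_in))    # input weights (i, f, g, o)
        self.U = rng.normal(0, s, (4, n_unit, n_unit))  # recurrent weights
        self.p = rng.normal(0, s, (3, n_unit))          # peephole weights (i, f, o)
        self.b = np.zeros((4, n_unit))
        self.n_unit = n_unit

    def step(self, x, h, c):
        # gates with peephole connections to the cell state
        i = sigmoid(self.W[0] @ x + self.U[0] @ h + self.p[0] * c + self.b[0])
        f = sigmoid(self.W[1] @ x + self.U[1] @ h + self.p[1] * c + self.b[1])
        g = np.tanh(self.W[2] @ x + self.U[2] @ h + self.b[2])
        c_new = f * c + i * g
        o = sigmoid(self.W[3] @ x + self.U[3] @ h + self.p[2] * c_new + self.b[3])
        return o * np.tanh(c_new), c_new

def forward(layers, W_y, b_y, xs):
    """Run one trial x(1..T-1) through L stacked layers + linear output."""
    states = [(np.zeros(l.n_unit), np.zeros(l.n_unit)) for l in layers]
    outputs = []
    for x in xs:
        h = x
        for k, layer in enumerate(layers):
            h, c = layer.step(h, *states[k])
            states[k] = (h, c)
            # during training, dropout with keep prob p_keep is applied to h here
        outputs.append(W_y @ h + b_y)   # final linear layer
    return np.array(outputs)
```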
\begin{comment}
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{Figures/network_theory.png}
\caption{An example of the network with two LSTM layers and the final layer. }
\label{fig-network_theory}
\end{figure}
\end{comment}
To feed the input features into the network, we group them into a vector $x(t)=[\theta(t), f(t), f_{total}, f_{2pour}, H, \kappa]^\top$ for $t=1,\dots,T-1$, where $T$ is the length of the trial and
\begin{enumerate}
\item $\theta(t)$ is the rotating angle of the source container.
\item $f(t)$ is the weight of the poured liquid.
\item $f_{total}$ is the weight of the initial amount of liquid present in the source container before pouring.
\item $f_{2pour}$ is the weight of the target poured amount.
\item $H$ is the height of the source container.
\item $\kappa$ is the body curvature of the source container.
\end{enumerate}
The body curvature $\kappa$ of the source container is calculated from the body diameter, $D$:
\begin{equation}
\kappa = 2 / D
\end{equation}
The angular velocities $\omega(1:T-1)$ are computed from $\theta(1:T)$:
\begin{equation}
\omega(t) = (\theta(t+1) - \theta(t))f_s, \qquad t=1, 2, \dots, T-1
\end{equation}
where $f_s$ is the sampling frequency of $\theta(t)$.
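As a minimal sketch, both quantities can be computed directly; units follow the text ($D$ in millimeters, $\theta$ in degrees, $f_s$ in Hz):

```python
import numpy as np

def curvature(D_mm):
    """Body curvature kappa = 2 / D from the container's body diameter."""
    return 2.0 / D_mm

def angular_velocity(theta, f_s=60.0):
    """Forward-difference velocity: omega(t) = (theta(t+1) - theta(t)) * f_s.

    Input theta has length T; output omega has length T-1.
    """
    theta = np.asarray(theta, dtype=float)
    return (theta[1:] - theta[:-1]) * f_s
```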
For each trial, at time $t\in[1, 2, \dots, T-1]$, the input $x(t)$ and target $y(t)$ of the network are
\begin{align}
x(t) &= [\theta(t), f(t), f_{total}, f_{2pour}, H, \kappa]^\top\\
y(t) &= \omega(t)
\end{align}
The output of the network is denoted by $\hat{y}(t)$. Assume we have $N$ trials in total, and each trial has length $T_i$, $i\in[1, 2, \dots, N]$. The loss function is defined as
\begin{equation}
c = \frac{1}{N}\sum_{i=1}^N\frac{1}{T_i-1}\sum_{t=1}^{T_i-1}(\hat{y}_i(t) - y_i(t))^2.
\end{equation}
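Since trials have different lengths $T_i$, the loss averages a per-trial MSE rather than pooling all time steps; a minimal NumPy sketch:

```python
import numpy as np

def pouring_loss(pred_trials, target_trials):
    """Mean over trials of the per-trial MSE between predicted and
    demonstrated angular velocities; trials may differ in length."""
    per_trial = []
    for y_hat, y in zip(pred_trials, target_trials):
        y_hat, y = np.asarray(y_hat), np.asarray(y)
        per_trial.append(np.mean((y_hat - y) ** 2))  # inner average over t
    return float(np.mean(per_trial))                 # outer average over trials
```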
\subsection{Data Preparation}
We set the sampling frequency $f_s=60$Hz, the lower of the two recording frequencies of $\theta(t)$ and $f_{raw}(t)$. We kept the recorded $\theta(t)$'s intact and downsampled $f_{raw}(t)$ to 60Hz. We obtain $f(t)$ by filtering the raw reading from the force sensor, $f_{raw}(t)$; specifically,
\begin{align}
f_m(1:t) &\leftarrow \text{median}\_\text{filter}(f_{raw}(1:t)), \quad \text{window}\_\text{size}=5, \\
f(t) &\leftarrow \text{Gaussian}\_\text{filter}(f_m(1:t)), \quad \sigma=2.
\end{align}
We normalize each input dimension independently using the mean and standard deviation of that dimension.
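One plausible causal realization of this preprocessing in plain NumPy is sketched below; the exact boundary and window handling of the filters we used may differ, so treat the details as illustrative:

```python
import numpy as np

def filter_force(f_raw, window=5, sigma=2.0):
    """Causal filtering of the raw force history f_raw(1:t): a running
    median (window 5) followed by Gaussian smoothing (sigma = 2 samples).
    Returns the filtered value f(t) for the latest time step."""
    f_raw = np.asarray(f_raw, dtype=float)
    # running median over the trailing window
    f_m = np.array([np.median(f_raw[max(0, i - window + 1):i + 1])
                    for i in range(len(f_raw))])
    # Gaussian-weighted average of the median-filtered history,
    # centered on the most recent sample
    idx = np.arange(len(f_m))
    w = np.exp(-0.5 * ((idx - (len(f_m) - 1)) / sigma) ** 2)
    return float(np.sum(w * f_m) / np.sum(w))

def normalize(x, mean, std):
    """Per-dimension z-score normalization using training statistics."""
    return (np.asarray(x) - np.asarray(mean)) / np.asarray(std)
```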
The model had 1 layer and 16 LSTM units. We trained models with different numbers of layers and LSTM units, and we found the model with 1 layer and 16 units had a simple structure and performed well. We set the keep probability of dropout to be 0.5. Specifically, the computation for time step $t$ is represented as:
\begin{align}
h(t) &= \text{LSTM}\left(h(t-1), x(t)\right) \\
h_d(t) &= \text{Dropout}\left(h(t)\right) \\
\hat{y}(t) &= W_yh_d(t) + b_y
\end{align}
The network is shown in Figure \ref{fig-network}.
Training uses 284 trials in total: 221 for training and 63 for validation. Each iteration is an epoch, in which the entire training and validation sets are traversed. We ran 2000 epochs and kept the model with the lowest validation loss. We used the Adam optimizer with an initial learning rate of 0.001. The code is written in TensorFlow.
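The epoch loop with best-on-validation model selection can be sketched as follows; \texttt{train\_step} and \texttt{eval\_loss} are placeholders standing in for the actual TensorFlow training and evaluation code (which uses Adam with an initial learning rate of 0.001):

```python
def train_best(model, train_data, val_data, train_step, eval_loss, n_epochs=2000):
    """Run n_epochs epochs and keep the weights with the lowest
    validation loss; `train_step` and `eval_loss` are placeholder
    callables for the framework-specific training code."""
    best_loss, best_weights = float("inf"), model.get_weights()
    for _ in range(n_epochs):
        for batch in train_data:          # one full pass = one epoch
            train_step(model, batch)
        val = eval_loss(model, val_data)
        if val < best_loss:               # remember the best-on-validation weights
            best_loss, best_weights = val, model.get_weights()
    model.set_weights(best_weights)       # restore the best model at the end
    return model, best_loss
```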
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{Figures/fig_network.png}
\caption{Our network has 1 layer and 16 peephole LSTM units. Dropout with a keep probability of 0.5 is applied to non-sequential connections. }
\label{fig-network}
\end{figure}
\section{Generalization by Self-Supervised Practicing}
The robot with the motion model showed limited generalizability: for certain source containers it achieved a high pouring error. We recorded the outcomes the robot produced when pouring with the poorly performing containers and used those outcomes to fine-tune the model. We refer to this fine-tuning process using the outcomes generated by the robot as generalization by self-supervised practicing (GSSP).
GSSP updates the motion model by training it with data generated by practices. Although the outcomes of the first few practices may present a high volume error, they contain the response of the model to new conditions and are therefore valuable. GSSP mimics the behavior of humans facing a new instance of the pouring task: when we use an unfamiliar cup, we pour based on our experience, and after a few practices we adapt to the new cup. The key component of GSSP is that the robot uses the actual outcomes instead of the desired outcomes to fine-tune the model.
We refer to the process of using the model on a robot to finish a pouring trial as a {\it practice}. We use the robot's outcome sequences that result from several practices to form a new dataset, which we use to fine-tune the model. The fine-tuning also uses the actual outcomes as the motion goal. The inputs for the fine-tuning process are:
\begin{itemize}
\item $H$: height of the source container.
\item $\kappa = 2 / D$: body curvature of the source container, with $D$ being the body diameter of the container.
\item $f_{total}$: the initial weight of water present in the source container before pouring.
\item $f_{2pour}$: the actual weight of water that was poured during the practice.
\item $f(t)$: the sequence of the weight of water in the receiving container in the practice.
\item $\theta(t)$: source container's current angle during the practice.
\end{itemize}
The output of the fine-tuning is:
\begin{itemize}
\item $\omega(t)$: angular velocity of the source container \textbf{in the practice}.
\end{itemize}
Before fine-tuning, we obtain an initial model through the presented self-supervised learning from demonstration. The fine-tuning process in GSSP can be carried out in two ways:
\begin{itemize}
\item \textbf{Gradual Fine-tuning}: First, the robot performs $n$ practices, where $n$ is relatively small. The resulting pouring sequences form a new dataset, which is used to fine-tune the model. If the error of the updated model is larger than a predefined threshold, $n$ practices are performed again, the newly generated $n$ data points are added to the new dataset. The practices keep being performed and the new dataset keeps growing until the error of the updated model is below the predefined threshold.
We define the error threshold to be twice the average human pouring error. Algorithm \ref{alg:gip} describes gradual fine-tuning with $n \in [5,15]$.
\item \textbf{Batch Fine-tuning}: First, the robot performs $n$ practices, where $n$ is relatively large. The resulting pouring sequences are used to fine-tune the model. Batch fine-tuning is equivalent to conducting one iteration of gradual fine-tuning with a large $n$, i.e., $n > 35$, where 35 is the average number of samples per source container for the initial training set discussed in section \ref{sec:training}.
\end{itemize}
\begin{algorithm*}[h!]
\caption{Generalization by self-supervised practicing (gradual fine-tuning)}\label{alg:gip}
\begin{algorithmic}[1]
\State $M_{init}$: Initial model
\State $n$: Number of practices
\State $\mathcal{R}\gets$ \{($r^1_s$, $r^1_g)$, ..., $(r^n_s$, $r^n_g$)\}
\Comment{$r^i_s$: start state; $r^i_g$: desired outcome or goal}
\State $err\_th\gets$ error threshold
\State $\mathcal{D}\gets \{\}$
\Procedure{Practice}{$M$, $n$, $\mathcal{R}$}
\Repeat
\State Robot practices once using one item in $\mathcal{R}$ using model $M$
\State $\mathcal{D}\gets \mathcal{D} \cup \{d\}$ \Comment$d:$ outcome sequence from the practice
\Until {$n$ practices have been performed}
\State error $\gets$ mean error between actual outcome and desired outcome among $n$ practices
\State \textbf{return} error \label{return}
\EndProcedure
\Procedure{GSSP}{}
\State $M_{new} \gets M_{init}$
\While {True}
\State $err$ $\gets$ \Call {Practice}{$M_{new}$, $n$, $\mathcal{R}$}
\If {$err < err\_th$}
\State \textit{break}
\Else
\State $M_{new} \gets$ {Fine-tune}($M_{new}, \mathcal{D}$)
\Comment{Fine-tune the model using $\mathcal{D}$}
\State $\mathcal{R} \gets$ {Generate-Random-Practices} ($n$)
\Comment{Randomly generate $n$ set of requirements for future practices}
\EndIf
\EndWhile
\State \textbf{return} $M_{new}$
\EndProcedure
\end{algorithmic}
\end{algorithm*}
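The gradual fine-tuning loop of Algorithm \ref{alg:gip} can be sketched in Python as follows; \texttt{practice}, \texttt{fine\_tune}, and \texttt{gen\_requirements} are placeholder callables standing in for the robot interface, the trainer, and the requirement generator, and the error threshold is supplied by the caller (the paper uses twice the average human pouring error):

```python
def gssp(model, practice, fine_tune, gen_requirements, n, err_th):
    """Gradual fine-tuning: practice n times, measure the mean error
    against the desired outcomes, and fine-tune on ALL collected
    practice data until the error falls below the threshold."""
    dataset = []
    reqs = gen_requirements(n)
    while True:
        errors = []
        for start, goal in reqs:
            outcome_seq, actual = practice(model, start, goal)
            dataset.append(outcome_seq)          # actual outcome becomes the label
            errors.append(abs(actual - goal))
        if sum(errors) / len(errors) < err_th:   # good enough: stop practicing
            return model
        model = fine_tune(model, dataset)        # fine-tune on the growing dataset
        reqs = gen_requirements(n)               # new random requirements
```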
The GSSP approach can be used for any manipulation in which the actual outcomes of several practices can substitute for the desired outcomes during training. Taking throwing objects into bins as an example \cite{zeng2020tossingbot}, we can have the robot practice throwing unseen objects, record the outcomes, and apply GSSP to generalize the learning model to the new objects. As another example, we can record the state of food ingredients \cite{jelodar2018identifying}, or a state change \cite{jelodar2019joint}, after the execution of a manipulation, record the corresponding action sequences, use them as training samples, and fine-tune the manipulation model to expand its generalization. However, the GSSP approach may not generalize across different manipulations; for example, a throwing motion model may not generalize to mixing, since the two motions may require different model structures. Our recent work on motion codes and motion embeddings \cite{paulius2020motion, alibayev2020estimating} may help with this kind of cross-motion-type generalization.
\section{Experiments \& Evaluation} \label{sec:experiments}
To evaluate the motion model, we built a robotic system that consists of the trained RNN, a Dynamixel MX-64 motor, and the same force sensor with which we collected the data. The motor was placed at a certain height above the surface. The force sensor was placed on the surface close by. The source container was attached to the motor. The receiving container was placed on top of the force sensor. We properly placed the receiving container (along with the force sensor) according to the particular source container used so that there is little spilling. Figure \ref{fig-physical_system} (Left) shows the setup of the robotic system.
The system runs at 60Hz, the same rate as the data collection; the time between consecutive time steps is $\Delta t = 1/60 \approx 0.0167$ seconds. Before each pouring trial, we obtain the four static features, which we denote by $z=[f_{total}, f_{2pour}, H, \kappa]$. During the trial, at time step $t$, we obtain $\theta(t)$ from the motor and $f(t)$ from the force sensor, and we feed the input features $x(t) = [\theta(t), f(t), z]^\top$ to the model, which then generates the velocity $\omega(t)$. The motor executes the velocity, and the process repeats at time step $t + \Delta t$. Figure \ref{fig-physical_system} (Right) shows the working process of the robotic system at time $t$.
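This per-time-step loop can be sketched as follows; \texttt{motor}, \texttt{force\_sensor}, and \texttt{model} are placeholder interfaces (not the actual Dynamixel or ATI drivers):

```python
import time

def pouring_control_loop(model, motor, force_sensor, z, f_s=60.0):
    """One pouring trial: at each tick, read theta(t) and f(t), build the
    six-dimensional input, query the network for omega(t), and command
    the motor. z = [f_total, f_2pour, H, kappa] holds the static features."""
    dt = 1.0 / f_s
    while not motor.done():
        theta = motor.read_angle()          # current rotation angle theta(t)
        f = force_sensor.read_filtered()    # filtered poured weight f(t)
        x = [theta, f, *z]                  # input feature vector x(t)
        omega = model.predict(x)            # network outputs velocity omega(t)
        motor.set_velocity(omega)
        time.sleep(dt)                      # the real system ticks at 60 Hz
```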
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.61\linewidth]{Figures/fig_physical_system.png}
\includegraphics[width=0.37\linewidth]{Figures/physical_system_working.png}
\end{center}
\caption{(Left) The robotic system consists of a motor that executes the generated velocity command and a force sensor that monitors the poured amount. The source containers are attached to the motor through a 3-D printed adapter. (Right) Before pouring, we obtain the static features $z=[f_{total}, f_{2pour}, H, \kappa]$. At time step $t$, the robotic system obtains $\theta(t)$ and $f(t)$, combines them with $z$, and sends them to the network. The network generates the velocity command $\omega(t)$, which the motor executes.
}
\label{fig-physical_system}
\end{figure}
The robotic system normalizes every input dimension and obtains $f(t)$ by filtering the raw force readings, in the same way as in training.
We evaluated the motion model by testing it on pouring certain kinds of liquid from certain source containers. The difficulty of the task changes when the liquid and the source container change. For each pair of liquid and source containers, the model pours 15 times, each time with arbitrary $vol_{total}$ and $vol_{2pour}$, where $vol_{total}>vol_{2pour}$. We show the pouring error of each pair of liquid and source container in the form of figures. In the figure, we plot the actual poured volume against the target volume for all 15 trials. We also show the liquid type, the mean, and standard deviation of the pouring error: $\mu_e$ and $\sigma_e$ in milliliters. At the bottom right of the figure, we show the source container that was used. We also show a black dashed line that illustrates zero pouring error.
Translating the force reading to volume requires the density of the liquid $\rho$ and the gravitational acceleration $g$. We used 0.997g/mL for the density of water and 9.80665 m/s$^2$ for gravitational acceleration.
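A small helper makes this conversion explicit; 1 lbf = 4.44822 N is a standard constant, and the density and gravitational acceleration values are those stated above:

```python
def lbf_to_ml(f_lbf, rho_g_per_ml=0.997, g=9.80665):
    """Convert a force reading in pound-force to a water volume in mL:
    F[N] = f * 4.44822; mass[g] = 1000 * F / g; volume[mL] = mass / rho."""
    newtons = f_lbf * 4.44822
    grams = 1000.0 * newtons / g
    return grams / rho_g_per_ml
```

For a 1.0-lbf reading this gives roughly 455\,mL of water, so a 0.01-lbf sensor error corresponds to roughly 4.6\,mL.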
\begin{comment}
Figure \ref{fig-cups-distr} shows the scatter plot of source containers used for experiments. There are three sets of containers described below:
\begin{itemize}
\item Train: containers from the training dataset shown in the left hand side of Figure \ref{fig-cups}.
\item Test: containers for evaluating the motion model shown in the right hand side of Figure \ref{fig-cups}.
\item New Container: wine bottle and blue bottle that have larger height than the training and testing containers. They were used together with the measuring cup for GSSP evaluation.
\end{itemize}
The boundary defined by the dashed black line in Figure \ref{fig-cups-distr} encloses all the train and test containers except the measuring cup. We wanted to test the motion model on containers outside the boundary, and use GSSP to improve the model on those containers.
\begin{figure}[h!]
\centering
\includegraphics[width=0.95\columnwidth]{Figures/height_diameter2.png}
\caption{Scatter plot of height vs diameter for source containers used for experiments.}
\label{fig-cups-distr}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.95\columnwidth]{Figures/hist_h_d.png}
\caption{Histogram of euclidean distance between the testing and new containers versus the training containers.}
\label{fig-hist-euclidean}
\end{figure}
\end{comment}
\subsection{Model Evaluation} \label{sect:eval1}
\subsubsection{Model Evaluation of Pouring Water}
We started with the task that has the lowest difficulty and tested the model by pouring water from the red cup that has been used for training. Figure \ref{fig-water_accuracy} (a) shows a small error of $\mu_e=3.71$mL, indicating that the learning is successful.
We then increased the difficulty and tested the model by pouring water from source containers that had not been used for training. Table \ref{table-error} summarizes the mean and standard deviation of the errors, $\mu_e$ and $\sigma_e$, in milliliters, of the model pouring water from different source containers, and of humans pouring water from the red cup. Figures \ref{fig-water_accuracy} (b) through (e) show the errors of the four additional source containers whose $\sigma_e$ is smaller than that of humans. Compared with the error of using the red cup, $\mu_e=3.71$mL, the errors of using these source containers are larger, ranging from $\mu_e=4.12$mL to $\mu_e=12.35$mL, which is expected. Based on the results, we call them ``accustomed containers''. We have also evaluated the model on a UR5e robotic arm using the water bottle (Figure \ref{fig-ur5e}), obtaining $\mu_e$ = 7.83mL and $\sigma_e$ = 6.62mL.
\begin{table}[h!]
\begin{center}
\begin{tabular}{| c | c | c | c |}
\hline
cup & \thead{cup \\ in training} & $\mu_e$ (mL) & $\sigma_e$ (mL) \\
\hline
red & yes & 3.71 & 3.88\\
water bottle & no & 4.12 & 4.29 \\
bubble & no & 6.77 & 5.76\\
glass & no & 7.32 & 8.24 \\
fat bottle & no & 12.35 & 8.88 \\
red (by human) & n/a & 12.37 & 9.80\\
measuring cup & no & 11.29 & 12.82 \\
wine bottle & no & 51.22 & 39.61 \\
blue bottle & no & 55.84 & 47.26 \\
\hline
\end{tabular}
\end{center}
\caption{Errors of pouring water from different source containers}
\label{table-error}
\end{table}
We wanted to compare the pouring model with humans and therefore we asked four human subjects to do accurate water pouring with the red cup. We made an animation on a computer screen that shows the target volume and the real-time volume of water that has already been poured. The animation faithfully shows the fluctuation of the volume reading while pouring. The subjects were asked to look only at the animation and pour the target volume. They were asked to pour naturally and with a single pour. Pouring too fast or too slow was not allowed. We collected 10 trials with each subject, resulting in 40 total trials. Figure \ref{fig-human} shows the results of human accurate pouring: $\mu_e=12.37$mL and $\sigma_e=9.80$mL. Compared with humans, pouring water from the red cup (Figure \ref{fig-water_accuracy} (a)) achieves a lower $\mu_e=3.71$mL and $\sigma_e=3.88$mL. The model achieves lower error than humans because the model is trained using the actual outcomes.
\begin{figure}[h]
\centering
\includegraphics[width=0.7\linewidth]{Figures/human_experiment.png}
\includegraphics[width=0.2\linewidth]{Figures/human_redcup.png}
\caption{Actual-vs-target comparison of 4 human subjects pouring water from the red cup}
\label{fig-human}
\end{figure}
Figure \ref{fig-robot-pouring} shows an example of the resulting $\omega(t)$, $vol(t)$, and $vol_{2pour}$ for a trial of pouring water with the trained model and the water bottle. The target $vol_{2pour}$ was set to 150\,mL from an initial $vol_{total}$ of approximately 430\,mL. The final amount poured was 149\,mL, an error of 1\,mL. We can see that for this example the pouring finished in less than 6 seconds and the water stopped flowing out of the source container in less than 5 seconds. The small spike seen in the volume is due to the force sensor's noise generated by the water movement. We measured the final poured volume once the water had stabilized in the receiving container.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.95\columnwidth]{Figures/UR5e.png}
\end{center}
\caption{Evaluating the model on UR5e collaborative robotic arm}
\label{fig-ur5e}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{Figures/Velocity_Volume_Robot.png}
\caption{An example of the robot pouring water using the water bottle. Velocity: velocity of the source container measured counter clock-wise, negative radians/seconds means that the container is going forward and positive means backward. Volume: volume of water in the receiving container. Target: target volume for the trial.}
\label{fig-robot-pouring}
\end{figure}
We have also tested the model on two containers that differ significantly from the training set: a wine bottle and a blue bottle that were available in our laboratory. As shown in Table \ref{table-error}, their mean volume errors were 51.22\,mL and 55.84\,mL, respectively, more than 13 times that of the red cup (3.71\,mL). They are unaccustomed containers for our pouring model because of their unusual shapes.
\begin{figure*}[h]
\includegraphics[width=\linewidth]{Figures/water_accuracy.png}
\caption{Actual-vs-target comparison of pouring water using (a) red cup which is used for training (b) water bottle (c) cup with bubble pattern which we referred to as the bubble cup (d) glass cup (e) fat bottle. }
\label{fig-water_accuracy}
\end{figure*}
Having evaluated the error of the model pouring different but relatively large amounts of water, we evaluated the error of the model pouring small amounts. We used the model to pour 20mL and 15mL with the red cup, each 15 times. For 20mL, $\mu_e$ = 2.83mL and $\sigma_e$ = 3.33mL; for 15mL, $\mu_e$ = 9.68mL and $\sigma_e$ = 7.96mL. Both $\mu_e$ and $\sigma_e$ for pouring 20mL are smaller than those of pouring a larger volume with the red cup (Figure \ref{fig-water_accuracy} (a)). The error of pouring 15mL is larger than both the error of pouring 20mL and the error of pouring a larger volume.
In Figure \ref{fig-sensor_drift}, we plot the reading of the force sensor for a 1.0-lbf weight for 300 seconds. Figure \ref{fig-sensor_drift} also shows the water volume converted from the corresponding force. For a 1.0-lbf weight, the force sensor has a nonlinearity error of around 0.01 lbf, which is 1\% of 1.0 lbf. The corresponding error in volume is around 5mL.
\begin{figure}[h]
\centering
\includegraphics[width=0.95\columnwidth]{Figures/force_sensor_accuracy.png}
\caption{Readings of the force sensor for a weight of 1.0 lbf taken during 300 seconds. The bottom subfigure shows the volume converted from force.}
\label{fig-sensor_drift}
\end{figure}
\subsubsection{Comparison with Related Works}
The difficulty of accurate pouring increases as the duration of the pour decreases. Our model pours as fast as humans: the duration of each pour in the human demonstration dataset ranges from 3.2 to 8.7 seconds, while the duration range of our model was 2.8 to 7.6 seconds. In comparison, \cite{7989307} achieves a 38\,mL error using 25 seconds for each pour with containers similar to our accustomed containers; it takes much longer than our model and incurs a larger error. \cite{8653969} achieves an error under 5mL using 20-45 seconds for each pour; it achieves a lower error than ours but takes even longer than \cite{7989307}. We are not aware of any prior work that has evaluated containers as diverse as ours.
Our approach uses weight to monitor the poured volume. \cite{7989307, do2018accurate, chaudo2018} also use a single modality (weight or vision) to monitor the poured volume; their reported pouring errors are 38mL \cite{7989307}; 23.9mL, 13.2mL, and 30.5mL \cite{do2018accurate}; and 19.96mL \cite{chaudo2018}, respectively. \cite{8653969} achieves an error under 5mL, but it uses two modalities (weight and vision) to monitor the poured volume. It also pours slowly, which can contribute to the higher accuracy achieved. The error reached by our model lies between 3.71mL and 12.35mL, lower than the above approaches.
In our previous work \cite{tianze2019}, we used model predictive control (MPC) to address the problem of accurate pouring. The proposed controller uses an RNN to predict the weight of the liquid in the receiving container and controls the angular velocity of the source container. We evaluated the performance of the controller by comparing it with a switch controller. The switch controller applies a constant forward velocity to the source container when the volume in the receiving container is less than the target, and a constant backward velocity when the volume reaches the target. Table \ref{tab-mpc} shows the results. The Switch $\omega_1$ controller used $20$ deg/sec as the forward velocity and $-30$ deg/sec as the backward velocity; the Switch $\omega_2$ controller used $5$ deg/sec and $-7.5$ deg/sec, respectively. We can see that when the forward angular velocity becomes smaller, the mean volume error decreases. This result is expected since the difficulty of controlling the volume in the receiving container decreases. Based on Table \ref{tab-mpc}, the errors for the MPC controller range from 7.25mL to 26.13mL. We can see that model $M_0$ also performs better than the MPC controller. Without the guarantee that the RNN-generated physics model is highly precise, the performance of the powerful MPC algorithm is compromised. Pouring is not a trivial task that can be solved by a traditionally powerful algorithm.
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
cup & model & \thead{cup \\ in training} & $\mu_e$ (mL) & $\sigma_e$ (mL) \\
\hline
\multirow{3}{*}{red} & MPC & \multirow{3}{*}{yes} & 7.25 & 4.92\\
& Switch $\omega_1$ & & 33.50 & 7.76\\
& Switch $\omega_2$ & & 4.50 & 1.87\\
\hline
glass & MPC & no & 14.25 & 9.11\\
\hline
bottle & MPC & no & 15.88 & 5.13\\
\hline
fat & MPC & no & 18.25 & 8.30\\
\hline
\multirow{3}{*}{bubble} & MPC & \multirow{3}{*}{no} & 26.13 & 6.29\\
& Switch ($\omega_1$) & & 56.25 & 5.85\\
& Switch ($\omega_2$) & & 22.25 & 4.29\\
\hline
\end{tabular}
\end{center}
\caption{Results of pouring with MPC or Switch Controller}
\label{tab-mpc}
\end{table}
\subsubsection{Model Evaluation on Different Materials} \label{sect:eval_solid}
We also tested the model on liquids whose viscosity differs from that of water: cooking oil and syrup. We used the red cup to pour since it is part of the training dataset; the training data, however, come from pouring water.
We speculate that viscosity plays an important role in the accuracy of pouring different kinds of liquids. Figure \ref{fig-viscosity} therefore shows the error bars of pouring water, oil, and syrup with the red cup versus their viscosity, for 15 pours each. The three liquids have very different viscosities: we use 1 centipoise (cP) as the viscosity of water, 65 centipoise for oil, and 2000 centipoise for syrup, and we plot the viscosities on a logarithmic scale. We can see that the mean error increases as the viscosity increases; the relationship is neither linear nor exponential.
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{Figures/fig_error_vs_viscosity.png}
\caption{Pouring accuracy of liquids with different viscosity. x-axis plotted in logarithmic scale.}
\label{fig-viscosity}
\end{figure}
We also evaluated the model on granular materials poured in cooking scenarios: beans and rice, using the same model trained on pouring water. Figure \ref{fig-solids} shows the result of 15 pours of beans and rice using the red cup. We changed the unit of measure to grams (g), which is more suitable for solid materials than volume. We can see that the mean error is small for rice, whereas it is higher for beans. However, the mean errors of our model are similar to those presented in \cite{solid2019pouring}, where the authors report a mean estimation error of 14.3g for red beans and 4.36g for rice. Their approach is not meant for accurate pouring but for pouring mass estimation based on the fingertip sensors of a robotic hand.
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{Figures/Solids_Red_Cupv2.png}
\caption{Pouring accuracy of beans and rice using the red cup.}
\label{fig-solids}
\end{figure}
\subsection{GSSP Evaluation -- Generalization to unaccustomed containers}
\label{sect:evalcon}
We evaluated GSSP on the two unaccustomed containers -- a wine bottle and a blue bottle. For comparison, we also added the measuring cup into the unaccustomed container set.
We chose those three containers because, as Table \ref{table-error} shows, the wine bottle and blue bottle had the highest mean errors among all the test containers evaluated. Although the measuring cup did not present a significantly high mean error, it did present a much higher standard deviation error than humans. Figure \ref{fig-cups-distr} shows the scatter plot of the height versus diameter of the source containers used for experiments. We can see that the wine bottle and the blue bottle are both much taller than the rest of the containers, and the measuring cup has a much larger diameter.
We evaluated GSSP using batch fine-tuning for the wine bottle, blue bottle, and measuring cup. We evaluated the accuracy of the resulting models by pouring water 15 times per experiment for batch fine-tuning and carried out the experiments maintaining the same set of volumes for a fair comparison. We also evaluated GSSP using gradual fine-tuning for the wine bottle. We refer to the initially learned model that was evaluated in Section \ref{sect:eval1} as $M_0$.
\begin{figure}[h!]
\centering
\includegraphics[width=0.95\columnwidth]{Figures/height_diameter2.png}
\caption{Scatter plot of height vs diameter for source containers used for experiments.}
\label{fig-cups-distr}
\end{figure}
\subsubsection{Wine Bottle} \label{sec:wine}
We executed a total of 36 practices using the initial model $M_0$ with the wine bottle for different $f_{total}$ and $f_{2pour}$. Then, we fine-tuned the model using the obtained dataset. We call the fine-tuned model $M_1$.
Figure \ref{fig-wine-cup-m0} shows the mean and standard deviation of the error for 15 pours before and after applying GSSP, i.e., using models $M_0$ and $M_1$, respectively. We can see that the wine bottle's mean volume error became 15.78\,mL, a reduction of around 69\% from the 51.22\,mL mean error. After applying GSSP, some trials overpour while others underpour; with model $M_0$, all trials overpoured.
\begin{comment}
\begin{table}[h!]
\centering
\begin{tabular}{| c | c | c | c | c | }
\hline
\thead{Source \\ Container} & \thead{Fine-tuned \\ model} & $\mu_e$ (mL) & $\sigma_e$ (mL) \\
\hline
\multirow{2}{*}{Red Cup}
Red Cup & $M_{1}$ & 31.88 & 19.60\\
& $M_{2}$ & 8.67 & 5.13\\
\hline
Wine Bottle & $M_{1}$ & 15.78 & 13.55\\
& $M_{2}$ & 17.43 & 13.65\\
\hline
\end{tabular}
\caption{Comparison of accuracy for Red Cup and Wine Bottle after fine-tuning $M_0$.}
\label{tab-batch-wine}
\end{table}
\end{comment}
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth,trim=2em 3em 2em 0em, clip]{Figures/Wine_Bottle_GSSP_1.png}
\caption{Results for Wine Bottle before and after GSSP.}
\label{fig-wine-cup-m0}
\end{figure}
We also tested the gradual fine-tuning approach using the wine bottle for which we set the number of practices $n=10$. The set of volume variations $\mathcal{R} = \{(f_{total}^{i}, f_{2pour}^{i})\}$ for $i=1,...,10$, was chosen to be the same for each iteration of the algorithm. Table \ref{tab-gradual-wine} shows the accuracy evolution of the fine-tuning algorithms we carried out. The mean and standard deviation errors from the table's first row are different from the ones shown in Table \ref{table-error} as the set $\mathcal{R}$ and the number of trials was different for both experiments. We hypothesize that running more iterations of the algorithm will further improve the result. However, we believe that there should exist enough variation in the selection of $f_{total}$ and $f_{2pour}$ to outperform the result of batch fine-tuning for this particular container.
\begin{table}[h!]
\centering
\begin{tabular}{| c | c | c | c |}
\hline
\thead{Base \\ Model} & \thead{Fine-tuned \\ Model} & $\mu_e$ (mL) & $\sigma_e$ (mL) \\
\hline
$M_0$ & & 80.23 & 49.13\\
$M_0$ & $M_{2}$ & 38.67 & 11.98\\
$M_{2}$ & $M_{3}$ & 30.04 & 17.26\\
$M_{3}$ & $M_{4}$ & 18.21 & 8.76\\
\hline
\end{tabular}
\caption{Accuracy for Wine Bottle after gradual fine-tuning.}
\label{tab-gradual-wine}
\end{table}
Comparing the fourth row of Table \ref{tab-gradual-wine} with Figure \ref{fig-wine-cup-m0} after applying GSSP, we can see that both methodologies yield similar results. Gradual fine-tuning has an advantage over batch fine-tuning with respect to the cost of collecting the practices. However, there exists a trade-off in training time: batch fine-tuning trains only once, while gradual fine-tuning trains several times. Nevertheless, gradual fine-tuning lets us see whether there is an improvement after the first iteration of the algorithm using only a few practices. In batch fine-tuning, after spending considerable time carrying out practices, we expect an improvement, but the collected practices may still lead to unsatisfactory results.
\subsubsection{Blue Bottle} \label{sec:blue}
We decided to apply only batch fine-tuning for the blue bottle. We executed 54 practices using the blue bottle and fine-tuned $M_0$. We call the resulting model $M_5$. Figure \ref{fig-blue-cup-m0} shows the mean and standard deviation of the error for 15 pours before and after applying GSSP, i.e., using $M_0$ and $M_5$, respectively. We can see a reduction of 74\% in average error from 55.85\,mL to 14.35\,mL. We can also see that there exists an outlier for the target of 160\,mL that may be affecting the standard deviation. However, there was a 54\% reduction in this statistic from 47.32\,mL to 21.42\,mL.
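For reference, the reported reductions follow from the plain percent-change formula applied to the values quoted in the text:

```python
def pct_reduction(before, after):
    # Percentage reduction of an error statistic
    return 100.0 * (before - after) / before

mean_red = pct_reduction(55.85, 14.35)   # mean error reduction, ~74.3
std_red = pct_reduction(47.32, 21.42)    # standard deviation reduction, ~54.7
```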
\begin{comment}
\begin{table}[h!]
\centering
\begin{tabular}{| c | c | c | c | c | }
\hline
\thead{Source \\ Container} & \thead{Base \\ Model} & \thead{Fine-tuned \\ Model} & $\mu_e$ (mL) & $\sigma_e$ (mL) \\
\hline
\multirow{2}{*}{Red Cup} & \multirow{2}{*}{$M_0$} & $M_{6}$ & 25.29 & 14.98\\
& & $M_{7}$ & 8.52 & 5.51\\
\hline
\multirow{5}{*}{Blue Bottle} & \multirow{2}{*}{$M_0$} & $M_{6}$ & 14.35 & 21.42\\
& & $M_{7}$ & 12.65 & 5.99\\
\cline{2-5}
& \multirow{3}{*}{$M_{2}$} & & 26.38 & 38.09\\
& & $M_{8}$ & 25.12 & 27.43\\
& & $M_{9}$ & 28.27 & 25.07\\
\hline
\end{tabular}
\caption{Comparison of accuracy for Red Cup and Blue Bottle after fine-tuning using different base models and datasets.}
\label{tab-batch-blue}
\end{table}
\end{comment}
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth,trim=2em 2em 2em 0em, clip]{Figures/Blue_Bottle_GSSP_1.png}
\caption{Results for Blue Bottle before and after GSSP.}
\label{fig-blue-cup-m0}
\end{figure}
\subsubsection{Measuring Cup}
\label{sec:measuring}
We executed 36 practices using the measuring cup. We also decided to use batch fine-tuning for this source container and applied GSSP. We call the resulting model $M_6$.
Figure \ref{fig-fat-cup-m0} shows the scatter plot of the target versus the actual poured volume for the measuring cup before and after GSSP, i.e., when using $M_0$ and $M_{6}$, respectively.
We can again see a reduction in mean error when comparing the target with the actual volume poured by the robot.
\begin{comment}
\begin{table}[h!]
\centering
\begin{tabular}{| c | c | c | c | c | }
\hline
Source Container & \thead{Fine-tuned \\ Model} & $\mu_e$ (mL) & $\sigma_e$ (mL) \\
\hline
\multirow{2}{*}{Red Cup} & $M_{10}$ & 4.84 & 2.79\\
& $M_{11}$ & 6.95 & 5.57\\
\hline
\multirow{2}{*}{Measuring Cup} & $M_{10}$ & 8.38 & 8.27\\
& $M_{11}$ & 8.03 & 8.34\\
\hline
\end{tabular}
\caption{Comparison of accuracy for Red Cup and Measuring Cup after fine-tuning $M_0$ using different datasets.}
\label{tab-batch-measuring}
\end{table}
\end{comment}
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth,trim=2em 2em 2em 0em, clip]{Figures/Measuring_Cup_GSSP_1.png}
\caption{Results for Measuring Cup before and after GSSP.}
\label{fig-fat-cup-m0}
\end{figure}
\subsection{GSSP Evaluation -- Generalization to new material} \label{sect:evalmat}
We have also carried out GSSP experiments on syrup and red beans. We used the red cup as the container for pouring with the model $M_0$. This model comes from training using the human demonstrations dataset of pouring water. We selected the red cup as it is the most accurate container for pouring water with model $M_0$.
\subsubsection{Syrup}
We applied gradual fine-tuning for syrup using model $M_0$, setting the number of practices $n=10$. Figure \ref{fig-gip-syrup} shows the evolution of accuracy over 5 iterations of Algorithm \ref{alg:gip}. The initial mean error of 13.84\,mL was driven mostly by target volumes lower than 100\,mL. For the second iteration, we decided to pour small volumes, for which we observed a mean volume error of 28.07\,mL. For the third iteration, the mean error dropped from 28.07\,mL to 16.18\,mL (42\% reduction); the improvement came from targets lower than 50\,mL and higher than 80\,mL. For the fourth iteration, the error improved from 16.18\,mL to 11.49\,mL (28\% reduction). At this stage, we had reduced the initial mean error from 13.84\,mL to 11.49\,mL (17\% reduction). Finally, we carried out a fifth iteration, for which the mean error decreased from 11.49\,mL to 11.11\,mL (3\% reduction).
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{Figures/Syrup_GiP_v4.png}
\caption{Accuracy for syrup before and after 4 gradual fine-tuning iterations.}
\label{fig-gip-syrup}
\end{figure}
\subsubsection{Red Beans}
We collected a total of 36 practices of pouring beans with different initial target weights, including the 15 pouring trials shown in Figure \ref{fig-solids}. Table \ref{tab-batch-beans} summarizes the mean and standard deviation weight errors. Model $M_{11}$ results from fine-tuning $M_0$ using the 36 pouring trials collected.
GSSP thus not only reduces the error when pouring liquids from new source containers; it can also be used to improve the accuracy of pouring a different material. Interestingly, the model $M_0$, trained for pouring water, already has a considerably small error when used for pouring beans.
\begin{table}[h!]
\centering
\begin{tabular}{| c | c | c | c | c | }
\hline
\thead{Source \\ Container} & \thead{Model} & $\mu_e$ (g) & $\sigma_e$ (g) \\
\hline
\multirow{2}{*}{Red Cup} & $M_{0}$ & 16.27 & 6.88\\
& $M_{11}$ & 11.49 & 6.98\\
\hline
\end{tabular}
\caption{Comparison of accuracy for Red Cup before and after fine-tuning $M_0$ using data collected for red beans.}
\label{tab-batch-beans}
\end{table}
\subsection{GSSP Discussion}
At this point, we have applied GSSP as formulated in Algorithm \ref{alg:gip} where the dataset used to fine-tune comes entirely from the robot practices. We performed further experiments to analyze the effects of combining the human demonstrations dataset described in section \ref{sec:dataset} with the robot practices. The hierarchical diagram of Figure \ref{fig-model-relationships} illustrates the details of the fine-tuned models that we have presented so far and the new ones we will present next.
The models presented in sections \ref{sect:evalcon} and \ref{sect:evalmat} correspond to the children of the ``Practices" branch of Figure \ref{fig-model-relationships}. These models were derived by fine-tuning $M_0$ with robot practices that were collected using $M_0$ itself.
In the following sections, we compare the results of new models fine-tuned with a combination of robot practices and human demonstrations against the models fine-tuned with robot practices only. We also carried out experiments using the red cup with the fine-tuned models to verify their impact on the accuracy for the accustomed containers. Finally, we show the results of applying GSSP to the robot practices using a fine-tuned model instead of $M_0$.
\begin{figure*}[h!]
\resizebox{\linewidth}{!}{
\begin{forest}
for tree={
minimum height=1cm,
anchor=north,
align=center,
child anchor=north,
edge={-stealth,line width=1pt},
},
[{$M_0$}, align=center, name=BM
[{Practices}, name=DS
[{Gradual}, name=GT
[{Wine \\ Bottle}, name=CM
[{$M_{2\textnormal{-}4}$}, name=RM]
]
[{Syrup} [{$M_{7\textnormal{-}10}$}]]
]
[{Batch}
[{Wine \\ Bottle} [{$M_1$}]]
[{Blue \\ Bottle} [{$M_5$}]]
[{Measuring \\ Cup} [{$M_6$}]]
[{Red \\ Beans}[{$M_{11}$}]]
]
]
[{Practices + \\ Human Demonstrations}
[{Batch}[{Wine \\ Bottle} [{$M_{12}*$}]]
[{Blue \\ Bottle} [{$M_{13}$}]]
[{Measuring \\ Cup} [{$M_{14}$}]]
]
]
]
\node[anchor=west,align=left]
at ([xshift=-2.5cm]RM.west|-RM) {Result \\ Model(s)};
\node[anchor=west,align=left]
at ([xshift=-2.5cm]RM.west|-CM) {Container/ \\ Material};
\node[anchor=west,align=left]
at ([xshift=-2.5cm]RM.west|-GT) {GSSP Type};
\node[anchor=west,align=left]
at ([xshift=-2.5cm]RM.west|-DS) {Dataset};
\node[anchor=west,align=left]
at ([xshift=-2.5cm]RM.west|-BM) {Base \\ Model};
\end{forest}
\begin{forest}
for tree={
minimum height=1cm,
anchor=north,
align=center,
child anchor=north,
edge={-stealth,line width=1pt},
},
[{$M_{12}*$}, align=center, name=BM
[{Practices}, name=DS
[{Batch}, name=GT
[{Blue \\ Bottle}, name=CM
[{$M_{15}$}, name=RM]
]
]
]
[{Practices + \\ Human Demonstrations}
[{Batch}[{Blue \\ Bottle} [{$M_{16}$}]]
]
]
]
\end{forest}
}
\caption{Model relationships for the application of batch or gradual GSSP to particular datasets, containers, or materials.}
\label{fig-model-relationships}
\end{figure*}
\subsubsection{Combination of Practices and Human Demonstrations}
We fine-tuned model $M_0$ using the combination of the human demonstrations dataset with the robot practices. We believe this is equivalent to training from scratch on the combined datasets, the only difference being that training converges faster since $M_0$ has already learned from the human demonstrations. Table \ref{tab-summary-comb} shows the summary of mean and standard deviation errors for the fine-tuned models. $M_{12}$ is the fine-tuning of $M_0$ with the training dataset of human demonstrations from section \ref{sec:dataset} plus the batch of wine bottle robot practices from section \ref{sec:wine}. Similarly, $M_{13}$ and $M_{14}$ are the results of fine-tuning $M_0$ with the training dataset of human demonstrations of section \ref{sec:dataset} plus the blue bottle and measuring cup batches of robot practices, respectively.
Model $M_{12}$ has slightly higher mean and standard deviation errors than model $M_1$ w.r.t. the wine bottle, meaning that model $M_1$ is better suited for pouring accurately with the wine bottle. Interestingly, model $M_{13}$ slightly outperformed model $M_5$ w.r.t. the blue bottle's accuracy. Similarly, model $M_{14}$ marginally outperformed model $M_6$. Given this marginal improvement, fine-tuning using only the robot practices is sufficient to achieve satisfactory precision. Moreover, generating models $M_{13}$ and $M_{14}$ requires the original human demonstrations dataset, which increases the cost of training because of the larger dataset.
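The two regimes compared above can be stated compactly. The following is a schematic only: `fine_tune` is a placeholder, and the human-demonstration count is purely illustrative (only the 54 blue-bottle practices are stated in the text).

```python
# Schematic of the two fine-tuning regimes; fine_tune is a placeholder and
# the human-demonstration count is illustrative, not from the paper.
def fine_tune(base_datasets, new_data):
    # A "model" is represented here by the list of datasets it was trained on.
    return base_datasets + [list(new_data)]

human_demos = [f"demo_{i}" for i in range(200)]      # illustrative size
practices = [f"practice_{i}" for i in range(54)]     # blue bottle batch

M0 = [human_demos]
M5 = fine_tune(M0, practices)                  # robot practices only
M13 = fine_tune(M0, human_demos + practices)   # practices + demonstrations
```

The second regime trains on a strictly larger dataset, which is the training-cost increase mentioned above.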
\begin{table}[h!]
\centering
\begin{threeparttable}
\begin{tabular}{| c | c | c | c | c | c |}
\hline
\thead{Source \\ Container} &
\thead{Base \\ Model} & \thead{Fine-tuned \\ Model} & $\mu_e$ (mL) & $\sigma_e$ (mL) \\
\hline
\multirow{2}{*}{\thead{Wine Bottle}}
& \multirow{2}{*}{$M_{0}$} & $M_{1}$\tnote{a} & 15.78 & 13.55\\
& & $M_{12}$\tnote{b} & 17.43 & 13.65\\
\hline
\multirow{2}{*}{\thead{Blue Bottle}}
& \multirow{2}{*}{$M_{0}$} & $M_{5}$\tnote{a} & 14.35 & 21.42\\
& & $M_{13}$\tnote{b} & 12.65 & 5.99\\
\hline
\multirow{2}{*}{\thead{Measuring \\ Cup}}
& \multirow{2}{*}{$M_{0}$} & $M_{6}$\tnote{a} & 8.38 & 8.27\\
& & $M_{14}$\tnote{b} & 8.03 & 8.34\\
\hline
\end{tabular}
\begin{tablenotes}
\footnotesize
\item[a] Model fine-tuned with robot practices only.
\item[b] Model fine-tuned with robot practices plus human demonstrations.
\end{tablenotes}
\end{threeparttable}
\caption{Accuracy comparison of pouring water using the fine-tuned models.}
\label{tab-summary-comb}
\end{table}
\subsubsection{GSSP Effect on Accustomed Containers}
We investigated how the fine-tuned models affected the generalization to pouring containers that already work well with $M_0$. To this end, we selected the red cup as it was the best performing container for $M_0$ and is also part of the human demonstrations dataset. From Table \ref{table-error}, it has $\mu_e =$ 3.71\,mL and $\sigma_e =$ 3.88\,mL. Table \ref{tab-summary-red} shows the accuracy of 15 pouring trials with the red cup using models fine-tuned with robot practices only and with the combination of practices and human demonstrations. Based on the results, we can see that the accuracy of the red cup was severely affected by the models fine-tuned from the practices of the wine bottle and the blue bottle, i.e., $M_1$ and $M_5$, respectively. The accuracy was not drastically impacted when using $M_6$.
We believe this is related to the fact that $M_0$ was already performing well with the measuring cup. Therefore, the fine-tuning needed to learn more, i.e., modify $M_0$ more drastically, from the practices of the wine bottle and the blue bottle than from the measuring cup's. Based on this result, we can state that models $M_1$ and $M_5$ are specialized models that pour accurately with the wine bottle and the blue bottle, respectively. At this stage, the robot can use a selector: when it needs to use the wine bottle, $M_1$ is chosen to pour; similarly, when it needs to use the blue bottle, $M_5$ is chosen.
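The selector suggested above can be as simple as a lookup with a fallback to the base model. The container-model pairing comes from the text; the lookup mechanism itself is our illustration.

```python
# Container -> specialized model lookup, falling back to the base model M0
# for accustomed containers such as the red cup.
SPECIALIZED = {
    "wine_bottle": "M1",
    "blue_bottle": "M5",
}

def select_model(container, default="M0"):
    # Unknown (accustomed) containers keep the base model
    return SPECIALIZED.get(container, default)
```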
\begin{table}[h!]
\centering
\begin{threeparttable}
\begin{tabular}{| c | c | c | c | c | c |}
\hline
\thead{Source \\ Container} &
\thead{Base \\ Model} & \thead{Fine-tuned \\ Model} & $\mu_e$ (mL) & $\sigma_e$ (mL) \\
\hline
\multirow{6}{*}{\thead{Red Cup}}
& \multirow{6}{*}{$M_{0}$} & $M_{1}$\tnote{a} & 31.88 & 19.60\\
& & $M_{12}$\tnote{b} & 8.67 & 5.13\\
\cline{3-5}
& & $M_{5}$\tnote{a} & 25.29 & 14.98\\
& & $M_{13}$\tnote{b} & 8.52 & 5.51\\
\cline{3-5}
& & $M_{6}$\tnote{a} & 4.84 & 2.79\\
& & $M_{14}$\tnote{b} & 6.95 & 5.57\\
\hline
\end{tabular}
\begin{tablenotes}
\footnotesize
\item[a] Model fine-tuned with robot practices only.
\item[b] Model fine-tuned with robot practices plus human demonstrations.
\end{tablenotes}
\end{threeparttable}
\caption{Accuracy of pouring water with the Red Cup using the fine-tuned models.}
\label{tab-summary-red}
\end{table}
\subsubsection{GSSP on a Different Base Model}
We also investigated the effect of applying GSSP to a different starting model. We selected model $M_{12}$, which was the result of fine-tuning model $M_0$ using the combination of the dataset of human demonstrations plus the wine bottle robot practices. We used the blue bottle to collect 54 practices using model $M_{12}$. Table \ref{tab-different-start-blue} summarizes the results. The blue bottle's accuracy using model $M_{12}$ resulted in $\mu_e =$ 26.38\,mL and $\sigma_e =$ 38.09\,mL. This result outperforms its counterpart when using $M_0$, where $\mu_e =$ 55.84\,mL and $\sigma_e =$ 47.26\,mL; model $M_{12}$ is thus more accurate for pouring with the blue bottle than $M_0$. We fine-tuned model $M_{12}$ using only the blue bottle's practices, which resulted in model $M_{15}$. Its improvement was marginal, around 4.7\% (from 26.38\,mL to 25.12\,mL). We also used the combination of the human demonstrations plus the wine bottle practices plus the blue bottle practices to fine-tune $M_{12}$, which resulted in model $M_{16}$. There was no improvement in the average error. Therefore, we believe that the most reliable model to fine-tune is the one that comes from human demonstrations, i.e., $M_{0}$. We hypothesize that such a model has learned the variations inherent to humans and therefore carries more information than a model fine-tuned with the robot practices.
\begin{table}[h!]
\centering
\begin{threeparttable}
\begin{tabular}{| c | c | c | c | c | }
\hline
\thead{Source \\ Container} & \thead{Base \\ Model} & \thead{Fine-tuned \\ Model} & $\mu_e$ (mL) & $\sigma_e$ (mL) \\
\hline
\multirow{3}{*}{Blue Bottle} & $M_{12}$ & & 26.38 & 38.09\\
& $M_{12}$ & $M_{15}$\tnote{a} & 25.12 & 27.43\\
& $M_{12}$ & $M_{16}$\tnote{b} & 28.27 & 25.07\\
\hline
\end{tabular}
\begin{tablenotes}
\item[a] Model fine-tuned with robot practices only.
\item[b] Model fine-tuned with robot practices plus human demonstrations.
\end{tablenotes}
\end{threeparttable}
\caption{Accuracy of pouring water with the Blue Bottle fine-tuning model $M_{12}$.}
\label{tab-different-start-blue}
\end{table}
\subsection{Evaluation Summary}
Overall, the experiments and evaluation results show that
\begin{enumerate}
\item The model trained on data of humans pouring water pours accurately using accustomed containers not seen during training. The mean volume error range of pouring water was from 4.12\,mL to 12.35\,mL for such containers.
\item The model also generalizes the accurate pouring behavior to liquids such as oil and syrup and solid materials such as rice and beans.
\item By using robot practices as new training data, GSSP lowers the initial pouring error to values smaller than the state-of-the-art. For instance, experiments with a wine bottle showed a reduction of mean volume error from 51.22\,mL to 15.78\,mL (69\% reduction).
\item Batch and gradual fine-tuning yield similar results when there is enough variation in the practices collected.
\item Applying GSSP with the combination of human demonstrations and robot practices does not improve the accurate pouring generalization results.
\end{enumerate}
\section{Conclusion}
In this work, we presented a self-supervised learning from demonstrations approach that allows robots to pour as accurately and as fast as humans. The presented work is based on a peephole LSTM that learns the motion dynamics by using the actual outcomes of the demonstrations, regardless of whether they were executed by expert humans. We evaluated the model using a robotic system that we devised and a UR5e robotic arm\footnote{Videos of the robotic pouring demos can be found at: \\
\texttt{https://youtu.be/MYfZBiHTDBc},\\ \texttt{https://youtu.be/u4OyQeMbwsQ}, and \\ \texttt{https://www.youtube.com/watch?v=xp9nEDTntU4}}. Based on the extensive experiments carried out, the presented model pours more accurately and faster than related works that have approached the accurate pouring problem. The capability of the model was further expanded with generalization by self-supervised practicing (GSSP) for containers and materials that presented high pouring error.
\section*{Acknowledgments}
This material is based upon work supported by the National Science Foundation under Grants Nos. 1812933 and 1910040.
\biboptions{square,comma}
\section*{References}
\include{main.bbl}
\onecolumngrid
\newpage
\section{Introduction}
\noindent
We consider the perturbed sine-Gordon equation
\begin{equation}\label{SGE}
\t_{tt}-\t_{xx}+\sin\t=F(\varepsilon,x),~~~~t,x\in\mathbb R,~~~~\varepsilon\ll 1,
\end{equation}
which can be written as a system in first order formulation:
\begin{eqnarray}\label{SGE1 first order introduction}
\partial_t\begin{pmatrix}
\t\\
\psi
\end{pmatrix}=\l(\begin{matrix}
\psi\\
\t_{xx}-\sin\t+F(\varepsilon,x)\\
\end{matrix}\r).
\end{eqnarray}
The unperturbed sine-Gordon equation (i.e., $F(\varepsilon,x)=0$)
admits soliton solutions
$$
\begin{pmatrix}
\t_0(\xi(t),u(t),x)\\
\psi_0(\xi(t),u(t),x)
\end{pmatrix},
~\text{where}
$$
\begin{eqnarray}\label{ODEintro}
\dot\xi=u\,, ~~ \dot u=0\,,~~~~(\xi(0),u(0))=(a,v)\in\mathbb R\times(-1,1).
\end{eqnarray}
Here the functions $(\t_0,\psi_0)$ are defined by
\begin{eqnarray}\label{solitonsolution}
{}&\begin{pmatrix}
\t_0(\xi,u,x)\\
\psi_0(\xi,u,x)
\end{pmatrix}:=\begin{pmatrix}
\t_K(\gamma(u)(x-\xi))\\
-u\gamma(u)\t_K'(\gamma(u)(x-\xi))\\
\end{pmatrix}\,,~u\in(-1,1),~~\xi,x\in\mathbb R,
\end{eqnarray}
where
$$
\gamma(u)=\frac 1 {\sqrt{1-u^2}},~~~~\t_K(x) =4\arctan(e^x),
$$
and $\t_K$ satisfies $\t_K''(x)=\sin\t_K(x)$ with boundary conditions $\t_K(x) \to \begin{pmatrix} 2\pi\\0 \end{pmatrix}$ as $x\to \pm \infty$.
The states $\l(\begin{matrix}
\t_0(a,v,\cdot)\\
\psi_0(a,v,\cdot)\\
\end{matrix}\r)$ form the two dimensional classical solitary manifold
$$
{\cal S}_0:=\l\{ \l(\begin{matrix}
\t_0(a,v,\cdot)\\
\psi_0(a,v,\cdot)\\
\end{matrix}\r)~:~v\in(-1,1),~a\in\mathbb R
\r\}.
$$
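The definitions above can be checked numerically. The following sketch verifies $\theta_K'' = \sin\theta_K$ by central finite differences and evaluates the boosted profile \eqref{solitonsolution}; the discretization choices are ours.

```python
import numpy as np

def theta_K(x):
    # Kink profile: theta_K(x) = 4 arctan(e^x)
    return 4.0 * np.arctan(np.exp(x))

def soliton(x, xi, u):
    # Boosted kink (theta_0, psi_0) centred at xi with velocity |u| < 1
    gamma = 1.0 / np.sqrt(1.0 - u**2)
    z = gamma * (x - xi)
    # theta_K'(z) = 2 sech(z)
    return theta_K(z), -u * gamma * 2.0 / np.cosh(z)

# Verify theta_K'' = sin(theta_K) via central finite differences
x = np.linspace(-5.0, 5.0, 2001)
h = x[1] - x[0]
th = theta_K(x)
d2 = (th[2:] - 2.0 * th[1:-1] + th[:-2]) / h**2
residual = np.max(np.abs(d2 - np.sin(th[1:-1])))

# Boundary behaviour: theta_K -> 2*pi as x -> +inf, theta_K -> 0 as x -> -inf
th0, ps0 = soliton(np.array([0.0]), 0.0, 0.5)
```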
Let us mention some previous works before we state the main result.
Orbital stability of soliton solutions under perturbations of the initial data has been proven for the unperturbed sine-Gordon equation
(see \cite{MR678151}, \cite[Section 4]{Stuart3}).
D. M. Stuart \cite{Stuart2}
considered the perturbed sine-Gordon equation
\begin{eqnarray}
{}&\t_{tt}-\t_{xx}+\sin\t+\varepsilon g=0,\nonumber
\end{eqnarray}
for specific perturbations of the form
$
g=g(\varepsilon t, \varepsilon x, \t)
$ and initial data $\varepsilon$-close to
a kink.
He proved the existence of solutions, which approximate kinks whose centre and velocity evolve slowly in time, up to time $1/\varepsilon$ and up to errors of order $\varepsilon$.
Kinks are solutions of the unperturbed equation \eqref{SGE},
given by
$
\t(t,x)=\t_0(\xi(t),u(t),x)
$,
where the centre $\xi$ and the velocity $u$ satisfy ODEs \eqref{ODEintro}.
%
The proof is based on an orthogonal
decomposition of the solution into an oscillatory part and a one-dimensional
"zero-mode" term.
In \cite[Part I]{MashkinDissertation} we studied equation \eqref{SGE1 first order introduction} for different types of perturbations.
For instance, we proved for $F(\varepsilon,x)= \varepsilon f(\varepsilon x)$
that the Cauchy problem for initial data $\varepsilon^\frac{1}2$-close to the classical solitary manifold
$
{\cal S}_0
$
has a unique solution, which follows a trajectory on ${\cal S}_0$ up to time $1/\varepsilon^{\frac 1 4}$ and up to errors of order $\varepsilon^\frac{1}2$, where the trajectory on ${\cal S}_0$ is described precisely by ODEs for uniform linear motion.
One should take into account that our perturbation $F(\varepsilon,x)= \varepsilon f(\varepsilon x)$ is not comparable to the perturbations in \cite{Stuart2} due to some specific assumptions made on $g$.
For perturbations of type $F(\varepsilon,x)= \varepsilon^2 f(\varepsilon x)$ with
$f\in H^3(\mathbb R)$, we obtained richer dynamics on the solitary manifold in \cite{MashkinElectricField}. We proved that the Cauchy problem for initial data $\varepsilon^\frac{11}8$-close to the classical solitary manifold ${\cal S}_0$ has a unique solution, which follows a trajectory on ${\cal S}_0$ up to time $1/\varepsilon$ and up to errors of order $\varepsilon^\frac{3}4$.
The trajectory
on
$
{\cal S}_0
$
is described precisely by
ODEs,
which contain the perturbation $f$.
The ODEs are obtained by considering restricted Hamilton equations and describe a
fixed nontrivial perturbation of the uniform linear motion as $\varepsilon \to 0$ if $f(0)\not=0$.
The evolution of the dynamics on the solitary manifold in \cite[Part I]{MashkinDissertation}/\cite{MashkinElectricField} is described more accurately than the evolution of the approximated kink in \cite{Stuart2} in the following sense: in \cite[Part I]{MashkinDissertation}/\cite{MashkinElectricField} the parameters of the manifold satisfy specific ODEs exactly, whereas in \cite{Stuart2} the evolution of the kink parameters is determined only up to errors of order $\varepsilon$.
The proofs of \cite[Part I]{MashkinDissertation}, \cite{MashkinElectricField}, and \cite[Section 4]{Stuart3} are based on a nowadays conventional method for verification of stability of solitons (for different equations), namely the decomposition of the dynamics into a part on the classical solitary manifold and a transversal part along with the application of Lyapunov-type arguments.
This approach emerges, for instance, also in \cite{MR2094474,MR2232367,MR2342704,HoZwSolitonint,MR2855072}.
In \cite{Mashkin} we extended this method
by utilizing a virtual solitary manifold.
There we studied the sine-Gordon equation with perturbations
$
\varepsilon \mapsto F(\varepsilon,\cdot)$
of class
$
C^{n}
$
(mapping into a specific weighted Sobolev space on $\mathbb R$),
whose first $k$ derivatives vanish at 0, i.e.,
$\pa \varepsilon^l F(0,\cdot)=0~~ \text{for} ~~0\le l\le k$, where $k+1 \le n$ and $n\ge 1$.
We constructed in \cite{Mashkin} by an iteration scheme composed of $n$ steps
a virtual solitary manifold, which is adjusted
to the perturbation $F $. The iteration process
can be thought of as a stepwise distortion of the classical solitary manifold ${\cal S}_0$.
Each step in the iteration scheme corresponds to solving implicitly a specific PDE.
The implicit solution $\varepsilon\mapsto (\t_n^\varepsilon(\xi,u,x),
\psi_n^\varepsilon(\xi,u,x),\lambda_{n}^\varepsilon\l(\xi, u\r))$ obtained in
the last iteration step defines
the
virtual solitary manifold
\begin{eqnarray}\label{virtMF}
{\cal S}_n^\varepsilon:=\l\{ \begin{pmatrix}
\t_n^\varepsilon(a,v,\cdot)\\
\psi_n^\varepsilon(a,v,\cdot)
\end{pmatrix}~:~v\in(-u_*,u_*),~a\in\mathbb R
\r\}, ~~~~u_*\in (0,1],
\end{eqnarray}
and is used to formulate
the result of \cite{Mashkin}, which is as follows:
For $\xi_s\in\mathbb R$, $\varepsilon\ll 1$,
the Cauchy problem
\begin{eqnarray}\label{Cauchy_intro}
\partial_t \begin{pmatrix}
\t \\
\psi
\end{pmatrix}{}&=\l(\begin{matrix}
\psi \\
\pa x^2\t -\sin\t +F(\eps,x)\\
\end{matrix}\r),~
\begin{pmatrix}
\t(0,x)\\
\psi(0,x)
\end{pmatrix}{}&=
\begin{pmatrix}
\t^\varepsilon_n(\xi_s,u_s,x)\\
\psi^\varepsilon_n(\xi_s,u_s,x)
\end{pmatrix}+
\begin{pmatrix}
\v(0,x)\\
w(0,x)
\end{pmatrix},
\end{eqnarray}
with appropriate initial data that is $\varepsilon^n$-close to ${\cal S}_n^\varepsilon$, i.e.,
$
\nhone{v(0,\cdot)}^2+\nw{w(0,\cdot)}^2\le \varepsilon^{2n},
$
with initial velocity that satisfies the smallness assumption
$
|u_s|\le \tilde C\varepsilon^{\frac{k+1}2}
$,
has a unique solution $(\t,\psi)$,
which may be written up to time
$ 1/ (\tilde C {\varepsilon} ^{\frac{k+1}2})$ in the form
\begin{eqnarray}
\begin{pmatrix}
\t(t,x)\\
\psi(t,x)
\end{pmatrix}=\begin{pmatrix}
\t_n^\varepsilon(\bar\xi(t),\baru(t),x)\\
\psi_n^\varepsilon(\bar\xi(t),\baru(t),x)
\end{pmatrix}+
\begin{pmatrix}
\v(t,x)\\
w(t,x)
\end{pmatrix}.\nonumber
\end{eqnarray}
The solution remains $\varepsilon^n$-close to ${\cal S}_n^\varepsilon$, i.e.,
$
\nhone{v(t,\cdot)}^2+\nltwo{w(t,\cdot)}^2\le \tilde C \varepsilon^{2n} ,
$
and the dynamics on ${\cal S}_n^\varepsilon$ is described precisely by the parameters $(\bar\xi(t),\baru(t))$, which satisfy exactly the ODEs
\begin{eqnarray}\label{ODE introduction}
\d{\bar\xi}( t) = \baru(t) \,,~~~~
\d{\baru}( t) = \lambda_{n}^\varepsilon\l(\bar\xi(t), \baru(t)\r),
\end{eqnarray}
with initial data
$
\bar\xi(0)=\xi_s,~\baru(0)=u_s
$.
The parameters $\bar\xi,\baru$ describe
a fixed nontrivial perturbation of the uniform linear motion
as $\varepsilon \to 0$ if the perturbation $F$ satisfies a specific condition.
The higher the differentiability class $C^n$ of $F$, the higher the accuracy of the stability statement; and the more of the first derivatives of $F$ vanish at 0, the larger the time scale of the result.
The sine-Gordon equation arises in various physical applications presented for instance in
\cite{0953-8984-7-2-013,RevModPhys.61.763,FrenKont,0022-3719-11-1-007}.
In \cite{Skyrme237} T. H. R. Skyrme proposed
the equation to model elementary particles and in \cite{doi:10.1143/JPSJ.46.1594}
dynamics of solitons under constant electric field were examined numerically.
We focus in the present work, as also in \cite{Mashkin},
on the interaction
of virtual solitons
with a time independent electric field $F(\varepsilon,x)$, which is a physically relevant problem.
\paragraph{Main Result and Consequences}
The iteration scheme introduced in \cite{Mashkin} provides a sequence of implicitly given functions.
In the present paper, we show that under some additional assumptions the provided sequence,
denoted by $(\t^\varepsilon_n,\psi^\varepsilon_n,\lambda^\varepsilon_n)$,
converges to a limit, which we denote by $(\t^\varepsilon_\infty,\psi^\varepsilon_\infty,\lambda^\varepsilon_\infty)$.
Our main result states that the
virtual solitary manifold defined analogously to \eqref{virtMF} by the
functions
$(\t^\varepsilon_\infty,\psi^\varepsilon_\infty,\lambda^\varepsilon_\infty)$
is invariant.
In greater detail, the main result is as follows.
Assume that the perturbation
$
\varepsilon \mapsto F(\varepsilon,\cdot)
$
is analytic (mapping into a specific weighted Sobolev space on $\mathbb R$), where the derivatives with respect to $\varepsilon$ of $F$ satisfy specific bounds at $\varepsilon=0$ (stated below in \eqref{assumption bounds derivatives F}) and $ F(0,\cdot)=0$, $\pa \varepsilon F(0,\cdot)=0$.
Let $\xi_s\in \mathbb R $ and consider the Cauchy problem
\begin{eqnarray} \label{Cauchy_intro InfMF}
\partial_t \begin{pmatrix}
\t \\
\psi
\end{pmatrix}
=\l(\begin{matrix}
\psi \\
\pa x^2\t -\sin\t +F(\eps,x)\\
\end{matrix}\r) ,~~
\begin{pmatrix}
\t(0,x)\\
\psi(0,x)
\end{pmatrix}=\begin{pmatrix}
\t^\varepsilon_\infty(\xi_s,u_s,x)\\
\psi^\varepsilon_\infty(\xi_s,u_s,x)
\end{pmatrix} ,~~
\varepsilon\ll 1,
\end{eqnarray}
where the initial velocity satisfies the assumption $|u_s|< u_*$ for a specific $u_*$.
Then the Cauchy problem
\eqref{Cauchy_intro InfMF}
has a unique solution, which may be written in the form
\begin{eqnarray}\label{intro form ivariant mf}
\begin{pmatrix}
\t(t,x)\\
\psi(t,x)
\end{pmatrix}=\begin{pmatrix}
\t_\infty^\varepsilon(\bar\xi(t),\baru(t),x)\\
\psi_\infty^\varepsilon(\bar\xi(t),\baru(t),x)
\end{pmatrix},
\end{eqnarray}
where the parameters $(\bar\xi(t),\baru(t))$ satisfy the ODEs
\begin{eqnarray} \label{intro ODEs invariant mf}
{}&\d{\bar\xi}( t) = \baru(t) ,~~
\d{\baru}( t) = \lambda_{\infty}^\varepsilon\l(\bar\xi(t), \baru(t)\r),
\end{eqnarray}
with initial data
$
\bar\xi(0)=\xi_s,~\baru(0)=u_s
$.
The solution exists and has this form as long as the parameters stay in an appropriate parameter region, i.e., as long as $|\bar\xi(t)| \le \Xi, ~|\bar u(t)|<u_*$, where $\Xi$ depends on the initial centre $\xi_s$.
In particular, if $|u_s|\le \tilde C \varepsilon^{}$ for a specific $\tilde C$,
then the unique solution exists and can be expressed in the presented form on the time scale
\begin{eqnarray}\label{intro time scale}
0\le t \le \frac {1} {\tilde C \varepsilon }.
\end{eqnarray}
If additionally the perturbation $F$ satisfies condition \eqref{intro condition on nontrivial dynamic} mentioned below, then
the
parameters $\bar\xi,\baru$ describe,
on the nontrivial time scale \eqref{intro time scale},
a fixed nontrivial perturbation of the uniform linear motion
as $\varepsilon \to 0$.
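To illustrate the modulation dynamics \eqref{intro ODEs invariant mf}, one can integrate the ODEs numerically for a made-up forcing. The function `lam` below is a hypothetical stand-in for $\lambda_\infty^\varepsilon$, which is only defined implicitly by the iteration scheme; the step size and horizon are our choices.

```python
import numpy as np

# Toy integration of the modulation ODEs xi' = u, u' = lambda(xi, u).
# lam is a made-up stand-in for lambda_infty^eps (defined only implicitly
# in the paper).
eps = 0.01
lam = lambda xi, u: -eps * np.sin(xi)   # hypothetical forcing

def rhs(y):
    xi, u = y
    return np.array([u, lam(xi, u)])

def rk4_step(y, dt):
    # One classical Runge-Kutta 4 step
    k1 = rhs(y)
    k2 = rhs(y + 0.5 * dt * k1)
    k3 = rhs(y + 0.5 * dt * k2)
    k4 = rhs(y + dt * k3)
    return y + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

y = np.array([0.0, eps])        # (xi_s, u_s) with |u_s| <= C * eps
dt, T = 0.1, 1.0 / eps          # integrate up to the 1/eps time scale
for _ in range(int(T / dt)):
    y = rk4_step(y, dt)

# For this forcing, (1/2) u^2 - eps * cos(xi) is conserved, so the
# trajectory remains a small perturbation of uniform motion.
energy = 0.5 * y[1]**2 - eps * np.cos(y[0])
```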
The result states that the solution remains on the virtual solitary manifold
defined by $(\t^\varepsilon_\infty,\psi^\varepsilon_\infty)$
and it yields a precise description of the solution $(\t,\psi)$ to the Cauchy problem \eqref{Cauchy_intro InfMF}, since the dynamics on the
manifold is exactly characterized by the ODEs \eqref{intro ODEs invariant mf}.
The maximal time interval of existence of the solution depends on the perturbation $F$ and on the initial data, which determine the ODEs \eqref{intro ODEs invariant mf}; the ODEs in turn determine how long the parameters $(\bar\xi(t),\baru(t))$ stay in the corresponding parameter region.
A precise statement is found in \cref{se: Main Results}.
The existence of the invariant virtual solitary manifold has tremendous theoretical value. Furthermore, the invariant manifold allows us to describe the solution of \eqref{SGE1 first order introduction} with appropriate initial data far more accurately than was done in \cite{Mashkin}.
Our main result can be considered as an extension of the work of \cite{Mashkin}, where we corrected the classical solitary manifold of the sine-Gordon equation
arbitrarily many times (finite number) and
improved the accuracy of the stability statement in each correction step.
In this paper the invariant virtual solitary manifold is generated by a limit process - that is, in infinitely many correction steps - in such a way that the manifold is adjusted to the perturbation term $F$.
There exists a community which advocates the following conjecture for specific PDEs with soliton solutions: for appropriate classes of solutions to the corresponding PDE there exists a manifold which acts as an attractor. One expects that for appropriate initial data, not necessarily close to the manifold, the solution comes close to the manifold as time advances. In the case of the sine-Gordon equation, the virtual solitary manifold generated in this paper is a serious candidate for such an attractive manifold, which makes our result even more interesting for further investigations.
Our approach, and the existence of an invariant manifold for an integrable equation with an external perturbation (invariant in the sense of our main result), are to our knowledge a novelty in the field of stability of solitons.
However, singular corrections of the classical solitary manifold have been carried out in other works in different forms such as in \cite{HolmerLin} and in \cite{HoZwSolitonint} for the NLS equation, which corresponds to the first iteration in the scheme from \cite{Mashkin}.
The idea of modifying the classical solitary manifold of the sine-Gordon equation by utilizing implicitly defined functions appears in \cite[Section 3]{Stuart3}, where the purpose was to rewrite the Hamiltonian in a neighbourhood of the manifold of virtual solitons.
Neither the virtual solitary manifold \eqref{virtMF} nor the iteration scheme introduced in \cite{Mashkin} were considered in \cite{Stuart3}.
Several results on long (but finite) time scales for different
equations with external potentials can be found, for example, in \cite{MR2094474,MR2232367,MR2342704,MR2855072}.
Further results on orbital stability and long time soliton asymptotics are presented in
\cite{MR820338,MR0428914,MR0386438,MR2920823,MR1071238,MR1221351,ImaykinKomechVainberg,MR3630087,MR3461359}.
\paragraph{Our Techniques}
We generate the invariant virtual solitary manifold
by utilizing
the iteration scheme from \cite{Mashkin},
whereby we modify the scheme in certain points.
In the present paper, the scheme is
implemented for an analytic function
$\varepsilon\mapsto \tilde F(\varepsilon)$
mapping into a specific
Sobolev space on $\mathbb R^2$
such that $\tilde F(\varepsilon)$ depends on $(\xi,x)$ (for the sake of clarity, we suppress the dependence on $(\xi,x)$ in the notation).
We assume that the derivatives of $\tilde F$ with respect to $\varepsilon$ satisfy specific bounds at $\varepsilon=0$ (stated below in \eqref{assumption depsF}) and that $\tilde F(0)=0$, $\pa \varepsilon\tilde F(0)=0$.
$\tilde F$ will be specified later. The iteration scheme is as follows:
The function
$(\t_0,\psi_0)$, given by \eqref{solitonsolution}, solves
\begin{eqnarray}\label{successive eq G0}
{}&\BR{u\pa \xi\l(\begin{matrix}
\t\\
\psi\\
\end{matrix}\r)
-\l(\begin{matrix}
\psi\\
\pa x^2\t-\sin\t\\
\end{matrix}\r)
}{\large$=:{\cal G}_0(\t,\psi)$}=0\,,
\end{eqnarray}
which is the equation characterizing the classical solitons.
In the first iteration step we amend
${\cal G}_0(\t,\psi)=0$
by introducing an
additional unknown variable $\lambda$ and
adding some terms involving $(\t_0,\psi_0)$ and $\tilde F$. The amended equation is of the form
\begin{eqnarray}\label{successive eq G1}
{}&\BR{u\pa \xi\begin{pmatrix}
\t\\
\psi\\
\end{pmatrix}
-\l(\begin{matrix}
\psi\\
\t_{xx}-\sin\t+\tilde F (\varepsilon)\\
\end{matrix}\r)
+\lambda \pa u\begin{pmatrix}
\t_0\\
\psi_0\\
\end{pmatrix}
}{\large$=:{\cal G}_1^{\eps}(\t,\psi,\lambda)$}=0\,.
\end{eqnarray}
Here and in the following iterations the functions $\t,\psi$ depend on $(\xi,u,x)$ and
$\lambda$ depends on $(\xi,u)$.
We solve ${\cal G}_1^\varepsilon(\t,\psi,\lambda)=0 $ implicitly for $(\t,\psi,\lambda)$ in terms of $\varepsilon$ and denote the solution by $(\t_1^\varepsilon,\psi_1^\varepsilon,\lambda_{1}^\varepsilon)$.
In the next iteration step we amend ${\cal G}_1^\varepsilon(\t,\psi,\lambda)=0 $
by adding some terms involving $(\t_1^\varepsilon,\psi_1^\varepsilon)$ and solve the amended equation
\begin{eqnarray}\label{successive eq G2}
{}&\BR{u\pa \xi\begin{pmatrix}
\t\\
\psi\\
\end{pmatrix}
-\l(\begin{matrix}
\psi\\
\t_{xx}-\sin\t+\tilde F(\varepsilon)\\
\end{matrix}\r)
+\lambda\pa u\begin{pmatrix}
\t_1^0+\pa \varepsilon\t_1^0\varepsilon\\
\psi_1^0+\pa \varepsilon\psi_1^0\varepsilon\\
\end{pmatrix}
}{\large$=:{\cal G}_2^{\eps}(\t,\psi,\lambda)$}=0\,
\end{eqnarray}
implicitly for $(\t,\psi,\lambda)$ in terms of $\varepsilon$.
Continuing the iteration process we obtain
in the $n$th step
the equation
\begin{eqnarray}\label{successive eq Gn}
{}&\BR{u\pa \xi\begin{pmatrix}
\t\\
\psi\\
\end{pmatrix}
-\l(\begin{matrix}
\psi\\
\t_{xx}-\sin\t+\tilde F(\varepsilon)\\
\end{matrix}\r)
+\lambda\pa u\begin{pmatrix}
\sum_{i=0}^{n-1} \frac{\pa \varepsilon^i\t_{n-1}^0}{i!}\varepsilon^i\\
\sum_{i=0}^{n-1} \frac{\pa \varepsilon^i\psi_{n-1}^0}{i!}\varepsilon^i\\
\end{pmatrix}
}{\large$=:{\cal G}_n^{\eps}(\t,\psi,\lambda)$}=0\,,
\end{eqnarray}
where $(\t_{n-1}^\varepsilon,\psi_{n-1}^\varepsilon,\lambda_{n-1}^\varepsilon)$ denotes the solution of ${\cal G}_{n-1}^\varepsilon(\t,\psi,\lambda)=0 $.
We solve ${\cal G}_n^{\eps}(\t,\psi,\lambda)=0$ implicitly for $(\t,\psi,\lambda)$ in terms of $\varepsilon$ and
denote the solution by $(\t_n^\varepsilon,\psi_n^\varepsilon,\lambda_{n}^\varepsilon)$. Due to the assumptions on $\tilde F$
it is possible to iterate this
procedure arbitrarily many times.
The
existence of the implicit solutions $\varepsilon\mapsto(\t_n^\varepsilon,\psi_n^\varepsilon,\lambda_{n}^\varepsilon)$ for $n\ge 1$
is ensured by the implicit function theorem.
In the actual proof we instead consider the transformed equations
\begin{eqnarray}\label{intro def tilde G}
\tilde {\cal G}_n^\varepsilon({\hat \t},{\hat\p},\lambda):={\cal G}_n^\varepsilon(\t_0+{\hat \t},\psi_0+{\hat\p},\lambda)=0,~~~~n\ge 1,
\end{eqnarray}
which will be solved for $ ({\hat \t},{\hat\p},\lambda)$ in terms of $\varepsilon$. This is for functional-analytic reasons, among them the fact that $\t_0(\xi,u,x) \not\rightarrow 0$ as $|x|\to \infty$ for fixed $\xi$ and $u$.
We denote the solutions to the equations $\tilde {\cal G}_n^\varepsilon({\hat \t},{\hat\p},\lambda)=0,~n\ge 1,$ by $( {\hat \t}_{n}^\varepsilon , {\hat\p}_{n}^\varepsilon ,\lambda_{n}^\varepsilon)$, where $(\t_{n}^\varepsilon ,\psi_{n}^\varepsilon ,\lambda_{n}^\varepsilon)
=(\t_0+{\hat \t}_{n}^\varepsilon ,\psi_0+{\hat\p}_{n}^\varepsilon ,\lambda_{n}^\varepsilon)$.
The application of the implicit function theorem
relies on the fact that
$({\hat \t},{\hat\p},\lambda)=(0,0,0)$
solves all equations at $\varepsilon=0$, i.e.,
$\tilde {\cal G}_n^0(0,0,0)=0$.
As a consequence of the construction, the solution
$\varepsilon\mapsto(\t_n^\varepsilon,\psi_n^\varepsilon,\lambda_{n}^\varepsilon)$ obtained in the $n$th iteration solves the equation
\begin{eqnarray}\label{intro spec PDE}
{u\pa \xi\begin{pmatrix}
\t\\
\psi\\
\end{pmatrix}
-\l(\begin{matrix}
\psi\\
\t_{xx}-\sin\t+\tilde F (\varepsilon)\\
\end{matrix}\r)
+\lambda \pa u\begin{pmatrix}
\t\\
\psi\\
\end{pmatrix}
}
=0\,
\end{eqnarray}
up to errors of order $\varepsilon^{n+1}$ for $ n \ge 1$.
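To indicate where this error order comes from, we give a heuristic sketch (using that $\tilde {\cal G}_n^0(0,0,0)=0$ gives $\lambda_n^0=0$, and that the derivatives of consecutive iterative solutions at $\varepsilon=0$ coincide up to order $n-1$, see \cref{thITrelations}): one has $\lambda_{n}^\varepsilon={\cal O}(\varepsilon)$, and the Taylor polynomial appearing in \eqref{successive eq Gn} is the order-$(n-1)$ Taylor polynomial of $(\t_n^\varepsilon,\psi_n^\varepsilon)$ and hence differs from it by ${\cal O}(\varepsilon^{n})$, so that replacing the polynomial by $(\t_n^\varepsilon,\psi_n^\varepsilon)$ produces the error
\begin{eqnarray}
\lambda_{n}^\varepsilon\, \pa u\begin{pmatrix}
\t_n^\varepsilon-\sum_{i=0}^{n-1} \frac{\pa \varepsilon^i\t_{n-1}^0}{i!}\varepsilon^i\\
\psi_n^\varepsilon-\sum_{i=0}^{n-1} \frac{\pa \varepsilon^i\psi_{n-1}^0}{i!}\varepsilon^i\\
\end{pmatrix}
={\cal O}(\varepsilon)\,{\cal O}(\varepsilon^{n})={\cal O}(\varepsilon^{n+1})\,.\nonumber
\end{eqnarray}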
In \cite{Mashkin}, the iterative equations
$\tilde {\cal G}_n^\varepsilon({\hat \t},{\hat\p},\lambda)=0$
were solved in spaces of different regularity in $u$, such that the regularity of the spaces
(which contain the corresponding iterative solutions) decreases by one order after each iteration step. This technique was used
for the following reason.
Each iterative equation contains a derivative
with respect to $u$ of the solution of the preceding equation, as one can see in \eqref{successive eq Gn}.
This derivative leads to a loss of regularity in $u$ in the target set of the map $\tilde {\cal G}_n$ after each iteration step.
However,
the employment of the implicit function theorem for solving the iterative equations requires that
the corresponding linearizations are invertible and that the maps $\tilde {\cal G}_n$ are well-defined.
In \cite{Mashkin}, this is ensured by considering the maps $\tilde {\cal G}_n$ on spaces of decreasing regularity in $u$.
Since,
in the present paper, we need to execute infinitely many (and not only finitely many) iterations in order to obtain a sequence of implicit solutions, we
modify the iteration scheme and proceed as follows.
Due to the analyticity assumption on $F$ in the present paper (which was not assumed in \cite{Mashkin}), the implicit solutions (as well as their derivatives) are analytic in $\varepsilon$,
which is a consequence of the implicit function theorem.
In the first iteration we solve
$\tilde {\cal G}_1^\varepsilon({\hat \t},{\hat\p},\lambda)=0$
and the solution may be written in the form
\begin{eqnarray}\label{intro Taylor representation}
%
\begin{split}
(\hat\t_{1 }^\varepsilon,\hat\psi_{1 }^\varepsilon,\lambda_{1 }^\varepsilon) = \l(\sum_{i=0}^{\infty} \frac{\pa \varepsilon^i\hat\t_{1 }^0}{i!}\varepsilon^i,\sum_{i=0}^{\infty} \frac{\pa \varepsilon^i\hat\psi_{1 }^0}{i!}\varepsilon^i,\sum_{i=0}^{\infty} \frac{\pa \varepsilon^i\lambda_{1 }^0}{i!}\varepsilon^i\r)\,
\end{split}
\end{eqnarray}
as a convergent Taylor series around $\varepsilon=0$.
Further application of the implicit function theorem in spaces of higher regularity in $u$
yields that $(\t_{1 }^\varepsilon,\psi_{1 }^\varepsilon,\lambda_{1 }^\varepsilon)$ is sufficiently often differentiable in $u\in [-u_*,u_*]$, but possibly in a smaller neighbourhood of $\varepsilon=0
$
than that where representation \eqref{intro Taylor representation} holds.
We prove bounds on
the derivatives
$
\pa u^K\pa \varepsilon^N(\t_1^0,\psi_1^0,\lambda_{1}^0)
$ (derivatives with respect to $u\in [-u_*,u_*]$ and $\varepsilon $, evaluated at $\varepsilon=0$), which have the form
\begin{align}
\label{intro first bound it1}
\forall N\ge 2,~ 0\le K \le 2:{}&&
\l\Vert\begin{pmatrix}
\pa u^K\pa \varepsilon^N\t_1^0\\
\pa u^K\pa \varepsilon^N\psi_1^0\\
\pa u^K\pa \varepsilon^N\lambda_{1}^0
\end{pmatrix}
\r\Vert
&\le
C^{2N+ 2K -3}(N-2)!,\\
\label{intro second bound it1}
\forall N\ge 2, ~K\ge 3:{}&&
\l\Vert\begin{pmatrix}
\pa u^K\pa \varepsilon^N\t_1^0\\
\pa u^K\pa \varepsilon^N\psi_1^0\\
\pa u^K\pa \varepsilon^N\lambda_{1}^0
\end{pmatrix}
\r\Vert
&\le
C^{2N + 2K-3}(N-2)!(K-3)!\,,
\end{align}
where $\Vert \cdot \Vert$ is an appropriate norm.
These bounds
imply that
the implicit solution $(\t_{1 }^\varepsilon,\psi_{1 }^\varepsilon,\lambda_{1 }^\varepsilon)$ is differentiable in $u$ on
the same
neighbourhood of $\varepsilon=0$
on which representation \eqref{intro Taylor representation} holds.
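For illustration, a uniform radius of convergence can be read off directly from \eqref{intro first bound it1} (a sketch, for $0\le K \le 2$ and $N\ge 2$; the finitely many remaining coefficients do not affect the radius): the Taylor coefficients satisfy
\begin{eqnarray}
\frac 1{N!}\l\Vert\begin{pmatrix}
\pa u^K\pa \varepsilon^N\t_1^0\\
\pa u^K\pa \varepsilon^N\psi_1^0\\
\pa u^K\pa \varepsilon^N\lambda_{1}^0
\end{pmatrix}
\r\Vert
\le \frac{C^{2N+2K-3}(N-2)!}{N!}
\le C^{2K-3}\l(C^{2}\r)^{N}\,,\nonumber
\end{eqnarray}
so the series in \eqref{intro Taylor representation} and its derivatives in $u$ converge for $|\varepsilon|<C^{-2}$, a radius independent of the iteration step.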
Thus the map $\tilde {\cal G}_2$ is well defined on the same spaces on which the equation
$\tilde {\cal G}_1^\varepsilon({\hat \t},{\hat\p},\lambda)=0$
was initially solved.
This eliminates the loss-of-regularity problem faced in \cite{Mashkin} (in the first iteration),
and we are able to solve
the next iterative equation
$\tilde {\cal G}_2^\varepsilon({\hat \t},{\hat\p},\lambda)=0$
on the same spaces as the preceding equation $\tilde {\cal G}_1^\varepsilon({\hat \t},{\hat\p},\lambda)=0$.
The process
of solving the iterative equations
is continued using the same arguments, and
we successively prove
bounds on
the derivatives of the subsequent solutions
$
\pa u^K\pa \varepsilon^N(\t_n^0,\psi_n^0,\lambda_{n}^0)
$
(derivatives with respect to $u\in [-u_*,u_*]$ and $\varepsilon $, evaluated at $\varepsilon=0$).
The bounds are uniform in $n$ and have the form
\begin{align}
\retainlabel{intro first bound}
\forall N\ge 2,~ 0\le K \le 2:{}&&
\l\Vert\begin{pmatrix}
\pa u^K\pa \varepsilon^N\t_n^0\\
\pa u^K\pa \varepsilon^N\psi_n^0\\
\pa u^K\pa \varepsilon^N\lambda_{n}^0
\end{pmatrix}
\r\Vert
&\le
C^{2N+ 2K -3}(N-2)!,\\
\label{intro second bound}
\forall N\ge 2, ~K\ge 3:{}&&
\l\Vert\begin{pmatrix}
\pa u^K\pa \varepsilon^N\t_n^0\\
\pa u^K\pa \varepsilon^N\psi_n^0\\
\pa u^K\pa \varepsilon^N\lambda_{n}^0
\end{pmatrix}
\r\Vert
&\le
C^{2N + 2K-3}(N-2)!(K-3)!\,,
\end{align}
where $\Vert \cdot \Vert$ is as above.
Here and in \eqref{intro first bound it1}-\eqref{intro second bound it1} the higher order derivatives with respect to $u$ are needed in order to control the first order derivative terms (derivatives
with respect to $u$) in the iterative equations (see \eqref{successive eq Gn}).
Both this fact and the proof of bounds \eqref{intro first bound it1}-\eqref{intro second bound} rely
on a recursive formula for
$
\pa u^K\pa \varepsilon^N(\t_n^0,\psi_n^0,\lambda_{n}^0)
$,
which is proved by induction on $N$ and $K$.
Furthermore, the assumptions on the derivatives of $\tilde F$
at $\varepsilon=0$ are used in the proof of
\eqref{intro first bound it1}-\eqref{intro second bound}.
Bounds
\eqref{intro first bound it1}-\eqref{intro second bound}
imply
that
all iterative implicit solutions are defined on the same neighbourhood,
can be represented there as Taylor series around $\varepsilon=0$ analogous to \eqref{intro Taylor representation} and are there differentiable in $u$. Moreover, it follows from
\eqref{intro first bound it1}-\eqref{intro second bound}
that the iterative implicit solutions are all contained in the same space
and that as $n\to \infty$
the sequence
$(\hat \t_n^\varepsilon,\hat \psi_n^\varepsilon,\lambda_{n}^\varepsilon)$ converges
to the limit
\begin{eqnarray}
({\hat \t}_\infty^\varepsilon,{\hat\p}_\infty^\varepsilon,\lambda_\infty^\varepsilon) := {}&\l(\sum_{i=1}^{\infty} \frac{\pa \varepsilon^i\t_{i}^0}{i!}\varepsilon^i,\sum_{i=1}^{\infty} \frac{\pa \varepsilon^i\psi_{i}^0}{i!}\varepsilon^i,\sum_{i=0}^{\infty} \frac{\pa \varepsilon^i\lambda_{i}^0}{i!}\varepsilon^i\r).
\end{eqnarray}
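The rate of this convergence can be quantified (a sketch, in the norm $\Vert\cdot\Vert$ from \eqref{intro first bound} with $K=0$): since by \cref{thITrelations} the Taylor coefficients of order $i\le n$ of $\hat\t_n^\varepsilon$ and ${\hat \t}_\infty^\varepsilon$ coincide, bounds \eqref{intro first bound} yield for $|\varepsilon|<C^{-2}$
\begin{eqnarray}
\l\Vert {\hat \t}_\infty^\varepsilon-\hat\t_n^\varepsilon \r\Vert
\le 2\sum_{i= n+1}^{\infty} \frac{C^{2i-3}(i-2)!}{i!}\,|\varepsilon|^i
\le \frac 2 {C^{3}}\sum_{i= n+1}^{\infty} \l(C^{2}|\varepsilon|\r)^i
= \frac {2\l(C^{2}|\varepsilon|\r)^{n+1}} {C^{3}\l(1-C^{2}|\varepsilon|\r)}\,,\nonumber
\end{eqnarray}
and analogously for the remaining components.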
Using these facts and \eqref{intro first bound it1}-\eqref{intro second bound} we conclude that
the function
$$
(\t_\infty^\varepsilon,\psi_\infty^\varepsilon,\lambda_\infty^\varepsilon):=(\t_0+{\hat \t}_\infty^\varepsilon ,\psi_0+{\hat\p}_\infty^\varepsilon ,\lambda_\infty^\varepsilon)
$$
satisfies the equation
\begin{eqnarray}
\label{intro eqofinterest}
u\pa \xi\l(\begin{matrix}
\t_\infty^\varepsilon\\
\psi_\infty^\varepsilon\\
\end{matrix}\r)
-\l(\begin{matrix}
\psi_\infty^\varepsilon\\
[\t_\infty^\varepsilon]_{xx}-\sin\t_\infty^\varepsilon+\tilde F(\varepsilon)\\
\end{matrix}\r)
+\lambda_{ \infty}^\varepsilon\pa u\begin{pmatrix}
\t_\infty^\varepsilon\\
\psi_\infty^\varepsilon \\
\end{pmatrix}
=0 \,.
\end{eqnarray}
%
In order to generate the invariant virtual solitary manifold, we apply the iteration scheme to a specific $\tilde F$, which is
a truncated version of the perturbation term $F$ from \eqref{Cauchy_intro}, given by
\begin{eqnarray}\label{intro assumption Chi and F}
\begin{cases}
\tilde F(\varepsilon,\xi,x):=F(\eps,x) \chi(\xi),\\
\text{where } \chi\in C^{\infty}(\mathbb R),~\chi (\xi)=1 \text{ for } |\xi|\le |\xi_s|+3 \text{ and } \chi (\xi)=0 \text{ for } |\xi|\ge |\xi_s|+4.
\end{cases}
\end{eqnarray}
The limit of the sequence of iterative solutions obtained in this way defines the solution of \eqref{intro eqofinterest} with this specific $\tilde F$ (given by \eqref{intro assumption Chi and F}), which implies our main result.
In order to simplify the computations we work in the present paper on spaces which have lower regularity in $(\xi,x)$ than the corresponding spaces in \cite{Mashkin}.
Finally let us explain under which condition
the
parameters $\bar\xi,\baru$ describe
a fixed nontrivial perturbation of the uniform linear motion
as $\varepsilon \to 0$.
We consider the setting where the
assumption $|u_s|\le \tilde C \varepsilon$
is satisfied and hence where
the solution of \eqref{Cauchy_intro InfMF} exists and may be expressed in
the mentioned way up to times $1/(\tilde C \varepsilon)$.
For all $n\ge 1$ the linearization of
$({\hat \t},{\hat\p},\lambda) \mapsto
\tilde {\cal G}_n^\varepsilon({\hat \t},{\hat\p},\lambda)
$
carried out at
$({\hat \t},{\hat\p},\lambda)=(0,0,0)$, $\varepsilon=0$
is invertible and we denote the linearization by
$$
{\frak M}_0^\alpha:
( \t,
\psi,
\lambda) \mapsto
{\frak M}_0^\alpha
( \t,
\psi,
\lambda ).
$$
Thus there exist functions $( \bar\t,
\bar \psi,
\bar \lambda )$ such that the second derivative with respect to $\varepsilon$ of a general function $\tilde F$ (which operates on appropriate spaces),
evaluated at $\varepsilon=0$, can be written in the form
\begin{eqnarray} \label{intro condition on tiF}
\begin{pmatrix}
0\\
\pa \varepsilon^{2} \tilde F(0)
\end{pmatrix}
=
{\frak M}_0^\alpha
( \bar\t,
\bar \psi,
\bar \lambda ),~~\text{ ${\frak M}_0^\alpha$ given by \cref{le invertibilityMxiCtwo alpha} (case $m=0
$)}.
\end{eqnarray}
Here the functions $\bar\t,\bar\psi$ depend on $(\xi,u,x)$ and $\bar\lambda$ depends on $(\xi,u)$.
ODEs \eqref{intro ODEs invariant mf} can be rescaled in time by introducing $s=\varepsilon t$,
$
\hat \xi(s)= \bar\xi(s/\varepsilon)
$, and
$
\hat u(s)= \frac 1 {\varepsilon} {\bar u(s/\varepsilon)}
$
such that the
corresponding transformed ODEs have the form
\begin{eqnarray}
\frac d {ds} \hat \xi(s) = \hat u(s) , ~~~~
\frac d {ds} \hat u(s) = \frac 1 {\varepsilon^{2}} \lambda_{ \infty}^\varepsilon(\hat \xi(s), \varepsilon\hat u(s)).\nonumber
\end{eqnarray}
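Formally, one then expects the following limit system (a heuristic sketch, anticipating that $\lambda_{ \infty}^\varepsilon(\cdot,0)= {\cal O}(\varepsilon^{2})$, as derived below): as $\varepsilon\to 0$,
\begin{eqnarray}
\frac d {ds} \hat \xi(s) = \hat u(s) , ~~~~
\frac d {ds} \hat u(s) = \frac 12\, \pa \varepsilon^2\lambda_{ \infty}^0(\hat \xi(s),0) = \frac 12\, \bar \lambda(\hat \xi(s),0),\nonumber
\end{eqnarray}
where the last identity follows by comparing \eqref{intro condition on tiF} with the differentiated relation \eqref{introduction infty relation} and using invertibility of ${\frak M}_0^\alpha$. The limiting dynamics is therefore nontrivial precisely when $\bar \lambda(\cdot,0)\not=0$.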
As $\varepsilon \to 0$, the transformed ODEs converge to ODEs that describe a fixed nontrivial perturbation of the uniform linear motion if
the following condition is satisfied:
\begin{eqnarray}
\label{intro condition on nontrivial dynamic}
\begin{cases}
{}&\text{There exists $\chi$ satisfying \eqref{intro assumption Chi and F} such that for $\tilde F$ given by \eqref{intro assumption Chi and F} }\\ {}& \text{the following holds: }
\bar \lambda(\cdot, 0)\not= 0 \text{ in
representation \eqref{intro condition on tiF}}.
\end{cases}
\end{eqnarray}
This is for the following reason.
The functions $(\t^\varepsilon_\infty,\psi^\varepsilon_\infty,\lambda^\varepsilon_\infty)$ satisfy the relation
\begin{eqnarray}\label{introduction infty relation}
u\pa \xi\l(\begin{matrix}
\t_\infty^\varepsilon\\
\psi_\infty^\varepsilon\\
\end{matrix}\r)
-\l(\begin{matrix}
\psi_\infty^\varepsilon\\
[\t_\infty^\varepsilon]_{xx}-\sin\t_\infty^\varepsilon+\tilde F(\varepsilon)\\
\end{matrix}\r)
+\lambda_{ \infty}^\varepsilon\pa u\begin{pmatrix}
\t_\infty^\varepsilon\\
\psi_\infty^\varepsilon \\
\end{pmatrix} =0.
\end{eqnarray}
Due to the assumption on $F$ it holds that $\pa \varepsilon \tilde F(0)=0$,
and
differentiation of \eqref{introduction infty relation}
with respect to $\varepsilon$ at $\varepsilon=0$ yields
\begin{eqnarray}
\begin{pmatrix}
0\\
\pa \varepsilon^l \tilde F(0)
\end{pmatrix}
=
{\frak M}_0^\alpha
( \pa \varepsilon^l \t_\infty^0,
\pa \varepsilon^l \psi_\infty^0,
\pa \varepsilon^l\lambda_\infty^0 ),~~~~~1\le l\le 2.
\end{eqnarray}
Using invertibility of ${\frak M}_0^\alpha$,
condition \eqref{intro condition on nontrivial dynamic} and the fact that $\lambda_\infty^0=0$
it follows that $0\not=\lambda_\infty^\varepsilon(\cdot,0)= {\cal O}(\varepsilon^{2})$, which implies the claim.
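In more detail (a sketch): by invertibility of ${\frak M}_0^\alpha$ and $\pa \varepsilon \tilde F(0)=0$, the case $l=1$ gives $\pa \varepsilon\lambda_\infty^0=0$, while comparing the case $l=2$ with representation \eqref{intro condition on tiF} gives $\pa \varepsilon^2\lambda_\infty^0=\bar\lambda$. Together with $\lambda_\infty^0=0$, the Taylor expansion at $\varepsilon=0$ reads
\begin{eqnarray}
\lambda_\infty^\varepsilon(\cdot,0)=\frac {\varepsilon^{2}} 2\, \bar \lambda(\cdot,0)+{\cal O}(\varepsilon^{3})\,,\nonumber
\end{eqnarray}
which is nonzero for all sufficiently small $\varepsilon\not=0$ under condition \eqref{intro condition on nontrivial dynamic}.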
\paragraph{Outline of the Paper}
The paper is organized as follows. In \cref{se: Main Results}, we formulate the main result.
In \cref{se: Implicit function theorem}, we
modify the iteration scheme from \cite{Mashkin},
construct a sequence of iterative solutions and prove bounds on the elements of the sequence.
In \cref{se: Convergence of the Sequence}, we show that the sequence of iterative solutions converges and that its limit satisfies the equation of interest.
Our main result, \cref{maintheorem}, is proved in \cref{se: Main Results Proof}.
\paragraph{Notation and Conventions}
For a Hilbert space $H$ we denote its inner product by $\langle\cdot,\cdot\rangle_H $.
To simplify notation,
occasionally we drop the dependence of functions on certain variables. We write ${L_x^{2}(\mathbb R)},{H_{\xi,x}^{k}(\mathbb R^2)}$ and so on for the Lebesgue and Sobolev spaces when we wish to emphasize the variables of integration.
We use the notation $\t(\xi,u,x)=\t(u)(\xi,x)$, $\psi(\xi,u,x)=\psi(u)(\xi,x)$.
\newpage
\section{Main Result}\label{se: Main Results}
To formulate our result precisely, we need some definitions.
\begin{definition}\label{def:PartfourMainResult}
Let $\alpha,k,m\in\mathbb{N}_0$ and $u_*>0$, and set $I(u_*):=[-u_*,u_*]$.
\begin{itemize}
\item [(a)] $H^{k,\alpha}(\mathbb R) $ denotes the weighted Sobolev space of functions with finite norm
$$|\t|_{H^{k,\alpha}(\mathbb R)}= |(1+|x|^2) ^\frac \alpha 2\t(x)|_{H_x^{k}(\mathbb R)}\,.$$
\item [(b)] $H^{k,\alpha}(\mathbb R^2) $ denotes the weighted Sobolev space of functions with finite norm
$$|\t|_{H^{k,\alpha}(\mathbb R^2)}= |(1+|\xi|^2+|x|^2) ^\frac \alpha 2\t(\xi,x)|_{H_{\xi,x}^{k}(\mathbb R^2)}\,.$$
\item [(c)] $ \ubar{ Y}^\alpha$ is the space $H^{2,\alpha}(\mathbb R^2) \oplus H^{1,\alpha}(\mathbb R^2) \oplus H^{2,\alpha}(\mathbb R)$
with the finite norm
$$
|y|_{\ubar{ Y}^\alpha} = |\t|_{H^{2,\alpha}(\mathbb R^2)}+ |\psi|_{H^{1,\alpha}(\mathbb R^2)}+|\lambda |_{H^{2,\alpha}(\mathbb R)}\,.
$$
\item [(d)] $Y_m^\alpha(u_*)$ is the space\\
$\\\ba
{}&\bigg\{ y=(\t,\psi,\lambda) \in C^m( I(u_*), \ubar{ Y}^\alpha) : \Vert y \Vert_{Y_m^\alpha(u_*)} <\infty;~\forall~ u\in I(u_*),~\forall~\mu\in H^{2,\alpha}(\mathbb R):\\
{}& \Ltwortwoaxix{ \begin{pmatrix} \t(u)(\xi,x)\\
\psi(u)(\xi,x) \end{pmatrix}}{\mu(\xi)\begin{pmatrix} \t_K'(\gamma(u)(x-\xi))\\
-u\gamma(u)\t_K''(\gamma(u)(x-\xi))\end{pmatrix}} =0 \bigg\}\,
\ea\\$\\
with the finite norm
$$
\Vert y \Vert_{Y_m^\alpha(u_*)} =\sup_{u\in I(u_*)} \l( \sum_{i=0}^m |\pa u^i y(u)|_{\ubar{ Y}^\alpha}\r) \,.
$$
\end{itemize}
\end{definition}
\noindent The weighted Sobolev spaces in \cref{def:PartfourMainResult} (a), (b) are defined as in \cite{Kopylova}. We are now ready to state our main result.
\begin{theorem}\label{maintheorem}
Let $\xi_s\in \mathbb R $,
$\Xi:=\Xi(\xi_s):= |\xi_s|+3$ and $\alpha\in \mathbb{N}_0$.
Assume that $F\in C^{\infty}((-1,1),H^{0,\alpha}(\mathbb R))$ is analytic and that the conditions
\begin{eqnarray} \label{assumption derivatives F}
F(0)=0,~~\pa \varepsilon F(0)=0\,,
\end{eqnarray}
\begin{eqnarray} \label{assumption bounds derivatives F}
\forall N \ge 2:~~
\l|
\pa \varepsilon^N F(0)
\r|_{H^{0,\alpha}}
\le c^N (N-2)!
\,
\end{eqnarray}
are satisfied. Then there exist $\varepsilon^*>0$, $u_*>0$, $\tilde C>0$ and a map
\begin{eqnarray}\label{main theorem map}
(-\varepsilon^*,\varepsilon^*) \to Y_{0}^\alpha(u_*),~
\varepsilon \mapsto (\hat\t_\infty^\varepsilon,\hat\psi_\infty^\varepsilon,\lambda_\infty^\varepsilon)\label{map in main theorem}
\end{eqnarray}
of class $C^\infty$ such that the following holds.
Let $\varepsilon\in(0,\varepsilon^*)$.
Consider the Cauchy problem
\begin{eqnarray}\label{SGE1}
\partial_t \begin{pmatrix}
\t \\
\psi
\end{pmatrix}
=\l(\begin{matrix}
\psi \\
\pa x^2\t -\sin\t +F(\eps,x)\\
\end{matrix}\r) ,~~~~
\begin{pmatrix}
\t(0,x)\\
\psi(0,x)
\end{pmatrix}=\begin{pmatrix}
\t^\varepsilon_\infty(\xi_s,u_s,x)\\
\psi^\varepsilon_\infty(\xi_s,u_s,x)
\end{pmatrix} ,
\end{eqnarray}
where $(\t_\infty^\varepsilon,\psi_\infty^\varepsilon)=(\t_0+{\hat \t}_\infty^\varepsilon ,\psi_0+{\hat\p}_\infty^\varepsilon)$ with $(\t_0,\psi_0)$ given by \eqref{solitonsolution} such that the
initial velocity satisfies
$|u_s|< u_*$.
Then the Cauchy problem
has a unique solution, which may be written in the form
\begin{eqnarray}\label{form}
\begin{pmatrix}
\t(t,x)\\
\psi(t,x)
\end{pmatrix}=\begin{pmatrix}
\t_\infty^\varepsilon(\bar\xi(t),\baru(t),x)\\
\psi_\infty^\varepsilon(\bar\xi(t),\baru(t),x)
\end{pmatrix},
\end{eqnarray}
where
$\bar\xi,\baru$ solve the
system of equations
\begin{eqnarray}\label{exactODE virtual1}
{}&\d{\bar\xi}( t) = \baru(t) \,,~~
\d{\baru}( t) = \lambda_{\infty}^\varepsilon\l(\bar\xi(t), \baru(t)\r)\,, ~~~~\bar\xi(0)=\xi_s,~\baru(0)=u_s\,,
\end{eqnarray}
and representation \eqref{form} of the solution is valid
as long as $|\bar\xi(t)|\le \Xi, ~|\bar u(t)|<u_*$.
In particular, if $|u_s|\le \tilde C \varepsilon$,
then the Cauchy problem \eqref{SGE1}
has a unique solution on the time interval
\begin{eqnarray}
0\le t \le \frac {1} {\tilde C \varepsilon }
%
\end{eqnarray}
and may be written in the form \eqref{form} with ODEs \eqref{exactODE virtual1}. If additionally the perturbation $F$ satisfies condition \eqref{intro condition on nontrivial dynamic}, then
the
parameters $\bar\xi,\baru$ describe
a fixed nontrivial perturbation of the uniform linear motion
as $\varepsilon \to 0$.
\eth
\noindent
The assumption on the first derivative of $F$ in \eqref{assumption derivatives F} is not crucial; it is made in order to simplify the computations in the proof of the bounds on the derivatives of the iterative solutions in \cref{se: Implicit function theorem} (\cref{derivatives estimate}).
We work in weighted Sobolev spaces in order
to ensure
that
a symplectic decomposition (implemented by the techniques of \cite{Mashkin}) is possible in a neighbourhood of the invariant virtual solitary manifold, as this promises to be useful in future work. The well-definedness of a corresponding
symplectic orthogonality condition, formulated in analogy to \cite[Theorem 2.2 (b)]{Mashkin}, is guaranteed if function \eqref{main theorem map} maps into a weighted space $Y_{0}^\alpha(u_*)$ with $\alpha \ge 1$ (nevertheless, the symplectic decomposition is not needed in the present paper).
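For orientation, we mention a minimal example of an admissible perturbation (given here purely for illustration; such an $F$ is not singled out in this paper): $F(\varepsilon,x):=\varepsilon^{2} f(x)$ with $f\in H^{0,\alpha}(\mathbb R)$. This $F$ is analytic, satisfies \eqref{assumption derivatives F}, and its derivatives at $\varepsilon=0$ are
\begin{eqnarray}
\pa \varepsilon^{2} F(0)=2f\,,~~~~\pa \varepsilon^{N} F(0)=0~~\text{ for } N\ge 3\,,\nonumber
\end{eqnarray}
so condition \eqref{assumption bounds derivatives F} holds with $c:=\max\{1,(2|f|_{H^{0,\alpha}})^{1/2}\}$.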
\section{Construction of the Sequence of Iterative Solutions}\label{se: Implicit function theorem}
In this section we modify
the iteration scheme from \cite{Mashkin}
and
construct a sequence of iterative solutions.
By making stronger assumptions than in \cite{Mashkin} on the function $\tilde F$ (utilized in the scheme below),
we obtain more accurate information on the iterative solutions.
We start with a definition.
\begin{definition}
Let $\alpha,m\in\mathbb{N}_0$ and $u_*>0$.
\begin{itemize}
\item [(a)] $\ubar{ Z}^\alpha$ is the space $H^{1,\alpha}(\mathbb R^2) \oplus H^{0,\alpha}(\mathbb R^2)$
with the finite norm
$$
|z|_{\ubar{ Z}^\alpha} = |\v|_{H^{1,\alpha}(\mathbb R^2)}+ |w|_{H^{0,\alpha}(\mathbb R^2)}\,.
$$
\item [(b)] $
Z_m^\alpha(u_*)$ is the space $\bigg\{ z =(\v,w) \in C^m(I(u_*), \ubar{ Z}^\alpha) : \Vert z \Vert_{Z_m^\alpha(u_*)} <\infty \bigg\}\,$
with the finite norm
$$
\Vert z \Vert_{Z_m^\alpha(u_*)} =\sup_{u\in I(u_*)} \l( \sum_{i=0}^m |\pa u^i z(u)|_{\ubar{ Z}^\alpha}\r) \,.
$$
\item [(c)] Let us denote by
$t_{1}(\xi,u,x):= \begin{pmatrix}
\pa \xi\t_0(\xi,u,x)\\
\pa \xi\psi_0(\xi,u,x)\\
\end{pmatrix} $
and by
$t_{2}(\xi,u,x):=\begin{pmatrix}
\pa u\t_0(\xi,u,x)\\
\pa u\psi_0(\xi,u,x)\\
\end{pmatrix},
$ where $u\in(-1,1),~\xi,x\in\mathbb R $.
\end{itemize}
\end{definition}
\noindent
The application of the implicit function theorem in the iteration scheme is justified by the following proposition, which ensures that the corresponding linearization of
$({\hat \t},{\hat\p},\lambda) \mapsto
\tilde {\cal G}_n^\varepsilon({\hat \t},{\hat\p},\lambda),~n\ge 1,
$
carried out at
$({\hat \t},{\hat\p},\lambda)=(0,0,0)$, $\varepsilon=0$
is invertible.
\begin{proposition} \label{le invertibilityMxiCtwo alpha}
Let $\alpha \in \mathbb{N}_0$.
There exists $\underline{u}^\alpha>0$ such that for any $m\in \mathbb{N}_0$ the operator\\
$
{\frak M}_m^\alpha: Y_m^\alpha(u_*) \to Z_m^\alpha(u_*),~
( \t,
\psi,
\lambda) \mapsto
{\frak M}_m^\alpha
( \t,
\psi,
\lambda ),
$
given by
\begin{eqnarray}
{\frak M}_m^\alpha
( \t,
\psi,
\lambda )(u)
=\begin{pmatrix}
u\pa \xi\t(u) -\psi(u) \\
-\pa x^2\t(u) +\cos(\t_K(\gamma(u)(x-\xi)))\t(u) +u\pa \xi\psi(u) \\
\end{pmatrix}
+ \lambda(u)
t_2(\xi,u,x),
\end{eqnarray}
is invertible if $0< u_*< \underline{u}^\alpha$.
\end{proposition}
\noindent
\begin{proofNEW}
The proof was given in \cite[Proposition 3.2]{Mashkin}.
\epr
\noindent
The modified iteration scheme is formalized in the following theorem.
\begin{theorem}\label{thimplicitfunctionIT1 alpha}
Let $\alpha \in \mathbb{N}_0$ and let $\underline{u}^\alpha$
be from \cref{le invertibilityMxiCtwo alpha}. Let $0< u_*<\underline{u}^\alpha$,
$J=(-1,1)$ and let $\tilde F: J \to H^{0,\alpha}(\mathbb R^2)\,, \varepsilon \mapsto \tilde F(\varepsilon)$ be an analytic
function such that
\begin{eqnarray}\label{assumptions tildeF}
\tilde F(0)=0,~~\pa \varepsilon\tilde F(0)=0,
\end{eqnarray}
and
\begin{eqnarray} \label{assumption depsF}
\forall N \ge 2:~~\Bigg\Vert
\begin{pmatrix}
0\\
\pa \varepsilon^N \tilde F(0)
\end{pmatrix}
\Bigg\Vert_{Z_0^\alpha(u_*)}
\le \bar c^N (N-2)!
\,.
\end{eqnarray}
\noindent
Let $\tilde {\cal G}_1$ be given by
\begin{eqnarray}
{}&\tilde {\cal G}_1: J \times Y_{0}^\alpha (u_*) \to Z_{0}^\alpha(u_*)\,,
(\varepsilon,{\hat \t},{\hat\p},\lambda) \mapsto \tilde{\cal G}_1^{\eps}({\hat \t},{\hat\p},\lambda):={\cal G}_1^{\eps}(\t_0+{\hat \t},\psi_0+{\hat\p},\lambda)\,,
\end{eqnarray}
where ${\cal G}_1$ is defined by \eqref{successive eq G1}.
Then there exists $\varepsilon^*>0$ and
%
a map
\begin{eqnarray}
{}&(-\varepsilon^*,\varepsilon^*) \to Y_{0}^\alpha(u_*),~
\varepsilon \mapsto (\hat\t_1^\varepsilon,\hat\psi_1^\varepsilon,\lambda_{ 1}^\varepsilon)\,,
\end{eqnarray}
of class $C^\infty$ such that
$
\tilde {\cal G}_1^\varepsilon({\hat \t}_1^\varepsilon,{\hat\p}_1^\varepsilon,\lambda_{1}^\varepsilon)=0\,.
$
Let $\tilde {\cal G}_2$ be given by
\begin{eqnarray}
\tilde {\cal G}_2: J \times Y_{0}^\alpha(u_*) \to Z_{0}^\alpha(u_*)\,,
(\varepsilon,{\hat \t},{\hat\p},\lambda) \mapsto \tilde{\cal G}_2^{\eps}({\hat \t},{\hat\p},\lambda):={\cal G}_2^{\eps}(\t_0+{\hat \t},\psi_0+{\hat\p},\lambda)\,,
\end{eqnarray}
where ${\cal G}_2$ is defined by \eqref{successive eq G2} with $(\t_1^\varepsilon ,\psi_1^\varepsilon ,\lambda_{1}^\varepsilon)
=(\t_0+{\hat \t}_1^\varepsilon ,\psi_0+{\hat\p}_1^\varepsilon ,\lambda_{1}^\varepsilon)$. Then there exists
a map
\begin{eqnarray}
(-\varepsilon^*,\varepsilon^*) \to Y_{0}^\alpha(u_*),
\varepsilon \mapsto (\hat\t_2^\varepsilon,\hat\psi_2^\varepsilon,\lambda_{ 2}^\varepsilon)\,,
\end{eqnarray}
of class $C^\infty$ such that
$
\tilde {\cal G}_2^\varepsilon({\hat \t}_2^\varepsilon,{\hat\p}_2^\varepsilon,\lambda_{2}^\varepsilon)=0\,.
$
This process can be continued successively: for any
$n\in \mathbb{N}$
let $\tilde {\cal G}_n$ be given by
\begin{eqnarray}
\tilde {\cal G}_n: J \times Y_{0}^\alpha(u_*) \to Z_{0}^\alpha(u_*)\,,
(\varepsilon,{\hat \t},{\hat\p},\lambda) \mapsto \tilde{\cal G}_n^{\eps}({\hat \t},{\hat\p},\lambda):={\cal G}_n^{\eps}(\t_0+{\hat \t},\psi_0+{\hat\p},\lambda)\,,
\end{eqnarray}
where ${\cal G}_n$ is defined by \eqref{successive eq Gn} with $(\t_{n-1}^\varepsilon ,\psi_{n-1}^\varepsilon ,\lambda_{n-1}^\varepsilon)
=(\t_0+{\hat \t}_{n-1}^\varepsilon ,\psi_0+{\hat\p}_{n-1}^\varepsilon ,\lambda_{n-1}^\varepsilon)$.
There exists
a map
\begin{eqnarray}
(-\varepsilon^*,\varepsilon^*) \to Y_{0}^\alpha(u_*),
\varepsilon \mapsto (\hat\t_n^\varepsilon,\hat\psi_n^\varepsilon,\lambda_{ n}^\varepsilon)\,,
\end{eqnarray}
of class $C^\infty$ such that
$
\tilde {\cal G}_n^\varepsilon({\hat \t}_n^\varepsilon,{\hat\p}_n^\varepsilon,\lambda_{n}^\varepsilon)=0\,.
$
The iterative solutions may be written in the form
\begin{eqnarray}
(\hat\t_n^\varepsilon,\hat\psi_n^\varepsilon,\lambda_n^\varepsilon) = {}&\l(\sum_{i=0}^{\infty} \frac{\pa \varepsilon^i\hat\t_{n}^0}{i!}\varepsilon^i,\sum_{i=0}^{\infty} \frac{\pa \varepsilon^i\hat\psi_{n}^0}{i!}\varepsilon^i,\sum_{i=0}^{\infty} \frac{\pa \varepsilon^i\lambda_{n}^0}{i!}\varepsilon^i\r)\,
\end{eqnarray}
where the series converge in $Y_{0}^\alpha (u_*) $ for $\varepsilon\in(- \varepsilon^*, \varepsilon^* )$. We set $(\t_{n}^\varepsilon ,\psi_{n}^\varepsilon ,\lambda_{n}^\varepsilon)
:=(\t_0+{\hat \t}_{n}^\varepsilon ,\psi_0+{\hat\p}_{n}^\varepsilon ,\lambda_{n}^\varepsilon)$.
\eth
\noindent
In the following we point out the relation among the derivatives of the iterative solutions from \cref{thimplicitfunctionIT1 alpha} at $\varepsilon=0$.
\begin{lemma} \label{thITrelations}
Let the assumptions of Theorem \ref{thimplicitfunctionIT1 alpha} hold and let $n\ge 2$.\\
Then
$(\pa \varepsilon^k\t_{n-1}^{0},\pa \varepsilon^k\psi_{n-1}^{0},\pa \varepsilon^k\lambda_{ {n-1}}^{0})=(\pa \varepsilon^k\t_n^{0},\pa \varepsilon^k\psi_n^{0},\pa \varepsilon^k\lambda_{ n}^{0})$ for $k=0, \ldots, n-1$.
\end{lemma}
\begin{proofNEW}
Analogous to \cite[Theorem 3.4]{Mashkin}.
\epr
\begin{remark}
The derivatives of the iterative solutions coincide at $\varepsilon=0$ in the following way:
$(\pa \varepsilon^k\t_{1}^{0},\pa \varepsilon^k\psi_{1}^{0},\pa \varepsilon^k\lambda_{ {1}}^{0})
= (\pa \varepsilon^k\t_{2}^{0},\pa \varepsilon^k\psi_{2}^{0},\pa \varepsilon^k\lambda_{{2}}^{0})$ for $k=0, 1;$
$(\pa \varepsilon^k\t_{2}^{0},\pa \varepsilon^k\psi_{2}^{0},\pa \varepsilon^k\lambda_{{2}}^{0})
= (\pa \varepsilon^k\t_{3}^{0},\pa \varepsilon^k\psi_{3}^{0},\pa \varepsilon^k\lambda_{{3}}^{0})$ for $k=0, 1,2 $ and so on.
\end{remark}
\noindent
Now we prove some bounds on the derivatives of the iterative solutions.
These bounds will be used in the inductive proof of \cref{thimplicitfunctionIT1 alpha}.
Moreover, the bounds play a key role in the proof of convergence of the sequence of iterative solutions, and they are also needed in
order to show that the corresponding limit defines a function which satisfies
the equation of interest.
\begin{lemma}\label{derivatives estimate}
Let the assumptions
of \cref{thimplicitfunctionIT1 alpha}
be satisfied.
There exists $C>0$ such that the following holds.
Let $n\in\mathbb N$ and
assume that for $1\le j \le n$ the iterative solutions of the equations
$\tilde {\cal G}_j^\varepsilon({\hat \t},{\hat\p},\lambda)=0$
exist, then
the following bounds are satisfied:
\begin{align}
\label{lebound1}
1\le K \le 2:{}&&
\l\Vert\begin{pmatrix}
\pa u^K\t_0\\
\pa u^K\psi_0\\
0
\end{pmatrix}
\r\Vert_{Y_0^\alpha(u_*)}
&\le
C ,
\\
\label{lebound2}
\forall K\ge 3:{}&&
\l\Vert\begin{pmatrix}
\pa u^K\t_0\\
\pa u^K\psi_0\\
0
\end{pmatrix}
\r\Vert_{Y_0^\alpha(u_*)}
&\le
C^{2K-3}(K-3)!,
\\
\label{lebound3}
\forall N\ge 2,~ 0\le K \le 2:{}&&
\l\Vert\begin{pmatrix}
\pa u^K\pa \varepsilon^N\t_n^0\\
\pa u^K\pa \varepsilon^N\psi_n^0\\
\pa u^K\pa \varepsilon^N\lambda_{n}^0
\end{pmatrix}
\r\Vert_{Y_0^\alpha(u_*)}
&\le
C^{2N+ 2K -3}(N-2)!,
\\
\label{lebound4}
\forall N\ge 2, ~K\ge 3:{}&&
\l\Vert\begin{pmatrix}
\pa u^K\pa \varepsilon^N\t_n^0\\
\pa u^K\pa \varepsilon^N\psi_n^0\\
\pa u^K\pa \varepsilon^N\lambda_{n}^0
\end{pmatrix}
\r\Vert_{Y_0^\alpha(u_*)}
&\le
C^{2N + 2K-3}(N-2)!(K-3)!\,.
\end{align}
\end{lemma}
\begin{proofNEW}
An argument for differentiability with respect to $u$ of the iterative solutions will be given in the proof of \cref{thimplicitfunctionIT1 alpha}.
The upper bounds in this proof are given by sums of certain types, and the
key point is that those sums converge.
In the following we take a closer look at one of them, since the other cases can be treated similarly.
It holds for $l\ge 6$ that
\begin{eqnarray}
{}&
\sum_{k=3}^{l-3}\frac{(l-1) (l-2)}{(l-1-k)!k!}
(k-3)! (l-k-3)!
\\
={}&
\sum_{k=3}^{l-3}\frac{(l-1) (l-2)}{(l-1-k)(l-2-k)k(k-1)(k-2)}
\\
={}&
\sum_{3 \le k \le \lfloor (l-1)/2 \rfloor } \frac{(l-1) (l-2)}{(l-1-k)(l-2-k)k(k-1)(k-2)}
\\
{}&
+\sum_{ \lfloor (l-1)/2 \rfloor< k \le l-3}
\frac{(l-1) (l-2)}{(l-1-k)(l-2-k)k(k-1)(k-2)}
\end{eqnarray}
\begin{eqnarray}
\le{}&
\sum_{3 \le k \le \lfloor (l-1)/2 \rfloor }
\frac{ 1}{\frac{(l-1-k)}{(l-1)}\frac{(l-2-k)}{(l-2)}k(k-1)(k-2)}
\\
{}&
+
\sum_{ 2\le j < l-1 -\lfloor (l-1)/2 \rfloor}
\frac{1}{j(j-1)\frac{(l-1-j)}{l-1}\frac{(l-2-j)}{(l-2)}(l-3-j)}
\\
\le{}&
\sum_{3 \le k \le \lfloor (l-1)/2 \rfloor }
\frac{4}{ (k-2)(k-1)k}
+
\sum_{ 2\le j < l-1 -\lfloor (l-1)/2 \rfloor}
\frac{4}{j(j-1) }=:R(l)\,
\end{eqnarray}
and thus $\sup_l R(l)<\infty $.
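As a quick sanity check (illustrative only, not part of the proof), the cancellation of the factorials and the uniform boundedness of these sums can be verified with exact rational arithmetic:

```python
from math import factorial
from fractions import Fraction

# Sum in its original factorial form:
#   sum_{k=3}^{l-3} (l-1)(l-2) (k-3)! (l-k-3)! / ((l-1-k)! k!)
def S_factorial(l):
    return sum(Fraction((l - 1) * (l - 2) * factorial(k - 3) * factorial(l - k - 3),
                        factorial(l - 1 - k) * factorial(k))
               for k in range(3, l - 2))

# The same sum after cancelling the factorials:
#   sum_{k=3}^{l-3} (l-1)(l-2) / ((l-1-k)(l-2-k) k(k-1)(k-2))
def S_simplified(l):
    return sum(Fraction((l - 1) * (l - 2),
                        (l - 1 - k) * (l - 2 - k) * k * (k - 1) * (k - 2))
               for k in range(3, l - 2))

vals = []
for l in range(6, 120):
    assert S_factorial(l) == S_simplified(l)  # the two forms agree exactly
    vals.append(float(S_simplified(l)))
print(max(vals))  # the sums stay uniformly bounded in l
```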
Let us now deduce a recursive relation which will be needed later.
Taking the
$K$-th derivative with respect to $u$ of ${\cal G}_0(\t_0,\psi_0)=0$ yields
\begin{eqnarray}
0{}&=\begin{pmatrix}
u\pa \xi\pa u^K\t_0-\pa u^K\psi_0\\
u\pa \xi\pa u^K\psi_0-\pa x^2\pa u^K\t_0
\end{pmatrix}
+\begin{pmatrix}
0\\
\sum_{m=1}^{K-1}\binom{K-1}{m} \pa u^{m} \cos(\t_0) \pa u^{K-m}\t_0 + \cos(\t_0) \pa u^{K}\t_0
\end{pmatrix}\\
{}&
+K\begin{pmatrix}
\pa \xi\pa u^{K-1}\t_0\\
\pa \xi\pa u^{K-1}\psi_0
\end{pmatrix}
\,.
\end{eqnarray}
Thus
\begin{eqnarray}\label{rekursivrelationKderivative}
\pa u^K\begin{pmatrix}
\t_0\\
\psi_0\\
0
\end{pmatrix}
=
- \l[{\frak M}_{0}^\alpha\r]^{-1} \Bigg[
\begin{pmatrix}
0\\
\sum_{m=1}^{K-1}\binom{K-1}{m} \pa u^{m} \cos(\t_0) \pa u^{K-m}\t_0
\end{pmatrix}
+K\begin{pmatrix}
\pa \xi\pa u^{K-1}\t_0\\
\pa \xi\pa u^{K-1}\psi_0
\end{pmatrix}
%
\Bigg]
\,.
\end{eqnarray}
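The recursion rests on the Leibniz-type expansion $\pa u^{K}\sin\t_0=\sum_{m=0}^{K-1}\binom{K-1}{m}\pa u^{m}\cos(\t_0)\,\pa u^{K-m}\t_0$ used above. As an illustrative sanity check (for a hypothetical profile $\theta(u)=u^3+2u$, which is not related to the equation at hand), this identity can be compared against finite differences:

```python
from math import sin, cos, comb

# Hypothetical profile theta(u) = u^3 + 2u (chosen only for illustration),
# with hand-coded derivatives.
def th(u):  return u**3 + 2*u
def th1(u): return 3*u**2 + 2
def th2(u): return 6*u
def th3(u): return 6.0

def dth(m, u):
    return [th(u), th1(u), th2(u), th3(u)][m]

# d^m/du^m cos(theta(u)) for m = 0, 1, 2 (chain and product rule).
def dcos(m, u):
    if m == 0:
        return cos(th(u))
    if m == 1:
        return -sin(th(u)) * th1(u)
    return -cos(th(u)) * th1(u)**2 - sin(th(u)) * th2(u)

# Right-hand side: sum_{m=0}^{K-1} binom(K-1, m) d^m cos(theta) d^{K-m} theta
def leibniz_rhs(K, u):
    return sum(comb(K - 1, m) * dcos(m, u) * dth(K - m, u) for m in range(K))

# Left-hand side d^K/du^K sin(theta(u)) via central finite differences.
def fd_lhs(K, u, h=1e-3):
    f = lambda v: sin(th(v))
    if K == 2:
        return (f(u + h) - 2*f(u) + f(u - h)) / h**2
    return (f(u + 2*h) - 2*f(u + h) + 2*f(u - h) - f(u - 2*h)) / (2*h**3)

for K in (2, 3):
    for u in (0.1, 0.4, 0.9):
        assert abs(fd_lhs(K, u) - leibniz_rhs(K, u)) < 1e-2
print("Leibniz expansion confirmed numerically")
```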
\noindent
First we show \eqref{lebound1}-\eqref{lebound2}.
We choose $C>1$ such that the claim \eqref{lebound1}-\eqref{lebound2} is true for $0\le K \le 3$ and such that
$
\sup_{u\in I(u_*)} |\pa u^{m}\cos\t_0|_{L^\infty_{\xi,x}(\mathbb R^2)} \le
C \,
$
for $0\le m \le 3$.
In the following we will impose some further assumptions on $C$, tagging each of them with an exclamation mark ``!''.
We assume that the claim \eqref{lebound1}-\eqref{lebound2} holds for all integers up to
$K-1$ and prove the induction step. Let $n\in \mathbb{N}$.
Firstly, we show that for $3\le m \le K$:
\begin{eqnarray}\label{ind cos}
\sup_{u\in I(u_*)}|\pa u^{m}\cos\t_0|_{L^\infty_{\xi,x}(\mathbb R^2)} \le
(m-3)!C^{2m-3+1/3}\,.
\end{eqnarray}
We assume that \eqref{ind cos} holds for all integers $3\le m \le K-1$ and show the
induction step. In the following we use Sobolev embedding theorems.
Notice that
\begin{eqnarray}
{}&
\sup_{u\in I(u_*)} \Bigg|
\sum_{k=0}^{l-1}\binom{l-1}{k} \pa u^k\cos(\t_0) \pa u^{l-k}\t_0
\Bigg|_{L^\infty_{\xi,x}(\mathbb R^2)}
\\
={}&\sup_{u\in I(u_*)} \Bigg| \Bigg(\sum_{k=3}^{l-3}\frac{(l-1)!}{(l-1-k)!k!} \pa u^k\cos(\t_0) \pa u^{l-k}\t_0
+\cos(\t_0)\pa u^{l}\t_0
+(l-1)\pa u\cos(\t_0)\pa u^{l-1}\t_0
\\
{}&
+\frac{(l-1)(l-2)}2\pa u^2\cos(\t_0)\pa u^{l-2}\t_0
+\pa u^{l-1}\cos(\t_0)\pa u\t_0
+(l-1)\pa u^{l-2}\cos(\t_0)\pa u^{2}\t_0
\Bigg)
\Bigg|_{L^\infty_{\xi,x}(\mathbb R^2)}\\
\end{eqnarray}
\begin{eqnarray}
\le{}&
(l-3)!\sum_{k=3}^{l-3}\frac{(l-1)(l-2)}{(l-1-k)!k!} (k-3)!(l-k-3)!
C^{2k-3+1/3} C^{2(l-k)-3}\\
{}&+
(l-3)!C^{2l-3}
+(l-1)C
(l-5)!C^{2(l-1)-3}
+3\frac{(l-1)(l-2)}2 C^{4-3+1/3}
(l-5)!C^{2(l-2)-3}
\\
{}&
+
(l-4)!C^{2(l-3)}C+
(l-1)(l-5)!C^{2(l-2)-3+1/3}C\\
\OT{\le}{ ! }{}&
(l-3)!C^{2l-3+1/3} \,.
\end{eqnarray}
Using this estimate it follows for $3\le m \le K$ that
\begin{eqnarray}
{}&
\sup_{u\in I(u_*)} \Bigg|
\pa u^m(\cos(\t_0))
\Bigg|_{L^\infty_{\xi,x}(\mathbb R^2)}
\\
={}&
\sup_{u\in I(u_*)} \Bigg|
\pa u^{m-1}(\sin(\t_0)\pa u\t_0)
\Bigg|_{L^\infty_{\xi,x}(\mathbb R^2)}
\\
={}&
\sup_{u\in I(u_*)} \Bigg|
\sum_{l=0}^{m-1}\binom{m-1}{l} \pa u^{l} \sin(\t_0) \pa u^{m-l}\t_0
\Bigg|_{L^\infty_{\xi,x}(\mathbb R^2)}
\\
= {}&
\sup_{u\in I(u_*)} \Bigg|
\Bigg(\sum_{l=1}^{m-1}\binom{m-1}{l} \pa u^{l-1} \l(\cos(\t_0) \pa u\t_0\r)\pa u^{m-l}\t_0 + \sin\t_0\pa u^m\t_0\Bigg)
\Bigg|_{L^\infty_{\xi,x}(\mathbb R^2)}
\\
= {}&
\sup_{u\in I(u_*)} \Bigg|
\Bigg(\sum_{l=1}^{m-1}\binom{m-1}{l} \l( \sum_{k=0}^{l-1}\binom{l-1}{k} \pa u^k\cos(\t_0) \pa u^{l-k}\t_0\r)\pa u^{m-l}\t_0 + \sin\t_0\pa u^m\t_0\Bigg)
\Bigg|_{L^\infty_{\xi,x}(\mathbb R^2)}
\\
= {}&
\sup_{u\in I(u_*)} \Bigg|
\Bigg(\sum_{l=3}^{m-1}\binom{m-1}{l} \l( \sum_{k=0}^{l-1}\binom{l-1}{k} \pa u^k\cos(\t_0) \pa u^{l-k}\t_0\r)\pa u^{m-l}\t_0 \\
{}&+(m-1) \cos(\t_0)\pa u\t_0\,\pa u^{m-1}\t_0
+ \sin\t_0\pa u^m\t_0\\
{}&+\frac{(m-1)(m-2)}2 \cos(\t_0)\pa u^2\t_0\,\pa u^{m-2}\t_0+\frac{(m-1)(m-2)}2\pa u\cos(\t_0)\pa u\t_0\,\pa u^{m-2}\t_0\Bigg)
\Bigg|_{L^\infty_{\xi,x}(\mathbb R^2)}
\\
\le {}& (m-3)!\sum_{l=3}^{m-1}\frac{(m-1)(m-2)}{(m-l-1)!l!} (l-3)! (m-l-3)!
C^{2l-3+1/3}C^{2(m-l)-3}\\
{}&+ (m-1)(m-4)!C C^{2(m-1)-3}+
(m-3)!
C^{2m-3}+
\frac{(m-1)(m-2)}2 (m-5)!
C C^{2(m-2)-3}
+\frac{(m-1)(m-2)}2 (m-5)!C^2C^{2(m-2)-3}\\
\OT{\le}{ ! }{}&
(m-3)! C^{2m-3+1/3} \,,
\end{eqnarray}
which completes the induction step for \eqref{ind cos}.
In the following we denote by $\Vert \cdot \Vert$ the operator norm of $\l[{\frak M}_{0}^\alpha\r]^{-1} $.
Now we estimate
$
\pa u^K(
\t_0,
\psi_0,
0)
$
by using the recursive formula \eqref{rekursivrelationKderivative} and the bounds
\eqref{ind cos}:
\begin{eqnarray}
{}&\l\Vert
\l[{\frak M}_{0}^\alpha\r]^{-1}
\l[
\begin{pmatrix}
0\\
\sum_{m=1}^{K-1}\binom{K-1}{m} \pa u^{m} \cos(\t_0) \pa u^{K-m}\t_0
\end{pmatrix}
%
+K\begin{pmatrix}
\pa \xi\pa u^{K-1}\t_0\\
\pa \xi\pa u^{K-1}\psi_0
\end{pmatrix}\r]
\r\Vert_{Y_0^\alpha(u_*)}\\
\le{}&
\l\Vert\l[{\frak M}_{0}^\alpha\r]^{-1} \r\Vert
\Bigg((K-3)!
\sum_{m=3}^{K-1}\frac{(K-1)(K-2)}{(K-m-1)!m!} (m-3)!(K-m-3)!
C^{2m-3+1/3} C^{2(K-m)-3}\\
{}&
+(K-1)(K-4)!
CC^{2(K-1)-3}
+
\frac{(K-1)(K-2)(K-5)!}2 C^{4-3+1/3}C^{2(K-2)-3}\\
{}&
+
(K-4)!CC^{2(K-1)-3+1/3 }
+
(K-1) (K-5)!CC^{2(K-2)-3+1/3 }
+K(K-4)!C^{2(K-1)-3 }
\Bigg)\\
\OT{\le}{!}{}&
(K-3)!
C^{2K-3-1/3}\,.
\end{eqnarray}
Assuming that $C^{2K-3-1/3}\OT{\le}{ ! } C^{2K-3}$, the induction step for \eqref{lebound1}-\eqref{lebound2} is complete.
\noindent
Before proving the remaining claim, we deduce some recursive relations for further computations.
Taking the
$N$-th derivative with respect to $\varepsilon$ of ${\cal G}_k^{\eps}({\t_n^{\varepsilon}},\p_n^{\varepsilon},{\lambda_{n}^{\varepsilon}})=0$
yields
\begin{eqnarray}\label{ITn}
0={}&\pa \varepsilon^N{\cal G}_k^{\eps}({\t_n^{\varepsilon}},\p_n^{\varepsilon},{\lambda_{n}^{\varepsilon}})\\
={}&
\begin{pmatrix}
u\pa \xi\pa \varepsilon^N{\t_n^{\varepsilon}}-\pa \varepsilon^N\p_n^{\varepsilon}\\
u\pa \xi\pa \varepsilon^N\p_n^{\varepsilon}-\pa x^2\pa \varepsilon^N{\t_n^{\varepsilon}}
\end{pmatrix}
+\begin{pmatrix}
0\\
\sum_{m=1}^{N-1}\binom{N-1}{m} \pa \varepsilon^{m} \cos({\t_n^{\varepsilon}}) \pa \varepsilon^{N-m}{\t_n^{\varepsilon}} + \cos({\t_n^{\varepsilon}}) \pa \varepsilon^{N}{\t_n^{\varepsilon}}
\end{pmatrix}\\
{}&
-\begin{pmatrix}
0\\
\pa \varepsilon^N\ti F(\eps)
\end{pmatrix}
+ \begin{pmatrix}
\sum_{i=0}^{n-1} \sum_{l=0}^{N}\binom{N}{l} \pa \varepsilon^{N-l} {\lambda_{n}^{\varepsilon}} \pa \varepsilon^{l} \l[\frac{\pa u\pa \varepsilon^i\t_{n-1}^0}{i!}\varepsilon^i\r]\\
\sum_{i=0}^{n-1} \sum_{l=0}^{N}\binom{N}{l} \pa \varepsilon^{N-l} {\lambda_{n}^{\varepsilon}} \pa \varepsilon^{l} \l[\frac{\pa u\pa \varepsilon^i\psi_{n-1}^0}{i!}\varepsilon^i\r]\\
\end{pmatrix}\,.
\end{eqnarray}
\noindent
Thus we obtain
\begin{eqnarray}\label{recursive relation N}
\begin{pmatrix}
\pa \varepsilon^N\t_n^0\\
\pa \varepsilon^N\psi_n^0\\
\pa \varepsilon^N\lambda_{n}^0
\end{pmatrix}
={}&\l[{\frak M}_{0}^\alpha\r]^{-1} \Bigg[\begin{pmatrix}
0\\
\pa \varepsilon^N \tilde F(0)
\end{pmatrix}
-
\begin{pmatrix}
\sum_{1\le l \le \min\{n-1,N-1\}} \binom{N}{l} \pa \varepsilon^{N-l} \lambda_{n}^0 {\pa u\pa \varepsilon^l\t_{n}^0}\\
\sum_{1\le l \le \min\{n-1,N-1\}} \binom{N}{l} \pa \varepsilon^{N-l} \lambda_{n}^0 {\pa u\pa \varepsilon^l\psi_{n}^0}
\end{pmatrix}
\\
{}&
-\begin{pmatrix}
0\\
\sum_{m=1}^{N-1}\binom{N-1}{m} \pa \varepsilon^{m} \cos({\t_n^{\varepsilon}}) \pa \varepsilon^{N-m}{\t_n^{\varepsilon}}
\end{pmatrix}
\Bigg|_{\varepsilon=0}
\Bigg]
\,.
\end{eqnarray}
Due to assumption \eqref{assumptions tildeF}
it follows from case $N=1$ combined with \cref{le invertibilityMxiCtwo alpha}
that
$
(\pa \varepsilon\t_n^0,
\pa \varepsilon\psi_n^0,
\pa \varepsilon\lambda_{n}^0)=(0,0,0)$.
Taking the $K$-th derivative with respect to $u$
of \eqref{ITn} yields
\begin{eqnarray}
0={}&
\begin{pmatrix}
u\pa \xi\pa u^K\pa \varepsilon^N\t_n^0-\pa u^K\pa \varepsilon^N\psi_n^0\\
u\pa \xi\pa u^K\pa \varepsilon^N\psi_n^0-\pa x^2\pa u^K\pa \varepsilon^N\t_n^0
\end{pmatrix}
+
\begin{pmatrix}
0\\
\cos(\t_n^0)\pa u^{K}\pa \varepsilon^{N}\t_n^0
\end{pmatrix}
+\pa u^{K} \pa \varepsilon^{N} \lambda_{n}^0
\begin{pmatrix}
\pa u \t_0\\
\pa u\psi_0
\end{pmatrix}
\\
{}&
+
\sum_{\substack{0\le m \le N-1,\\ 0 \le k \le K, ~(m,k)\not=(0,0) }}\binom{N-1}{m} \binom{K}{k}
\begin{pmatrix}
0\\
\pa u^{k} \pa \varepsilon^{m} \cos({\t_n^{\varepsilon}}) \pa u^{K-k}\pa \varepsilon^{N-m}{\t_n^{\varepsilon}}
\end{pmatrix}
\Bigg|_{\varepsilon=0}
\\
{}&
+ \sum_{\substack{0\le l \le \min\{n-1,N\}\\
0\le k \le K, ~(l,k)\not=(0,0) }} \binom{N}{l}\binom{K}{k} \begin{pmatrix}
\pa u^{K-k} \pa \varepsilon^{N-l} \lambda_{n}^0 {\pa u^{k+1} \pa \varepsilon^l\t_{n}^0}\\
\pa u^{K-k} \pa \varepsilon^{N-l} \lambda_{n}^0 {\pa u^{k+1} \pa \varepsilon^l\psi_{n}^0}
\end{pmatrix}
+K\begin{pmatrix}
\pa \xi\pa u^{K-1}\pa \varepsilon^N\t_n^0\\
\pa \xi\pa u^{K-1}\pa \varepsilon^N\psi_n^0
\end{pmatrix}
\, .
\end{eqnarray}
Thus
we obtain
\begin{eqnarray}\label{recursive relation KN}
\begin{split}
{}&\begin{pmatrix}
\pa u^K\pa \varepsilon^N\t_n^0\\
\pa u^K\pa \varepsilon^N\psi_n^0\\
\pa u^K\pa \varepsilon^N\lambda_{n}^0
\end{pmatrix}
\\
={}&-\l[{\frak M}_{0}^\alpha\r]^{-1} \Bigg[
\sum_{\substack{0\le m \le N-1,\\ 0 \le k \le K, ~(m,k)\not=(0,0) }}\binom{N-1}{m} \binom{K}{k}
\begin{pmatrix}
0\\
\pa u^{k} \pa \varepsilon^{m} \cos({\t_n^{\varepsilon}}) \pa u^{K-k}\pa \varepsilon^{N-m}{\t_n^{\varepsilon}}
\end{pmatrix}
\Bigg|_{\varepsilon=0}
\\
{}&
+\sum_{\substack{0\le l \le \min\{n-1,{ N-1}\}\\
0\le k \le K, ~(l,k)\not=(0,0) }}
\binom{N}{l}\binom{K}{k} \begin{pmatrix}
\pa u^{K-k} \pa \varepsilon^{N-l} \lambda_{n}^0 {\pa u^{k+1} \pa \varepsilon^l\t_{n}^0}\\
\pa u^{K-k} \pa \varepsilon^{N-l} \lambda_{n}^0 {\pa u^{k+1} \pa \varepsilon^l\psi_{n}^0}
\end{pmatrix}
+K\begin{pmatrix}
\pa \xi\pa u^{K-1}\pa \varepsilon^N\t_n^0\\
\pa \xi\pa u^{K-1}\pa \varepsilon^N\psi_n^0
\end{pmatrix}
\Bigg]
.
\end{split}
\end{eqnarray}
Now we show \eqref{lebound3}-\eqref{lebound4}.
We prove the claim by induction on $N$, where for each $N$ we carry out an inner induction on $K$.
In some further estimates we will use the fact that there exists $c>0$ such that
$$
|\lambda\t|_{H^{1,\alpha}(\mathbb R^2)}
\le
c|\lambda|_{H^{2,\alpha}(\mathbb R)}| \t|_{H^{1,\alpha}(\mathbb R^2)}\,
$$
for $\lambda\in H^{2,\alpha}(\mathbb R)$ and $\t \in H^{1,\alpha}(\mathbb R^2) $. This follows from Morrey's inequality. Let us start the induction.
\noindent\underline{$N=1$:} The terms
$(\pa u^K\pa \varepsilon\t_n^0,
\pa u^K\pa \varepsilon\psi_n^0,
\pa u^K\pa \varepsilon\lambda_{n}^0)$ vanish for any $K$ due to assumption \eqref{assumptions tildeF}.
\\
\noindent\underline{$N=2$:} This case can be treated similarly to the following proof of the induction step.
\noindent
\underline{$2,\ldots,N-1\rightarrow N$:}
We assume that
bound \eqref{lebound3} holds for derivatives with respect to $\varepsilon$ of order $2$ up to order $N-1$ and for derivatives with respect to $u$ of order $0$ up to order $2$.
Moreover, we assume that
bound \eqref{lebound4} holds for derivatives with respect to $\varepsilon$ of order $2$ up to order $N-1$ and for all derivatives with respect to $u$ from order $3$.
Now we show the
induction step $2,\ldots,N-1\rightarrow N$. This will be done by induction on $K$, where we use \eqref{recursive relation N} and \eqref{recursive relation KN}.\\
\underline{$K=0$}:
We consider separately the terms of
the recursive formula \eqref{recursive relation N}.
Due to \eqref{assumption depsF} we are able to estimate\\
\begin{eqnarray}
{}&\l\Vert
\l[{\frak M}_{0}^\alpha\r]^{-1}
\l[
\begin{pmatrix}
0\\
\pa \varepsilon^N \tilde F(0)
\end{pmatrix}
\r]
\r\Vert_{Y_0^\alpha(u_*)}\\
\OT{\le}{ ! }{}&
(N-2)!
C^{2N-3-1/3}\,,
\\
{}&\l\Vert
\l[{\frak M}_{0}^\alpha\r]^{-1}
\l[
\begin{pmatrix}
0\\
\sum_{m=1}^{N-1}\binom{N-1}{m} \pa \varepsilon^{m} \cos({\t_n^{\varepsilon}}) \pa \varepsilon^{N-m}{\t_n^{\varepsilon}}
\end{pmatrix}
\r]
\Bigg|_{\varepsilon=0}\r\Vert_{Y_0^\alpha(u_*)}\\
\le{}&
\l\Vert\l[{\frak M}_{0}^\alpha\r]^{-1} \r\Vert
\Bigg((N-2)!
\sum_{m=3}^{N-2}\frac{(N-1)}{(N-m-1)!m!} (m-2)!(N-m-2)!C^{2m-3} C^{2(N-m)-3}\\
{}&
+\frac{(N-1)(N-2)(N-4)!}2 C^{2\cdot 2-3+1/3}C^{2(N-2)-3}
\Bigg)
\\
\OT{\le}{ ! }{}&
(N-2)!
C^{2N-3-1/3}\,,
\\
{}&
\l\Vert
\l[{\frak M}_{0}^\alpha\r]^{-1}
\begin{pmatrix}
\sum_{l=1}^{N-1}\binom{N}{l} \pa \varepsilon^{N-l} \lambda_{n-1}^0 {\pa u\pa \varepsilon^l\t_{n-1}^0}\\
\sum_{l=1}^{N-1}\binom{N}{l} \pa \varepsilon^{N-l} \lambda_{n-1}^0 {\pa u\pa \varepsilon^l\psi_{n-1}^0}
\end{pmatrix}
\r\Vert_{Y_0^\alpha(u_*)}\\
\le{}&
\l\Vert
\l[{\frak M}_{0}^\alpha\r]^{-1}
\r\Vert
(N-2)!
\sum_{l=2}^{N-2}\frac{N(N-1)}{(N-l)!l!}
(N-l-2)! (l-2)!C^{2(N-l)-3}C^{2l-3}\\
\OT{\le}{ ! }{}&
(N-2)!
C^{2N-3-1/3}\,.
\end{eqnarray}
Further we assume that $3C^{2N-3-1/3}\OT{\le}{ ! } C^{2N-3}$.
\\
\underline{$K=1$}:
We consider separately the terms of
the recursive formula \eqref{recursive relation KN} and obtain
\begin{eqnarray}
{}&\Bigg\Vert
\l[{\frak M}_{0}^\alpha\r]^{-1}
\Bigg[
\sum_{\substack{0\le m \le N-1,\\ 0 \le k \le 1, ~(m,k)\not=(0,0) }}\binom{N-1}{m} \binom{1}{k}
\begin{pmatrix}
0\\
\pa u^{k} \pa \varepsilon^{m} \cos({\t_n^{\varepsilon}}) \pa u^{1-k}\pa \varepsilon^{N-m}{\t_n^{\varepsilon}}
\end{pmatrix}
\Bigg]
\Bigg|_{\varepsilon=0}\Bigg\Vert_{Y_0^\alpha(u_*)}\\
\le{}&
\l\Vert\l[{\frak M}_{0}^\alpha\r]^{-1} \r\Vert
\Bigg((N-2)!
\sum_{\substack{2\le m \le N-2,\\ 0 \le k \le 1, ~(m,k)\not=(0,0) }} \binom{1}{k} \frac{(N-1)}{(N-m-1)!m!} (m-2)!(N-m-2)!C^{2N-6} \\
{}&
+ (N-2)! C C^{2N-3}
\Bigg)\\
\OT{\le}{ ! }{}&
(N-2)! C^{2N+2\cdot 1-3-1/3} \,,
\end{eqnarray}
\begin{eqnarray}
{}&\Bigg\Vert
\l[{\frak M}_{0}^\alpha\r]^{-1}
\Bigg[
\sum_{\substack{0\le l \le \min\{n-1,N-1\}\\
0\le k \le 1, ~(l,k)\not=(0,0) }}
\binom{N}{l}\binom{1}{k} \begin{pmatrix}
\pa u^{1-k} \pa \varepsilon^{N-l} \lambda_{n}^0 {\pa u^{k+1} \pa \varepsilon^l\t_{n}^0}\\
\pa u^{1-k} \pa \varepsilon^{N-l} \lambda_{n}^0 {\pa u^{k+1} \pa \varepsilon^l\psi_{n}^0}
\end{pmatrix}
+\begin{pmatrix}
\pa \xi\pa \varepsilon^N\t_n^0\\
\pa \xi\pa \varepsilon^N\psi_n^0
\end{pmatrix}
\Bigg]
\Bigg\Vert_{Y_0^\alpha(u_*)}\\
\le{}&
\l\Vert\l[{\frak M}_{0}^\alpha\r]^{-1} \r\Vert
\Bigg((N-2)!
\sum_{\substack{2\le m \le N-2,\\ 0 \le k \le 1, ~(m,k)\not=(0,0) }} \binom{1}{k} \frac{N(N-1)}{(N-m)!m!} (m-2)!(N-m-2)!C^{2N-6} \\
{}&
+ (N-2)! C C^{2N-3}
+(N-2)!C^{2N-3}
\Bigg)\\
\OT{\le}{ ! }{}&
(N-2)!C^{2N+2\cdot 1-3-1/3}\,.
\end{eqnarray}
Further we assume that $2C^{2N+2-3-1/3}\OT{\le}{ ! } C^{2N+2-3}$.
\\
\underline{$K=2$}:
We consider separately the terms of
the recursive formula \eqref{recursive relation KN} and obtain
\begin{eqnarray}
{}&\Bigg\Vert
\l[{\frak M}_{0}^\alpha\r]^{-1}
\Bigg[
\sum_{\substack{0\le m \le N-1,\\ 0 \le k \le 2, ~(m,k)\not=(0,0) }}\binom{N-1}{m} \binom{2}{k}
\begin{pmatrix}
0\\
\pa u^{k} \pa \varepsilon^{m} \cos({\t_n^{\varepsilon}}) \pa u^{2-k}\pa \varepsilon^{N-m}{\t_n^{\varepsilon}}
\end{pmatrix}
\Bigg]
\Bigg|_{\varepsilon=0}\Bigg\Vert_{Y_0^\alpha(u_*)}\\
\le{}&
\l\Vert\l[{\frak M}_{0}^\alpha\r]^{-1} \r\Vert
\Bigg((N-2)!
\sum_{\substack{2\le m \le N-2,\\ 0 \le k \le 2, ~(m,k)\not=(0,0) }} \binom{2}{k} \frac{(N-1)}{(N-m-1)!m!} (m-2)!(N-m-2)!C^{2N-6} \\
{}&
+ (N-2)! 2 C C^{2N+2\cdot 1-3} + (N-2)! C C^{2N-3}
\Bigg)\\
\OT{\le}{ ! }{}& (N-2)! C^{2N+2\cdot 2-3-1/3} \,,
\end{eqnarray}
\begin{eqnarray}
{}&\Bigg\Vert
\l[{\frak M}_{0}^\alpha\r]^{-1}
\Bigg[
\sum_{\substack{0\le l \le \min\{n-1,N-1\}\\
0\le k \le 2, ~(l,k)\not=(0,0) }}
\binom{N}{l}\binom{2}{k} \begin{pmatrix}
\pa u^{2-k} \pa \varepsilon^{N-l} \lambda_{n}^0 {\pa u^{k+1} \pa \varepsilon^l\t_{n}^0}\\
\pa u^{2-k} \pa \varepsilon^{N-l} \lambda_{n}^0 {\pa u^{k+1} \pa \varepsilon^l\psi_{n}^0}
\end{pmatrix}
+2\begin{pmatrix}
\pa \xi\pa u\pa \varepsilon^N\t_n^0\\
\pa \xi\pa u\pa \varepsilon^N\psi_n^0
\end{pmatrix}
\Bigg]
\Bigg\Vert_{Y_0^\alpha(u_*)}\\
\le{}&
\l\Vert\l[{\frak M}_{0}^\alpha\r]^{-1} \r\Vert
\Bigg((N-2)!
\sum_{\substack{2\le m \le N-2,\\ 0 \le k \le 2, ~(m,k)\not=(0,0) }} \binom{2}{k} \frac{N(N-1)}{(N-m)!m!} (m-2)!(N-m-2)!C^{2N-6} \\
{}&
+ 2 (N-2)! C C^{2N+2\cdot 1-3} + (N-2)! C C^{2N-3}+ 2 (N-2)!C^{2N+2\cdot 1-3}
\Bigg)\\
\OT{\le}{ ! }{}& (N-2)!C^{2N+2\cdot 2-3-1/3}.
\end{eqnarray}
Further we assume that $2C^{2N+4-3-1/3}\OT{\le}{ ! } C^{2N+4-3}$.
\\
\underline{$K=3$}:
This case can be proven analogously to the case $K=2$.
\noindent
\underline{$0,\ldots,K-1\rightarrow K$}:
We assume that the claim holds for all integers up to $K-1$ and show the
induction step.
Recall that in the case $N=0$ we have proven:
$$
0\le k\le 2: ~\sup_{u\in I(u_*)}| \pa u^{k}\cos\t_0
|_{L^\infty_{\xi,x}(\mathbb R^2)}
\le
C\,,~~~~
\forall k\ge 3: ~
\sup_{u\in I(u_*)}|
\pa u^{k}\cos\t_0
|_{L^\infty_{\xi,x}(\mathbb R^2)}
\le
(k-3)!C^{2k-3+1/3}\,.
$$
To begin with, we show that for $2\le m \le N-1$:
\begin{eqnarray} \label{ind km cos one}
0\le k\le 2:~~ \sup_{u\in I(u_*)}| \pa u^{k}\pa \varepsilon^{m}\cos{\t_n^{\varepsilon}} |_{\varepsilon=0}
|_{L^\infty_{\xi,x}(\mathbb R^2)}
\le
(m-2)!C^{2m+ 2k-3+1/3}\,,
\end{eqnarray}
\begin{eqnarray}\label{ind km cos two}
\forall k\ge 3:~~
\sup_{u\in I(u_*)}|
\pa u^{k}\pa \varepsilon^{m}\cos{\t_n^{\varepsilon}} |_{\varepsilon=0}
|_{L^\infty_{\xi,x}(\mathbb R^2)}
\le
(k-3)!(m-2)!C^{2k+2m-3+1/3}\,.
\end{eqnarray}
The induction basis for $m=2$ can be shown similarly to the case $N=0$.
We assume that \eqref{ind km cos one}-\eqref{ind km cos two} holds for all integers $2\le m \le N-2$ and show the
induction step.
We start with a preliminary estimate for $l\ge 4,~i\ge 3$:
\begin{eqnarray}
{}&
\sup_{u\in I(u_*)}\Bigg|
\sum_{k=0}^{l-1}\binom{l-1}{k} \sum_{j=0}^{i}\binom{i}{j}\pa u^j\pa \varepsilon^k\cos({\t_n^{\varepsilon}}) \pa u^{i-j}\pa \varepsilon^{l-k}{\t_n^{\varepsilon}}
\Bigg|_{\varepsilon=0}
\Bigg|_{L^\infty_{\xi,x}(\mathbb R^2)}
\\
=
{}&
\sup_{u\in I(u_*)}\Bigg|
\Bigg(\sum_{k=2}^{l-2}\binom{l-1}{k} \sum_{j=3}^{i-3}\binom{i}{j}\pa u^j\pa \varepsilon^k\cos({\t_n^{\varepsilon}}) \pa u^{i-j}\pa \varepsilon^{l-k}{\t_n^{\varepsilon}}\\
{}&+\cos({\t_n^{\varepsilon}}) \pa u^{i}\pa \varepsilon^{l}{\t_n^{\varepsilon}}
+i\pa u\cos({\t_n^{\varepsilon}}) \pa u^{i-1}\pa \varepsilon^{l}{\t_n^{\varepsilon}} \\
{}&+\frac{i(i-1)}{2}\pa u^2\cos({\t_n^{\varepsilon}}) \pa u^{i-2}\pa \varepsilon^{l}{\t_n^{\varepsilon}}
+\sum_{j=3}^{i}\binom{i}{j}\pa u^j\cos({\t_n^{\varepsilon}}) \pa u^{i-j}\pa \varepsilon^{l}{\t_n^{\varepsilon}}\\
{}&
+\sum_{k=2}^{l-2}\binom{l-1}{k} \pa \varepsilon^k\cos({\t_n^{\varepsilon}}) \pa u^{i}\pa \varepsilon^{l-k}{\t_n^{\varepsilon}}
\\
{}&
+\sum_{k=2}^{l-2}\binom{l-1}{k} i\pa u\pa \varepsilon^k\cos({\t_n^{\varepsilon}}) \pa u^{i-1}\pa \varepsilon^{l-k}{\t_n^{\varepsilon}}
+\sum_{k=2}^{l-2}\binom{l-1}{k} \frac{i(i-1)}{2}\pa u^2\pa \varepsilon^k\cos({\t_n^{\varepsilon}}) \pa u^{i-2}\pa \varepsilon^{l-k}{\t_n^{\varepsilon}}\\
{}&+\sum_{k=2}^{l-2}\binom{l-1}{k} \frac{i(i-1)}{2}\pa u^{i-2}\pa \varepsilon^k\cos({\t_n^{\varepsilon}}) \pa u^{2}\pa \varepsilon^{l-k}{\t_n^{\varepsilon}}
+\sum_{k=2}^{l-2}\binom{l-1}{k}i\pa u^{i-1}\pa \varepsilon^k\cos({\t_n^{\varepsilon}}) \pa u\pa \varepsilon^{l-k}{\t_n^{\varepsilon}}\\
{}&+\sum_{k=2}^{l-2}\binom{l-1}{k} \pa u^i\pa \varepsilon^k\cos({\t_n^{\varepsilon}}) \pa \varepsilon^{l-k}{\t_n^{\varepsilon}}
\Bigg)\Bigg|_{\varepsilon=0}\Bigg|_{L^\infty_{\xi,x}(\mathbb R^2)}
\end{eqnarray}
\begin{eqnarray}
\le{}&
(l-2)!(i-3)!C^{2i+2l-6+2/3}\cdot\\
{}&\sum_{k=3}^{l-2}
\sum_{j=3}^{i-3}
\frac{(l-1)i(i-1)(i-2)}{(l-1-k)k(k-1)(i-j)(i-j-1)(i-j-2)j(j-1)(j-2)} \\
{}&+(i-3)!(l-2)! C^{2i+2l-3}
+i (i-4)!(l-2)! CC^{2(i-1)+2l-3} \\
{}&+\frac{i(i-1)(i-5)!(l-2)!}{2}C C^{2(i-2)+2l-3}\\
{}&
+(l-2)!(i-3)!\sum_{j=3}^{i}\frac{i(i-1)(i-2)}{(i-j)(i-j-1)(i-j-2)j(j-1)(j-2)} C^{2i+2l-6}\\
{}&
+(l-2)!(i-3)!\sum_{k=2}^{l-2} \frac{l-1}{k(k-1)(l-k-1)}C^{2i+2l-6}
\\
{}&
+(l-2)!i(i-4)!\sum_{k=2}^{l-2} \frac{(l-1)}{k(k-1) (l-k-1)} C^{2i+2l-6}
\\
{}&
+(l-2)!\frac{i(i-1)}{2} (i-5)!\sum_{k=2}^{l-2} \frac{(l-1)}{k(k-1) (l-k-1)} C^{2i+2l-6}
\\
{}&
+(l-2)!\frac{i(i-1)}{2} (i-5)!\sum_{k=2}^{l-2} \frac{(l-1)}{k(k-1) (l-k-1)} C^{2i+2l-6}
\\
{}&
+(l-2)!i(i-4)!\sum_{k=2}^{l-2} \frac{(l-1)}{k(k-1) (l-k-1)} C^{2i+2l-6}
\\
{}&
+(l-2)!(i-3)!\sum_{k=2}^{l-2} \frac{l-1}{k(k-1)(l-k-1)}C^{2i+2l-6}\\
\OT{\le}{ ! } {}&(l-2)!(i-3)!C^{2i+2l-3} \,.\label{preliminaryboundcos}
\end{eqnarray}
\noindent
Applying Leibniz's formula we obtain for $2\le m \le N-1$, $ r\ge 3$:
\begin{eqnarray}
{}&
\sup_{u\in I(u_*)}\Bigg|
\pa u^r\pa \varepsilon^m(\cos({\t_n^{\varepsilon}}))\Big|_{\varepsilon=0}
\Bigg|_{L^\infty_{\xi,x}(\mathbb R^2)}
\\
={}&
\sup_{u\in I(u_*)}\Bigg|
\pa u^r\pa \varepsilon^{m-1}(\sin({\t_n^{\varepsilon}})\pa \varepsilon{\t_n^{\varepsilon}})\Big|_{\varepsilon=0}
\Bigg|_{L^\infty_{\xi,x}(\mathbb R^2)}
\end{eqnarray}
\begin{eqnarray}
={}&
\sup_{u\in I(u_*)}\Bigg|
\sum_{l=0}^{m-1}\binom{m-1}{l} \sum_{i=0}^{r}\binom{r}{i}\pa u^i\pa \varepsilon^{l} \sin({\t_n^{\varepsilon}}) \pa u^{r-i}\pa \varepsilon^{m-l}{\t_n^{\varepsilon}} \Bigg|_{\varepsilon=0}
\Bigg|_{L^\infty_{\xi,x}(\mathbb R^2)}
\\
= {}&
\sup_{u\in I(u_*)}\Bigg|
\Bigg(\sum_{l=1}^{m-1}\binom{m-1}{l}
\sum_{i=0}^{r}\binom{r}{i}
\pa u^i\pa \varepsilon^{l-1} \l(\cos({\t_n^{\varepsilon}}) \pa \varepsilon{\t_n^{\varepsilon}}\r)\pa u^{r-i}\pa \varepsilon^{m-l}{\t_n^{\varepsilon}}\\
{}&
+ \sum_{i=0}^{r}\binom{r}{i}\pa u^i\sin{\t_n^{\varepsilon}}\pa u^{r-i}\pa \varepsilon^m{\t_n^{\varepsilon}}\Bigg)\Bigg|_{\varepsilon=0}
\Bigg|_{L^\infty_{\xi,x}(\mathbb R^2)}
\\
= {}&
\sup_{u\in I(u_*)}\Bigg|
\Bigg(\sum_{l=1}^{m-1}\binom{m-1}{l}\sum_{i=0}^{r}\binom{r}{i} \l[ \sum_{k=0}^{l-1}\binom{l-1}{k} \sum_{j=0}^{i}\binom{i}{j} \pa u^j\pa \varepsilon^k\cos({\t_n^{\varepsilon}}) \pa u^{i-j}\pa \varepsilon^{l-k}{\t_n^{\varepsilon}}\r]\pa u^{r-i}\pa \varepsilon^{m-l}{\t_n^{\varepsilon}}\\
{}&
+ \sum_{i=0}^{r}\binom{r}{i}\pa u^i\sin{\t_n^{\varepsilon}}\pa u^{r-i}\pa \varepsilon^m{\t_n^{\varepsilon}}\Bigg)\Bigg|_{\varepsilon=0}
\Bigg|_{L^\infty_{\xi,x}(\mathbb R^2)} \,.
\end{eqnarray}
In order to control the expression
\begin{eqnarray}\label{costerm1}
\sum_{l=1}^{m-1}\sum_{i=0}^{r}\binom{m-1}{l}\binom{r}{i} \l[ \sum_{k=0}^{l-1}\binom{l-1}{k} \sum_{j=0}^{i}\binom{i}{j} \pa u^j\pa \varepsilon^k\cos({\t_n^{\varepsilon}}) \pa u^{i-j}\pa \varepsilon^{l-k}{\t_n^{\varepsilon}}\r]\pa u^{r-i}\pa \varepsilon^{m-l}{\t_n^{\varepsilon}}
\Bigg|_{\varepsilon=0}
\end{eqnarray}
we split the sum \eqref{costerm1} over indices $l,i$ into two sums, one over indices $I_{m,r}:=\{(l,i) ~:~ 3\le l \le m-2 ~\text{and}~ 3\le i \le r-3\}$ and the other over indices $ \{(l,i) ~:~ 1\le l \le m-1,~ 0\le i \le r\} \setminus I_{m,r}$. Using bound \eqref{preliminaryboundcos} for the square brackets term we estimate the sum over indices $I_{m,r}$
by
\begin{eqnarray}
{}&
\sup_{u\in I(u_*)}\Bigg|
\sum_{l=3}^{m-2}\binom{m-1}{l}\sum_{i=3}^{r-3}\binom{r}{i} \l( \sum_{k=0}^{l-1}\binom{l-1}{k} \sum_{j=0}^{i}\binom{i}{j} \pa u^j \pa \varepsilon^k\cos({\t_n^{\varepsilon}}) \pa u^{i-j}\pa \varepsilon^{l-k}{\t_n^{\varepsilon}}\r)
\cdot\\
{}&
\pa u^{r-i}\pa \varepsilon^{m-l}{\t_n^{\varepsilon}}
\Bigg|_{\varepsilon=0}
\Bigg|_{L^\infty_{\xi,x}(\mathbb R^2)} \\
\le
{}&
(m-2)!(r-3)!C^{2r+2m-9}\cdot\\
{}&
\sum_{l=3}^{m-2}\sum_{i=3}^{r-3}\frac{(m-1) r(r-1)(r-2)}{(m-l-1)!l! (r-i)!i!} (l-2)! (i-3)! (r-i-3)!(m-l-2)!
\\
\le
{}&
(m-2)!(r-3)! C^{2r+2m-9}
\cdot
\\
{}&
\sum_{l=3}^{m-2}\sum_{i=3}^{r-3}
\frac{(m-1) r(r-1)(r-2)}{(m-l-1) l(l-1) (r-i)(r-i-1) (r-i-2)i(i-1)(i-2)},
\end{eqnarray}
where the supremum over $(r,m)$ of the double sum is finite.
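Since the summand factorises into an $(m,l)$-part and an $(r,i)$-part, this finiteness claim can also be checked numerically (illustrative sketch only; the inner index is taken up to $r-3$, the last index for which all denominator factors are nonzero):

```python
# l-part: sum_{l=3}^{m-2} (m-1) / ((m-l-1) l (l-1))
def lsum(m):
    return sum((m - 1) / ((m - l - 1) * l * (l - 1)) for l in range(3, m - 1))

# i-part: sum_{i=3}^{r-3} r(r-1)(r-2) / ((r-i)(r-i-1)(r-i-2) i(i-1)(i-2))
def isum(r):
    return sum(r * (r - 1) * (r - 2) /
               ((r - i) * (r - i - 1) * (r - i - 2) * i * (i - 1) * (i - 2))
               for i in range(3, r - 2))

# The double sum equals lsum(m) * isum(r); both factors stay bounded,
# hence so does their product, uniformly in (m, r).
sup = max(lsum(m) * isum(r) for m in range(5, 150) for r in range(6, 150))
print(sup)
```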
All terms of the sum over indices
\begin{eqnarray}
{}&
\{(l,i) ~:~ 1\le l \le m-1,~ 0\le i \le r\} \setminus I_{m,r}\label{setofindices}\\
={}&
\{(l,0),~l= 1,\ldots,m-1\} \cup
\{(l,1),~l= 1,\ldots,m-1 \} \cup
\{(l,2),~l= 1,\ldots,m-1 \} \\
{}&
\cup
\{(l,r-2),~l= 1,\ldots,m-1 \} \cup
\{(l,r-1),~l= 1,\ldots,m-1 \} \cup
\{(l,r),~l= 1,\ldots,m-1\} \\
{}&
\cup
\{(1,i),~i= 0,\ldots,r\} \cup
\{(2,i),~i= 3,\ldots,r\} \cup
\{(m-1,i),~i= 3,\ldots,r\}
\end{eqnarray}
can be treated in a similar way, whereby one considers separately the sums over the subsets above. For instance, for indices $\{(l,0),~l= 1,\ldots,m-1\} $, we obtain due to \eqref{assumption derivatives F}
\begin{eqnarray}
{}&
\sup_{u\in I(u_*)}\Bigg|
\sum_{l=1}^{m-1}\binom{m-1}{l}
\sum_{k=0}^{l-1}\binom{l-1}{k} \pa \varepsilon^k\cos({\t_n^{\varepsilon}}) \pa \varepsilon^{l-k}{\t_n^{\varepsilon}}
%
\pa u^{r }\pa \varepsilon^{m-l}{\t_n^{\varepsilon}}
\Bigg|_{\varepsilon=0}
\Bigg|_{L^\infty_{\xi,x}(\mathbb R^2)}
\\
\le
{}&
\sup_{u\in I(u_*)}\Bigg|
\binom{m-1}{2} \sum_{k=0}^{1}\binom{1}{k} \pa \varepsilon^k\cos({\t_n^{\varepsilon}}) \pa \varepsilon^{2-k}{\t_n^{\varepsilon}}
%
\pa u^{r }\pa \varepsilon^{m-2}{\t_n^{\varepsilon}}
\Bigg|_{\varepsilon=0}
\Bigg|_{L^\infty_{\xi,x}(\mathbb R^2)}\\
{}&+
\sup_{u\in I(u_*)}\Bigg|
\binom{m-1}{3} \sum_{k=0}^{2}\binom{2}{k} \pa \varepsilon^k\cos({\t_n^{\varepsilon}}) \pa \varepsilon^{3-k}{\t_n^{\varepsilon}}
%
\pa u^{r }\pa \varepsilon^{m-3}{\t_n^{\varepsilon}}
\Bigg|_{\varepsilon=0}
\Bigg|_{L^\infty_{\xi,x}(\mathbb R^2)}\\
{}&
+\sup_{u\in I(u_*)}\Bigg|
\sum_{l=4}^{m-1}\binom{m-1}{l}
\sum_{k=0}^{l-1}\binom{l-1}{k} \pa \varepsilon^k\cos({\t_n^{\varepsilon}}) \pa \varepsilon^{l-k}{\t_n^{\varepsilon}}
%
\pa u^{r }\pa \varepsilon^{m-l}{\t_n^{\varepsilon}}
\Bigg|_{\varepsilon=0}
\Bigg|_{L^\infty_{\xi,x}(\mathbb R^2)}\\
\le
{}&
\sup_{u\in I(u_*)}\Bigg|
\frac{(m-1)(m-2)}{2} \cos({\t_n^{\varepsilon}}) \pa \varepsilon^{2 }{\t_n^{\varepsilon}}
%
\pa u^{r }\pa \varepsilon^{m-2}{\t_n^{\varepsilon}}
\Bigg|_{\varepsilon=0}
\Bigg|_{L^\infty_{\xi,x}(\mathbb R^2)}\\
{}&+
\sup_{u\in I(u_*)}\Bigg|
\frac{(m-1)(m-2)(m-3)}{6}
\cos({\t_n^{\varepsilon}}) \pa \varepsilon^{3}{\t_n^{\varepsilon}}
%
\pa u^{r }\pa \varepsilon^{m-3}{\t_n^{\varepsilon}}
\Bigg|_{\varepsilon=0}
\Bigg|_{L^\infty_{\xi,x}(\mathbb R^2)}\\
{}&
+\sup_{u\in I(u_*)}\Bigg|
\sum_{l=4}^{m-1}\binom{m-1}{l} \Bigg( \sum_{k=2}^{l-1}\binom{l-1}{k} \pa \varepsilon^k\cos({\t_n^{\varepsilon}}) \pa \varepsilon^{l-k}{\t_n^{\varepsilon}}
+
\cos({\t_n^{\varepsilon}})\pa \varepsilon^{l}{\t_n^{\varepsilon}}
\Bigg)
\pa u^{r }\pa \varepsilon^{m-l}{\t_n^{\varepsilon}}
\Bigg|_{\varepsilon=0}
\Bigg|_{L^\infty_{\xi,x}(\mathbb R^2)}\\
\le
{}&
\frac{(m-1)(m-2)}{2} (m-4)! (r-3)!C^{1+2r+2(m-2)-3 }\\
{}&+
\frac{(m-1)(m-2)(m-3)}{6}
(m-5)! (r-3)!C^{1+2r+2(m-3)-3 }\\
{}& +(m-2)!(r-3)! C^{2r+2m-8} \cdot
\\
{}&
\Bigg(
\sum_{l=4}^{m-2}\sum_{k=2}^{l-2}
\frac{(m-1)}{(m-l-1)\, l\,(l-k-1) k(k-1)}
+ \sum_{l=3}^{m-2}
\frac{(m-1)}{(m-l-1) l(l-1)}\Bigg) ,
\end{eqnarray}
where the supremum over $m$ of the expression in the last line is finite. \\
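The uniform boundedness in $m$ can likewise be checked numerically. The sketch below (illustrative only, not part of the proof) evaluates the two sums, with the inner index running up to $l-2$ so that all denominator factors are nonzero and with the factor $l$ in the denominator of the double sum, as produced by the binomial coefficients:

```python
# Double sum: sum_{l=4}^{m-2} sum_{k=2}^{l-2} (m-1) / ((m-l-1) l (l-k-1) k (k-1))
def double_sum(m):
    return sum((m - 1) / ((m - l - 1) * l * (l - k - 1) * k * (k - 1))
               for l in range(4, m - 1) for k in range(2, l - 1))

# Single sum: sum_{l=3}^{m-2} (m-1) / ((m-l-1) l (l-1))
def single_sum(m):
    return sum((m - 1) / ((m - l - 1) * l * (l - 1)) for l in range(3, m - 1))

# Both sums stay uniformly bounded in m.
sup = max(double_sum(m) + single_sum(m) for m in range(6, 150))
print(sup)
```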
Now we consider the sum
\begin{eqnarray}
\sum_{i=0}^{r}\binom{r}{i}\pa u^i\sin{\t_n^{\varepsilon}}\pa u^{r-i}\pa \varepsilon^m {\t_n^{\varepsilon}}
\Bigg|_{\varepsilon=0}.
\end{eqnarray}
In order to control this sum, we write it
by utilizing Leibniz's formula in the following way:
\begin{eqnarray}
{}&
\Bigg(\sin{\t_n^{\varepsilon}}\pa u^r\pa \varepsilon^m{\t_n^{\varepsilon}}+
\sum_{i=5}^{r-3}\binom{r}{i}
\sum_{p=0}^{i-1} \binom{i-1}{p}\pa u^{p}(\cos{\t_n^{\varepsilon}}) \pa u^{i-p}{\t_n^{\varepsilon}}\,\pa u^{r-i}\pa \varepsilon^m{\t_n^{\varepsilon}}
\\
{}&
+\sum_{i=1}^{4}\binom{r}{i}
\sum_{p=0}^{i-1} \binom{i-1}{p}\pa u^{p}(\cos{\t_n^{\varepsilon}}) \pa u^{i-p}{\t_n^{\varepsilon}}\,\pa u^{r-i}\pa \varepsilon^m{\t_n^{\varepsilon}}\label{terms1}
\\
{}&
+\sum_{i=r-2}^{r}\binom{r}{i}
\sum_{p=0}^{i-1} \binom{i-1}{p}\pa u^{p}(\cos{\t_n^{\varepsilon}}) \pa u^{i-p}{\t_n^{\varepsilon}}\,\pa u^{r-i}\pa \varepsilon^m{\t_n^{\varepsilon}} \Bigg)
\Bigg|_{\varepsilon=0}\label{terms2}
.
\end{eqnarray}
Using the induction hypothesis we estimate the first term by
$$
\sup_{u\in I(u_*)} |
\sin{\t_n^{\varepsilon}}\pa u^r\pa \varepsilon^m{\t_n^{\varepsilon}}
\big|_{\varepsilon=0}
|_{L^\infty_{\xi,x}(\mathbb R^2)}
\le
(m-2)!(r-3)! C^{2r+2m-3} .
$$
For the second term we obtain
\begin{eqnarray}
{}&
\sup_{u\in I(u_*)}\Bigg|
\sum_{i=5}^{r-3}\binom{r}{i}
\sum_{p=0}^{i-1} \binom{i-1}{p}\pa u^{p}(\cos{\t_n^{\varepsilon}}) \pa u^{i-p}{\t_n^{\varepsilon}}
\pa u^{r-i}\pa \varepsilon^m{\t_n^{\varepsilon}}
\Bigg|_{\varepsilon=0}
\Bigg|_{L^\infty_{\xi,x}(\mathbb R^2)}\\
={}&
\sup_{u\in I(u_*)}\Bigg|
\sum_{i=5}^{r-3}\binom{r}{i} \Bigg( \sum_{p=3}^{i-3} \binom{i-1}{p}\pa u^{p}(\cos{\t_n^{\varepsilon}}) \pa u^{i-p}{\t_n^{\varepsilon}}
+ (\cos{\t_n^{\varepsilon}}) \pa u^{i}{\t_n^{\varepsilon}}
+ (i-1)\pa u^{ }(\cos{\t_n^{\varepsilon}}) \pa u^{i-1}{\t_n^{\varepsilon}}
\\
{}&
+ \frac{(i-1)(i-2)}2\pa u^{2}(\cos{\t_n^{\varepsilon}}) \pa u^{i-2}{\t_n^{\varepsilon}}
+(i-1) \pa u^{i-2}(\cos{\t_n^{\varepsilon}}) \pa u^{2}{\t_n^{\varepsilon}}
\\
{}&
+ \pa u^{i-1}(\cos{\t_n^{\varepsilon}}) \pa u{\t_n^{\varepsilon}}
\Bigg)\pa u^{r-i}\pa \varepsilon^m{\t_n^{\varepsilon}}
\Bigg|_{\varepsilon=0}
\Bigg|_{L^\infty_{\xi,x}(\mathbb R^2)}
\\
\le
{}&
(m-2)!(r-3)!C^{2r+2m-8} \cdot \\
{}& \sum_{i=5}^{r-3}\Bigg(\sum_{p=3}^{i-3}\frac{r(r-1)(r-2) }{(r-i)(r-i-1)(r-i-2)i (i-p-1)(i-p-2)p(p-1)(p-2)}
\\
{}&
+
\frac{r(r-1)(r-2)}{i(i-1)(i-2)(r-i) (r-i-1)(r-i-2)}
+
\frac{r(r-1)(r-2)}{i(i-2)(i-3)(r-i) (r-i-1)(r-i-2)}
\end{eqnarray}
\begin{eqnarray}
{}&
+
\frac{r(r-1)(r-2)}{2i (i-3)(i-4)(r-i) (r-i-1)(r-i-2)}
\\
{}&
+
\frac{r(r-1)(r-2)}{i (i-2)(i-3)(i-4)(r-i) (r-i-1)(r-i-2)} \\
{}&
+
\frac{r(r-1)(r-2)}{i (i-1)(i-2)(i-3)(r-i) (r-i-1)(r-i-2)}
\Bigg),
\end{eqnarray}
where the supremum over $r$ of the sum is finite.
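This finiteness claim can also be confirmed numerically; the sketch below (illustrative only, not part of the proof) sums the six bracketed terms over $i$ for a range of $r$:

```python
# Sum over i = 5, ..., r-3 of the six bracketed terms in the estimate above.
def bracket_sum(r):
    total = 0.0
    for i in range(5, r - 2):
        # common prefactor r(r-1)(r-2) / ((r-i)(r-i-1)(r-i-2))
        w = r * (r - 1) * (r - 2) / ((r - i) * (r - i - 1) * (r - i - 2))
        total += sum(w / (i * (i - p - 1) * (i - p - 2) * p * (p - 1) * (p - 2))
                     for p in range(3, i - 2))
        total += w / (i * (i - 1) * (i - 2))
        total += w / (i * (i - 2) * (i - 3))
        total += w / (2 * i * (i - 3) * (i - 4))
        total += w / (i * (i - 2) * (i - 3) * (i - 4))
        total += w / (i * (i - 1) * (i - 2) * (i - 3))
    return total

# The sum stays uniformly bounded in r.
sup = max(bracket_sum(r) for r in range(8, 200))
print(sup)
```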
The summands of the sums \eqref{terms1}-\eqref{terms2} can be treated similarly. As an example we consider the case $i=2$:
\begin{eqnarray}
{}&
\sup_{u\in I(u_*)}\Bigg|
\binom{r}{2}\sum_{p=0}^{1} \binom{1}{p}\pa u^{p}(\cos{\t_n^{\varepsilon}}) \pa u^{2-p}{\t_n^{\varepsilon}}
\pa u^{r-2}\pa \varepsilon^m{\t_n^{\varepsilon}}
\Bigg|_{\varepsilon=0}
\Bigg|_{L^\infty_{\xi,x}(\mathbb R^2)}
\\
\le
{}&
(m-2)! r(r-1) (r-5)! C^{2r+2m-8}.
\end{eqnarray}
This completes the induction step for \eqref{ind km cos two}, since by the previous estimates an appropriate constant $C$ can be found as in the cases $0\le K \le 2$. One shows \eqref{ind km cos one} similarly.
\noindent
Now we estimate separately the terms of
the recursive formula \eqref{recursive relation KN}.
Firstly, we start for $K\ge 5, ~N\ge 3$ with the term
\begin{eqnarray}\label{firsttermnorm}
\l\Vert
\l[{\frak M}_{0}^\alpha\r]^{-1}
\Bigg[
\sum_{\substack{0\le m \le N-1,\\ 0 \le k \le K, ~(m,k)\not=(0,0) }}\binom{N-1}{m} \binom{K}{k}
\begin{pmatrix}
0\\
\pa u^{k} \pa \varepsilon^{m} \cos({\t_n^{\varepsilon}}) \pa u^{K-k}\pa \varepsilon^{N-m}{\t_n^{\varepsilon}}
\end{pmatrix}\Bigg]
\Bigg|_{\varepsilon=0}\r\Vert_{Y_0^\alpha(u_*)}.
\end{eqnarray}
We split the sum over the indices $m,k$, analogously to \eqref{costerm1}, into two sums, one over the indices $J_{N,K}:=\{(m,k) ~:~ 3\le m \le N-1 ~\text{and}~ 3\le k \le K\}$ and the other over the indices $ \{(m,k)\not=(0,0) ~:~ 0\le m \le N-1,~ 0\le k \le K\} \setminus J_{N,K}$.
The sum over indices $J_{N,K}$ can be estimated by
\begin{eqnarray}
{}&\l\Vert
\l[{\frak M}_{0}^\alpha\r]^{-1}
\Bigg[
\sum_{\substack{3\le m \le N-1,\\ 3 \le k \le K, ~
}}\binom{N-1}{m} \binom{K}{k}
\begin{pmatrix}
0\\
\pa u^{k} \pa \varepsilon^{m} \cos({\t_n^{\varepsilon}}) \pa u^{K-k}\pa \varepsilon^{N-m}{\t_n^{\varepsilon}}
\end{pmatrix}\Bigg]
\Bigg|_{\varepsilon=0}\r\Vert_{Y_0^\alpha(u_*)}\\
\le {}&\l\Vert\l[{\frak M}_{0}^\alpha\r]^{-1} \r\Vert (N-2)!(K-3)!C^{2K+2N-5}\cdot\\
{}&\sum_{\substack{3\le m \le N-1,\\ 3 \le k \le K
}}\frac{(N-1)K(K-1)(K-2)}{(N-m-1)!m!(K-k)!k!} (k-3)!(m-2)!(K-k-3)!(N-m-2)!
\end{eqnarray}
\begin{eqnarray}
\le
{}&\l\Vert\l[{\frak M}_{0}^\alpha\r]^{-1} \r\Vert
(N-2)!(K-3)!C^{2K+2N-5}\cdot
\\
{}&
\sum_{\substack{3\le m \le N-1,\\ 3 \le k \le K
}}\frac{(N-1)}{(N-m-1)m(m-1)}
\frac{K(K-1)(K-2)}{(K-k)(K-k-1)(K-k-2)k(k-1)(k-2)}\\
\OT{\le}{!}{}&
(N-2)!(K-3)!C^{2K+2N-4}.
\end{eqnarray}
We decompose the set of indices
$ \{(m,k)\not=(0,0) ~:~ 0\le m \le N-1,~ 0\le k \le K\} \setminus J_{N,K}$
analogously to \eqref{setofindices} and consider the sums over the corresponding subsets. All those sums can be treated similarly. For instance, for indices $\{(2,k),~k= 0,\ldots,K\} $, we obtain by using \eqref{ind km cos one}-\eqref{ind km cos two}:
\begin{eqnarray}
{}&
\Bigg\Vert
\l[{\frak M}_{0}^\alpha\r]^{-1}
\frac{(N-1)(N-2)}2 \sum_{k=0}^K \binom{K}{k}
\begin{pmatrix}
0\\
\pa u^{k} \pa \varepsilon^{ 2} \cos({\t_n^{\varepsilon}}) \pa u^{K-k}\pa \varepsilon^{N-2}{\t_n^{\varepsilon}}
\end{pmatrix}
\Bigg|_{\varepsilon=0}
\Bigg\Vert_{Y_0^\alpha(u_*)}\\
\le
{}&
\l\Vert\l[{\frak M}_{0}^\alpha\r]^{-1} \r\Vert \Bigg(
\frac{(N-1)(N-2)}2 \Bigg\Vert \sum_{k=3}^K \binom{K}{k}
\begin{pmatrix}
0\\
\pa u^{k} \pa \varepsilon^{ 2} \cos({\t_n^{\varepsilon}}) \pa u^{K-k}\pa \varepsilon^{N-2}{\t_n^{\varepsilon}}
\end{pmatrix}
\Bigg|_{\varepsilon=0}
\Bigg\Vert_{Z_0^\alpha(u_*)}
\\
{}&+\frac{(N-1)(N-2)}2 (K-3)!(N-4)! C^{2+2K+2(N-2)-3}\\
{}&+\frac{(N-1)(N-2)}2 K (K-4)!(N-4)! C^{4+2(K-1)+2(N-2)-3} \\
{}& + \frac{(N-1)(N-2)}2 (K-5)!(N-4)! C^{6+2(K-2)+2(N-2)-3}
\Bigg) \\
\OT{\le}{!}{}&
(N-2)!(K-3)!C^{2K+2N-4}.
\end{eqnarray}
Secondly, we consider for $K\ge 5, ~N\ge 3$ the term
\begin{eqnarray}
{}&\l\Vert
\l[{\frak M}_{0}^\alpha\r]^{-1}
\Bigg[
\sum_{\substack{0\le l \le \min\{n-1,N-1\}\\
0\le k \le K, ~(l,k)\not=(0,0) }}
\binom{N}{l}\binom{K}{k} \begin{pmatrix}
\pa u^{K-k} \pa \varepsilon^{N-l} \lambda_{n}^0 {\pa u^{k+1} \pa \varepsilon^l\t_{n}^0}\\
\pa u^{K-k} \pa \varepsilon^{N-l} \lambda_{n}^0 {\pa u^{k+1} \pa \varepsilon^l\psi_{n}^0}
\end{pmatrix}
\Bigg]
\r\Vert_{Y_0^\alpha(u_*)} ,
\end{eqnarray}
from \eqref{recursive relation KN}. We treat this term analogously to \eqref{firsttermnorm} and
the sum over indices $J_{N,K}$ can be estimated by
\begin{eqnarray}
{}&
\l\Vert
\l[{\frak M}_{0}^\alpha\r]^{-1}
\Bigg[
\sum_{\substack{3\le l \le \min\{n-1,N-1\}\\
3\le k \le K
}}
\binom{N}{l}\binom{K}{k} \begin{pmatrix}
\pa u^{K-k} \pa \varepsilon^{N-l} \lambda_{n}^0 {\pa u^{k+1} \pa \varepsilon^l\t_{n}^0}\\
\pa u^{K-k} \pa \varepsilon^{N-l} \lambda_{n}^0 {\pa u^{k+1} \pa \varepsilon^l\psi_{n}^0}
\end{pmatrix}
\Bigg]
\r\Vert_{Y_0^\alpha(u_*)}
\end{eqnarray}
\begin{eqnarray}
\le
{}&\l\Vert\l[{\frak M}_{0}^\alpha\r]^{-1} \r\Vert (N-2)!(K-3)! C^{2K+2N-5} \cdot\\
{}&\sum_{\substack{3\le m \le N-1,\\ 3 \le k \le K
}}\frac{N(N-1)K(K-1)(K-2)}{(N-m)!m!(K-k)!k!} (k-2)!(m-2)!(K-k-3)!(N-m-2)!
\\
\le
{}&\l\Vert\l[{\frak M}_{0}^\alpha\r]^{-1} \r\Vert (N-2)!(K-3)! C^{2K+2N-5} \cdot \\
{}&\sum_{\substack{3\le m \le N-1,\\ 3 \le k \le K
}}\frac{N(N-1)}{(N-m)(N-m-1)m(m-1)}
\frac{K(K-1)(K-2)}{(K-k)(K-k-1)(K-k-2)k(k-1)}\\
\OT{\le}{!}{}&
(N-2)!(K-3)!C^{2K+2N-4}\,.
\end{eqnarray}
We decompose the set of indices
$ \{(m,k)\not=(0,0) ~:~ 0\le m \le N-1,~ 0\le k \le K\} \setminus J_{N,K}$ and estimate the corresponding sums as above. For instance, for indices $\{(0,k),~k= 1,\ldots,K\} $, we obtain
\begin{eqnarray}
{}&
\Bigg\Vert
\l[{\frak M}_{0}^\alpha\r]^{-1}
\sum_{k=1}^K \binom{K}{k}
\begin{pmatrix}
\pa u^{K-k} \pa \varepsilon^{N} \lambda_{n}^0 {\pa u^{k+1} \t_{n}^0}\\
\pa u^{K-k} \pa \varepsilon^{N} \lambda_{n}^0 {\pa u^{k+1} \psi_{n}^0}
\end{pmatrix}
\Bigg\Vert_{Y_0^\alpha(u_*)}\\
\le
{}&
\l\Vert\l[{\frak M}_{0}^\alpha\r]^{-1} \r\Vert \Bigg( \Bigg\Vert
\sum_{k=3}^K \binom{K}{k}
\begin{pmatrix}
\pa u^{K-k} \pa \varepsilon^{N} \lambda_{n}^0 {\pa u^{k+1} \t_{n}^0}\\
\pa u^{K-k} \pa \varepsilon^{N} \lambda_{n}^0 {\pa u^{k+1} \psi_{n}^0}
\end{pmatrix}
\Bigg\Vert_{Z_0^\alpha(u_*)} \\
{}&
+ K (K-4)!(N-2)!C^{2(K-1)+2N-5} + K (K-1) (K-5)!(N-2)! C^{2(K-2)+2N-3}
\Bigg) \\
\OT{\le}{!}{}&
(N-2)!(K-3)!C^{2K+2N-3-1/3} .
\end{eqnarray}
The last term in \eqref{recursive relation KN},
\begin{eqnarray}
{}&\l\Vert
\l[{\frak M}_{0}^\alpha\r]^{-1}
\Bigg[
K\begin{pmatrix}
\pa \xi\pa u^{K-1}\pa \varepsilon^N\t_n^0\\
\pa \xi\pa u^{K-1}\pa \varepsilon^N\psi_n^0
\end{pmatrix}
\Bigg]
\r\Vert_{Y_0^\alpha(u_*)}\,,
\end{eqnarray}
can be estimated by
\begin{eqnarray}
{}&
\l\Vert\l[{\frak M}_{0}^\alpha\r]^{-1} \r\Vert
\Bigg\Vert
K\begin{pmatrix}
\pa \xi\pa u^{K-1}\pa \varepsilon^N\t_n^0\\
\pa \xi\pa u^{K-1}\pa \varepsilon^N\psi_n^0
\end{pmatrix}
\Bigg\Vert_{Z_0^\alpha(u_*)}\\
\le{}&
\l\Vert\l[{\frak M}_{0}^\alpha\r]^{-1} \r\Vert
K(K-4)!(N-2)!C^{2(K-1)+2N-3}\\
\OT{\le}{!}{}&
(N-2)!(K-3)!C^{2K+2N-4}\,,
\end{eqnarray}
which completes the proof by the same argument as in the cases $0\le K \le 2$.
\epr
\noindent
Using \cref{derivatives estimate}, we now prove \cref{thimplicitfunctionIT1 alpha}.
\begin{proofNEW}[of \cref{thimplicitfunctionIT1 alpha} ]
In this proof, we use the notation $Y_m^\alpha=Y_m^\alpha(u_*)$, $Z_m^\alpha=
Z_m^\alpha(u_*)$.
We follow the proof of the implicit function theorem in \cite[Theorem 15.1]{Deimling}, showing in addition that $r$ and ${\delta}$ do not depend on $\tilde {\cal G}_n$.
Once
$
\tilde {\cal G}_{n }: J \times Y_{0}^\alpha \to Z_{0}^\alpha
$
is defined, one obtains that its derivative
with respect to $({\hat \t},{\hat\p},\lambda)$ evaluated at $(\varepsilon,{\hat \t},{\hat\p},\lambda)=(0,0,0,0)$
is given by
${\frak M}_{0}^\alpha$. We set
$$
S_{n }(\varepsilon,{\hat \t},{\hat\p},\lambda)= \l[{\frak M}_{0}^\alpha\r]^{-1} \tilde {\cal G}_{n}^\varepsilon({\hat \t},{\hat\p},\lambda)-I({\hat \t},{\hat\p},\lambda)\,.
$$
We start with $\tilde {\cal G}_{1}$. Notice that
$
\tilde {\cal G}_1^{0}(0,0,0)
=0\,.
$
Let the constant $C$ satisfy the assumptions required in the proof of \cref{derivatives estimate}.
Since
$
D_{({\hat \t},{\hat\p},\lambda)}S_{1 }(0,0,0,0)=0
$
and $D_{({\hat \t},{\hat\p},\lambda)}S_{1 }$ is continuous, we fix $k\in(0,1)$ and find $1\ge\delta>0$ such that
\begin{eqnarray}\label{bound DS1}
\l\Vert
D_{({\hat \t},{\hat\p},\lambda)}S_{1 }(\varepsilon,{\hat \t},{\hat\p},\lambda)
\r\Vert_{Z_0^\alpha(u_*)} + \l\Vert \l[{\frak M}_{0}^\alpha\r]^{-1}\r\Vert \sum_{n=1}^\infty c_n\varepsilon^n
\le k
\end{eqnarray}
on $\overline B_{\delta }(0) \times \overline B_{\delta }(0) $, where
$
c_1= C$,
$c_n= \frac{ C^{2n-3}}{n(n-1)}
$
for $n\ge 2$
and
$\Vert \cdot \Vert$ denotes the operator norm of $\l[{\frak M}_{0}^\alpha\r]^{-1} $.
Since
$
S_{1 }(0,0,0,0)=0
$
and
$
S_{1 }(\cdot,0,0,0)
$
is continuous, there exists $r=:\bar\varepsilon\le\delta$ such that
$$
\l\Vert
S_{1 }(\varepsilon,0,0,0)
\r\Vert_{Z_0^\alpha(u_*)}
< \delta(1-k)
$$
on
$\overline B_{r}(0)$.
Thus there exists by \cite[Theorem 15.1]{Deimling}
a map
$$
(-\bar\varepsilon,\bar\varepsilon) \to Y_{0}^\alpha,
~~~\varepsilon \mapsto (\hat\t_{1}^\varepsilon,\hat\psi_{1}^\varepsilon,\lambda_{1}^\varepsilon)
$$
such that
$
\tilde{\cal G}^{\eps}_{1}(\hat\t_{1}^\varepsilon,\hat\psi_{1}^\varepsilon,\lambda_{1}^\varepsilon)=0\,.
$
Let $\underaccent{\bar} \varepsilon>0$ be the radius of convergence of $\sum_{n=2}^\infty c_n\varepsilon^n$ and $\varepsilon^* := \min \{ \underaccent{\bar} \varepsilon, \bar \varepsilon\}$.
Since $\tilde F$ is analytic, the solution $(\hat\t_{1}^\varepsilon,\hat\psi_{1}^\varepsilon,\lambda_{1}^\varepsilon)$ is also analytic and
may be written in the form
\begin{eqnarray}\label{sol_zero}
(\hat\t_{1 }^\varepsilon,\hat\psi_{1 }^\varepsilon,\lambda_{1 }^\varepsilon) = {}&\l(\sum_{i=0}^{\infty} \frac{\pa \varepsilon^i\hat\t_{1 }^0}{i!}\varepsilon^i,\sum_{i=0}^{\infty} \frac{\pa \varepsilon^i\hat\psi_{1 }^0}{i!}\varepsilon^i,\sum_{i=0}^{\infty} \frac{\pa \varepsilon^i\lambda_{1 }^0}{i!}\varepsilon^i\r)\,
\end{eqnarray}
for $\varepsilon\in(-\varepsilon^*,\varepsilon^* )$
due to \cref{derivatives estimate}.
Considering the map
$\tilde {\cal G}_{1,m}$ on spaces of higher regularity, given by
\begin{eqnarray}
\tilde {\cal G}_{1,m}: J \times Y_{m}^\alpha \to Z_{m}^\alpha \,,
(\varepsilon,{\hat \t},{\hat\p},\lambda) \mapsto \tilde{\cal G}_1^{\eps}({\hat \t},{\hat\p},\lambda):={\cal G}_1^{\eps}(\t_0+{\hat \t},\psi_0+{\hat\p},\lambda)\,,
\end{eqnarray}
where ${\cal G}_1$ is defined by \eqref{successive eq G1},
we
obtain in the same way for any $m\in {\mathbb N}$ a constant
$\bar\varepsilon_{m}>0$ and a map
$$
(-\bar\varepsilon_{m},\bar\varepsilon_{m}) \to Y_{m}^\alpha,
~~~\varepsilon \mapsto (\hat\t_{1,m}^\varepsilon,\hat\psi_{1,m}^\varepsilon,\lambda_{1,m}^\varepsilon)
$$
such that
$
\tilde{\cal G}^{\eps}_{1,m}(\hat\t_{1,m}^\varepsilon,\hat\psi_{1,m}^\varepsilon,\lambda_{1,m}^\varepsilon)=0\,.
$
Since
$\tilde F$ is analytic and
$
(
\hat\t_{1 }^\varepsilon,
\hat\psi_{1 }^\varepsilon,
\lambda_{1 }^\varepsilon
)=(\hat\t_{1,m}^\varepsilon,\hat\psi_{1,m}^\varepsilon,\lambda_{1,m}^\varepsilon)\in Y_{m}^\alpha \,
$
for $\varepsilon\in(-\bar\varepsilon_{m},\bar\varepsilon_{m}) $, it follows from \cref{derivatives estimate} that
$
(
\hat\t_{1 }^\varepsilon,
\hat\psi_{1 }^\varepsilon,
\lambda_{1 }^\varepsilon
)\in Y_{m}^\alpha \,
$ for $\varepsilon\in(- \varepsilon^*, \varepsilon^* )$ and consequently that $
\tilde {\cal G}_{2 }: J \times Y_{0}^\alpha \to Z_{0}^\alpha
$
is well defined.
Since
\begin{eqnarray}
0=\l\Vert\begin{pmatrix}
\pa \varepsilon^1\t_1^0\\
\pa \varepsilon^1\psi_1^0\\
\pa \varepsilon^1\lambda_{1}^0
\end{pmatrix}
\r\Vert_{Y_0^\alpha(u_*)}\le c_1
\end{eqnarray}
due to \eqref{assumption derivatives F},\eqref{recursive relation N} and \cref{le invertibilityMxiCtwo alpha},
it follows from \eqref{bound DS1} that
$$
\l\Vert
D_{({\hat \t},{\hat\p},\lambda)}S_{2 }(\varepsilon,{\hat \t},{\hat\p},\lambda)
\r\Vert_{Z_0^\alpha(u_*)}
\le k
$$
on $\overline B_{\delta }(0) \times \overline B_{\delta }(0) $.
Obviously
$$
\l\Vert
S_{2 }(\varepsilon,0,0,0)
\r\Vert_{Z_0^\alpha(u_*)}
< \delta(1-k)
$$
on
$\overline B_{r}(0)$.
Thus there exists by the same argument as above
an analytic map
$$
(-\varepsilon_{ }^*,\varepsilon_{ }^*) \to Y_{0}^\alpha,
~~~\varepsilon \mapsto (\hat\t_{2 }^\varepsilon,\hat\psi_{2 }^\varepsilon,\lambda_{2 }^\varepsilon),
$$
which may be written in a form analogous to \eqref{sol_zero} for $\varepsilon\in(- \varepsilon^*, \varepsilon^* )$ such that
$
\tilde{\cal G}^{\eps}_{2 }(\hat\t_{2 }^\varepsilon,\hat\psi_{2 }^\varepsilon,\lambda_{2 }^\varepsilon)=0\,.
$
We continue this process successively; in the second and all succeeding iteration steps we use the following argument.
Assuming that the first $n-1$ iterative solutions have been obtained, it holds that
\begin{eqnarray}
\frac 1 {N!}\l\Vert\begin{pmatrix}
\pa \varepsilon^N\t_{n-1}^0\\
\pa \varepsilon^N\psi_{n-1}^0\\
\pa \varepsilon^N\lambda_{n-1}^0
\end{pmatrix}
\r\Vert_{Y_0^\alpha(u_*)}\le c_{N}~~~~\text{for}~~~~1\le N\le n-1,
\end{eqnarray}
due to
\cref{derivatives estimate}.
Thus \eqref{bound DS1} yields that
$$
\l\Vert
D_{({\hat \t},{\hat\p},\lambda)}S_{n }(\varepsilon,{\hat \t},{\hat\p},\lambda)
\r\Vert_{Z_0^\alpha(u_*)}
\le k
$$
on $\overline B_{\delta }(0) \times \overline B_{\delta }(0) $.
Since
$$
\l\Vert
S_{n }(\varepsilon,0,0,0)
\r\Vert_{Z_0^\alpha(u_*)}
< \delta(1-k)
$$
on
$\overline B_{r}(0)$
there exists by the same argument as above
an analytic map
\begin{eqnarray}
{}&(-\varepsilon_{ }^*,\varepsilon_{ }^*) \to Y_{0}^\alpha,
~~~\varepsilon \mapsto (\hat\t_{n }^\varepsilon,\hat\psi_{n }^\varepsilon,\lambda_{n}^\varepsilon),
\end{eqnarray}
which may be written in a form analogous to \eqref{sol_zero} for $\varepsilon\in(- \varepsilon^*, \varepsilon^* )$ such that
$
\tilde{\cal G}^{\eps}_{n }(\hat\t_{n }^\varepsilon,\hat\psi_{n }^\varepsilon,\lambda_{n }^\varepsilon)=0.
$
\epr
\section{Convergence of the Sequence of Iterative Solutions}\label{se: Convergence of the Sequence}
In this section, we show that the sequence of iterative solutions constructed in \cref{se: Implicit function theorem} converges and that its limit defines a function which solves the equation of interest.
\begin{lemma}\label{le limit}
Let $\alpha$, $u_*$ and $\varepsilon^*$ be from \cref{thimplicitfunctionIT1 alpha}. The limit
\begin{eqnarray}
({\hat \t}_\infty^\varepsilon,{\hat\p}_\infty^\varepsilon,\lambda_\infty^\varepsilon) := {}&\l(\sum_{i=1}^{\infty} \frac{\pa \varepsilon^i\t_{i}^0}{i!}\varepsilon^i,\sum_{i=1}^{\infty} \frac{\pa \varepsilon^i\psi_{i}^0}{i!}\varepsilon^i,\sum_{i=0}^{\infty} \frac{\pa \varepsilon^i\lambda_{i}^0}{i!}\varepsilon^i\r)
\end{eqnarray}
exists in $Y_{0}^\alpha (u_*) $ for $\varepsilon\in(-\varepsilon^*,\varepsilon^* )$.
We set $(\t_\infty^\varepsilon,\psi_\infty^\varepsilon,\lambda_\infty^\varepsilon):=(\t_0+{\hat \t}_\infty^\varepsilon ,\psi_0+{\hat\p}_\infty^\varepsilon ,\lambda_\infty^\varepsilon)$ with $(\t_0,\psi_0)$ given by \eqref{solitonsolution}.
\end{lemma}
\begin{proofNEW}
The claim follows from \cref{thimplicitfunctionIT1 alpha} and \cref{derivatives estimate}, since $\varepsilon^*$ is less than or equal to the radius of convergence of
$
\sum_{n=2}^\infty \frac{C^{2n-3}}{n(n-1)} \varepsilon^n
$ with $C$ from \cref{derivatives estimate}.
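Indeed, the radius of this series can be computed explicitly by the ratio test: with $c_n = C^{2n-3}/(n(n-1))$,
$$
\frac{c_{n+1}}{c_n}
= \frac{C^{2n-1}}{(n+1)n}\cdot\frac{n(n-1)}{C^{2n-3}}
= C^{2}\,\frac{n-1}{n+1}
\longrightarrow C^{2} \quad (n\to\infty)\,,
$$
so that the radius of convergence equals $C^{-2}$.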
\epr
\begin{theorem}\label{le limitsolution}
Let $u_*$ and $\varepsilon^*$ be from \cref{thimplicitfunctionIT1 alpha}.
Then it holds for any $ u\in I(u_*) $ and $\varepsilon\in(-\varepsilon^*, \varepsilon^* )$ that
$$\ba\label{ITnquantitativ}
{}&
u\pa \xi\l(\begin{matrix}
\t_\infty^\varepsilon\\
\psi_\infty^\varepsilon\\
\end{matrix}\r)
-\l(\begin{matrix}
\psi_\infty^\varepsilon\\
[\t_\infty^\varepsilon]_{xx}-\sin\t_\infty^\varepsilon+\tilde F(\varepsilon)\\
\end{matrix}\r)
+\lambda_{ \infty}^\varepsilon\pa u\begin{pmatrix}
\t_\infty^\varepsilon\\
\psi_\infty^\varepsilon \\
\end{pmatrix} =0\\
\ea.
$$
\eth
\begin{proofNEW}
Let $n\in \mathbb{N}$.
Notice that $$\forall u\in I(u_*):~~
u\pa \xi\l(\begin{matrix}
\t_n^\varepsilon\\
\psi_n^\varepsilon\\
\end{matrix}\r)
-\l(\begin{matrix}
\psi_n^\varepsilon\\
[\t_n^\varepsilon]_{xx}-\sin\t_n^\varepsilon+\tilde F(\varepsilon)\\
\end{matrix}\r)
+\lambda_{ n}^\varepsilon\pa u\begin{pmatrix}
\sum_{i=0}^{n-1} \frac{\pa \varepsilon^i\t_{n}^0}{i!}\varepsilon^i\\
\sum_{i=0}^{n-1} \frac{\pa \varepsilon^i\psi_{n}^0}{i!}\varepsilon^i \\
\end{pmatrix} =0\,.
$$
It holds due to \cref{thimplicitfunctionIT1 alpha} that
\begin{eqnarray}
(\hat\t_n^\varepsilon,\hat\psi_n^\varepsilon,\lambda_n^\varepsilon) = {}&\l(\sum_{i=0}^{\infty} \frac{\pa \varepsilon^i\hat\t_{n}^0}{i!}\varepsilon^i,\sum_{i=0}^{\infty} \frac{\pa \varepsilon^i\hat\psi_{n}^0}{i!}\varepsilon^i,\sum_{i=0}^{\infty} \frac{\pa \varepsilon^i\lambda_{n}^0}{i!}\varepsilon^i\r)\,.
\end{eqnarray}
Thus using \cref{thITrelations} and \cref{derivatives estimate} we obtain for $n\ge 2$
and $\varepsilon\in(-\varepsilon^*, \varepsilon^* )$:
\begin{eqnarray}
{}&\l\Vert \begin{pmatrix}
\t_\infty^\varepsilon -\t_n^\varepsilon\\
\psi_\infty^\varepsilon -\psi_n^\varepsilon\\
\lambda_\infty^\varepsilon-\lambda_{ n}^\varepsilon
\\
\end{pmatrix} \r\Vert_{Y_0^\alpha(u_*)}\\
=
{}&\l\Vert \begin{pmatrix}
\sum_{i=0}^{\infty} \frac{\pa \varepsilon^i\t_{i}^0}{i!} \varepsilon^i-\sum_{i=0}^{\infty} \frac{\pa \varepsilon^i\t_{n}^0}{i!}\varepsilon^i\\
\sum_{i=0}^{\infty} \frac{\pa \varepsilon^i\psi_{i}^0}{i!} \varepsilon^i -\sum_{i=0}^{\infty} \frac{\pa \varepsilon^i\psi_{n}^0}{i!}\varepsilon^i\\
\sum_{i=0}^{\infty} \frac{\pa \varepsilon^i\lambda_{i}^0}{i!} \varepsilon^i-\sum_{i=0}^{\infty} \frac{\pa \varepsilon^i\lambda_{n}^0}{i!}\varepsilon^i
\\
\end{pmatrix} \r\Vert_{Y_0^\alpha(u_*)}
\end{eqnarray}
\begin{eqnarray}
=
{}&\l\Vert \begin{pmatrix}
\sum_{i=n}^{\infty} \frac{\pa \varepsilon^i\t_{i}^0}{i!} \varepsilon^i-\sum_{i=n}^{\infty} \frac{\pa \varepsilon^i\t_{n}^0}{i!}\varepsilon^i\\
\sum_{i=n}^{\infty} \frac{\pa \varepsilon^i\psi_{i}^0}{i!} \varepsilon^i -\sum_{i=n}^{\infty} \frac{\pa \varepsilon^i\psi_{n}^0}{i!}\varepsilon^i\\
\sum_{i=n}^{\infty} \frac{\pa \varepsilon^i\lambda_{i}^0}{i!} \varepsilon^i-\sum_{i=n}^{\infty} \frac{\pa \varepsilon^i\lambda_{n}^0}{i!}\varepsilon^i
\end{pmatrix} \r\Vert_{Y_0^\alpha(u_*)} \\
\le {}&
2 \sum_{i=n}^{\infty} \frac{ C^{2i-3}}{i(i-1)}{ \varepsilon}^i \,.
\end{eqnarray}
The claim follows since
\begin{eqnarray}
\pa u(\t_\infty^\varepsilon,\psi_\infty^\varepsilon,\lambda_\infty^\varepsilon) = {}&\l(\sum_{i=0}^{\infty} \frac{\pa u\pa \varepsilon^i\t_{i}^0}{i!}\varepsilon^i,\sum_{i=0}^{\infty} \frac{\pa u\pa \varepsilon^i\psi_{i}^0}{i!}\varepsilon^i,\sum_{i=0}^{\infty} \frac{\pa u\pa \varepsilon^i\lambda_{i}^0}{i!}\varepsilon^i\r)
\end{eqnarray}
in $Y_{0}^\alpha (u_*) $ due to \cref{le limit} and \cref{derivatives estimate}.
\epr
\section{Proof of Theorem 2.2}\label{se: Main Results Proof}
We apply \cref{thimplicitfunctionIT1 alpha}
to a specific $\tilde F$ which is defined below.
\begin{definition}\label{de cutoff}
Let $F,\xi_s$ and $\Xi$ be from \cref{maintheorem}.
We set
$\tilde F(\varepsilon,\xi,x):=F(\eps,x) \chi(\xi),$
where $\chi$ is a smooth cutoff function
with $\chi (\xi)=1$ for $|\xi|\le \Xi$ and $\chi (\xi)=0$ for $|\xi|\ge \Xi+1$.
\end{definition}
\noindent
The next lemma follows immediately from the assumptions on $F$ in \cref{maintheorem}.
\begin{lemma} \label{le: Ftilde vs F}
Let $F$, $\Xi$ be from \cref{maintheorem} and let $ \tilde F$ be from \cref{de cutoff}.
Then it holds that
\begin{itemize}
\item[(a)] $\forall ~(\varepsilon,\xi,x) \in (-1,1)\times \l[-\Xi,\Xi \r] \times \mathbb R: \tilde F(\varepsilon,\xi,x)=F(\eps,x)$\,;
\item[(b)] $\tilde F$ satisfies the assumptions of \cref{thimplicitfunctionIT1 alpha}.
\end{itemize}
\end{lemma}
\noindent
We solve iteratively the equations in
\cref{thimplicitfunctionIT1 alpha} with the specific $\tilde F(\varepsilon,\xi,x):=F(\eps,x) \chi(\xi)$ from \cref{de cutoff} (\cref{thimplicitfunctionIT1 alpha} is applicable due to \cref{le: Ftilde vs F})
and obtain a sequence of solutions,
which converges due to \cref{le limit}.
From now on we denote its limit by $(\t_{\infty}^\varepsilon ,\psi_{\infty}^\varepsilon ,\lambda_{\infty}^\varepsilon)
$.
The function $(\t,\psi)$ given by \eqref{form} with
$\bar\xi,\bar u$ satisfying \eqref{exactODE virtual1},
solves the Cauchy problem \eqref{SGE1} due to \cref{le limitsolution} and \cref{le: Ftilde vs F}. The claim for $|u_s|\le \tilde C\varepsilon$ follows
by using \eqref{exactODE virtual1} and the fundamental theorem of calculus (analogous to the proof of \cite[Lemma 9.2]{Mashkin}).
\qed
\section{Introduction}
The representation of 3D geometry is a key issue in the context of machine learning in general and deep learning in particular. A variety of approaches, from point clouds over voxel sets to range images, have been investigated.
When the input geometry is in the common form of a surface mesh, conversion to such representations typically comes with losses in fidelity, accuracy, or conciseness.
Hence, techniques have been introduced to more or less directly take such discrete surface data as input to machine learning methods. Examples are graph-based \cite{kostrikov2017surface,defferrard2016convolutional} and patch-based approaches \cite{masci2015geodesic,boscaini2016learning,monti2017geometric}.
While graph-based techniques rely on fixed mesh connectivity structures, patch-based techniques provide more flexibility. However, they crucially rely on some form of (re)sampling of the input mesh data, so as to achieve consistent, regular neighborhood encodings, similar to the regular pixel structures exploited for learning on image data.
In this paper we consider the question whether such resampling can be avoided, taking the mesh data as input even more directly. The rationale for our interest is twofold: the avoidance of resampling would increase the efficiency of inference (and perhaps training) and could possibly increase precision. The increase in efficiency would be due to not having to perform the (typically non-trivial) resampling (either as a preprocess or online). One could hypothesize an increase in precision based on the fact that resampling is, in general, accompanied by some loss of data fidelity.
We propose a resampling and conversion free input encoding strategy for local neighborhoods in manifold 3D surface meshes. In contrast to many previous approaches for learning on surface meshes, we then make use of RNNs and fully-connected networks instead of CNNs, so as to be able to deal with the non-uniform, non-regular structure of the input.
Though simple, this raw input encoding is rich enough that our networks could, in theory, learn to emulate common patch resampling operators based on it. Nevertheless, hand-crafting such resampling operators and preprocessing the input accordingly, as previously done, could of course be of benefit in practice. Hence it is important to evaluate practical performance experimentally.
We apply and benchmark our technique in the context of \emph{non-rigid shape correspondence estimation} \cite{van2011survey}.
The computation of such point-to-point (or shape) correspondences is of interest for a variety of downstream shape analysis and processing tasks (e.g. shape interpolation, texture transfer, etc.).
The inference of these correspondences, however, is a challenging task and topic of ongoing investigation.
Our experiments in this context reveal that the preprocessing efforts can indeed be cut down significantly by our approach without sacrificing precision. In certain scenarios, as hypothesized, precision can even be increased relative to previous resampling-based techniques.
\paragraph{\textbf{Contribution}} In this work we propose and investigate a novel form of using either fully-connected layers or LSTMs (Hochreiter and Schmidhuber~\cite{hochreiter1997long}) for point-to-point correspondence learning on manifold 3D meshes.
By serializing the local neighborhood of vertices we are able to encode relevant information in a straightforward manner and with very little preprocessing.
We experimentally analyze the practical behavior and find that our approach achieves competitive results and outperforms a number of current methods in the task of shape correspondence prediction.
\section{Related Work}
Several data- and model-driven approaches for finding correspondences between shapes have been proposed in previous works.
\paragraph{\textbf{Functional Maps}}
Ovsjanikov et al.~\cite{ovsjanikov2012functional} approach the problem of finding point-to-point correspondences by formulating a function correspondence problem.
They introduce functional maps as a compact representation that can be used for point-to-point maps. Various (model- and data-driven) improvements have been suggested \cite{kovnatsky2013coupled,pokrass2013sparse,huang2014functional,eynard2015multimodal,eynard2016coupled,rodola2017partial,nogneng2017informative,nogneng2018improved,Gehre:2018:InteractiveFunctionalMaps}.
Most closely related to our approach, Litany et al.~\cite{litany2017deep} use deep metric learning to optimize input descriptors for the functional maps framework.
However, point-to-point correspondence inference in all cases requires the computation of a functional map for each pair of shapes. This possibly costly computation can be avoided with our approach. Once trained, our model can be applied directly for inference.
\paragraph{\textbf{Generalized CNNs for 3D Meshes}}
Several data-driven methods that do not rely on functional maps were proposed in recent years.
Masci et al.~\cite{masci2015geodesic} generalize convolution operations in modern deep learning architectures to non-Euclidean domains.
To this end they define geodesic disks (patches) around each vertex. Based on a local polar coordinate system the patches can be resampled with a fixed number and fixed pattern of samples (cf.\ Figure \ref{fig:patch_geod}). This predefined sampling pattern allows to construct a convolution operation on these patches by computing weighted sums of features at sample positions.
In order to transfer the information (i.e. descriptors) available discretely at the vertices to the continuous setting of the geodesic disks for the purpose of resampling, they are blended by means of appropriate kernels.
Boscaini et al.~\cite{boscaini2016learning} propose to use anisotropic kernels in this context, while aligning the local coordinate systems with the principal curvature directions.
Monti et al.~\cite{monti2017geometric} generalize the construction of these blending kernels to Gaussian Mixture Models, which avoids the hand-crafting of kernels in favor of learning them.
Ezuz et al.~\cite{ezuz2017gwcnn} and Maron et al.~\cite{maron2017convolutional} both propose forms of global (instead of local patch-wise) structured resampling of the surface,
which can then be used as input to well-known CNN architectures used in computer vision.
Similar in spirit to our work is the method introduced by Kostrikov et al.~\cite{kostrikov2017surface}. They apply Graph Neural Networks (cf.~\cite{scarselli2009graph,defferrard2016convolutional,niepert2016learning}) in the domain of 3D meshes.
A key difference is that their network's layers see neighborhood information in reduced blended form (via Laplace or Dirac operators) rather than natively like our approach.
In comparison to these approaches we require very little preprocessing, no heavy online computation, and no resampling. Per-vertex descriptors are exploited directly rather than taking blended versions of them as input.
\section{Resampling-free Neighborhood Encoding}
\begin{figure}[tb]
\centering
\subfloat[][]{
\begin{overpic}[width=0.4\textwidth]
{figures/mesh_patch_geod_2}
\put(80,75){\color[RGB]{227,0,102}$(r,\theta)$}
\end{overpic}
\label{fig:patch_geod}
}
\subfloat[][]{
\begin{overpic}[width=0.4\textwidth]
{figures/mesh_patch_spiral_2}
\put(52,50){$a$}
\put(70,35){$b$}
\put(70,60){$g$}
\put(52,80){$f$}
\put(25,60){$e$}
\put(32,32){$d$}
\put(52,22){$c$}
\end{overpic}
\label{fig:patch_mesh}
}
\caption{The black graph represents a patch of a triangle mesh.
(a) For generalized CNNs on 3D meshes~\cite{masci2015geodesic,boscaini2016learning,monti2017geometric}, we would have to compute a blended $\mathrm{f}(r,\theta)$ for each node of the magenta polar grid in order to provide a fixed number and pattern of samples for a convolution kernel.
(b) Instead, we enumerate the neighborhood vertices of a center vertex $a$ by following a spiral pattern (magenta). For a given feature $\mathrm{f}(\cdot)$ we encode the local neighborhood information feeding $[\mathrm{f}(a),\allowbreak\mathrm{f}(b),\allowbreak\mathrm{f}(c),\allowbreak\mathrm{f}(d),\allowbreak\mathrm{f}(e),\allowbreak\mathrm{f}(f),\allowbreak\mathrm{f}(g), \ldots]$ into a LSTM Cell.
}
\label{fig:patch}
\end{figure}
We assume that the input domain is represented as a manifold triangle mesh $\mathcal{M}$.
Some form of input data (e.g. positions, normals, or geometry descriptors) is specified or can be computed at the vertices of $\mathcal{M}$. We denote the information (\emph{feature}) at a vertex $v$ by $\allowbreak\mathrm{f}(v)$.
As in previous work \cite{masci2015geodesic,boscaini2016learning,monti2017geometric}, for the task of correspondence estimation, we would like to collect this information $\mathrm{f}$ from a local neighborhood around a vertex $a$. As mentioned above, we intend to encode this relevant information in a very direct manner, essentially by a notion of serialization of the per-vertex features $\mathrm{f}$ in local neighborhoods, without any alterations.
\subsection{Spiral Operator}
To this end we make the observation that, given a center vertex, the surrounding vertices can quite naturally be enumerated by intuitively following a spiral, as illustrated in Figure \ref{fig:patch_mesh}. The only degrees of freedom are the orientation (clockwise or counter-clockwise) and the choice of 1-ring vertex marking the spiral's starting direction. We fix the orientation to clockwise here. The choice of starting direction is arbitrary, and a different sequence of vertices will be produced by the spiral operator depending on this choice. This rotational ambiguity is a common issue in this context, and has been dealt with, for instance, by max-pooling over multiple choices \cite{masci2015geodesic}, or by making the choice based on additional, e.g.\ extrinsic, information \cite{boscaini2016learning}.
We avoid this by instead making a random choice in each iteration during training, enabling the network to learn to be robust against this ambiguity, assuming a sufficient number of parameters in the network.
Given a starting direction (i.e. a chosen 1-ring vertex), the spiral operator produces a sequence enumerating the center vertex, followed by the 1-ring vertices, followed by the 2-ring vertices, and so forth. Thus,
for a given $k$, it is possible to trace the spiral until we have enumerated all vertices up to and including the $k$-ring.
In Figure~\ref{fig:patch_mesh} this is illustrated for the case $k=2$, where the sequence reads $[a,b,c,d,e,f,g,\ldots]$. Alternatively, for a given $N$, we can of course trace until we have enumerated exactly $N$ vertices, thereby producing fixed length sequences -- in contrast to the variable length sequences up to ring~$k$.
While the definition and practical enumeration of a spiral's vertices is really simple locally, some care must be taken to support the general setting, in particular with large $k$ or large $N$ (when $k$-rings are not necessarily simple loops anymore) or on meshes with boundary (where $k$-rings can be partial, maybe consisting of multiple components). The following concise definition of the spiral operator handles also such cases.
Let $k$-ring and $k$-disk be defined as follows:
\begin{align*}
0\text{-ring}(v) &= \{v\}, \\
(k\!+\!1)\text{-ring}(v) &= N(k\text{-ring}(v)) \,\backslash\, k\text{-disk}(v),\\
k\text{-disk}(v) &= \cup_{i = 0 \dots k}\, i\text{-ring}(v),
\end{align*}
where $N(V)$ is the set of all vertices adjacent to any vertex in set $V$.
The spiral$(v, k)$ is defined simply as the concatenation of the \emph{ordered} rings:
\begin{align*}
\text{spiral}(v, k) &= (0\text{-ring}(v) \,\dots\, k\text{-ring}(v)).
\end{align*}
The fixed-length spiral$(v,N)$ is obtained by truncation to a total of $N$ vertices.
The required order $<$ on the vertices of a $k$-ring is defined as follows:
The 1-ring vertices are ordered clockwise, starting at a random position. The
ordering of the $(k\!+\!1)$-ring vertices is induced by their $k$-ring neighbors in the
sense that vertices $v_1$ and $v_2$ in the $(k\!+\!1)$-ring being adjacent to a
common vertex $v^{*}$ in the $k$-ring are ordered clockwise around $v^{*}$,
while vertices $v_1$ and $v_2$ having no common $k$-ring neighbor are sorted
in the same order as (any of) their $k$-ring neighbors.
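The ring-based construction above can be illustrated with a short sketch. The adjacency map, the breadth-first enumeration, and the use of \texttt{sorted} as a stand-in for the clockwise ordering are illustrative assumptions; a real implementation would order each ring clockwise as described in the text.

```python
from typing import Dict, List, Set

def rings(v: int, k: int, adj: Dict[int, Set[int]]) -> List[List[int]]:
    """Enumerate the 0-ring .. k-ring of center vertex v following
    (k+1)-ring(v) = N(k-ring(v)) \\ k-disk(v)."""
    ring = [v]                      # 0-ring
    disk = {v}                      # 0-disk
    out = [ring]
    for _ in range(k):
        nxt = set()
        for u in ring:
            nxt |= adj[u]           # N(k-ring)
        ring = sorted(nxt - disk)   # placeholder order; the actual
                                    # operator orders clockwise
        disk |= set(ring)           # grow the k-disk
        out.append(ring)
        if not ring:                # boundary meshes: rings may vanish
            break
    return out

def spiral(v: int, k: int, adj: Dict[int, Set[int]]) -> List[int]:
    """spiral(v, k): concatenation of the ordered rings."""
    return [u for r in rings(v, k, adj) for u in r]
```

The fixed-length variant spiral$(v,N)$ is then just `spiral(v, k, adj)[:N]` for a sufficiently large `k`.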
\subsection{Learning}
With the (either variable length or fixed length) vertex sequence $[a, \allowbreak b,\allowbreak c,\allowbreak d,\allowbreak e,\allowbreak f,\allowbreak g, \dots]$ produced for a given center vertex, one easily serializes the neighborhood features as the sequence $[\mathrm{f}(a),\allowbreak\mathrm{f}(b),\allowbreak\mathrm{f}(c),\allowbreak\mathrm{f}(d),\allowbreak\mathrm{f}(e),\allowbreak\mathrm{f}(f),\allowbreak\mathrm{f}(g), ...]$.
For the purpose of correspondence estimation our goal is to learn a compact high-level representation of these sequences. This can be done in a straightforward and intuitive way using recurrent neural networks. More specifically, we feed our vertex sequences into an LSTM cell as proposed by Hochreiter and Schmidhuber \cite{hochreiter1997long} and use the last cell output as representation. This representation is thus computed using the following equations:
\begin{align*}
f_t &= \sigma(W_f \cdot [x_t,h_{t-1}] + b_f), \\
i_t &= \sigma(W_i \cdot [x_t,h_{t-1}] + b_i), \\
o_t &= \sigma(W_o \cdot [x_t,h_{t-1}] + b_o), \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tanh(W_c \cdot [x_t,h_{t-1}] + b_c), \\
h_t &= o_t \odot \tanh(c_t),
\end{align*}
where the learnable parameters are the matrices $W_f,W_i,W_o,W_c$ with their respective biases $b_f,b_i,b_o,b_c$.
$[x_t,h_{t-1}]$ is the concatenation of the input $x_t$ (e.g. $\mathrm{f}(a)$) and the previous hidden state $h_{t-1}$, while $c_t$ and $h_t$ are the current cell- and hidden-state respectively.
We denote the Hadamard product as $\odot$.
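The update equations can be transcribed directly; the following NumPy sketch is a minimal illustration of one cell step and of using the last hidden state as the neighborhood representation. The dimensions, random initialization, and input sequence are hypothetical, not the trained parameters of the network.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step following the equations in the text;
    W/b hold W_f, W_i, W_o, W_c and their biases."""
    xh = np.concatenate([x_t, h_prev])      # [x_t, h_{t-1}]
    f_t = sigmoid(W["f"] @ xh + b["f"])     # forget gate
    i_t = sigmoid(W["i"] @ xh + b["i"])     # input gate
    o_t = sigmoid(W["o"] @ xh + b["o"])     # output gate
    c_t = f_t * c_prev + i_t * np.tanh(W["c"] @ xh + b["c"])
    h_t = o_t * np.tanh(c_t)                # '*' is the Hadamard product
    return h_t, c_t

# Encoding a serialized neighborhood: feed the features in order and
# keep the last hidden state as the representation.
rng = np.random.default_rng(0)
d_in, d_h = 4, 3                            # illustrative sizes
W = {k: rng.normal(size=(d_h, d_in + d_h)) for k in "fioc"}
b = {k: np.zeros(d_h) for k in "fioc"}
h, c = np.zeros(d_h), np.zeros(d_h)
for x in rng.normal(size=(5, d_in)):        # sequence [f(a), f(b), ...]
    h, c = lstm_step(x, h, c, W, b)
representation = h
```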
This generation of a representation of the local neighborhood of a vertex via an LSTM cell is, in an abstract sense, comparable to the generalized convolution operation of previous patch-based approaches. However, the resampling of neighborhoods and computation of blended features $\mathrm{f}(r,\theta)$ for each sample $(r,\theta)$ (see Figure~\ref{fig:patch_geod}) is avoided by our approach.
Here $r$ and $\theta$ are geodesic polar coordinates of some local coordinate system located at each center vertex.
$\mathrm{f}(r,\theta)$ is then computed based on a weighted combination of $\mathrm{f}$ at nearby vertices (e.g.\ $\mathrm{f}(r,\theta)=w_c \mathrm{f}(c) + w_d \mathrm{f}(d) + \cdots$). Depending on the nature of $\mathrm{f}$ this linear blending can be lossy.
For the case of a fixed length serialization, the use of an RNN supporting variable length input is not necessary. A fully-connected layer (combined with some non-linearity) can be used instead.
Naturally, we apply these neighborhood encoding operations repeatedly in multiple layers in a neural network to facilitate the mapping of input features to a higher level feature representation. This is detailed in the following section.
\paragraph{\textbf{Tessellation Dependence}}
Our simple method of encoding the neighborhood is obviously not independent of the tessellation of the input.
By augmenting the features $\mathrm{f}$ with metric information (i.e.\ by appending length and angle information), we can mitigate this and essentially enable the network to \emph{learn} to be independent. In Section \ref{sec:rem} we investigate the effects of this.
Concretely, we concatenate to the input feature $\mathrm{f}(c)$ the distance of the current vertex $c$ to the center vertex $a$
as well as the angle at $a$ between the previous vertex $b$ and $c$.
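Assuming explicit 3D vertex positions are available, the augmented feature could be assembled as follows; `pos`, the vertex labels, and the input feature vector are illustrative, not part of the original implementation.

```python
import numpy as np

def augmented_feature(f_c, pos, a, b, c):
    """Concatenate to f(c) the distance |c - a| to the center vertex a
    and the angle at a between the previous vertex b and c."""
    u = pos[b] - pos[a]
    v = pos[c] - pos[a]
    dist = np.linalg.norm(v)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * dist + 1e-12)
    angle = np.arccos(np.clip(cosang, -1.0, 1.0))
    return np.concatenate([f_c, [dist, angle]])
```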
\subsection{Architecture Details}
To evaluate and compare our proposed methods (with variable or fixed length sequences) in the context of shape correspondence estimation, we construct our network architectures in a manner similar to the GCNN3 model proposed by Masci et al.~\cite{masci2015geodesic}. We replace the convolution layers in GCNN3 by the ones presented above, as detailed below.
For the sake of comparability, we use the SHOT descriptor proposed by Salti et al.~\cite{salti2014shot} with 544 dimensions and default parameter settings computed at each vertex as input, following \cite{boscaini2016learning,monti2017geometric}.
The original GCNN3 \cite{masci2015geodesic} network is constructed as FC16 + GC32 + GC64 + GC128 + FC256 + FC6890. FC$x$ refers to a fully connected layer with output size $x$, which is applied to each vertex separately. GC$x$ is the geodesic convolution operation followed by angular max-pooling, producing $x$-dimensional feature vectors for every vertex.
\paragraph{\textbf{LSTM-NET}} Our network (LSTM-NET) for sequences with varying length replaces the GC layers and is constructed as FC16 + LSTM150 + LSTM200 + LSTM250 + FC256 + FC6890. LSTM$x$ is the application of an LSTM cell to a sequence consisting of the input vertex and its neighborhood. In this manner we compute a new feature vector with dimensionality $x$ (encoding neighborhood information) for every vertex, similar to a convolution operation.
\paragraph{\textbf{FCS-NET}} For fixed-length sequences we make use of a network (FCS-NET) constructed as FC16 + FCS100 + FCS150 + FCS200 + FC256 + FC6890.
FCS$x$ refers to a fully-connected layer, which takes the concatenated features of a sequence as input and produces an $x$-dimensional output for every vertex, analogously to the LSTM$x$ operation above.
We apply ReLU~\cite{nair2010rectified} to all layer outputs except for the output of the final layer to which we apply softmax. As regularization we apply dropout~\cite{srivastava2014dropout} with $p=0.3$ after FC16 and FC256. For fair comparison, the layers of our LSTM-NET and FCS-NET were chosen such that the total number of learnable parameters is roughly equal to that of GCNN3 (cf.\ Table~\ref{tab:params}). Our networks are implemented with TensorFlow~\cite{tensorflow2015-whitepaper}.
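An FCS$x$ layer can be sketched as one shared fully-connected layer applied to the flattened sequence features of each vertex; the shapes and the ReLU placement below are a simplified stand-in for the actual implementation, not its exact architecture.

```python
import numpy as np

def fcs_layer(seq_feats, W, b):
    """FCS-x: flatten the N serialized neighborhood features of each
    vertex and apply one shared fully-connected layer with ReLU.
    seq_feats: (num_vertices, N, d_in); W: (x, N*d_in); b: (x,)."""
    V, N, d = seq_feats.shape
    flat = seq_feats.reshape(V, N * d)             # concatenate the sequence
    return np.maximum(W @ flat.T + b[:, None], 0.0).T   # (V, x)
```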
\begin{table}[tb]
\caption{Number of parameters used in the different network architectures. FCS-NET (20) refers to FCS-NET applied to sequences with length 20, while GCNN3 is our implementation of GCNN3 \cite{masci2015geodesic} with the SHOT descriptor\label{tab:params}}
\centering
\begin{tabular}{ | c | c | }
\hline
Network & Number of Parameters\\
\hline
GCNN3 (SHOT) & 2,672,634 \\
LSTM-NET & 2,675,706 \\
FCS-NET (20) & 2,763,356 \\
\hline
\end{tabular}
\end{table}
\section{Experiments}
\begin{figure}[b!]
\centering
\subfloat[]{
\includegraphics[width=0.5\textwidth]{figures/orig_comp}
}
\subfloat[]{
\includegraphics[width=0.5\textwidth]{figures/rem_comp}
}
\caption{Here the percentage of correct point-to-point correspondence predictions included in varying geodesic radii is shown.
(a) shows a comparison of our approaches (FCS-NET, LSTM-NET) on sequences of length N=30 to current approaches. Dashed lines refer to results reported in previous work. For GCNN3 \cite{masci2015geodesic} we compare against the original version that uses the GEOVEC descriptor (dashed) as well as our implementation of GCNN3 (black), which takes the more advanced SHOT descriptor as input. ACNN \cite{boscaini2016learning} shows the results after a correspondence map refinement step. For the sake of fair comparison we show the raw (w/o refinement) performance of MoNet \cite{monti2017geometric}, as we do not perform any refinement for the output of FCS- and LSTM-NET either. (b) visualizes the results on the remeshed FAUST dataset (cf.\ Sec.~\ref{sec:rem}). As expected, the addition of relative angles and distances (++) is beneficial.\label{fig:quantitative}}
\end{figure}
\begin{figure}[tb]
\centering
\begin{minipage}{0.49\textwidth}
\centering
\subfloat[]{
\includegraphics[width=\textwidth]{figures/orig_fc}
}\\
\subfloat[]{
\includegraphics[width=\textwidth]{figures/orig_lstm}
}
\end{minipage}
\begin{minipage}{0.49\textwidth}
\centering
\subfloat[]{
\includegraphics[width=\textwidth]{figures/rem_fc}
}\\
\subfloat[]{
\includegraphics[width=\textwidth]{figures/rem_lstm}
}
\end{minipage}
\caption{Here the percentage of correct point-to-point correspondence predictions included in varying geodesic radii is shown.
(a-b) show the effect of different sequence lengths (N=15,20,30) for the FAUST dataset. Even with relatively short sequences (15) we achieve competitive results. (c-d) visualize the results on the remeshed FAUST dataset. For comparison we also show the performance of the GCNN3~\cite{masci2015geodesic} network with the SHOT descriptor. (++) denotes the usage of additional metric information.\label{fig:quantitative_2}}
\end{figure}
For our experiments we used the FAUST dataset (consisting of 100 shapes)~\cite{Bogo:CVPR:2014}. This allows for comparisons to related previous methods, which have commonly been evaluated on this dataset. Following common procedure, for training we used the first 80 shapes (10 of which were used for validation). All experiment results were computed on the last 20 shapes (our test set).
We optimized all networks with Adam~\cite{kingma2014adam} ($lr = 0.001$, $\beta_1 = 0.9$, $\beta_2=0.999$), where each batch consisted of the vertices of one mesh.
In order to evaluate the performance of our LSTM-NET we restrict ourselves to sequences of fixed length as input (even though it would be capable of dealing with variable length input). This is because the mesh connectivity is the same over all meshes of the dataset. For varying length sequences (e.g. the 1- and 2-ring of each vertex) the network would potentially be able to learn the valence distribution and use connectivity information as an (unfair) prediction help.
Following Kim et al.~\cite{kim2011blended} we compute point-to-point correspondences and plot the percentage of correct correspondences found within given geodesic radii. For the evaluation no symmetry information is taken into account. We compare to the results from~\cite{masci2015geodesic,boscaini2016learning,monti2017geometric}. In addition we also implemented GCNN3 (using the SHOT instead of the GEOVEC descriptor as input) after Masci et al.~\cite{masci2015geodesic} and evaluated the method in our setting. We used the parameters and loss proposed in the original paper.
As shown in Figure~\ref{fig:quantitative} (a) our method outperforms current patch-based approaches with both LSTM-NET and FCS-NET for a sequence length of 30. Note that, by contrast, the average number of interpolated vertices in a patch for GCNN3 is 80. Furthermore, we do not perform any post-processing or refinement on the network predictions. An evaluation of the effect of different sequence lengths is visualized in Figure~\ref{fig:quantitative_2} (a-b). Even with shorter sequence lengths (15) our method achieves competitive results.
Qualitative results are visualized in Figure~\ref{fig:orig_qual}. We show the geodesic distance to the ground truth target vertices on four shapes from the test set. Correspondence errors of relative geodesic distance $>0.2$ are clamped for an informative color coding.
\subsection{Tessellation Dependence}
\label{sec:rem}
\begin{figure}[tb]
\centering
\includegraphics[width=0.6\textwidth]{figures/orig_vs_rem_white}
\caption{Left: triangulation of a shape from the original FAUST dataset. Right: independently remeshed version.}
\label{fig:orig_vs_rem}
\end{figure}
\begin{figure}[tb]
\centering
\subfloat[]{
\includegraphics[width=0.5\textwidth]{figures/100_dense}
}
\subfloat[]{
\includegraphics[width=0.5\textwidth]{figures/100_rnn}
}
\caption{(a-b) show the robustness of our approach to random rotations of the spirals. We perform 100 inference runs on the test set of the remeshed FAUST dataset with varying random rotations. The 100 different resulting curves plotted here are not distinguishable due to the robustness of our trained networks.\label{fig:100}}
\end{figure}
\begin{figure}[tb]
\centering
\begin{overpic}[width=\textwidth]
{figures/orig_qualitative}
\put(39.75,97){0.0}
\put(55,97){0.2}
\put(25,94){FCS-NET (30)}
\put(74,94){LSTM-NET (30)}
\put(25,61){FCS-NET (15)}
\put(74,61){LSTM-NET (15)}
\put(25,27){GCNN3 (GEOVEC)}
\put(74,27){GCNN3 (SHOT)}
\end{overpic}
\caption{Geodesic error for 4 shapes from the test set of the FAUST dataset.\label{fig:orig_qual}}
\end{figure}
\begin{figure}[tb]
\centering
\begin{overpic}[width=\textwidth]
{figures/rem_qualitative}
\put(39.75,97){0.0}
\put(55,97){0.2}
\put(25,94){LSTM-NET (30++)}
\put(74,94){LSTM-NET (30)}
\put(25,61){LSTM-NET (15++)}
\put(74,61){LSTM-NET (15)}
\put(25,27){GCNN3 (SHOT)}
\put(74,27){FCS-NET (30++)}
\end{overpic}
\caption{Geodesic error for 4 shapes from the test set of the remeshed FAUST dataset.
\label{fig:rem_qual}}
\end{figure}
An important but often overlooked detail is the fact that the shapes in the FAUST dataset are meshed compatibly, i.e. the mesh connectivity is identical across shapes, and identical vertices are at corresponding points. Unless a correspondence estimation method is truly tessellation-oblivious, this naturally has the potential to incur a beneficial bias in this artificial benchmark, as in any realistic correspondence estimation application scenario the tessellation will of course be incompatible. We thus repeat our experiments with a remeshed version of the FAUST dataset (see Figure~\ref{fig:orig_vs_rem}), where each shape was remeshed individually and incompatibly.
Quantitative results are shown in Figure~\ref{fig:quantitative} (b). Here (++) denotes the additional relative information that we concatenate to the SHOT descriptor vectors. On this more challenging dataset we likewise achieve competitive results.
Especially the additional information (++) enables our networks to encode less tessellation-dependent representations of neighborhoods for better performance.
The effect of different sequence lengths is shown for this dataset in Figure~\ref{fig:quantitative_2} (c-d). For the sake of comparison to the performance of FCS-NET we also restrict LSTM-NET to sequences of fixed length.
See Figure~\ref{fig:rem_qual} for qualitative results.
Furthermore, we test the robustness of our network predictions to random starting points after the center vertex in our sequences (random rotations of the spiral). To this end we perform 100 predictions with different random rotations on the remeshed FAUST dataset with both FCS-NET and LSTM-NET. As shown in Figure~\ref{fig:100} our networks are highly robust to these random orientations, such that the curves of separate predictions are not discernible.
\section{Conclusion}
In this paper we presented a simple resampling-free input encoding strategy for local neighborhoods in 3D surface meshes. Previous approaches rely on forms of resampling of input features in neighborhood patches, which incurs additional computational and implementational costs and can have negative effects on input data fidelity.
Our experiments show that our approach, despite its simple and efficient nature, is able to achieve competitive results for the challenging task of shape correspondence estimation.
\paragraph{\textbf{Limitations and Future Work}}
Although the introduction of metric information aims to make our method less sensitive to tessellation, it is nevertheless affected by it; this, however, is true to some extent in any practical setting for previous patch-based approaches as well. The design of truly tessellation-oblivious encoding strategies is a relevant challenge for future work, as it would relieve the training process from having to \emph{learn} tessellation independence, as required for optimal performance.
Furthermore, high resolution meshes require longer sequences to encode relevant neighborhood information. In the case of FCS-NET this also means an increase in the number of parameters required to learn, which can lead to memory issues. An interesting avenue for future work thus is the investigation of sub-sampled (but not resampled) serialization.
A related issue is that the training of RNNs tends to be slower than that of CNNs.
A possible solution to this problem could be the application of 1D convolutions instead of LSTM cells or fully connected layers.
An investigation into feature learning, given only raw input data (e.g.\ lengths, angles, or positions of mesh elements) instead of preprocessed information like the SHOT descriptor will also be of interest.
\subsection*{Acknowledgements}
The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement n$^\circ$ [340884].
We would like to thank the authors of related work \cite{masci2015geodesic,boscaini2016learning} for making their implementations available, as well as the reviewers for their insightful comments.
\section{Introduction}\label{introduction}
Helioseismology is the branch of heliophysics that uses measurements of the surface oscillations of the Sun to infer the properties of the solar interior. Global helioseismology, the study of the eigenfrequencies of the resonant oscillatory modes, has shed some light on the solar internal structure and its rotation \citep{Christensen-Dalsgaard2002}. The techniques of global helioseismology are, unfortunately, limited to the inference of the global properties of the Sun. As a natural evolution of global helioseismology, local helioseismology \citep{Braun+etal1987,Hill1988,Braun+etal1992,Duvall+etal1993} was developed to infer the properties of local regions of the solar interior or its surface. This is accomplished by studying the whole wave field, instead of just the eigenfrequencies \cite[see][for a review]{Gizon+Birch2005}.
One of the main techniques under the local helioseismology umbrella is helioseismic holography \citep{Lindsey+Braun1990}. Helioseismic holography applies helioseismic observations of the surface to a solar interior model with no local structure in time reverse, and then it samples the results of the model at various depths. This technique relies on the coherence of the waves on smooth acoustic media to detect structures that disturb their path. Phase-sensitive holography, the technique used in this work, is a specific version of holography that uses the measurement of time-travel perturbations to study active regions that are not directly visible. A more detailed description can be found in \cite{Lindsey+Braun2000}.
Phase-sensitive holography has been applied to the far-side imaging problem, resulting in a technique that is capable of detecting active regions in the non-visible solar hemisphere from the analysis of the near-side wave field \citep{Lindsey+Braun2000b, Braun+Lindsey2001}. Using the seismic data of a region in the visible hemisphere (the so-called ``pupil''), the technique infers the wave field in a region of the far side or ``focus point'' \citep[see][for further details]{Lindsey+Braun2017}. The success of this technique is made possible by the phase shift that active regions introduce between ingoing and outgoing waves from a range of frequencies. This phase shift is fundamentally caused by the Wilson depression \citep{Lindsey+etal2010, Felipe+etall2017}.
Waves that arrive at regions on the far side where a Wilson depression is present are reflected into the Sun at a deeper layer than those that arrive at the quiet Sun's surface. This shortening of the wave path imprints a negative phase shift upon the arrival of the waves to the near side. The disturbance in the travel time can then be associated with the presence of magnetic activity. We point out that time-distance helioseismology is also being used to perform far-side activity detection \citep{Duvall+Kosovichev2001,Zhao2007, Ilonidis+etal2009}.
Although the results are impressive, far-side helioseismic methods are only able to detect strong sunspots due to the low signal-to-noise ratio of the seismic signal from most of the active regions. Smaller and fainter activity is left undetected \citep{GonzalezHernandez+etal2007,Liewer+etal2014, Liewer+etal2017}.
To bypass the limitations of these techniques, \cite{Felipe+Asensio2019} developed the deep neural network FarNet, a U-net \citep{Ronneberger+etal2015} that improves the sensitivity of phase-sensitive holography, opening the way for the detection of small active regions on the far side. Recently, \cite{Broock+etal2021} confirmed FarNet's reliability and its superior performance to that of the standard method. The goal of this paper is to present FarNet-II, a new architecture that further improves FarNet predictions by implementing convolutional long short-term memory (LSTM) modules and attention mechanisms into a U-net architecture.
The LSTM architectures \citep{LSTM} are recursive neural networks that were developed as an improvement over plain recurrent neural networks \citep[RNNs;][]{Rumelhart1986LearningIR}. Recurrent neural networks are architectures used to compute predictions over time series of data. They are especially useful for cases where temporal coherence is important (i.e., for forecasting problems). These types of networks take inputs recursively and use weights extracted from operations with previous inputs to compute the next outputs. Since RNNs are especially susceptible to the vanishing gradient problem, particularly when used over long time series \citep{HS}, LSTMs were proposed.
The second main novelty of FarNet-II is the use of attention mechanisms. They are a variety of algorithms that focus on weighting the importance of different parts of the data running through a neural network, and on using their relative importance to optimize the performance of the network.
These tools started by being applied to translation neural networks \citep{bahdanau2016neural, vaswani2017attention}, but their range of applicability has since become much wider. One of the fields in which these mechanisms have been used is computer vision, improving models for image classification \citep{hu2019local, hu2019squeezeandexcitation} and semantic segmentation \citep{fu2019dual, li2019expectationmaximization}.
The paper is organized as follows: Sect. \ref{Det} explains methods of detecting far-side activity, including FarNet-II; Sect. \ref{R} presents the obtained results; and those results are discussed in Sect. \ref{DC}, along with the conclusions of the paper.
\section{Detection of far-side activity}\label{Det}
\subsection{Phase-sensitive seismic method}
The detection of activity on the far side of the Sun is founded on the analysis of far-side phase-shift maps. Helioseismic holography is employed to compute these maps from continuous near-side Doppler data. The latter are acquired from synoptic observations \citep[the Global Oscillation Network Group, GONG,][]{Harvey+etal1996} or space-based observatories \citep[Helioseismic and Magnetic Imager, HMI,][]{Schou+etal2012}. In this work, we focus on HMI far-side seismic maps, which are regularly published by the Joint Science Operation Center (JSOC)\footnote{\url{http://jsoc.stanford.edu/ajax/lookdata.html}}. Two different data products with a cadence of 12 hours are available: one that uses 24 hours of Doppler data for its generation and another one that uses five days of Doppler data instead.
The detection of active regions on the far side is routinely carried out by Stanford's Strong Active Region Discriminator (SARD)\footnote{\url{http://jsoc.stanford.edu/data/far-side/explanation.pdf}}. This process uses the phase-shift maps computed with five days of Doppler data from HMI, and searches for those regions with a phase-shift lower than $-0.085$ rad. Then, the integral over the region's area, in millionths of a hemisphere ($\mu$Hem), is calculated. This magnitude is the so-called seismic strength ($S$). The presence of a seismically detected far-side active region is claimed for regions with $S>400$ $\mu$Hem\,rad \citep{Liewer+etal2017}.
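The SARD criterion can be sketched numerically: threshold the phase-shift map at $-0.085$ rad, integrate the absolute phase shift over each connected region's area in $\mu$Hem, and flag regions with $S > 400$ $\mu$Hem\,rad. The thresholds follow the text, while the flood-fill labeling, the per-pixel area map, and the 4-connectivity are illustrative assumptions about an implementation.

```python
import numpy as np

def seismic_strengths(phase_map, pixel_area, threshold=-0.085):
    """Integrate |phase shift| over each 4-connected region below the
    threshold; pixel_area holds per-pixel areas in micro-hemispheres."""
    mask = phase_map < threshold
    seen = np.zeros_like(mask)
    strengths = []
    rows, cols = mask.shape
    for r0 in range(rows):
        for c0 in range(cols):
            if mask[r0, c0] and not seen[r0, c0]:
                s, stack = 0.0, [(r0, c0)]
                seen[r0, c0] = True
                while stack:                    # flood-fill one region
                    r, c = stack.pop()
                    s += -phase_map[r, c] * pixel_area[r, c]
                    for rr, cc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
                        if 0 <= rr < rows and 0 <= cc < cols \
                           and mask[rr, cc] and not seen[rr, cc]:
                            seen[rr, cc] = True
                            stack.append((rr, cc))
                strengths.append(s)             # uHem * rad
    return strengths

# A region counts as a seismic detection when S > 400 uHem rad.
```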
\subsection{FarNet}
FarNet was the first successful attempt at using deep learning to improve the interpretation of far-side phase-shift maps for activity detection. Activity detection can be seen as a binary semantic segmentation task, in which every pixel of the output needs to be classified as active or non-active. It is usually achieved by using deep convolutional neural networks (DCNNs). One of the architectures that is more commonly employed to pursue this task is U-net. This architecture is based on an encoder-decoder structure that uses tools such as convolutional layers, batch normalization \citep[BN;][]{Ioffe+etal2015}, and rectified linear unit activation functions \citep[ReLU;][]{Nair+Hinton2010}. The encoder reduces the spatial size of the images in successive steps via max-pooling \citep{Goodfellow-et-al-2016}, while the number of channels is increased. This is done to extract and combine spatial information at many scales.
On the decoder, the opposite happens, and the channel dimension is reduced while the spatial size is gradually increased to the spatial size of the input, via interpolation. U-net uses skip connections between the encoder and the decoder to better utilize multiscale information, and improve performance during training and evaluation.
\begin{figure*}[!tbp]
\centering
\includegraphics[width=8.3cm]{whole_vertical_fin.pdf}
\centering
\caption{\textbf{General representation of FarNet-II.} Original dimensions are batch (B), sequence (S), height (H), and width (W). Channel dimension (C) is additionally used in the following steps. Batch and sequence dimensions are joined together for every operation in the network, except for the application of the ConvLSTM modules. The red arrows symbolize simple convolutions, the light blue arrows symbolize the downward operation that reduces the spacial size of the images while increasing the number of channels, the violet arrows symbolize the bidirectional convolutional LSTM modules, and the dark blue arrows symbolize attention mechanisms. $\otimes$ symbolizes a dropout of 0.5.}
\label{NN}
\end{figure*}
FarNet takes, as input, sequences of 11
phase-shift maps with a temporal cadence of 12 hours, each of them computed in a 24-hour window of Doppler data (in contrast to SARD detections, which are currently based on phase-shift maps obtained from five days of Doppler data). A region spanning 120$^\circ$ in longitude and 144$^\circ$ in latitude is included in each map. The output is a probability map of the same far-side region from the central date of the input. Its values are constrained to the $[0,1]$ range using a sigmoid function at the output of U-net.
As proposed by \cite{Felipe+Asensio2019}, and later verified by \cite{Broock+etal2021}, it is
important to post-process the output of FarNet to infer which features of the output are reliable active region detections. First, a Gaussian filter is applied over the outputs, with a full width at half maximum of 1.5 pixels. Then, regions with five contiguous pixels with a probability value higher than 0.2 are selected. Finally, for each detected region, the integrated probability ($P_{i}$) is calculated. This quantity is the integral of the probability over the regions' area, measured in deg$^2$. A region with an integrated probability of $P_{i}>100$ is taken as a reliable detection.
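As a rough sketch of this post-processing (the FWHM, probability threshold, minimum region size, and $P_i$ threshold follow the text; the separable Gaussian, the 4-connectivity, and the constant per-pixel area are simplifying assumptions):

```python
import numpy as np

FWHM = 1.5                                        # px, as in the text
SIGMA = FWHM / (2.0 * np.sqrt(2.0 * np.log(2.0)))

def gaussian_blur(img, sigma=SIGMA, radius=3):
    """Separable Gaussian smoothing via 1-D 'same' convolutions."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    k /= k.sum()
    img = np.apply_along_axis(lambda r: np.convolve(r, k, "same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, "same"), 0, img)

def label_regions(mask):
    """4-connected components of a boolean mask (lists of pixel coords)."""
    seen = np.zeros(mask.shape, bool)
    regions, (R, C) = [], mask.shape
    for r0 in range(R):
        for c0 in range(C):
            if mask[r0, c0] and not seen[r0, c0]:
                stack, pix = [(r0, c0)], []
                seen[r0, c0] = True
                while stack:
                    r, c = stack.pop()
                    pix.append((r, c))
                    for rr, cc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
                        if 0 <= rr < R and 0 <= cc < C \
                           and mask[rr, cc] and not seen[rr, cc]:
                            seen[rr, cc] = True
                            stack.append((rr, cc))
                regions.append(pix)
    return regions

def detections(prob_map, pixel_area_deg2, p_thr=0.2, min_pix=5, p_i_thr=100.0):
    """Keep regions of >= min_pix contiguous pixels above p_thr whose
    integrated probability P_i (deg^2) exceeds p_i_thr."""
    smooth = gaussian_blur(prob_map)
    kept = []
    for pix in label_regions(smooth > p_thr):
        p_i = sum(smooth[r, c] * pixel_area_deg2 for r, c in pix)
        if len(pix) >= min_pix and p_i > p_i_thr:
            kept.append(pix)
    return kept
```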
\begin{figure*}[!tbp]
\centering
\includegraphics[width=9cm]{BidirConvLSTM.pdf}
\centering
\caption{\textbf{Bidirectional ConvLSTM module.} Before the application of this module, the batch and sequence dimensions are split, and the input is duplicated but inverted in the sequence dimension. A ConvLSTM module is applied over both tensors, and the result of the application over the inverted one is re-inverted and concatenated with the result in the forward direction. The output of the module is obtained with a convolution over the concatenated tensors, which gives an output with the same dimension as the input.}
\label{fig:convlstm}
\end{figure*}
\subsection{FarNet-II}
In this paper, we develop FarNet-II, an evolution of FarNet that greatly enhances its capabilities. It maintains some structural properties of FarNet, but it introduces some improvements, such as bidirectional convolutional LSTM modules, attention mechanisms, and dropout. One of the most relevant improvements is that FarNet-II can now produce one activity prediction per input date, instead of a single prediction for the central date of the input. This is achieved through the application of bidirectional ConvLSTM modules on specific parts of the network, which exploit the time coherence both forward and backward in time.
Figure \ref{NN} shows a graphical representation of our new model. It resembles a standard
U-net architecture, but with the addition of the attention and ConvLSTM blocks in the decoder. We use, as input, sequences
of 11 phase-shift maps of spatial size $H \times W$. The encoder is applied in parallel to all the input
maps of the sequences. For this reason, we combine the batch and sequence dimensions to fully exploit the parallelization capabilities of the hardware we use for training. The encoder increases the number of channels per map from one to $C$. Spatial dimensions are reduced by the consecutive application of a max-pooling and two convolutional layers with a ReLU activation function. Once the encoder operations are fully applied, the output of the encoder is passed through the decoder. Every operation, except those of the ConvLSTM layers, is applied in parallel to all elements in the input sequences. The decoder upsamples the information in the lowest spatial scale and combines it with the information obtained from the encoder at the same spatial resolution thanks to an attention layer. This process goes on until the decoder is applied over every scale. Figure \ref{fig:convlstm} shows how the decoder uses the ConvLSTM layer, which keeps memory from the previous elements of the sequence during prediction, in a bidirectional way, keeping also memory from the next elements. We explain the details of these two innovations in the following sections. Dropout is also applied to two locations in the network, one on the encoder and the other on the decoder. The output of the network goes through a sigmoid to limit its values to the [0,1] range before exiting the U-net architecture. The described model and an example of usage can be found online\footnote{GitHub repository: \url{https://github.com/EBroock/FarNet-II}}.
\subsubsection{LSTM and convolutional LSTM}
As a recursive network, the main innovation of the LSTM \citep{LSTM} over the RNN is the presence of a memory cell, $c_{t}$, which modulates the information from previous inputs used to compute the next output. The memory cell is modified thanks to three self-parameterized gates: the input, the forget, and the output gates. Information of the input currently being processed by the layer is included in the cell if the input gate is open. Likewise, the values from the previous cell state are forgotten according to the value of the forget gate. The output gate controls if the latest cell state is propagated onto the next hidden state. The status of all gates (open or closed) is determined by sigmoid activation functions.
The LSTM modules have been used for a variety of purposes, including computer vision, with high success, but as their algorithm was not purposefully developed to tackle that task, their initial implementations did not take into account the possible existence of spatial correlations between different positions of the image. Long short-term memory was initially applied to computer vision in a fully connected manner, having to unfold the inputs into one-dimensional vectors, losing important spatial information. Architectures as convolutional layers naturally take into account these relations, especially when used over various resolutions of the inputs, as in encoder-decoder architectures.
\cite{shi2015convolutional} developed an encoder-decoder architecture for the task of predicting the future rainfall intensity in a local region over a short time period (precipitation nowcasting), in which convolutions are directly introduced as an intrinsic part of LSTM layers. Since its conception, ConvLSTM modules have been used in machine learning architectures for multiple applications, such as the detection of violence in videos \citep{HansonViolence} or predictions of urban expansion \citep{boulila2021novel}.
For our work, we use a bidirectional version of the ConvLSTM proposed by \cite{shi2015convolutional}, which consists of two ConvLSTM layers. As shown in Fig. \ref{fig:convlstm}, the first layer is applied to the sequence in the original order, while the second is applied to the reversed sequence. After reversing again the result of the application of this second layer, both sequences are concatenated and a convolutional layer reduces the output to the input size.
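As an illustrative sketch (not the actual FarNet-II implementation, which is written in PyTorch and available in the repository linked above), the bidirectional combination just described can be reproduced in NumPy with a generic recurrent map standing in for a ConvLSTM cell: the sequence is processed in both temporal directions, the backward result is re-reversed, both outputs are concatenated along the channel axis, and a 1$\times$1 convolution (a channel-mixing matrix) reduces the result to the input size.

```python
import numpy as np

def run_recurrent(seq, step):
    """Apply a recurrent map along the time axis; seq has shape (T, C, H, W)."""
    h = np.zeros_like(seq[0])
    out = []
    for x in seq:
        h = step(x, h)
        out.append(h)
    return np.stack(out)

def bidirectional(seq, step, mix):
    """Forward pass, pass over the reversed sequence, re-reverse,
    concatenate channels, and reduce with a 1x1 'conv' (channel mixing)."""
    fwd = run_recurrent(seq, step)
    bwd = run_recurrent(seq[::-1], step)[::-1]   # reverse, process, reverse back
    cat = np.concatenate([fwd, bwd], axis=1)     # shape (T, 2C, H, W)
    # a 1x1 convolution is a linear map over the channel axis
    return np.einsum('oc,tchw->tohw', mix, cat)

# toy recurrent map, a stand-in for a ConvLSTM cell
step = lambda x, h: np.tanh(x + 0.5 * h)

T, C, H, W = 11, 2, 4, 4
seq = np.random.default_rng(0).standard_normal((T, C, H, W))
mix = np.random.default_rng(1).standard_normal((C, 2 * C)) / np.sqrt(2 * C)
out = bidirectional(seq, step, mix)
```

With the backward pass included, the output for the first element of the sequence already carries information from the last input frames, which is the property exploited by FarNet-II.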
\subsubsection{Attention}
Attention mechanisms in FarNet-II are used to modulate the skip connections before using them at each step of the decoder. The module takes the tensor on the decoder prior to its application and the corresponding skip connection. Both tensors are processed through convolutions, ReLU activations, and a final sigmoid, which produces a mask. This mask is then applied to the original skip connection, which is used on the decoder in the traditional manner of U-net. For this work, we used the implementation given by \citet{Fillioux2020} of the attention module proposed by \citet{oktay2018attention}.
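The mask computation can be sketched as follows in NumPy, with the 1$\times$1 convolutions written as channel-mixing matrices; resampling between scales, batch handling, and the exact channel dimensions of the published implementation are omitted, and the names `Wg`, `Wx`, and `psi` are placeholders rather than variables of the actual code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(g, x, Wg, Wx, psi):
    """Additive attention gate in the spirit of Oktay et al. (2018).
    g: gating tensor from the decoder; x: skip connection; both (C, H, W).
    The 1x1 convolutions are written as channel-mixing matrices."""
    q = np.einsum('ic,chw->ihw', Wg, g) + np.einsum('ic,chw->ihw', Wx, x)
    q = np.maximum(q, 0.0)                            # ReLU
    mask = sigmoid(np.einsum('c,chw->hw', psi, q))    # one mask value per pixel
    return x * mask                                   # modulated skip connection

rng = np.random.default_rng(0)
C, H, W, Ci = 4, 8, 8, 6
g, x = rng.standard_normal((C, H, W)), rng.standard_normal((C, H, W))
Wg = rng.standard_normal((Ci, C))
Wx = rng.standard_normal((Ci, C))
psi = rng.standard_normal(Ci)
gated = attention_gate(g, x, Wg, Wx, psi)
```

Because the mask lies in $(0,1)$, the gate can only attenuate the skip connection pixel by pixel, never amplify it.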
\begin{figure*}
\centering
\includegraphics[width=16cm]{20131209_12_20131214_12_comp_PiS100400_20220223_2.pdf}
\caption{Comparison of an output sequence, centered on December 12, 2013, for each method. The first column shows the square root of the STEREO data used to compute the activity masks. The second column shows the activity masks. Third to fifth columns show outputs from FarNet-II, FarNet, and the phase-sensitive method, respectively, for the region corresponding to the EUV masks on the second column. In columns 2 to 5, the color gradient goes from purple for zero or values near zero to yellow for values near one or one. Outputs from FarNet are only valid on the central range of 120 degrees of longitude (vertical blue lines). Seismic strength and integrated probability were not taken into account to select the regions on the outputs from FarNet and the phase-sensitive method. Every region that passed the post-processing was included.}
\label{comp_filt}
\end{figure*}
\subsubsection{Dropout}
Dropout is a regularization method developed by \cite{dropout_sri}, whose goal is to improve the generalization of neural network capabilities, reducing overfitting to a specific training set. This is achieved by randomly ignoring some nodes on certain layers of the model during the training process. For FarNet-II, we applied dropout to 50\% of the nodes in two different stages of the data flow.
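As a minimal sketch of the mechanism, the following implements inverted dropout, the variant used by common deep learning frameworks (whether the published code relies on this exact variant is an assumption here): a fraction $p$ of units is zeroed during training and the survivors are rescaled so that the expected activation is unchanged.

```python
import numpy as np

def dropout(x, p=0.5, rng=None, train=True):
    """Inverted dropout: zero a fraction p of the units during training and
    rescale the survivors by 1/(1-p) so the expected activation is preserved."""
    if not train:
        return x
    rng = rng or np.random.default_rng()
    keep = rng.random(x.shape) >= p
    return x * keep / (1.0 - p)

rng = np.random.default_rng(0)
x = np.ones(1000)
y = dropout(x, p=0.5, rng=rng)
```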
\subsubsection{Post-processing}
As in FarNet, filtering was applied to the outputs of FarNet-II to remove small disturbances in the background that do not correspond to active regions. We applied Gaussian filtering with a full width at half maximum of 1.5 pixels, and every pixel in regions with five contiguous pixels with a probability higher than 0.2 was set to one, while the rest of the pixels were set to zero. This made the comparison with the STEREO masks more rigorous. The difference in the training metrics before and after applying the filtering is negligible.
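This post-processing can be sketched with `scipy.ndimage` as follows; the conversion from full width at half maximum to the Gaussian standard deviation is the standard one, while the exact contiguity convention (here, components of at least five pixels under 4-connectivity) is an assumption of this sketch.

```python
import numpy as np
from scipy import ndimage

def postprocess(prob_map, fwhm=1.5, thr=0.2, min_pixels=5):
    """Gaussian smoothing followed by a size-filtered binarization."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> std. deviation
    smooth = ndimage.gaussian_filter(prob_map, sigma)
    above = smooth > thr
    labels, n = ndimage.label(above)                   # 4-connected components
    sizes = ndimage.sum(above, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_pixels))
    return keep.astype(float)

out_map = np.zeros((20, 20))
out_map[8:12, 8:12] = 0.9   # extended region: should survive
out_map[2, 2] = 0.3         # isolated pixel: should be removed
binary = postprocess(out_map)
```

The isolated pixel is suppressed because the smoothing alone already pushes it below the 0.2 threshold, while the extended region both survives the threshold and exceeds the minimum size.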
\subsubsection{Extreme Ultraviolet data}
FarNet-II goes through supervised training by forcing the output to be as close to the desired
target as possible. These targets are obtained from a binarization process of 304 {\AA} Carrington maps from the Solar Museum Server of NASA \citep{Liewer+etal2017}. These maps join together images from the Extreme Ultraviolet Imager \citep[EUVI;][]{Wulser+etal2004} on board the Solar Terrestrial Relations Observatory \citep[STEREO;][]{Kaiser2004}, and from the Atmospheric Imaging Assembly \citep[AIA;][]{Lemen+etal2012} on board the Solar Dynamics Observatory \citep[SDO;][]{Pesnell+etal2012}. The training only employs the region of the maps corresponding to the far-side hemisphere, that is, the data acquired with STEREO. Precedents of extreme ultraviolet (EUV) image usage as a proxy of far-side activity can be found in \citet{Liewer+etal2012}, \citet{Liewer+etal2014}, and \citet{Zhao+etal2019}.
The process through which the EUV maps were binarized is explained in detail by \citet{Broock+etal2021}.
\subsubsection{Training}\label{train}
Dates from December 4, 2011 to August 18, 2014, were included in the training. The inputs were sequences of 11 consecutive far-side phase-shift maps, from which a region of 144$^\circ$ in latitude and 180$^\circ$ in longitude was taken, centered on the far side. As a target, we used the corresponding 11 binarized EUV masks of the same region.
The total number of available input-target pairs with good far-side coverage was 2253. We note that the size of this training set is very limited. We partially solved this issue by using data augmentation, which consisted of vertically flipping both the inputs and the targets. Due to the lack of a larger dataset, we carried out the study using a cross-validation method, dividing the training set into segments of 60 elements and choosing one of those segments as the validation set on each run. In total, 37 independent trainings were performed, each of them with 4340 input-target pairs for training and 120 input-target pairs for validation (including augmentation). We made evaluations on the validation set of each training and then averaged the resulting metrics among them.
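The fold construction can be sketched as follows (an illustrative sketch: the handling of the leftover partial segment and the augmentation step are simplified here, and the function name is a placeholder).

```python
def segment_cross_validation(n_total=2253, seg_len=60):
    """Contiguous-segment cross-validation: split the indices into blocks of
    seg_len and hold one block out per run."""
    n_segments = n_total // seg_len          # 37 full segments
    folds = []
    for k in range(n_segments):
        val = list(range(k * seg_len, (k + 1) * seg_len))
        val_set = set(val)
        train = [i for i in range(n_total) if i not in val_set]
        folds.append((train, val))
    return folds

folds = segment_cross_validation()
```

With the vertical-flip augmentation doubling each pair, the 60 held-out elements of a fold correspond to the 120 validation input-target pairs quoted above.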
The training of the model was done by minimizing the Dice loss computed between the outputs and the EUV binary masks. The Dice loss is derived from the Dice coefficient, a measure of the overlap between arrays of data with binary labels. For this specific scenario, it accounts for the accuracy of the pixel labeling and can take values from 0 (no overlap between pixel values on the output and on the associated EUV activity mask) to 1 (complete overlap of the output and the EUV activity mask). Since the values of both the outputs and the EUV masks are restricted to lie between 0 and 1, the Dice coefficient is given by:
\begin{equation}\label{eq:1}
D = \frac{2\sum_{x,y,b} i(x,y,b)\, o(x,y,b)+\epsilon}{\sum_{x,y,b} i(x,y,b)+\sum_{x,y,b} o(x,y,b)+\epsilon},
\end{equation}
where $i(x,y,b)$ and $o(x,y,b)$ are the values of the output and target images for all pixels ($x$ and $y$ coordinates) and elements of the minibatch ($b$ coordinate), while $\epsilon$ is a small quantity (0.001 in our case) that prevents the Dice coefficient from being undefined. The simplest way to use the Dice coefficient as a loss function for the training of a model is to subtract it from unity, obtaining the Dice loss:
\begin{equation}\label{eq:2}
L_{D} = 1-D.
\end{equation}
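A minimal NumPy transcription of Eqs. (\ref{eq:1}) and (\ref{eq:2}), with the same $\epsilon=0.001$ as used in the text (our actual training uses the PyTorch equivalent), reads:

```python
import numpy as np

def dice_loss(output, target, eps=1e-3):
    """Dice loss L_D = 1 - D, with D summed over all pixels and
    minibatch elements as in Eq. (1); eps matches the 0.001 in the text."""
    num = 2.0 * np.sum(output * target) + eps
    den = np.sum(output) + np.sum(target) + eps
    return 1.0 - num / den

pred = np.array([[1.0, 0.0], [0.0, 1.0]])
mask = np.array([[1.0, 0.0], [0.0, 1.0]])
```

Perfect overlap gives a loss of 0, while fully disjoint predictions give a loss close to 1.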
We implemented the model and trained it using the open-source PyTorch library \citep{Paszke+etal2019},
and optimized the Dice loss using the Adam optimizer \citep{kingma2017adam} with a learning rate of 3$\times$10$^{-4}$ during ten epochs and a batch size of 10. These ten epochs were sufficient for the Dice loss to stop decreasing on the validation data.
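The Adam update follows \citet{kingma2017adam}; a compact NumPy version of a single step with the same learning rate as in our training (applied here to a toy scalar objective, not to the actual network) is:

```python
import numpy as np

def adam_step(theta, grad, state, lr=3e-4, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update; state carries the first/second moment estimates
    and the step count."""
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)                 # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)                 # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, (m, v, t)

# minimize a toy quadratic loss (x - 3)^2 as a stand-in for the Dice loss
x, state = 0.0, (0.0, 0.0, 0)
for _ in range(20000):
    x, state = adam_step(x, 2.0 * (x - 3.0), state)
```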
\begin{figure*}
\centering
\includegraphics[width=16cm]{20120629_12_20120704_12_comp_PiS100400_20220223_2.pdf}
\caption{Same as for Fig. \ref{comp_filt}, but for data centered on July 2, 2012.}
\label{comp_filt_2}
\end{figure*}
\section{Results}\label{R}
\begin{figure*}[!t]
\centering
\begin{minipage}{0.48\textwidth}
\includegraphics[width=1\linewidth]{diceseq_bymeth_20220705_segm_region_37sets_filtradosPi0S0_sinabl.pdf}
\caption{Dice coefficient per method, as a function of the position of the images on the output sequences of 11 elements. Thresholds for reliable detection for FarNet ($P_{i} > 100$) and the phase-sensitive method ($S>400$) were not taken into account. For FarNet, every region on outputs from FarNet with more than five contiguous pixels and a probability over 0.2 was used to compute the value. The vertical bars represent the standard deviation of the mean Dice coefficient over the 37 validation sets used in the study.}\label{Fig:dice_completo_1}
\end{minipage}\hfill
\begin{minipage}{0.48\textwidth}
\includegraphics[width=1\linewidth]{diceseq_bymeth_20220705_segm_region_37sets_filtradosPi100S400_sinabl.pdf}
\caption{Dice coefficient per method, as a function of the position of the images on the output sequences of 11 elements. Only regions with $P_{i}>100$, for FarNet, and with $S>400$, for the phase-sensitive method, are taken into account. The vertical bars represent the standard deviation of the mean Dice coefficient over the 37 validation sets used in the study.}\label{Fig:dice_completo_2}
\end{minipage}
\centering
\end{figure*}
To characterize the reliability of all the methods (standard phase-sensitive holography, FarNet, and FarNet-II) and compare their performance, we analyzed the values of the Dice coefficient between the EUV binary masks and the outputs of each method for each validation set. While FarNet-II produces a prediction spanning 180$^\circ$ in longitude, FarNet infers a shorter range. For this reason, the comparison was only made on the far-side section common to every method, spanning 120$^\circ$.
In our previous paper \citep{Broock+etal2021}, we used a different metric, in which each blob detected on the output of the model was assumed to be a different object. By comparison with the STEREO activity masks, we distinguished whether it was a true or a false detection. However, FarNet-II outputs tend to be less segmented, with various blobs merged, which makes the previous method of comparison less optimal.
\subsection{Qualitative comparisons}
We start by qualitatively comparing the outputs of all methods for two sequences, shown in Figs. \ref{comp_filt} and
\ref{comp_filt_2}. Each output batch of ten sequences takes between 13 and 14 seconds to be produced by the trained network on a CPU. The first column shows the square root of the STEREO EUV maps, with the second column displaying the STEREO masks defined using our procedure. The third, fourth, and fifth columns show the results obtained by FarNet-II, FarNet, and the standard phase-sensitive helioseismic method, respectively. Each row shows the data for one element of the sequence, representing an instant in the temporal evolution of the far side for the dates of study. The solar rotation can be seen in the masks and also in the predictions. It is clear from these figures that the predictions of FarNet-II are much more accurate than those of the other methods when compared with the STEREO activity masks. FarNet-II can correctly predict small activity regions while also detecting many of the large regions. We speculate that the prediction of these regions is possible thanks to the time coherence exploited by the ConvLSTM in FarNet-II.
The results of FarNet-II displayed in Figs. \ref{comp_filt} and \ref{comp_filt_2} show an excellent prediction ability, which is in strong contrast with the poorer results from FarNet and the classical phase-sensitive helioseismic method.
\subsection{Quantitative comparisons}
The visual inspection of FarNet-II predictions demonstrates a very good prediction ability. In this section, we employ quantitative comparison methods to show that this is true for the large majority of cases in the validation sets.
This quantitative comparison is made using the Dice coefficient as a metric, which is then averaged over the validation sets of every training. We analyzed how the metric behaves for all of the considered methods. For an in-depth comparison, we checked the metrics when the thresholds in $P_{i}$ and $S$ (seismic strength) proposed for a reliable detection were considered or not.
The global results are shown in Table \ref{table:1}. The global prediction capabilities of all methods were measured with the Dice coefficient averaged over every element of the sequences and all validation sets. FarNet-II clearly shows an improved performance, with the Dice coefficient increasing by more than 0.2 points over the best-performing of the previous methods.
\begingroup
\renewcommand*{\arraystretch}{1.2}
\begin{table}[!h]
\caption{Average of the Dice coefficients over every sequence element, for each method and model, including variations in the filtering on outputs from FarNet ($P_{i}$ value) and the phase-sensitive method ($S$ value). The standard deviation of the results for every validation set is shown in the third column.}
\label{table:1}
\centering
\begin{tabular}{l r r }
\hline
Method & Dice & Std\\
\hline
P-S $S>0$ & 0.251 & 0.13\\
P-S $S>400$ & 0.176 & 0.13\\
FarNet $P_{i}>0$ & 0.310 & 0.08\\
FarNet $P_{i}>100$ & 0.234 & 0.10\\
FarNet-II & \textbf{0.513} & \textbf{0.08}\\
\hline
\end{tabular}
\end{table}
\endgroup
Since FarNet-II produces a prediction for all times using the information of previous and later input images of the sequence, we expect the reliability of the prediction to be potentially different for different times inside the sequence. We analyzed this in detail by calculating the Dice coefficient for each element of the sequence on the validation set. Since FarNet and the standard phase-sensitive method do not make use of this temporal information, the results of each index were computed with the appropriate window of phase-shift maps centered on the specific date of each element of the sequence. The results are shown in Fig. \ref{Fig:dice_completo_1} when no special threshold is used. When the optimal thresholds are considered ($P_{i}>100$ for FarNet and $S>400$ for the phase-sensitive method), the results are those shown in Fig. \ref{Fig:dice_completo_2}. It is important to note that very restrictive values were selected for these optimal thresholds \citep{Broock+etal2021}. They guarantee the presence of an EUV emission counterpart to the seismic active region with a 96\% confidence level. When they are taken into account, the Dice coefficient exhibits poorer values since actual detections are discarded due to their lower confidence. According to these results, FarNet-II produces a Dice coefficient that is almost a factor of two larger than those of the other methods, on average. Additionally, this consistency is maintained for all indices of the sequence, with only a small improvement in the predictions for the central frames.
\subsection{Ablation study}
We carried out an ablation study to determine the relative importance of each of the new layers added to FarNet-II with respect to the previous FarNet neural network. This study, although limited, suggests that the final architecture produces the best predictions. To this end, we trained different models: 1) using unidirectional ConvLSTM layers instead of the bidirectional ones employed in the final model; 2) removing the attention layers; and 3) removing the regularizing effect of dropout. The average Dice coefficients computed on the validation sets for each model are represented in Figs. \ref{Fig:dice_unidir_1} and \ref{Fig:dice_unidir_2}. The production time of the outputs does not vary significantly for the ablated versions of FarNet-II. The total averaged Dice coefficient achieved by each model can be found in Table \ref{table:2}. The results demonstrate that adding recursion produces the largest improvement over the baselines, with attention and dropout increasing the prediction ability only marginally, although still monotonically. In the case of the unidirectional ConvLSTM, the prediction for the elements at the end of the sequence is better than for those at the beginning, demonstrating that exploiting the time correlation in both directions is important.
\begin{figure*}[!t]
\centering
\begin{minipage}{0.48\textwidth}
\includegraphics[width=1\linewidth]{diceseq_bymeth_20220705_segm_region_37sets_filtradosPi0S0.pdf}
\caption{Dice coefficient per method, as a function of the position of the images on the output sequences of 11 elements. This figure includes the ablated versions of FarNet-II. Thresholds for reliable detection for FarNet ($P_{i} > 100$) and the phase-sensitive method ($S>400$) were not taken into account. For FarNet, every region on outputs from FarNet with more than five contiguous pixels and a probability over 0.2 was used to compute the value. The vertical bars represent the standard deviation of the mean Dice coefficient over the 37 validation sets used in the study.}\label{Fig:dice_unidir_1}
\end{minipage}\hfill
\begin{minipage}{0.48\textwidth}
\includegraphics[width=1\linewidth]{diceseq_bymeth_20220705_segm_region_37sets_filtradosPi100S400.pdf}
\caption{Dice coefficient per method, as a function of the position of the images on the output sequences of 11 elements. This figure includes the ablated versions of FarNet-II. Only regions with $P_{i}>100$, for FarNet, and with $S>400$, for the phase-sensitive method, are taken into account. The vertical bars represent the standard deviation of the mean Dice coefficient over the 37 validation sets used in the study.}\label{Fig:dice_unidir_2}
\end{minipage}
\centering
\end{figure*}
\begingroup
\renewcommand*{\arraystretch}{1.2}
\begin{table}
\caption{Average of the Dice coefficients of every sequence element for FarNet-II and its ablated models (\textbf{U}: unidirectional ConvLSTM; \textbf{WA}: without attention; \textbf{WD}: without dropout). The standard deviation of the results for every validation set is shown in the third column.}
\label{table:2}
\centering
\begin{tabular}{l r r}
\hline
Method & Dice & Std\\
\hline
FarNet-II & \textbf{0.513} & \textbf{0.08}\\
FarNet-II-U & 0.438 & 0.08\\
FarNet-II-WA & 0.509 & 0.07\\
FarNet-II-WD & 0.495 & 0.07\\
\hline
\end{tabular}
\end{table}
\endgroup
\subsection{Performance as a function of longitude}
Figure \ref{dice_long} illustrates the dependence of the performance of FarNet-II on longitude. This study was individually performed for every index in the sequence of outputs. Additionally, the total average, including all the outputs from the sequence, is shown for completeness.
The results exhibit a marked variation with the sequence index. At the beginning of the series (indices 0-2), far-side regions with a lower longitude (that is, the solar region that has just rotated onto the far-side hemisphere) show a higher Dice coefficient. In contrast, a low Dice coefficient is retrieved at high longitudes (regions that are about to rotate onto the visible hemisphere). A progressive change in this trend is found as higher indices are considered. The variation in the Dice coefficient with the longitude exhibits a mostly flat profile at the middle of the series (indices 3-5), whereas in the last steps of the sequence (indices 6-10), the Dice coefficient increases with the longitude.
This dependence on the longitude is consistent with the displacement of the active regions across the far-side hemisphere due to the solar rotation. Active regions located at high longitudes in the first elements of the series quickly rotate onto the visible hemisphere, and their presence is no longer tracked by seismic far-side maps. Similarly, active regions with a low longitude in the last steps of the series have just rotated onto the far side and only appear in those last steps. In summary, a weaker performance is found in those two extremes (high longitude and low index, low longitude and high index) due to the limited information available in the input sequence of far-side seismic maps. These results also support the relevance of exploiting the bidirectional temporal correlation.
\section{Discussion and conclusions}\label{DC}
We have presented a new neural architecture, FarNet-II, which combines some characteristics of the original FarNet \citep{Felipe+Asensio2019} with the use of bidirectional Conv-LSTM, attention modules, and dropout. We have proven that this model further improves the capabilities of FarNet in the detection of far-side activity.
FarNet-II was trained using activity masks extracted from EUV data from the far side as expected values. This is an improvement over FarNet's training, where we used near-side binarized magnetograms from half a rotation later than the inputs to the network. Even though these magnetograms were processed to eliminate every active region emerging on the near side, they do not provide an accurate characterization of the far-side activity at the temporal period when the seismic maps were computed, since the size and shape of the regions can vary between both dates, and regions may have decayed before rotating onto the near side. In these new training sets, far-side activity fed to the network is obtained from direct far-side observations, and it is strictly co-temporal to the seismic maps, leading to higher accuracy.
The enhanced performance of FarNet-II, as compared with other methods, has been proven through the visual inspection of their outputs in comparison with the actual far-side activity captured by EUV observations from STEREO (Figs. \ref{comp_filt} and \ref{comp_filt_2}). We have also performed a quantitative comparison by employing the Dice coefficient as a metric to evaluate the similarity between the predicted far-side regions and the actual activity in EUV maps. The results show a remarkably higher value of the Dice coefficient for FarNet-II, strongly supporting its improved performance.
We have evaluated the individual contribution from each of the new ingredients implemented in FarNet-II. The analysis clearly points to the ConvLSTM modules as the main driver of the increased reliability of the new architecture. Interestingly, the version of FarNet-II with only unidirectional ConvLSTM modules (forward in time) exhibits an upward trend of the Dice coefficient as the sequence index increases (Fig. \ref{Fig:dice_unidir_1}). This is due to the increase in information that the later elements of the sequence receive from the earlier ones. When bidirectional ConvLSTM is implemented (full FarNet-II), the performance is similarly good for the whole sequence, except for longitudes near the limb at the beginning and the end of the sequence (Fig. \ref{dice_long}). The other novel modules implemented in FarNet-II (attention modules and dropout) provide a modest improvement in its performance. All in all, the best results are found when all these new layers act together. Although not shown explicitly in this work, we checked the coherence of the differential rotation in the sequences of FarNet-II's outputs. The results are consistent with the measured solar differential rotation.
We consider this work as a new step forward to improve the imaging of the far side of the Sun. Nowadays, direct far-side images can only be acquired by Solar Orbiter \citep{SolarOrbiter} during some periods of its orbit, even though they are fundamental for space weather applications (the branch of astrophysics dedicated to studying the Sun and the state of the interplanetary space in the Solar System). The global photospheric magnetic field, not only in the visible hemisphere, is necessary in order to model the heliospheric magnetic field and solar wind. Currently, synoptic maps that complete the magnetism in the non-visible hemisphere with near-side observations obtained many days in advance are generally employed. This approximation of the far-side magnetism can be updated with sophisticated flux transport models \citep{Schrijver+DeRosa2003}. However, these models cannot account for new active regions emerging on the far side or active regions that keep growing after rotating onto the non-visible hemisphere. The detection of activity on the far side is important to completely characterize the global photospheric magnetic field and, thus, the heliosphere. \citet{Arge+etal2013} proved that incorporating seismically detected far-side active regions into the modeling of the heliosphere produces results that better match in situ measurements of the solar wind. Our study provides a remarkable improvement of the capabilities of local helioseismology to characterize the far-side magnetism, getting us closer to the goal of implementing these techniques in space weather forecasting applications. The main limitation of this study is the lack of data to construct larger training sets. We have been forced to work with small validation sets that can lead to somewhat noisy statistics. We plan to bypass this limitation with new data in future works.
\begin{figure*}[!t]
\centering
\includegraphics[width=16cm]{estudio_dice_long.pdf}
\caption{Average over every validation set of the Dice coefficient on FarNet-II's outputs as a function of the longitude range in degrees. Each range covers 20$^\circ$. Each panel represents the Dice coefficient on the sequence element indicated above it. The bottom right panel represents the average of all sequence elements. A longitude of $0^\circ$ corresponds to the west limb, $90^\circ$ to the center of the far-side hemisphere, and $180^\circ$ to the east limb.}
\label{dice_long}
\end{figure*}
\begin{acknowledgements}
We thank P. C. Liewer and collaborators for making publicly available the composite STEREO/EUVI and SDO/AIA maps necessary to carry out this research. We also thank C. Cid, from Universidad de Alcalá de Henares (UAH), for her invaluable ideas for future projects using FarNet-II. Financial support from grants PGC2018-097611-A-I00 and PID2021-127487NB-I00, funded by MCIN/AEI/ 10.13039/501100011033 and by “ERDF A way of making Europe”, and grant PROID2020010059 funded by Consejería de Economía, Conocimiento y Empleo del Gobierno de Canarias and the European Regional Development Fund (ERDF) is gratefully acknowledged. TF acknowledges grant RYC2020-030307-I funded by MCIN/AEI/ 10.13039/501100011033 and by “ESF Investing in your future”. We acknowledge the community effort devoted to the development of the following
open-source packages that were
used in this work: \texttt{numpy} \citep[\texttt{numpy.org},][]{numpy20},
\texttt{matplotlib} \citep[\texttt{matplotlib.org},][]{matplotlib}, \texttt{PyTorch}
\citep[\texttt{pytorch.org},][]{Paszke+etal2019}, \texttt{SunPy}
\citep[\texttt{sunpy.org},][]{sunpy_community2020},
\texttt{einops} \citep{rogozhnikov2022einops}, and \texttt{h5py} \citep{hdf5}.
\end{acknowledgements}
\bibliographystyle{aa}
Collective cell migration plays a major role in the regulation of vital biological processes, including tissue morphogenesis, wound healing, and tumor progression~\cite{Ladoux2017,Friedl2009,Hakim2017}. Cell migration is driven by the cytoskeleton, a network of multiple protein filaments, such as actin, and molecular motor complexes, such as myosin. As an active material, the cytoskeleton can generate mechanical stresses at the cellular level by consuming the chemical fuel Adenosine-Triphosphate (ATP). Cell-cell junctions can transmit such mechanical stresses to neighboring cells, which leads to collective cell migration.
During morphogenesis and regeneration, cells commonly display anisotropic distributions of intracellular constituents. Examples are stress fibers, which are bundles of actin filaments and myosin motors. In cells, these structures can organize into phases with orientational order~\cite{Dalby2002,Prager-Khoutorsky2011,Gupta2019}. Other forms of orientational cellular order result from the symmetry breaking between the front and the back of migrating cells. At the front, migration is generated by a distinct structure enriched with branching actin filaments called the lamellipodium.
Physical interaction between such anisotropic cells can lead to long-range orientational order with varying degrees of symmetry. For instance, polarity markers in mouse liver or confluent monolayers of fibroblasts \textit{in vitro} exhibit nematic order~\cite{Morales-Navarrete2019,Duclos2014}. Similar to liquid crystals~\cite{DeGennes1995}, nematic refers to order that is invariant under inversions of the cell orientation. Signatures of polar order, where this invariance is absent, have been reported in spreading epithelial monolayers~\cite{Farooqui2005,Trepat2009,Reffay2011,Peyret2019}.
Orientational fields exhibit topological defects, where the orientation is not well-defined. These defects are characterized by their topological charge, which is determined by counting the number of rotations the orientational field performs when following a closed trajectory around the defect center~\cite{DeGennes1995}. Polar order fields can present topological defects with an integer charge, whereas nematic order fields can also exhibit half-integer defects. In active materials, the characteristics of the mechanical patterns around topological defects depend on details of the underlying active processes. In particular, studying the dynamics of half-integer topological defects, one can infer whether the active stresses are contractile or extensile~\cite{Sanchez2012,Saw2017,Kawaguchi2017,Duclos2018,Blanch-Mercader2018m,Copenhagen2020}.
Several theoretical studies suggest that in active systems, well-defined mechanical patterns and flows can emerge around topological defects~\cite{Giomi2013,Giomi2014,Thampi2014,Shankar2018,Hoffmann2020}. Based on this idea, one can qualitatively understand the structure of collective flows of active systems, such as purified cytoskeletal motor-filament suspensions, by considering the dynamics of topological defect assemblies~\cite{Sanchez2012,Guillamat2016,Guillamat2017,Hardouin2019,Opathalage2019a}. Similar ideas were applied to multicellular systems to interpret various processes including cell extrusion~\cite{Saw2017}, changes in cell density~\cite{Kawaguchi2017}, or morphogenetic events during the regeneration of the freshwater polyp \textit{hydra}~\cite{Livshits2017,Maroudas-Sacks2020}. These findings suggest that orientational fields can organize cell stress patterns and guide collective cell migration.
In this work, we show that the dynamics of individual topological defects can be used to determine mechanical properties of active systems. To this end, we first develop a hydrodynamic approach to study the forces, orientation, and flows around integer topological defects in compressible active fluids. Our phenomenological description accounts for three types of active processes, corresponding to polar cell-substrate forces as well as isotropic and anisotropic nematic cell-cell stresses. We then analyze integer topological defects that are formed by muscle precursor cells (C2C12 myoblasts) when confined to small circular domains~\cite{PauScience}. Combining our experimental data and our theory allows us to determine material parameters of myoblast monolayers. The experiments analyzed in this work are published in~\cite{PauScience} and part of this work is published in an accompanying letter~$[$Letter$]$.
\section{Hydrodynamic description of monolayers of anisotropic cells }\label{Sec1}
In this section, we develop a phenomenological description of monolayers of elongated cells. After presenting the dynamic equations, we apply them to a monolayer of C2C12 myoblasts confined to a circular domain~\cite{PauScience}.
\subsection{Hydrodynamic fields and conservation equations}
To describe cell monolayers, we use a hydrodynamic approach and start by identifying the hydrodynamic variables characterizing such systems. Let us consider first the two-dimensional cell number density $n$. Cell division and growth occur on a time scale of ten hours. Focussing on shorter time scales, we can neglect these processes and write the conservation equation
\begin{align}
\partial_tn+\partial_{\gamma}(n v_{\gamma})&=0,\label{eq:massbalance}
\end{align}
where $\gamma$ represents the Cartesian coordinates in the substrate plane and $\mathbf{v}$ is the in-plane velocity field. We adopt the Einstein convention such that summation over repeated indices is tacitly assumed. In principle, the chemical fuel, adenosine-triphosphate (ATP), and its hydrolysis products, adenosine-diphosphate (ADP) and inorganic phosphate P$_i$, also satisfy conservation equations. However, in our experiments, the cells metabolize nutrients provided by the buffer to replenish consumed ATP from ADP and P$_i$~\cite{PauScience}. Therefore, we assume that the concentrations of ATP, ADP, and P$_i$ are homogeneous and constant in time.
Next, we consider momentum conservation. In our experiments, the Reynolds number $Re$ is small: The C2C12 myoblasts were confined to small circular domains of radius $\sim 100$~$\mu$m and moved at a typical speed $\sim 0.5$~$\mu$m/min. In addition, taking the density of water for the mass density of cells \cite{Grover2011} and using the viscosity of epithelial tissues, which is $\sim 10^9$ times that of water \cite{Blanch-Mercader2017}, we find $Re\sim 10^{-15}-10^{-16}$. We thus consider the overdamped limit, in which momentum conservation is expressed through force balance.
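The quoted order of magnitude can be reproduced with a one-line estimate; the following stdlib-only Python sketch uses exactly the values stated above:

```python
# Order-of-magnitude estimate of the Reynolds number quoted in the text.
# Values are taken from the text: density of water for the cells, tissue
# viscosity ~1e9 times that of water, domain radius ~100 um, speed ~0.5 um/min.
rho = 1.0e3            # mass density [kg/m^3], water
eta = 1.0e9 * 1.0e-3   # tissue viscosity [Pa s], ~1e9 x viscosity of water
L = 100e-6             # characteristic length [m], domain radius
v = 0.5e-6 / 60.0      # characteristic speed [m/s], 0.5 um/min

Re = rho * v * L / eta
print(f"Re = {Re:.1e}")  # falls in the 1e-16 - 1e-15 range quoted in the text
```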
In our experiments, the lateral extension of C2C12 monolayers is much larger than their height, $\sim 50$~$\mu$m \textit{vs.}\ $\sim 10$~$\mu$m. In this limit, a thin-film approximation can be used to turn the 3d force balance equation into an effective 2d description for the height-averaged stress and the height itself~\cite{Kruse2006}. We neglect any fluctuations in the latter and assume it to be uniform, such that force balance is captured by the following effective equation
\begin{eqnarray}
\partial_{\beta}\sigma^\mathrm{tot}_{\alpha\beta}=\xi v_{\alpha}-T_0 p_{\alpha}.\label{eq:forcebalance}
\end{eqnarray}
Here $\sigma^\mathrm{tot}_{\alpha\beta}$ are the Cartesian components of the in-plane total mechanical stress tensor obtained after averaging with respect to the height. On the right-hand side of the equation, the external force density results from interactions of the cells with the substrate. No net force or torque is applied to the monolayer as a result of these interactions.
The external force density has two components: $\xi\mathbf{v}$ describes friction between the monolayer and the substrate, whereas $T_0\mathbf{p}$ is the traction force of the cells. The friction force depends on the velocity field $\mathbf{v}$. The traction force is independent of the velocity $\mathbf{v}$. It results, for example, from retrograde cytoskeletal flows in lamellipodia or from stress-fiber contraction transmitted to the substrate via long-lived adhesion points. The direction of the traction force derives from the local average orientation of these cellular structures, which is captured by the polarization field $\mathbf{p}$. Fluctuations around the average orientation are accounted for by higher order fields, like the nematic tensor $\mathsf{Q}$ \cite{DeGennes1995}. Here, we assume that such terms are determined by $\mathbf{p}$, for example, $\mathsf{Q}\sim\mathbf{p}\mathbf{p}$. A possible nematic contribution to the traction force will be discussed in Sec.~\ref{sec:ActiveNematicForce}.
\subsection{Constitutive relations}
To close the system of equations describing the dynamics of the myoblast monolayer, expressions for the total stress $\mathsf{\sigma}^\mathrm{tot}$ and the time evolution of the polarization field $\mathbf{p}$ are needed. To obtain such expressions, we follow the standard approach of non-equilibrium thermodynamics~\cite{DeGroot1963}. It consists of first identifying pairs of conjugated thermodynamic forces and fluxes by inspecting the time derivative of the free energy. In a second step, the fluxes are expressed to linear order in terms of the forces, where the coupling coefficients obey the Onsager relations.
Here, we choose the following quantities as thermodynamic forces~\cite{Kruse2005}: the symmetric part of the velocity gradient tensor with components $v_{\alpha\beta}=(\partial_\alpha v_\beta+\partial_\beta v_\alpha)/2$, the field $\mathbf{h}=-\delta\mathcal{F}/\delta\mathbf{p}$, where $\mathcal{F}$ is the equilibrium free energy, and the difference between the chemical potentials of ATP, ADP, and P$_i$, $\Delta\mu=\mu_\mathrm{ATP}-\mu_\mathrm{ADP}-\mu_\mathrm{P}$. The corresponding thermodynamic fluxes are given by the deviatory stress tensor $\mathsf{\sigma} =\mathsf{\sigma}^\mathrm{tot}-\mathsf{\sigma}^e$, the co-rotational convective derivative of the polarization field $D\mathbf{p}/Dt$, and the rate $r$ of ATP-hydrolysis~\cite{Kruse2005}. As we assume constant concentrations of ATP, ADP, and P$_i$, we do not consider $r$ any further. The Ericksen stress $\mathsf{\sigma}^e$ is a generalization of the hydrostatic pressure, see App.~\ref{sec:EricksenStress}. In the context of liquid crystals \cite{DeGennes1995}, $\mathbf{h}$ is called the molecular field. It describes the restoring forces associated with deformations of $\mathbf{p}$. The co-rotational convective derivative of the polarization field is given by
\begin{align}
\frac{D}{Dt}p_\alpha &=\partial_t p_{\alpha}+v_\beta \partial_\beta p_\alpha+\omega_{\alpha\beta}p_\beta.\label{eq:corotCovecDerivative}
\end{align}
Here, $\omega_{\alpha\beta}=(\partial_\alpha v_\beta-\partial_\beta v_\alpha)/2$ is the antisymmetric part of the velocity gradient tensor.
Before proceeding to discuss the constitutive equations, let us first note that there is some freedom in choosing the stress tensor. Only the divergence of the stress has a physical significance, so one can always add a divergence-free component to the stress tensor. We adopt the same choice as in Refs.~\cite{Joanny2007,Furthauer2012}, such that the components of the antisymmetric part of the deviatory stress are
\begin{align}
\sigma^a_{\alpha\beta}&=\frac{1}{2}\left(p_\alpha h_\beta-p_\beta h_\alpha\right).
\end{align}
The symmetric part $\mathsf{\sigma}^s$ of the deviatory stress and the co-rotational convective derivative of the polarization field are obtained, as mentioned above, by expressing these fluxes in terms of the thermodynamic forces in lowest order. Explicitly, we find
\begin{align}
\sigma_{\alpha\beta}^s& = 2\eta\left( v_{\alpha\beta}-\frac{1}{2}v_{\gamma\gamma}\delta_{\alpha\beta}\right)+\bar\eta v_{\gamma\gamma}\delta_{\alpha\beta}+\frac{\nu}{2}\left(p_{\alpha}h_\beta+p_\beta h_{\alpha}-p_\gamma h_\gamma \delta_{\alpha\beta}\right)+\nu' p_\gamma h_\gamma \delta_{\alpha\beta} \nonumber\\
& \quad\quad-\left(p_{\alpha}p_\beta-\frac{1}{2}p_\gamma p_\gamma\delta_{\alpha\beta}\right)\zeta\Delta\mu-\delta_{\alpha\beta}\zeta'\Delta\mu-p_\gamma p_\gamma \delta_{\alpha\beta}\zeta'' \Delta\mu,\label{eq:devstresstensor}\\
\frac{D}{Dt}p_\alpha&= \frac{h_\alpha}{\gamma}-\nu \left(v_{\alpha\beta}-\frac{1}{2}v_{\gamma\gamma}\delta_{\alpha\beta}\right)p_\beta-\nu' v_{\beta\beta}p_\alpha.\label{eq:dinamicadirector}
\end{align}
In the expression for the symmetric part of the deviatory stress $\mathsf{\sigma}^s$, the first two terms account for viscous stresses, where the coefficients $\eta$ and $\bar\eta$ are, respectively, the shear and bulk viscosities of the cell monolayer. The following two terms couple the mechanical stress to the field $\mathbf{h}$. All these terms also appear in the stress of liquid crystals~\cite{DeGennes1995}. The remaining terms couple the mechanical stress to ATP-hydrolysis and are thus the active components of the stress. For our choice of the sign of the stress tensor, positive values of $\zeta$, $\zeta'$, and $\zeta''$ correspond to extensile active stresses. Let us remark that the expressions for the friction and traction forces in Eq.~\eqref{eq:forcebalance} could also be obtained from similar arguments~\cite{Julicher2009}. In this way, the traction force is coupled to ATP-hydrolysis.
In Equation~\eqref{eq:dinamicadirector}, the first term captures relaxation of the polarization field with $\gamma$ being a rotational viscosity. The parameters $\nu$ and $\nu'$ are the so-called flow-alignment parameters. They describe the response of the polarization field to gradients in the velocity field $\mathbf{v}$. In particular, $\nu$ describes the response to shear flows, whereas $\nu'$ describes the response to divergent flows. Note that, in this equation, we have omitted an active term, that is, a coupling to $\Delta\mu$, which would be of the form $\lambda p_\alpha\Delta\mu$. We will see in Sect.~\ref{sec:activeAlignment} that this amounts to a renormalization of parameters.
Explicit expressions for the Ericksen stress $\mathsf{\sigma}^e$ and the field $\mathbf{h}$ are obtained by fixing the equilibrium free energy $\mathcal{F}$ of the system. We choose
\begin{align}
\mathcal{F}&=\int_\mathcal{A}\left\{\frac{B}{2}\left(1-\frac{n}{n_0}\right)^2+\frac{\chi}{2}p_{\alpha}^2+\frac{{\cal K}}{2}(\partial_{\alpha}p_{\beta})^2\right\}da. \label{eq:freeenergy}
\end{align}
The first term penalizes deviations of the cell density from the reference density $n_0$, where $B$ is the corresponding bulk modulus. The remaining terms capture the elastic energy associated with distortions of the polarization field similar to the free energy used for liquid crystals~\cite{DeGennes1995}. As suggested by our experiments, see Sect.~\ref{sec:experiments} below, we consider $\chi>0$ meaning that the preferred bulk equilibrium state is disordered. The energy cost associated with gradients of the polarization field is accounted for by the final term. It is equal to the Frank energy in the one-constant approximation with modulus $\mathcal{K}$. This approximation is appropriate for the experimental system as we show in Sec.~\ref{sec:FrankConstants}.
Let us remark that the uniform isotropic active stress term $\zeta'\Delta\mu\mathbb{I}$ in Eq.~\eqref{eq:devstresstensor} amounts to a renormalization of parameters. Explicitly, the bulk modulus $B$ and the reference density $n_0$ are transformed as follows: $B\rightarrow B-2\zeta'\Delta\mu$ and $n_0\rightarrow n_0\sqrt{1-2\zeta'\Delta\mu/B}$. For large enough positive $\zeta'\Delta\mu$, the effective bulk modulus becomes negative, which may lead to mechanical instabilities similar to those found in other contexts \cite{Joanny}. Henceforth, we set $\zeta'\Delta\mu=0$ and exclude this scenario, as we have not found signatures of such instabilities in our experiments.
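This renormalization can be verified numerically. The sketch below assumes the pressure-like Ericksen contribution $\frac{B}{2}(1-n^2/n_0^2)$ that follows from the free energy (the same expression reappears in the total stress for asters in Sec.~\ref{sec:asters}) and checks the quoted transformation at random parameter values:

```python
import math
import random

# Check that the uniform active stress -zeta'*dmu is absorbed by the
# transformation B -> B - 2*zeta'*dmu and n0 -> n0*sqrt(1 - 2*zeta'*dmu/B),
# assuming an isotropic pressure-like stress B/2*(1 - n^2/n0^2).
random.seed(0)
for _ in range(100):
    B = random.uniform(0.5, 5.0)
    n0 = random.uniform(0.5, 5.0)
    zp_dmu = random.uniform(-0.2, 0.2) * B   # zeta' * Delta mu, |2 zp| < B
    n = random.uniform(0.1, 2.0) * n0

    lhs = B / 2 * (1 - n**2 / n0**2) - zp_dmu
    B_eff = B - 2 * zp_dmu
    n0_eff = n0 * math.sqrt(1 - 2 * zp_dmu / B)
    rhs = B_eff / 2 * (1 - n**2 / n0_eff**2)
    assert abs(lhs - rhs) < 1e-9
print("renormalization identity holds")
```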
Let us briefly summarize the parameters appearing in our description. Active processes are captured by the magnitude of the traction force $T_0$ and the parameters $\zeta$ and $\zeta''$ coupling ATP hydrolysis to the mechanical stress. Dissipation occurs through rearrangements of the polarization, the viscous dissipation, and friction with the substrate, which are, respectively, controlled by the coefficients $\gamma$, $\eta$, $\bar\eta$, and $\xi$. Flow alignment of the polarization is governed by $\nu$ and $\nu'$ and, finally, there are three elastic moduli, namely, $B$, $\chi$, and $\mathcal{K}$.
\subsection{Myoblast monolayers}
\label{sec:experiments}
We studied the collective behavior of C2C12 cells confined to fibronectin-coated circular domains with radii between $50$~$\mu$m and $150$~$\mu$m. In the following, we describe the main features of the methods used. For further experimental details, see~\cite{PauScience}.
Individually, C2C12 mouse myoblasts move at speeds of $20-50$~$\mu$m/h, and they can assume an elongated shape around $50$~$\mu$m in length and $10$~$\mu$m in width \cite{Sheets2013}. Extended C2C12 myoblast monolayers spontaneously generate long range nematic order~\cite{Duclos2017,Kawaguchi2017,PauScience}. This corresponds to $\chi<0$ in the equilibrium free energy~\eqref{eq:freeenergy}. Correspondingly, these monolayers can present half-integer topological defects~\cite{Kawaguchi2017}.
In our experiments, cells were confined to fibronectin-coated circular domains by coating the surroundings with non-adhesive polyethylene glycol, Fig.~\ref{fig001}a. Over the course of our experiments, the cell number increased by proliferation. After a transient, cells formed a uniform monolayer without visible cell-free gaps. In contrast to extended monolayers, in our small islands, we observe polar order near the domain boundary as reflected by continuous lamellipodial activity. Correspondingly, the cell monolayers arranged into integer topological defects with a disorganized center. We thus chose polar traction forces and $\chi>0$ in the free energy~\eqref{eq:freeenergy}.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{BlanchMercaderetal_Fig1.png}
\caption{(online color) Confined C2C12 monolayers. a) Schematic of the experimental setup. b) Phase-contrast image of a spiral in a circular domain of 100~$\mu$m radius. c) Orientational order (left) and velocity fields (right) averaged over $N=12$ spirals. Colors correspond to $S$ and speeds, see legend. Gray lines: velocity stream lines. d) Phase-contrast image of an aster in a circular domain of 100~$\mu$m radius. Scale bar in (b,d): 50~$\mu$m.}\label{fig001}
\end{figure}
At low densities, we found that cell monolayers spontaneously arranged into spirals that collectively rotated, Fig.~\ref{fig001}b. The orientation of the cell bodies at the interface of the circular domains was approximately tangential, and the average rotational speed was on the order of $30$~$\mu$m/h, Fig.~\ref{fig001}c. As the cell number increased further, we found that cells at the periphery changed their orientation by aligning their bodies perpendicularly to the circular interface thus forming an aster, see Fig.~\ref{fig001}d. In this case, the collective rotation was lost. Further evolution of these cell monolayers led to 3d multicellular protrusions featuring long-range nematic order and collective cell dynamics perpendicular to the confinement plane, see \cite{PauScience}.
From phase-contrast movies, particle velocimetry techniques were used to determine a coarse-grained velocity field. From the same movies, we determined a coarse-grained orientational field via a structure tensor method \cite{Puspoki2016}. For a given 2d intensity pattern, this technique computes the direction of the minimal and maximal intensity anisotropy as the eigenvectors of a 2d structure matrix obtained from intensity gradients. Then, we set the orientational field parallel to the eigenvector with minimal eigenvalue. A representative example of both time-averaged fields for spiral configurations is shown in Fig.~\ref{fig001}c.
\subsection{Circular confinement}
In the following, we apply the equations derived in the previous sections to cell monolayers confined to circular islands. We therefore express the equations in polar coordinates $r$ and $\theta$. We focus on steady state solutions and assume that they are invariant with respect to rotations around the center of the island. Finally, we determine the boundary conditions for this situation.
\subsubsection{Steady state equations in polar coordinates}
We start with the conservation equation \eqref{eq:massbalance} for the cell number density. In steady state and assuming rotational invariance, it becomes
\begin{align}
\partial_{r}(n v_{r})+\frac{n v_r}{r}&=0\label{eq:massbalanceprimer}.
\end{align}
As will be detailed below, there are no flows across the domain boundaries, such that $v_r=0$ in steady state.
For the polarization field $\mathbf{p}$, we introduce the magnitude or 'polar order parameter' $S$ and the angle $\psi$ with respect to the radial direction, such that $p_r=S \cos(\psi)$ and $p_\theta=S \sin(\psi)$. In terms of the variables $S$ and $\psi$, the dynamic equation~\eqref{eq:dinamicadirector} for the polarization field reads
\begin{align}
\frac{h_\parallel}{\gamma}-\nu S v_{r\theta}\sin(2\psi)&=0\label{eq:dinamicadirectorparticular1}\\
\frac{h_\perp}{\gamma}+S v_{r\theta}\left(1-\nu \cos(2\psi)\right)&=0 .\label{eq:dinamicadirectorparticular2}
\end{align}
In these expressions, $h_\parallel$ and $h_\perp$ are the components of the field $\mathbf{h}$ parallel and perpendicular to $\mathbf{p}$. The explicit expressions of $h_\parallel$ and $h_\perp$ are given in Eqs.~\eqref{eq:molecularfieldpara} and \eqref{eq:molecularfieldperp} in App.~\ref{sec:molecularfield}. Furthermore, $v_{r\theta}=(\partial_r v_\theta-v_\theta/r)/2$ is the off-diagonal component of the symmetric part of the velocity gradient tensor. The components $v_{rr}$ and $v_{\theta\theta}$ vanish at steady state.
Using the variables $S$ and $\psi$, the components of the deviatory stress can be written as
\begin{align}
\sigma_{rr,\theta\theta}&=\mp\frac{1}{2}S^2 \cos(2\psi)\zeta\Delta\mu-S^2\zeta''\Delta\mu \nonumber\\
&\pm\frac{\nu}{2}S\left(h_\parallel \cos(2\psi)-h_\perp \sin(2\psi)\right) +\nu' S h_\parallel \label{eq:devStressTensorRR} \\
\sigma_{r\theta,\theta r}&=2\eta v_{r\theta}-\frac{1}{2}S^2 \sin(2\psi)\zeta\Delta\mu \nonumber\\
&+\frac{\nu}{2}S\left(h_\parallel \sin(2\psi)+h_\perp \cos(2\psi)\right) \pm\frac{S h_\perp}{2},\label{eq:devStressTensorRTheta}
\end{align}
where the upper (lower) signs correspond to the first (second) index pair.
The force balance equation \eqref{eq:forcebalance} takes the form
\begin{align}
\partial_{r}\sigma_{rr}^\mathrm{tot}+\frac{\sigma_{rr}^\mathrm{tot}-\sigma_{\theta\theta}^\mathrm{tot}}{r}&=-T_0 S \cos(\psi)\label{eq:forcebalance1}\\
\partial_{r}\sigma_{\theta r}^\mathrm{tot}+\frac{\sigma_{\theta r}^\mathrm{tot}+\sigma_{r\theta}^\mathrm{tot}}{r}&=\xi v_{\theta}-T_0 S \sin(\psi).\label{eq:forcebalance2}
\end{align}
By employing the Gibbs-Duhem relation \eqref{eq:FBStressEricksenParB}, we can furthermore eliminate the Ericksen stress in Eq.~\eqref{eq:forcebalance2} and obtain
\begin{align}
\partial_{r}\sigma_{\theta r}+\frac{2\sigma_{\theta r}}{r}&=\xi v_{\theta}-T_0 S \sin(\psi).\label{eq:forcebalance3}
\end{align}
\subsubsection{Boundary conditions}
It remains to fix the conditions on the fields at the boundary of the island at $r=R$, where $R$ is the radius of the domain. Compatible with our experiments, we impose that there is no flux of material across the boundary of the domain. At the same time, there is no tangential force applied to the cell monolayer at the edge of the domain. For the boundary conditions on the polarization field, let us first note that the polar order parameter is maximal at the boundary. Without loss of generality, we fix this value to be one. Furthermore, we impose that there are no gradients in $\psi$ at the boundary. In summary, we thus have
\begin{align}
S|_{r=R}&=1\label{eq:boundarycon1}\\
\partial_r\psi|_{r=R}&=0\label{eq:boundarycon2}\\
\sigma_{\theta r}^\mathrm{tot}|_{r=R}&=0\label{eq:boundarycon3}\\
v_r|_{r=R}&=0. \label{eq:boundarycon4}
\end{align}
Note that the total cell number is conserved and thus a parameter of our system.
In our experiments, the monolayers are disordered in the center of the domains, and we impose $S=0$ at $r=0$. Due to our assumption of rotational invariance, we also need to impose regularity of the solutions at $r=0$. In total we have
\begin{align}
S|_{r=0}&=0\label{eq:boundarycon5}\\
\partial_r\psi|_{r=0}&=0\label{eq:boundarycon6}\\
v_{\theta}|_{r=0}&=0\label{eq:boundarycon7}\\
v_{r}|_{r=0}&=0.\label{eq:boundarycon8}
\end{align}
\section{Active forces in integer topological defects}\label{sec:Activeforces}
Materials with orientational order are prone to exhibit singularities in the corresponding order parameter. Such singularities are called topological defects. They are characterized by their 'charge', that is, the number of turns of the polarization vector upon moving it along a closed path around the singularity. The most common types are defects with charges $\pm$1/2 and $\pm1$.
As mentioned in the Introduction, topological defects have been related to biological processes in cell monolayers~\cite{Saw2017,Kawaguchi2017,Maroudas-Sacks2020,PauScience}. For a better understanding of the mechanics of defects in monolayers under confinement, we analyze now the active force density associated with +1 defects. In our description, activity enters in different terms, namely, in the traction force $T_0\mathbf{p}$ and in the stress via
\begin{align}
\sigma^\mathrm{act}_{\alpha\beta}&=-\left(p_{\alpha}p_\beta-\frac{1}{2}p_\gamma p_\gamma\delta_{\alpha\beta}\right)\zeta\Delta\mu- p_\gamma p_\gamma\delta_{\alpha\beta}\zeta''\Delta\mu.
\end{align}
The surface active force density then is
\begin{align}
\mathbf{f}^{a,s}&=T_0 \mathbf{p}+\nabla\cdot\mathsf{\sigma}^\mathrm{act}.\label{eq:App5}
\end{align}
In addition, there is a line active force density at the boundary of the circular domain with radius $R$
\begin{align}
\mathbf{f}^{a,l}&=-\mathsf{\sigma}^\mathrm{act}\cdot\hat{\mathbf{r}}|_{r=R},\label{eq:App8}
\end{align}
where $\hat{\mathbf{r}}$ is the radial unit vector.
The simplest form of +1 defects corresponds to spirals with constant angle $\psi=\psi_0$. In the cases $\psi_0=0,\pi$ and $\psi_0=\pm\pi/2$, the spirals turn into asters and vortices, respectively. For the polar order parameter $S$, we will assume a linear dependence on the radial coordinate $r$, such that $S=r/R$. As we will see below, this is a solution to our equations in the limit of small radius $R$. Using expressions \eqref{eq:devStressTensorRR}-\eqref{eq:devStressTensorRTheta} for the components of the active stress tensor, we obtain
\begin{align}
\mathbf{f}^{a,s}&=\left(T_0 R \cos{(\psi_0)}-2\zeta\Delta\mu \cos{(2\psi_0)}-2\zeta''\Delta\mu \right)\frac{r \hat{\mathbf{r}}}{R^2}\nonumber\\
&\quad+\left(T_0 R \sin{(\psi_0)}-2\zeta\Delta\mu \sin{(2\psi_0)} \right)\frac{r \hat{\bm{\theta}}}{R^2},\label{eq:App7a}\\
\intertext{and}
\mathbf{f}^{a,l}&=\left(\frac{\zeta\Delta\mu}{2}\cos(2\psi_0)+\zeta''\Delta\mu\right)\hat{\mathbf{r}}+\left(\frac{\zeta\Delta\mu}{2} \sin(2\psi_0)\right) \hat{\bm{\theta}},\label{eq:App7b}
\end{align}
where $\hat{\bm{\theta}}$ is the azimuthal unit vector. Figure~\ref{fig0} presents these force densities for asters and spirals.
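Equations~\eqref{eq:App7a} and \eqref{eq:App7b} are straightforward to tabulate. The following stdlib-only Python sketch (parameter values are illustrative, not fitted) evaluates both force densities in the $(\hat{\mathbf{r}},\hat{\bm{\theta}})$ basis and checks that an aster produces purely radial forces:

```python
import math

def active_forces(psi0, r_over_R, T0, R, zeta_dmu, zeta2_dmu):
    """Surface and line active force densities of a +1 defect with S = r/R,
    Eqs. (App7a)-(App7b); components are returned in the (r, theta) basis.
    zeta_dmu and zeta2_dmu stand for zeta*dmu and zeta''*dmu."""
    x = r_over_R
    fs_r = (T0 * R * math.cos(psi0) - 2 * zeta_dmu * math.cos(2 * psi0)
            - 2 * zeta2_dmu) * x / R**2
    fs_t = (T0 * R * math.sin(psi0) - 2 * zeta_dmu * math.sin(2 * psi0)) * x / R**2
    fl_r = zeta_dmu / 2 * math.cos(2 * psi0) + zeta2_dmu
    fl_t = zeta_dmu / 2 * math.sin(2 * psi0)
    return (fs_r, fs_t), (fl_r, fl_t)

# Aster (psi0 = 0): purely radial forces, inward when T0*R < 2*(zeta+zeta'')*dmu.
(fs_r, fs_t), (fl_r, fl_t) = active_forces(0.0, 1.0, T0=1.0, R=1.0,
                                           zeta_dmu=1.0, zeta2_dmu=0.5)
assert fs_t == 0.0 and fl_t == 0.0
assert fs_r < 0.0  # here T0*R - 2*(zeta+zeta'')*dmu = 1 - 3 < 0: points inward
```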
\begin{figure}[b]
\centering
\includegraphics[]{BlanchMercaderetal_Fig2.pdf}
\caption{(online color) Active forces associated with integer topological defects: asters (a,c,e), and spirals (b,d,f). Active forces only generated by traction forces $T_0\mathbf{p}$ (a,b), by anisotropic active stresses proportional to $\zeta\Delta\mu$ (c,d), and by isotropic active stresses proportional to $\zeta''\Delta\mu$ (e,f). Gray lines indicate the polarization field, which points outwards. The angle of the spiral is $\psi_0=\pi/3$ (b,d,f). Magenta arrows: surface active force density at $r/R=\{1/3, 2/3, 1\}$, $\mathbf{f}^{a,s}$ in Eq.~\eqref{eq:App7a}. Green arrows: line active force density, $\mathbf{f}^{a,l}$ in Eq.~\eqref{eq:App7b}. Black circle: boundary at $r=R$. The shafts of the magenta arrows are scaled by $\mathbf{f}^{a,s}(r=R)$ and of the green arrows by $R\mathbf{f}^{a,s}(r=R)$. Scale bars indicate $\mathbf{f}^{a,s}(r=R)=R\mathbf{f}^{a,s}(r=R)=1$. We assumed $T_0,\zeta\Delta\mu,\zeta''\Delta\mu>0$.}
\label{fig0}
\end{figure}
For asters with $\psi_0=0$, both the surface and the line active force densities have only radial components, see Fig.~\ref{fig0}a,c,e. In this case, $\mathbf{f}^{a,s}$ points towards the center if $T_0 R-2(\zeta+\zeta'')\Delta\mu<0$ and away from it otherwise.
For spirals, the surface and the line active force densities have both radial and azimuthal components, see Fig.~\ref{fig0}b,d,f. For spirals with $\psi_0>\pi/4$ but otherwise the same parameter values as for asters, the radial component of $\mathbf{f}^{a,s}$ can point away from the center, Eq.~\eqref{eq:App7a}. The same holds for the radial component of $\mathbf{f}^{a,l}$, Eq.~\eqref{eq:App7b}. The azimuthal components of $\mathbf{f}^{a,s}$ and $\mathbf{f}^{a,l}$ are independent of the isotropic active stress proportional to $\zeta''\Delta\mu$, Eqs.~\eqref{eq:App7a} and \eqref{eq:App7b}.
For vortices with $\psi_0=\pi/2$, the traction forces generate an azimuthal component of the surface active force density. In this case, the radial component of $\mathbf{f}^{a,s}$ points towards the center if $2(\zeta-\zeta'')\Delta\mu<0$ and away from it otherwise.
In the following two sections, we discuss in detail the steady states of integer topological defects.
\section{Asters}
\label{sec:asters}
We consider first the special case of an aster, where $\psi_0=0$. In that case, the azimuthal velocity $v_\theta$ vanishes by symmetry. Equation~\eqref{eq:dinamicadirectorparticular2} then implies $h_\perp=0$, showing that the aster is a solution of our system. It follows from Equation~\eqref{eq:dinamicadirectorparticular1} that $h_\parallel=0$ as well. Using this result in Equation~\eqref{eq:molecularfieldpara} and the boundary conditions~\eqref{eq:boundarycon1} and \eqref{eq:boundarycon5}, the polar order parameter $S$ can be calculated. The general solution is given by a Bessel function. Since, in our experiments, we see a single defect per island~\cite{PauScience}, we focus on the limit $R^2\ll {\cal K}/\chi$. In that case, the penetration length $\sqrt{{\cal K}/\chi}$ of the boundary polar order is larger than the system size $R$ and $S=r/R$. For larger island radii, multiple defects were reported for C2C12 monolayers~\cite{Duclos2017}.
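Assuming the one-constant free energy~\eqref{eq:freeenergy}, the solution of $h_\parallel=0$ that is regular at the origin is proportional to the modified Bessel function $I_1(r/\lambda)$ with $\lambda=\sqrt{{\cal K}/\chi}$, so that $S(r)=I_1(r/\lambda)/I_1(R/\lambda)$. A minimal stdlib-only Python sketch (the series truncation and sample values are illustrative) confirms the small-$R$ limit $S=r/R$:

```python
import math

def bessel_I1(x, terms=30):
    # Series for the modified Bessel function of the first kind, order 1:
    # I_1(x) = sum_k (x/2)^(2k+1) / (k! (k+1)!)
    return sum((x / 2) ** (2 * k + 1) / (math.factorial(k) * math.factorial(k + 1))
               for k in range(terms))

def S_profile(r, R, lam):
    """Polar order parameter solving h_parallel = 0 for an aster, regular at
    the origin and with S(R) = 1; lam = sqrt(K/chi) is the penetration length."""
    return bessel_I1(r / lam) / bessel_I1(R / lam)

# In the limit R^2 << K/chi, the Bessel profile approaches S = r/R:
R, lam = 1.0, 10.0  # R^2/lam^2 = 0.01
for r in (0.25, 0.5, 0.75, 1.0):
    assert abs(S_profile(r, R, lam) - r / R) < 1e-2
```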
It remains to determine the cell number density for the aster. To this end, we employ the radial component of the force balance Eq.~\eqref{eq:forcebalance1}. Note that the azimuthal component, Eq.~\eqref{eq:forcebalance2}, is automatically satisfied by symmetry. In the limit $R^2\ll {\cal K}/\chi$, the non-vanishing components of the total stress tensor read
\begin{align}
\sigma_{rr}^\mathrm{tot}&=\frac{B}{2}\left(1-\frac{n^2}{n_0^2}\right)-\left(\frac{1}{2}\zeta\Delta\mu +\zeta''\Delta\mu\right)\frac{r^2}{R^2} \\
\sigma_{\theta\theta}^\mathrm{tot}&=\frac{B}{2}\left(1-\frac{n^2}{n_0^2}\right)+\left(\frac{1}{2}\zeta\Delta\mu - \zeta''\Delta\mu\right)\frac{r^2}{R^2}.
\end{align}
In the limit that there are only small deviations from the reference density $n_0$, the solution to Eq.~\eqref{eq:forcebalance1} is
\begin{align}
\frac{n-n_0}{n_0}&\approx\frac{1}{B}\left[\left(\frac{R}{2}T_0-\zeta \Delta\mu-\zeta''\Delta\mu\right)\frac{r^2}{R^2}+n_c \right], \label{eq:densityasters}
\end{align}
where $n_c$ is an integration constant. If the total cell number in the circular island is $n^\mathrm{tot}\pi R^2$, then
\begin{align}
\frac{n-n^\mathrm{tot}}{n_0}&\approx\frac{1}{B}\left(\frac{R}{2}T_0-\zeta\Delta\mu -\zeta''\Delta\mu\right)\left(\frac{r^2}{R^2}-\frac{1}{2}\right). \label{eq:densityasters2}
\end{align}
In Figure~\ref{fig00}a, we show the density as a function of the radial coordinate for different ratios $T_0R/\zeta\Delta\mu$ and fixed $\zeta''\Delta\mu$.
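Equation~\eqref{eq:densityasters2} is normalized so that the density averages to $n^\mathrm{tot}$ over the island. A stdlib-only Python sketch (parameter values are illustrative) checks this by midpoint quadrature:

```python
import math

# Midpoint-rule check that the aster density profile, Eq. (densityasters2),
# averages to the imposed mean density n_tot over the circular island.
# All parameter values below are illustrative choices, not fitted values.
R, B, T0, zeta_dmu, zeta2_dmu, n0, n_tot = 1.0, 2.0, 1.0, 0.3, 0.1, 1.0, 1.1

def n_of_r(r):
    amp = (R / 2 * T0 - zeta_dmu - zeta2_dmu) / B
    return n_tot + n0 * amp * (r**2 / R**2 - 0.5)

N = 100_000
dr = R / N
area_integral = sum(n_of_r((i + 0.5) * dr) * 2 * math.pi * (i + 0.5) * dr * dr
                    for i in range(N))
mean_density = area_integral / (math.pi * R**2)
assert abs(mean_density - n_tot) < 1e-6
```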
\begin{figure}[b]
\centering
\includegraphics[]{BlanchMercaderetal_Fig3.pdf}
\caption{(online color) Steady state profiles for asters. a) Cell number density $B(n-n^\mathrm{tot})/n_0$, Eq.~\eqref{eq:densityasters2}, b) radial force density $\mathbf{f}_i\cdot\hat{\mathbf{r}}$, Eq.~\eqref{eq:forceinneraster}, as a function of the radial distance $r$ for varying values of the dimensionless ratio $T_0 R/\zeta\Delta\mu$ as indicated in the legend. We consider $\zeta''\Delta\mu=0$ (a) and $-\frac{\zeta''\Delta\mu}{2}-B\frac{n^\mathrm{tot}-n_0}{n_0}=0$ (b). Units are set by $\zeta\Delta\mu=R=1$.}
\label{fig00}
\end{figure}
Next, let us determine the momentum that the monolayer in the aster configuration exchanges with the environment. As the velocity $\mathbf{v}=0$, the force exerted by the monolayer on the substrate is
\begin{align}
\mathbf{t}=-T_0\frac{r}{R}\hat{\mathbf{r}}.\label{eq:Taster}
\end{align}
At the confinement boundary $r=R$ and to first order in $n^\mathrm{tot}/n_0$, the local force density per unit length is
\begin{align}
\mathbf{f}_o &= -\mathsf{\sigma}^\mathrm{tot}(r=R)\cdot\hat{\mathbf{r}}\\
&= \left(\frac{T_0 R}{4}+\frac{\zeta''\Delta\mu}{2}+B\frac{n^\mathrm{tot}-n_0}{n_0}\right)\hat{\mathbf{r}}. \label{eq:forceouteraster}
\end{align}
From Eqs.~\eqref{eq:Taster} and \eqref{eq:forceouteraster}, we see that the total force on the monolayer
\begin{align}
\mathbf{F}^\mathrm{tot}&=\int_\mathcal{A} \mathbf{t} da+\int_{\partial\mathcal{A}}\mathbf{f}_o dl\label{eq:Ftotal}
\end{align}
vanishes, $\mathbf{F}^\mathrm{tot}=\mathbf{0}$. Because the forces are all radial, also the total torque
\begin{align}
\mathbf{M}^\mathrm{tot}&=\int_\mathcal{A} \mathbf{r}\times\mathbf{t} da+\int_{\partial\mathcal{A}}R\hat{\mathbf{r}}\times\mathbf{f}_o dl\label{eq:Mtotal}
\end{align}
is zero. Therefore, neither a net force nor a net torque results from interactions between the monolayer and the substrate in steady state asters.
In our experiments~\cite{PauScience}, we used circular elastic pillars placed in the center of the circular domain to measure the force exerted by the monolayer. Neglecting deviations from the profiles calculated above that are caused by the finite diameter of the pillar, this force is
\begin{align}
\mathbf{f}_i&=\mathsf{\sigma}^\mathrm{tot}(r)\cdot\hat{\mathbf{r}}\\
&=\left[\frac{R}{2}\left(\frac{1}{2}-\frac{r^2}{R^2}\right)T_0 +\frac{1}{2}\left(\frac{r^2}{R^2}-1\right)\zeta\Delta\mu-\frac{1}{2}\zeta''\Delta\mu-B\frac{n^\mathrm{tot}-n_0}{n_0}\right]\hat{\mathbf{r}},\label{eq:forceinneraster}
\end{align}
see Fig.~\ref{fig00}b. Although this expression is correct only in the limit where the diameter of the pillar tends to zero, it gives an approximate value for pillars with finite diameter.
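As a consistency check of Eqs.~\eqref{eq:forceouteraster} and \eqref{eq:forceinneraster}, note that at $r=R$ the force on the pillar must equal minus the boundary force density. A dependency-free Python sketch with illustrative parameter values:

```python
# Radial force per unit length from Eq. (forceinneraster), and a consistency
# check: at r = R it is minus the boundary force density of Eq. (forceouteraster).
# Parameter values are illustrative; B_dn abbreviates B*(n_tot - n0)/n0.
R, T0, zeta_dmu, zeta2_dmu = 1.0, 1.0, 0.3, 0.1
B_dn = 0.05

def f_inner(r):
    return (R / 2 * (0.5 - r**2 / R**2) * T0
            + 0.5 * (r**2 / R**2 - 1.0) * zeta_dmu
            - 0.5 * zeta2_dmu - B_dn)

f_outer = T0 * R / 4 + zeta2_dmu / 2 + B_dn
assert abs(f_inner(R) + f_outer) < 1e-12  # edge values balance exactly
```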
\section{Spirals}
\label{sec:spiral}
In the following, we turn to the case of a general topological defect with charge +1, where $\psi(r)$ takes on an arbitrary constant value $\psi_0$. A constant value of $\psi$ implies $h_\perp=0$, see Eq.~\eqref{eq:molecularfieldperp}. The value of $\psi_0$ is fixed by the steady state Eq.~\eqref{eq:dinamicadirectorparticular2}, which implies $\nu \cos(2\psi_0)=1$. This condition requires $|\nu|\geq1$ for a real solution $\psi_0$. Note that $\psi(r)=\psi_0$ also satisfies the boundary conditions~\eqref{eq:boundarycon2} and \eqref{eq:boundarycon6}, see Fig.~\ref{fig3}a for a comparison of the analytic result with a numerical solution of the dynamic equations. Without loss of generality, we consider $0<\psi_0<\pi/2$.
\begin{figure}[b]
\centering
\includegraphics[]{BlanchMercaderetal_Fig4.pdf}
\caption{(online color) Steady-state profiles of the orientational order in spirals with $R^2\ll{\cal K}/\chi$. a) Polarization angle $\psi$ and b) polar order parameter $S$. Purple lines: $S=r/R$ and $\psi=\psi_0$, respectively. Green dots: numerical solution of the dynamic equations. Parameter values are $\chi=0.1$, $\nu=-1.4$, $\zeta=10^{-2}$, $T_0=0$, $\eta=10^2$, and $\xi=1$ with the units being set by $R={\cal K}=\gamma=1$. For these parameter values $|\gamma\nu v_{r\theta}\sin(2\psi_0)|<2\times10^{-5}\ll \chi$.}\label{fig3}
\end{figure}
Next, we consider Eq.~\eqref{eq:dinamicadirectorparticular1} with $h_\parallel$ given by Eq.~\eqref{eq:molecularfieldpara}. As for the case of asters discussed above, we focus on the case $R^2\ll\mathcal{K}/\chi$. Furthermore, we consider that $|\gamma\nu v_{r\theta}\sin(2\psi_0)|\ll \chi$. In this limit, flow alignment does not lead spontaneously to orientational order and the solution to Eq.~\eqref{eq:molecularfieldpara} is $S=r/R$, see Fig.~\ref{fig3}b.
\subsection{Velocity field}
Having obtained the polarization field, we now determine the velocity field. To this end, let us first consider force balance in the azimuthal direction, see Eq.~\eqref{eq:forcebalance3}. Using the expressions for $S$ and $\psi$, we obtain a differential equation for the azimuthal component $v_\theta$ of the velocity
\begin{align}
\partial_{r}\sigma_{\theta r}+\frac{2\sigma_{\theta r}}{r}&=\xi v_{\theta}-T_0\frac{r}{R}\sin(\psi_0),\label{eq:forcebalanceanalytic}
\end{align}
where the off-diagonal component $\sigma_{\theta r}$ of the deviatory stress tensor reads
\begin{align}
\sigma_{\theta r}&=\left(2\eta+\gamma\frac{ r^2 }{2 R^2}\tan(2\psi_0)^2\right) v_{r\theta}-\frac{r^2}{2 R^2} \sin(2\psi_0)\zeta \Delta\mu,\label{eq:devStressTR}
\end{align}
see Eq.~\eqref{eq:devStressTensorRTheta}. The boundary conditions are given by Eqs.~\eqref{eq:boundarycon3} and \eqref{eq:boundarycon7}.
In our system, azimuthal flows are generated by two different active processes, namely, gradients in the active stress, which is proportional to $\zeta\Delta\mu$, and traction forces, which are proportional to $T_0$ as discussed in Sect.~\ref{sec:Activeforces}. Since Eq.~\eqref{eq:forcebalanceanalytic} is linear in $v_\theta$, we discuss these two origins of flows by solving Eq.~\eqref{eq:forcebalanceanalytic} in various limiting regimes that differ in the dominant dissipative mechanism. Explicitly,
\begin{itemize}
\item Regime I, where dissipation is dominated by shear viscosity: $\gamma\tan(2\psi_0)^2\ll \eta$ and $\xi R^2\ll \eta$;
\item Regime II, where dissipation is dominated by relaxation of the polarization field: $\eta\ll \gamma\tan(2\psi_0)^2$ and $\xi R^2\ll \gamma\tan(2\psi_0)^2$;
\item Regime III, where dissipation is dominated by friction forces with the underlying substrate: $\gamma\tan(2\psi_0)^2 \ll \xi R^2$ and $\eta \ll \xi R^2$.
\end{itemize}
In Regime III we further distinguish the cases $\gamma\tan(2\psi_0)^2\ll\eta$ and $\eta\ll\gamma\tan(2\psi_0)^2$. Whereas in Regimes I and II there are long-ranged flows due to viscous coupling of different parts of the system, in Regime III, flows can be screened beyond distances of the order of the `friction length' $\ell$, where
\begin{align}
\ell^2 &= \frac{1}{4\xi}\left(4\eta+\gamma\tan(2\psi_0)^2\right).\label{eq:frictionLength}
\end{align}
\subsubsection{Flows driven by traction forces}
In the presence of traction forces only, the azimuthal velocity takes the form
\begin{align}
v_{\theta}&=\frac{T_0}{\xi}\frac{r}{R}\sin(\psi_0) .\label{eq:solution1}
\end{align}
As a consequence, the system rotates as a block and no shear flows exist, i.e., $v_{\theta r}=0$. Thus, neither viscous nor rotational dissipation affects these flows. We have verified numerically that this solution is a good approximation of the flow in Regimes I-III, see Fig.~\ref{fig1}.
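As a quick consistency check (an aside, not part of the analysis itself), the rigid-rotation profile of Eq.~\eqref{eq:solution1} can be verified symbolically: the shear rate $v_{\theta r}$ vanishes, and with $\zeta\Delta\mu=0$ the force balance reduces to friction balancing the traction force. In the sketch below, the symbol names (e.g.\ \verb|T_0|, \verb|psi_0|) are ad hoc stand-ins for the quantities in the text.

```python
import sympy as sp

r, R, T0, xi, psi0 = sp.symbols('r R T_0 xi psi_0', positive=True)

# Rigid-rotation profile, Eq. (solution1): v_theta = (T0/xi)(r/R) sin(psi0)
v = (T0/xi)*(r/R)*sp.sin(psi0)

# Shear rate v_{theta r} = (1/2)(dv/dr - v/r) vanishes for a profile linear in r
v_shear = sp.simplify(sp.Rational(1, 2)*(sp.diff(v, r) - v/r))

# With zeta*Delta_mu = 0 the deviatory stress vanishes, so Eq. (forcebalanceanalytic)
# reduces to xi*v_theta = T0*(r/R)*sin(psi0), which holds identically
balance = sp.simplify(xi*v - T0*(r/R)*sp.sin(psi0))
```

Both `v_shear` and `balance` simplify to zero, confirming that the block rotation solves the force balance exactly.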
\begin{figure}[t]
\centering
\includegraphics{BlanchMercaderetal_Fig5.pdf}
\caption{(online color) Steady-state azimuthal velocity for flows driven by traction forces and with $R^2\ll{\cal K}/\chi$. a) Regime I with $\eta=50$, $100$, $200$ and $\xi=1$, b) Regime II with $100\eta=0.5$, $1$, $2$ and $\xi=10^{-2}$, c) Regime III with $\eta=100$ and $10^{-5}\xi=0.5$, $1$, $2$, and d) Regime III with $\eta=0.01$ and $10^{-3}\xi=0.5$, $1$, $2$. Purple lines: Eq.~\eqref{eq:solution1}. Green dots: numerical solutions of the dynamic equations. Other parameter values are $\chi=10^{-1}$, $\nu=-1.4$, $T_0=10^{-2}$, and $\zeta\Delta\mu=0$ with the units being set by $R={\cal K}=\gamma=1$.}
\label{fig1}
\end{figure}
\subsubsection{Flows driven by gradients in active stresses}
In contrast to traction-force driven flows, those driven by gradients in anisotropic active stresses depend on the dominant mechanism of dissipation. We now take $T_0=0$ and consider the different regimes in turn.
For Regimes I and II, the friction term in Eq.~\eqref{eq:forcebalanceanalytic} can be neglected and we have
\begin{align}
\partial_{r}\sigma_{\theta r}+\frac{2\sigma_{\theta r}}{r}&=0.
\end{align}
We thus have $\sigma_{\theta r}=C/r^2$ for some constant $C$. Since $\sigma_{\theta r}$ is finite at $r=0$, it follows that $C=0$. Because the corresponding component of the Ericksen stress also vanishes, $\sigma^e_{\theta r}=0$, see Eq.~\eqref{eq:StressEricksenParrt}, the boundary condition \eqref{eq:boundarycon3} is satisfied. Using Eq.~\eqref{eq:devStressTR}, we can solve $\sigma_{\theta r}=0$ for $v_{\theta r}$ and find that the azimuthal velocity $v_\theta$ is determined by
\begin{align}
\frac{1}{2}\left(\partial_rv_\theta-\frac{v_\theta}{r}\right)&=\frac{ r^2\sin(2\psi_0) \zeta \Delta\mu}{4\eta R^2+\gamma r^2 \tan(2\psi_0)^2}.\label{eq:strainratespiral}
\end{align}
In Regime I, the term proportional to $\gamma$ in Eq.~\eqref{eq:strainratespiral} can be neglected and we obtain
\begin{align}
v_{\theta}&=\frac{ \sin(2\psi_0)\zeta\Delta\mu }{4\eta R^2}r^3+D_\eta r,\label{eq:aux1}
\end{align}
where $D_\eta$ is a constant of integration. Similarly, in Regime II, the term proportional to $\eta$ in Eq.~\eqref{eq:strainratespiral} can be neglected and
\begin{align}
v_{\theta}=\frac{2\cos(2\psi_0)\zeta\Delta\mu }{\gamma \tan(2\psi_0)}r\ln{(r)}+D_\gamma r,\label{eq:aux2}
\end{align}
where $D_\gamma$ is a constant of integration. Note that both solutions respect the condition $v_\theta=0$ at $r=0$.
For vanishing friction, $\xi=0$, the integration constants $D_\eta$ and $D_\gamma$ remain undetermined. Inserting the solutions \eqref{eq:aux1} and \eqref{eq:aux2} into the force balance Eq.~\eqref{eq:forcebalanceanalytic} and treating the friction coefficient $\xi$ as small leads to the respective particular solutions
\begin{align}
v_{\theta}&=\frac{ \sin(2\psi_0)\zeta\Delta\mu}{4\eta}r\left(\frac{r^2}{R^2}-\frac{2}{3}\right)\label{eq:solvel1}\\
\intertext{in Regime I and}
v_{\theta}&=\frac{2\cos(2\psi_0)\zeta\Delta\mu}{\gamma \tan(2\psi_0)} r\ln{(r e^{1/4}/R)}\label{eq:solvel2}
\end{align}
in Regime II. Note that in both cases the azimuthal flow near the outer boundary of the circular domain is opposite to the flow close to the center. The distance from the center at which the flow changes sign is independent of the friction coefficient $\xi$. The stagnation point at which $v_\theta=0$ is located such that the total torque vanishes, see Sect.~\ref{sec:forceDesnities}. Both solutions agree well with numerical solutions obtained in Regimes I and II, see Fig.~\ref{fig2}a,b.
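The algebra behind Eqs.~\eqref{eq:solvel1} and \eqref{eq:solvel2} can be verified with a short symbolic computation (a sketch outside the manuscript's analysis pipeline; \verb|zDmu| stands for $\zeta\Delta\mu$): each solution is substituted into the strain-rate relation \eqref{eq:strainratespiral} with the appropriate term neglected, and the stagnation point of the Regime~I solution at $r=R\sqrt{2/3}$ is checked.

```python
import sympy as sp

r, R, eta, gamma, psi0, zDm = sp.symbols('r R eta gamma psi_0 zDmu',
                                         positive=True)

# Regime I, Eq. (solvel1): strain rate must equal r^2 sin(2psi0) zDmu/(4 eta R^2)
v1 = sp.sin(2*psi0)*zDm/(4*eta)*r*(r**2/R**2 - sp.Rational(2, 3))
res1 = sp.simplify(sp.Rational(1, 2)*(sp.diff(v1, r) - v1/r)
                   - r**2*sp.sin(2*psi0)*zDm/(4*eta*R**2))

# Regime II, Eq. (solvel2): strain rate must equal sin(2psi0) zDmu/(gamma tan^2(2psi0))
v2 = (2*sp.cos(2*psi0)*zDm/(gamma*sp.tan(2*psi0))
      * r*sp.log(r*sp.exp(sp.Rational(1, 4))/R))
res2 = sp.simplify(sp.Rational(1, 2)*(sp.diff(v2, r) - v2/r)
                   - sp.sin(2*psi0)*zDm/(gamma*sp.tan(2*psi0)**2))

# Stagnation point of v1 at r = R*sqrt(2/3), independent of xi
stag = sp.simplify(v1.subs(r, R*sp.sqrt(sp.Rational(2, 3))))
```

All three residuals vanish, i.e.\ both particular solutions satisfy the strain-rate equation in their respective limits.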
\begin{figure}[b]
\centering
\includegraphics{BlanchMercaderetal_Fig6.pdf}
\caption{(online color) Steady-state azimuthal velocity for flows driven by gradients in active stresses and with $R^2\ll{\cal K}/\chi$. a) Regime I with $\eta=50$, $100$, $200$ and $\xi=1$, b) Regime II with $10^4\eta=0.5$, $1$, $2$ and $\xi=10^{-2}$, c) Regime III with $\eta=100$ and $10^{-5}\xi=0.5$, $1$, $2$, and d) Regime III with $\eta=0.01$ and $10^{-3}\xi=0.5$, $1$, $2$. Purple lines: (a) Eq.~\eqref{eq:solvel1}, (b) Eq.~\eqref{eq:solvel2}, (c,d) Eqs.~\eqref{eq:solvel3} and \eqref{eq:solvel4}. Green dots: numerical solution of the dynamic equations. Other parameter values are $\chi=10^{-1}$, $\nu=-1.4$, $T_0=0$, and $\zeta\Delta\mu=10^{-2}$ with the units being set by $R={\cal K}=\gamma=1$.
}\label{fig2}
\end{figure}
Let us now turn to Regime III. There, the viscous part of the stress tensor is negligible except in a boundary layer of size $\ell$ that is determined below. Neglecting the viscous stress, the force balance equation~\eqref{eq:forcebalanceanalytic} reads
\begin{align}
-\frac{2 r}{R^2} \sin(2\psi_0)\zeta\Delta\mu&=\xi v_\theta\label{eq:solvel3}
\end{align}
and thus explicitly gives the azimuthal velocity. In the boundary layer, we introduce a new spatial variable $x=(R-r)/R$ and the velocity $\tilde v_\theta(x)=v_\theta(R(1-x))$ with $0\le x\ll1$. We then express the force balance equation~\eqref{eq:forcebalanceanalytic} in terms of these variables and keep only terms of order zero in $x$. Since $\partial_x\tilde v_\theta\sim\tilde v_\theta/(\ell/R)=R\tilde v_\theta/\ell\gg \tilde v_\theta$, the terms $\tilde v_\theta$ and $\partial_x \tilde v_\theta$ stemming from the divergence of the stress are negligible compared to $\partial_x^2\tilde v_\theta$, which further simplifies the force balance equation. Expressing the resulting equation in terms of $r$ and $v_\theta$, we obtain
\begin{align}
\ell^2\partial_r^2 v_\theta - \frac{2\sin(2\psi_0)}{R\xi}\zeta\Delta\mu &= v_\theta,
\end{align}
where the friction length $\ell$ is given by Eq.~\eqref{eq:frictionLength}.
The solution is
\begin{align}
v_\theta&=-\frac{2\zeta\Delta\mu}{R\xi} \sin(2\psi_0)+E e^{(r-R)/\ell} \label{eq:solvel4}
\end{align}
for $r\in(R-\ell,R)$. In this expression, we have neglected for simplicity the subdominant term proportional to $e^{-(r-R)/\ell}$. The integration constant $E$ is fixed by the boundary condition \eqref{eq:boundarycon3}. In the limit $\ell\ll R$ this condition takes the form
\begin{align}
\sigma_{\theta r}|_{r=R}&\approx \left(\eta+\frac{\gamma\tan{(2\psi_0)}^2}{4}\right) \partial_r v_\theta|_{r=R}-\frac{\zeta\Delta\mu}{2} \sin(2\psi_0) \\
\intertext{such that}
E&=\frac{2\zeta\Delta\mu \sin(2\psi_0)\ell}{4\eta+\gamma\tan{(2\psi_0)}^2}.
\end{align}
We have verified numerically that the solution given by Eqs.~\eqref{eq:solvel3} and \eqref{eq:solvel4} is valid for $\eta\ll \gamma\tan{(2\psi_0)}^2$ and $\eta\gg \gamma\tan{(2\psi_0)}^2$, see Fig.~\ref{fig2}c,d.
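The boundary-layer solution can also be checked symbolically (again a sketch outside the manuscript's analysis; \verb|zDmu| stands for $\zeta\Delta\mu$ and \verb|ell| for the friction length $\ell$): Eq.~\eqref{eq:solvel4} solves the boundary-layer equation exactly, and at $r=R$ the bulk solution \eqref{eq:solvel3} matches its constant part.

```python
import sympy as sp

r, R, ell, xi, psi0, zDm, E = sp.symbols('r R ell xi psi_0 zDmu E',
                                         positive=True)

c = 2*zDm*sp.sin(2*psi0)/(R*xi)             # constant forcing in the layer
v_bulk = -2*r*sp.sin(2*psi0)*zDm/(R**2*xi)  # Eq. (solvel3)
v_bl = -c + E*sp.exp((r - R)/ell)           # Eq. (solvel4)

# Boundary-layer equation ell^2 v'' - c = v is solved exactly by v_bl
res = sp.simplify(ell**2*sp.diff(v_bl, r, 2) - c - v_bl)

# At r = R the bulk solution coincides with the constant part of v_bl
match = sp.simplify(v_bulk.subs(r, R) + c)
```

Both residuals vanish identically, independently of the integration constant $E$.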
\subsection{Cell number density}
To obtain the cell number density profile, we use force balance in the radial direction, Eq.~\eqref{eq:forcebalance1}. We first compute the components of the total stress tensor. The components of the Ericksen stress are given by Eqs.~\eqref{eq:StressEricksenParrr}-\eqref{eq:StressEricksenPartt}, where the terms proportional to $B$ dominate if $R^2\ll\mathcal{K}/\chi$. The antisymmetric components of the deviatory stress vanish and its symmetric components are given by Eq.~\eqref{eq:devStressTensorRR}.
From now on, we focus on Regimes I and II. With expression~\eqref{eq:strainratespiral} for $v_{r\theta}$ we then obtain for the total stress
\begin{align}\label{eq:devstresstensorspiral}
\sigma^\mathrm{tot}_{rr}&=\frac{B}{2}\left(1-\frac{n^2}{n_0^2}\right)-\left(\frac{1}{2}-\nu'\overline{\gamma} \frac{r^2}{R^2}\right)\frac{ \cos{(2\psi_0)}\frac{r^2}{R^2} }{1+\overline{\gamma} \frac{r^2}{R^2}}\zeta\Delta\mu - \frac{r^2}{R^2}\zeta''\Delta\mu \\
\sigma^\mathrm{tot}_{r\theta}&= \sigma^\mathrm{tot}_{\theta r}=0\\
\sigma^\mathrm{tot}_{\theta\theta}&=\frac{B}{2}\left(1-\frac{n^2}{n_0^2}\right)+\left(\frac{1}{2}+\nu'\overline{\gamma} \frac{r^2}{R^2}\right)\frac{ \cos{(2\psi_0)}\frac{r^2}{R^2} }{1+\overline{\gamma} \frac{r^2}{R^2}}\zeta\Delta\mu- \frac{r^2}{R^2}\zeta''\Delta\mu,
\end{align}
where $\overline{\gamma}=\gamma \tan{(2\psi_0)}^2/(4\eta)$.
Using the above expressions in the radial component of the force balance Eq.~\eqref{eq:forcebalance1}, we can integrate once and obtain
\begin{align}
\sigma_{rr}^\mathrm{tot}&=\sigma_{rr,0}^\mathrm{tot}-\frac{r^2}{2R}\cos{(\psi_0)}T_0 +\frac{\cos(2\psi_0)}{2\overline{\gamma}}\ln{\left(\frac{1+\overline{\gamma} \frac{r^2}{R^2}}{1+\overline{\gamma}}\right)\zeta\Delta\mu}. \label{eq:totaltensionrrspirals}
\end{align}
Here $\sigma_{rr,0}^\mathrm{tot}$ is an integration constant that is fixed by the boundary condition~\eqref{eq:boundarycon4}. We now assume that the cell density deviates only little from the reference density, $|n-n_0|\ll n_0$. Equating expressions \eqref{eq:devstresstensorspiral} and \eqref{eq:totaltensionrrspirals} for $\sigma^\mathrm{tot}_{rr}$ and writing the total cell number in the circular island as $n^\mathrm{tot}\pi R^2$, we obtain to first order in $(n-n_0)/n_0$
\begin{align}
\frac{n-n^\mathrm{tot}}{n_0}&\approx\frac{1}{B}\left\{\left(\frac{r^2}{R^2}-\frac{1}{2}\right)\left[\frac{R}{2}\cos{(\psi_0)}T_0 -\zeta''\Delta\mu\right]\right.\nonumber\\
&\quad\quad\left.-\frac{\cos(2\psi_0)}{2\overline{\gamma}}\left[\frac{(1-2\nu'\overline{\gamma}\frac{r^2}{R^2})\overline{\gamma} \frac{r^2}{R^2}}{1+\overline{\gamma}\frac{r^2}{R^2}}
+\ln{\left({1+\overline{\gamma} \frac{r^2}{R^2}}\right)+\Gamma}\right]\zeta\Delta\mu\right\},\label{eq:densityspirals2}
\end{align}
where $\Gamma=\nu'(\overline{\gamma}-2)-(1-2\frac{\nu'}{\overline{\gamma}})\ln{(1+\overline{\gamma})}$. Note that unlike the case of asters the density profiles of spirals depend on couplings between the field $\mathbf{h}$ and flow gradients through $\nu'$.
In the limits $\overline{\gamma}\to0$ and $\overline{\gamma}\to\infty$ we have
\begin{align}
\frac{n-n^\mathrm{tot}}{n_0}& \approx\frac{1}{B}\left(\frac{R}{2}\cos(\psi_0)T_0 -\kappa\cos(2\psi_0)\zeta\Delta\mu-\zeta''\Delta\mu\right)\left(\frac{r^2}{R^2}-\frac{1}{2}\right).
\end{align}
Here, the constant $\kappa=1$ for $\overline{\gamma}\to0$ and $\kappa=-\nu'$ for $\overline{\gamma}\to\infty$. In these limiting cases, we thus have parabolic density profiles, which differ from the cell number density for asters, Eq.~\eqref{eq:densityasters2}, only in a global pre-factor.
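The two limits of the density profile can be verified numerically (a quick aside, not part of the fitting pipeline; \verb|gbar| stands for $\overline{\gamma}=\gamma\tan(2\psi_0)^2/(4\eta)$, \verb|rho| for $r^2/R^2$, \verb|nup| for $\nu'$, and the chosen parameter values are arbitrary illustrations): evaluating the $\zeta\Delta\mu$ contribution of Eq.~\eqref{eq:densityspirals2} at very small and very large $\overline{\gamma}$ reproduces the parabolic profiles with $\kappa=1$ and $\kappa=-\nu'$, respectively.

```python
import numpy as np

def F(gb, rho, psi0, nup):
    """zeta*Delta_mu contribution to (n - n_tot)/n_0 in Eq. (densityspirals2)."""
    Gamma = nup*(gb - 2) - (1 - 2*nup/gb)*np.log(1 + gb)
    return -np.cos(2*psi0)/(2*gb)*((1 - 2*nup*gb*rho)*gb*rho/(1 + gb*rho)
                                   + np.log(1 + gb*rho) + Gamma)

rho, psi0, nup = 0.3, 0.5, -0.7                  # illustrative values
parab = lambda kappa: -kappa*np.cos(2*psi0)*(rho - 0.5)

# gbar -> 0: kappa = 1;  gbar -> infinity: kappa = -nu'
err0 = abs(F(1e-4, rho, psi0, nup) - parab(1.0))
errinf = abs(F(1e6, rho, psi0, nup) - parab(-nup))
```

Both errors are tiny (of order $\overline{\gamma}$ and $1/\overline{\gamma}$, respectively), in line with the limiting expressions quoted above.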
\subsection{Force densities}
\label{sec:forceDesnities}
We end the discussion of spirals by determining the momentum that the monolayer exchanges with the environment in this configuration. As in the previous section, we consider only the Regimes I and II, where friction between the monolayer and the substrate is negligible. The force exerted by the monolayer on the substrate is
\begin{align}
\mathbf{t}&=-T_0\cos(\psi_0)\frac{r}{R}\mathbf{\hat{r}}.
\end{align}
At the confinement boundary $r=R$ and to first order in $n^\mathrm{tot}/n_0$, the local force density $\mathbf{f}_o$ per unit length is
\begin{align}
\mathbf{f}_o &= -\mathsf{\sigma}^\mathrm{tot}(r=R)\cdot\hat{\mathbf{r}}\\
&=\left[\frac{R}{4}\cos{(\psi_0)}T_0
-\frac{ \nu'}{\nu} \left(\frac{\overline{\gamma}-2}{2\overline{\gamma}}+\frac{\ln{(1+\overline{\gamma})}}{\overline{\gamma}^2}\right)\zeta\Delta\mu
+\frac{\zeta''}{2}\Delta\mu
+B\left(\frac{n^\mathrm{tot}-n_0}{n_0}\right)\right]\hat{\mathbf{r}}. \label{eq:forceouterspiral}
\end{align}
As there are no azimuthal components of the force densities, the total force and torque on the system vanish, Eqs.~\eqref{eq:Ftotal} and \eqref{eq:Mtotal}.
In the presence of a small friction term, the force exerted by the monolayer on the substrate is $\mathbf{t}=-T_0\cos(\psi_0)r\mathbf{\hat{r}}/R+\xi v_\theta \hat{\bm{\theta}}$, which implies the presence of local forces and torques. The velocity $v_\theta$ is given by Eq.~\eqref{eq:solvel1} in Regime I and by Eq.~\eqref{eq:solvel2} in Regime II. The total force, Eq.~\eqref{eq:Ftotal}, still vanishes due to symmetries, whereas the total torque, Eq.~\eqref{eq:Mtotal}, vanishes because the contributions from clockwise and counter-clockwise flows compensate each other.
We can generalize expression~\eqref{eq:forceinneraster} for the force exerted by the monolayer on a pillar in the center of the island obtained for asters to the case of spirals. Making the same assumptions as in Sect.~\ref{sec:asters}, we have
\begin{align}
\mathbf{f}_i&=\mathsf{\sigma}^\mathrm{tot}(r)\cdot\hat{\mathbf{r}}\\
& =\left\{-\frac{R}{2}\cos(\psi_0)T_0 \left(\frac{r^2}{R^2}-\frac{1}{2}\right)\right.\nonumber\\
&\quad\quad\quad+\frac{\cos(2\psi_0)}{2\overline{\gamma}}\left[\ln\left(1+\overline{\gamma}\frac{r^2}{R^2}\right)+\frac{2\nu'-\overline{\gamma}}{\overline{\gamma}}\ln(1+\overline{\gamma})+\nu'(\overline{\gamma}-2)\right]\zeta\Delta\mu\nonumber\\
&\quad\quad\quad\left.-\frac{\zeta''}{2}\Delta\mu-B\left(\frac{n^\mathrm{tot}-n_0}{n_0}\right)\right\}\hat{\mathbf{r}}
\end{align}
In Regimes I and II we obtain parabolic force profiles similar to the case of asters, see Eq.~\eqref{eq:forceinneraster}, with rescaled coefficients. Note that similarly to the cell number density, the force on the pillars depends on the coupling between the field $\mathbf{h}$ and flow gradients via $\nu'$.
\section{Characterization of myoblast monolayers}
We now use the framework developed above to analyze monolayers of C2C12 myoblasts. To determine their physical properties, we analyze two different situations. First, we study the organization of cells around topological defects in extended confluent layers. Through our analysis, we constrain the Frank elastic constants, which characterize splay and bend deformations of the orientational order field. Second, we examine spiral arrangements of monolayers confined to small circular domains. This analysis allows us to comprehensively determine the material parameters of myoblast monolayers. For experimental details, we refer to Ref.~\cite{PauScience}.
\subsection{Nematic elastic moduli}\label{sec:FrankConstants}
In the following we determine the ratio of the nematic elastic constants for extended confluent C2C12 monolayers. In this situation, the cells exhibit long-ranged orientational order and arrange into patterns similar to passive nematic liquid crystals~\cite{Duclos2017}. The nematic organization is evidenced for instance by the presence of half-integer topological defects~\cite{PauScience}. We capture the nematic order by the director field $\mathbf{n}$ and analyze its configurations around +1/2 topological defects in terms of an equilibrium approach to nematic liquid crystals. Similar approaches were used in the context of synthetic or biological liquid crystals~\cite{Brugues2008,Zhang2017}.
For a two-dimensional nematic liquid crystal with director field $\mathbf{n}$, the elastic energy associated with distortions of the orientational order is
\begin{align}
\mathcal{F}&=\int_\mathcal{A}\left\{\frac{\mathcal{K}_1}{2}\left(\nabla\cdot \mathbf{n}\right)^2+\frac{\mathcal{K}_3}{2}\left(\mathbf{n}\times\left(\nabla\times\mathbf{n}\right)\right)^2\right\}da \label{eq:freeenergyanisotropic}
\end{align}
with Frank elastic constants $\mathcal{K}_1$ and $\mathcal{K}_3$. They, respectively, quantify the energetic costs of splay and bend deformations~\cite{DeGennes1995}.
The equilibrium director configuration is determined by minimizing the energy \eqref{eq:freeenergyanisotropic}. Near a topological defect, the solution is given by~\cite{dzyaloshinskii1970theory}
\begin{align}
\theta&=p\int_0^{\phi-\theta}\sqrt{\frac{1+\epsilon \cos{(2x)}}{1+\epsilon p^2 \cos{(2x)}}}dx, \label{eq:directorfield}
\end{align}
where the elastic anisotropy parameter is $\epsilon=(\mathcal{K}_1-\mathcal{K}_3)/(\mathcal{K}_1+\mathcal{K}_3)$, for which there is a one-to-one correspondence with the ratio ${\cal K}_1/{\cal K}_3$. Furthermore, $\phi$ denotes the angle of the director $\mathbf{n}$ with respect to a fixed axis and $\theta$ is the azimuthal angle with respect to the defect center, Fig.~\ref{figS1}a. The fixed axis is chosen such that $\phi(\theta=0)=0$. Note that Eq.~\eqref{eq:directorfield} is independent of the radial coordinate $r$, Fig.~\ref{figS1}a. Finally, $p$ is a constant that is determined by the condition that $\phi$ is a single-valued function of $\theta$, which leads to
\begin{align}
\pi=(s-1)p\int_0^\pi\sqrt{\frac{1+\epsilon \cos{(2x)}}{1+\epsilon p^2 \cos{(2x)}}}dx, \label{eq:boundarycondition}
\end{align}
where $s$ corresponds to the topological charge of the defect. Figure~\ref{figS1}b shows $\phi(\theta)$ for a $s=+1/2$ topological defect and for varying $\epsilon$.
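The constant $p$ and the correspondence $\mathcal{K}_1/\mathcal{K}_3=(1+\epsilon)/(1-\epsilon)$ can be computed numerically, as sketched below (an illustrative aside; the root bracket is an assumption valid for small $|\epsilon|$, where $1+\epsilon p^2\cos(2x)$ stays positive, and in the one-constant limit $\epsilon=0$ one recovers $p=-2$, i.e.\ $\phi=\theta/2$ for $s=+1/2$).

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def defect_charge_condition(p, eps, s):
    """Residual of Eq. (boundarycondition): (s-1) p * integral - pi."""
    integrand = lambda x: np.sqrt((1 + eps*np.cos(2*x))
                                  / (1 + eps*p**2*np.cos(2*x)))
    I, _ = quad(integrand, 0.0, np.pi)
    return (s - 1.0)*p*I - np.pi

def solve_p(eps, s=0.5):
    # for s = +1/2 the prefactor (s-1) is negative, so p < 0;
    # the bracket is chosen for small |eps| (illustrative assumption)
    return brentq(defect_charge_condition, -3.0, -0.1, args=(eps, s))

# one-constant limit: p = -2, i.e. phi = theta/2 for a +1/2 defect
p_iso = solve_p(0.0)

# elastic anisotropy <-> ratio of Frank constants, as in the Fig. 7 caption
K_ratio = lambda eps: (1 + eps)/(1 - eps)   # K1/K3
```

For instance, `K_ratio(0.3)` gives approximately 1.86 and `K_ratio(-0.6)` gives 0.25, matching the values quoted in the caption of Fig.~\ref{figS1}.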
\begin{figure}[b]
\centering
\includegraphics[]{BlanchMercaderetal_Fig7.pdf}
\caption{(online color) Half-integer topological defects in C2C12 myoblast monolayers. a) Schematic representation of the director field for a $+1/2$ topological defect. b) Theoretical profile $\phi(\theta)$, Eq.~\eqref{eq:directorfield}, with $s=+1/2$ for varying $\epsilon$ as indicated in the legend. The ratio of Frank constants is: $\mathcal{K}_1/\mathcal{K}_3=\{0.25,0.54,1,1.86,4.\}$ for $\epsilon=\{-0.6,-0.3,0,0.3,0.6\}$. c) Representative experimental curves $\phi^e(\theta)$ for varying radial distance $r$ as indicated in the legend. d) Fitted ratio $\mathcal{K}_1/\mathcal{K}_3$ as a function of the radial coordinate $r$. Error bars correspond to the std of all values of $\epsilon$ that lead to $\mathcal{E}<1.1\mathcal{E}_{min}$.} \label{figS1}
\end{figure}
For extended C2C12 monolayers, we obtained the experimental values $\phi^e$ by first determining the director field of the monolayer using structure factor methods~\cite{Puspoki2016}, see Methods in Ref.~\cite{PauScience}. We then averaged the director orientation over time for $N>100$ distinct $+1/2$ topological defects. For the overall average, we fixed the radial coordinate $r$ and thus obtained average profiles for different radial distances, see Fig.~\ref{figS1}c. Within the experimental error, the director orientation did not depend on $r$, which is in agreement with the theory. We fitted the solution \eqref{eq:directorfield} for $\phi$ to the experimental data by using the elastic anisotropy $\epsilon$ as the only fit parameter. The parameter $\epsilon$ was obtained by minimizing the error function
\begin{align}
\mathcal{E}=\int_0^{2\pi}|\phi(\theta)-\phi^e(r,\theta)|d\theta.\label{eq:funcioerror}
\end{align}
We attributed an error to this value as the standard deviation (std) of all values of $\epsilon$ that lead to $\mathcal{E}<1.1\mathcal{E}_{min}$, where $\mathcal{E}_{min}$ is the absolute minimum.
The values of $\mathcal{K}_1/\mathcal{K}_3$ thus obtained are presented in Fig.~\ref{figS1}d as a function of the radial distance $r$ with respect to the defect center. Although there is some tendency of the ratio $\mathcal{K}_1/\mathcal{K}_3$ to increase with $r$, there is no significant difference between the values of this ratio for different radii. The value averaged over all experimental data is $\mathcal{K}_1/\mathcal{K}_3=0.95\pm 0.10$ (mean$\pm$std). We conclude that the Frank elastic constants $\mathcal{K}_1$ and $\mathcal{K}_3$ are equal within the experimental error. This justifies our choice of the one-constant approximation made in Eq.~\eqref{eq:freeenergy}, where $\mathcal{K}=\mathcal{K}_1=\mathcal{K}_3$.
\subsection{Determination of material parameters}\label{sec:Fittingprocedure}
In order to determine the material parameters of C2C12 myoblast monolayers, we solve the full dynamic equations for a broad range of parameters numerically, see App.~\ref{sec:numerics}, and compare the velocity and polarization fields obtained in this way to our experimental data. Specifically, we used data from spirals on islands with radius $R=50~\mu$m, 100~$\mu$m, and 150~$\mu$m for the velocity $v_\theta$ and the polar order parameter $S$. For the polarization angle $\psi$, we used data from spirals on islands with a fixed radius $R=100~\mu$m.
The difference between the numerical and experimental fields is quantified via an error function $\mathcal{E}$ that is given below. The parameter set that gives the minimal error $\mathcal{E}_\mathrm{min}$ then provides the sought-for material parameters. We will determine confidence intervals for these parameter values by considering the range of parameter values that yield an error within 10\% of the minimal error, that is, for which $\mathcal{E}<1.1\mathcal{E}_\mathrm{min}$.
The numerical solutions are computed after making the dynamic equations dimensionless. To this end, we use the radius $R$ of the smallest island as the length scale, $\mathcal{K}$ as the energy scale, and $\mathcal{K}/(R\gamma)$ as the velocity scale. The flow alignment parameter $\nu=1/\cos(2\psi_0)$ can be directly inferred from the angle $\psi=\psi_0$ between the polarization vector and the radial direction, Fig.~\ref{fig3a}. The average angle is $\psi=76\pm13^\circ$, which leads to $\nu=-1.1\pm0.3$ (mean$\pm$std, $N=12$). For the numerical calculations, we used $\nu=-1.2$. This leaves us with 5 dimensionless parameters to determine: $\chi R^2/\mathcal{K}$, $\eta/\gamma$, $\xi R^2/\gamma$, $\zeta\Delta\mu R^2/\mathcal{K}$, and $T_0R^3/\mathcal{K}$. In the remainder of this section, we will use the same notation for the nondimensionalized parameters as for the original ones.
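The quoted value of $\nu$ follows directly from the measured angle via $\nu=1/\cos(2\psi_0)$, as the following quick numerical aside illustrates (the input $\psi=76\pm13^\circ$ is taken from the text; propagating the spread through the nonlinear relation yields an asymmetric interval consistent with $\nu\approx-1.1\pm0.3$):

```python
import numpy as np

# flow alignment parameter from the steady-state condition nu*cos(2*psi_0) = 1
psi_mean, psi_std = 76.0, 13.0                 # polarization angle (deg)
nu = lambda psi: 1.0/np.cos(np.radians(2*psi))

nu_mean = nu(psi_mean)                         # approximately -1.13
nu_lo = nu(psi_mean - psi_std)                 # psi = 63 deg
nu_hi = nu(psi_mean + psi_std)                 # psi = 89 deg
```

The central value rounds to $\nu=-1.1$, as used in the text.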
\begin{figure}[t]
\centering
\includegraphics[]{BlanchMercaderetal_Fig8.pdf}
\caption{(online color) Probability density of the polarization angle with respect to the radial direction $\psi$. The data was obtained from C2C12 monolayers in spiral configurations that were confined to an island of $100$~$\mu$m radius ($N=12$).}\label{fig3a}
\end{figure}
We computed solutions for parameters in the range $(\chi, \eta,\xi,|\zeta\Delta\mu|,|T_0|)\in(0.2,5)\times(10^{-1},10^1)\times(10^{-1},10^1)\times(10^{-4},10^{-2})\times(10^{-4},10^{-2})$, where $\zeta\Delta\mu$ and $T_0$ can take either sign. As error function we used
\begin{align}
\mathcal{E}&=\sum_i |v_{\theta,i}^{e}-v_{\theta,i}|\Delta r_i+\sum_i|S_i^{e}-S_i| \Delta r_i.\label{eq:errorfunction}
\end{align}
Here, the superscript `e' indicates values averaged over at least $N=5$ experiments, and the index $i$ indicates that samples are taken at discrete radial positions $r_i$. Furthermore, $\Delta r_i=r_{i+1}-r_i$ is related to the experimental spatial resolution and $\Delta r_i\sim 5~\mu$m. In Figure~\ref{fig4}, we present various cuts through the parameter space and indicate the regions where $\mathcal{E}<1.1\mathcal{E}_\mathrm{min}$.
\begin{figure}[t]
\centering
\includegraphics[]{BlanchMercaderetal_Fig9.pdf}
\caption{(online color) Parameter values leading to an error $\mathcal{E}<1.1\mathcal{E}_\mathrm{min}$ for the error function~\eqref{eq:errorfunction}. The cuts of the parameter space are: a) $T_0$ vs $\zeta\Delta\mu$, b) $\eta$ vs $\xi$, c) $\chi$ vs $\zeta\Delta\mu$, and d) $\zeta\Delta\mu\xi/T_0\eta$ vs $\xi$. The units are fixed by $\mathcal{K}=\gamma=R=1$, and $\nu=-1.2$. Gray areas indicate parameter regions that were not analyzed. Green squares: active stress dominated region, dark green star: local minimum. Magenta circles: traction force dominated region, dark magenta star: global minimum. }\label{fig4}
\end{figure}
\subsection{Myoblast monolayers confined to circular domains}
\label{sec:monolayers}
In this section, we discuss the parameter values determined by the approach described in the previous section using our experiments of C2C12 monolayers on circular domains~\cite{PauScience}. Let us start by setting the units of our experiments. The length scale is set by the radius of the smallest island $R=50~\mu$m. The velocity scale is set by the azimuthal flow velocity at the edges of the island to $30~\mu$m/h. Finally, the energy scale is set by the stress exerted on pillars of radius 40~$\mu$m times $R^3$, that is, 10~kPa$\times1.25\cdot10^5~\mu$m$^3=1.25\cdot10^3~\mu$N$\mu$m.
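The unit bookkeeping for the energy scale can be reproduced in a few lines (an illustrative aside; values are taken from the text):

```python
# Energy scale: stress exerted on pillars (10 kPa) times R^3 with R = 50 um
R = 50.0                  # um, radius of the smallest island
stress = 10e3             # Pa; note 1 Pa = 1e-6 uN/um^2

# stress * R^3 expressed in uN*um
energy = stress*1e-6*R**3  # = 10 kPa * 1.25e5 um^3
```

This gives $1.25\cdot10^3~\mu$N$\mu$m, as stated above.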
The data presented in Figure~\ref{fig4} readily reveals several constraints on the parameter values. First of all, $T_0>0$, Fig.~\ref{fig4}a, which shows that the azimuthal velocity $v_\theta$ is in the direction of the azimuthal component of the polarization field $\mathbf{p}$. Second, the penetration length of the polar order parameter $\sqrt{{\cal K}/\chi}$ is larger than $25$~$\mu$m, Fig.~\ref{fig4}c. It is thus at least of the same order as the confinement radii in our experiments, such that the orientational order induced by the boundaries propagates into the center of the island.
Further inspection of Fig.~\ref{fig4} shows two disjoint regions in parameter space corresponding to solutions with distinct physical properties. In both cases, the parameters yield close fits to the polar order parameter $S$ and the azimuthal velocity $v_\theta$ measured in our experiments, see Fig.~\ref{fig5}. The two regions are narrow in several directions, meaning that the corresponding combinations of the dimensionless parameters are well determined by our experimental data. This is the case, for example, for $\zeta\Delta\mu R\xi/(\eta T_0)$, see Fig.~\ref{fig4}d and Table~\ref{tab:table1}. The directions that are less constrained still provide upper or lower bounds on our dimensionless parameters, see Table~\ref{tab:table1}.
\begin{figure}[t]
\centering
\includegraphics[]{BlanchMercaderetal_Fig10.pdf}
\caption{(online color) Theoretical fits to experimental data. a) Polar order parameters $S$ and b) azimuthal velocity $v_\theta$ as a function of the radial distance $r$. Mean theoretical profiles for the active stress dominated parameter region in solid magenta and for the traction force dominated parameter region in dashed green, see Fig.~\ref{fig4} and Table~\ref{tab:table1}. Blue: experimental profiles ($N=(11,12,5)$ for confining domain radius $(50,100,150)$~$\mu m$). Error bars in theoretical fits correspond to the std of parameter values that lead to $\mathcal{E}<1.1\mathcal{E}_{min}$ and in experimental curves to sem. Profiles for three different confinement radii $R=50$, $100$, and $150$~$\mu$m are shown. The theoretical curves are endowed with physical units such that $S(R)=1$ and $v_\theta(R)=21.4~\mu$m/h for $R=50~\mu$m.}\label{fig5}
\end{figure}
\begin{table}[b]
\begin{ruledtabular}
\begin{tabular}{lcc}
&Active stress& Traction force \\
& dominated & dominated\\
$T_0 R/|\zeta\Delta\mu|$ with $T_0>0$ & $<0.6^*$ & $>16$ \\
$\sqrt{\eta/\xi}/R$ & $>0.5$ &$<0.24$ \\
$\sqrt{{\cal K}/\chi}/R$ & $>1$ & $(0.4,2)$ \\
$\zeta\Delta\mu R\xi/\eta T_0 $ with $T_0>0$ & $3.2\pm 1.3$ & $0.5\pm1.4$ \\
$\nu$ & $-1.1\pm0.3 $ & $-1.1\pm0.3 $
\end{tabular}
\end{ruledtabular}
\caption{Table of material parameters for the solutions in Fig.~\ref{fig5}. The errors correspond to std. To restore length units $R=50$~$\mu$m. $^*$ with $\zeta>0$. \label{tab:table1}}
\end{table}
The parameter region for the solid magenta fits in Fig.~\ref{fig5} corresponds to a mechanical regime where the anisotropic active stress $\zeta\Delta\mu$ is the dominating active mechanism, $T_0R/|\zeta\Delta\mu|< 0.6$. In this active stress dominated regime, the length scale $\sqrt{\eta/\xi}$, which is determined by the dissipative mechanisms, is bounded from below by 25~$\mu$m. The penetration length of the polar order is $\sqrt{\mathcal{K}/\chi}>50$~$\mu$m. There are two velocity scales associated with the two active mechanisms, $\zeta\Delta\mu R/\eta$ and $T_0/\xi$. The ratio between these two scales $\zeta\Delta\mu R\xi/\eta T_0 =3.2\pm 1.3$ shows that the flows are mainly generated by anisotropic active stresses.
The parameter region for the dashed green fits in Fig.~\ref{fig5} corresponds to a mechanical regime, where the traction force $T_0$ is the dominating active mechanism, $T_0R/|\zeta\Delta\mu|> 16$. In this traction force dominated regime, the length scale $\sqrt{\eta/\xi}$ is bounded from above by $12$~$\mu$m. The penetration length of the polar order is limited $22~\mu$m$<\sqrt{\mathcal{K}/\chi}<112$~$\mu$m. The ratio of the two velocity scales $\zeta\Delta\mu R\xi/\eta T_0 =0.5\pm 1.4$ shows that the flows are mainly generated by traction forces.
Although the two parameter regions give comparably good fits to the polar order parameter and the azimuthal velocity in spirals, their mechanical characteristics are distinct. An important difference between the two regions is exhibited in the steady state force density and cell number density profiles of asters. In the active stress dominated region, the cell number density increases towards the center, whereas it decreases towards the center in the traction force dominated regime, see Fig.~\ref{fig6}a.
\begin{figure}[t]
\centering
\includegraphics{BlanchMercaderetal_Fig11.pdf}
\caption{(color online) Theoretical fits of steady state profiles for asters. a) Cell number density $n$, b) radial force density as a function of the radial distance $r$. Averaged experimental profiles (blue, $N=10$ in (a) and $N=3$ in (b)), mean fit in the active-stress dominated (magenta, full lines) and in the traction dominated parameter region (green, dashed lines). The theoretical solutions are Eq.~\eqref{eq:densityasters2} in (a) and Eq.~\eqref{eq:forceinneraster} in (b). Parameters are given in Tab.~\ref{tab:table2}. We used $\zeta''\Delta\mu=0$. Error bars in theoretical fits correspond to std of all parameter values with $\mathcal{E}<1.1\mathcal{E}_{min}$ and in experimental curves to sem. In Fig.~3 of Ref.~\cite{PauScience}, the compressional stresses correspond to minus the radial force density in panel (b).}\label{fig6}
\end{figure}
Furthermore, the force density is pointing towards the center of the circular domain in the active stress dominated region, whereas it is pointing outwards in the traction force dominated region, see Fig.~\ref{fig6}b. In our experiments, we observed an increase of the cell number density in the center compared to the periphery, see Fig.~3 in Ref.~\cite{PauScience}. A further sign of cell accumulation in the center was the formation of mounds, see Figs.~1, 4 in Ref.~\cite{PauScience}. When elastic pillars were placed in the center of the circular domain, we observed compression of these structures, which is again compatible with the active stress dominated region, see Fig.~3 in Ref.~\cite{PauScience}.
\begin{table*}[t]
\begin{ruledtabular}
\begin{tabular}{cccccccc}
$T_0(\text{Pa})$ & $\zeta\Delta\mu(\text{kPa}~\mu\text{m})$ & $\eta(\text{kPa h}~\mu\text{m})$ & $\xi(\text{Pa h}/\mu\text{m})$ & $\sqrt{{\cal K}/\chi}(\mu\text{m})$ & $\nu$ & $B/n_0(\text{kPa}~\mu\text{m}^3)$ & $n^{\mathrm{tot}}(10^{-3}~\mu\text{m}^{-2})$ \\
$<600\pm60$ & $48\pm4$ & $34\pm8$ & $<40\pm20$ & $>50$ & $-1.1\pm0.3$ & $4600\pm800$ & $8.2\pm0.5$
\end{tabular}
\end{ruledtabular}
\caption{Table of material parameters for active stress dominated solutions. To convert 3d material parameters into 2d material parameters we use a cell monolayer height of $10$~$\mu$m. Error bars correspond to std of all parameter values with $\mathcal{E}<1.1\mathcal{E}_{min}$ except for $\nu$ (mean$\pm$std). \label{tab:table2}}
\end{table*}
For the fits presented in Fig.~\ref{fig6}b, we have imposed that the isotropic stress $\zeta''\Delta\mu$ vanishes. If this value were used as a fitting parameter, a qualitative agreement between the theory and the experiment could be achieved in the traction-force dominated regime, such that a discrimination between the two regimes might appear not to be possible based on these fits. However, in that case, the isotropic stress $\zeta''\Delta\mu$ needs to be comparable to $T_0R$ to achieve the same order of magnitude for the stress exerted on the pillars, see Eq.~\eqref{eq:forceinneraster}. We conclude that traction forces cannot be the dominating mechanism for generating pillar deformations.
To obtain the material parameters in the active stress dominated region, Table~\ref{tab:table2}, we combined the analysis from the polarization and velocity fields in spirals, Fig.~\ref{fig5}, with the cell number density and stress fields in asters, Fig.~\ref{fig6}. Specifically, we restored the velocity units by setting $v_\theta(r=R)=21.3$~$\mu$m/h for $R=50$~$\mu$m and obtained the ratio $\zeta\Delta\mu /\eta=1.4\pm0.3$~h$^{-1}$. With a similar fitting procedure to that explained in Sec.~\ref{sec:Fittingprocedure}, we fitted the theoretical steady state profiles for asters, Fig.~\ref{fig6}, and obtained the parameters $B/n_0$, $n^\mathrm{tot}$, and $\zeta\Delta\mu$ listed in Table~\ref{tab:table2}. To transform the stress that cells exerted on deformable pillars into 2d cell monolayer stresses, we considered that the height of the monolayer was $10$~$\mu$m. Combining these new results with those from Table~\ref{tab:table1}, we obtained the material parameters from Table~\ref{tab:table2}.
\subsection{Comparison to other cell monolayers and conditions}
Next, we discuss how our estimates of the material parameters compare to other cellular systems or conditions. First, for contractile epithelial monolayers, $\zeta\Delta\mu<0$, an analog of a de-wetting transition was found~\cite{Perez-Gonzalez2019}. This transition was controlled by the length scale $-\zeta\Delta\mu/T_0$. In our case, such a transition is not expected to occur, because in both parameter regions the system is either dominated by traction forces or by extensile active stresses, Table~\ref{tab:table1}.
Previous experiments had identified C2C12 monolayers as being contractile ($\zeta\Delta\mu<0$). This conclusion was drawn from the dynamics of $+1/2$ topological defects~\cite{Kawaguchi2017}. In other experiments, based on the direction of the cellular shear flows with respect to the orientation of the cell bodies, it was concluded that these monolayers are extensile ($\zeta\Delta\mu>0$)~\cite{Duclos2018}. In our experiments, the observed flows in spirals are compatible with extensile active stresses in the active stress dominated regime. In the traction force dominated regime, both contractile and extensile active stresses were compatible with the flows, see Fig.~\ref{fig4}a. Further work is necessary to understand the difference between these experiments.
The flow-alignment parameter $\nu=-1.1\pm 0.3$ controls the re-orientation of the polarization field $\mathbf{p}$ in response to shear flows. This value is similar to the typical range for passive liquid crystals~\cite{DeGennes1995}. In the Drosophila wing, this parameter was estimated to be $-10<\nu<-1$~\cite{Aigouy2010}.
The mechanics of individual C2C12 cells was assessed by confining them to micropatterns of varying geometries~\cite{Bruyere2019}. There, it was found that traction forces of elongated C2C12 cells were concentrated at the distal ends of the cell body and pointed inwards. Depending on the cell geometry, the corresponding stresses ranged between $100$ and $1000$~Pa. For monolayers of other elongated cell types, the force per unit length associated with intracellular interactions was of the order of 10~kPa~$\mu$m~\cite{Vincent2015}. In our experiments, we observed that confluent monolayers compressed elastic pillars with stresses of the order of 1-10~kPa.
For spreading epithelial monolayers, the friction length was found to be between $100$ and $1000$~$\mu$m~\cite{Blanch-Mercader2017,Moitrier2019}. Such large values result from stable cell-cell junctions formed by epithelial cells. For cell types lacking such junctions, like C2C12 myoblasts, the friction length was found to be smaller, $10-40$~$\mu$m~\cite{Duclos2018}. The latter values are of the same order of magnitude as the bounds we found in both parameter regions for $\sqrt{\eta/\xi}$, which is smaller than the friction length $\ell$ given by Eq.~\eqref{eq:frictionLength}, see Table~\ref{tab:table1}.
Also the penetration length of the polarity field $\sqrt{{\cal K}/\chi}$ was measured in epithelial monolayers~\cite{Blanch-Mercader2017,Perez-Gonzalez2019}. It was found to be between $10$ and $100$~$\mu$m, which is of the same order as in our measurements. When epithelial monolayers were confined to circular islands with radii comparable to $\sqrt{{\cal K}/\chi}$, collective rotation was found~\cite{Doxzen2013,Deforet2014,Segerer2015}. However, in these cases, no evidence of topological defects organizing these flows was reported.
\section{Extensions}
In this section, we discuss the effects of extensions to our dynamical system. In particular, we consider nematic traction forces and active alignment.
\subsection{Nematic traction forces}\label{sec:ActiveNematicForce}
In the force balance Eq.~\eqref{eq:forcebalance}, we considered that the active forces exerted by the monolayer onto the substrate result from processes with polar symmetry, $T_0\mathbf{p}$. In principle, processes with nematic symmetry, which remain invariant under the operation $\mathbf{p}\rightarrow-\mathbf{p}$, could also contribute to these forces. In some cases, these contributions have been shown to be of the same order as the polar contributions~\cite{Maitra2018}. We now discuss the effects of such terms on spirals and asters.
Up to second order in $\mathbf{p}$ and first order in derivatives, the nematic contributions to the right hand side of the force balance equation~\eqref{eq:forcebalance} can be written as
\begin{align}
\partial_\beta \left(p_\alpha p_\beta-\frac{1}{2}p_\gamma p_\gamma \delta_{\alpha\beta}\right)T_1+ \partial_\beta\left(p_\gamma p_\gamma \delta_{\alpha\beta}\right)T_2+\left( p_\alpha \partial_\beta p_\beta- p_\beta \partial_\beta p_\alpha\right)T_3. \label{eq:App2}
\end{align}
Addition of the first two terms to the force balance equation amounts to a redefinition of the coupling coefficients $\zeta$ and $\zeta''$ in the constitutive equation~\eqref{eq:devstresstensor} for the deviatory stress, $\zeta\Delta\mu\rightarrow \zeta\Delta\mu+T_1$ and $\zeta''\Delta\mu\rightarrow\zeta''\Delta\mu+T_2$. Due to substrate interactions, a contractile system can thus become extensile or vice versa, but the terms proportional to $T_1$ and $T_2$ do not introduce qualitatively new behavior.
The antisymmetric term proportional to $T_3$, in contrast, cannot be absorbed in the constitutive equation~\eqref{eq:devstresstensor}. In principle, this term can thus lead to new effects compared to our original system. Let us evaluate its effects on spirals and asters in small confinements with $R^2\ll\mathcal{K}/\chi$. Expressing the components of $\mathbf{p}$ in terms of the nematic order $S$ and the angle $\psi$ of the director with the radial direction, it reads
\begin{align}
\left(\frac{S^2}{r}\hat{\mathbf{r}}-S^2\partial_r\psi \hat{\bm{\theta}}\right)T_3 \label{eq:App3}.
\end{align}
For the steady-state spirals and asters considered above, we have $S=r/R$ and $\psi=const$, such that the term reduces to $T_3 r\hat{\mathbf{r}}/R^2$, which has the same form as the term proportional to $\zeta''\Delta\mu$ on the left hand side of the force balance equation~\eqref{eq:forcebalance1}. We conclude that nematic traction forces do not introduce new effects in spirals and asters aside from possibly introducing additional surface terms.
\subsection{Active alignment}\label{sec:activeAlignment}
In the constitutive equation for the dynamics of the polarization field, Eq.~\eqref{eq:dinamicadirector}, we have neglected a coupling to the chemical thermodynamic force $\Delta\mu$. Explicitly, the term would be of the form $\mathbf{p}\lambda\Delta\mu$. Depending on the sign of the phenomenological constant $\lambda$, this term favors the generation or inhibition of polar order by active processes~\cite{Julicher2007}. Note that this ``active alignment'' is different from spontaneously emergent orientational order by active flows~\cite{Mueller2019,Santhosh2020}.
For our choice of the free energy, see Eq.~\eqref{eq:freeenergy}, the molecular field $\mathbf{h}$ contains a term $-\chi\mathbf{p}$, such that in the dynamic equation~\eqref{eq:dinamicadirector}, the presence of active alignment can be absorbed into the parameter $\chi$ such that $\chi\rightarrow\chi-\gamma\lambda\Delta\mu$. Due to activity, the sign of the redefined $\chi$ can thus be different from that of $\chi$. However, because C2C12 monolayers confined to small circular domains exhibit a disorganized center, the pre-factor of $\mathbf{p}$ in Eq.~\eqref{eq:dinamicadirector} should be positive, as in our above analysis.
A redefinition of the parameter $\chi$ also affects the symmetric part of the deviatory stress tensor, Eq.~\eqref{eq:devstresstensor} and the Ericksen stress tensor, Eq.~\eqref{eq:Ericksencomplete}. These effects can be absorbed by a redefinition of the coupling coefficients $\zeta$ and $\zeta''$. Explicitly, $\zeta\rightarrow\zeta+\nu\lambda\gamma$, and $\zeta''\rightarrow\zeta''+\lambda\gamma(\nu'-1/2)$. We conclude that an active alignment term in the dynamic equation for the polarization field $\mathbf{p}$ does not qualitatively change the behavior of our system aside from possibly introducing additional surface terms.
\section{Discussion}
In summary, we have analyzed in detail the steady state patterns of spirals and asters of a compressible active polar fluid. We showed that isolated topological defects provide information for quantifying material parameters of cell monolayers. Small circular confinements allowed us to control the position and topological charge of such defects. In principle, other techniques could be used for this purpose, in particular, micropatterning of the topography of the substrate~\cite{Endresen2019,Turiv2020} or application of external magnetic fields~\cite{Dua1996}. These methods make it possible to impose spatiotemporal cell orientation patterns, which in our system were self-organized. Combining these approaches opens a vast range of possibilities to improve our quantitative understanding of cell monolayer mechanics.
Ideally, asters and spirals in two-dimensional nematic phases exhibit a single point, where the orientational order is ill-defined. In our experiments, cell monolayers were disorganized in a central region, see Fig.~\ref{fig001}, that increased in size with the radius of the confining domain. Order was found in a region close to the domain boundary. An alternative interpretation of the steady state aster and spiral patterns considers the ordered region to be a boundary layer. Still, the same dynamic equations could be used to analyze the data, such that our results are independent of the interpretation.
The lack of spontaneously emerging orientational order in the center of the confining domain led us to consider $\chi>0$ in the free energy \eqref{eq:freeenergy}. In extended C2C12 monolayers, however, long-range orientational order can be observed for similar cell number densities~\cite{Duclos2017,Kawaguchi2017,PauScience}. This observation suggests that in the range of domain sizes used in this work, the boundary-induced order overcomes the density-induced order. To explicitly study this competition, a description of mixed orientation, nematic and polar, would be needed.
Furthermore, in our experiments, asters appeared as the cell number increased, suggesting that cell number density is a control parameter for the transition. Indeed, when proliferation was inhibited in spiral configurations~\cite{PauScience}, asters were not observed. This effect is not captured by our theory and would require a better understanding of the physics underlying cell orientation at interfaces.
Topological defects have been suggested to be involved in morphogenetic processes~\cite{Maroudas-Sacks2020}. In a similar way to our work, one could use these defects to quantify the material properties of the tissue. Such an analysis could reveal the physical conditions underlying collective cell migration during morphogenesis and provide essential pieces of information for understanding developmental processes.
\begin{acknowledgments}
We thank Zena Hadjivasiliou for suggesting the systematic parameter sampling performed in Sect.~\ref{sec:Fittingprocedure} and Jean-Fran\c cois Joanny for discussions.
\end{acknowledgments}
\section{Introduction}\label{intro}
Beam position monitor (BPM) calibration is important for various
techniques that measure optical parameters in accelerators, such as
quadrupole errors, beta functions, and others. In this
paper a method to find those calibration factors, partially based on
the tools used in action and phase jump (APJ) analysis, is developed
for a high energy accelerator such as the LHC.
This method has three parts: the first part is used to find the
calibration factors of arc BPMs and the other two are used to find the
calibration factors of high-luminosity interaction region (IR)
BPMs. The first part uses a measured beam position $z_{meas}$ and a true beam position $z_{true}$
so that the calibration factors $C_i$ are found with
\begin{equation}
C_i = \frac{z_{meas}}{z_{true}},
\label{cal1}
\end{equation}
where $i$ is the BPM index, $z$ is the horizontal or vertical component
of the beam position and $z_{true}$ is estimated with
\begin{equation}
z_{true} = \sqrt{2 J_c \beta_r(s)} \sin\left[ \psi_r(s) -\delta_c \right],
\label{betatron}
\end{equation}
where $\beta_r(s)$ and $\psi_r(s)$ are the lattice functions with
all gradient errors included, and $J_c$ and $\delta_c$ are the action and phase
constants. Electronic noise, uncertainties in the determination of the
lattice functions, and BPM calibration factors have been identified as the
main sources of uncertainty in Eq.~(\ref{betatron}). Although it is not
possible to completely suppress the effect of these sources of
uncertainty, significant reductions can be achieved by using
average trajectories~\cite{jfc_prab17}, the most up-to-date techniques for finding
lattice functions~\cite{softwarelhc2019}, and statistical techniques as described in Sec.~IV of
reference~\cite{cardona_arxiv2020}. Several other improvements that
can be made when using Eq.~(\ref{betatron}) are studied in this paper. For
example, the sensitivity to uncertainties in $\psi_r(s)$ and
$\delta_c$ can be almost completely suppressed using multiple average
trajectories, as explained in Sec.~\ref{phase_uncert}. Coupling can
also affect the validity of Eq.~(\ref{betatron}). This effect
is studied in Sec.~\ref{coup_eff} and compared with the other known
sources of uncertainty. Then, in Sec.~\ref{coup_red} it is shown how to
build average trajectories to significantly reduce the coupling
effects. All these improvements are used in simulations for which
arc BPM gain errors are intentionally introduced and then measured to determine the accuracy of the method in
Sec.~\ref{arc_bpms}. This section also presents estimates of
calibration factors from experimental data and the effects
this calibration has on action and phase plots.
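As an orientation, the interplay of Eqs.~(\ref{cal1}) and (\ref{betatron}) can be sketched in a few lines of code; the lattice values and the action below are hypothetical placeholders, not LHC measurements.

```python
import math

def betatron_position(j_c, delta_c, beta_r, psi_r):
    """Eq. (2): model ('true') beam position at a BPM with lattice
    functions beta_r, psi_r and action/phase constants j_c, delta_c."""
    return math.sqrt(2.0 * j_c * beta_r) * math.sin(psi_r - delta_c)

def calibration_factor(z_meas, z_true):
    """Eq. (1): ratio of the measured to the model beam position."""
    return z_meas / z_true

# Hypothetical values: beta_r = 170 m, psi_r - delta_c = pi/2, J_c = 5e-9 m
z_true = betatron_position(5e-9, 0.0, 170.0, math.pi / 2.0)
z_meas = 1.02 * z_true  # a BPM reading affected by a +2% gain error
C = calibration_factor(z_meas, z_true)  # recovers the gain factor 1.02
```

In practice $z_{true}$ comes from the lattice functions with all gradient errors included, so the quality of the measured $C_i$ is tied to the quality of the optics model.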
The second and third parts of the method are introduced in Sec.~\ref{IR_bpms}
and, as in the previous case, accuracy studies are carried out using
simulations. In addition, in this section, a list of calibration factors for
the IR1 BPMs is obtained from experimental data. Finally, as an
application of the presented calibration method, the sensitivity of APJ to BPM
calibrations is evaluated in Sec.~\ref{quad_corrs}.
\section{Reducing the effect of $\psi_r(s)$ and $\delta_c$
uncertainties}\label{phase_uncert}
Equation~(\ref{betatron}) may be susceptible to $\delta_c$ uncertainties. This dependency can be minimized if $\delta_c$ is chosen
such that
\begin{equation}
\psi_r(s) -\delta_c = p \frac{\pi}{2},
\label{sin1}
\end{equation}
where $p$ is an odd integer (positive or negative). A particular average trajectory will not meet this condition for
all BPMs in the ring since $\delta_c$ is constant. However, it is possible to build an average trajectory for
every BPM in the ring such that the condition~(\ref{sin1}) can always be
met. This procedure involves the construction of several hundred average
trajectories, which can be time-consuming and resource-intensive. Instead, some average trajectories can be built with equally spaced
$\delta_c$ values, and the average trajectory for which $\delta_c$ is
closest to meeting condition~(\ref{sin1}) is chosen as the optimal
trajectory for a particular BPM. Simulations indicate that an average
trajectory whose $\delta_c$ is up to 15 degrees away from
condition~(\ref{sin1}) is still good enough to suppress any appreciable
dependence of Eq.~(\ref{betatron}) on the uncertainties of $\delta_c$. This means that only 24 average trajectories are needed. In practice,
several average trajectories out of 24 are chosen to estimate the
calibration constant for a particular BPM. The criterion for selecting these trajectories is
\begin{equation}
\left| \sin\left[ \psi_r(s) -\delta_c \right] \right| > 0.9,
\end{equation}
which still provides enough independence from the
uncertainties of $\delta_c$. It should also be noted that following this
procedure, the propagated uncertainty in Eq.~(\ref{betatron}) due to
the uncertainties of $\psi_r(s)$ also become negligible.
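The selection of near-optimal average trajectories described above can be sketched as follows; the 24 equally spaced $\delta_c$ values and the 0.9 threshold come from the text, while the example phase is an arbitrary placeholder.

```python
import math

def select_trajectories(psi_r, n_traj=24, threshold=0.9):
    """Among n_traj average trajectories with equally spaced phase
    constants delta_c, keep those for which the BPM at betatron phase
    psi_r is close to an oscillation extremum, i.e.
    |sin(psi_r - delta_c)| above the threshold."""
    spacing = 2.0 * math.pi / n_traj  # 15 degrees for n_traj = 24
    return [k * spacing for k in range(n_traj)
            if abs(math.sin(psi_r - k * spacing)) > threshold]

# For a BPM with psi_r = pi/2, six of the 24 trajectories qualify
chosen = select_trajectories(math.pi / 2.0)
```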
\section{Coupling and the action and phase constants}\label{coup_eff}
Action and phase as functions of the axial coordinate $s$ are
expected to be horizontal straight
lines with values equal to $J_c$ and $\delta_c$. However, action and
phase plots obtained with 2016 LHC turn-by-turn (TBT) data
and lattice functions measured with the most up-to-date techniques~\cite{softwarelhc2019}
show small variations, as can be seen in Fig.~\ref{apexp}.
\begin{figure}[h]
\centering
\includegraphics{f1.eps}
\caption{\label{apexp} Action and phase plots of an average trajectory obtained
from an experimental TBT data set collected at the 2016 LHC run.}
\end{figure}
These variations are a combination of slow and fast oscillations that can be
understood by simulations with different sources of errors. The
slow oscillations can be attributed to quadrupole tilt errors, as can
be seen from the red curve of Fig.~\ref{apsimu}. This curve corresponds
to the action plot of a simulated average trajectory
generated with a quadrupole tilt error distribution with 2 mrad
standard deviation.
\begin{figure}[h]
\centering
\includegraphics{f2.eps}
\caption{\label{apsimu} Action plots of simulated average trajectories
with coupling and other sources of errors.}
\end{figure}
The fast oscillations can be attributed to BPM gain
and noise errors and uncertainties related to the determination of
the lattice functions. This is confirmed by the action plot (blue curve of Fig.~\ref{apsimu}) of a
simulated TBT data set generated with error distributions with the currently
accepted rms values for the LHC: 3\% rms BPM gain
errors~\cite{bpmcal2020}, 0.1 mm rms BPM noise~\cite{malina_noise},
1\% rms uncertainties in the determination of the beta
functions~\cite{al_prst15,analyNBPM}, 6 mrads rms uncertainties in
the determination of the betatron phases~\cite{psk_ipac16}, 2 mrad rms
quadrupole tilt errors, and $5\times10^{-6}$~m$^{-2}$ rms gradient errors. It should
be noted that the amplitude of the slow oscillations can be comparable
to the amplitude of the fast oscillations, indicating that
the quadrupole tilt errors may be as important as the other sources of errors
in accurately finding $J_c$ and $\delta_c$. It should also be noted
that the amplitude of the fast oscillations in the simulations is
larger than in the experimental data. This may indicate that one
or all of the sources of these oscillations are smaller than the
currently accepted values. In Sec.~\ref{arc_bpms}, in fact, it is
found that the rms values of the gain errors are somewhat smaller than
the 3\% mentioned earlier.
Gradient errors alone shift the action and
phase plots vertically, changing the $J_c$ and $\delta_c$ values that can be
estimated from these plots. These changes, however, do not affect the
estimate of $z_{true}$ as long as the lattice functions that include
the gradient errors are used in Eq.~(\ref{betatron}). The
displacement of the action plot can be seen by comparing the red
and the green curves in Fig.~\ref{apsimu}. The green curve
is an action plot obtained with the 2 mrad rms
quadrupole tilt error distribution used in the red curve plus a
$5\times10^{-6}$~m$^{-2}$ rms gradient error distribution.
\section{ Building average trajectories to reduce the effect of
coupling}\label{coup_red}
Average trajectories are built by selecting trajectories from a
TBT data set according to (complete procedure in Sec.~V of~\cite{jfc_prab17})
\begin{equation}
\left | \tilde{\delta}_z(n_m) -\tilde{\delta}_z(n) \right | <
\frac{\pi}{2},
\label{condi}
\end{equation}
where $\tilde{\delta}_z(n)$ is the phase (as defined in~\cite{jfc_prab17}) associated with the trajectory
with turn number $n$, and
\begin{equation}
\tilde{\delta}_z(n_m) =\psi_z(s_e) -p\frac{\pi}{2},
\label{defi}
\end{equation}
where $\psi_z(s_e)$ is the nominal betatron phase at the axial position
where the average trajectory should be a maximum, and $p$ is an
odd integer (positive or negative). Regular average trajectories are
built using one-turn trajectories that satisfy the
condition~(\ref{condi}) in both planes simultaneously. As a
consequence, about a thousand of one-turn trajectories are selected from the 6600
turns contained on a TBT data set. Now, if this condition is imposed
to only one plane, the number of selected trajectories increases to
half the total number of trajectories in the TBT data set. More
importantly, the average trajectory in the plane for which the
condition is not imposed tends to be negligible. If this procedure is applied to an experimental TBT data set (the same one used to obtain Fig.~\ref{apexp}), the corresponding
average trajectory has significantly smaller oscillations in
one plane than in the other, as expected (Fig.~\ref{trajexp_max1plane}).
\begin{figure}[h]
\centering
\includegraphics{f3.eps}
\caption{\label{trajexp_max1plane} Average trajectory obtained by
selecting one-turn trajectories from experimental TBT data that satisfy the condition~(\ref{condi}) only
in the vertical plane. As a consequence,
the oscillations in the vertical plane are significantly larger than in
the horizontal plane. It is also possible to build average
trajectories with large oscillations in the horizontal plane and
significantly smaller in the vertical plane.}
\end{figure}
Significantly reducing the amplitude of the oscillations in one of the
planes also reduces the effect of linear coupling in the other plane,
as can be seen in the action and phase plots in Fig.~\ref{apexp_comp}.
\begin{figure}[h]
\centering
\includegraphics{f4.eps}
\caption{\label{apexp_comp} Action and phase plots obtained with average
trajectories built by applying condition~(\ref{condi}) to both planes and
to a single plane. The slow oscillations, corresponding to quadrupole
tilt errors, are significantly decreased with the new average trajectories.}
\end{figure}
In addition to the average trajectory with a maximum at $s_e$ (max trajectory), it is also
possible to build an average trajectory with a minimum at $s_e$ (min
trajectory). These two trajectories can be subtracted to obtain a max
trajectory that is now built from all 6600 turns in the TBT data
set.
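A minimal sketch of the single-plane selection rule, condition~(\ref{condi}), applied to synthetic turn-by-turn data; the variable names and toy numbers are illustrative only.

```python
import math

def average_trajectory(orbits, phases, target_phase):
    """Average the one-turn trajectories whose phase constant lies
    within pi/2 of the target phase, imposing the selection condition
    in a single plane.  orbits: one list of BPM readings per turn;
    phases: the per-turn phase constants delta_z(n)."""
    selected = [orbit for orbit, d in zip(orbits, phases)
                if abs(math.remainder(target_phase - d, 2.0 * math.pi))
                < math.pi / 2.0]
    n_bpm = len(orbits[0])
    return [sum(o[i] for o in selected) / len(selected)
            for i in range(n_bpm)]

# Four toy turns at a single BPM; only the first two phase constants
# fall within pi/2 of the target phase 0, so only they are averaged.
avg = average_trajectory([[1.0], [3.0], [10.0], [20.0]],
                         [0.0, math.pi / 4.0, math.pi, 5.0 * math.pi / 4.0],
                         0.0)
```

The max trajectory minus the min trajectory mentioned above would then be a plain element-wise subtraction of two such averages.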
\section{Measuring Gain Errors in Arc BPMs}\label{arc_bpms}
All improvements mentioned in previous sections are used to find arc
BPM calibration factors with Eqs.~(\ref{cal1})~and~(\ref{betatron}), where all relevant
variables are found from average trajectories derived from TBT
data and lattice functions. Both simulations and experimental
analyses are presented in the following subsections, where two
conventions are adopted: first, the ``measured''
calibration factors correspond to the factors obtained with
Eqs.~(\ref{cal1})~and~(\ref{betatron}) regardless of whether simulated
or experimental data is used, and second, the gain errors
$\varepsilon_{g,i}$ and the calibration factors $C_i$ are related by
\begin{equation}
\varepsilon_{g,i} = C_i -1
\end{equation}
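A small helper, with made-up calibration factors, illustrates this convention and the rms spread quoted for the histograms below.

```python
import math

def gain_errors(calibrations):
    """Gain errors epsilon_i = C_i - 1 for a list of per-BPM
    calibration factors."""
    return [c - 1.0 for c in calibrations]

def rms(values):
    """Root-mean-square of a distribution of gain errors."""
    return math.sqrt(sum(v * v for v in values) / len(values))

# Made-up calibration factors for four BPMs
eps = gain_errors([1.03, 0.98, 1.00, 0.99])
spread = rms(eps)  # about 1.9% for these numbers
```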
\subsection{Arc BPM calibration factors from simulations}
To evaluate the accuracy (as
defined in~\cite{metro}\footnote{closeness of the agreement between
the result of a measurement and a true value of the measurand}) of this part of the calibration method,
simulated TBT data with the errors listed in Table~\ref{errs_simu}
are generated with MADX~\cite{mad}. The new lattice functions (nominal lattice
plus gradient errors) are also generated by MADX and, in addition, the
uncertainties associated with the determination of the lattice functions listed in
Table~\ref{errs_simu} are added. TBT data and lattice functions are
then used to obtain the action and phase plots and the measured BPM
gain errors.
\begin{table}[h]
\caption{\label{errs_simu} Rms values of known errors in the LHC
lattice (first three rows) and uncertainties associated with the
determination of lattice functions (last two rows).}
\begin{ruledtabular}
\begin{tabular}{l l c}
\multicolumn{1}{c}{Errors}
&
\multicolumn{1}{c}{Rms value}
&
\multicolumn{1}{c}{Extracted from:}\\
\colrule
Gradients & $5\times10^{-6}$~m$^{-2}$ & \cite{ml_fol_19} \\
BPM gains & 3\% & \cite{bpmcal2020} \\
Arc BPMs noise & 0.1 mm & \cite{malina_noise} \\
$\qquad \beta_r(s)$ & 1\% & \cite{al_prst15,analyNBPM} \\
$\qquad \psi_r(s)$ &6 mrads & \cite{psk_ipac16} \\
\end{tabular}
\end{ruledtabular}
\end{table}
A histogram of all the measured arc BPM gain errors obtained in this simulation for the vertical plane of beam 2 can be seen
in~Fig.~\ref{histo_arc_bpm_erry} (red bars). The same figure also
shows the histogram of the differences between the measured BPM gain
errors and their corresponding true values (green bars). These
histograms illustrate that gain errors drawn from an original 3\% rms
distribution can be measured with a residual uncertainty of about 0.8\% rms.
\begin{figure}[h]
\centering
\includegraphics{f5.eps}
\caption{\label{histo_arc_bpm_erry} The red histogram corresponds to
the measured gain errors of the arc BPMs obtained from a simulated TBT data
set. The standard deviation of the distribution is around 3\% as
expected. The green histogram is the difference between the
measured BPM gain errors and the true gain errors used in the simulation. The
standard deviation of this histogram indicates that the accuracy of
the calibration for the arc BPMs is approximately 0.8\% rms.}
\end{figure}
If the measured gain errors are used to calibrate the original
simulated TBT data set, clear reductions in the variations of their
corresponding action and phase plots can be seen (Fig.~\ref{apsimu_calib}).
\begin{figure}[h]
\centering
\includegraphics{f6.eps}
\caption{\label{apsimu_calib} The action and phase plots of the
simulated average trajectory with errors in Table~\ref{errs_simu} show
a significant reduction of oscillations after calibration.}
\end{figure}
\subsection{Arc BPM calibration factors from experimental data}
A few TBT data sets taken during the 2016 LHC run are used to find the
calibration factors of the arc BPMs. These TBT data sets were taken
after global and local coupling corrections were applied in the IRs, but there
were no quadrupole corrections for gradient errors in the IRs. The
lattice functions are obtained directly from the same TBT data sets
using the most up-to-date algorithms, currently used in the LHC and
automatically provided by the orbit and measurement correction
(OMC) software~\cite{softwarelhc2019}. Once the experimental TBT data and lattice
functions are available, the same procedure used to obtain calibration
factors from simulated data is applied to 5 TBT data sets of
beam 1 and 5 TBT data sets of beam 2. As an example, the gain error
histogram for the BPMs in the vertical plane of beam 2 is shown in Fig.~\ref{histoexp_arcbpms}.
\begin{figure}[h]
\centering
\includegraphics{f7.eps}
\caption{\label{histoexp_arcbpms} Histogram of arc BPM gain errors
(vertical plane) measured from an experimental TBT data set of beam
2. Gain errors in this histogram are distributed with a
standard deviation of 2.2\% rms (2.3\% rms in the horizontal
plane).}
\end{figure}
This histogram indicates that the rms gain error
in the arc BPMs is around 2\%, which is slightly smaller than the 3\%
reported in~\cite{bpmcal2020}. Using the measured gain errors, the
experimental TBT data set is calibrated, leading to cleaner action and
phase plots, as seen in Fig.~\ref{apexp_calib}.
\begin{figure}[h]
\centering
\includegraphics{f8.eps}
\caption{\label{apexp_calib} Action and phase plots of an experimental
TBT data set before and after the calibration factors in Fig.~\ref{histoexp_arcbpms} are
applied.}
\end{figure}
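Applying the measured factors is then a per-BPM division; a minimal sketch with made-up numbers:

```python
def calibrate(readings, calibrations):
    """Correct BPM readings with their measured calibration factors,
    z_true ~ z_meas / C_i."""
    return [z / c for z, c in zip(readings, calibrations)]

# Two BPM readings and their (made-up) measured calibration factors
corrected = calibrate([1.02, -0.49], [1.02, 0.98])
```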
\section{Measuring Gain Errors in IR BPMs}\label{IR_bpms}
In each quadrupole triplet of the low $\beta^*$ IRs there are 3 BPMs:
BPMSW, BPMS and BPMYS. To find their calibration factors two
different methods are used.
\subsection{Method to find calibration factors of BPMSWs}
The calibration method for these BPMs is essentially the same as for
the BPM arcs. The difference is that the beta functions are
estimated from \textit{k}-modulation experiments~\cite{kmod}. These experiments provide the minimum value of the beta function
between the triplets $\beta_w$ (commonly known as the beta function at the waist), and the distance between the position of the waist and the center of the
inter-triplet space $w$ (commonly known as the waist shift). The beta
functions at the BPMSWs are
\begin{eqnarray}
\beta(s^{\pm}_b) = \beta_w + \frac{(L \pm w)^2}{\beta_w}
\label{betaw}
\end{eqnarray}
where $s^{\pm}_b$ are the axial position of the BPMSWs located in the
left and right triplets and $L$ is half the length of the inter-triplet
space. Equation~(\ref{betaw}) leads to a
very accurate measurement of the $\beta$ functions at the two BPMSWs,
which should also allow a more accurate BPM calibration.
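A sketch of Eq.~(\ref{betaw}); the waist parameters below are hypothetical, and the assignment of the $\pm$ sign to the left/right triplet follows the convention in the text.

```python
def beta_at_bpmsw(beta_w, w, L):
    """Eq. (9): beta functions at the two BPMSWs from the waist beta
    beta_w, the waist shift w, and half the inter-triplet length L.
    Returns the values for s_b^- and s_b^+."""
    return (beta_w + (L - w) ** 2 / beta_w,
            beta_w + (L + w) ** 2 / beta_w)

# Hypothetical k-modulation result: beta_w = 0.4 m, w = 0.1 m, L = 21 m
beta_minus, beta_plus = beta_at_bpmsw(0.4, 0.1, 21.0)
```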
\subsection{Method to find calibration factors of BPMS and BPMYS}
For BPMS and BPMYS, the \textit{k}-modulation technique is currently not
available, but a modification of the method for finding the calibration factors can
take advantage of the accurate calibration of the BPMSWs.
Suppose that a particle of beam 1, coming from the inter-triplet space
of IR1, passes through BPMSW registering a beam position $z(s_b)$. The particle
then passes through the first quadrupole of the right triplet
(Q1) and arrives at BPMS, where a beam position $z(s_s)$ is
recorded. According to the action and phase method, any one-turn particle trajectory can be described by
\begin{equation}
z(s) = \sqrt{J(s) \beta_n(s)}\sin \left[ \psi_n(s) -\delta(s) \right],
\label{eqapj}
\end{equation}
where the subscripts $n$ are used to refer to the nominal
variables. Hence,
\begin{eqnarray}
\label{apjbpms}
z(s_b) & = & \sqrt{J(s_b) \beta_n(s_b)} \sin \left[ \psi_n(s_b) -
\delta(s_b) \right]\\
\label{apjbpms2}
z(s_s) & = & \sqrt{J(s_s) \beta_n(s_s)} \sin \left[ \psi_n(s_s) -
\delta(s_s) \right].
\end{eqnarray}
On the other hand, $z(s_s)$ can also be expressed based on the
original action and phase $J(s_b)$ and $\delta(s_b)$ plus the
kick $\theta$ experienced by the particle due to a magnetic error
present in Q1 at $s_e$ (see Eq.~(1) of reference~\cite{jfc_prab17})
\begin{eqnarray}
\label{xplus}
z(s_s) &= &\sqrt{J(s_b) \beta_n(s_s)} \sin \left[ \psi_n(s_s) -
\delta(s_b) \right]+\\
& &\theta \sqrt{\beta_n(s_s) \beta_n(s_e)} \sin
\left[ \psi_n(s_s)-\psi_n(s_e) \right]. \nonumber
\end{eqnarray}
The phase advance between $s_s$ and $s_e$ is negligible, so the kick
term vanishes and
\begin{equation}
z(s_s) = \sqrt{J(s_b) \beta_n(s_s)} \sin \left[\psi_n(s_s) -
\delta(s_b)\right].
\label{xapj2}
\end{equation}
Finally, using Eqs.~(\ref{apjbpms}) and~(\ref{xapj2})
\begin{equation}
z(s_s) = z(s_b) \sqrt{\frac{\beta_n(s_s)}{\beta_n(s_b)}}\frac{\sin
\left[\psi_n(s_s) -\delta(s_b)\right]}{\sin \left[\psi_n(s_b) -\delta(s_b)\right]},
\label{xss}
\end{equation}
which allows estimating $z(s_s)$ from the nominal lattice functions
and the already calibrated reading $z(s_b)$. The phase $\delta(s_b)$ corresponds
to the phase in the inter-triplet space $\delta_t$ and can be
estimated with the formulas developed and tested in~\cite{cardona_arxiv2020}. A similar procedure can be used to
estimate the beam position at BPMSY.
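Equation~(\ref{xss}) can be checked numerically against the action-phase parametrization of Eq.~(\ref{eqapj}); in the sketch below all lattice values are illustrative, not measured optics.

```python
import math

# Sketch of Eq. (xss): estimating the beam position z(s_s) at BPMS from the
# already calibrated BPMSW reading z(s_b), the nominal beta functions and
# phases, and the inter-triplet phase delta_t.  Numbers are illustrative.

def z_at_bpms(z_b, beta_b, beta_s, psi_b, psi_s, delta_t):
    """Propagate the calibrated BPMSW reading z(s_b) to BPMS via Eq. (xss)."""
    return (z_b * math.sqrt(beta_s / beta_b)
            * math.sin(psi_s - delta_t) / math.sin(psi_b - delta_t))
```

If $z(s_b)=\sqrt{J\beta_n(s_b)}\sin[\psi_n(s_b)-\delta]$, the function returns $\sqrt{J\beta_n(s_s)}\sin[\psi_n(s_s)-\delta]$, i.e.\ Eq.~(\ref{xapj2}), as expected.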
\subsection{ IR BPM calibration factors from simulations}
Two hundred simulated TBT data sets with the errors listed in Table~\ref{errs_sim2} are generated with MADX to
assess the accuracy of the calibration methods presented in this section.
\begin{table}[h]
\caption{\label{errs_sim2} In addition to the errors and uncertainties listed in Table~\ref{errs_simu},
the uncertainties associated with \textit{k}-modulation experiments are included,
since the beta functions derived from these experiments are used for
the calibration of IR BPMs.}
\begin{ruledtabular}
\begin{tabular}{l l c}
\multicolumn{1}{c}{Errors}
&
\multicolumn{1}{c}{Rms value}
&
\multicolumn{1}{c}{Extracted from:}\\
\colrule
Grads & $5\times 10^{-6}$~m$^{-2}$ & \cite{ml_fol_19} \\
BPM gains & 3\% & \cite{bpmcal2020} \\
Arc BPMs noise & 0.1 mm & \cite{malina_noise} \\
$\qquad \beta_r(s)$ & 1\% & \cite{al_prst15,analyNBPM} \\
$\qquad \psi_r(s)$ &6 mrads & \cite{psk_ipac16} \\
Trip. quad grads & $2\times 10^{-5}$~m$^{-2}$ & \cite{lhc_2015} \\
Match quad grads & $1\times 10^{-4}$~m$^{-2}$ & \cite{cardona_arxiv2020} \\
$\qquad w_r$ & 1 cm & \textit{k}-modulation experiments\\
$\qquad \beta_{w_r}$ & 0.3 mm & \textit{k}-modulation experiments\\
\end{tabular}
\end{ruledtabular}
\end{table}
Random calibration factors with a standard deviation of 3\% rms are
assigned to the triplet BPMs, plus a systematic shift of 5\% with
respect to the calibration factors of the arc BPMs (as suggested by~\cite{bpmcal2020}). Since
there are two hundred simulations, there are two hundred measured calibration
factors obtained with Eq.~(\ref{xss}) and two hundred true calibration
factors for every IR BPM. The rms differences of these two quantities
are reported in Table~\ref{acur_arcbpms1} for the six beam-2 BPMs
in IR1. Also, Fig.~\ref{histoIRbmps} shows a histogram of the measured
gain error minus the true gain errors for BPMSW.1L1. Similar histograms can be found for the
other 5 BPMs.
\begin{table}[h]
\caption{\label{acur_arcbpms1} Rms differences between measured calibration factors and true calibration factors for 200
simulations. The errors listed in Table~\ref{errs_sim2} are added to
each simulation according to a Gaussian distribution with the indicated standard deviations.}
\begin{ruledtabular}
\begin{tabular}{l c}
&
\multicolumn{1}{c}{Rms accuracy}\\
BPM
&
\multicolumn{1}{c}{(\%)}\\
\colrule
BPMSW.1L1.B2 & 0.34 \\
BPMSW.1R1.B2 & 0.3 \\
BPMSY.4L1.B2 & 0.42 \\
BPMS.2L1.B2 & 0.23\\
BPMS.2R1.B2 & 0.28\\
BPMSY.4R1.B2 & 0.28\\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{figure}[h]
\centering
\includegraphics{f9.eps}
\caption{\label{histoIRbmps} Histogram of the differences
between the measured gain errors and the true gain errors in BPMSW.1L1
for 200 simulations with the random errors of Table~\ref{errs_sim2}. The
standard deviation of this histogram indicates that the calibration
accuracy for BPMSW.1L1 is approximately 0.3\% rms with the method
presented in this section.}
\end{figure}
\subsection{IR BPM calibration factors from experimental data}
The same 5 experimental TBT data sets of beam 1 and the 5 TBT data sets of beam 2 mentioned in
Sec.~\ref{arc_bpms} are used to find the calibration factors for the IR
BPMs. Furthermore, data from \textit{k}-modulation experiments performed
simultaneously while taking the experimental TBT data are used to
estimate the beta functions in the BPMSWs with Eq.~(\ref{betaw}). These
analyses finally lead to calibration factors for the six triplet BPMs
of beam 1 and the six triplet BPMs of beam 2 in both planes, as can be
seen in Tables~\ref{calf_b1} and~\ref{calf_b2}.
\begin{table}[h]
\caption{\label{calf_b1} Calibration factors for IR1 BPMs obtained from
2016 experimental TBT and \textit{k}-modulation data of beam
1.}
\begin{ruledtabular}
\begin{tabular}{l D{,}{\pm}{-1} D{,}{\pm}{6.4} }
& \multicolumn{2}{c}{Beam 1 Calibration Factors}\\
BPM Name &\multicolumn{1}{c}{HOR}
& \multicolumn{1}{c}{VERT}\\
\colrule
BPMSW.1L1 & 0.968,0.001 & 0.942,0.001\\
BPMSW.1R1 & 0.961,0.001 & 0.906,0.001 \\
BPMS.2L1 & 0.954, 0.001& 0.926,0.001\\
BPMS.2R1 &0.981, 0.001& 0.941,0.002\\
BPMSY.4R1 & 0.928,0.001 & 0.939,0.001\\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{table}[h]
\caption{\label{calf_b2} Calibration factors for IR1 BPMs obtained from
2016 experimental TBT and \textit{k}-modulation data of beam 2.}
\begin{ruledtabular}
\begin{tabular}{l D{,}{\pm}{-1} D{,}{\pm}{6.4} }
& \multicolumn{2}{c}{Beam 2 Calibration Factors}\\
BPM Name &\multicolumn{1}{c}{HOR}
& \multicolumn{1}{c}{VERT}\\
\colrule
BPMSW.1L1 & 0.953,0.002 & 0.942,0.001\\
BPMSW.1R1 & 0.935,0.001 & 0.944,0.001 \\
BPMSY.4L1 & 0.951,0.001 & 0.948,0.002\\
BPMS.2L1 & 0.942, 0.001& 0.941,0.001\\
BPMS.2R1 &0.947, 0.002& 0.954,0.002\\
BPMSY.4R1 & 0.955,0.001 & 0.979,0.001\\
\end{tabular}
\end{ruledtabular}
\end{table}
IR BPM calibration factors are shifted by about 5\% as reported
in~\cite{bpmcal2020}. The experimental uncertainty is estimated as the standard deviation of
the five measurements available for every calibration factor, and it is
remarkably small.
\section{BPM gain errors and quadrupole corrections in the
IRs}\label{quad_corrs}
Corrections to linear magnetic errors in the IRs can be estimated
with the action and phase jumps that can be seen in action and phase
plots obtained with nominal lattice functions~\cite{jfc_prab17,
cardona_arxiv2020}. Since these plots are derived from BPM
measurements, it is necessary to assess their sensitivity to BPM
calibrations. To evaluate this sensitivity, the calibration factors
found for the arc and IR BPMs in
Secs.~\ref{arc_bpms}~and~\ref{IR_bpms} are applied to the same
experimental TBT data sets used in those sections and the corresponding action and phase plots are obtained
(Fig.~\ref{apj_IR1}). Comparisons between the action and phase plots
before and after calibration show significant improvements,
particularly in the action plots.
\begin{figure}[h]
\centering
\includegraphics{f10.eps}
\caption{\label{apj_IR1} Action and phase jump in the horizontal plane
of beam 2 at IR1 before and after applying the BPM calibration. The
calibration procedure makes it possible to define the jump more clearly, especially in the action plot.}
\end{figure}
Also, since the average trajectories are now much larger in one plane than in the other, the simplified expressions
\begin{eqnarray}
{ B_1}_{x,e} &= & - \frac{\theta_{x,e}}{x_e}, \\
{ B_1}_{y,e} &= & \frac{\theta_{y,e}}{y_e}
\end{eqnarray}
can be used to estimate the quadrupole components ${ B_1}_{z,e}$, $z=x,y$, of
the equivalent kicks $\theta_{z,e}$ instead of Eqs.~(7) of~\cite{cardona_arxiv2020}. Once these components are known, the
corrections are estimated before and after calibration and no
significant variations are found (Table~\ref{quad_corr}). The equivalence between
the two corrections can also be verified through the beta-beating that
they produce, as can be seen in Fig.~\ref{beat_corrs}.
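The simplified gradient-error estimate above amounts to two divisions; the following sketch evaluates it with illustrative kick angles and average positions (not measured values).

```python
# Sketch of the simplified gradient-error estimate: when the average
# trajectory is much larger in one plane than in the other, the quadrupole
# component of the equivalent kick reduces to
#   B1_x = -theta_x / x_e,   B1_y = theta_y / y_e.
# The kick angles (rad) and positions (m) below are illustrative values.

def b1_components(theta_x, x_e, theta_y, y_e):
    """Return (B1_x, B1_y) in m^-2 from the equivalent kicks."""
    return -theta_x / x_e, theta_y / y_e

b1_x, b1_y = b1_components(theta_x=2.0e-6, x_e=1.5e-3,
                           theta_y=1.0e-6, y_e=2.0e-3)
```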
\begin{table}[h]
\caption{\label{quad_corr} Quadrupole correction estimated for IR1 from experimental TBT data
before and after the calibration procedure. Most of the proposed
corrections in the 12 quadrupoles have only small variations between the
two cases.}
\begin{ruledtabular}
\begin{tabular}{c D{.}{.}{2.2} D{.}{.}{2.2} }
& \multicolumn{2}{c}{Correction strengths}\\
& \multicolumn{2}{c}{($10^{-5}$ $m^{-2}$)}\\
Magnet &\multicolumn{1}{c}{Before calibration}
& \multicolumn{1}{c}{After calibration}\\
\colrule
Q2L & 1.24 & 1.19\\
Q2R &-0.84 &-0.76 \\
Q3L & 1.44 & 1.36\\
Q3R &-2.75&-2.60\\
Q4L.B1 & 11.3 & 10.2\\
Q4L.B2&-11.3 & -10.2\\
Q4R.B1 & -8.0 & -8.2\\
Q4R.B2 & 8.0 & 8.2\\
Q6L.B1 &-41.1 & -35.9 \\
Q6L.B2 & 34.2 & 29.9 \\
Q6R.B1 & 25.4 & 27.4 \\
Q6R.B2 &-22.2 & -24.0\\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{figure}[h]
\centering
\includegraphics{f11.eps}
\caption{\label{beat_corrs} Beta-beating generated by the corrections
obtained before and after BPM calibration. Based on the small differences in the two plots, it can be stated that the two
corrections are equivalent.}
\end{figure}
\section{Other Simulations}
The results of Sec.~\ref{arc_bpms} indicate that arc BPM gain errors are approximately
2.3\% rms. Also, the current number of turns has been increased to 10000,
which reduces the effect of electronic noise. Except for these
changes, TBT data sets are simulated with the same errors listed in
Table~\ref{errs_simu}. The calibration factors of the arc BPMs can now be
recovered with 0.7\% accuracy instead of the original 0.8\% quoted in
Sec.~\ref{arc_bpms}. The IR BPMs calibrations also have a better associated
accuracy, as can be seen comparing Tables~\ref{accur_IRbpms}~and~\ref{acur_arcbpms1}.
\begin{table}[h]
\caption{\label{accur_IRbpms} Rms differences between measured calibration factors and true calibration factors for 200
simulations. The errors of Table~\ref{errs_sim2} are still used in the simulations, except
that gain errors are now 2.3\% rms and the electronic noise
corresponds to what would exist for 10000 turns.}
\begin{ruledtabular}
\begin{tabular}{l c}
&
\multicolumn{1}{c}{Rms accuracy}\\
BPM
&
\multicolumn{1}{c}{(\%)}\\
\colrule
BPMSW.1L1.B2 & 0.26 \\
BPMSW.1R1.B2 & 0.25 \\
BPMSY.4L1.B2 & 0.38 \\
BPMS.2L1.B2 & 0.19\\
BPMS.2R1.B2 & 0.26\\
BPMSY.4R1.B2 & 0.26\\
\end{tabular}
\end{ruledtabular}
\end{table}
\section{Conclusions}
A method has been developed to find calibration factors based on
average trajectories. Simulations show that the calibration factors for
arc BPMs can be recovered with an accuracy of 0.7\% rms and the calibration factors
for IR BPMs can be recovered with an accuracy of 0.4\% rms. The method has
been used to obtain the calibration factors of six BPMs of beam 1 and
six BPMs of beam 2 at IR1 of the LHC. For these estimates, several
TBT data sets, measured lattice functions, and \textit{k}-modulation
measurements in the IRs are needed.
This method has also been used to test the BPM calibration
sensitivity of the action and phase jump method. Although the
calibration helps to more clearly define the action and phase jump in the IR, its
effect on the estimation of corrections is negligible.
\section*{Acknowledgments}
Many thanks to all members of the optics measurement and correction team
(OMC) at CERN for support with their \textit{k}-modulation software, the
GetLLM program, and experimental data.
In many areas of investigation, in natural as well as technical and
socio-economic sciences, a description of phenomena in terms of
(partial) differential equations (PDEs) is quite natural and has
received a lot of attention, also in recent years. However, the
necessity of taking care of stochastic (or random) influences on
systems primarily described by (P)DEs, in particular through
stochastic (P)DEs, S(P)DEs for short, has also come to the
forefront of research, see, e.g., \cite{Wal, Kall},
\cite{DPZVerde}, \cite{DPZRosso}, \cite{PeZa,Mumf, GaMa, Holden et
al}.
In the present paper we concentrate on PDE's perturbed by a space-time noise of the
additive type. Such SPDE's have been studied extensively
particularly in the case of Gaussian noises and they have found
applications in several areas, from physics to biology and financial
mathematics, see e.g. \cite{Wal}, \cite{DPZVerde}, \cite{DPZRosso},
\cite{GaMa, Holden et al, AlMaLy, Carm, AlRo,PRRO}. The extension to
the treatment of additive L\'evy type noises (which are more general
in the sense that random variables with L\'evy distributions extend
Gaussian random variables) is relatively more recent, see e.g.
\cite{AlWuZh, PeZa}. A natural question which arises in such
extensions from a deterministic description of phenomena to a
stochastic one, is in which sense one can recover the deterministic
description by "switching off" the noise and possibly obtain "small
noise expansions" around the limit. In the case of SDE's (as
opposite to SPDE's) this is a rather well studied problem,
especially in the case of Gaussian noises and it has also relations
with the study of the classical limit of quantum mechanics (see
e.g. \cite{Wa, TuEsp,IkWa, AMa, ASK, AL, Marc2, Ma, MaTa, Si, IK07,
InKa}). The infinite dimensional case and the case of SPDEs is less
studied, even in the case of Gaussian noises, see however \cite{CeF,
AlMaLy, ARoSk, RT, Ma,Marc2}. Concerning applications, the case of
stochastic perturbations of the FitzHugh-Nagumo equations of
neurobiology and its relations with the classical, deterministic
FitzHugh Nagumo equations is particularly interesting, due to the
fact that those equations, and their extensions to the case where
the underlying euclidean domain in space is replaced by a network,
are extensively used in neurobiology, see e.g. \cite{CaMu, AlDiP,
Tu, Wal}. In two recent papers \cite{AlDiPMa}, \cite{ADPM} a
systematic study of SPDE's with additive Gaussian noise which
includes in particular the above stochastic FitzHugh-Nagumo
equations, has been given, together with a detailed study of the
corresponding diffusion expansion around the deterministic limit.
One basic difficulty which had to be overcome (besides the infinite
dimensionality of the stochastic process involved) consisted in the
non global Lipschitz character of the nonlinear terms. A global
Lipschitz condition would in fact exclude the interesting case of
the FitzHugh-Nagumo equations; similarly other interesting equations
like those arising in stochastic quantization \cite{ALKR},
hydrodynamics \cite{AFER} or solid state physics ( e.g, Allen-Cahn
equations), would be excluded. Despite the interest of modeling the
noise in such systems through a L\'evy-type one instead of a
Gaussian one, which has motivations in all the areas which have been
mentioned, apparently a corresponding study of asymptotic expansions
around the underlying deterministic systems has yet to be performed.
We shall here adopt the method used in \cite{AlDiPMa} to cope with
this case. The adaptation involves, in particular, using methods
developed by \cite{PeZa} to handle stochastic convolutions in the
case of L\'evy noise. Let us note that our results seem to be new
even in the finite dimensional case, where small stochastic
perturbation expansions have also not been provided in details for
equations of the type we consider. Before we go over to describe the
contents of the present paper, let us mention that our study of
SPDE's with L\'evy noise can also be related to the study of certain
pseudo-differential equations with such noises which occur in
quantum field theory and statistical mechanics (see e.g.\ \cite{AGoWu,
AlGYo}). Also relations to certain problems in the study of
statistics of processes described by L\'evy noises should be
mentioned \cite{GoSm, GST}.
\section{Outline of the paper}
Let us consider the following deterministic nonlinear evolution problem:
\begin{equation}\label{eq:det}
\begin{cases}
d\phi(t)= [A\phi(t)+F(\phi(t))]dt , \quad t \in [0,+\infty) \\
\phi(0) = u^0, \quad u^0 \in H\:,
\end{cases}
\end{equation}
where
$A$ is a linear operator on a separable Hilbert space $H$ which generates a $C_0$-semigroup of strict negative type.
The term $F$ is a \textit{smooth} nonlinear, quasi-$m$-dissipative mapping from the domain $D(F)\subset H$ (dense in $H$) with values in
$H$ (this means that there exists $ \omega \in \mathbb{R}$ such that $(F-\omega I )$ is $m$-dissipative in the sense of \cite[p. 73]{DPZVerde}),
with (at most) polynomial growth at infinity and satisfying some further assumptions which will be specified in Hypothesis \ref{hp:A+F} below.
Existence and uniqueness of solutions for equation \eqref{eq:det} is discussed in Proposition \ref{prop:MildDeterministica} below.
Our aim is to study a stochastic (white noise) perturbation of \eqref{eq:det} and to write its (unique) solution as an expansion in powers of a parameter $\varepsilon>0$, which controls the strength of the noise, as $\varepsilon$ goes to zero. More precisely, we are concerned with the following stochastic Cauchy problem on the Hilbert space $H$:
\begin{equation}\label{eq:eps}
\begin{cases}
du(t)= [Au(t)+F(u(t))]dt + \varepsilon \sqrt{Q}dL(t) , \quad t \in [0,+\infty) \\
u(0) = u^0, \quad u^0 \in K \:,
\end{cases}
\end{equation}
where $A$ and $F$ are as described above, $L$ is a mean square integrable L\'evy process taking values in a Hilbert space $U$, $Q$ is a positive trace
class linear operator from $H$ to $H$ and $\varepsilon>0$ is the parameter which determines the magnitude of the stochastic perturbation. The initial datum
$u^0$ takes values in a Banach space $K$ continuously embedded in $H$.
A unique solution of the problem \eqref{eq:eps} can be shown to exist exploiting as in \cite{BoMa} results on stochastic differential equations
(contained, e.g., in \cite{DPZRosso, DPZVerde}).
Our purpose is to show that the solution of the equation \eqref{eq:eps}, which will be denoted by $u=u(t), t \in [0,+\infty)$, can be written as
$$
u(t) = \phi(t) + \varepsilon u_1(t) + \dots + \varepsilon^n u_n(t) + R_n(t, \varepsilon) \:,
$$
where $n$ depends on the differentiability order of $F$. The function $\phi(t)$ solves the associated deterministic problem \eqref{eq:det},
$u_1(t)$ is the stochastic process which solves the following linear stochastic (non-autonomous) equation
\begin{equation} \label{eq:u1}
\begin{cases}
du_1(t)= [Au_1(t)+\nabla F(\phi(t)) [u_1(t)]]dt + \sqrt{Q}dL(t) , \quad t \in [0,+\infty) \\
u_1(0) = 0\:,
\end{cases}
\end{equation}
while, for each $k=2,\ldots,n$, $u_k(t)$ solves the following non-homogeneous linear differential equation with stochastic coefficients
\begin{equation}\label{eq:uk}
\begin{cases}
du_k(t) = \left[A u_k(t) + \nabla F (\phi (t) )[u_k(t)]\right] dt + \Phi_k(t) dt, \\
u_k(0) = 0 \:.
\end{cases}
\end{equation}
$\Phi_k(t)$ is a stochastic process which depends on $u_1(t),\ldots,u_{k-1}(t)$ and on the Fr\'echet derivatives of $F$ up to order $k$;
see Section \ref{sec:2} for details.
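The structure of the expansion can be illustrated on a finite-dimensional toy model; the following sketch is not the infinite-dimensional setting of the paper but a scalar SDE with a dissipative cubic nonlinearity, where the same first-order term can be computed by an Euler scheme. All parameter values are illustrative.

```python
import math
import random

# Toy check of the small-noise expansion u(t) = phi(t) + eps*u1(t) + O(eps^2)
# for the scalar SDE du = (a*u + f(u)) dt + eps dW with f(u) = -u^3.
# phi solves the deterministic limit, and u1 solves the linearized equation
# du1 = (a + f'(phi)) u1 dt + dW driven by the same Brownian increments.

def expansion_remainder(a=-1.0, eps=0.01, u0=0.5, t=1.0, n=2000, seed=1):
    rng = random.Random(seed)
    dt = t / n
    u, phi, u1 = u0, u0, 0.0
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(dt))
        u += (a * u - u ** 3) * dt + eps * dw      # perturbed equation
        u1 += (a - 3.0 * phi ** 2) * u1 * dt + dw  # linearization at phi
        phi += (a * phi - phi ** 3) * dt           # deterministic limit
    return abs(u - (phi + eps * u1))

r = expansion_remainder()  # remainder of order eps^2
```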
Let us briefly describe the content of the different sections of the
present paper. In Section 3 we set the basic assumptions needed to
perform the construction of solutions and their asymptotic
expansion. In Section 4 we describe the mild solutions to SDEs
driven by L\'evy processes on Hilbert spaces, basically following
the setting of \cite{PeZa}. Since our expansion will be around
solutions of the corresponding deterministic equations, we start by
presenting results on the latter equations (Subsection 4.1). In
Subsection 4.2 we present the setting for the stochastic
perturbation, first describing the noise. In Section 5 we describe
the basic assumptions on the nonlinear term and provide its Taylor
expansion. Section 6 contains the main results, in particular the
construction of the expansion, the proof of its asymptotic character
and detailed estimates on the remainders, to any order. We close with
an application to the case of a FitzHugh-Nagumo equation on a
network.
\section{Assumptions and Basic Estimates} \label{sec:1}
Before recalling some known results on problems of the types
\eqref{eq:det}, \eqref{eq:eps}, \eqref{eq:u1} and \eqref{eq:uk}, we
begin by presenting our notation and assumptions. We are concerned
with a real separable Hilbert space $H$, with inner product $\langle
\cdot,\cdot\rangle$. Moreover, in what follows, $(B, |\cdot|_B)$ is a
reflexive Banach space continuously embedded into $H$ as a dense
Borel subset and $(K, |\cdot|_K)$ is a reflexive Banach space
continuously embedded in $B$.
On $H$ there are given a linear operator $A: D(A)\subset H \to H$, a
nonlinear operator $F: D(F) \subset H \to H$ with dense domain in
$H$ and a bounded linear operator $Q$ from $H$ to $H$. Moreover,
we are given a complete probability space $(\Omega,\mathcal{F}, (\mathcal{F}_t)_{t\geq
0}, \mathbb{P})$ which satisfies the usual conditions, i.e., the
probability space is complete, $\mathcal{F}_0$ contains all $\mathbb{P}$-null sets
of $\mathcal{F}$ and the filtration $(\mathcal{F}_t)_{t\geq 0}$ is right
continuous. Further, for any trace-class linear operator $Q$, we
will denote by ${\rm Tr}\, Q$ its trace; if $f$ is any mapping on
$H$ which is Fr\'echet differentiable up to order $n$, $n\in
\mathbb{N}$, we will denote by
$f^{(i)}, \, i=1,\dots,n$ its $i$-th Fr\'echet derivative and by
$D(f^{(i)} )$ the corresponding domain (for a short survey on
Fr\'echet differentiable mappings we refer to Section 4). For any
$j\in \N$ and any vector space $X$, $L(X^j;X)$ denotes the space of
$j$-linear bounded mappings from
$X^j$ into $X$ while the space of linear bounded mappings from $X$ into $L(X^j;X)$
is denoted by $L^j(X)$.
We denote by $|\cdot|_X$ the norm on $X$, by $\|\cdot\|_{L^j(X)}$ the norm of any $j$-linear operator on $X$ and by $\|\cdot\|_{HS}$ the Hilbert-Schmidt
norm of any linear operator
on $X$.
Finally, for any $p\geq 1$, we will denote by $\mathcal{C}_{\mathcal{F}}([0,T]; L^p(\Omega;X))$ the space of $X$-valued, adapted, mean square continuous processes
$Y$ on the time interval $[0,T]$ such that the following norm is finite:
$$
|\!|\!| Y|\!|\!|= \left(\sup_{t \in[0,T]}\mathbb{E}\, |Y(t)|_X^p \right)^{1/p}<\infty.
$$
\begin{hypothesis}\label{hp:A+F}
\begin{enumerate}
\item[]
\item The operator $A: D(A) \subset H \to H$ generates an analytic semigroup $(e^{tA})_{t\geq 0}$,
on $H$ of strict negative type such that
\begin{equation*}
\left\|e^{tA}\right\|_{L(H)} \leq e^{-\omega t}, \quad t\geq 0
\end{equation*}
with $\omega$ a strictly positive real constant.
Moreover, if $A_B$ denotes the part of $A$ in the reflexive Banach space $B$, that is
$$
D(A_B):=\left\{ x\in D(A)\cap B; A x\in B \right\}, \quad A_B x=Ax,
$$
then $A_B$ generates an analytic semigroup $($of negative type$)$ $e^{tA_B}, t\geq 0$ on $B$.
\item The mapping $F: D(F) \subset H \to H$ is continuous,
nonlinear, Fr\'echet differentiable up to order $n$ for some positive integer
$n$ and quasi-$m$-dissipative, i.e., there exists $\eta>0$ such that
$$
\left\langle F(u)-F(v)-\eta(u-v),u-v\right\rangle \leq 0, \qquad \text{for all } u,v\in D(F).
$$
\item If $F^{(j)}_B$, $j=1,\dots,n$ denotes the part of $F^{(j)}$ in $B$, that is
$$
D(F^{(j)}_B):=\left\{ x\in D(F^{(j)})\cap K; F^{(j)}_B( x)\in B \right\}, \quad F^{(j)}_B (x)=F^{(j)}(x),
$$
then there exists a reflexive Banach space $K$ densely and continuously embedded in $B$ which makes the following assumptions satisfied:
\begin{enumerate}
\item there exists a positive real number
$\gamma$ and a positive natural number $n $ such that:
\begin{align*}
&\left| F_B(u) \right|_{B} \leq \gamma \left(1+\left|u\right|_{K}^{2n+1} \right),\quad u \in K,
\end{align*}
\item for any $u\in D(F_B^{(i)}),\ i=1,\dots,n$, there exist positive real constants $\gamma_i, \ i=1,\dots,n$, such that
\begin{align*}
\|F^{(i)}_B(u) \|_{L^{i}(B)} \leq \gamma_i(1+|u|_K^{2n+1-i}) \:,\quad \textrm{with $n$ as in (a), $u\in K$}
\end{align*}
\end{enumerate}
\item The constants $\omega,\eta$ satisfy the inequality $\omega -\eta >0$; this implies that
the term $A+F$ is $m$-dissipative in the sense of \cite{DPZRosso}, \cite[p. 73]{DPZVerde}.
\item The term $L$ is a L\'evy process $($for example in the sense of \cite{AW, PeZa}$)$ on some Hilbert space $U$; moreover we assume that
\begin{align*}
\int_U |y|^m \nu({\rm d} y) <\infty,
\end{align*}
for all $m\in \N$, where $\nu$ is the jump intensity measure introduced in Section~\ref{ABOE}.
\item $Q$ is a positive linear bounded operator on $H$ of trace class, that is ${\rm Tr}\,Q<\infty$.
\end{enumerate}
\end{hypothesis}
\begin{example} \label{ex:F} \rm
Let us give an example of a mapping $F$ satisfying the above
hypothesis (in view of the application to stochastic neuronal
models, which we will present in Section 6).
Let $H = L^2(\Lambda)$ with $\Lambda \subset \mathbb{R}^n$, bounded and open; set
$B:=L^{2(2n+1)}(\Lambda)$, $K:=L^{2(2n+1)^2}(\Lambda)$
and let $F$ be a multinomial of odd degree $2n+1$, $n \in \mathbb{N}$, i.e. a mapping of the form
$
F(u)=g_{2n+1}(u)
$, where $g_{2n+1}(u)$, $u \in H$, is a polynomial of degree $2n+1$, that is, $ g_{2n+1}(u)=a_0 +a_1 u+ \dots +a_{2n+1} u^{2n+1}$, with $a_i \in \mathbb{R}$, $i=0,\ldots,2n+1$. Then
it is easy to prove that
$D(F) = L^{2(2n+1)}(\Lambda) \subsetneq L^2(\Lambda)$ for $n>0$, $D(F)=L^{2}(\Lambda)=H$ for $n=0$, and (by using the H\"older inequality) $D(F^{(i)})=L^{2(2n+1-i)}(\Lambda)$. Moreover, it turns out that, for any $u\in K$, $F^{(i)}(u)$ can be identified with the element $g_{2n+1}^{(i)}(u)$ (both in $D(F)$ and $K$).
Consequently,
\begin{align*}
|F(u)|_B &= \left(\int_{\Lambda}|g_{2n+1}(u(\xi))|^{2(2n+1)} {\rm d} \,\xi\right)^{1/(2(2n+1))} \\
& \leq C_{2n+1} \left(1+ \int_{\Lambda}|u(\xi)|^{2(2n+1)^2} {\rm d} \,\xi\right)^{1/(2(2n+1))} \\
&= C_{2n+1} (1+|u|_K^{2n+1})
\end{align*}
and, similarly,
\begin{align*}
|\nabla^{(j)} F(u)|_{L^j(K;B)} & \leq C_{2n+1-i} (1+|u|_K^{{2n+1-i}}) \\
&= C_{2n-i} (1+|u|_K^{2n+1-j}), \qquad j=0,1,\dots,m.
\end{align*}
Hence $F$ satisfies Hypothesis \ref{hp:A+F} (ii), (iii). Further, in
the case $g_3(u)=-u(u-1)(u-\xi)$, $0<\xi<1$, the corresponding mapping
$F$ coincides with the nonlinear term of the first equation in the
FitzHugh-Nagumo system (see Example \ref{Remark:FHN} below).
\end{example}
\section{Mild solutions to SDE's driven by L\'evy processes on Hilbert spaces}
\noindent
In this section we basically use the setting of \cite{PeZa}.
\subsection{The deterministic case}
\label{sec:section1}
Let $A_0$ be a densely defined linear operator on a Banach space $B$, with domain $D(A_0)$. Let us assume that the differential equation
\begin{equation}
\left\{
\begin{aligned}
\frac{\mathrm{d}y}{\mathrm{d}t} &= A_0y \\
y(0) &= y_0 \in D(A_0)
\end{aligned}
\right.
\label{eq:equation1}
\end{equation}
has a unique solution $y(t)$, $t\geq0$, $y(t)\in B$. The equation being linear, we have
\begin{equation}
y(t) = S(t)y_0\text{,} \quad t\geq0\text{,}
\nonumber
\end{equation}
with $S(t)$ a linear operator from $D(A_0)$ into $B$. If for each
$t\geq0$, $S(t)$ has a \underline{continuous} extension to all of
$B$, and for each $z\in B$, $t\to S(t)z$ is continuous, then one
says that the Cauchy problem (\ref{eq:equation1}) is well posed.
$t\to S(t)z$, defined then for all $z\in B$, is called a generalized
solution to (\ref{eq:equation1}). One then has that
$\bigl(S(t)\bigr)_{t\geq0}$ is a $C_0$-semigroup:
\begin{enumerate}
\item $S(0)=\mathbb{1}$, $S(t)S(s)=S(t+s)$, $t,s\geq0$,
\item $\left| S(t)z-z \right|_{B} \to 0$ as $t\downarrow 0$, for every $z\in B$.
\end{enumerate}
Let $D(A)$ be the definition domain of the generator $A$ of $S(t)$. We have $D(A)\supset D(A_0)$ and $A$ is an extension of $A_0$.
Moreover, see, e.g. (\cite{PeZa}, Theorem 9.2):
\begin{enumerate}
\item $\left| S(t)z \right|_{B} \le \mathrm{e}^{\omega t}M\left| z \right|_{B}$, for some $\omega,M>0 \ \forall z\in B$, $\forall t\geq0$,
\item $A$ is closed and for any $z\in D(A)$, $t>0$ one has $S(t)z\in D(A)$ and $\frac{\mathrm{d}}{\mathrm{d}t}S(t)z=AS(t)z=S(t)Az$.
In particular for $z=y_0\in D(A)$, $t\to S(t)y_0$ solves (\ref{eq:equation1}) with $A$ replacing $A_0$.
\end{enumerate}
Now, let $H$ be a Hilbert space such that $B\subset H$, with a dense, continuous embedding, $B$ being a Borel subset of $H$. Let $\psi(t)$, $t\geq0$ be $H$-valued and continuously differentiable. Then the ``variation of constants formula''
\begin{equation}
y(t) = S(t)y_0 + \int\limits_{0}^{t}S(t-s)\psi(s)\,\mathrm{d}s\text{,} \quad t\geq0
\label{eqn:equation2}
\end{equation}
solves
\begin{equation}
\left\{
\begin{aligned}
\frac{\mathrm{d}y}{\mathrm{d}t}(t) &= Ay(t) + \psi(t) \\
y(0) &= y_0 \in H\text{.}
\end{aligned}
\right.
\end{equation}
In general, whenever the integral in (\ref{eqn:equation2}) has a meaning for a given $y_0$ in $H$, one says that (\ref{eqn:equation2}) is a
\underline{mild solution} of
\begin{equation}
\left\{
\begin{aligned}
\frac{\mathrm{d}y(t)}{\mathrm{d}t} &= Ay(t) + \psi(t) \\
y(0) &= y_0\text{.}
\end{aligned}
\right.
\end{equation}
The formal definition of mild solution is given below.
\begin{definition}\label{def:det}
Let $y_0 \in H$; we say that the function $\phi: \, [0,\infty) \to H$ is a mild solution of the problem above
if it is continuous $($in $t)$, with values in $H$, and it satisfies:
\begin{equation} \label{mildsolution}
\phi(t)=e^{tA}y_0+\int_0^t e^{(t-s)A}\psi(s) \, {\rm d} s, \quad t\in[0,+\infty),
\end{equation}
with the integral existing in the sense of Bochner integrals on Hilbert spaces.
\end{definition}
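As a minimal numerical illustration of the mild-solution formula \eqref{mildsolution}, the convolution integral can be approximated by a Riemann sum; the sketch below uses a scalar generator $A=a<0$ and a constant forcing, both illustrative choices.

```python
import math

# Numerical sketch of the variation-of-constants (mild solution) formula
#   y(t) = e^{tA} y0 + \int_0^t e^{(t-s)A} psi(s) ds
# for scalar A = a < 0, approximating the integral by a left Riemann sum.

def mild_solution(a, y0, psi, t, n=100000):
    ds = t / n
    integral = sum(math.exp((t - k * ds) * a) * psi(k * ds) * ds
                   for k in range(n))
    return math.exp(t * a) * y0 + integral

# For constant psi = c, the exact mild solution is
#   y(t) = e^{at} y0 + c (e^{at} - 1) / a.
approx = mild_solution(a=-2.0, y0=1.0, psi=lambda s: 3.0, t=1.0)
exact = math.exp(-2.0) * 1.0 + 3.0 * (math.exp(-2.0) - 1.0) / (-2.0)
```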
In the case of $\psi$ being substituted by a mapping $F: D(F)
\subset H \to H$ satisfying the assumptions given in Hypothesis
$\ref{hp:A+F}$, we have the following result.
\begin{proposition}\label{prop:MildDeterministica}
Under Hypothesis $\ref{hp:A+F}$ there exists a unique mild solution $\phi=\phi(t), t \in [0,\infty)$
of the deterministic problem
\begin{equation}
\left\{
\begin{aligned}
\frac{\mathrm{d}y}{\mathrm{d}t} &= A_0y + F(y) \\
y(0) &= y_0 \in D(A_0)
\end{aligned}
\right.
\end{equation} such that
\begin{equation}\label{stimaLunardi}
|\phi(t)|_H \leq e^{-(\omega-\eta) t}|y_0|_H, \quad t\geq 0.
\end{equation}
\end{proposition}
\begin{proof}
The proof of existence and uniqueness can be found, e.g., in \cite[Theorem 7.13, p. 203]{DPZRosso},
while estimate \eqref{stimaLunardi} is a direct consequence of the application of Gronwall's lemma to the following inequality
\begin{align*}
\frac{d}{dt} | \phi(t) |_{H}^2 &= 2 \langle A \phi(t), \phi(t) \rangle + 2\langle F(\phi(t)),\phi(t) \rangle \\
&\leq - 2(\omega-\eta)|\phi(t)|_H^2.
\end{align*}
\end{proof}
\begin{remark}
It can be shown that, under Hypothesis $\ref{hp:A+F}$, there exists a $K$-continuous version of the unique solution of equation \eqref{mildsolution} such that, for any $T>0$, $p\geq 1$
\begin{align*}
\sup_{t\in [0,T]} |\phi(t)|_K^p <\infty.
\end{align*}
$($see \cite[Section 5.5.2, Proposition 5.5.6]{DPZVerde}$)$.
Hence, in the following, by $\phi$ we will always understand this $K$-valued version of the solution of \eqref{eq:det}.
\end{remark}
\subsection{The stochastically perturbed case}\label{ABOE}
Let $G$ be a linear operator from a Hilbert space $U$ into a
Hilbert space $H$. Let $S(t)$ be a $C_0$-semigroup on the Hilbert
space $H$.
Assume the generator $\left(A,D(A)\right)$ of $S(t)$ in $H$ is almost $m$-dissipative (i.e.\ $(\lambda\mathbb{1}-A+\eta\mathbb{1})(D(A))=H$ for any
$\lambda>0$ and some $\eta\in\mathbb{R}$, see \cite[p.~180]{PeZa}; this is equivalent to quasi-$m$-dissipative in the sense of \cite[p.~73]{DPZVerde}). Assume $B \subset H$ as in Subsection~\ref{sec:section1} and that the restriction $A_{B}$ of $A$ to
$B$ is also almost $m$-dissipative. Let $L$ be a square-integrable mean zero L\'{e}vy process taking values in a Hilbert space $K$.
That is, $L=\bigl(L(t)\bigr)_{t\geq0}$ takes values in a Hilbert space $K$, has independent, stationary increments, satisfies $L(0)=0$, and $L(t)$ is
stochastically continuous (see \cite[Def.~4.1, p.~38]{PeZa}). Let $Q$ be the covariance of $L$. Then $Q^{\frac{1}{2}}(K)$ is the reproducing kernel Hilbert space
(RKHS) of $L$; assume $Q^{\frac{1}{2}}(K)$ is embedded into $U$. \\
We recall the following basic notions and results.
\begin{definition}
Let $\nu$ be a finite measure on a Hilbert space $U$ such that $\nu(\left\{0\right\})=0$.
A compound Poisson process with L\'evy measure (also called jump
intensity measure) $\nu$ is a c\`adl\`ag L\'evy
process $L$ satisfying
\begin{align*}
P(L(t)\in \Gamma)= e^{-\nu(U)t}\sum_{k=0}^\infty \frac{t^k}{k!} \nu^{*k}(\Gamma), \qquad t\geq 0, \Gamma\,\in\, \mathcal{B}(U).
\end{align*}
$\mathcal{B}(U)$ being the $\sigma$-algebra of Borel subsets of $U$.
\end{definition}
Given a Borel set $I$ separated from $0$, write
\begin{align*}
\pi_I(t)=\sum_{s\leq t} \chi_I(\Delta L(s)), \qquad t\geq 0.
\end{align*}
The c\`adl\`ag property of $L$ implies that $\pi_I$
is $\mathbb{Z}_+$-valued. We notice that it is a L\'evy process with
jumps of size
$1$ and thus a Poisson process (see \cite[Proposition 4.9 (iv)]{PeZa} for more details). We also have that $\mathbb{E}\, \pi_I(t)= t\, \mathbb{E}\, \pi_I(1)=t\nu(I)$,
where $\nu$ is a measure that is finite on sets separated from $0$. We shall write
\begin{align*}
L_I(t)=\sum_{s\leq t} \chi_I(\Delta L(s)) \Delta L(s).
\end{align*}
Then $L_I$ is a well-defined L\'evy process. The theorem below
provides the corresponding L\'evy-Khinchine decomposition:
\begin{theorem}
\begin{enumerate}
\item If $\nu$ is a jump intensity measure corresponding to a L\'evy process then
\begin{align*}
\int_U (|y|^2_U\wedge 1) \nu({\rm d} y) <\infty.
\end{align*}
\item Every L\'evy process has the following representation:
\begin{align*}
L(t):= at + \sqrt{Q}W(t)+\sum_{k=1}^\infty \left(L_{I_k}(t)- t \int_{I_k} y \nu({\rm d} y)\right)+
L_{I_0}(t),
\end{align*}
where $I_0:=\left\{ x: |x|_U\geq r_0\right\}$, $I_k:= \left\{ x: r_k \leq |x|_U < r_{k-1}\right\}$, $(r_k)$ is an arbitrary sequence decreasing to $0$, $W$ is a Wiener process, all members of the representation are independent processes and the series converges $\mathbb{P}$-a.s., uniformly on each bounded subinterval of $[0,\infty)$.
\end{enumerate}
\end{theorem}
In the following (see Hypothesis \ref{hp:A+F}), with no loss of
generality, we assume that
\begin{align}\label{eq:medianu}
\sum_{k=1}^\infty \int_{I_k} y \nu({\rm d} y) =0.
\end{align}
We also assume throughout that the L\'evy process is a pure jump process, i.e. $a=0$
and $Q=0$ and that
\begin{align}\label{eq:momenti}
\int_U |y|_U^m \nu({\rm d} y) <\infty, \qquad \textrm{for all } m\in \mathbb{N},
\end{align}
which leads to the representation
\begin{align*}
L(t)= \sum_{k=1}^\infty L_{I_k}(t)+ L_{I_0}(t),
\end{align*}
in view of assumptions \eqref{eq:medianu} and \eqref{eq:momenti}.
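For instance (a hypothetical special case, recorded only as an illustration and not needed in the sequel), if the jump intensity measure $\nu$ is finite, supported away from the origin and satisfies $\int_U y\, \nu({\rm d} y)=0$, then the above representation reduces to a single compound Poisson term: one may take
\begin{align*}
L(t)=\sum_{k=1}^{\pi(t)} Y_k, \qquad t\geq 0,
\end{align*}
where $\pi$ is a Poisson process with intensity $\nu(U)$ and $(Y_k)$ are i.i.d.\ random variables, independent of $\pi$, with common distribution $\nu/\nu(U)$; in this case \eqref{eq:momenti} holds as soon as $\nu$ has finite moments of all orders.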
Let $L_A(t)=\int\limits_{0}^{t}S(t-s)\sqrt{Q}\,\mathrm{d}L(s)$, $t\geq0$, be the L\'{e}vy Ornstein-Uhlenbeck process associated with $S,\sqrt{Q},L$,
assumed to exist and have a c\`{a}dl\`{a}g version in $B$ (the latter is satisfied if $B$ is a Hilbert space $K$ and $S(t)$ is a contraction on $K$),
see e.g. (\cite{PeZa}, p.\ 155), or $S$ is analytic and $L$ takes values in $D\bigl((-A)^{\alpha}\bigr)$ for some $\alpha>0$; see, e.g.
(\cite{PeZa}, p.\ 155). Assume $F$ is an operator on $H$ (possibly nonlinear and not everywhere defined) satisfying Hypothesis \ref{hp:A+F}.
An adapted $B$-valued process $X$ is said to be a \underline{c\`{a}dl\`{a}g mild solution} to
\begin{equation}\label{eqn:equation3}
\left\{
\begin{aligned}
\mathrm{d}X(t) &= AX(t)\,\mathrm{d}t + F\bigl(X(t)\bigr)\,\mathrm{d}t + \sqrt{Q}\,\mathrm{d}L(t) \\
X(0) &= x \in D(F)
\end{aligned}
\right.
\end{equation}
if it is c\`{a}dl\`{a}g in $B$ and satisfies, $P$-a.s., the equation
\\
$X(t)=S(t)x+\int\limits_{0}^{t}S(t-s)F\bigl(X(s)\bigr)\,\mathrm{d}s+L_A(t)$,
$t\geq0$, with $X(s)\in D(F)$ for $s\geq0$ (\cite{PeZa}, p.\ 182).
The formal definition of a mild solution for the stochastic problem
\eqref{eqn:equation3} is given below; we then recall the definition of the stochastic convolution and list some of its properties.
\begin{definition}\label{def:stoc-conv}
Let $u^0\in K$. A predictable $H$-valued process $u:=(u(t))_{t\geq 0}$ is called
a mild solution to the Cauchy problem \eqref{eq:eps} with initial condition $u^0 \in D(F)$ if for arbitrary $t\geq 0$ we have
\begin{equation*}
u(t)=e^{tA}u^0+\int_0^t e^{(t-s)A}F(u(s))ds + \varepsilon \int_0^t e^{(t-s)A} \sqrt{Q}dL(s), \quad \textrm{$\mathbb{P}$-a.s.}
\end{equation*}
$L_A(t) := \int_0^t e^{(t-s)A} \sqrt{Q}dL(s)$ is called a stochastic convolution and under our hypotheses it is a well-defined mean-square continuous
$\mathcal{F}_t$-adapted process with values in $B$ and c\`adl\`ag trajectories (see e.g.\ \cite[Proposition 9.28, p.\ 163]{PeZa}).
\end{definition}
The first integral on the right-hand side is defined pathwise in the Bochner sense, $\mathbb{P}$-almost surely.
For further use, in the following we introduce some additional condition on the stochastic convolution:
\begin{hypothesis}\label{prop:StochasticConvolution}
The stochastic convolution $L_A(t),t\geq 0$ introduced in Definition $\ref{def:stoc-conv}$, admits a $K$-valued version such that,
for any $T>0$, it satisfies the following estimate
{\small
\begin{equation}
\mathbb{E}\left(\sup_{t \in [0,T]} | L_A(t) |_K^{m}
\right) \leq C_T
\end{equation}
}
for every $m\in \mathbb{N}$ and some positive constant $C_T$ $($possibly depending on $T)$.
\end{hypothesis}
\begin{example}\rm
Let us give an example for the setting $(H,B,K,L,A,Q)$ where $L_A$ is well-defined and Hypothesis \ref{prop:StochasticConvolution} is satisfied.
This example is related to the application to the stochastic FitzHugh-Nagumo model which we discuss in Example \ref{Remark:FHN}.
Let $H,B,K$ be as in Example \ref{ex:F}. Let $A=\Delta$ be the
Laplacian in $L^2(\Lambda)$ with Neumann boundary conditions on the
boundary $\partial\,\Lambda$ of the bounded open subset $\Lambda$ of
$\R^n$.
Let $Q$ be a bounded trace class operator commuting with
$A$ and $L$ be a L\'evy process such that the corresponding measure $\nu$ satisfies
\begin{align*}
\int_{L^2(\Lambda)} |x|^m_{W^{\beta,2(2n+1)}} \nu({\rm d} x) <\infty \quad \textrm{for all } m\in \mathbb{N},
\end{align*}
where $W^{\beta,2(2n+1)}$ is a fractional Sobolev space with given $\beta >0$. Finally, let
$(A_{18},D(A_{18}))$ denote the generator of the heat semigroup with Neumann boundary conditions on $L^{18}(\Lambda)$.
By \cite[Appendix]{DPZRosso}, $L_A(t)\in D((-A_{2(2n+1)})^\gamma)$, $\gamma>0$; in particular $L_A(t)\in K$, $L_A$ being in addition a L\'evy process.
This implies the bound in
Hypothesis \ref{prop:StochasticConvolution}.
\end{example}
The next result concerns the existence and uniqueness of solutions for the stochastically perturbed problem. Moreover,
we shall use Hypothesis \ref{prop:StochasticConvolution} above concerning the Ornstein-Uhlenbeck process associated with $e^{tA},\sqrt{Q}$ and $L$ in order
to prove a useful estimate on the solution.
\begin{thm}
Assume that $A$ and $F$ satisfy Hypothesis $\ref{hp:A+F}$. Assume that $A$ and $Q$ satisfy Hypothesis $\ref{prop:StochasticConvolution}$.
Then there exists a unique c\`{a}dl\`{a}g mild solution of (\ref{eqn:equation3}) for any $x\in B$. For each $x\in H$ there exists a unique generalized
solution for (\ref{eqn:equation3}) (in the sense that there exist $x_n\in B$ with $x_n \to x$ in $H$ and unique c\`{a}dl\`{a}g mild solutions $X_n$ of \eqref{eqn:equation3} with $X_n(0)=x_n$
s.t.\ $\left| X_n(t)-X(t) \right|_{H}\to0$ uniformly on each bounded interval). Moreover (\ref{eqn:equation3}) defines Feller families on $B$ and on $H$
(in the sense that the Markov semigroup $P(t)$ associated with $X(t)$ maps for any $t\geq0$, $C_{b}(H)$ into $C_b(H)$ and $C_b(B)$ into $C_b(B)$).
Moreover, the solution $X$ to (\ref{eqn:equation3}) belongs to the
space $\mathcal{L}^p(\Omega;C([0,T];H))$, i.e., is such that
\begin{equation}\label{stima:u}
\mathbb{E}\left( \sup_{t \in [0,T]} \left|X(t)\right|^p_H \right)< +\infty,
\end{equation}
for any $p \in[2, \infty)$.
\end{thm}
\begin{proof}
The first part of the result is proven in \cite[Theorem 10.14]{PeZa}.
We only have to prove the estimate \eqref{stima:u}.
Let $z(t) := X(t) - L_A(t)$; then it is not difficult to show that $z(t)$ is the unique solution of the following deterministic equation:
$$
\begin{cases}
z^\prime(t) = Az(t) + F(z(t)+L_A(t)) \\
z(0) = u^0
\end{cases}
$$
with $z^\prime (t) := \frac{d}{dt} z(t)$.
With no loss of generality (because of inclusion results for $L^p$-spaces with respect to bounded measures) we can assume that $p=2a$, $a \in \N$.
Now combining condition $(i)$ with condition $(ii)$ in Hypothesis (\ref{hp:A+F}) and recalling
Newton's binomial formula we have:
\begin{equation}\label{eq:ItoFormula}
\begin{aligned}
\frac{d}{dt} |z(t)|_H^{2a} &=
2a \langle z^\prime(t),z(t) \rangle |z(t)|_H^{2a-2} = 2a \langle Az(t) + F(z(t)+L_A(t)),z(t) \rangle |z(t)|_H^{2a-2} \\
& \leq
-2a \omega |z(t)|_H^{2a} + 2a \langle F(z(t)+L_A(t)),z(t) \rangle |z(t)|_H^{2a-2} \\
& \leq
-2a (\omega -\eta) |z(t)|_H^{2a} +2a | F(L_A(t)) |_H |z(t)|_H^{2a-1} \\
&\leq
-2a (\omega -\eta) |z(t)|_H^{2a} + 2a \frac{C_a}{\xi} | F(L_A(t)) |_H^{2a} +C_a 2a \xi | z(t) |^{2a}_H \:,
\end{aligned}
\end{equation}
for some constant $C_a >0$ and a sufficiently small $\xi>0$ such that $-2a (\omega-\eta) +2a \xi C_a <0$.
Applying the previous inequality and Gronwall's lemma we get:
$$
|z(t)|_H^{2a} \leq e^{(-2a (\omega-\eta) +\xi C_a 2a) t} |u^0|_H^{2a} +
\frac{ 2a C_a }{\xi}\int_0^t e^{-2a (\omega-\eta) (t-s)} |F(L_A(s))|_H^{2a} ds.
$$
Then there exists a positive constant $C$ such that:
\begin{equation}\label{eq:Dis2N}
|X(t)|_H^{2a} \leq C \left( e^{(-2a (\omega-\eta) +\xi C_a 2a) t} |u^0|_H^{2a} +
2a \int_0^t e^{-2a (\omega-\eta) (t-s)} |F(L_A(s))|_H^{2a} ds + |L_A(t)|_H^{2a} \right) .
\end{equation}
Since by condition $(iii)$ in Hypothesis \ref{hp:A+F}, the restriction of $F$ to $K$ has (at most)
polynomial growth at infinity in the $K$-norm and, by the assumption on $L_A(t)$ made in Hypothesis \ref{prop:StochasticConvolution},
$L_A$ takes value in $K$, for any $a\in \N$ we have:
$$
|F(L_A(t))|_H^{2a} \leq
C_{a,m} (1+|L_A(t)|_K^m)^{2a} \leq C_{a,m} (1+|L_A(t)|_K^{2am}),
$$
for some positive constant $C_{a,m}$ depending on $m$ and $a$.
Moreover, we observe that, again by Hypothesis \ref{prop:StochasticConvolution}, it holds that
$$
\mathbb{E}\left(\sup_{t \in [0,T]} |L_A(t)|_K^{2am} \right)\leq C_{a,m,T}^\prime,
$$ where $C_{a,m,T}^\prime$ is again a positive constant depending on $m$, $a$ and $T$; hence
\begin{multline}\label{nonloso}
\mathbb{E} \left[\sup_{t\in [0,T]} \int_0^t e^{-2a (\omega-\eta) (t-s)} |F(L_A(s))|_H^{2a} ds \right] \leq
\tilde{C} \mathbb{E} \left[\sup_{t \in [0,T]} \int_0^t e^{-2a (\omega-\eta) (t-s)}
(1+|L_A(s)|_K^{2am}) ds \right] \\
\leq \tilde{C} \mathbb{E} \left[ \sup_{t \in [0,T]} \int_0^t e^{-2a (\omega-\eta) (t-s)} ds +
C_{a,m,T}^\prime \int_0^t e^{-2a (\omega-\eta)(t-s)} ds \right] \leq \bar{C}\:,
\end{multline}
for some positive constants $\widetilde{C}, \bar{C}$ depending on $a$, $m$ and $T$. Consequently,
putting together inequalities \eqref{eq:Dis2N} and \eqref{nonloso}, we obtain
$$
\mathbb{E}\left( \sup_{t\in [0,T]}|X(t)|_H^{2a}\right) \leq C |u^0|_H^{2a} + \bar{\bar{C}} \:,
$$
for some positive constant $\bar{\bar{C}}$, so that the theorem follows.
\end{proof}
\section{Properties of the non-linear term $F$ and Taylor expansions} \label{sec:2}
%
%
In this section we study the non-linear term $F$ in order to write its Taylor expansion around the solution $\phi(t)$ of \eqref{mildsolution} with respect to an increment given in terms of powers of $\varepsilon$.
In order to do that we recall some basic properties of Fr\'echet differentiable functions.
Let $U$ and $V$ be two real Banach spaces. For a mapping
$F:U\to V$ the G\^ateaux differential at $u \in U$ in the direction
$h\in U$ is defined as
$$
\nabla F(u)[h]=\lim_{s\to 0}\frac{F(u+sh)-F(u)}{s},
$$
whenever the limit exists in the topology of $V$ (see for example \cite[p. 12]{Lal}).
We notice that if $\nabla F(u) [h]$ exists in a neighborhood of $u_0 \in U$ and is continuous in $u$ at $u_0$ and also continuous in $h$ at $h=0$, then $\nabla F (u) [h]$ is linear in $h$ (see for instance \cite[Problem 1.6.1, p.\ 15]{Lal}).
If $\nabla F (u_0) [h]$ has this property for all $u_0 \in U_0 \subseteq U$ and all $h \in U$ we shall say that $F$ belongs to the space $G^1(U_0;V)$.
If $F$ is continuous from $U$ to $V$ and $F \in G^1(U_0;V)$ and one has $F(u+h) = F(u) + \nabla F(u)[h]+R(u,h)$, for any $u \in U_0$ with:
\begin{align}\label{eq:FrechetTaylor}
\lim_{\left| h \right|_U \rightarrow 0} \frac{\left| R(u,h) \right|_V }{\left| h \right|_U} =0
\end{align}
with $|\cdot|_V $ and $| \cdot|_U$ denoting respectively the norm in $V$ and $U$, then the map $ h \mapsto \nabla F (u) [h]$ is a bounded linear operator from $U$ to $V$, and $\nabla F(u)[h]$ is, by definition, the unique Fr\'echet differential of $F$ at $u \in U_0$ with increment $h \in U$. The function $R(u,h)$ is called the remainder of this Fr\'echet differential, while
the operator sending $h$ into $\nabla F(u) [h]$ is then called the Fr\'echet derivative of $F$ at $u$ and is usually denoted by $F^\prime(u)$ (see for instance \cite[pp. 15-16, Problem 1.6.2 and Lemma 1.6.3]{Lal}).
We have then $\nabla F(u) [h] = F^\prime(u) \cdot h$, with the symbol $\cdot$ denoting the action of the linear bounded operator $F^\prime(u)$ on $h$.
The mapping $F^\prime(u)$ is also called the gradient of $F$ at $u$ (see for example \cite[p. 15]{Lal}) and it coincides with the G\^ateaux derivative of $F$ at $u$.
We shall denote by $\mathcal{F}^{(1)} (U_0,V)$ the subset of $G^1(U_0,V)$ such that the Fr\'echet derivative exists at any point of $U_0$.
Similarly we introduce the Fr\'echet derivative $F^{\prime\prime}(u)$ of $F^\prime$ at $u \in U$.
This is a bounded linear map from a subset $D(F^{\prime\prime})$ of $U$ into $L(U,V)$ ($L(U,V)$ being the space of bounded linear operators from $U$ to $V$). One has thus $F^{\prime\prime}(u) \in L(U, L(U,V))$. If we choose $h,k \in U$ then $F^{\prime\prime}(u) \cdot k \in L(U,V)$ and $\left(F^{\prime\prime}(u) \cdot k\right) \cdot h \in V$. The latter is also written $F^{\prime\prime}(u) \;h\;k$ or $F^{\prime\prime}(u) [h,k]$. The mapping
$F^{\prime\prime}(u) [h,k]$ is bilinear in $h,k$, for any given $u \in D(F^{\prime\prime})$ and it can be identified with the G\^ateaux differential $\nabla^{(2)} F(u) [h,k]$ of $\nabla F(u)[h]$ in the direction $k$, the latter looked upon as a map from $U$ to $L(U,V)$.
Similarly one defines the $j$-th Fr\'echet derivative $F^{(j)}(u)$ and the $j$-th G\^ateaux derivative $\nabla^{(j)} F(u) [h_1, \ldots, h_j]$. The function
$F^{(j)}(u)$ acts $j$-linearly on $h_1,\ldots,h_j$, with $h_i \in U$ for any $i=1,\ldots,j$.
Let $U_0$ be an open subset of $U$ and consider the space $\mathcal{F}^{(j)}(U_0,V)$ of maps $F$ from $U$ to $V$ such that $F^{(j)}(u)$ exists at all $u \in U_0$ and is uniformly continuous on $U_0$. The following Taylor formula holds for any $u,h \in U$ for which $F(h)$ and $F(u+h)$ are well defined (i.e. $h$ and $u+h$ are elements of $D(F)$), and $j=1,\ldots,n+1$ with $u \in \cap_{j=1}^n \mathcal{F}^{(j)}(U_0,V)$:
\begin{equation}\label{eq:FrechetTaylorn}
F(u+h) = F(u) + \nabla F (u) [h]+ \frac{1}{2} \nabla^{(2)}F(u) [h,h]+ \cdots + \frac{1}{n !} \nabla^{(n)} F(u) \underbrace{[h,\ldots,h]}_{\textrm{$n$-terms}}
+R^{(n)}(u;h) \:,
\end{equation}
where $\left| R^{(n)}(u;h) \right|_V \leq C_{u,n} \left|h\right|_U^{n+1}$ for some
constant $C_{u,n}$ depending only on $u$ and $n$ (see for example \cite[Theorem X.1.2]{KolmogorovFomin}).
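As a simple illustration (a hypothetical special case, chosen only to make the abstract derivatives concrete, and disregarding domain issues), consider the Nemytskii operator $F$ associated with the cubic polynomial $f(u)=u-u^3$, the type of non-linearity appearing in the FitzHugh-Nagumo model of Example \ref{Remark:FHN}. A direct computation gives
\begin{align*}
\nabla F(u)[h]=h-3u^2h, \qquad \nabla^{(2)}F(u)[h,k]=-6u\,h\,k, \qquad \nabla^{(3)}F(u)[h,k,l]=-6\,h\,k\,l,
\end{align*}
with all higher derivatives vanishing, so that the Taylor formula \eqref{eq:FrechetTaylorn} holds with $n=3$ and $R^{(3)}(u;h)=0$:
\begin{align*}
F(u+h)=F(u)+(1-3u^2)h-3uh^2-h^3.
\end{align*}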
Now let us consider the case $U = H$, with $H$ being the same Hilbert space appearing in
problem \eqref{eq:det}.
%
Let $F$ be as in Hypothesis \ref{hp:A+F} and set $U_0 = D(F)$. Let us define
for $0< \varepsilon\leq 1$ the function $h(t)$, $t \geq 0 $:
$$
h(t) = \sum_{k=1}^n \varepsilon^k u_k(t)+r^{(n)}(t;\varepsilon) \:,
$$
where the functions $u_k(t),k=1,\dots,n$ and $r^{(n)}(t;\varepsilon)$ are $p$-mean integrable continuous stochastic processes with values in $H$, defined on the whole interval $[0,T]$ for $p \in [2,\infty)$. Moreover we suppose $r^{(n)}(\cdot;\varepsilon) ={\bf o}(\varepsilon^{n})$, i.e.,
\begin{align*}
\lim_{\varepsilon \to 0}\mathbb{E}\left[ \sup_{t\in [0,T]}\frac{|r^{(n)}(t;\varepsilon)|^p}{\varepsilon^{np}}\right] =0, \qquad \textrm{for any } T>0.
\end{align*}
Let $\phi$ be a $p$-mean integrable continuous stochastic process with values in the Banach space $K$. Then using the above Taylor formula we have
\begin{equation}\label{eq:nablacasoconcreto}
\begin{aligned}
F(\phi(t)+h(t))= F(\phi(t))+\nabla F (\phi(t)) [h(t)]+ \frac{1}{2}\nabla^{(2)} F(\phi(t))[h(t),h(t)]+\cdots\\
\qquad \qquad \qquad +
\frac{1}{n !} \nabla^{(n)} F(\phi(t)) \underbrace{[h(t),\ldots,h(t)]}_{\textrm{$n$-terms}}
+R^{(n)}(\phi(t);h(t)) \:,
\end{aligned}
\end{equation}
and, recalling that for any $j=1,\dots,n$, $\nabla^{(j)}F(\phi(t))$ is multilinear, we have
\begin{equation}\label{eq:oj}
\begin{aligned}
&\frac{1}{j!} \nabla^{(j)}F(\phi(t)) \underbrace{[h(t),\dots,h(t)]}_{\textrm{$j$-terms}}=\\
&\qquad \qquad \frac{1}{j!}\sum_{k_1+\dots+k_j=j}^{nj} \varepsilon^{k_1+\dots+k_j} \nabla^{(j)} F(\phi(t)) [u_{k_1}(t),\dots,u_{k_j}(t)]
+ {\bf o}_j(\varepsilon^{nj})
\end{aligned}
\end{equation}
where ${\bf o}_j(\varepsilon^{nj})$ is the contribution to the right member of the above equality coming from the term $r^{(n)}(t;\varepsilon)$ and satisfies the estimate
\begin{align*}
\lim_{\varepsilon \to 0} \mathbb{E} \left[\sup_{t\in [0,T]} \frac{|{\bf o}_j(\varepsilon^{nj})|^p}{\varepsilon^{njp}} \right]=0, \quad \textrm{for any } T >0.
\end{align*}
We notice that any derivative appearing in the member on the right hand side of \eqref{eq:oj} is multiplied by the parameter $\varepsilon$ raised to a power
between $j$ and $nj$.
Taking into account the above equality we can rewrite \eqref{eq:nablacasoconcreto} as
\begin{equation} \label{TaylorF}
\begin{aligned}
F(\phi(t)+h(t)) &= F(\phi(t)) + \sum_{k=1}^n \varepsilon^k \nabla F(\phi(t)) [u_k(t)] \\
&+ \sum_{j_1+j_2=2}^n
\frac{\varepsilon^{j_1+j_2}}{2!} \nabla^{(2)} F(\phi(t)) [u_{j_1}(t),u_{j_2}(t)] + \cdots \\
&+ \sum_{ j_1+\dots + j_k = k}^n
\frac{\varepsilon^{j_1+\dots+j_k}}{k!} \nabla^{(k)} F(\phi(t)) [u_{j_1}(t),\ldots,u_{j_k}(t)] + \cdots \\
&+ \frac{ \varepsilon^n}{n!} \nabla^{(n)} F(\phi(t)) [u_{1}(t),\ldots,u_{1}(t)] + R_1^{(n)}(\phi(t);h(t),\varepsilon)\:,
\end{aligned}
\end{equation}
where the quantity $ R_1^{(n)}(\phi(t);h(t),\varepsilon) $ is given in terms of the derivatives of $F$ with
the parameter $\varepsilon$ raised to powers greater than $n$, in terms of the $n$-th remainder $R^{(n)} (\phi(t);h(t))$ in the Taylor expansion of the map $F$ (as stated in equation \eqref{eq:FrechetTaylorn}) and in terms of the remainders ${\bf o}_j(\varepsilon^{nj})$, $j=2,\dots, n$ introduced in \eqref{eq:oj}. Namely, we have:
\begin{equation}\label{R1}
\begin{aligned}
&R_1^{(n)} (\phi(t);h(t),\varepsilon) = \sum_{j=2}^n\sum_{i_1+\cdots+i_j = n+1}^{nj} \varepsilon^{i_1+\dots+i_j}
\frac{1}{j!} \nabla^{(j)} F(\phi(t)) [u_{i_1}(t), \dots , u_{i_j}(t)] \\
& \qquad \qquad \qquad \qquad \qquad + \sum_{j=2}^n {\bf o}_j(\varepsilon^{nj})
+ R^{(n)}(\phi(t);h(t)),
\end{aligned}
\end{equation}
$R^{(n)}(\phi(t);h(t))$ being as in \eqref{eq:FrechetTaylorn} (with $u$ replaced by $\phi$). In this way equation \eqref{TaylorF} can be rearranged as
\begin{equation} \label{EpsilonTaylorF}
\begin{aligned}
& F(\phi(t)+h(t)) \\
&\quad = F(\phi(t)) +
\sum_{k=1}^n \varepsilon^{k}\left( \nabla F(\phi(t))[u_k(t)] + \sum_{j=2}^{k} \sum_{i_1+\dots+i_j=k}
\frac{1}{j!} \nabla^{(j)} F(\phi(t)) [u_{i_1}(t), \dots , u_{i_j}(t)] \right)\\
& \quad + R_1^{(n)}(\phi(t);h(t),\varepsilon),
\end{aligned}
\end{equation}
with the convention that for $k=1$ the inner double sum is empty.
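To fix ideas, let us write out the case $n=2$: grouping the terms of \eqref{TaylorF} by powers of $\varepsilon$ gives
\begin{align*}
F(\phi(t)+h(t)) = F(\phi(t)) &+ \varepsilon\, \nabla F(\phi(t))[u_1(t)] \\
&+ \varepsilon^2\left( \nabla F(\phi(t))[u_2(t)] + \frac{1}{2}\,\nabla^{(2)}F(\phi(t))[u_1(t),u_1(t)]\right) + R_1^{(2)}(\phi(t);h(t),\varepsilon).
\end{align*}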
\begin{lemma} \label{lm:R1}
Let $R_1^{(n)}$ be as in formula \eqref{R1}. Then for all $p \in [2, \infty)$ and $T >0$ there exists a constant $C>0$, depending on $| \phi |_{K},\ldots,| u_n |_{H},\nabla^{(1)} F,\ldots, \nabla^{(n)} F,p,n$, such that:
$$
\mathbb{E}\left[\sup_{t \in [0,T]} | R_1^{(n)} (\phi(t);h(t),\varepsilon) |_H^p \right] \leq C \varepsilon^{p(n+1)}
$$
for all $0< \varepsilon \leq 1$.
\end{lemma}
\begin{proof}
First of all we notice that
$$
\sum_{j=2}^n {\bf o}_j(\varepsilon^{nj}) = {\bf O}(\varepsilon^{2n}),
$$
meaning that
\begin{align}\label{eq:O}
\left|\sum_{j=2}^n{\bf o}_j(\varepsilon^{nj})\right|\leq C_n \varepsilon^{2n}, \qquad \varepsilon \to 0,
\end{align}
for some constant $C_n>0$.
Now since:
\begin{align*}
&R_1^{(n)} (\phi(t);h(t),\varepsilon) = \sum_{j=2}^n\sum_{i_1+\ldots+i_j = n+1}^{nj} \varepsilon^{i_1+\dots+i_j}
\frac{1}{j!} \nabla^{(j)} F(\phi(t)) [u_{i_1}(t), \dots , u_{i_j}(t)] \\
& \qquad \qquad \qquad \qquad + \sum_{j=2}^n {\bf o}_j(\varepsilon^{nj})
+ R^{(n)}(\phi(t);h(t)),
\end{align*}
using the estimate given in condition (3.b) in Hypothesis \ref{hp:A+F} and
\eqref{eq:O}, for $\varepsilon \in (0,1]$ we have
\begin{equation}
\begin{aligned}\label{eq:StimaR_1}
& | R_1^{(n)} (\phi(t);h(t),\varepsilon) |_H^p \\
& \qquad \leq C_{n,p}^{1} \varepsilon^{(n+1)p} \left[ \left( \max_{j=1,\ldots,n} \| \nabla^{(j)} F (\phi(t))\|_{L^j(K)} \right)^p
\left( \sum_{i=1}^n |u_i(t)|_H^p \right) \right]
\\
&\qquad \qquad \qquad +( {\bf O}(\varepsilon^{2n}))^p+ C_{n,p}^2\left| R^{(n)}\left(\phi(t);h(t) \right) \right|_H^p\\
& \qquad \leq C_{n,p}^{(1)} \varepsilon^{(n+1)p} \max_{j=1,\dots,n} \left[ \gamma_j^p(1+|\phi(t)|_K^{m-j})^p \right] \left( \sum_{i=1}^n |u_i(t)|_H^p \right)\\
& \qquad \qquad \qquad+C_n \varepsilon^{2np}+ C_{n,p}^{(2)}|R^{(n)}(\phi(t);h(t))|_H^p\\
&\qquad \leq\tilde{C}_n \varepsilon^{(n+1)p} + C_{n,p}^{(2)}|R^{(n)}(\phi(t);h(t))|_H^p,
\end{aligned}
\end{equation}
where $C^1_{n,p},C_{n,p}^{(1)},C_{n,p}^{(2)}$ are constants depending only on $n,p$ and the constant $C_n$ in \eqref{eq:O} while $\tilde{C}_n$ is a suitable positive constant depending on $p,n, \max_{j=1,\dots,n} \left[ \gamma_j^p(1+|\phi(t)|_K^{m-j})^p \right]$ ($\gamma_i$ being the constants appearing in Hypothesis \ref{hp:A+F}, condition (3)) and $\left| u_i(t) \right|_H^p$, $i=1,\ldots,n$.
We notice that the above inequality follows by recalling that the deterministic
function $\phi(t)$ is bounded (in the $H$-norm) (see Proposition \ref{prop:MildDeterministica}).
Now by the bound on $R^{(n)}$ in the equation \eqref{eq:FrechetTaylorn} we have that
$$
|R^{(n)}(\phi(t);h(t))|_H^p \leq \hat{C}_n |h(t)|_H^{(n+1)p}
$$
with $\hat{C}_n$ depending on $\phi(t)$ and $n$ but independent of $h(t)$.
Since $h(t) = \sum_{k=1}^n \varepsilon^k u_k(t)+r^{(n)}(t;\varepsilon)$ with $|r^{(n)}(t;\varepsilon)|\leq C_n \varepsilon^{n+1}$ for some constant $C_n$, then:
\begin{equation}\label{eq:StimaR^n}
|R^{(n)}(\phi(t);h(t))|_H^p \leq \varepsilon^{(n+1)p} \hat{C}_{n,p}(|u_1(t)|_H,\ldots,|u_n(t)|_H)
\end{equation}
with $\hat{C}_{n,p}= \hat{C}_{n,p} (\left|u_1(t)\right|_H, \ldots, \left|u_n(t)\right|_H)$ independent of $\varepsilon$.
Hence by \eqref{eq:StimaR_1} and \eqref{eq:StimaR^n} we have that
$$
\mathbb{E}\left[\sup_{t \in [0,T]} | R_1^{(n)} (\phi(t);h(t),\varepsilon) |_H^p \right]
\leq
C^\prime_n \varepsilon^{(n+1)p} ,
$$
where $C^\prime_n:=C^\prime_n(p, \nabla^{(1)}F,\ldots,\nabla^{(n)}F,|\phi|_H,\ldots,|u_n|_H )$ is independent of $\varepsilon$. This gives the lemma, with $C = C_n^\prime$.
\end{proof}
As we said before, we want to expand the solution of the equation \eqref{eq:eps} around $\phi(t)$, that is we
want to write $u(t)$ as:
\begin{equation} \label{eq:Espansioneu(t)}
u(t)=\phi(t)+\varepsilon u_1(t)+\dots+\varepsilon^n u_n(t)+R_n(t,\varepsilon),
\end{equation}
(with $R_n(t,\varepsilon) = {\bf O}(\varepsilon^{n+1})$ for any $t\geq 0$),
where the processes $(u_i(t))_{t\geq 0}, i=1,\dots,n$ can be found by using the Taylor expansion of $F$ around $\phi(t)$ and \textit{matching terms}
in the equation \eqref{eq:eps} for $u$.
Given predictable $H$-valued stochastic processes $w(t), v_1(t),\ldots , v_n(t)$ let us use the notation:
\begin{equation}\label{eq:phik}
\Phi_k( w(t))\left[v_1(t), \ldots, v_{k-1}(t)\right]:= \sum_{j=2}^k \sum_{i_1+\dots+i_j =k} \frac{1}{j!}\nabla^{(j)}F(w(t))[v_{i_1}(t),\dots,v_{i_j}(t)]\:,
\end{equation}
with $i_1,\ldots,i_j$ running from $1$ to $k-1$ under the restriction $i_1+ \cdots + i_j =k$.
With the above notation the processes
$u_1(t),\dots,u_n(t)$ occurring in \eqref{eq:Espansioneu(t)} satisfy the following equations:
$$
\begin{cases}
du_1(t)= [Au_1(t)+\nabla F(\phi(t))[u_1(t)]]dt + \sqrt{Q}dL(t), \\
u_1(0)=0,
\end{cases}
$$
and
\begin{equation}\label{eq:StochasticSystem}
\begin{cases}
du_k(t)= [Au_k(t)+\nabla F(\phi(t))[u_k(t)]]dt
+ \Phi_k(t)dt, \\
u_k(0)=0,\\
\end{cases}
\end{equation}
with
\begin{equation}\label{eq:Phi_k}
\Phi_k(t) :=\Phi_k( \phi(t))\left[u_1(t), \ldots,u_{k-1}(t) \right] \:,\: k \in \mathbb{N}, n \geq k \geq 2\:.
\end{equation}
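For the first values of $k$, writing out the sums defining $\Phi_k$ (with the symmetric factors $1/j!$ as they appear in the Taylor expansion \eqref{TaylorF}, and using the symmetry of the G\^ateaux derivatives) gives
$$
\begin{cases}
du_2(t)= \left[Au_2(t)+\nabla F(\phi(t))[u_2(t)]+ \tfrac{1}{2}\nabla^{(2)}F(\phi(t))[u_1(t),u_1(t)]\right]dt, \\
u_2(0)=0,
\end{cases}
$$
and
$$
\begin{cases}
du_3(t)= \left[Au_3(t)+\nabla F(\phi(t))[u_3(t)]+ \nabla^{(2)}F(\phi(t))[u_1(t),u_2(t)]+\tfrac{1}{3!}\nabla^{(3)}F(\phi(t))[u_1(t),u_1(t),u_1(t)]\right]dt, \\
u_3(0)=0.
\end{cases}
$$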
Notice that while $u_1(t)$ is the solution of a linear stochastic differential equation (with time dependent drift operator $A+\nabla F(\phi(t))$), the processes $u_2,\ldots,u_n$ are solutions of non-homogenous differential equations with random coefficients whose meaning is given below.
\begin{definition}\label{def:SolutionUk}
Let $2 \leq k \leq n$. Then a predictable $H$-valued stochastic process $u_k = u_k(t) \;, t\geq 0$ is a solution of the problem \eqref{eq:uk} $($i.e. \eqref{eq:StochasticSystem}$)$ if almost surely it satisfies the following integral equation
\begin{equation*}
u_k(t) = \int_0^t e^{(t-s)A} \nabla F (\phi(s)) [u_k(s)] ds + \int_0^t e^{(t-s)A} \Phi_k(s) ds , \qquad t \geq 0 \; , \; 2 \leq k \leq n,
\end{equation*}
with $\phi$ as in Proposition $\ref{prop:MildDeterministica}$ and $\Phi_k$ as in \eqref{eq:phik}
and \eqref{eq:Phi_k}.
\end{definition}
In the following result we estimate the norm of $\Phi_k$ in $H$ by
means of the norms of
the G\^ateaux derivatives of $F$ and the norms of $v_j(t)$, $j=1,\dots,k-1$, where $v_j(t)$ are $H$-valued stochastic processes.
\begin{lemma} \label{lemma:phik}
Let us fix $2\leq k\leq n;$ let $w(t)$ and $v_1(t),\ldots,v_{k-1}(t)$ be respectively a $K$-valued process and $H$-valued stochastic processes.
Then $\Phi_k( w(t))\left[v_1(t),\ldots,v_{k-1}(t)\right]$ as in \eqref{eq:phik} satisfies the following inequality
\begin{equation*}
\left|\Phi_k( w(t))\left[v_1(t),\ldots,v_{k-1}(t)\right]\right|_H \leq C \left(1+|w(t)|_K^{m-2}\right) k^2 (k+|v_1(t)|_H^{k-1}+\dots+|v_{k-1}(t)|_H^{k-1}),
\end{equation*}
where $C$ is some positive constant depending on $k$ and the constants $\gamma_j$, $j=2,\ldots,k$,
introduced in Hypothesis $\ref{hp:A+F}$.
\end{lemma}
\begin{proof}
We have
\begin{equation}
\begin{aligned}
\left| \Phi_k( w(t))\left[v_1(t),\ldots,v_{k-1}(t)\right] \right|_H &= \left| \sum_{j=2}^k \sum_{i_1+\dots+i_j =k} \frac{\nabla^{(j)}F(w(t)) [v_{i_1}(t),\dots,v_{i_j}(t)]}{j!}
\right|_H \\
&\leq \sum_{j=2}^k \sum_{i_1+\dots+i_j =k} \left| \frac{\nabla^{(j)}F(w(t))[v_{i_1}(t),\dots,v_{i_j}(t)]}{j!} \right|_H
\end{aligned}
\end{equation}
and using the assumption (3) in Hypothesis \ref{hp:A+F}, we get
\begin{equation}
\begin{aligned}
| \Phi_k (t)|_H & \leq
\sum_{j=2}^k \sum_{i_1+\dots+i_j =k}
\frac{1}{j!}\| \nabla^{(j)} F(w(t))\|_{L^j(H) }
\prod_{l=1}^j |v_{i_l}(t)|_H \\
& \leq
\sum_{j=2}^k \frac{1}{j!} \gamma_j(1+ |w(t)|_K)^{m-j}
\sum_{i_1+\dots+i_j =k}
\sum_{l=1}^j |v_{i_l}(t)|_H^j \\
& \leq \sum_{j=2}^k \frac{1}{j!} \gamma_j(1+ |w(t)|_K)^{m-j}
\sum_{i_1+\dots+i_j =k}
\left( j + \sum_{l=1}^{k-1} |v_l(t)|_H^{k-1} \right)\\
& \leq
\sum_{j=2}^k \frac{1}{j!} \gamma_j(1+ |w(t)|_K)^{m-j}
k^2
\left(k + \sum_{l=1}^{k-1} |v_l(t)|_H^{k-1} \right) \\
& \leq
C (1+|w(t)|_K^{m-2}) k^2 \left(k + \sum_{l=1}^{k-1} |v_l(t)|_H^{k-1} \right),
\end{aligned}
\end{equation}
for some positive constant $C$, from which the assertion in Lemma \ref{lemma:phik} follows.
\end{proof}
\begin{remark}\rm
Notice that by Lemma \ref{lemma:phik}, if $v_1,\ldots,v_{k-1}$
are $p$-mean $(p \in [2, \infty))$ integrable continuous stochastic processes,
then the same holds for $\Phi_k$.
\end{remark}
\section{Main results} \label{sec:3}
\begin{proposition}\label{prop:u1}
Under Hypothesis $\ref{hp:A+F}$ the following stochastic differential equation$:$
\begin{equation}\label{eq:u1mr}
\begin{cases}
du_1(t)= [Au_1(t)+\nabla F(\phi(t)) [u_1(t)]]dt + \sqrt{Q}dL(t) , \quad t \in [0,+\infty) \\
u_1(0) = 0,
\end{cases}
\end{equation}
has, with $\phi$ as in Proposition $\ref{prop:MildDeterministica}$, a unique mild solution satisfying, for any $p\geq2$, the following estimate$:$
\begin{equation}\label{stimau1}
\mathbb{E}\left[ \sup_{t \in [0,T]} | u_1(t) |_H^p \right]< +\infty, \qquad \textit{for any } T>0.
\end{equation}
\end{proposition}
\begin{proof}
First we show the uniqueness. Let us suppose that $w_1(t)$ and $w_2(t)$
are two solutions of \eqref{eq:u1mr}. Then by It\^o's formula we have:
\begin{multline*}
d |w_1(t)-w_2(t)|_H^2= 2\left\langle A
(w_1(t)-w_2(t)),w_1(t)-w_2(t)\right\rangle d t
\\ + 2\left\langle \nabla F(\phi(t))[w_1(t)-w_2(t)],
w_1(t)-w_2(t)\right\rangle d t,
\end{multline*}
so that, by the dissipativity condition on $A$ and the estimate
on $\nabla F$ in Hypothesis \ref{hp:A+F}, (3), we have
\begin{align*}
d |w_1(t)-w_2(t)|_H^2 \leq - 2\omega |w_1(t)-w_2(t)|_H^2 + 2\gamma_1
(1+|\phi(t)|_K^{m-1}) |w_1(t)-w_2(t)|_H^2.
\end{align*}
Now uniqueness follows by applying Gronwall's lemma.
As far as the existence is concerned, we proceed by a fixed point argument.
We introduce the mapping $\Gamma$
from $\mathcal{L}^p(\Omega;C([0,T];H))$ into itself defined by
\begin{align*}
\Gamma(w(t)):= \int_0^t e^{(t-s)A}\nabla F(\phi(s))[w(s)]d s
+ L_A(t).
\end{align*}
We are going to prove that there exists $\tilde{T}>0$ such that $\Gamma$ is a contraction on $\mathcal{L}^p(\Omega;C([0,\tilde{T}];H))$. In fact, for any $v,w \in
\mathcal{L}^p(\Omega;C([0,\tilde{T}];H))$ we have, for any $0\leq t \leq \tilde{T}$:
\begin{align*}
& \| \Gamma(v)-\Gamma(w)\|^p =
\mathbb{E}\left[ \sup_{t\in [0,\tilde{T}]} \left|\int_0^t
e^{(t-s)A}\nabla F(\phi(s))[v(s)-w(s)]d s \right|^p_H \right]
\\
& \leq \mathbb{E}\left[ \sup_{t\in[0,\tilde{T}]} \int_0^t \| e^{(t-s)A}\|_{L(H)}^p \left| \nabla F(\phi(s)) [v(s)-w(s)] \right|_H^p ds\right]\\
& \leq \mathbb{E} \left[\sup_{s\in [0,\tilde{T}]} \left| \nabla F(\phi(s)) [v(s)-w(s)] \right|_H^p \right] \int_0^{\tilde{T}} \| e^{(\tilde{T}-s^\prime)A}\|_{L(H)}^p ds^\prime\\
& \leq
\gamma_1^p \sup_{s\in [0,\tilde{T}]}\left(1+|\phi(s)|_{K}^{m-1}\right)^p \,\mathbb{E}\left[ \sup_{s\in [0,\tilde{T}]} |v(s) -w(s) |_H^p \right]
\frac{1}{\omega p} \left(1- e^{-\omega p \tilde{T}}\right) \\
& \leq
\gamma_1^p(1+|u^0|_K^{m-1})^p
\| v-w \|^p \frac{1}{\omega p} \left(1- e^{-\omega p \tilde{T}}\right),
\end{align*}
where we used
condition $(iii)$ in Hypothesis \ref{hp:A+F} for the third
inequality and Proposition \ref{prop:MildDeterministica} for the
last inequality.
Then if $\tilde{T}$ is sufficiently small (depending on $\omega, p, \gamma_1, \phi$), we see that $\Gamma$ is a contraction on $\mathcal{L}^p(\Omega;C([0,\tilde{T}];H))$.
By considering the map $\Gamma$ on intervals $[0,\tilde{T}], [\tilde{T},2\tilde{T}], \ldots, [(N-1)\tilde{T},T]$, $\tilde{T} \equiv T/N$, $N \in \mathbb{N}$,
we have that $\Gamma$ is a contraction on $\mathcal{L}^p(\Omega;C([0,T];H))$ and hence we have the existence and uniqueness of the solution for the equation \eqref{eq:u1mr} in the space $\mathcal{L}^p(\Omega;C([0,T];H))$ for any $p \in [2,\infty)$.
Let us now consider the estimate \eqref{stimau1}. We write the It\^o formula for the function
$H(x):=|x|_H^{2a}$, $a \in \N$, applied to the process $u_1$. To this end, we recall
the expressions for the first and second derivatives of the function $H$.
We have
\begin{align*}
&\nabla H(x)= 2a |x|^{2(a-1)}x\\
& \frac{1}{2}{\rm Tr}(Q\nabla^2 H(x) )= a {\rm Tr}(Q) |x|^{2(a-1)} + 2(a-1)a |x|^{2(a-2)} |\sqrt{Q}x|^2.
\end{align*}
Moreover (see \cite{BoMZ} and \cite{OZS}), we recall that the It\^o
formula implies:
\begin{align*}
{\rm d} H(u(t))= \langle \nabla H(u(t-)), {\rm d} u(t)\rangle + \frac{1}{2}{\rm Tr}(Q\nabla^2 H(u(t-)) )\, {\rm d} t
+ {\rm d} [u](t).
\end{align*}
Although our computations are only formal, they can be justified
using an approximation argument.
By condition (iii) in Hypothesis \ref{hp:A+F} we have for all points in the probability space and $p=2a$ with $a \in \mathbb{N}$:
\begin{multline}\label{eq:stimau1d2}
{\rm d} |u_1(t)|^{2a} = 2a \langle u_1(t-),{\rm d} u_1(t)\rangle_H |u_1(t-)|^{2a-2} +
a {\rm Tr}(Q) |u_1(t-)|^{2(a-1)} {\rm d} t\\+ 2(a-1)a |u_1(t-)|^{2(a-2)} |\sqrt{Q}u_1(t-)|^2{\rm d} t+ {\rm d} [u_1](t).
\end{multline}
By the dissipativity of $A+F$, the first term in \eqref{eq:stimau1d2} is estimated by
\begin{equation}\label{sbadiglio1}
\begin{aligned}
& \langle u_1(t-),{\rm d} u_1(t)\rangle_H |u_1(t)|^{2a-2}
\\= &\ \langle A u_1(t), u_1(t) \rangle |u_1(t)|_H^{2a-2}\,{\rm d} t +
\langle \nabla F(\phi(t))[u_1(t)], u_1(t) \rangle |u_1(t)|_H^{2a-2}\,{\rm d} t
+ \langle\sqrt{Q}\, {\rm d} L(t), u_1(t) \rangle |u_1(t)|_H^{2a-2}
\\\leq
& - \omega
|u_1(t)|_H^{2a}\,{\rm d} t + \gamma(1+|u^0|_K^{m-1})|u_1(t)|_H^{2a}\,{\rm d} t+ \langle\sqrt{Q}\, {\rm d} L(t), u_1(t)\rangle |u_1(t)|_H^{2a-1}
\\ \leq &
- \tilde{\omega}
|u_1(t)|_H^{2a}\,{\rm d} t + \langle \sqrt{Q}\, {\rm d} L(t), u_1(t)\rangle |u_1(t)|_H^{2a-1},
\end{aligned}
\end{equation}
where $\tilde{\omega}:=\omega-\gamma(1+|u^0|_K^{m-1})$.
Moreover, the second and third term in \eqref{eq:stimau1d2} can be estimated in the following way:
\begin{align}\label{eq:d2}
a {\rm Tr}(Q) |u_1(t-)|^{2(a-1)} + 2(a-1)a |u_1(t-)|^{2(a-2)} |\sqrt{Q}u_1(t-)|^2
\leq C_{a} \Big(\epsilon\, {\rm Tr}(Q)^{2a}+ \frac{1}{\epsilon}|u_1(t-)|^{2a}
\Big),
\end{align}
for any $\epsilon>0$, where we used the elementary inequality $x\,
y^{2(a-1)} \leq C_a (\epsilon x^{2a}+\frac{1}{\epsilon} y^{2a})$,
with $C_a$ a suitable positive constant. Therefore
\begin{align*}
|u_1(t)|_H^{2a}& \leq -2a \left(\tilde{\omega} - \frac{C_a}{\epsilon}\right)
\int_0^t |u_1(s)|_H^{2a}{\rm d} s + 2a \int_0^t \langle \sqrt{Q}\, {\rm d} L(s), u_1(s)\rangle |u_1(s)|_H^{2a-1}\\
& +C_a \epsilon\, {\rm Tr}(Q)^{2a}\,T + \int_0^t {\rm Tr}(Q)\, {\rm d} [L](s)
\end{align*}
and
\begin{multline*}
\mathbb{E} \sup_{t\leq T}
|u_1(t)|_H^{2a} \leq -2a\left(\tilde{\omega} -\frac{C_a}{\epsilon}\right)T\,
\mathbb{E} \sup_{t\leq T} |u_1(t)|_H^{2a}\\ + 2a \, \mathbb{E} \sup_{t\leq T} \left|\int_0^t \langle \sqrt{Q}\, {\rm d} L(s), u_1(s)\rangle |u_1(s)|_H^{2a-1} \right| + C_{a}\epsilon\, {\rm Tr}(Q)^{2a}\, T + T\int_H {\rm Tr}(Q)\, |x|^{2}\nu({\rm d} x),
\end{multline*}
where we used the relation
\begin{align}\label{eq:dovesitrova}
\mathbb{E} \sup_{t\leq T }[u_1](t) \leq \mathbb{E} \int_0^T {\rm Tr}(Q) {\rm d} [L](t) = \mathbb{E} \int_0^T {\rm Tr}(Q)
{\rm d} \langle L\rangle (t) = T \int_H {\rm Tr}(Q) |x|^2\nu({\rm d} x).
\end{align}
By the Burkholder-Davis-Gundy inequality (see, e.g., \cite[p.~37]{PeZa},
\cite{HABU, KALLO}) applied to
\begin{align*}
M(t):= \int_0^t \langle \sqrt{Q} {\rm d} L(s), u_1(s)\rangle |u_1(s)|_H^{2a-1},
\end{align*}
there exists a constant $c_1$ such that
\begin{align*}
\mathbb{E} \sup_{t\leq T} \left|\int_0^t \langle \sqrt{Q} {\rm d} L(s), u_1(s-)\rangle |u_1(s)|_H^{2a-1} \right| \leq
& \,c_1 \, \mathbb{E} \left(\left[ \int_0^{\cdot} \langle \sqrt{Q} {\rm d} L(s), u_1(s-)\rangle |u_1(s)|_H^{2a-1} \right](T)\right)^{1/2} \\
\leq & \,c_1 \, \mathbb{E} \left( \sup_{t\leq T} |u_1(t)|^{2a} \int_0^T {\rm Tr}(Q) {\rm d} [L](s)\right)^{1/2}\\
\leq &\, \ c_1 \, \epsilon \mathbb{E} \sup_{t\leq T}
|u_1(t)|_H^{2a} + \frac{c_1 T}{4\epsilon} \int_H {\rm Tr}(Q)\, |x|^{2}\nu({\rm d} x),
\end{align*}
where we used the elementary inequality $ab\leq \epsilon a^2 +
(1/4\epsilon)b^2, \epsilon\,> 0.$ Collecting the above estimates we
obtain
\begin{align*}
\mathbb{E} \sup_{t\leq T}
|u_1(t)|_H^{2a} &\leq -2a\left(\tilde{\omega} -\frac{C_a}{\epsilon}\right)T\,
\mathbb{E} \sup_{t\leq T} |u_1(t)|_H^{2a} \\
&+ 2c_1 \epsilon\, \mathbb{E} \sup_{t\leq T}
|u_1(t)|_H^{2a} + \left( \frac{c_1}{2\epsilon}+1\right)T \int_H {\rm Tr}(Q)\, |x|^{2}\nu({\rm d} x) + C_a \epsilon\, {\rm Tr}(Q)^{2a}\, T.
\end{align*}
Hence
$$
\mathbb{E} \left[ \sup_{t \in [0,T]} |u_1(t)|_H^{2a} \right] \leq C_{a,T}^\prime e^{-2\, a\,(\tilde{\omega} -C_a/\epsilon)T} < C_{a,T} \:,
$$
where $C_{a,T}$ is a positive constant and \eqref{stimau1} follows.
\end{proof}
\begin{theorem}\label{thm:uk}
Let us fix $2\leq k\leq n$, assume that Hypothesis \ref{hp:A+F} holds, and let $u_1$ be the solution of the problem \eqref{eq:u1}.
Suppose moreover that $u_j$ is the unique mild solution of the following Abstract Cauchy Problem (ACP):
\begin{equation}\label{eq:prb-j}
\begin{cases}
du_j(t)= [Au_j(t)+\nabla F(\phi(t))[u_j(t)]]dt
+ \Phi_j(t)dt, \notag \\
u_j(0) = 0
\tag{$\rm{ACP}_j$}
\end{cases}
\end{equation}
for $j=2,\ldots,k-1$ satisfying$:$
\begin{equation}\label{stimauj}
\mathbb{E} \left[ \sup_{t \in [0,T]} | u_j(t) |_H^p \right]< +\infty, \qquad T>0, \text{ for any } p\in [2,\infty);
\end{equation}
then there exists a unique mild solution $u_k(t)$ of the following non-homogeneous linear differential equation with stochastic coefficients
$($in the sense of Definition $\ref{def:SolutionUk} ):$
\begin{equation}\label{eq:prb-k}
\begin{cases}
du_k(t)= [Au_k(t)+\nabla F(\phi(t))[u_k(t)]]dt
+ \Phi_k(t)dt, \quad t\in[0,+\infty), \notag \\
u_k(0) = 0
\tag{$\rm{ACP}_k$}
\end{cases}
\end{equation}
and it satisfies the following estimate, for any $T >0 :$
\begin{equation}\label{stimauk}
\mathbb{E}\left[\sup_{t\in[0,T]} | u_k(t) |_H^p\right] < +\infty .
\end{equation}
\end{theorem}
\begin{proof}
We proceed by a fixed point argument, where
the contraction is given by
\begin{equation*}
\Gamma(y(t)):=\int_0^{t} e^{(t-s)A} \nabla F (\phi(s)) [y(s)]
ds + \int_0^t e^{(t-s)A} \Phi_k(s) ds
\end{equation*}
on $\mathcal{L}^p(\Omega;C([0,T];H))$.
In fact, arguing as in Proposition \ref{prop:u1}, we see that for $\tilde{T} \in [0,T]$ sufficiently small,
$\Gamma$ is a contraction on $\mathcal{L}^p(\Omega;C([0,\tilde{T}];H))$, $p \in [2,\infty)$, so that the existence and the uniqueness of the solution
for \eqref{eq:prb-k} follows.
Let us consider the estimate \eqref{stimauk}.
By the condition (iv) in Hypothesis \ref{hp:A+F} we have, for $p=2a$ with $a \in \mathbb{N}$ (and all points in the probability space) :
\begin{equation}\label{sbadiglio}
\begin{aligned}
\frac{d}{dt} |u_k(t)|_H^{2a}
&= 2a \langle A u_k(t), u_k(t) \rangle |u_k(t)|_H^{2a-2} +
2a \langle \nabla F(\phi(t))[u_k(t)], u_k(t) \rangle |u_k(t)|_H^{2a-2}\\
&+ 2a \langle \Phi_k(t), u_k(t) \rangle |u_k(t)|_H^{2a-2} \\
&\leq
-2a \omega
|u_k(t)|_H^{2a} + 2a \gamma(1+|u^0|_K^{m-1})|u_k(t)|_H^{2a}+2a |\Phi_k(t)|_H |u_k(t)|_H^{2a-1} \\
& \leq
-2a \tilde{\omega}
|u_k(t)|_H^{2a} + C_a |\Phi_k(t)|_H^{2a},
\end{aligned}
\end{equation}
where $\tilde{\omega}:=\omega-\gamma(1+|u^0|_K^{m-1})$ as in the proof of Proposition \ref{prop:u1}.
By the assumption \eqref{stimauj} made on $u_j(t),j=1,\dots,k-1$ and Lemma \ref{lemma:phik} we have that:
$$
\mathbb{E}\left[ \sup_{t \in [0,T]} | \Phi_k(t) |_H^{2a} \right]\leq C_a^\prime, \qquad T >0,
$$
so that taking the expectation of inequality \eqref{sbadiglio} and applying Gronwall's lemma (similarly as in the proof of Proposition \ref{prop:u1}) we obtain:
$$
\mathbb{E}\left[\sup_{t \in [0,T]} |u_k(t)|_H^{2a}\right] \leq C_a^\prime e^{-2\,a\,\tilde{\omega} T} < C_a \:,
$$
where $C_a$ is a positive constant, and the theorem follows.
\end{proof}
We are now able to state the main result of this section:
\begin{theorem}\label{Th:espansione}
Under Hypothesis $\ref{hp:A+F}$ the mild solution $u(t)$ of \eqref{eq:eps}
$($in the sense of Definition $\ref{def:stoc-conv})$ can be expanded in powers of $\varepsilon>0$ in the following form
\begin{equation*}
u(t)=\phi(t)+\varepsilon u_1(t)+\dots+\varepsilon^n u_n(t)+R_n(t,\varepsilon), \quad n \in \mathbb{N},
\end{equation*}
where $u_1$ is the solution of
\begin{align*}
du_1(t)&=[Au_1(t)+\nabla F(\phi(t))[u_1(t)]]dt+\sqrt{Q}dL(t)\\
u_1(0)&=0,
\end{align*}
while $u_k$, $k=2,\ldots,n$ is the solution of
\begin{equation}
\begin{cases}
du_k(t)= [Au_k(t)+\nabla F(\phi(t))[u_k(t)]]dt
+\Phi_k(t)dt, \notag \\
u_k(0)=0.
\end{cases}\tag{$\rm{ACP}_k$}
\end{equation}
The remainder $R_n(t,\varepsilon)$ is defined by
\begin{equation}
\begin{aligned}
R_n(t,\varepsilon) & := u(t)-\phi(t) - \sum_{k=1}^n \varepsilon^k u_k(t) \\
&= \int_0^t e^{(t-s)A}
\left( F(u(s))-F(\phi(s))-\sum_{k=1}^n \varepsilon^k \nabla F (\phi(s))[u_k(s)] - \sum_{k=2}^n \varepsilon^k\Phi_k(s) \right) ds,
\end{aligned}
\end{equation}
and verifies the following inequality
\begin{equation*}
\mathbb{E} \left[\sup_{t\in[0,T]}\left|R_n(t,\varepsilon)\right|_H^p\right] \leq C_p \varepsilon^{n+1},
\end{equation*}
with a constant $C_p>0$.
\end{theorem}
\begin{proof}
Let us define $R_n(t,\varepsilon)$, $n \in \mathbb{N}$, as stated in the theorem.
Since by construction
\begin{itemize}
\item $\phi(t) = e^{t A} u^0 + \int_0^t e^{(t-s)A} F(\phi(s)) ds$ (cf. Definition \ref{def:det});
\item $u(t) = e^{t A} u^0 + \int_0^t e^{(t-s)A} F(u(s)) ds + \varepsilon L_A(t)$ (cf. Definition \ref{def:stoc-conv});
\item $u_1(t) = \int_0^t e^{(t-s)A} \nabla F(\phi(s))[u_1(s)] ds + L_A(t)$ (cf. Proposition
\ref{prop:u1} and Definition \ref{def:stoc-conv});
\item $u_k(t) = \int_0^t e^{(t-s)A} \nabla F (\phi(s)) [u_k(s)] ds + \int_0^t e^{(t-s)A} \Phi_k(s) ds$ for $k=2,\ldots,n$, with $\Phi_k(s) := \Phi_k( \phi(s))\left[u_1(s),\ldots, u_{k-1}(s)\right] $ defined in \eqref{eq:Phi_k}
(cf. Theorem \ref{thm:uk} and Definition \ref{def:stoc-conv});
\end{itemize}
we have
$$
R_n(t,\varepsilon) = \int_0^t e^{(t-s)A}
\left( F(u(s))-F(\phi(s))-\sum_{k=1}^n \varepsilon^k \nabla F (\phi(s))[u_k(s)] - \sum_{k=2}^n \varepsilon^k \Phi_k(s) \right) ds \:.
$$
Recalling that $R_1^{(n)}(\phi(s);h(s),\varepsilon) = F(u(s))-F(\phi(s))-\sum_{k=1}^n \varepsilon^k \nabla F (\phi(s))[u_k(s)] - \sum_{k=2}^n \varepsilon^k \Phi_k(s)$ we get:
\begin{equation}
\begin{aligned}
\mathbb{E}\left[ \sup_{t \in [0,T]} \left| R_n(t,\varepsilon) \right|_H^p\right] \leq
& \mathbb{E} \left[\sup_{t \in [0,T]}
\left| \int_0^t e^{(t-s)A} R_1^{(n)}(\phi(s);h(s),\varepsilon) ds \right|_H^p \right] \\
\leq
& \: \mathbb{E} \left[ \sup_{t\in[0,T]} \int_0^t \| e^{(t-s)A} \|_{L(H)}^p
|R_1^{(n)}(\phi(s);h(s),\varepsilon)|_H^p ds \right]\\
\leq
& \: \mathbb{E}\left[ \sup_{t\in[0,T]}
|R_1^{(n)}(\phi(t);h(t),\varepsilon)|_H^p
\int_0^t e^{-\omega (t-s) p} ds \right]\\
\leq
& \: C_{n,p}\varepsilon^{p(n+1)},
\end{aligned}
\end{equation}
for some positive constant $C_{n,p}$ (depending on $n,p$, but not on $\varepsilon$), where in the second and third inequality we have used the contraction property of the semigroup generated by $A$. Now recalling Lemma \ref{lm:R1} the inequality in Theorem \ref{Th:espansione} follows.
\end{proof}
\begin{example} \label{Remark:FHN} \rm
Our results apply in particular to stochastic PDEs describing the FitzHugh-Nagumo equation with a L\'evy noise perturbation
(related to those studied with a Gaussian noise, for example, in \cite{Tu1, Tu2, Tu92} and \cite{BoMa}).
The reference equation is given by (see \cite[equation (1.1)]{BoMa})
\begin{equation}\label{eq:bm08}
\begin{cases}
&\partial_t
v(t,x)=\partial_x (c(x)\partial_x v(t,x))
-p(x)v(t,x)-w(t,x)+f(v(t,x))+\varepsilon\dot{L}_1(t,x),\\
& \partial_t w(t,x)=\gamma v(t,x)-\alpha w(t,x)+\varepsilon \dot{L}_2(t,x), \\
&\partial_x v(t,0)=\partial_x v(t,1)=0,\\
&v(0,x)=v_0(x), \quad w(0,x)=w_0(x),
\end{cases}
\end{equation} with the parameter $\varepsilon>0$ in front of the noise, where $v,w$ are real-valued random functions, $\alpha, \gamma$ are strictly positive
real phenomenological constants and $c,p$ are strictly positive smooth functions on $[0,1]$. Moreover, the initial values $v_0, w_0$ are in $C([0,1])$.
The nonlinear term is of the form $f(v) = -v (v-1) (v- \xi)$, where $\xi \in (0,1)$. Finally $L_1, L_2$ are independent $Q_i$-L\'evy processes with values in
$L^2(0,1)$, with $Q_i$ positive trace class commuting operators, commuting also with $A_0$, $A_0$ being defined below.
The above equation can be rewritten in the form of an infinite dimensional
stochastic evolution equation on the space
\begin{equation}\label{eq:SettingBoMa}
H: = L^2 (0,1) \times L^2(0,1)
\end{equation}
by introducing the following operators:
\begin{align*}
&A_0:=\partial_x (c(x)\partial_x), \\
&D(A_0):=\left\{v \in H^2(0,1):\ v_x(0)=v_x(1)=0\right\}, \text{ acting in } L^2(0,1),
\intertext{and}
&A =
\begin{pmatrix}
A_0- p & -I \\
\gamma I & -\alpha I \\
\end{pmatrix},
\end{align*}
with domain $D(A):= D(A_0)\times L^2(0,1)$, and
$$
F \binom{v}{w} =
\begin{pmatrix} -v (v-1) (v-\xi)\\ 0 \end{pmatrix} \:,\: \text{with } D(F) := L^6 (0,1) \times L^2(0,1).
$$
Further, we introduce the Banach space $K:=L^{18}(0,1)\times L^2(0,1)$, endowed with the norm
$|\cdot|_K:=|\cdot|_{18}+|\cdot|_2$, and consider $u^0\in K$.
In this way, the equation \eqref{eq:bm08} can be rewritten as
\begin{align*}
\begin{cases}
d u(t)= [Au(t)+ F(u(t))]d t + \sqrt{Q}d L(t)\\
u(0)=u^0:=(v^0,w^0) \in K
\end{cases}
\:,
\end{align*}
with $A$ and $F$ satisfying Hypothesis \ref{hp:A+F} when
$\xi^2-\xi+1 \leq 3\min_{x\in[0,1]}p(x)$. In fact, the properties of the two operators $A$ and $F$ can be
determined starting from the problems considered in \cite{BoMa} and
\cite{Cerrai99}. In particular from \cite[Section 2.2]{Cerrai99} the
estimates on the nonlinear term $F$ and its derivatives can easily
be deduced. Moreover we claim that the stochastic convolution
\begin{align*}
L_A(t):= \int_0^t e^{(t-s)A} {\rm d} L(s),
\end{align*}
(where $e^{tA}, t\geq 0$ denotes the semigroup generated by $A$) is
well-defined and admits a continuous version with values into the
space $K$. This fact can be proved by an application of the results in
\cite{PeZa} and their proofs, taking into account that the domains of
fractional powers of $A$ are contained in $K$ (cf.\ Appendix A, in
particular Example A.5.2, in \cite{DPZRosso}) and, moreover, that we
are assuming ${\rm Tr}\,Q<\infty$.
Then by Theorem \ref{Th:espansione} we get an asymptotic expansion in powers of $\varepsilon>0$ of the solution, in terms of solutions of the corresponding
deterministic FitzHugh-Nagumo equation and the solution of a system of (explicit) linear (non homogeneous) stochastic equations.
The expansion holds for all orders in $\varepsilon>0$.
The remainders are estimated according to Theorem \ref{Th:espansione}.
These results should allow one to obtain rigorously results similar to
those obtained numerically, up to second order in $\varepsilon$, in
\cite{TuEsp,Tu92}, where the noise was of Gaussian type. Tuckwell,
in particular, has made heuristic expansions up to second order in
$\varepsilon$ for the mean and the variance of the solution process
$u=(u(t))_{t\geq 0}$ (see \cite{TuEsp,Tu92}), showing in particular
that one has enhancement (respectively, reduction) of the mean
depending on the stable point of the stationary deterministic equation around which the expansion is taken.
\end{example}
\section*{Acknowledgments}
\noindent This paper was greatly influenced by the research project
NEST at the University of Trento. We thank Stefano Bonaccorsi and
Luciano Tubaro and especially Luca di
Persio for many stimulating discussions.\\
The authors would like to gratefully acknowledge the great
hospitality of various institutions. In particular for the first
author CIRM and the Mathematics Department of the University of
Trento; for him and the third author King Fahd University of
Petroleum and Minerals at Dhahran; for the second and third author
IAM and HCM at the University of Bonn, Germany.
Strategyproof mechanisms guarantee that the agents never find it convenient to misreport their types, that is, truth-telling is a dominant strategy. Such mechanisms play a key role to cope with selfish behavior, and they received a lot of attention also when considering protocols for optimally allocating resources that necessarily involve selfish entities \cite{NisRon99}.
One of the critical issues with strategyproof mechanisms is that agents can still manipulate the mechanism, and improve their utilities, by \emph{bribing} one another:
\begin{quote}
An agent can offer money to another for misreporting her type and in this way the utility of both improves.
\end{quote}
The famous second-price auction provides a clear example of such an issue. If two agents are willing to pay $10$ and $9$ for an item, the one bidding $10$ wins and pays $9$. However, if before the auction starts the winner offers some money to the other agent for bidding a low value (say $1$), then both agents would be better off (now the winner pays only $1$ and the other agent gets some money).
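To make the manipulation concrete, the following is a minimal numerical sketch (our own illustration: the values $10$ and $9$ come from the example above, while the bribe of $4$ is an arbitrary choice) showing that in a second-price auction a bribe can make both agents strictly better off:

```python
# Minimal sketch of the bribe described above (illustrative values only).

def second_price(bids):
    """Return (winner index, price paid) of a sealed-bid second-price auction."""
    order = sorted(range(len(bids)), key=lambda i: -bids[i])
    return order[0], bids[order[1]]

values = [10, 9]

# Truthful play: agent 0 wins and pays the second price, 9.
w, price = second_price(values)
truthful_utility = [values[w] - price if i == w else 0 for i in range(2)]

# Bribed play: agent 0 pays agent 1 a bribe of 4 for bidding 1 instead of 9.
bribe = 4
w2, price2 = second_price([10, 1])
bribed_utility = [values[w2] - price2 - bribe, bribe]

# Both agents are strictly better off, so the auction is not bribeproof.
assert all(b > t for b, t in zip(bribed_utility, truthful_utility))
```

Any bribe strictly between $0$ and $8$ would work here, since the winner's payment drops from $9$ to $1$.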
The concept of \emph{bribeproof} mechanism \cite{Sch00} strengthens strategyproofness by requiring that bribing another agent is also not beneficial.
The appeal of this notion is that it does not consider unreasonably large coalitions.\footnote{The notion of \emph{coalitional strategyproofness} requires the mechanism to be immune to manipulations by any group of agents. As already observed in \cite{Sch00,Wak13}, this notion turns out to be too restrictive as it rules out all but a few unreasonable mechanisms. Moreover, large coalitions would require all members to coordinate their actions.} Although bribeproofness apparently adds only a minimal condition, it has a tremendous impact on what mechanisms can do in general:
\begin{itemize}
\item The class of strategyproof mechanisms is extremely rich and, among others, it includes VCG mechanisms which optimize the social welfare;
\item In contrast, the class of bribeproof mechanisms consists of only \emph{trivial} mechanisms which output a \emph{fixed outcome} \cite{Sch00,Miz03}.
\end{itemize}
That is to say, while strategyproofness by itself is not an obstacle to optimization, the only way to get bribeproofness is to ignore the agents types, which clashes with most optimization criteria. One example of such VCG mechanisms is for the path auction problem \cite{NisRon99} where we want to select the shortest path between two nodes of a given network, every edge is owned by an agent, and the cost of the edges are private. Selecting the shortest path means that we want the solution minimizing the sum of all agents' costs, that is, to optimize the social welfare. Clearly, any trivial mechanism which returns a fixed path has no guarantee to find the shortest path.
\subsection{Our contribution}
Because the impossibility results on bribeproof mechanisms hold for unrestricted or for ``sufficiently rich'' domains \cite{Sch00,Miz03}, we are interested in designing bribeproof mechanisms for \emph{restricted domains}. Specifically, we present a novel construction of bribeproof mechanisms for the following class of \emph{two-values} problems. Every feasible solution corresponds to some \emph{amount of work} allocated to each agent, and every agent has a \emph{private cost} per unit of work which is either $L$ (low) or $H$ (high).\footnote{Throughout this work we adopt the terminology used by \cite{ArcTar01} in the context of procurement auctions, though these domains have been investigated earlier in the context of allocating identical goods, as well as for certain restricted combinatorial auctions. All the results apply to these problems as well
(see Appendix~\ref{sec:restricted CA}).
}
Typically
the amount of work allocated to the agents cannot be arbitrary, but it is rather determined by the ``combinatorial structure'' of the problem under consideration. For instance, in the path auction problem \cite{NisRon99}, the mechanism must select a path in a graph, and each agent owns one edge of the graph (see Figure~\ref{fig:two-examples-path-auctions}). Selecting a path means allocating one unit of work to each agent in the path, and no work to all other agents.
\begin{figure}
\centering
\subfloat[][]{%
\begin{tikzpicture}[scale=.8]
\node[circle, fill=gray!50] (s) at (0,1) {$s$};
\node[circle, fill=gray!50] (t) at (4,1) {$t$};
\draw [-,thick] (s) .. controls (2,1.7) ..
(t) node[pos=.5,sloped,above] {$\theta_1$} ;
\draw [-,thick] (s) .. controls (2,0.3) ..
(t) node[pos=.5,sloped,below] {$\theta_2$} ;
\end{tikzpicture}
\label{fig:parallel-links}}
\hspace{2cm}
\subfloat[][]{%
\begin{tikzpicture}[scale=.8]
\stpath{\theta_1}{\theta_2}{\theta_3}{\theta_4}
\end{tikzpicture}
\label{fig:sp-diamond}
}
\caption{Two instances of the path auction problem.}
\label{fig:two-examples-path-auctions}
\end{figure}
\noindent
In a nutshell our results can be summarized as follows:
\begin{itemize}
\item An extremely simple construction yields bribeproof mechanisms if the underlying algorithm satisfies certain monotonicity conditions (Section~\ref{sec:class-mechanisms}).
\item One application of the above result, is a class of bribeproof mechanisms optimizing the social welfare for every \emph{binary} allocation problem, that is, whenever each agent is either selected or not selected (Section~\ref{sec:binary}).
\item These mechanisms actually \emph{characterize} the whole class of bribeproof mechanisms for certain problems, including the path auction one, and the boundary conditions for which such mechanisms exist (Section~\ref{sec:characterizations}).
\item The positive result is more general as it can be applied to non-binary problems and to other optimization criteria (Section~\ref{sec:non-binary}).
\end{itemize}
More in detail, our mechanisms simply provide all agents the \emph{same} amount of money $\mathcal{M}$ for each unit of work that they get allocated (Definition~\ref{def:linear-mechanism}). Such mechanisms are bribeproof if certain monotonicity conditions hold (Theorem~\ref{th:wokload:bribeproof}).
Roughly speaking, these conditions relate the ``influence'' that an agent has on her own allocation to the influence she has on the \emph{others'} allocation. In particular, by taking the special case $\mathcal{M}=\frac{L+H}{2}$ in our construction leads to the following natural sufficient condition (Corollary~\ref{cor:wokload:bribeproof:1/2}):
\begin{quote}
\emph{Bounded influence:} No agent can change the allocation of \emph{another} agent by more than the change caused to \emph{her own} allocation.
\end{quote}
For the class of \emph{binary allocations}, where the allocated work is $zero$ or $one$,
this condition is nothing but \emph{non-bossiness}: no agent can change the allocation of the others, without changing her own allocation (Theorem~\ref{th:sbribeproof:equivalence:sbribeproof}). The main positive result here is that every problem in which one wants to minimize the weighted \emph{sum} of all agents' costs admits an exact strongly bribeproof mechanism (Theorem~\ref{th:binary-utilitarian}).
Interestingly, our general construction provides both characterizations of bribeproof mechanisms as well as the boundary conditions for which such mechanisms exist in several problems:
\begin{itemize}
\item For the path auction problem, our mechanism with $\mathcal{M}=\frac{L+H}{2}$ is essentially the only possible, and no mechanism exist on slightly more general domains (with three values, or heterogeneous two values), nor collusion-proof mechanisms for coalitions of three or more agents.
\item For the $k$-items procurement auction, the mechanism with $\mathcal{M}=M$ is bribeproof on \emph{three values} domains $L$ (low), $M$(medium), $H$(high). This is the only mechanism for $k=1$ and no mechanism for \emph{four values} domains exist.
\end{itemize}
We then turn our attention to problems with a different objective function and non-binary allocations. Specifically, we consider minimizing the \emph{maximum} cost among the agents (note that this is different from welfare maximization, which would minimize the \emph{sum} of all agents' costs). In the scheduling terminology, we aim at minimizing the \emph{makespan} on related machines \cite{ArcTar01}. In the fractional version, when each job can be divided among the machines, we get an exact bribeproof mechanism (Theorem~\ref{th:fractional-makespan}) since the problem is equivalent to allocating a single job (in a fractional way) and the bounded influence condition holds. On the contrary, when jobs cannot be divided \cite{ArcTar01}, we show that our method cannot give exact or even approximate bribeproof mechanisms. The existence of other mechanisms for this and other problems is an interesting open question. More generally, it would be interesting to obtain approximate mechanisms when the domain does not allow for exact ones.
\subsection{Related work}\label{sec:previous}
Schummer~\cite{Sch00} introduced the notion of bribeproofness and proved that, on certain domains, the only bribeproof mechanisms are the \emph{trivial} mechanisms which return a \emph{fixed outcome}; \cite{Miz03} proved the same but under weaker assumptions. In simpler domains, bribeproof (or even collusion-proof) mechanisms can be obtained via \emph{take-it-or-leave-it} prices \cite{GolHar05,GolVen13}: these mechanisms fix a price for each agent, who then wins a copy of the item if bidding above this price, independently of what happens to the other agents.
Note that our mechanisms are different from these mechanisms since in our setting we cannot treat agents separately.
Though strategyproofness is much less stringent and quite well understood, restricting the domain is also very common, as unrestricted domains are often unrealistic and impose unnecessary limitations (see e.g. \cite{Rob79,LavMuaNis03,DobNis15}). In multi-dimensional
domains, minimizing the \emph{makespan} or \emph{min-max fairness} is not possible using strategyproof mechanisms \cite{NisRon99,Gam07,KouVid07,MuaSch07,LavSwa09,AshItaDobLav12}, while for one-parameter domains optimal solutions are possible \cite{ArcTar01,MuaSch07} also in polynomial time \cite{ChrKov08}. Our domains are at the intersection of one-parameter domains in \cite{ArcTar01} and the two-values domains in \cite{LavSwa09}, and they also appear in study of revenue of take-it-or-leave-it identical items auctions \cite{GolVen13}. One-parameter domains have been studied by \cite{Mye81} who characterized strategyproofness and obtained optimal-revenue mechanisms for selling a single item.
The strong limitations imposed by bribeproofness lead to the study of weaker or variants of this notion. \emph{Group strategyproofness} assumes that the members of the coalitions can coordinate their reports but cannot exchange compensations (see e.g. \cite{Mou99,PouVid12,Muk12,Jua13}). The restriction to coalitions of size two is called \emph{pairwise strategyproofness} \cite{serizawa2006pairwise}, and it corresponds to strong bribeproofness when compensations between agents are not allowed.
The class of \emph{deferred acceptance} mechanisms \cite{MilSeg14} satisfies (weakly) group strategyproofness\footnote{This condition relaxes group strategyproofness, by requiring that no coalition could deviate from truth-telling in a way that makes
\emph{all} of its members strictly better off.}, at the price of significantly worse social welfare even in rather simple settings \cite{DutGkaRou14}.
Mechanisms with \emph{verification} \cite{PenVen12} are based on the assumption that it is possible to partially verify the agents types after the solution is computed. \emph{Collusive dominant-strategy truthfulness} \cite{CheMic12} is based on the idea that the mechanism asks the agents to report also their coalitions, and it provides better performance for selling identical items. In the so-called \emph{single-peaked} domains, agents receive a variable amount of a divisible item, and they can bribe each other by transferring part of the item (with no money involved).
Interestingly enough, \cite{Wak13} characterizes bribeproofness in terms of a \emph{bounded impact} condition which is very similar to our bounded influence, despite the two settings being not equivalent.
Finally, while \cite{SchGEB00} shows that bribeproofness is closely related to Pareto efficiency together with strategyproofness,
\cite{ohseto2000strategy} proved that the latter two requirements cannot be achieved simultaneously in a setting involving finite ``small'' domains.
\subsection{Preliminaries}\label{sec:preliminaries}
There is a set $N=\{1,2,\ldots,n\}$ of $n \geq 2$ agents and a set $\mathcal{A}\subseteq \Re_+^n$ of feasible allocations, where each allocation $a\in \mathcal{A}$ is an $n$-dimensional vector $(a_1,\ldots,a_n)$ with $a_i$ being the amount of
work allocated to agent $i$. For each agent $i$, her cost for an allocation $a$ is
equal to
\[
a_i \cdot \theta_i,
\]
where $\theta_i\in \Re$ is some private number called the \emph{type} of this agent (her cost for a unit of work).
Every type $\theta_i$ belongs to a publicly-known set $\Theta_i$ which is the domain of agent
$i$, and the agent can misreport her type $\theta_i$ to any $\hat \theta_i\in \Theta_i$. The cross-product $\Theta := \Theta_1\times\cdots\times \Theta_n$ is the types domain representing the possible type vectors that can be reported by the agents.
A mechanism is a pair $(A,p)$ where $A:\Theta \rightarrow \mathcal{A}$ is an algorithm and $p:\Theta \rightarrow \Re^n$ is a suitable payment function. For any type vector $\hat \theta \in \Theta$ reported by the agents, each agent $i$ receives $p_i(\hat \theta)$ units of money and $A_i(\hat \theta)$ units of work.
A mechanism $(A,p)$ is \textbf{bribeproof} if for all $\theta \in \Theta$, all $i$ and $j$, and
all $\hat \theta_i \in \Theta_i$
\begin{align}
& p_i(\theta) - A_i(\theta)\cdot \theta_i &+ & & p_j(\theta) - A_j(\theta)\cdot \theta_j & &\geq \label{eq:bribeproof:breibee} \\
& p_i(\hat \theta) - A_i(\hat \theta)\cdot \theta_i &+ & & p_j(\hat \theta) - A_j(\hat \theta)\cdot \theta_j & \nonumber
\end{align}
where
$
\hat \theta = (\hat \theta_i,\theta_{-i}):=(\theta_1,\ldots,\theta_{i-1},\hat \theta_i, \theta_{i+1},\ldots,\theta_n)
$
denotes the vector obtained by replacing the $i^{th}$ entry of $\theta$ with $\hat \theta_i$.
Inequality \eqref{eq:bribeproof:breibee} says that no
agent $j$ can bribe another agent $i$ to misreport her type so that they both improve: since the sum of their utilities cannot increase, no transfer of money between the two agents can leave both of them better off.
By taking $i=j$ in the definition above, we obtain the
(weaker)
notion of \textbf{strategyproof} mechanism, that is,
$ p_i(\theta) - A_i(\theta)\cdot \theta_i \geq p_i(\hat \theta_i,\theta_{-i}) - A_i(\hat \theta_i,\theta_{-i})\cdot \theta_i,$
for all $\theta\in \Theta$, for all $i$, and for all $\hat \theta_i \in \Theta_i$.
Strong bribeproofness requires that no two agents can improve even if they \emph{jointly} misreport their types (see \cite[page 184]{Sch00}). Let $(\hat \theta_i,\hat \theta_j,\theta_{-ij})$ denote the vector obtained by replacing the $i^{th}$ and the $j^{th}$ entry of $\theta$ with $\hat \theta_i$ and $\hat \theta_j$, respectively. A mechanism $(A,p)$ is \textbf{strongly bribeproof} if inequality \eqref{eq:bribeproof:breibee} holds also for all
$
\hat \theta = (\hat \theta_i,\hat \theta_j,\theta_{-ij})
$,
with $\hat \theta_i\in \Theta_i$, $\hat \theta_j \in \Theta_j$, and $\theta\in \Theta$.
A domain $\Theta$ is a \textbf{two-values domain} if there exist two constants $L$ and $H$ with $L<H$ such that
$
\Theta_i = \{L,H\}
$
for all $i\in N$. More generally, for any ordered sequence of reals $w_1 < w_2 < \cdots < w_k$, we denote by $\Theta^{(w_1,w_2,\ldots,w_k)}$ the \textbf{$\mathbf{k}$-values} domain $\Theta$ such that
$
\Theta_i = \{w_1,w_2,\ldots,w_k\}
$
for all $i\in N$.
We say that a mechanism is (strongly) bribeproof over a $k$-values domain if
the corresponding condition \eqref{eq:bribeproof:breibee} holds for $\Theta$ being a $k$-values domain.
\begin{example}[path auction and perfectly divisible good]\label{exa:path-auction}\label{exa:perfectly-divisible}
In the \emph{path auction problem} instance in Figure~\ref{fig:two-examples-path-auctions}, the two feasible allocations are
$(1,1,0,0)$ for the ``upper path'' and $(0,0,1,1)$ for the ``lower path''. The problem of allocating a single \emph{perfectly divisible} good among the agents corresponds to the case in which the set of feasible allocations consists of all vectors $a=(a_1,\ldots,a_n)$ such that
$
a_i \geq 0 $ and $\sum_{i=1}^n a_i = 1$.
\end{example}
In a bribeproof mechanism the payments must depend only on the allocation:
\begin{fact}\label{fac:adj:pay}
We say that two type vectors $\theta'$ and $\theta''$ are $A$-equivalent
if they differ in exactly one agent's type
and algorithm $A$ returns the same allocation in both cases. That is,
$
\theta''=(\theta''_i, \theta'_{-i})$ and $A(\theta'')=A(\theta').
$
In a bribeproof mechanism $(A,p)$ the payments for two $A$-equivalent type vectors must be the same.
\end{fact}
\begin{proof}
Suppose by way of contradiction that $p(\theta')\neq p(\theta'')$, and let $i$ be the agent whose type differs in $\theta'$ and $\theta''$. Since $A(\theta')=A(\theta'')$, strategyproofness applied at $\theta'$ and at $\theta''$ forces $p_i(\theta')=p_i(\theta'')$. Hence $p_j(\theta')\neq p_j(\theta'')$ for some $j\neq i$, say $p_j(\theta')>p_j(\theta'')$. Then, when the true types are $\theta''$, agent $j$ can bribe agent $i$ to report $\theta'_i$: the allocation is unchanged while their joint payment strictly increases, which violates bribeproofness \eqref{eq:bribeproof:breibee}.
\end{proof}
\section{A class of bribeproof mechanisms}\label{sec:class-mechanisms}
The idea to obtain bribeproof mechanisms is to pay each agent
the same fixed amount for each unit of work she gets allocated.
\begin{definition}[linear mechanism]\label{def:linear-mechanism}
A mechanism $(A,p)$ is a \linearmechanism{\lambda} if every agent $i$ receives a fixed payment $f_i$ plus $\lambda L + (1-\lambda)H$
units of money for each unit of allocated work, where $\lambda \in [0,1]$. That is,
\[
p_i(\theta) = A_i(\theta) \cdot q^{(\lambda)} + f_i\ \ \ \ \mbox{ where } \ \ \ \ q^{(\lambda)} =\lambda L + (1-\lambda)H
\]
for all $i$ and for all $\theta \in \Theta$.
\end{definition}
\begin{remark}
Note that, in Definition~\ref{def:linear-mechanism}, we limit ourselves to $q \in [L,H]$ because otherwise the mechanism would not even be strategyproof in general. Moreover, the constants $f_i$ can be used to rescale the payments without affecting bribeproofness. For instance, one can set each $f_i$ so that truthfully reporting agents are guaranteed a nonnegative utility, i.e., the mechanism satisfies voluntary participation or individual rationality.
\end{remark}
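To make the payment rule concrete, it can be sketched in a few lines of Python; the function names and the numerical example (the diamond path auction with $L=1$, $H=2$, $\lambda=1/2$, and $f_i=0$) are ours, chosen purely for illustration:

```python
from fractions import Fraction

def linear_payments(alloc, lam, L, H, f=None):
    # lambda-linear mechanism: p_i = A_i * q + f_i with q = lam*L + (1-lam)*H.
    q = lam * L + (1 - lam) * H
    f = f or [0] * len(alloc)
    return [a * q + fi for a, fi in zip(alloc, f)]

def utilities(alloc, pay, true_types):
    # Quasi-linear utility: u_i = p_i - A_i * theta_i.
    return [p - a * t for p, a, t in zip(pay, alloc, true_types)]

# Diamond path auction: upper path {1,2} selected, lambda = 1/2.
L_, H_ = Fraction(1), Fraction(2)
alloc = [1, 1, 0, 0]
pay = linear_payments(alloc, Fraction(1, 2), L_, H_)
print(pay)                                      # each selected agent is paid q = 3/2
print(utilities(alloc, pay, [L_, L_, H_, H_]))  # each selected type-L agent nets 1/2
```

Exact rational arithmetic (`Fraction`) is used throughout these sketches so that the bribery inequalities can be checked without floating-point tolerance issues.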
In the following, for a fixed agent $i$ and a fixed $\theta_{-i}$ (both clear from context), we define
\[
\influence{k}:= A_k(L,\theta_{-i}) - A_k(H,\theta_{-i}).
\]
The monotonicity condition for
strategyproofness \cite{ArcTar01,Mye81} requires that the allocation of each agent is weakly decreasing
in her reported cost, that is, for all $i\in N$ and for all $\theta\in \Theta$
\begin{align}
\label{eq:mon}
\influence{i}&\geq 0 & \mbox{(monotonicity)}&.
\end{align}
We next show that a stronger condition suffices for bribeproofness.
\begin{theorem}\label{th:wokload:bribeproof}
The \linearmechanism{\lambda} is bribeproof for a two-values domain if and only if algorithm $A$ satisfies the following conditions: for all $\theta\in \Theta^{(L,H)}$ and for all $i\in N$ condition \eqref{eq:mon} holds and, for all $\ell\in N$ with $\theta_\ell=L$ and for all $h\in N$ with $\theta_h=H$,
\begin{align}
(1-\lambda)&\cdot\influence{i} & \geq & &(\lambda - 1)&\cdot\influence{\ell}, \label{eq:pmon:LL}\\
(1-\lambda)&\cdot\influence{i} & \geq & &\lambda\phantom{-1}&\cdot\influence{h}, \label{eq:pmon:LH}\\
\lambda\phantom{-1}&\cdot\influence{i} & \geq & &(1- \lambda)&\cdot\influence{\ell}, \label{eq:pmon:HL}\\
\lambda\phantom{-1}&\cdot\influence{i} & \geq & &-\lambda\phantom{-1} &\cdot\influence{h}. \label{eq:pmon:HH}
\end{align}
\end{theorem}
\begin{proof}
It is easy to see that condition \eqref{eq:bribeproof:breibee} for $i=j$ is equivalent to the monotonicity condition \eqref{eq:mon}.\footnote{This case corresponds to the notion of strategyproofness and the equivalence to monotonicity is proved in \cite{Mye81,ArcTar01} for the more general class of one-parameter domains.} We thus consider the case $i\neq j$ and show that \eqref{eq:bribeproof:breibee} is equivalent to the conditions \eqref{eq:pmon:LL}-\eqref{eq:pmon:HH} above.
By definition of \linearmechanism{\lambda} the utility of a generic agent $k$ is
\begin{align}
p_k(\hat \theta) - A_k(\hat \theta) \cdot \theta_k &= f_k + A_k(\hat \theta) \cdot (\lambda L + (1-\lambda)H - \theta_k) \nonumber
\intertext{which can be rewritten as follows using $\theta_k\in \{L,H\}$:}
p_k(\hat \theta) - A_k(\hat \theta) \cdot \theta_k &= f_k + A_k(\hat \theta)\cdot (H-L) \cdot
\begin{cases}
1-\lambda & \mbox{if $\theta_k=L$}; \\
\phantom{1} -\lambda & \mbox{if $\theta_k=H$}.
\end{cases}
\label{eq:linear-mechanism:utility}
\end{align}
We rewrite \eqref{eq:bribeproof:breibee} according to the above utility function for each of the four possible cases of $\theta_i$ and $\theta_j$:
\begin{description}
\item[($\theta_i=L$ and $\theta_j=L$)] This case corresponds to
\[
(1-\lambda)A_i(L,\theta_{-i}) + (1-\lambda)A_j(L,\theta_{-i}) \geq (1-\lambda)A_i(H,\theta_{-i}) + (1-\lambda)A_j(H,\theta_{-i})
\]
which is equivalent to \eqref{eq:pmon:LL}.
\item[($\theta_i=L$ and $ \theta_j=H$)] This case corresponds to
\[
(1-\lambda)A_i(L,\theta_{-i}) + (-\lambda)A_j(L,\theta_{-i}) \geq (1-\lambda)A_i(H,\theta_{-i}) + (-\lambda)A_j(H,\theta_{-i})
\]
which is equivalent to \eqref{eq:pmon:LH}.
\item[($\theta_i=H$ and $ \theta_j=L$)] This case corresponds to
\[
(-\lambda)A_i(H,\theta_{-i}) + (1-\lambda)A_j(H,\theta_{-i}) \geq (-\lambda)A_i(L,\theta_{-i}) + (1-\lambda)A_j(L,\theta_{-i})
\]
which is equivalent to \eqref{eq:pmon:HL}.
\item[($\theta_i=H$ and $ \theta_j=H$)] This case corresponds to
\[
(-\lambda)A_i(H,\theta_{-i}) + (-\lambda)A_j(H,\theta_{-i}) \geq (-\lambda)A_i(L,\theta_{-i}) + (-\lambda)A_j(L,\theta_{-i})
\]
which is equivalent to \eqref{eq:pmon:HH}.
\end{description}
This completes the proof.
\end{proof}
A simple corollary
of the previous theorem is that the following natural condition implies bribeproofness when setting $\lambda=1/2$.
\begin{definition}[bounded influence]
An algorithm $A$ satisfies bounded influence if, for all $\theta\in \Theta$ and for all $i,j\in N$, the following condition holds:
\begin{equation}
\label{eq:bounded-influence}
\influence{i} \geq |\influence{j}|.
\end{equation}
\end{definition}
\begin{corollary}\label{cor:wokload:bribeproof:1/2}
The \linearmechanism{\left(\frac{1}{2}\right)} is bribeproof for two-values domains if and only if its algorithm $A$ satisfies bounded influence.
\end{corollary}
\section{Binary allocations}\label{sec:binary}
In this section we apply our results to the case of \emph{binary allocations}, that is, the problems in which each agent is allocated an amount equal to either $0$ or $1$ (the path auction and the $k$-item procurement auction are two examples).
We first observe that bounded influence boils down to the following natural condition called \emph{non-bossiness} (no agent can change the allocation of another agent without changing her own allocation), and for our construction this condition is equivalent to (strong) bribeproofness.
\begin{definition}[non-bossiness]\label{def:stable}
An algorithm $A$ satisfies non-bossiness if, for all $i$ and for all $\theta$, the following implication holds:
if $\influence{i}=0$ then $\influence{j}=0$ for all $j$.
\end{definition}
\begin{theorem}\label{th:sbribeproof:equivalence}
For binary allocations and two-values domains, the following statements are equivalent:
\begin{enumerate}
\item The \linearmechanism{\left(\frac{1}{2}\right)} $(A,p)$ is bribeproof. \label{th:sbribeproof:equivalence:bribeproof}
\item Algorithm $A$ satisfies monotonicity and non-bossiness. \label{th:sbribeproof:equivalence:non-bossiness}
\item The \linearmechanism{\left(\frac{1}{2}\right)} $(A,p)$ is strongly bribeproof. \label{th:sbribeproof:equivalence:sbribeproof}
\end{enumerate}
\end{theorem}
Note that, in general, there exist bribeproof mechanisms whose algorithm does not satisfy non-bossiness.
\begin{example}[non-bossiness is not necessary]\label{exe:non-boossy-not-necessary} For two agents, consider the algorithm which selects the agent with the strictly smaller type, if there is one, and otherwise selects no agent:\footnote{This example applies to the problem of allocating a single item considered in \cite{MisQua13} where not allocating the item is also an option.}
\begin{align*}
A(L,H)&= (1,0), & A(H,L)&=(0,1) & \mbox{and}& & A(L,L)&=(0,0)=A(H,H).
\end{align*}
Note that this algorithm does not satisfy non-bossiness. We next show that the \linearmechanism{1} is strongly bribeproof. Its payments are
\begin{align*}
p(L,H)&= (L,0), & p(H,L)&=(0,L) & \mbox{and}& & p(L,L)&=(0,0)=p(H,H),
\end{align*}
and it is easy to check that a truthful report yields utility $0$ to both agents, while any misreport can only give the same or a negative utility.
\end{example}
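The claim in this example can also be verified mechanically. The short brute-force check below (our code, with $L=1$, $H=2$) confirms that no joint misreport by the pair of agents increases their total utility; since any side payment cancels in the sum, this is exactly strong bribeproofness:

```python
from itertools import product

L, H = 1, 2
# Allocation and (lambda = 1) payments from the example above.
ALLOC = {(L, H): (1, 0), (H, L): (0, 1), (L, L): (0, 0), (H, H): (0, 0)}
PAY   = {(L, H): (L, 0), (H, L): (0, L), (L, L): (0, 0), (H, H): (0, 0)}

def total_util(report, true):
    # Sum of the two agents' quasi-linear utilities u_i = p_i - A_i * theta_i.
    a, p = ALLOC[report], PAY[report]
    return sum(pi - ai * ti for pi, ai, ti in zip(p, a, true))

# Strong bribeproofness: no joint misreport may raise the pair's total
# utility (any bribe transferred between the two agents cancels in the sum).
ok = all(
    total_util(true, true) >= total_util(report, true)
    for true in product((L, H), repeat=2)
    for report in product((L, H), repeat=2)
)
print(ok)  # True
```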
\subsection{Utilitarian (min-sum) problems}
The main application of Theorem~\ref{th:sbribeproof:equivalence} is a general construction of exact mechanisms for utilitarian problems (see e.g. \cite{NisRon99}), that is, for minimizing the (weighted) \emph{sum} of all agents' costs.
\begin{definition}[weighted social cost minimization]
An algorithm $A$ minimizes the weighted social cost if there exist nonnegative constants $\{\alpha_i\}_{i\in N}$ and arbitrary constants $\{\beta_a\}_{a \in \mathcal A}$ such that, for all $\theta\in \Theta$, it holds that
\[
A(\theta) \in \operatornamewithlimits{arg\ min}_{a\in \mathcal{A}} \{\operatorname{sum}(a,\theta)\},
\]
where $\operatorname{sum}()$ is defined as
$\operatorname{sum}(a,\theta) := \left(\sum_{i\in N} \alpha_i a_i \theta_i\right) + \beta_a.$
\end{definition}
Obviously, minimizing the sum of all agents' costs corresponds to the case $\alpha_i=1$ and $\beta_a=0$. Welfare maximization problems correspond to the case in which agents have valuations instead of costs.
\begin{definition}[consistent ties]
An algorithm $A$ minimizes the weighted social cost breaking ties consistently if there exists
a total order $\preceq$ over the set $\mathcal{A}$ of feasible allocations such that, for all $\theta\in \Theta$ and for all $a'\in \mathcal{A}$, the following implication holds:
if $\operatorname{sum}(A(\theta),\theta)=\operatorname{sum}(a',\theta)$, then $A(\theta) \preceq a'.$
\end{definition}
\begin{theorem}\label{th:binary-utilitarian}
For binary allocation problems over two-values domains, if algorithm $A$ minimizes the weighted social cost breaking ties consistently, then the corresponding \linearmechanism{\left(\frac{1}{2}\right)} is strongly bribeproof.
\end{theorem}
\begin{proof}
We show that every algorithm $A$ which minimizes the weighted social cost breaking ties consistently satisfies non-bossiness and monotonicity \eqref{eq:mon}. The theorem then follows from Theorem~\ref{th:sbribeproof:equivalence}.
It is convenient to rewrite the weighted social cost into two parts, the contribution of
a fixed agent $i$ and the rest:
\begin{align*}
\alpha_i a_i \theta_i &+ \operatorname{sum}_{-i}(a, \theta_{-i}) & \mbox{ where }& & \operatorname{sum}_{-i}(a, \theta_{-i})&:=
(\sum_{j\in N \setminus \{i\}} \alpha_j a_j \theta_j)+ \beta_a.
\end{align*}
For ease of notation, also let $\theta^L:= (L,\theta_{-i})$ and $\theta^H:=(H,\theta_{-i})$. Observe that since $A$ minimizes the weighted social cost we have the following:
\begin{align}
\operatorname{sum}(A(\theta^L),\theta^L)& &=& &L \alpha_i A_i(\theta^L) + \operatorname{sum}_{-i}(A(\theta^L),\theta_{-i}) & &\leq & \label{eq:utilitarian:SC-L} \\
\operatorname{sum}(A(\theta^H),\theta^L)& &=& &L \alpha_i A_i(\theta^H) + \operatorname{sum}_{-i}(A(\theta^H),\theta_{-i}), & & \nonumber \text{ and }\\
\operatorname{sum}(A(\theta^H), \theta^H)& &=& &H \alpha_i A_i(\theta^H) + \operatorname{sum}_{-i}(A(\theta^H),\theta_{-i}) & &\leq \label{eq:utilitarian:SC-H} \\ \nonumber
\operatorname{sum}(A(\theta^L), \theta^H)& &=& &H \alpha_i A_i(\theta^L) + \operatorname{sum}_{-i}(A(\theta^L),\theta_{-i}) .& &
\end{align}
First, we show the following implication:
\begin{align}\label{eq:utilitarian:SC-non-bossiness}
\alpha_iA_i(\theta^H)&=\alpha_iA_i(\theta^L) & \Rightarrow& & A(\theta^L)&=A(\theta^H).
\end{align}
The left-hand side implies that both inequalities \eqref{eq:utilitarian:SC-L} and \eqref{eq:utilitarian:SC-H} hold with ``$=$''. Since ties are broken consistently, we have $A(\theta^L) \preceq A(\theta^H)$ by \eqref{eq:utilitarian:SC-L} and $A(\theta^H) \preceq A(\theta^L)$ by \eqref{eq:utilitarian:SC-H}, thus implying $A(\theta^L)=A(\theta^H)$.
Now observe that \eqref{eq:utilitarian:SC-non-bossiness} implies that $A$ satisfies non-bossiness, and thus it only remains to prove that the monotonicity condition holds. By summing inequalities \eqref{eq:utilitarian:SC-L} and \eqref{eq:utilitarian:SC-H} we obtain $\alpha_i(H-L) A_i(\theta^H) \leq \alpha_i(H-L) A_i(\theta^L)$. From this the inequality
\[
A_i(\theta^H) \leq A_i(\theta^L)
\]
follows immediately for the case $\alpha_i>0$, while for $\alpha_i=0$ it follows by \eqref{eq:utilitarian:SC-non-bossiness}. By definition, the inequality above is equivalent to the monotonicity condition \eqref{eq:mon}.
\end{proof}
The mechanism for the path auction problem consists in paying each agent in the chosen path an amount equal to $\mathcal{M}=\frac{L+H}{2}$, where the algorithm breaks ties between paths in a fixed order. Similar mechanisms can be obtained for other utilitarian problems like minimum spanning tree \cite{NisRon99} or for the $k$-item procurement auction (see Section~\ref{sec:characterizations} for the latter).
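As a sanity check of Theorem~\ref{th:binary-utilitarian} on this instance, the brute force below (our code, with $L=1$, $H=2$, the four-edge diamond network with upper path $\{1,2\}$ and lower path $\{3,4\}$, and ties broken towards the upper path) verifies that under the \linearmechanism{\left(\frac{1}{2}\right)} no agent can profitably bribe another:

```python
from fractions import Fraction
from itertools import product

L, H = Fraction(1), Fraction(2)
PATHS = [(0, 1), (2, 3)]          # upper and lower path (0-indexed agents)
q = (L + H) / 2                   # each selected agent is paid (L + H) / 2

def alloc(types):
    # Min-sum path; ties broken consistently in favor of the upper path.
    best = min(PATHS, key=lambda p: (sum(types[i] for i in p), PATHS.index(p)))
    return [1 if i in best else 0 for i in range(4)]

def utils(report, true):
    a = alloc(report)
    return [a[k] * (q - true[k]) for k in range(4)]

violations = []
for true in product((L, H), repeat=4):
    u0 = utils(true, true)
    for i in range(4):
        rep = list(true)
        rep[i] = H if true[i] == L else L
        u1 = utils(rep, true)
        for j in range(4):        # j == i is plain strategyproofness
            if u1[i] + u1[j] > u0[i] + u0[j]:
                violations.append((true, i, j))
print(len(violations))  # 0: no profitable bribe exists on the two-values domain
```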
\section{Characterizations for two problems}\label{sec:characterizations}
In this section we show that the \linearmechanism{\left(\frac{1}{2}\right)} is the only bribeproof mechanism for the path auction on general networks (this result applies also to combinatorial auctions with known single minded bidders -- see Appendix~\ref{sec:restricted CA} for details). We then obtain analogous characterizations for the $k$-item procurement auction in terms of our \linearmechanism{\lambda}s.
\subsection{Path auction mechanisms}
As for the path auction problem, we actually prove a stronger result saying that the \linearmechanism{\left(\frac{1}{2}\right)} is the only bribeproof mechanism for
the simple network in Figure~\ref{fig:sp-diamond} on the following generalization of two-values domains:
\begin{definition}
The path auction with $\epsilon$-perturbed domain ($\epsilon \geq 0$) is the path auction problem restricted to the network in Figure~\ref{fig:sp-diamond} in which the agents' domains are as follows:
$\Theta_1 =\Theta_2=\{L-\epsilon,H+\epsilon\}$ and $\Theta_3 =\Theta_4=\{L,H\}.$
\end{definition}
Clearly the two-values domain corresponds to setting $\epsilon=0$.
\begin{theorem}\label{th:sp:generalized-domain}
A mechanism which is bribeproof for the path auction with $\epsilon$-perturbed domain must be
a \linearmechanism{\left(\frac{1}{2}\right)}.
\end{theorem}
\begin{proof}[Main Ideas]
We first show that, no matter how the mechanism breaks ties, the payments must depend only on which path is selected (using Fact~\ref{fac:adj:pay}).
This means that the payments are of the form
\[
p_i(\theta) = f_i +
\begin{cases}
q_i & \mbox{if $i$ is selected for types $\theta$,} \\
0 & \mbox{otherwise.}
\end{cases}
\]
In order to conclude that the mechanism must be a \linearmechanism{\left(\frac{1}{2}\right)} it is enough to prove that $q_i=\frac{L+H}{2}$ for all $i$. This is the technically involved part, because we have to consider the possible tie breaking rules. At an intermediate step, we show that $q_1+q_2=L+H=q_3+q_4$, for otherwise there exists a coalition which violates bribeproofness.
\end{proof}
By taking $\epsilon=0$ we obtain a characterization for this problem:
\begin{corollary}\label{cor:sp:characterization}
The \linearmechanism{\left(\frac{1}{2}\right)} is the only bribeproof mechanism for the path auction on general networks.
\end{corollary}
Since in these instances of the path auction problem \linearmechanism{\left(\frac{1}{2}\right)}s are \emph{not} bribeproof on three-values domains, we obtain the following result.
\begin{theorem}\label{th:sp:no-three-vals}
There is no bribeproof mechanism for the path auction problem on general networks and for
three-values domains.
\end{theorem}
Theorem~\ref{th:sp:generalized-domain} implies that we cannot extend the positive result to coalitions of larger size, nor to \emph{heterogeneous} two-values domains in which $\theta_i\in\{L_i,H_i\}$.
\begin{corollary}\label{cor:path-auction:no-collusion-proof}
There is no collusion-proof mechanism for the path auction problem
on general networks and two-values domains. The same remains true even if we restrict to coalitions of size three (in which two agents bribe another for misreporting her type).
\end{corollary}
\begin{corollary}\label{cor:sp:no-heterogeneous}
There is no bribeproof mechanism for the path auction problem on general networks and certain heterogeneous two-values domains.
\end{corollary}
\subsection{$k$-Item auction mechanisms}
We remark that on a simple network consisting of $n$ parallel edges the path auction problem is the same as the $1$-item procurement auction.
\begin{figure}
\block{\textbf{\linearmechanism{\lambda_M} (normalized to $f_i=0$):}
Select the $k$ agents with smallest types, breaking ties in favor of agents with smaller index; Pay each of the selected agents an amount $M$, and non-selected agents receive no money.
}
\caption{A bribeproof mechanism for $k$-item procurement auction over three-values domains $\Theta^{(L,M,H)}$.}
\label{fig:M-compensation}
\end{figure}
For the $k$-item procurement auction over three-values domains, we consider the mechanism which provides a payment equal to $M$ to the selected agents (see Figure~\ref{fig:M-compensation}). Note that this is a \linearmechanism{\lambda} with $$\lambda=\lambda_M:= \frac{H-M}{H-L}$$
which is the value such that $$M=L\lambda_M+(1-\lambda_M)H.$$
We show that the \linearmechanism{\lambda_M} in Figure~\ref{fig:M-compensation} is bribeproof over
three-values domains (Theorem~\ref{th:one-job:bribeproof}) and, for the case $k=1$, only \linearmechanism{\lambda_M}s can be bribeproof (Theorem~\ref{th:parallel:three-vals:char}).
\begin{theorem}\label{th:one-job:bribeproof}
The \linearmechanism{\lambda_M} is bribeproof for the $k$-item procurement auction in the case of three-values domains.
\end{theorem}
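The statement can be checked exhaustively for small instances. The sketch below (our code; $L=1$, $M=2$, $H=4$ and three agents are our choices) brute-forces all single misreports and all bribing pairs for $k\in\{1,2\}$:

```python
from fractions import Fraction
from itertools import product

L, M, H = Fraction(1), Fraction(2), Fraction(4)
N = 3

def bribe_safe(K):
    # K-item procurement auction: select the K smallest reported types
    # (ties to the smaller index) and pay M to each selected agent.
    def alloc(types):
        order = sorted(range(N), key=lambda i: (types[i], i))
        selected = set(order[:K])
        return [1 if i in selected else 0 for i in range(N)]

    def pair_sum(report, true, i, j):
        a = alloc(report)
        return a[i] * (M - true[i]) + a[j] * (M - true[j])

    # No misreport by i, paired with any j (j == i is strategyproofness),
    # may raise u_i + u_j.
    return all(
        pair_sum(t, t, i, j) >= pair_sum(rep, t, i, j)
        for t in product((L, M, H), repeat=N)
        for i in range(N) for r in (L, M, H)
        for rep in [tuple(r if k == i else t[k] for k in range(N))]
        for j in range(N)
    )

print(bribe_safe(1), bribe_safe(2))  # True True
```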
Also in this problem our construction yields the only bribeproof mechanism, and the results cannot be extended to richer domains.
\begin{theorem}\label{th:parallel:three-vals:char}
The \linearmechanism{\lambda_M} is the only bribeproof mechanism for the $1$-item procurement auction with three-values domains and two agents.
\end{theorem}
This implies the following impossibility result.
\begin{corollary}\label{cor:parallel:four-vals:char}
There is no bribeproof mechanism for the $1$-item procurement auction with two agents and four-values domains.
\end{corollary}
\subsection{Back to path auction: Graph structure and three-values domains}
One way to restate the result on the $1$-item auction is that the path auction admits a bribeproof mechanism on three-values domains when restricted to the \emph{parallel links} graph in Figure~\ref{fig:parallel-links}. The proof of Theorem~\ref{th:sp:no-three-vals} says that on the graph in Figure~\ref{fig:sp-diamond} there is no bribeproof mechanism for three-values domains. We next strengthen the result to the simple ``triangle'' graph in Figure~\ref{fig:sp:no-three-vals-sp:triangle}. Unlike Theorem~\ref{th:sp:no-three-vals}, this applies only to certain combinations of the values defining the domain. The result says that parallel links is the ``most general''
graph for which bribeproof mechanisms on any three-values domain exist.
\begin{figure}
\begin{center}
\begin{tikzpicture}
\stpathtriangle{\theta_1}{\theta_2}{\theta_3}
\end{tikzpicture}
\end{center}
\caption{A simple network for which there is no bribeproof mechanism for the path auction problem on certain three values domains.}
\label{fig:sp:no-three-vals-sp:triangle}
\end{figure}
\begin{theorem}\label{th:sp:no-three-vals:triangle}
There is no bribeproof mechanism for the path auction problem on the network in Figure~\ref{fig:sp:no-three-vals-sp:triangle} and for
some three-values domains. In particular, this holds whenever $\Theta^{(L,M,H)}$ satisfies
\begin{align*}
2L &< M, & L+H &< 2M, & \mbox{and}& & L+M &<
H.
\end{align*}
\end{theorem}
\section{Min-max fairness and non-binary problems}\label{sec:non-binary}
In this section we consider problems with \emph{min-max} fairness optimization criteria, and \emph{non-binary} allocations. Thus, the algorithm
$A$ should satisfy
\begin{equation}
A(\theta) \in \operatornamewithlimits{arg\ min}_{a \in \mathcal{A}} \max_{i\in N} \{a_i \cdot \theta_i\}. \label{eq:min-max}
\end{equation}
In particular we consider the problem of allocating a perfectly divisible item (Example~\ref{exa:perfectly-divisible}) according to the above min-max fairness criteria \eqref{eq:min-max}. In such an allocation every agent gets a positive amount and all costs $a_i \cdot \theta_i$ are identical.
\begin{theorem}[min-max fairness]\label{th:fractional-makespan}
There is a strongly bribeproof\ \linearmechanism{\left(\frac{1}{2}\right)} satisfying min-max fairness for allocating a perfectly divisible item.
\end{theorem}
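On two-values domains the min-max fair allocation equalizes the costs $a_i\theta_i$, so $a_i$ is proportional to $1/\theta_i$. The sketch below (our code, with $n=3$, $L=1$, $H=2$) verifies strong bribeproofness of the corresponding \linearmechanism{\left(\frac{1}{2}\right)} by checking every joint misreport of every pair:

```python
from fractions import Fraction
from itertools import combinations, product

L, H = Fraction(1), Fraction(2)
N = 3
q = (L + H) / 2

def alloc(types):
    # Min-max fair split of one divisible unit: all costs a_i * theta_i
    # are equal, so a_i is proportional to 1 / theta_i.
    inv = [1 / t for t in types]
    total = sum(inv)
    return [x / total for x in inv]

def pair_util(report, true, i, j):
    a = alloc(report)
    return a[i] * (q - true[i]) + a[j] * (q - true[j])

# Strong bribeproofness: no pair {i, j} gains from any joint misreport.
ok = all(
    pair_util(t, t, i, j) >= pair_util(rep, t, i, j)
    for t in product((L, H), repeat=N)
    for i, j in combinations(range(N), 2)
    for ri in (L, H) for rj in (L, H)
    for rep in [tuple(ri if k == i else rj if k == j else t[k]
                      for k in range(N))]
)
print(ok)  # True: joint pair misreports never pay off
```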
We next consider the problem of scheduling selfish related machines \cite{ArcTar01}. In this problem, we are given several indivisible items (jobs), each with some size. Each item must be assigned to some agent (machine) and the goal is to minimize the maximum cost (the makespan). Note that the allocation of each machine is the sum of the sizes of the jobs allocated to it.
\begin{example}\label{exa:scheduling}
Consider three machines and three jobs of size $10$, $6$, and $6$, and let $L=1$ and $H=2+\epsilon$ for some small $\epsilon$ to be specified below. The allocation of the jobs minimizing the \emph{makespan} for types $\theta=(L,L,H)$ and $\hat{\theta}=(L,H,H)$, for any $0 < \epsilon <2/3$, is as follows: $A(L,L,H) = (6+6,10,0)$ and $A(L,H,H)=(10,6,6)$.
This is unique up to a permutation of
the allocation of machines with the same type.
\end{example}
\newcommand{\frac{4}{\sqrt{3}}-2}{\frac{4}{\sqrt{3}}-2}
\newcommand{\frac{2}{\sqrt{3}}}{\frac{2}{\sqrt{3}}}
Using this example we can show that our construction cannot lead to bribeproof mechanisms for minimizing the makespan in the
scheduling problem above, or even to approximate the makespan within some small factor $\alpha>1$, i.e., returning an allocation whose makespan is at most $\alpha$ times the optimum makespan.
\begin{theorem}[selfish related machines]\label{th:makespan:no-pmon}
No bribeproof \linearmechanism{\lambda} for the makespan minimization on three agents with two-values domains can approximate the makespan within a factor smaller than $\frac{2}{\sqrt{3}} \approx 1.1547$.
\end{theorem}
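The obstruction can be reproduced numerically. The sketch below (our code) implements exact makespan minimization on the instance of Example~\ref{exa:scheduling} with $\epsilon = 1/2$, breaking ties towards the lexicographically smallest load vector (one fixed consistent tie-breaking of our choice; the proof of the theorem handles all tie-breakings), and finds a bribery violation for every $\lambda$ on a grid over $[0,1]$:

```python
from fractions import Fraction
from itertools import product

JOBS = [10, 6, 6]
N = 3
L, H = Fraction(1), Fraction(5, 2)   # H = 2 + eps with eps = 1/2 in (0, 2/3)

def alloc(types):
    # Exact makespan minimization; ties go to the lexicographically
    # smallest load vector (a fixed consistent tie-breaking of our choice).
    best = None
    for assign in product(range(N), repeat=len(JOBS)):
        loads = [0] * N
        for size, machine in zip(JOBS, assign):
            loads[machine] += size
        key = (max(l * t for l, t in zip(loads, types)), tuple(loads))
        if best is None or key < best:
            best = key
    return best[1]

def violated(lam):
    # Is there a pair (i, j) and a misreport of i that raises u_i + u_j?
    q = lam * L + (1 - lam) * H
    for t in product((L, H), repeat=N):
        a0 = alloc(t)
        for i in range(N):
            rep = tuple((H if t[k] == L else L) if k == i else t[k]
                        for k in range(N))
            a1 = alloc(rep)
            for j in range(N):
                u0 = sum(a0[k] * (q - t[k]) for k in (i, j))
                u1 = sum(a1[k] * (q - t[k]) for k in (i, j))
                if u1 > u0:
                    return True
    return False

# Every lambda on a grid over [0, 1] admits a bribing pair.
print(all(violated(Fraction(n, 4)) for n in range(5)))  # True
```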
We note that the same impossibility result applies to randomized mechanisms using algorithms that
pick an optimal allocation with some probability distribution:
\begin{remark}[randomized allocations]
It is in principle possible to consider randomized allocations in which $a_i$ is a random variable and the allocation is
given by a probability distribution over those minimizing the makespan. In particular, we could define an algorithm which permutes the jobs allocated to machines of the same type. For the instance used to prove Theorem~\ref{th:makespan:no-pmon}, when the types are $(L,L,H)$ there are two optimal allocations,
\begin{align*}
(12,10,0)& & \mbox{and}& & (10,12,0)
\end{align*}
and by picking one of them with uniform probability the resulting algorithm $A$
returns
\begin{align*}
A(L,L,H) &= (11,11,0) & \mbox{and}& & A(L,H,H) &= (10,6,6).
\end{align*}
The same argument used to prove Theorem~\ref{th:makespan:no-pmon} applies also in this case and thus no
\linearmechanism{\lambda} can be bribeproof in expectation.
\end{remark}
\begin{remark}
We stress that the previous impossibility results apply only to our \emph{\linearmechanism{\lambda}s}, for every choice of $\lambda$. Whether there exist bribeproof mechanisms for makespan minimization at all is an interesting open question. Note that strategyproof mechanisms minimizing the makespan do exist \citep{ArcTar01} as well as computationally-efficient ones which guarantee $(1+\epsilon)$-approximation in polynomial time for every fixed $\epsilon>0$ \citep{ChrKov08}.
\end{remark}
\section{Collusion-proofness}\label{sec:collusion-proof}
In this section we present an application of our general construction: A sufficient condition for which the \linearmechanism{1} is \emph{collusion-proof}, while \linearmechanism{\left(\frac{1}{2}\right)} is not even bribeproof.
\begin{definition}[collusion-proof]
A mechanism is collusion-proof if no coalition $C$ can improve its total utility by a joint misreport of the types. That is,
for every $\theta$ and for every coalition $C \subseteq \{1,\ldots,n\}$
\begin{equation} \sum_{i\in C} u_i(\theta;\theta_i) \geq \sum_{i\in C} u_i(\hat\theta;\theta_i) \label{eq:collusion-proof}\end{equation}
where $\hat \theta$ is any type vector which differs from $\theta$ only in (some of) the types of agents in $C$, and $u_i(\hat\theta;\theta_i) = p_i(\hat \theta) - A_i(\hat \theta)\cdot \theta_i$ is the utility of agent $i$ when $\hat \theta$ is reported; $u_i(\theta;\theta_i)$ is defined analogously.
\end{definition}
We consider \emph{non-binary} allocations and prove that the following two conditions are sufficient for collusion-proofness:
\begin{description}
\item[Pareto efficiency:] If there is at least one agent of type $L$, then all agents of type $H$ should get zero allocation,
\[ A_i(\theta)>0 \Rightarrow \theta_i=L \mbox{ or } \theta = \mathbf{H}, \]
where $\mathbf{H}:=(H,\ldots,H)$ denotes the vector in which all types are $H$.
\item[Minimality:] For types $\mathbf{H}$, no group of agents can decrease its total allocation by changing some of its types,
\[ \sum_{i\in C} A_i(\mathbf{H}) \leq \sum_{i\in C} A_i(\hat\theta_C,\mathbf{H}_{-C}), \]
for any vector $(\hat\theta_C,\mathbf{H}_{-C})$ in which all types not in $C$ are $H$.
\end{description}
\begin{theorem}
The \linearmechanism{1} is collusion-proof if algorithm $A$ satisfies Pareto efficiency and minimality.
\end{theorem}
\begin{proof}
Observe that in the \linearmechanism{1} the utility of each agent is zero or negative, and a negative utility occurs only when the agent under consideration has type $H$ and she gets a positive allocation.
We show that \eqref{eq:collusion-proof} holds for every $\theta$, for every coalition $C \subseteq \{1,\ldots,n\}$, and for every $\hat \theta$ which differs from $\theta$ only in (some of) the types of agents in $C$.
For $\theta\neq \mathbf{H}$, every truth-telling agent has \emph{zero} utility (because of Pareto efficiency), and thus \eqref{eq:collusion-proof} follows from the observation that utilities are nonpositive.
For $\theta = \mathbf{H}$, condition \eqref{eq:collusion-proof} follows from minimality and by the observation that in \linearmechanism{1} the utility of agents of type $H$ is of the form
\[u_i(\hat\theta;H) = -(H-L)A_i(\hat \theta). \]
This completes the proof.
\end{proof}
We next give two examples of such allocation rules. The first one is the \emph{uniform rule} which divides a perfectly-divisible good equally among all agents having the smallest type:
\begin{equation}
U_i(\theta)=\begin{cases}
1/n & \mbox{if $\theta=(H,\ldots,H)$},\\
1/n_L & \mbox{if $\theta \neq (H,\ldots,H)$ and $\theta_i=L$},\\
0 & \mbox{if $\theta \neq (H,\ldots,H)$ and $\theta_i=H$.}
\end{cases}\label{exa:uniform-rule}
\end{equation}
(Here $n_L$ denotes the number of agents of type $L$ in $\theta$.)
\begin{corollary}
The \linearmechanism{1} with uniform rule \eqref{exa:uniform-rule} is collusion-proof.
\end{corollary}
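This corollary is easy to verify exhaustively. The sketch below (our code, with $n=3$, $L=1$, $H=2$) checks condition \eqref{eq:collusion-proof} for every coalition and every joint misreport under the \linearmechanism{1} with the uniform rule:

```python
from fractions import Fraction
from itertools import chain, combinations, product

L, H = Fraction(1), Fraction(2)
N = 3

def uniform(types):
    # Uniform rule: split equally among type-L agents; 1/n each on (H,...,H).
    if all(t == H for t in types):
        return [Fraction(1, N)] * N
    n_low = sum(1 for t in types if t == L)
    return [Fraction(1, n_low) if t == L else Fraction(0) for t in types]

def coalition_util(report, true, C):
    # 1-linear mechanism (q = L, f_i = 0): u_i = A_i * (L - theta_i).
    a = uniform(report)
    return sum(a[i] * (L - true[i]) for i in C)

coalitions = list(chain.from_iterable(
    combinations(range(N), r) for r in range(1, N + 1)))

ok = all(
    coalition_util(t, t, C) >= coalition_util(rep, t, C)
    for t in product((L, H), repeat=N)
    for C in coalitions
    for rep_C in product((L, H), repeat=len(C))
    for rep in [tuple(rep_C[C.index(k)] if k in C else t[k]
                      for k in range(N))]
)
print(ok)  # True: no coalition gains from any joint misreport
```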
The second rule, defined for the case of three agents, is a simple modification of the previous one so that unequal amounts are allocated when the three types are identical:
\begin{align}
A(\theta) = \begin{cases}
(2/3,1/6,1/6) & \mbox{if $\theta\in\{(L,L,L),(H,H,H)\}$},\\
U(\theta) & \mbox{otherwise.}
\end{cases}\label{exa:uniform-rule-tilted}
\end{align}
To see that this rule also satisfies minimality, observe that $\sum_{i\in C} A_i(\hat\theta) = 1$
for all $\hat\theta=(\hat\theta_C,\mathbf{H}_{-C}) \neq \mathbf{H}$. The next simple corollary shows that the uniform rule is not the only collusion-proof rule. Moreover, it shows that in some applications $\lambda=1$ is the correct choice for \linearmechanism{\lambda}s.
\begin{corollary}\label{cor:bribeproof}
For three agents, the \linearmechanism{1} with rule in \eqref{exa:uniform-rule-tilted} is collusion-proof, while the \linearmechanism{\left(\frac{1}{2} \right) } with the same rule is not even bribeproof.
\end{corollary}
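The contrast between the two choices of $\lambda$ in this corollary can be checked directly. In the sketch below (our code, with $L=1$, $H=2$) the \linearmechanism{1} passes the pairwise bribery test, while the \linearmechanism{\left(\frac{1}{2}\right)} fails it already at $\theta=(L,L,L)$:

```python
from fractions import Fraction
from itertools import product

L, H = Fraction(1), Fraction(2)
N = 3
TILT = (Fraction(2, 3), Fraction(1, 6), Fraction(1, 6))

def alloc(types):
    # Tilted rule: unequal split when all three types coincide.
    if len(set(types)) == 1:
        return TILT
    n_low = sum(1 for t in types if t == L)
    return tuple(Fraction(1, n_low) if t == L else Fraction(0) for t in types)

def pair_ok(q):
    # Pairwise bribery test: single misreport by i, paired with any j.
    for t in product((L, H), repeat=N):
        a0 = alloc(t)
        for i in range(N):
            rep = tuple((H if t[k] == L else L) if k == i else t[k]
                        for k in range(N))
            a1 = alloc(rep)
            for j in range(N):
                if (sum(a1[k] * (q - t[k]) for k in (i, j)) >
                        sum(a0[k] * (q - t[k]) for k in (i, j))):
                    return False
    return True

print(pair_ok(L))            # True:  the 1-linear mechanism passes
print(pair_ok((L + H) / 2))  # False: the 1/2-linear mechanism can be bribed
```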
\section{Conclusion and open questions}
This work provides a sufficient condition for obtaining \emph{non-trivial} bribeproof mechanisms on certain \emph{restricted domains}. Specifically, the proposed \emph{two-values domains} can be regarded as the simplest domains which still capture some ``combinatorial'' structure of the problem. For instance, it may be the case that two particular agents cannot be selected simultaneously, and therefore have a ``conflict of interests''.
In our opinion, the simple construction proposed here is interesting for the following reasons:
\begin{itemize}
\item The impossibility results known in the literature \citep{Sch00,Miz03} do not apply as the mechanisms are far from trivial (``fixed solution'') since they maximize the social welfare in binary allocation problems.
\item In some natural welfare-maximization problems, the proposed mechanisms are essentially \emph{the only} bribeproof ones. The results are also the best possible in the sense that there is no mechanism on more general domains, nor a mechanism dealing with coalitions of size more than two.
\end{itemize}
On the one hand, this suggests that the results are tight for welfare maximization problems. On the other hand, it would be interesting to understand, or even characterize, bribeproofness in finite domains (without appealing to welfare-maximization). The impossibility results for scheduling problems presented in Section~\ref{sec:non-binary} apply only to our construction. Whether other bribeproof mechanisms exist for this problem is an interesting open question. More generally, the results suggest investigating the trade-off between the social welfare and the ``richness'' of the domain. In that sense, our results can be seen as ``complementary'' to the impossibility results in \cite{Sch00,Miz03}: On general domains nothing is possible, while optimal social welfare for certain problems can be achieved in two-values domains but not on three-values domains. Therefore, one might consider \emph{approximate} social welfare in discrete domains with ``few'' possible values (e.g., approximate path-auction with three or more values). More generally, the trade-off may also involve the type of incentive compatibility condition that we want. For example, \cite{DutGkaRou14} showed that a certain class of weakly group strategyproof mechanisms (studied in \cite{MilSeg14}) can only achieve an approximate social welfare in binary one-parameter domains.
Concerning the construction, we note that \linearmechanism{\lambda}s guarantee bribeproofness (or even stronger conditions) for different classes of problems. The choice of the parameter $\lambda$ plays a central role (this specifies the fixed payment per unit of work provided to the agents) and, in particular, different problems require a different $\lambda$ (cf., Corollary~\ref{cor:sp:characterization}, Theorem~\ref{th:parallel:three-vals:char}, and Corollary~\ref{cor:bribeproof}).
\paragraph{Acknowledgments.} We are grateful to the anonymous reviewers for a thorough reading of the paper, and for suggesting a possible connection with the work \cite{ohseto2000strategy}.
\bibliographystyle{plain}
\section{Introduction}
The nonlinear properties of low-dimensional electron systems attract a great deal of attention for their fundamental significance as well as for potentially important applications in nanoelectronics. In response to microwave radiation and $dc$ bias, strongly nonlinear electron transport\cite{zudov2001,engel2001,yang2002,dorozh2003,willett2004,mani2004,kukushkin2004,stud2005,bykov2005,bykovJETP2006,bykov2007R,zudov2007R,du2007,stud2007,zudovPRB2008,gusev2008a,hatke2009a,hatke2009b,dorozh2009,durst2003,ryzhii1970,anderson,shi,liu2005,dietel2005,inarreaPRB2005,vavilov2004,dmitriev2005,alicea2005,volkov2007,glazman2007,dmitriev2007} that gives rise to unusual electron states \cite{mani2002,zudov2003,zudov2007,bykov2007zdr,zudov2008zdr,andreev2003,auerbach2005} has been reported in two-dimensional systems of highly mobile electrons in a high magnetic field. There has also been great interest in the nonlinear response of quantum ballistic constrictions, where the effects of quantum interference, spatial dispersion and electron-electron interaction play essential roles \cite{dicarlo,wei,leturcq,zumbhl,lofgren,zhang2006,brouwer,vavilovnl,sanchez,spivak,polianski,andreev2006}.
Recent experiments, in which a $dc$ electric field is applied to highly mobile 2D electrons placed in strong magnetic fields, have demonstrated a variety of fascinating nonlinear phenomena \cite{yang2002,bykov2005,bykov2007R,zudov2007R,gusev2008b,zudov2009}. Oscillations of the nonlinear magnetoresistance with a magnetic field, which appear at a finite $dc$ bias, have been reported \cite{yang2002,bykov2005,bykov2007R,zudov2007R}. These interesting oscillations, decaying at high temperatures \cite{zudov2009}, are attributed to Landau-Zener transitions between Landau levels \cite{yang2002}. At substantially smaller $dc$ biases another important class of nonlinearities has been identified \cite{bykov2007R,gusev2008b}.
In this paper we study in detail the effect of a small $dc$ electric field $E$ on the longitudinal resistance of two-dimensional electrons in GaAs quantum wells placed in a strong magnetic field. In such a magnetic field the density of states of the 2D electrons is modulated due to the Landau quantization of the electron motion. The electric field $E$ decreases the resistance significantly \cite{bykov2005,bykov2007R,zudov2007R,gusev2008b}. The effect, existing in a broad range of temperatures, cannot be explained by an increase of the electron temperature due to heating by the electric field $E$ \cite{bykov2007R,romero2008warming}. In the paper \cite{bykov2007R} the effect is attributed to a non-uniform spectral diffusion of the 2D electrons induced by the electric field \cite{dmitriev2005}. The spectral diffusion produces a specific distribution of 2D electrons in the quantized spectrum, which is significantly different from the canonical Fermi-Dirac form. In fact, the observed strong nonlinearity is a result of the deviations of the electron distribution from the Fermi-Dirac function. The effect is considerably enhanced in electron systems with high mobility and high electron density. The high electron mobility provides strong absolute variations of the density of states and of the spectral diffusion with electron energy, appreciably increasing the magnitude of the non-thermal deviations. The high electron density substantially suppresses the electron-electron scattering, which makes the relaxation of these deviations weak.
Effects of an electric field $E$ on the resistance of two dimensional electrons placed in strong magnetic fields have been studied in many works \cite{heating,pinch}.
A substantial part of these studies was focused on the effect of the electric field $E$ on the amplitude of quantum oscillations of the resistivity. The quantum (Shubnikov-de Haas, SdH) oscillations are a result of the quantization of the electron spectrum in strong magnetic fields \cite{shoenberg1984}.
The amplitude of the oscillations depends significantly on the electron temperature \cite{shoenberg1984,ando}. It has been found that the amplitude of the SdH oscillations decreases with the electric field $E$ \cite{heating}. The effect is attributed to an increase of the electron temperature $T_e$ due to the electric heating. The explanation is based on the assumption that the surplus of the Joule energy provided by the electric field $E$ is rapidly shared among the carriers through the electron-electron interaction, establishing a thermal (Fermi-Dirac) distribution at an elevated temperature $T_e$ \cite{dolgopol1985,pepper}. The $T_e$ approximation works well in systems with strong electron-electron scattering. It ignores any deviations of the non-equilibrium electron distribution from the Fermi-Dirac form. The approximation has been widely and successfully used for 2D electron systems with low electron density and/or mobility \cite{heating}. We note, however, that a substantial discrepancy has been reported in GaAs 2D systems with a high electron mobility between the temperature $T_e$ obtained from the analysis of the amplitude of the quantum oscillations in the $T_e$ approximation and the temperature obtained by another experimental method \cite{pepper}.
Despite the apparent applicability of the $T_e$ approximation to overheated electron systems, recent studies have revealed an inadequacy of the temperature description of the nonlinear transport of highly mobile 2D carriers \cite{bykov2007R,romero2008warming,gusev2008b}. Instead of the $T_e$ approximation, in this paper we use a different approach \cite{dmitriev2005}. Below we evaluate the distribution function using an equation of spectral diffusion. In the computations no assumptions are made regarding the shape of the electron distribution function. In contrast to the $T_e$ approximation, this approach to the heating via the direct evaluation of the electron distribution function is more universal and accurate. It takes into account, in principle, $both$ the broadening ("temperature" increase) of the distribution function $and$ the deviations of the distribution function from the Fermi-Dirac form in response to the electric field $E$. The latter appears to be the dominant source of the strong nonlinearity observed in highly mobile 2D electron systems at small electric fields.
The spectral diffusion is limited by an electron inelastic relaxation, which moves the electron system back to thermal equilibrium. It opens new possibilities to study inelastic processes and nonlinear electron kinetics of low dimensional systems. In the present paper we explore these possibilities. We study the effect of electric fields on the resistivity in a broad range of magnetic fields and temperatures. We compare the experimental results with numerical simulations of the spectral diffusion. The comparison gives the inelastic scattering time of 2D electrons in a broad range of magnetic fields and temperatures.
In the temperature interval $T=2-10$ K for overlapping Landau levels, the inelastic scattering rate $1/\tau_{in}$ is found to be proportional to the square of the temperature, indicating the dominant contribution of the electron-electron interaction to the relaxation of the electron distribution function. At a strong magnetic field, at which Landau levels are well separated, the nonlinear resistance demonstrates an interesting scaling behavior. In this regime at high temperatures the inelastic scattering rate is found to be proportional to $T^3$, indicating a leading contribution of the electron-phonon scattering to the inelastic relaxation. At low temperatures and separated Landau levels an additional regime of the inelastic electron relaxation is observed: $\tau_{in} \sim T^{-1.26}$.
The paper has the following organization. The "Experimental Setup" section presents the main kinetic parameters of samples and details of the experiment. The "Theory and Numerical Simulations" section presents basic components of the theory and discusses essential steps used to calculate the longitudinal resistance. Experimental results and a comparison with numerical simulations are presented in the section "Results and Discussion". Section "Conclusion" contains a summary of the research.
\section{Experimental Setup}
Our samples are high-mobility GaAs quantum wells grown by molecular beam epitaxy on semi-insulating (001) GaAs substrates. The width of the GaAs quantum well is 13 nm. Two AlAs/GaAs type-II superlattices grown on both sides of the well serve as barriers, providing a high mobility of 2D electrons inside the well at a high electron density \cite{fried1996}. Two samples (N1 and N2) were studied, with electron densities $n_1 = 12.2 \times 10^{15}$ m$^{-2}$ and $n_2 = 8.2 \times 10^{15}$ m$^{-2}$ and mobilities $\mu_1 = 93$ m$^2$/Vs and $\mu_2 = 85$ m$^2$/Vs at T=2.7 K. At higher densities the cyclotron radius $r_C$ of 2D electrons at the Fermi level is larger. As shown below, this increases the spectral diffusion and the nonlinear response in strong magnetic fields.
\begin{figure}[tbp]
\vbox{\vspace{0 in}} \hbox{\hspace{+0.1in}} \epsfxsize 3.0 in
\vskip -0.5in %
\epsfbox{sample.eps} \vskip 0.5in
\caption{ Schematic view of the experimental setup. The studied 2D electron system is etched in the shape of a Hall bar. The white area schematically presents the details of the Hall bar: the width and the length of the measured part of the sample are $d=$50 $\mu m$ and $L=$250 $\mu m$. The direct current $I_{dc}$ is applied simultaneously with the $ac$ current $I_{ac}$ through current contacts formed in the 2D electron layer. The longitudinal $ac$ voltage $V_{ac}$ is measured between potential contacts displaced 250 $\mu m$ along each side of the sample. }
\label{sample}
\end{figure}
Measurements were carried out between T=0.3 K and T=30 K in a He-3 insert in a superconducting solenoid. Samples and a calibrated thermometer were mounted on a cold copper finger in vacuum. Magnetic fields up to 1 T were applied perpendicular to the 2D electron layers patterned in the form of $d$=50 $\mu m$ wide Hall bars with a distance of 250 $\mu m$ along the bars between potential contacts. A schematic view of the experimental setup is shown in Fig.\ref{sample}. To measure the resistance we have used the four-probe method. A direct electric current $I_{dc}$ ($dc$ bias) is applied simultaneously with an $ac$ excitation $I_{ac}$ through the same current contacts (x-direction). The current contacts are placed far away from the measured area, at a distance of 500 $\mu m$, which is much greater than the inelastic relaxation length of the 2D electrons $L_{in}=(D \tau_{in})^{1/2} \sim 1-5 $ $\mu m$ (see below). The latter ensures that possible nonlinearities near the current leads provide a negligibly small contribution to the total nonlinear response measured in the experiments.
Experiments are done at fixed magnetic fields corresponding to maxima of the Shubnikov-de Haas oscillations. At this condition the Fermi level is located at a maximum of the density of states and the contribution of the edge states to the total electron transport is small. Below we consider the density of the electrical current across the samples to be constant.
The longitudinal voltage $V_{ac}$ was measured between potential contacts (displaced along the x-direction) using a lock-in amplifier with 10 M$\Omega$ input impedance. In the experiments the potential contacts provided an insignificant contribution to the overall nonlinear response due to the small values of the contact resistance (about 1 k$\Omega$) and the negligibly small electric current flowing through the contacts ($<0.1$ nA).
The differential longitudinal resistance $r_{xx}=V_{ac}/I_{ac}$ is measured at a frequency of 77 Hz in the linear regime. In the experiment the dependence of the differential resistance $r_{xx}=dV_{xx}/dI$ on the $dc$ bias $I_{dc}$ is measured. The resistance $R_{xx}$ of the sample is obtained by an integration of the differential resistance: $R_{xx}= (\int r_{xx}dI)/I_{dc}$. In the paper we compare the resistance $R_{xx}$ with numerical calculations based on the recent theory \cite{dmitriev2005}.
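As an illustration, the recovery of $R_{xx}$ from a measured differential-resistance trace can be sketched numerically. This is a minimal example; the function and variable names are ours, not part of the experimental software:

```python
import numpy as np

def resistance_from_differential(i_dc, r_xx):
    """R_xx(I_dc) = (1/I_dc) * integral_0^{I_dc} r_xx dI, obtained by a
    cumulative trapezoidal integration of the measured trace r_xx(I)."""
    # V_xx(I) = integral of r_xx dI (trapezoid rule, V_xx(0) = 0)
    v_xx = np.concatenate(
        ([0.0], np.cumsum(np.diff(i_dc) * 0.5 * (r_xx[1:] + r_xx[:-1]))))
    with np.errstate(divide="ignore", invalid="ignore"):
        # at I_dc = 0 the ratio V/I tends to the differential value r_xx(0)
        return np.where(i_dc != 0, v_xx / i_dc, r_xx[0])
```

For an Ohmic trace (constant $r_{xx}$) the procedure returns $R_{xx}=r_{xx}$ at every bias, as expected, while any bias dependence of $r_{xx}$ is averaged over the interval $[0, I_{dc}]$.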
Experiments are done in classically strong magnetic fields ($\omega_c \tau_{tr} \gg 1$), where $\omega_c$ is the cyclotron frequency and $\tau_{tr}$ is the transport scattering time. At this condition the electric current density $\vec J=(J_x, 0)$ directed along the x-axis is almost perpendicular to the total electric field $\vec E=(E_x,E_y)$, where $E_x \ll E_y$ \cite{ziman}. The magnitude of the Hall electric field $E_H=E_y$ directed along the y-axis is almost equal to the magnitude of the total electric field $\vert \vec E \vert$. Below we take the magnitude of the Hall electric field $E_H$ to be equal to the magnitude of the total electric field $\vec E$ applied to the samples. The local Joule heat injected into the 2D system per second can be evaluated with an accuracy better than 2\% as $J_x \cdot E_x = (\sigma_{xx} E_x+\sigma_{xy} E_y)\cdot (\sigma_{xx}/\sigma_{xy})E_y \approx \sigma_{xx} \cdot E_H^2$, where $\hat \sigma$ is the conductivity tensor in the strong magnetic field.
In our experiments the Hall voltage $V_{xy}$ is recorded simultaneously with the longitudinal voltage $V_{xx}$. The observed variations of the Hall conductivity $\sigma_{xy}$ and the Hall electric field $E_H$ with the $dc$ bias were below 1\%. These variations yield a negligibly small contribution to the overall dependence of the longitudinal conductivity $\sigma_{xx}$ on the $dc$ bias. This contribution is ignored in the comparison between the experiment and the theory.
\section {Theory and Numerical Simulations}
In this section we present the basic parts of the theory \cite{dmitriev2005} and the details of the numerical calculations of the nonlinear resistivity. The theory considers nonlinear electron transport in a strong magnetic field. In the magnetic field the electron spectrum is quantized and the density of states oscillates with the energy. The period of the oscillations is the cyclotron energy $\hbar \omega_c$. The width of the Landau levels is $\Gamma=\hbar/\tau_q$, where $\tau_q$ is the quantum scattering time. At low temperatures the time $\tau_q$ is determined by the elastic impurity scattering of the 2D electrons. At small quantizing magnetic fields the electron spin splitting is much smaller than the level width $\Gamma$ \cite{romeroHparallel2008}. The spin splitting is neglected in the paper.
The net longitudinal conductivity of the 2D electrons $\sigma_{nl}=\sigma_{xx}$ is a sum of conductivities $\sigma(\epsilon)$ of the levels with energy $\epsilon$ over all possible energies, weighted with the first derivative of the distribution function $\partial f/\partial \epsilon$ \cite{ando}:
\begin{equation}
\sigma_{nl}= \int \sigma(\epsilon)(-\partial f/ \partial \epsilon) d\epsilon,
\label{sigma_nl}
\end{equation}
In the leading approximation for a classically strong magnetic field the longitudinal conductivity $\sigma(\epsilon)$ at an energy $\epsilon$ reads \cite{dmitriev2005}:
\begin{equation}
\sigma(\epsilon)=\sigma_D \tilde{\nu}^2(\epsilon),
\label{sigma_dc}
\end{equation}
where $\sigma_D=e^2 \nu_0 v_F^2/2 \omega_c^2 \tau_{tr}$ is the $dc$ Drude conductivity in a strong magnetic field $B$, $\tilde{\nu}(\epsilon)= \nu(\epsilon)/ \nu_0$ is the dimensionless density of states (DOS), $\tau_{tr}$ and $\nu_0=m/\pi \hbar^2$ are the transport scattering time and the density of states at zero magnetic field, and $v_F$ is the Fermi velocity. The approximation neglects effects of the electric field on the electron-impurity collisions, which yield a negligibly small correction to the nonlinear resistance at small electric fields \cite{dmitriev2005}. The dominant nonlinear effect is due to a non-trivial energy dependence of the distribution function $f(\epsilon)$, which is a result of the non-uniform spectral diffusion of the 2D electrons in response to the total $dc$ electric field $\vec E$ applied to the system.
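The integral in eq.\ref{sigma_nl} combined with eq.\ref{sigma_dc} is straightforward to evaluate numerically. A minimal sketch (the naming is ours; energies are measured in units of $kT$):

```python
import numpy as np

def normalized_conductivity(eps, nu_tilde, f):
    """sigma_nl / sigma_D = integral of nu_tilde^2(eps) * (-df/deps) deps
    (eq. 1 combined with eq. 2); eps, nu_tilde, f are arrays on a grid."""
    dfde = np.gradient(f, eps)
    integrand = nu_tilde**2 * (-dfde)
    # trapezoidal rule, written out to stay NumPy-version independent
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(eps)))

# sanity check: for a flat DOS (nu_tilde = 1) and any distribution falling
# from 1 to 0, the integral of -df/deps is 1, so sigma_nl/sigma_D = 1
eps = np.linspace(-50.0, 50.0, 20001)   # energies in units of kT
f_T = 1.0 / (np.exp(eps) + 1.0)         # Fermi-Dirac distribution, kT = 1
print(normalized_conductivity(eps, np.ones_like(eps), f_T))
```

With an oscillating $\tilde\nu(\epsilon)$ and the same thermal distribution the routine reproduces the thermally averaged SdH oscillations discussed below.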
Due to the conservation of the total electron energy $\epsilon_0$ in the presence of the external electric field $\vec E$ and the elastic electron-impurity scattering, the kinetic energy of an electron $\epsilon_K$ depends on the electron position $\vec r$: $\epsilon_ K(\vec r)=\epsilon_0 -e \vec E \vec r$. As a result of the energy conservation, the diffusive motion of the electron in real space induces a diffusion of the electron kinetic energy in the energy space. The diffusion generates a spectral electron flow from occupied electron levels below the Fermi energy to empty states above it. The coefficient of the spectral diffusion $D_\epsilon(\epsilon)$ is proportional to the coefficient of the spatial diffusion $D(\epsilon)= v_F^2 \tilde{\nu}(\epsilon) /2 \omega_c^2 \tau_{tr}= r_C^2 \tilde{\nu}(\epsilon) /2 \tau_{tr}$: $D_\epsilon(\epsilon)=(e E)^2 D(\epsilon) \sim (\delta \vec r)^2$. The spectral diffusion is proportional to the square of the cyclotron radius $r_C$ and to the normalized density of states $\tilde{\nu}(\epsilon)$. The spectral diffusion is most effective in the center of the Landau levels, where the density of states is high; it gradually decreases away from the center and is suppressed considerably between Landau levels, where the density of states is small.
The spectral diffusion is described by the Fokker-Plank type equation \cite{dmitriev2005}:
\begin{equation}
-\frac{\partial f}{\partial t}+E^2\frac{\sigma _{dc}^D}{\nu _0 \tilde{\nu}(\epsilon)}\partial_{\epsilon}\left[\tilde{\nu} ^2(\epsilon) \partial _{\epsilon}f(\epsilon)\right]=\frac{f(\epsilon)-f_T(\epsilon)}{\tau_{in}}
\label{main}
\end{equation}
The left side of the equation describes the spectral diffusion of the isotropic part of the electron distribution function $f$ induced by the electric field $E$ in the presence of the elastic impurity scattering. The higher angular harmonics of the distribution function provide much smaller contributions to the net function $f$ due to their much faster temporal relaxation; they are neglected in eq.\ref{main}. The right side of the equation describes the inelastic relaxation of the distribution function toward the thermal equilibrium expressed by the Fermi-Dirac function $f_T(\epsilon)$. The inelastic relaxation is taken in the so-called $\tau$ approximation of the inelastic collision integral. The validity of the approximation is supported theoretically in the high temperature limit $kT \gg \hbar \omega_c$ \cite{dmitriev2005}. Below, in the numerical calculations of eq.\ref{main}, we consider the inelastic scattering rate $1/\tau_{in}$ to be a constant independent of the electric field $E$ and the electron energy $\epsilon$.
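The stationary state of eq.\ref{main} can be reached numerically by pseudo-time stepping. Below is a minimal sketch of such a solver; the discretization and all names are ours, $A=E^2\sigma_{dc}^D/\nu_0$ lumps the field strength, and energies are measured in units of $kT$:

```python
import numpy as np

def steady_distribution(eps, nu_t, f_T, A, tau_in, n_steps=20000):
    """March the Fokker-Planck equation (eq. 3) in pseudo-time until the
    stationary distribution f(eps) is reached.
    A = E^2 * sigma_D / nu_0 sets the spectral-diffusion strength."""
    de = eps[1] - eps[0]
    f = f_T.copy()
    # diffusion coefficient nu_t^2 evaluated on the half-grid (cell faces)
    D_half = 0.5 * (nu_t[1:]**2 + nu_t[:-1]**2)
    # explicit time step limited by diffusion stability and by tau_in
    dt = 0.2 * min(de**2 * nu_t.min() / (A * D_half.max() + 1e-30), tau_in)
    for _ in range(n_steps):
        flux = D_half * np.diff(f) / de        # nu_t^2 * df/deps on faces
        div = np.zeros_like(f)
        div[1:-1] = np.diff(flux) / de         # d/deps [nu_t^2 * df/deps]
        f += dt * (A * div / nu_t - (f - f_T) / tau_in)
    return f
```

The spectral-diffusion term is written in conservative (flux) form, so the particle number is preserved by construction, and the relaxation term pulls $f$ back toward $f_T$ on the time scale $\tau_{in}$.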
Good agreement is found between the experiment and the numerical calculations for a broad range of temperatures $kT>\Gamma$ and magnetic fields.
At small magnetic fields the assumption that the inelastic time $\tau_{in}$ is independent of the electric field $E$ is supported by a direct evaluation of the variation (broadening) of the distribution function, which is found to be small at the $dc$ biases used in the experiment. The small variation provides a negligibly small correction to the inelastic collision integral and to the inelastic scattering rate. Moreover, at $kT \ge \Gamma$ the energy space available for the inelastic scattering of an electron inside a Landau sub-band contains, in fact, all levels of the sub-band. This may provide the weak dependence of the inelastic electron scattering on the energy $\epsilon$ inside the Landau level.
At a strong magnetic field, at which Landau levels are well separated, we have found a scaling behavior of the nonlinear resistance (see figs.\ref{tau_vsT_s2} and \ref{tau_vsT_s1}). In this regime the experiment and the theory demonstrate a remarkable correspondence even at a strong variation of the nonlinear resistance. This behavior is unexpected, since the strong variation of the resistance implies a substantial deviation of the electron distribution function from the equilibrium and, therefore, an apparent inapplicability of the $\tau$ approximation with a constant $\tau_{in}$. Below we provide arguments which shed light on this interesting phenomenon.
At a strong magnetic field, at which Landau levels are well separated, the spectral diffusion between Landau levels is absent due to the lack of available electron states ($\nu=0$). In this regime the total broadening of the distribution function is absent and, therefore, the total number of Landau levels participating in the spectral diffusion is fixed. There is, however, a spectral diffusion inside Landau levels, generating local spectral flows. Since the spectral diffusion conserves the total number of particles and since there is no electron transport between Landau levels, the total number of electrons inside any Landau level is preserved and equal to the thermal equilibrium value, despite considerable deviations of the electron distribution function from the thermal equilibrium inside the level. It is clear that in this condition the total number of empty states in each Landau level is also fixed and equal to its value at the thermal equilibrium (at zero $dc$ bias). Thus for the isolated Landau levels the averaged spectral distribution of electron states available for the inelastic scattering of an electron is independent of the applied electric field. This may provide the significant stability of the inelastic relaxation rate with respect to the $dc$ bias. These arguments are valid when the electron distribution inside a Landau level does not change substantially with the electron energy. This regime holds at relatively high temperatures: $kT >\Gamma$.
At low temperatures $kT< \Gamma$ only one Landau level is involved in the electron transport, and at the thermal equilibrium the electron distribution changes strongly inside the level. An application of a $dc$ bias changes the distribution of electrons appreciably. At $kT< \Gamma$ the numerical calculations done in the $\tau$ approximation deviate substantially from the experiment (see fig.\ref{B0784T}c), indicating a limited applicability of the approximation at low temperatures.
The numerical calculations are done in several steps. The goal of the first step is to find the density of electron states $\nu(\epsilon)$ from a comparison with the experiment. The density of states $\nu(\epsilon)$ of the 2D electrons can be approximated by different theoretical expressions \cite{uemura,ando,raikh,xie1990,endo2008}. We have found that the numerical results for the temperature dependence of the inelastic scattering rate are robust with respect to the particular choice of the expression for the density of states (see below). Most of the numerical results presented in the paper are obtained using a Gaussian form of the DOS \cite{raikh}:
\begin{equation}
\nu (\epsilon)= \nu_0 \sqrt{\omega_c \tau_q}\sum \limits_n exp\left(-\frac{(\epsilon -n\omega _c)^2 }{\omega _{c}/\pi \tau _q} \right),
\label{dos}
\end{equation}
where $\tau_q$ is the quantum scattering time. To find the DOS we compare the normalized longitudinal resistance $R_{xx}/R_0$ with the numerical evaluation of the normalized longitudinal conductivity $\sigma_{nl}/ \sigma_D$ obtained from eq.\ref{sigma_nl} with the thermal equilibrium distribution function $f_T(\epsilon)$. Here $R_0$ is the resistance of the sample in zero magnetic field. In the leading approximation and at classically strong magnetic fields ($\omega_c \tau_{tr} \gg 1$) the two ratios are equal to each other: $ R_{xx}/R_0=\sigma_{nl}/ \sigma_D$. From the comparison we have obtained the quantum scattering time $\tau_q$ and, therefore, have approximated the density of electron states in eq.\ref{dos}. Comparable values of the quantum scattering time have been obtained using other methods, in particular, from an analysis of the magnitude of the quantum oscillations \cite{ando}.
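The Gaussian DOS of eq.\ref{dos} is simple to tabulate. A small sketch in its dimensionless form $\tilde\nu=\nu/\nu_0$ (energies in units of $\hbar$; the names are ours):

```python
import numpy as np

def dos_gaussian(eps, omega_c, tau_q, n_levels=200):
    """Dimensionless Gaussian DOS of eq. 4, nu/nu_0, with Landau levels
    centered at eps = n*omega_c and width parameter omega_c/(pi*tau_q)."""
    n = np.arange(n_levels)[:, None]
    width_sq = omega_c / (np.pi * tau_q)
    g = np.exp(-(eps[None, :] - n * omega_c) ** 2 / width_sq)
    return np.sqrt(omega_c * tau_q) * g.sum(axis=0)
```

Averaged over one cyclotron period the dimensionless DOS equals unity, i.e. the Landau quantization only modulates the zero-field value $\nu_0$; the modulation depth grows with $\omega_c\tau_q$.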
In the second step we use the DOS to numerically calculate the distribution function $f(\epsilon)$ using eq.\ref{main} in the limit $t \gg \tau_{in}$. In this limit the distribution function reaches a stationary state corresponding to the $dc$ response. The distribution function is calculated at different values of the electric field $E$.
In the third step the normalized nonlinear conductivity $\sigma_{nl}/ \sigma_D$ is calculated using eq.\ref{sigma_nl} for different electric fields. The results are compared with the normalized resistance $R_{xx}/R_0$. The inelastic scattering time $\tau_{in}$ is found from the best fit between the dependencies of the normalized resistance $R_{xx}/R_0$ and of the calculated normalized conductivity $\sigma_{nl}/ \sigma_D$ on the $dc$ bias.
\begin{figure}[tbp]
\vbox{\vspace{0 in}} \hbox{\hspace{+0.1in}} \epsfxsize 3.4 in
\vskip -0.5in %
\epsfbox{example.eps} \vskip 0.5in
\caption{(color online) Normalized density of states $\tilde{\nu}$, distribution function $f$ and non-equilibrium part of the distribution function $\Delta f= f- f_T$ are shown as function of electron energy. The distribution function $f$ is obtained by numerical evaluation of eq. \ref{main}, using physical parameters typical for experiments presented below: $I_{dc}$=377 ($\mu A$); $\tau_{in}$=0.55 (ns); $\tau_q$=1.1 (ps); B=0.924 (T) and T=10.7 (K) }
\label{example}
\end{figure}
In accordance with eq.\ref{main} the spectral diffusion generates an electron spectral flow $J_\epsilon$ from low energy regions (occupied levels) to high energies (empty levels). The spectral flow is proportional to the coefficient of the spectral diffusion $D_\epsilon$ and to the gradient of the distribution function $\partial f/\partial \epsilon$: $J_\epsilon =D_\epsilon(\epsilon) \cdot \partial f/\partial \epsilon$. In a stationary state the spectral electron flow $J_\epsilon$ is constant. As a result, the gradient of the distribution function $\partial f/\partial \epsilon$ is strong in the regions of weak spectral diffusion (between Landau levels) and is small in the regions with strong spectral diffusion (centers of the Landau levels). It is important to realize that a $weak$ inelastic scattering cannot change significantly this robust dynamic flow in the energy space and, therefore, the behavior of the distribution function. This corresponds to our numerical calculations. Fig.\ref{example} demonstrates the density of states, the distribution function and the non-equilibrium part of the function induced by the $dc$ current $I_{dc}$. Indeed, the gradient of the distribution function is considerably suppressed inside Landau levels. This is due to both the fast spectral diffusion inside Landau levels and the slow diffusion between them. Such a non-equilibrium distribution function cannot be described by a temperature \cite{romero2008warming}. In accordance with eq.\ref{sigma_nl} the small gradient of the distribution function inside conducting Landau levels makes the net value of the nonlinear longitudinal conductivity (resistivity) significantly smaller than the linear, unbiased value. Below we present the detailed comparison between the experiments and the numerical calculations.
\begin{figure}[tbp]
\vbox{\vspace{0 in}} \hbox{\hspace{+0.1in}} \epsfxsize 3.4 in
\vskip -0.5in %
\epsfbox{SdH.eps} \vskip 0.5in
\caption{ (Color online), Dependencies of the longitudinal resistance $r_{xx}$ on magnetic field at different temperatures with no $dc$ bias (black solid and dotted lines) and with applied $dc$ bias $I_{dc}=$6 ($\mu$A) at T=2.04 K (grey solid line (red online)). Arrow indicates magnetic field B=0.1 T above which the electron spectrum is modulated due to quantization of electron motion: Landau levels.}
\label{SdH}
\end{figure}
\section{Results and Discussion}
Fig.\ref{SdH} demonstrates dependencies of the longitudinal resistance of
two dimensional electrons on the magnetic field in sample N2. The two upper curves present dependencies obtained at different temperatures, T=2.16 K (dotted curve) and T=4.2 K (solid curve), at zero $dc$ bias. At small magnetic fields $B<$0.1 T the magnetoresistance demonstrates the classical independence of the magnetic field \cite{ziman}. At $B>$0.1 T the electron spectrum is quantized, and at a temperature $T=$0.3 K the resistance demonstrates quantum oscillations (not shown). An arrow marks the magnetic field $B=$0.1 T above which the electron spectrum is modulated due to the quantization of the electron motion in magnetic fields.
At magnetic fields $B<0.3$ T the two traces at T=2.16 K and at T=4.2 K are almost identical, indicating a very weak temperature dependence of the resistance ($dr_{xx}/dT>0$). At stronger magnetic fields the quantum (Shubnikov-de Haas, SdH) oscillations are observed. The oscillations are a result of the Landau quantization of the electron spectrum in the magnetic fields. At thermal equilibrium the amplitude $A$ of the oscillations follows from eq.\ref{sigma_nl} and eq.\ref{sigma_dc} with the Fermi-Dirac distribution function: $A \sim X_T/\sinh(X_T)$, $X_T=2\pi^2kT/\hbar \omega_c$ \cite{shoenberg1984,ando}. At small magnetic fields $\hbar \omega_c \ll kT $ the amplitude of the SdH oscillations is small due to an effective averaging of the conductivity oscillations $\sigma(\epsilon)$ (see eq.\ref{sigma_dc}) over the temperature interval $kT$ in eq.\ref{sigma_nl}. Fig.\ref{SdH} shows that the increase of the temperature reduces the magnitude of the oscillations symmetrically toward a background, which is an averaged value between the maxima and minima of the oscillations.
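The thermal damping factor $X_T/\sinh(X_T)$ quoted above is easy to evaluate directly. A small sketch, with the GaAs effective mass $m^*=0.067\,m_e$ assumed by us:

```python
import numpy as np

# physical constants (SI units)
HBAR, KB, E_CHARGE, M_E = 1.0546e-34, 1.381e-23, 1.602e-19, 9.109e-31

def sdh_damping(T, B, m_eff=0.067 * M_E):
    """Thermal damping factor A ~ X_T/sinh(X_T) of the SdH amplitude,
    with X_T = 2 pi^2 k T / (hbar omega_c)."""
    omega_c = E_CHARGE * B / m_eff            # cyclotron frequency
    x = 2.0 * np.pi**2 * KB * T / (HBAR * omega_c)
    return x / np.sinh(x)
```

The factor tends to 1 as $T \to 0$ and suppresses the oscillations exponentially once $kT$ exceeds $\hbar\omega_c$, consistent with the thermal-averaging argument above.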
A different behavior of the resistance is found in the response to the $dc$ bias \cite{romero2008warming}. In fig.\ref{SdH} the lower curve presents a typical dependence of the differential resistance on the magnetic field at a finite $dc$ bias. At $B>0.1$ T, at which the Landau quantization appears, the resistance shows a considerable decrease with the $dc$ bias ($dr_{xx}/dI <0$). The decrease of the resistance cannot be explained by a temperature increase due to the $dc$ heating: a temperature increase raises the resistance ($dr_{xx}/dT>0$). Moreover, the quantum oscillations at the finite $dc$ bias do not have the canonical shape corresponding to the two upper curves at zero $dc$ bias. Instead, a strong increase of the higher harmonics of the oscillations is obvious. The enhancement of the higher harmonic content is in apparent contradiction with the description of the $dc$ biased electrons by an elevated temperature $T_e$: a high temperature reduces the higher harmonic content of the oscillations exponentially \cite{shoenberg1984,ando,romero2008warming}.
Below we show that the strong decrease of the resistance with the $dc$ bias is a result of the non-uniform spectral diffusion of 2D electrons through Landau levels. We consider two regimes in detail. One regime corresponds to small magnetic fields, at which Landau levels overlap and the temperature is higher than the level separation: $kT \gg \hbar \omega_c$. In this regime the quantum oscillations are absent and the resistance depends weakly on the temperature. At the small magnetic fields the spectral diffusion equation is solved both numerically and analytically \cite{dmitriev2005}. The other regime corresponds to high magnetic fields, at which the Landau levels are separated: $\hbar \omega_c > \Gamma$. For sample N2 the first regime corresponds to $B < 0.2$ T whereas the second regime is at $B>0.7$ T (see fig. \ref{SdH}).
\subsection {Small magnetic fields}
At small magnetic fields the separation between Landau levels $\hbar \omega_c$ is less than the effective width of the levels $\Gamma=\hbar/\tau_q $. At low temperatures the width $\Gamma$ is predominantly determined by the elastic impurity scattering of the 2D electrons. At small magnetic fields the density of states $\nu(\epsilon)$ oscillates weakly with the energy $\epsilon$, making the spectral diffusion also a weakly modulated function of the energy. We consider the regime of high temperatures, $kT \gg \hbar \omega_c$, in which the quantum oscillations are absent and the resistance increases weakly with the temperature $T$.
\begin{figure}[tbp]
\vbox{\vspace{0 in}} \hbox{\hspace{+0.1in}} \epsfxsize 3.4 in
\vskip -0.5in %
\epsfbox{figB03434T.eps} \vskip 0.5in
\caption{ (Color online), (a) Dependence of normalized longitudinal resistance $R_{xx}/(R_0=37.75 \Omega)$ on electric current. Symbols are experimental data points. Solid lines present analytical results (eq.\ref{analytic}) and numerical evaluation of the normalized resistance at $\gamma=0.9931$, $\tau_q=1.138$ (ps) and $\tau_{in}=$23.65 (ps) for the gaussian form of the DOS. Thin dotted line is the numerical evaluation of the resistance, using the SCBA density of states with $\gamma=0.9931$, $\tau_q=1.132$ (ps) and $\tau_{in}=$21.4 (ps); (b) density of states, electron distribution function $f$ and the non-equilibrium part of the function $\Delta f=f-f_T$ at $dc$ bias $I_{dc}=177.6 \mu A$, (Gaussian DOS); (c) density of states, electron distribution function $f$ and the non-equilibrium part of the function $\Delta f=f-f_T$ at $dc$ bias $I_{dc}=192.5 \mu A$, (SCBA DOS); T=12.75 (K), B=0.3434 (T), sample N1.}
\label{B03434T}
\end{figure}
Fig.\ref{B03434T}(a) shows the dependence of normalized resistance $R/R_0$ of the sample N1 on electric current at a small magnetic field $B=$0.343 (T) and temperature $T=$12.75 (K). The parameter $R_0$ is the resistance at zero magnetic field. At small $dc$ biases the normalized resistance decreases with the electric current. We consider the decrease as a result of the non-uniform spectral diffusion of 2D electrons. At higher biases the resistance increases with the electric current due to other mechanisms of the nonlinearity \cite{dmitriev2007, glazman2007}. In accordance with the theory \cite{dmitriev2005} the decrease of the resistivity obeys the following relation:
\begin{equation}
\sigma_{xx}/\sigma_D=\gamma+2\delta^2[1-\frac{4Q_{dc}}{1+Q_{dc}}],
\label{analytic}
\end{equation}
where $\gamma=1$ and $\delta=\exp(-\pi/\omega_c \tau_q)$ is the Dingle factor. The parameter $Q_{dc}$ takes into account the electric field $E$ (Hall electric field \cite{hall}):
\begin{equation}
Q_{dc}=\frac{2\tau_{in}}{\tau_{tr}}(\frac{eE
v_F}{\omega_c})^2(\frac{\pi}{\hbar \omega_c})^2.
\label{qfactor}
\end{equation}
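The combined effect of eqs.\ref{analytic} and \ref{qfactor} can be sketched numerically. The sketch below is illustrative only: the fitted times are taken from the text, while the effective mass, transport time $\tau_{tr}$ and Fermi velocity $v_F$ are assumed values, not measured parameters of the samples.

```python
import math

HBAR = 1.0546e-34   # J s
E_CH = 1.6022e-19   # C

def dingle_factor(omega_c, tau_q):
    # Dingle factor: delta = exp(-pi / (omega_c * tau_q))
    return math.exp(-math.pi / (omega_c * tau_q))

def q_dc(E, omega_c, tau_in, tau_tr, v_F):
    # eq. (qfactor): Q_dc = (2 tau_in/tau_tr) (e E v_F/omega_c)^2 (pi/(hbar omega_c))^2
    return (2.0 * tau_in / tau_tr) * (E_CH * E * v_F / omega_c) ** 2 \
        * (math.pi / (HBAR * omega_c)) ** 2

def sigma_ratio(E, gamma, omega_c, tau_q, tau_in, tau_tr, v_F):
    # eq. (analytic): sigma_xx/sigma_D = gamma + 2 delta^2 [1 - 4 Q/(1 + Q)]
    d = dingle_factor(omega_c, tau_q)
    Q = q_dc(E, omega_c, tau_in, tau_tr, v_F)
    return gamma + 2.0 * d ** 2 * (1.0 - 4.0 * Q / (1.0 + Q))

# Fitted times from the text; the remaining parameters are assumptions.
m_star = 0.067 * 9.109e-31            # assumed GaAs effective mass, kg
omega_c = E_CH * 0.3434 / m_star      # cyclotron frequency at B = 0.3434 T
tau_q, tau_in = 1.138e-12, 23.65e-12  # s
tau_tr, v_F = 1.0e-11, 3.0e5          # assumed transport time (s) and Fermi velocity (m/s)

r0 = sigma_ratio(0.0, 0.9931, omega_c, tau_q, tau_in, tau_tr, v_F)
```

At zero bias the ratio reduces to $\gamma + 2\delta^2$, and any finite Hall field lowers it, reproducing the initial decrease of the resistance with the current.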
To compare with the experiment we have used the Dingle factor $\delta$($\tau_q$) and the inelastic scattering time $\tau_{in}$ as fitting parameters. We have also varied the parameter $\gamma$ to take into account possible memory effects \cite{vavilov2004,mirlin1999} and other deviations from the Drude magnetoconductivity \cite{shklov}, which are ignored at $\gamma=1$. A solid line presents the theoretical dependence (see eq.\ref{analytic}) of the normalized resistivity at $\gamma=0.9931$, $\tau_q=1.138$ (ps) and $\tau_{in}=$23.65 (ps). Another solid line, which is indistinguishable from the analytical result, presents the numerical evaluation of the normalized resistivity, using eq.\ref{main} with the same fitting parameters $\gamma=0.9931$, $\tau_q=1.138$ (ps) and $\tau_{in}=$23.65 (ps) and the Gaussian form of the DOS \cite{raikh}. A thin dotted line in fig.\ref{B03434T}(a) demonstrates the numerical evaluation of the resistance, using the SCBA density of states with $\gamma=0.9931$, $\tau_q=1.132$ (ps) and $\tau_{in}=$21.4 (ps). The density of states, electron distribution function $f$ and the non-equilibrium part of the function $\Delta f=f-f_T$ are shown in fig.\ref{B03434T}(b) (Gaussian DOS) and \ref{B03434T}(c) (SCBA DOS). Fig.\ref{B03434T}(a) demonstrates good agreement between the experiment and the theory at small $dc$ biases.
\begin{figure}[tbp]
\vbox{\vspace{0 in}} \hbox{\hspace{+0.1in}} \epsfxsize 3.2 in
\vskip -0.1in %
\epsfbox{B02T.eps} \vskip 0.0in
\caption{ (Color online), (a) Dependence of normalized longitudinal resistance $R_{xx}/R_0$ on electric current at different temperatures as labeled. Solid lines are experimental curves. Symbols present results of numerical calculations of the resistance, using Gaussian DOS (eq.\ref{dos}) with $\gamma=1$ and $\tau_q$ and $\tau_{in}$ presented in fig.\ref{B05T}(a); dotted lines demonstrate numerical evaluation of $R/R_0$ using SCBA DOS with $\gamma=1$ and $\tau_q$ and $\tau_{in}$ presented in fig.\ref{B05T}(a). (b) Dependencies of normalized SCBA density of states $\tilde {\nu}(\epsilon)=\nu(\epsilon)/\nu_0$, electron distribution function $f$ and non-equilibrium part of the function $\Delta f$ on electron energy $\epsilon$ counted with respect to the Fermi energy $\mu$. The distribution function is a solution of eq.\ref{main} using SCBA DOS with $\tau_q$=3.8 (ps), temperature T=4.41 (K) and electric current $I_{dc}$=50.6 ($\mu$A). (c) Dependencies of normalized Gaussian density of states $\tilde {\nu}(\epsilon)=\nu(\epsilon)/\nu_0$, electron distribution function $f$ and non-equilibrium part of the function $\Delta f$ on electron energy $\epsilon$. The distribution function is a solution of eq.\ref{main} using the Gaussian DOS with $\tau_q$=3.96 (ps), temperature T=4.41 (K) and electric current $I_{dc}$=56.4 ($\mu$A); $R_0(2.34K)=44.6 (\Omega)$, $R_0(4.41K)= 46.36 (\Omega)$, $R_0(6.17K)= 49.29(\Omega)$, $R_0(8.41K)= 52.47(\Omega)$; B=0.2 (T); sample N2.}
\label{B02T}
\end{figure}
Fig.\ref{B02T}(a) shows the dependence of the resistance of sample N2 on the direct current at different temperatures as labeled. Solid lines present experimental dependencies. Dotted lines demonstrate results of numerical evaluation of the resistance, using eq.\ref{main} with the SCBA DOS at T=2.34 (K) and T=4.41 (K). The numerical calculations demonstrate a strong nonlinear suppression of the longitudinal resistance with the $dc$ bias. The result is due to the drastic modulation of the SCBA density of states, and therefore of the spectral diffusion, with the energy.
The SCBA DOS, distribution function and the non-equilibrium part of the function are presented in fig.\ref{B02T}(b) at temperature T=4.41 (K). The DOS demonstrates sharp drops to almost zero values between Landau levels. Such strong modulation of the DOS significantly suppresses the energy exchange between different levels, facilitating the electron "warming" inside the levels \cite{romero2008warming}. The results, however, are apparently less compatible with the experiment than those obtained with a smoother Gaussian DOS.
\begin{figure}[tbp]
\vbox{\vspace{0 in}} \hbox{\hspace{+0.1in}} \epsfxsize 3.4 in
\vskip -0.5in %
\epsfbox{B05T.eps} \vskip 0.5in
\caption{ Dependencies of the inelastic scattering time $\tau_{in}$ and the quantum time $\tau_q$ on temperature. (a) Filled squares show inelastic scattering time $\tau_{in}$, obtained numerically using eq.\ref{main} with Gaussian DOS; open circles present $\tau_{in}$ obtained, using eq.\ref{main} with SCBA DOS. Magnetic field B=0.2 (T). Sample N2.
(b) Sample N1. Gaussian DOS. Magnetic field is 0.5 (T).}
\label{B05T}
\end{figure}
In fig.\ref{B02T}(a) symbols present results of the numerical evaluation of the longitudinal resistivity, using eq.\ref{main} with the Gaussian DOS and the quantum scattering times and inelastic times shown in fig.\ref{B05T}(a). The numerical simulations demonstrate good agreement with the experiment in a considerably broader range of $dc$ biases. The Gaussian DOS is shown in fig.\ref{B02T}(c), demonstrating moderate oscillations with energy.
The experiment and the numerical calculations correspond well to each other at small electric currents $I_{dc}$. At higher currents considerable deviations between the experiment and the theory occur. The deviations are expected: at higher currents there are additional mechanisms of the 2D electron nonlinearity \cite{yang2002,durst2003,ryzhii1970,anderson,shi,liu2005,dietel2005,inarreaPRB2005,vavilov2004,dmitriev2005,alicea2005,volkov2007,glazman2007,dmitriev2007}, which are not taken into account in eq.\ref{main}. These nonlinearities are beyond the scope of the present paper. Moreover, an additional contribution to the deviations may occur due to the assumption of a constant inelastic relaxation rate $1/\tau_{in}$ in eq.\ref{main}. At very small $dc$ biases, at which the electron distribution is near the thermal equilibrium, the variation of the inelastic rate with the $dc$ bias is also small, since the phase space available for the inelastic scattering of an electron is nearly the same as at the equilibrium. At stronger $dc$ biases the distribution function is broader and the inelastic scattering rate can be considerably stronger.
To estimate the broadening of the distribution function at small magnetic fields, at which the spectrum is weakly modulated, we approximate the distribution function by an elevated temperature $T_e$. At a stationary condition an increase of the Joule heat, $dP=d(J^2 \cdot \rho)$, is balanced by an increase of the heat dissipation, $dE/ \tau_r(T_e)=c(T_e)dT / \tau_r(T_e)$, where $c(T_e)=c_0T_e$ is the electron heat capacity, $\tau_r$ is the relaxation time of the total electron energy, $J$ is the current density and $\rho$ is the electron resistivity per square. In our case the time $\tau_r$ is controlled by the electron-phonon scattering, since the electron-electron scattering cannot stabilize the global broadening of the distribution function. For the estimation of the broadening we use $\tau_r=\tau_{e-ph}/T^3$ with $\tau_{e-ph}=20$ (ns/K$^3$) \cite{pepper,sergeev}. An integration of both sides of the balance equation yields: $T_e^5-T_L^5=5\tau_{e-ph}J^2 \rho/c_0$. At the lattice temperature $T_L$=2.34 (K) a temperature increase $\Delta T_e=T_e-T_L=$0.14 (K) is found at $I_{dc}$=9 ($\mu$A), and $\Delta T_e$=0.34 (K) at $I_{dc}$=17 ($\mu$A), at which a deviation between the solution of eq.\ref{main} with a constant $\tau_{in}$ and the experiment is evident. Thus the estimation indicates that the deviation between the experiment and the theory at high $dc$ biases can also be related to the variation of the inelastic scattering time $\tau_{in}$ with the $dc$ bias. Similar results are found for sample N1.
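The integrated heat balance above amounts to one line of algebra; a minimal sketch follows, with the heat-capacity coefficient $c_0$ and the current density treated as assumed inputs (the actual sample geometry is not used here).

```python
def electron_temperature(T_L, J, rho, tau_eph, c0):
    """Integrated heat balance: with c(T) = c0*T and tau_r = tau_eph / T^3 the
    stationary condition gives T_e^5 - T_L^5 = 5 * tau_eph * J**2 * rho / c0."""
    rhs = 5.0 * tau_eph * J ** 2 * rho / c0
    return (T_L ** 5 + rhs) ** 0.2
```

The fifth root makes the heating weak and sub-linear in the current; the quoted values $\Delta T_e=0.14$ K and 0.34 K follow once $c_0$ and the sample geometry are specified.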
To obtain agreement between the experimental and numerical dependencies in fig.\ref{B02T}a we have used the constant inelastic scattering time $\tau_{in}$ as a fitting parameter. The temperature dependence of the time $\tau_{in}$, obtained from fitting at different temperatures, is shown in fig.\ref{B05T} for two samples. For sample N2 (fig.\ref{B05T}(a) black squares) the inelastic time follows the dependence $\tau_{in}=1.8 (\pm 0.3)/T^{2(\pm 0.15)}$ (ns). The time is obtained using Gaussian DOS shown in fig.\ref{B02T}(c). Open circles in fig.\ref{B05T}(a) present the inelastic time $\tau_{in}$, obtained using the SCBA DOS shown in fig.\ref{B02T}(b). The SCBA DOS results in consistently shorter inelastic times than the Gaussian DOS does, but with essentially the same temperature dependence. This holds for other magnetic fields and temperatures. Taking into account the better overall agreement with the experiment obtained for numerical simulations with the Gaussian DOS, from now on we will only show numerical results for this density of states.
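Power-law fits of the form quoted above (e.g. $\tau_{in}=1.8/T^{2}$ ns) can be obtained by linear least squares on log-log axes. A minimal sketch on noiseless synthetic data generated from the dependence quoted for sample N2:

```python
import math

def fit_power_law(T, tau):
    """Fit tau = a / T**p by least squares on (log T, log tau); returns (a, p)."""
    xs = [math.log(t) for t in T]
    ys = [math.log(v) for v in tau]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - slope * mx), -slope

# Synthetic data following tau_in = 1.8 / T^2 (ns).
T = [2.0, 3.0, 4.0, 6.0, 8.0]
tau = [1.8 / t ** 2 for t in T]
a, p = fit_power_law(T, tau)
```

On real data the scatter of the points translates directly into the quoted uncertainties of the prefactor and the exponent.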
A similar temperature dependence of the inelastic scattering time $\tau_{in}$ is found for sample N1, which has a higher electron density and a considerably shorter quantum scattering time $\tau_q$. The dependence is shown in fig.\ref{B05T}(b). The dependence is obtained at magnetic field B=0.5 (T) and corresponds to the Gaussian DOS, which is similar to the one presented in fig.\ref{B02T}(c). The quantum scattering times $\tau_q$ in both samples are also shown in the figure for comparison and completeness. The time $\tau_q$ is much shorter than the inelastic scattering time $\tau_{in}$. The quantum scattering time has a weak temperature dependence.
In accordance with the theory, the temperature dependence of the inelastic time $\tau_{in} \sim T^{-2}$ indicates the dominant contribution of the electron-electron scattering to the inelastic relaxation of the distribution function.
We have compared the experimental results with theoretical calculations of the inelastic relaxation due to electron-electron interaction \cite{chaplik1971,quinn1982,dmitriev2005}. For the parameters corresponding to fig.\ref{B05T} the theoretical values of the inelastic time are found to be: $\tau_{in}^{theor}=1.2/T^2$ (ns) for sample N2 (fig.\ref{B05T}(a)) and $\tau_{in}^{theor}=2.5/T^2$ (ns) for sample N1 (fig.\ref{B05T}(b)). The theoretical values are in good agreement with the experiment. A longer inelastic relaxation, found in the experiments, could be a result of an additional screening by X-electrons in our samples \cite{fried1996}. The screening is not taken into account in the comparison. Fig.\ref{B05T} demonstrates a longer inelastic time for sample N1 with a higher electron density in agreement with the theory \cite{chaplik1971,quinn1982,dmitriev2005}.
For the spectral diffusion of electrons in crossed electric and small magnetic fields at high temperatures, the results presented in this section demonstrate good quantitative agreement between the experiments and the theory. The numerical and analytical evaluation of the distribution function shows significant deviations of the electron distribution function from the Fermi-Dirac form, leading to the nonlinear transport. At these conditions the rate of the inelastic relaxation of the non-equilibrium distribution function is found to be proportional to the square of the temperature: $1/\tau_{in} \sim T^2$.
\subsection {High magnetic fields}
\begin{figure}[tbp]
\vbox{\vspace{0 in}} \hbox{\hspace{+0.1in}} \epsfxsize 3.4 in
\vskip -0.5in %
\epsfbox{example_a.eps} \vskip 0.5in
\vbox{\vspace{0 in}} \hbox{\hspace{+0.1in}} \epsfxsize 3.4 in
\vskip -1in %
\epsfbox{example_b.eps} \vskip 0.5in
\caption{ (color online) (a) Relaxation of the non-equilibrium part of the distribution function $\Delta f$ by electron-electron scattering at small magnetic fields and/or high temperatures. Two electrons near a maximum of $\Delta f$ at energy $\epsilon_0$ scatter into the nearest minima at energies $\epsilon_1=\epsilon_0-\Delta \epsilon$ and $\epsilon_2 = \epsilon_0+\Delta \epsilon$. The process conserves the total electron energy $\epsilon_0+ \epsilon_0 = \epsilon_1 + \epsilon_2$ and can be accomplished by the electron-electron interaction. (b) Inelastic relaxation at high magnetic fields and/or low temperatures. The relaxation flows from overpopulated high energy levels ($\epsilon_0$) toward the under-populated low energy region ($\epsilon_1, \epsilon_2 $). The relaxation flow does not conserve the total energy of the 2D electron system and cannot be accomplished by $e-e$ scattering. The electron-phonon scattering provides the relaxation. }
\label{EEexample}
\end{figure}
At high magnetic fields the density of states and, therefore, the spectral diffusion are strongly modulated with the energy. Between completely separated Landau levels ($\Gamma \ll \hbar \omega_c$) the spectral diffusion is expected to be very weak. This may create a strong thermal isolation of the Landau levels and a stratification of the dynamic flow in the phase space in response to the $dc$ bias. In the limiting case of a single isolated level at low temperatures the global spectral flow is absent and the slope (gradient) of the distribution function $df/d\epsilon$ is determined solely by intra-level inelastic processes. For the intra-level inelastic transitions the electron-electron interaction may not be effective, because the interaction conserves the total energy of the electron system. Fig.\ref{EEexample} demonstrates the difference between the inelastic relaxation of the distribution function through several Landau levels (fig.\ref{EEexample}(a)) and the relaxation involving only one isolated Landau level (fig.\ref{EEexample}(b)).
The first case (fig.\ref{EEexample}(a)) corresponds to the high temperature regime: $kT \gg \hbar \omega_c$. In this case the electron-electron interaction can effectively reduce the non-equilibrium part of the distribution function $\Delta f$ through processes similar to the one shown in the figure. Two electrons near a maximum of the oscillating function $\Delta f$ relax into the two nearest minima. This process reduces the non-equilibrium part of the distribution function $\Delta f$, smoothing out the oscillations. In this process the total electron energy is conserved and the relaxation can be accomplished by electron-electron scattering.
The second case (fig.\ref{EEexample}(b)) corresponds to low temperatures (high magnetic field): $kT < \Gamma < \hbar \omega_c$. Under these conditions only the Landau level (sub-band) located near the Fermi energy is involved in the spectral diffusion. Lower energy levels are gapped and populated completely. They cannot participate in the spectral transport due to the Pauli principle. The higher energy levels are empty but, again, are inaccessible at low T due to the cyclotron gap. A typical non-equilibrium part of the distribution function corresponding to this case is shown in fig.\ref{EEexample}(b). The main relaxation flow toward the thermal equilibrium is from overpopulated high energy levels into the under-populated low energy region of the Landau level. The relaxation flow does not conserve the total energy of the electron system and, therefore, cannot be accomplished by the electron-electron scattering.
A possible candidate for the inelastic electron relaxation is electron-phonon scattering. Electron-phonon scattering does not conserve the total electron energy and, therefore, can be the mechanism responsible for the inelastic relaxation inside the isolated Landau level at low temperatures. Moreover, due to a stronger temperature dependence \cite{price,sergeev}, the electron-phonon scattering could be the dominant mechanism of the relaxation at high temperatures. Below we show the interplay between the different regimes of the inelastic electron relaxation observed in our samples.
\begin{figure}[tbp]
\vbox{\vspace{0 in}} \hbox{\hspace{+0.1in}} \epsfxsize 3.4 in
\vskip -0.5in %
\epsfbox{B0784T.eps} \vskip 0.5in
\caption{(color online) (a) Dependence of normalized resistance $R/R_0$ on $dc$ bias at high temperatures as labeled. $R_0(6K)=49.29 (\Omega)$, $R_0(8.13K)=52.12(\Omega)$. Insert demonstrates dependence of density of states, distribution function and non-equilibrium part of the function $\Delta f$ on energy $\epsilon$; T=8.13 (K), $I_{dc}=$58.5 ($\mu$A), $\tau_{in}$=151 (ps), $\tau_q=$1.9 (ps). (b) Dependence of normalized resistance on $dc$ bias at intermediate temperatures from top to bottom at zero bias: T=1.48($R_0=43.68(\Omega)$), 1.97($R_0 =44.33 (\Omega)$), 2.44($R_0 =44.99 (\Omega)$), 2.93($R_0=45.45 (\Omega)$), 3.52($R_0=45.89 (\Omega)$), 4.08($R_0= 46.37 (\Omega)$) (K). The electron system undergoes a transition to state with zero differential resistance at $I_{dc}> I_{th}$ and $T<$3 (K). Insert demonstrates dependence of density of states, distribution function and non-equilibrium part of the function $\Delta f$ on energy $\epsilon$; T=2.44 (K), $I_{dc}=$18.2 ($\mu$A), $\tau_{in}$=3.77 (ns), $\tau_q=$2.75 (ps). (c) Dependence of normalized resistance on $dc$ bias at low temperatures from top to bottom at zero bias: T=0.27($R_0 =42 (\Omega)$), 0.71($R_0 =42.64 (\Omega)$), 1.06($R_0 =42.99 (\Omega)$) (K). Insert demonstrates dependence of density of states, distribution function and non-equilibrium part of the function $\Delta f$ on energy $\epsilon$; T=0.71 (K), $I_{dc}=$6.67 ($\mu$A), $\tau_{in}$=17.7 (ns), $\tau_q=$3.65 (ps). Symbols are numerical calculations and solid lines are experiments. Magnetic field is 0.784 (T). Sample N2. }
\label{B0784T}
\end{figure}
Fig.\ref{B0784T}(a) presents dependencies of the normalized resistance of sample N2 at $B=0.784$ (T) and at high temperatures as labeled. The magnetic field corresponds to a maximum of the SdH oscillations. At small currents the numerical simulation describes the experiment well. The insert to the figure shows the normalized density of states, distribution function $f$ and non-equilibrium part of the function $\Delta f$ at $dc$ bias 58.5 ($\mu$A). The regime corresponds to the condition $kT \gg \Gamma$.
Fig.\ref{B0784T}(b) presents dependencies of the normalized resistance at intermediate temperatures $kT \sim \Gamma$. Again, at small currents the numerical simulation, obtained in the $\tau_{in}$ approximation of the right side of eq.\ref{main}, works well, providing a very good fit of the experimental data. At temperatures below 3 (K) a sudden deviation between the experimental data and the simulation occurs above a threshold current of $I_{th}$= 6.6 ($\mu$A). An arrow in the figure marks this current. It has been shown that above the current $I_{th}$ the electron system undergoes a transition into the zero differential resistance state \cite{bykov2007zdr,zudov2008zdr}. In this state the differential resistance of the sample is nearly zero in a broad range of the current $I_{dc} > I_{th}$. Non-uniform, domain-like structures, propagating in real space, have been proposed to explain the origin of the electron state with zero differential resistance \cite{bykov2007zdr,vavilov2004}. Such states are beyond the regime described by the spatially uniform eq.\ref{main}.
It is interesting that the transition to the nonlinear state with zero differential resistance happens at a normalized value of the resistance $R_{tr}=R/R_0 \approx 1.5$, which is almost independent of the temperature. Moreover, at this point ($R_{tr}, I_{th}$) the nonlinear resistance demonstrates a transition from an insulating-like ($dR/dT<0$) to a metallic-like ($dR/dT>0$) behavior. These unexpected features are currently not understood and will be the subject of future studies. The insert to the figure shows the normalized density of states, distribution function $f$ and non-equilibrium part of the function $\Delta f$ obtained at $dc$ bias 18.2 ($\mu$A).
Finally fig.\ref{B0784T}(c) presents data at very low temperatures $kT < \Gamma$. At this condition only one Landau level provides the electron transport. At the low temperatures the theory, used in the $\tau_{in}$ approximation, fits the data only at very small currents. At the lowest temperature T=0.27K, the numerical results deviate from the experiment almost immediately. The comparison indicates that the approximation of the inelastic collision integral in eq.\ref{main} by a constant relaxation time $\tau_{in}$ does not work in these conditions. At very low temperature the equilibrium distribution changes very rapidly with the energy $\epsilon$ inside the Landau level, on a scale which is much narrower than the level width $\Gamma$: $kT \ll \Gamma$. Since the inelastic processes are extremely weak at low T, the spectral diffusion easily broadens the electron distribution to a scale comparable with the width of the level $\Gamma $ even at small $dc$ biases. This process increases significantly the phase space available for the inelastic electron scattering, enhancing the scattering rate $1/\tau_{in}$ appreciably. Thus at $kT < \Gamma$ the inelastic scattering depends strongly on the $dc$ bias and the spectral diffusion equation (eq.\ref{main}) with a constant $\tau_{in}$ does not describe the nonlinear resistance appropriately. More work is required to evaluate quantitatively the shape of the distribution function in this regime. However we suggest that even in the regime $kT < \Gamma$ the distribution function will be qualitatively similar to the one shown in the insert to fig.\ref{B0784T}(c), which is obtained in the $\tau_{in}$ approximation. At a high $dc$ bias the function cannot be described by an elevated electron temperature, as is shown in the figure (see also \cite{romero2008warming}).
Additional analysis of the curves at the high magnetic fields reveals an interesting scaling behavior of the nonlinear resistance. Applying two linear transformations ($y^{'}=K_y \cdot y$ and $x^{'}=K_x \cdot x$) along the y and x-axes one can collapse all dependencies at different temperatures presented in fig.\ref{B0784T}(a,b) onto a single curve. Fig.\ref{tau_vsT_s2}(a) shows the result. The y-transformation normalizes the resistance at zero bias to unity: $R(I) \rightarrow R(I)/R(I=0)$. The linear x-transformation, applied along the x-axes, provides the final result. Solid curves are experimental dependencies measured in the temperature interval (1.48-8.13) (K). Open circles show a result of numerical calculations of the nonlinear resistance obtained using eq.\ref{main} with the equilibrium electron distribution at $T=$4.08 (K) and $\tau_q=2.75$ (ps). The same scaling is found for sample N1 in a broader range of temperatures. The result is shown in fig.\ref{tau_vsT_s1}(a). All dependencies are plotted versus the parameter $A^{1/2}=(\sigma_{dc}^D E^2 \tau_{in}/\nu_0)^{1/2} \sim I_{dc}$. At a fixed density of states $\nu(\epsilon)$ the variable $A \sim E^2 \tau_{in}$ is the main parameter, which determines the deviation of the electron distribution $f$ from the thermal equilibrium $f_T$ in eq.\ref{main}.
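The two linear transformations can be sketched as a small fitting procedure. The curve shape $g$ and the coefficients below are synthetic stand-ins chosen only to illustrate the collapse; they are not the measured dependencies.

```python
import numpy as np

def collapse_Kx(I, R, I_ref, R_ref, Kx_grid):
    """Find the x-scaling K_x that maps the curve (I, R) onto the reference curve.
    y-transformation: each curve is normalized by its zero-bias value.
    x-transformation: scan K_x and compare to the reference by interpolation."""
    y, y_ref = R / R[0], R_ref / R_ref[0]
    best_Kx, best_err = None, np.inf
    for Kx in Kx_grid:
        x = Kx * I
        m = (x >= I_ref[0]) & (x <= I_ref[-1])  # overlap with the reference axis
        if m.sum() < 2:
            continue
        err = np.mean((np.interp(x[m], I_ref, y_ref) - y[m]) ** 2)
        if err < best_err:
            best_Kx, best_err = Kx, err
    return best_Kx

# Synthetic curves sharing one master shape g: R_T(I) = R0(T) * g(c_T * I).
I = np.linspace(0.0, 10.0, 201)
g = lambda a: 1.0 / (1.0 + a ** 2)
R_ref = 50.0 * g(I)         # reference curve, K_x = 1 by construction
R_hot = 40.0 * g(2.0 * I)   # second curve; the true relative K_x is 2
Kx = collapse_Kx(I, R_hot, I, R_ref, np.linspace(0.5, 4.0, 351))
```

Because $A^{1/2}\propto I\,\tau_{in}^{1/2}$, the recovered $K_x$ tracks $\tau_{in}^{1/2}$ once the curves collapse.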
Fig.\ref{tau_vsT_s2}(a) demonstrates a good scaling and a remarkable correspondence with the numerical results obtained at $A^{1/2}<0.15$, using eq.\ref{main} with a fixed $\tau_{in}$. The correspondence between the experiment and the theory is even more impressive for the curve at the lowest temperature (T=2.34 (K)) presented in fig.\ref{tau_vsT_s1}(a). Almost perfect agreement between the experiment at T=2.34 (K) and the theory is found at substantially stronger $dc$ biases ($A \sim 1$). The scaling of the nonlinear resistance and the excellent agreement with the theory strongly indicate the presence of the spectral diffusion with a constant rate of the inelastic relaxation $1/\tau_{in}$.
We suggest that the scaling is a result of a specific nonlinear regime, which occurs for separated Landau levels. As we have already mentioned in the section "Theory and Numerical Simulations", the spectral diffusion between well-separated Landau levels is absent. In this regime there is no global broadening of the distribution function. Moreover, inside each of the Landau levels the local spectral flow preserves the number of electrons and, therefore, the number of the empty states. Thus the stratified spectral diffusion keeps the spectral distribution of the available phase space (averaged over each Landau level) fixed and the same as at the thermal equilibrium ($E=0$). The invariance of the phase space available for inelastic processes could make the inelastic scattering time $\tau_{in}$ independent of the $dc$ bias, fixing the time at its thermal equilibrium value: $ \tau_{in}(E)=\tau_{in}(E=0)$. The constant inelastic scattering rate makes the evolution of the electron distribution and the nonlinear resistance universal in a broad range of the $dc$ biases.
\begin{figure}[tbp]
\vbox{\vspace{0 in}} \hbox{\hspace{-0.3in}} \epsfxsize 3.0 in
\vskip -0.5in %
\epsfbox{figscaling2.eps} \vskip -0.5in
\vbox{\vspace{0 in}} \hbox{\hspace{-0.3in}} \epsfxsize 3.0 in
\epsfbox{tauvsTs2.eps} \vskip 0.5in
\caption{ (a) Scaling of normalized resistance with parameter $A^{0.5} \sim I_{dc}$. All curves presented in fig.\ref{B0784T}(a,b) at different temperatures (1.48-8.13) (K) follow the same dependence on the parameter $A^{0.5}<0.15$ (solid curves). Open circles present results of numerical calculations of the normalized resistance, using eq.\ref{main} with $\tau_q=2.75$ (ps), $T=4.08$ (K), B=0.784 (T) and parameter $A^{1/2}=(\sigma_{dc}^D E^2 \tau_{in}/\nu_0)^{1/2}$; insert shows independence of variations of the normalized resistance with $A$ on temperature $T$. The results are obtained using eq.\ref{main} at T=3(K) -open circles, T=4.08(K) - solid curve, and T=6(K)-filled circles.
(b) Dependences of inelastic scattering time $\tau_{in}$, obtained from comparison between experiment and numerical evaluation of the nonlinear resistance, using eq.\ref{main} (filled squares), and from the scaling (open circles), on temperature. Open squares present the temperature dependence of the quantum scattering time $\tau_{q}$. Magnetic field $B$=0.784 (T). Sample N2.}
\label{tau_vsT_s2}
\end{figure}
\begin{figure}[tbp]
\vbox{\vspace{0 in}} \hbox{\hspace{-0.3in}} \epsfxsize 3.0 in
\vskip -0.5in %
\epsfbox{figscaling1.eps} \vskip -0.5in
\vbox{\vspace{0 in}} \hbox{\hspace{-0.3in}} \epsfxsize 3.3 in
\epsfbox{tauvsTs1.eps} \vskip 0.5in
\caption{(a) Scaling of normalized resistance (solid curves) with parameter $A^{0.5} \sim I_{dc}$ at different temperatures from bottom to top: 2.34, 4.2, 5.4, 7.6, 10.7, 14.8, 20.1, 24.6 (K). Open circles present results of numerical calculations of the normalized resistance, using eq.\ref{main} with $\tau_q=1.1$ (ps), $T=2.34$ (K), $B$=0.924 (T) and parameter $A^{1/2}=(\sigma_{dc}^D E^2 \tau_{in}/\nu_0)^{1/2}$; (b) Dependences of inelastic scattering time $\tau_{in}$, obtained from comparison between experiment and numerical evaluation of the nonlinear resistance, using eq.\ref{main} (filled squares), and from the scaling (open circles), on temperature. Open squares present the temperature dependence of the quantum scattering time $\tau_{q}$. Magnetic field $B$=0.924 (T). Sample N1.}
\label{tau_vsT_s1}
\end{figure}
The scaling reveals another interesting property of the nonlinear regime.
Fig.\ref{tau_vsT_s2}(a) shows that the variation of the normalized resistance with the parameter $A^{1/2}<$0.15 is the same at different temperatures and, therefore, does not depend on the initial, equilibrium distribution $f_T$ of 2D electrons in eq.\ref{main}. The equilibrium distribution $f_T$ is substantially different across the temperature interval in which the scaled dependencies have been measured: (1.4 - 8.13) (K). We suggest that the independence of the nonlinear resistance from $f_T$ is also a result of the absence of $dc$ bias induced spectral flows between Landau levels. Without the inter-level spectral flow the levels are, in essence, independent from each other and, therefore, absorb the energy from the electric field independently. The absorption inside each Landau level is determined by the same spectral dynamics, assuming that the density of states is the same for each level. An estimation of the nonlinear conductivity in a model of separated (independent) levels supports the suggestion \cite{vitkalov2007unpublished}. The numerical evaluation of the nonlinear behavior of the resistance, done for different temperatures using eq.\ref{main}, also demonstrates the independence of the normalized nonlinear resistance on the temperature in this regime. In particular, the numerical values of the normalized resistance obtained for T=3K, T=4.08K and T=6K at a fixed density of states ($\tau_q=2.75$ (ps)) differ by less than 3\% at any $A<$0.4. This is shown in the insert to fig.\ref{tau_vsT_s2}(a).
The scaling of the nonlinear resistance provides easy practical access to the variation of the inelastic relaxation time with the temperature, since it does not require the solution of eq.\ref{main}. The scaling coefficient $K_x \sim E \cdot (\tau_{in}(T))^{1/2}$ takes into account the temperature variations. A comparison of the inelastic time $\tau_{in}$ obtained from the scaling (open circles) and from the direct comparison with the numerical calculation of the nonlinear resistance using eq.\ref{main} (solid squares) is presented in fig.\ref{tau_vsT_s2}(b) (sample N2) and fig.\ref{tau_vsT_s1}(b) (sample N1). There is good overall agreement between the two approaches. A difference appears because the numerical calculation takes into account a variation of the spectral dynamics with the temperature due to changes in the density of states (see the time $\tau_q$ presented in the figures) and a temperature variation of the transport scattering rate.
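Extracting $\tau_{in}(T)$ from the scaling coefficients then requires only one anchor value, e.g. from a single numerical fit: since $K_x \sim E\,\tau_{in}^{1/2}$ and $E \propto I_{dc}$ for a fixed geometry, the ratio of the coefficients gives the ratio of the inelastic times. The helper below is a hypothetical illustration of this one-line conversion, not code used for the figures.

```python
def tau_in_from_Kx(Kx, Kx_ref, tau_in_ref):
    # K_x ~ E * tau_in^{1/2}  =>  tau_in = tau_in_ref * (Kx / Kx_ref)**2
    return tau_in_ref * (Kx / Kx_ref) ** 2

# A curve needing half the x-stretch of the reference has one quarter
# of the reference inelastic time.
tau = tau_in_from_Kx(0.5, 1.0, 4.0)   # -> 1.0
```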
Deviations from the scaling depend on the temperature.
At higher temperatures the experimental curves presented in fig.\ref{tau_vsT_s2}(a) and fig.\ref{tau_vsT_s1}(a) deviate upward from the scaling behavior at a smaller $A$. Taking into account the strong reduction of the inelastic scattering time $\tau_{in}$ with the temperature, one can find that the deviations from the scaling occur at progressively higher $dc$ biases: $E \sim (A/\tau_{in})^{1/2}$. This indicates that corrections to the scaling due to other nonlinear mechanisms, arising at high biases \cite{yang2002, glazman2007,dmitriev2007}, decrease with the temperature increase. The latter agrees with the temperature damping of the magnitude of the $dc$ bias induced magneto-oscillations of the nonlinear resistance \cite{zudov2009} due to inter-level scattering \cite{yang2002}. At high $dc$ biases $A^{1/2}>0.15$ sample N2 demonstrates an additional abrupt deviation down from the scaling at temperatures below 3K (see fig.\ref{tau_vsT_s2}(a)). As mentioned above, at this condition a transition to the zero differential resistance state appears \cite{bykov2007zdr,zudov2008zdr}, which may break down the description of the 2D electron system by the spatially uniform spectral equation (eq.\ref{main}) \cite{bykov2007zdr}.
Below we discuss the temperature dependence of the inelastic scattering time.
Fig.\ref{tau_vsT_s2}(b) presents the temperature dependence of the time $\tau_{in}$ at magnetic field $B$=0.784 (T) for sample N2. Two temperature regimes are clearly observable. At temperatures $T>2$\,K the inelastic relaxation time $\tau_{in}$ is inversely proportional to $T^3$: $\tau_{in}=66(\pm 10)/T^{3 (\pm 0.15)}$ (ns). At temperatures below 2\,K the inelastic time depends more weakly on the temperature: $\tau_{in}=11.6 (\pm 2)/T^{1.26 \pm 0.15}$ (ns).
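Power-law parameters of the kind quoted above can be extracted by a linear fit in log-log coordinates, since $\tau_{in}=a/T^{n}$ implies $\ln\tau_{in}=\ln a - n\ln T$. A minimal sketch (function name illustrative, data synthetic):

```python
import numpy as np

def power_law_fit(T, tau):
    """Fit tau(T) = a / T**n by linear regression of ln(tau) vs ln(T).
    Returns the prefactor a and the exponent n."""
    slope, intercept = np.polyfit(np.log(T), np.log(tau), 1)
    return np.exp(intercept), -slope

# Synthetic check with the T > 2 K law quoted in the text, tau = 66/T^3 (ns)
T = np.array([2.0, 3.0, 4.0, 5.0, 6.0, 8.0])
a, n = power_law_fit(T, 66.0 / T ** 3)
```

With noisy data, the uncertainties on $a$ and $n$ would follow from the covariance of the linear fit in the same log-log coordinates.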
The observed $T^3$ dependence of the inelastic time $\tau_{in}=66/T^3$ (ns) correlates with the one obtained in Si-MOSFETs: $\tau_{in}=(10-60)/T^3$ (ns) at temperatures $1.5<T<4.2$\,K \cite{dolgopol1985} and with the dependence found in a GaAs/AlGaAs heterojunction: $\tau_{in}=20/T^3$ (ns) at temperatures $1<T<3$\,K \cite{pepper}. In both papers the temperature dependence was attributed to electron-phonon scattering. We suggest that the temperature dependence observed at $T>$2\,K is also due to electron-phonon scattering in the Bloch-Gr\"{u}neisen (BG) regime, in which the wave vector of a typical thermal phonon $q_T=kT/\hbar s$ is smaller than the size of the Fermi circle $2k_F$: $q_T < 2k_F$. Here $s$ is the sound velocity and $k_F$ is the Fermi wave vector \cite{ziman}. In our high-density samples the BG regime exists at temperatures below $T_{BG} \approx 20$\,K, where $kT_{BG}=2k_F \cdot \hbar s$ \cite{stormer1990}. A theoretical evaluation of the inelastic electron-phonon scattering time in GaAs quantum wells due to screened piezoelectric (PZ) coupling yields $\tau_{PZ} \approx 16/T^3$ (ns) at temperatures of a few K at zero magnetic field \cite{price,karpus1996}. The deformation potential (DP) yields a comparable contribution to the electron-phonon scattering rate at $T>$4\,K. For weak screening the electron-phonon scattering time is found to be $\tau_{DP} \approx 18/T^3$ (ns) \cite{sergeev} at zero magnetic field.
The $T^{-3}$ temperature dependence is also found for sample N1 at high temperatures. Fig.\ref{tau_vsT_s1}(b) presents the temperature dependence. At $T>$10\,K the inelastic scattering time is proportional to $1/T^3$: $\tau_{in}=70(\pm 10)/T^{3 \pm 0.2}$ (ns). The dependence is the same as the one observed in sample N2. At lower temperatures $T<10$\,K the inelastic relaxation time deviates consistently from the $T^{-3}$ dependence. The temperature dependence $\tau_{in}=9 (\pm 2)/T^{2(\pm 0.2)}$ provides a reasonable approximation, indicating a possible contribution of the electron-electron interaction to the inelastic relaxation rate. The same $T^{-2}$ temperature dependence is observed at small magnetic fields for both samples, but with a considerably stronger relaxation rate.
Thus the temperature dependence below 10\,K corresponds to an intermediate regime in which the electron-electron scattering is significant but is suppressed considerably by the quantization of the electron spectrum. At the beginning of this section we discussed a possible reason for the reduction of the contribution of the $e-e$ scattering to the inelastic relaxation in strong magnetic fields.
\begin{figure}[tbp]
\vbox{\vspace{0 in}} \hbox{\hspace{+0.1in}} \epsfxsize 3.4 in
\vskip -0.5in %
\epsfbox{tau_vsB_s2.eps} \vskip 0.5in
\caption{ Dependences of inelastic scattering time $\tau_{in}$ and quantum scattering time $\tau_{q}$ on magnetic field at two different temperatures as labeled. Two shaded areas indicate two different temperature regimes of the inelastic electron relaxation observed in the sample. Sample N2.}
\label{tau_vsB_s2}
\end{figure}
Our experiment demonstrates a correlation between the modulation of the density of states, the inelastic time $\tau_{in}$, and the temperature dependence of this time. At low magnetic field $B$=0.2 (T) the density of states of sample N2 is weakly modulated, by about $\pm$40\% (see fig.\ref{B02T}). The inelastic relaxation time equals $1.8/T^2$ (ns) below 8\,K. At magnetic field $B=$0.784 (T) the modulation of the density of states of sample N2 is significantly stronger, approaching 95\% of the averaged value (see fig.\ref{B0784T}(a,b)). The inelastic time equals $66/T^3$ (ns) at $2<T<8$\,K. In magnetic field $B=$0.924 (T) the modulation of the density of states of sample N1 is about 60\% and the inelastic time lies between the two previous values: $1.8/T^2 < 9/T^2 < 66/T^3$ at $T<7$\,K.
In accordance with this correlation, one should expect a gradual reduction of the contribution of electron-electron scattering to the inelastic relaxation and an increase of the relaxation time $\tau_{in}$ as the modulation of the density of states increases. An increase of the magnetic field $B$ enhances the DOS modulation. Fig.\ref{tau_vsB_s2} presents the dependence of the inelastic time $\tau_{in}$ on the magnetic field for sample N2 at two different temperatures, as labeled. The magnetic field increases the relaxation time $\tau_{in}$. The temperature dependence of the inelastic relaxation rate changes from $T^2$ at low magnetic fields to $T^3$ at high magnetic fields. In the figure, two rectangular shaded areas indicate the two different temperature regimes of the inelastic relaxation. These regimes are presented in more detail in fig.\ref{B05T}(a) and fig.\ref{tau_vsT_s2}(b). A similar enhancement of the relaxation time $\tau_{in}$ with increasing magnetic field is found for sample N1 (not shown).
\section{Conclusion}
We have studied the nonlinear response of 2D electrons placed in crossed electric and quantizing magnetic fields at low temperatures. The resistance of the 2D electrons decreases strongly with an increase of the electric field. The decrease of the resistance is in good quantitative agreement with a theory that considers the nonlinear response to be a result of the non-uniform spectral diffusion of 2D electrons limited by inelastic electron scattering. Comparison between the experiments and the theory has revealed different regimes of the electron inelastic relaxation.
At low magnetic fields, at which the Landau levels are strongly overlapped and the spectral diffusion is weakly modulated with the electron energy, the inelastic scattering rate is found to be proportional to the square of the temperature, $T^2$, in the temperature interval 2--10\,K. This dependence indicates electron-electron scattering as the dominant mechanism of the inelastic relaxation. At high magnetic fields, at which the Landau levels are well separated, the spectral diffusion is strongly modulated and the inelastic relaxation rate is proportional to $T^3$. This suggests electron-phonon scattering to be the dominant inelastic mechanism. At fixed temperature the inelastic time $\tau_{in}$ increases with the magnetic field. At very low temperatures $kT<\Gamma$ and well-separated Landau levels, an additional regime of the inelastic electron relaxation is identified: $1/ \tau_{in} \sim T^{1.26}$.
At high magnetic fields the nonlinear resistance demonstrates scaling behavior over a broad range of temperatures exceeding the width of the Landau levels. The scaling indicates a specific regime of $dc$ heating in electron systems with a discrete electron spectrum, in which the heating cannot be described by a temperature. Spectral diffusion limited by inelastic relaxation with a constant rate describes the scaling remarkably well over a broad range of $dc$ biases.
\begin{acknowledgements}
S. Vitkalov thanks I. Aleiner, I. Dmitriev, and A. Sergeev for valuable discussions and comments. This work was supported by the National Science Foundation (DMR 0349049) and by the Russian Foundation for Basic Research, project No.~08-02-01051.
\end{acknowledgements}
\section{Introduction}
\vspace{-0.3cm}
The diversity in the properties and applications of spinels with the general formula of AB\textsubscript{2}O\textsubscript{4} arises from the variety of cations, magnetic or nonmagnetic, that can be substituted at the tetrahedral A-sites and octahedral B-sites of the spinel structure~\cite{morrish2001physical,Rabe_review2010, thanh2012magnetic,ChakhalianRMP2014, seehra2017magnetic, thota2017nature, SinghJAP2017, Pramanik_2017, harris2009recent}.
\ss{Recent studies on a subclass of spinels having nonmagnetic cations such as Zn\textsuperscript{2+}, Mg\textsuperscript{2+}, and Ge\textsuperscript{4+} at the A-sites and magnetic cations at the B-sites reveal}
intriguing magnetic and structural properties at low temperatures. As first pointed out by Anderson~\cite{anderson1956ordering}, these spinels have inherent magnetic frustration, making the long-range magnetic order, if at all present, highly dependent on various other factors~\cite{ChakhalianRMP2014,seehra2017magnetic,thota2017nature, harris2009recent,LiuNL2019}. %
Examples of such spinels are ZnFe\textsubscript{2}O\textsubscript{4}~\cite{schiessl1996magnetic}, the defect spinel MgMnO\textsubscript{3}~\cite{seehra2011magnetic}, and GeCo\textsubscript{2}O\textsubscript{4}~\cite{GhoshPRB2018, pramanik2019magnetic}. The latter is the subject of this paper. It is noteworthy that GeCo\textsubscript{2}O\textsubscript{4}, hereafter referred to as GCO for brevity, has been substantially investigated in connection with its use as an anode material for Li-ion batteries~\cite{ge2012co,jin2015ultrathin,subramanian2017facile,yuvaraj2017electrochemical}. Moreover, nanostructures of GCO have found applications in renewable-energy sectors such as fuel cells, electrochemical sensors, and supercapacitors~\cite{jin2015ultrathin,yuvaraj2017electrochemical}.
The magnetic properties of GCO have been under intense investigation in recent years because of the distinct magnetoelectric features linked to the noncollinear spin arrangement and distorted cubic structure. Based in part on several previous electron-spin resonance, magnetic, and neutron diffraction studies in GCO~\cite{Okubo2017,Yamasaki_2012, diaz2006magnetic, matsuda2011magnetic, horibe2006spontaneous, lashley2008specific,barton2014structural,fabreges2017field,tomiyasu2011molecular, GhoshPRB2018}, Pramanik {\it et al.}~\cite{pramanik2019magnetic} \ss{recently} presented results on the magnetic ground state, magnetic-field-induced transitions, and optical bandgap of GCO.
Summarizing these results, \ss{it was shown that GCO contains a pyrochlore lattice of Co\textsuperscript{2+} spin moments which have effective spin \emph{S} = 1/2 (instead of \emph{S} = 3/2 as expected from the Hund's rules) due to the effects of the spin-orbit coupling and Jahn-Teller distortion. The magnetic ordering consists}
of alternate planes of kagom\'e (KGM) and triangular (TRI) spins lying perpendicular to the {[}111{]} direction.
The dominant in-plane exchange constant between the Co\textsuperscript{2+} spins is ferromagnetic (FM). However, the spins in the neighboring planes are ordered antiferromagnetically (AFM) with $\textbf{q}$ = ($\frac{1}{2},\frac{1}{2},\frac{1}{2}$) to yield an overall AFM order in the absence of any external magnetic field below the N\'eel temperature \emph{T}\textsubscript{N} = 20.4\,K~\cite{pramanik2019magnetic}.
Due to such a peculiar magnetic behavior, especially owing to the frustrated AFM ordering with $\textbf{q}$ = ($\frac{1}{2},\frac{1}{2},\frac{1}{2}$), various exotic competing magnetic phases such as classical and quantum spin liquid phases, recently reported in (111)-oriented quasi-two-dimensional spinels through a geometric lattice design approach, can be realized in GCO at low temperatures~\cite{LiuNL2019, liu2021proximate,ChakhalianAPL2020}.
Several studies reported a cubic ($Fd{\bar 3}m$) to tetragonal ($I4_{1}/amd$) distortion of the lattice accompanying \emph{T}\textsubscript{N} ~\cite{lashley2008specific,hoshi2007magnetic,watanabe2008jahn}, although high resolution x-ray diffraction studies by Barton \emph{et al.}~\cite{barton2014structural} revealed that the tetragonal distortion
\ss{of $\sim$0.1\% in the lattice parameters}
occurs at \emph{T\textsubscript{S}} = 16\,K, a few degrees below \emph{T}\textsubscript{N}, \ss{along with modulation of the
Co-O bonds in the CoO$_6$ octahedra. However, it likely has}
a nonmagnetic origin since no anomalies occur in the heat capacity \ss{and magnetic susceptibility} data near \emph{T\textsubscript{S}}~\cite{barton2014structural}.
Also, the degree of tetragonality progressively increases with decreasing temperature~\cite{barton2014structural}. This cubic-to-tetragonal structural phase transition was attributed to local Jahn-Teller effects~\cite{barton2014structural}, which lift the degeneracy of the $t_{2g}$ states by minimizing the energy of the $d_{xz}$ and $d_{yz}$ Co-3$d$ sub-orbitals~\cite{Ghosh_2021}.
The closeness between the magnetic and structural transition temperatures reveals the existence of competing spin-orbit coupling and Jahn-Teller effects in GCO~\cite{barton2014structural,hoshi2007magnetic,watanabe2008jahn, Ghosh_2021}. Currently, there is a fair amount of debate regarding the fact that $T_S$ lies below $T_N$, which is uncommon compared to other spinels that exhibit magnetostructural quantum phase transitions~\cite{bordacs2009magnetic, thota2014ac, kim2012giant, kim2011pressure, guillou2011magnetic, suchomel2012spin, thota2017neutron,nayak2015magnetic,nayak2016low,nayak2016reentrant,pramanik2020neutron,thota2015nature,thota2013co}.
A systematic investigation of the temperature-dependent lattice dynamics is required to pin down the nature of the transitions occurring near $T_S$ and $T_N$ in GCO. The only previously reported Raman studies of GCO are those of Koningstein \emph{et al.}~\cite{koningstein1972light}, who reported the observation of three Raman-active modes (\emph{A}\textsubscript{1g}, \emph{T}\textsubscript{2g}(1), and \emph{E}\textsubscript{g}). However, those measurements were performed at only two temperatures, 200 and 400\,K, both much higher than $T_S$ and $T_N$. Likewise, the only infrared (IR) study of GCO reported to date was performed by Preudhomme and Tarte~\cite{preudhomme1972infrared} at 300\,K, which reported the observation of four IR-active modes (\emph{T}\textsubscript{1u}).
In this work, we perform detailed temperature-dependent Raman measurements covering the temperature range \ss{of} 5 to 300 K with a focus on the changes occurring in the Raman-active modes as the temperature is lowered through \emph{\emph{T}\textsubscript{N}} and \emph{T\textsubscript{S}}. Notably, our low-temperature Raman measurements confirm that the structural phase transition in GCO follows the magnetic phase transition, as first reported by Barton \emph{et al.}~\cite{barton2014structural} using x-ray diffraction measurements. We observe noticeable changes in the line parameters of the \emph{E}\textsubscript{g} and \emph{T}\textsubscript{2g} modes, which are associated with the modulation of the Co-O bonds in CoO\textsubscript{6} octahedra, near the \emph{T}\textsubscript{N} and \emph{T}\textsubscript{S}. We further report the observation of three (out of four) symmetry-allowed IR-active \emph{T}\textsubscript{1u} modes along with two satellite modes likely appearing due to the local symmetry breaking. In addition, computational studies of the lattice modes using density-functional theory (DFT+$U$) calculations are presented, revealing the presence of moderate spin-phonon coupling in GCO. A systematic analysis of the Heisenberg spin Hamiltonian suggests that the magnetic-exchange interactions up to the third nearest neighbors are required to accurately describe the low-temperature AFM ordering in GCO. Besides, we also briefly comment on the problems encountered in the DFT+$U$ calculations involving the orbital occupation of Co-$3d$ orbitals located at the magnetically frustrated sites in GCO.
This paper is organized as follows.
\ss{In Sec.~\ref{sec:methods}, experimental and computational details of this study are
presented.}
Section~\ref{sec:resultsdiscussion} contains all the results and discussions in the following order: first, we discuss the crystal structure and the magnetic-exchange interactions in GCO, and then, we present our theoretical and experimental investigations on the lattice dynamics of GCO. This is followed by conclusions in Sec.~\ref{sec:conclusions}.
\section{Methods}
\label{sec:methods}
\subsection{Experimental details}
\label{sec:exp_details}
\vspace{-0.2cm}
A well-ground stoichiometric mixture of high-purity GeO\textsubscript{2}
(Sigma-Aldrich, 99.99\%) and Co\textsubscript{3}O\textsubscript{4}
(Sigma-Aldrich, 99.99\%) powders was pressed \ss{into} a cylindrical disc at 50
kg/cm\textsuperscript{2} using a hydraulic press, followed by a sintering
process to yield the desired compound. The details of the sample synthesis procedure are described in a previous publication~\cite{pramanik2019magnetic}.
The single phase of the synthesized sample was confirmed by x-ray diffraction measurements using a high-resolution XPERT-PRO diffractometer
(Co-K\textsubscript{$\alpha$} radiation with $\lambda$ = 1.78901 \AA).
The temperature-dependent vibrational Raman-scattering spectra of
GCO were recorded with a commercial Labram-HR800 micro-Raman spectrometer, in the temperature range of 5\,K to 300\,K, using a He--Ne laser of wavelength 514\,nm.
The silicon mode at 520\,cm$^{-1}$ was used for frequency calibration.
All the Raman spectra were recorded in the anti-Stokes region.
For the low-temperature measurements, the sample was mounted on a cold-stage
setup (THMS600 stage from Linkam, UK) equipped with a temperature
controller capable of maintaining a steady temperature.
The sample was cooled by liquid helium, and the temperature controller held
the temperature fluctuations within $\pm$1\,K.
\ss{The experimental uncertainty in the Raman peak positions, as determined using Lorentzian oscillator fits, was less than 0.1 cm$^{-1}$.}
The room temperature IR spectrum was recorded using a Perkin-Elmer
Spectrum-Two system with the standard spectral resolution of 0.5\,cm$^{-1}$. \ss{The IR-active mode frequencies were determined by Lorentzian oscillator fits of the transmittance data.}
\subsection{Computational details}
\label{sec:comp_details}
\vspace{-0.2cm}
In order to better understand the nature of the magnetic-exchange interactions and Raman and IR-active phonon modes in GCO, we carried out DFT+$U$ based first-principles calculations using the Projector Augmented Wave (PAW) method as implemented in the VASP software~\cite{Kresse96a, Kresse96b, KressePAW}.
The PAW pseudopotentials considered the following valence configurations:
Ge 4s\textsuperscript{2}4p\textsuperscript{2}, Co 3d\textsuperscript{8}4s\textsuperscript{1}, and O 2s\textsuperscript{2}2p\textsuperscript{4}.
A kinetic energy cutoff of 650\,eV was set for the plane waves. The reciprocal space was sampled using a Monkhorst-pack k-mesh~\cite{MP1976} of size 8$\times$8$\times$8. The energy convergence criterion for the self-consistent DFT+$U$ calculations was set to 10$^{-7}$\,eV, and the force convergence criterion for relaxation calculations was set to 10$^{-3}$\,eV/\AA. All DFT+$U$ calculations were performed for collinear magnetic configurations without considering spin-orbit coupling effects.
\ss{{\sc PyProcar} software~\cite{pyprocar} was used to plot the density of states, shown in Supplemental Materials (SM)~\cite{SM}.}
We used the {\sc Phonopy} package to study the lattice dynamics~\cite{phonopy}.
Supercells of size 2$\times$2$\times$2 were employed to calculate the phonon frequencies and phonon eigenvectors within the finite-displacement approach.
The exchange-correlation functional was computed using the generalized-gradient approximation (GGA) as parameterized by Perdew-Burke-Ernzerhof (PBE) as well as the PBE revised for solids (PBEsol)~\cite{PBE, PBEsol}.
We find that PBEsol yields lattice parameters and phonon frequencies in better agreement with the experimental data than PBE does.
\ss{The onsite-Coulomb interaction effects for Co-$3d$ electrons were treated at the mean-field level using the rotationally invariant DFT+$U$ method introduced by Liechtenstein {\it et al.}~\cite{Liechtenstein1995}.
We set $U$ = 4.0\,eV and $J$ = 1.0\,eV~\cite{GhoshPRB2018}. We find that this set of values appropriately describes the lattice parameters, magnetic structures, and vibrational properties of GCO. No tuning of the ($U, J$) parameters was performed to match the calculated phonon frequencies with the experimental data. Besides, it has been reported that an effective $U$\textsubscript{eff}\,\textsuperscript{Co} (= $U-J$) in the range of 2 – 3 eV provides a reasonable prediction of the electronic structure and optical properties of GCO~\cite{GhoshPRB2018}.}
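For concreteness, the settings above roughly correspond to a VASP INCAR fragment of the following form (a sketch only: the species order Ge/Co/O in the `LDAUL`/`LDAUU`/`LDAUJ` arrays is an illustrative assumption, not taken from the paper):

```
# Hypothetical INCAR fragment consistent with Sec. II B (not from the paper)
ENCUT    = 650          # plane-wave cutoff (eV)
EDIFF    = 1E-7         # electronic convergence (eV)
EDIFFG   = -1E-3        # force convergence (eV/Angstrom)
ISPIN    = 2            # collinear spin polarization, no SOC
GGA      = PS           # PBEsol exchange-correlation
LDAU     = .TRUE.
LDAUTYPE = 1            # rotationally invariant Liechtenstein DFT+U
LDAUL    = -1  2  -1    # apply U to Co-3d only (assumed species order Ge Co O)
LDAUU    =  0  4.0  0   # U = 4.0 eV on Co
LDAUJ    =  0  1.0  0   # J = 1.0 eV on Co
LMAXMIX  = 4            # mix the d-electron density matrix
```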
We often noticed an anomalous variation in the occupation of the Co-3$d$ orbitals in some of our DFT+$U$ calculations due to the presence of strong magnetic frustration effects leading to a metastability problem in this system~\cite{MeredigPRB2010, AllenWatson2014}. To ensure the correct and consistent occupation of the Co-3$d$ orbitals, we utilized the occupation matrix control methodology developed by Allen and Watson~\cite{AllenWatson2014} in our reported DFT+$U$ calculations.
We optimized the structural primitive cell in the FM order since the FM order preserves the cubic symmetry of the paramagnetic phase. The PBE+$U$ and PBEsol+$U$ optimized lattice parameters are 8.434 and 8.322\,\AA, respectively. We observed that the PBEsol+$U$ optimized lattice parameters are in excellent agreement with the reported experimental data (8.3191\,\AA)~\cite{pramanik2019magnetic, barton2014structural}. Further, the PBEsol+$U$ optimized Co$-$O and Ge$-$O bond lengths are 2.1\,\AA~and 1.8\,\AA, respectively, which agree very well with the reported experimental data (2.1\,\AA~and 1.8\,\AA)~\cite{pramanik2019magnetic,Yamasaki_2012, barton2014structural}.
\section{Results and Discussion}
\label{sec:resultsdiscussion}
\subsection{Crystal structure and magnetic structure of G\MakeLowercase{e}C\MakeLowercase{o}$_2$O$_4$}
GeCo$_2$O$_4$, [(Ge$^{4+})_{A}$[Co$^{2+}_{2}]_{B}$O$_{4}$], crystallizes in a normal cubic spinel structure at room temperature (space group $Fd\bar{3}m$).
The oxygen anions are located at the 32$e$ Wyckoff positions forming a close-packed face-centered cubic arrangement, whereas Ge and Co cations occupy the 8$a$-tetrahedral and 16$d$-octahedral interstitial positions, respectively.
Therefore, the crystal structure consists of the corner sharing CoO$_6$ octahedra and GeO$_4$ tetrahedra, as shown in Fig.~\ref{fig:fig1struct}(a).
The structural primitive cell contains 2 formula units of GCO. There are 4 magnetic Co atoms in the primitive cell forming a regular Co-Co tetrahedron, where each Co is located at the center of an oxygen octahedron at the 16$d$ sites. The corner-sharing oxygen octahedra form a pyrochlore lattice containing alternating planes of the KGM and TRI layers of Co atoms stacked along the [111] direction of the bulk unit cell.
There are 3 Co atoms in the KGM plane and 1 Co atom in the TRI plane, as shown in Fig.~\ref{fig:fig1struct}(a).
Within each KGM and TRI plane, the Co spins order ferromagnetically. However, the overall low-temperature magnetic structure of GCO is more complex, involving an antiferromagnetic ordering with wave vector $\rm{\bf{q}}$ = ($\frac{1}{2}, \frac{1}{2}, \frac{1}{2}$).
In this AFM order, the Co spins in a pair of TRI and KGM layers ($i.e.$, within a structural primitive cell) order ferromagnetically, whereas they order antiferromagnetically with respect to the neighboring structural primitive cell, thus
resulting in a TRI-KGM layer spin configuration of $T_{+}\,K_{+}\,T_{-}\,K_{-}\,T_{+}\,K_{+}\,T_{-}\,K_{-}\cdots$ along the [111] direction, as shown in Fig.~\ref{fig:fig1struct}(b).
Here, $T_{+}$ ($T_{-}$) and $K_{+}$ ($K_{-}$) denote the spin up (down) configurations of the TRI and KGM layers, respectively.
\begin{figure}[h!]
\centering
\includegraphics[width=0.95\columnwidth]{Figure1_GCO_struct.pdf}
\caption{(a) Crystal structure of GCO. Dashed black lines mark the boundaries of the structural primitive cell. The primitive cell consists of two GeO\textsubscript{4} tetrahedra and one Co\textsubscript{4} tetrahedral unit.
Here Co, Ge, and O ions are presented by the blue, gray, and red color, respectively.
(b) Schematic representation of $\rm{\bf{q}}$ = ($\frac{1}{2}, \frac{1}{2}, \frac{1}{2}$) AFM ordering in GCO. Magnetic moments at Co sublattice are shown using arrows in a collinear setting, {\it i.e.}, majority or up ($+$) and minority or down ($-$) spin states are denoted using up and down arrows, respectively.
Alternating kagom\'e (KGM) and triangular (TRI) planes are highlighted using dashed horizontal lines.
A $T_{+}\,K_{+}\,T_{-}\,K_{-}\,T_{+}\,K_{+}\,T_{-}\,K_{-}\cdots$ type AFM spin configuration of TRI ($T_{\pm}$) and KGM ($K_{\pm}$) layers can be noticed along the [111] direction.
}
\label{fig:fig1struct}
\end{figure}
To get an accurate description of the low-temperature magnetic structure \ss{ experimentally reported in Ref.~\cite{pramanik2019magnetic},} we extract the values of the spin-exchange interactions ($J$'s) by mapping the DFT-computed total energies onto a Heisenberg spin Hamiltonian (Eq.~\ref{eq:hamiltonian}).
In our spin model, we consider four exchange-interaction parameters, which correspond to the first ($J_1$), second ($J_2$), and third ($J_3$ and $J_{3}^{'}$) nearest-neighbor (NN) interactions, as shown in Fig.~\ref{fig:jfit_fig}(a).
The first, second, and third NN interactions correspond to a Co-Co bond distance of 2.94\,\AA, 5.09\,\AA, and 5.86\,\AA, respectively.
The third NN interaction is further divided into two categories: $J_3$ and $J_{3}^{'}$. Although both correspond to the same Co-Co distance of 5.86\,\AA, the $J_3$ path connects two Co atoms without passing through any intermediate Co atom, whereas the $J_{3}^{'}$ path passes through an intermediate Co atom located at half the bond distance.
For instance, a $J_{3}^{'}$ exchange would correspond to the interaction between two Co atoms located at two adjacent TRI planes with the bond between them passing through an intermediate Co atom situated at a KGM plane [see Fig.~\ref{fig:jfit_fig}(a)].
The spin Hamiltonian reads
\begin{equation}
\begin{aligned}
H = E_{0} + J_{1} \sum_{<ij>}^{\text{first NN}} S_{i} \cdot S_{j}
+ J_{2} \sum_{<ij>}^{\text{second NN}} S_{i} \cdot S_{j} \\
+ J_{3} \sum_{<ij>}^{\text{third NN}} S_{i} \cdot S_{j}
+ J_{3}^{'} \sum_{<ij>}^{\text{third NN}} S_{i} \cdot S_{j},
\end{aligned}
\label{eq:hamiltonian}
\end{equation}
where $S_{i}$ and $S_{j}$ denote the spins at Co sites $i$ and $j$, and $E_{0}$ represents a rigid shift of the total energy ($E$).
In Fig.~\ref{fig:jfit_fig}(b), we show the fitting of the DFT+$U$ energies ($\Delta\text{E} = E-E_{0}$) computed for several distinct spin configurations in a doubled primitive cell, as shown in Fig.~\ref{fig:jfit_fig}(a), with our spin Hamiltonian described in Eq.~\ref{eq:hamiltonian}.
The lowest-energy spin configuration corresponds to a $T_{+}\,K_{+}\,T_{-}\,K_{-}\,T_{+}\,K_{+}\,T_{-}\,K_{-}\cdots$ type AFM order, as shown in Fig.~\ref{fig:fig1struct}(b).
This spin configuration represents a $\rm{\bf{q}}$ = ($\frac{1}{2}, \frac{1}{2}, \frac{1}{2}$) AFM order that has been experimentally observed in GCO~\cite{pramanik2019magnetic}.
\ss{We note that all the considered spin configurations yielded gapped densities of states, shown in the SM~\cite{SM}, in their converged electronic ground states. This ensures that our DFT+$U$ calculations converged correctly for all the distinct spin configurations considered in this study.}
\begin{figure}[htb!]
\centering
\includegraphics[width=0.95\columnwidth]{figure_Jij.pdf}
\caption{(a) Definition of all four magnetic-exchange interactions, first ($J_1$), second ($J_2$), and third ($J_3$ and $J_{3}^{'}$) NN, considered in this work. Co atoms are shown in blue color. Ge and O atoms are omitted for clarity. Note that $J_{3}^{'}$ passes through an intermediate Co atom (see text). (b) Fitting of the DFT (PBEsol+$U$) energy values computed for various different spin configurations in a doubled primitive cell, as shown in (a), with our model spin Hamiltonian.
Here, we chose the PBEsol+$U$ method since it predicts lattice parameters in better agreement with experiment than the PBE+$U$ method. }
\label{fig:jfit_fig}
\end{figure}
The best fit of the data \ss{(provided in SM~\cite{SM})} yields $J_{1}S^{2}$ = $-$3.9, $J_{2}S^{2}$ = 0.7, $J_{3}S^{2}$ = 2.0, and $J_{3}^{'}S^{2}$ = 0.4 (in meV units), where positive (negative) values represent AFM (FM) interactions. We notice that the first NN exchange has a dominant FM nature, whereas all the second and third NN interactions have an AFM nature, which is consistent with recent experimental observations~\cite{pramanik2019magnetic}.
\ss{According to the Goodenough-Kanamori-Anderson rules~\cite{morrish2001physical, Anderson1950, anderson1956ordering, Goodenough1955, Goodenough1958, Kanamori1959, Kanamori1960}, $J_{1}$ is mediated via an intermediate oxygen ion with a Co-O-Co bond angle of $\theta=90^{\circ}$. Therefore, it is a superexchange interaction of FM nature. All the higher-order exchange interactions, {\it viz.,} $J_{2}$, $J_{3}$, and $J_{3}'$, are super-superexchange interactions of AFM nature, as they involve more than one ion along the exchange path.}
These competing FM and AFM exchange interactions are primarily responsible for introducing the magnetic frustration and establishing a $\rm{\bf{q}}$ = ($\frac{1}{2}, \frac{1}{2}, \frac{1}{2}$) AFM order in GCO at low temperatures~\cite{diaz2006magnetic}.
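The mapping behind the fit of Eq.~\ref{eq:hamiltonian} reduces to a linear least-squares problem: each collinear spin configuration contributes one equation $E = E_0 + \sum_k J_k C_k$, where $C_k$ is the spin-correlation sum over neighbor shell $k$. A minimal sketch of the extraction (function name and the synthetic inputs are illustrative, not the actual DFT data):

```python
import numpy as np

def fit_exchanges(counts, energies):
    """Fit E = E0 + sum_k J_k * C_k by linear least squares.
    counts[c][k] is the correlation sum  sum_{<ij> in shell k} S_i.S_j
    for spin configuration c; energies[c] is its DFT total energy.
    Returns (E0, array of J_k)."""
    counts = np.asarray(counts, dtype=float)
    A = np.hstack([np.ones((counts.shape[0], 1)), counts])
    coef, *_ = np.linalg.lstsq(A, np.asarray(energies, float), rcond=None)
    return coef[0], coef[1:]

# Synthetic check: recover known couplings (values in meV*S^2, cf. the fit)
rng = np.random.default_rng(0)
C = rng.integers(-6, 7, size=(12, 4))         # 12 configurations, 4 shells
J_true = np.array([-3.9, 0.7, 2.0, 0.4])      # J1, J2, J3, J3'
E0_true = -1.0
E = E0_true + C @ J_true
E0, J = fit_exchanges(C, E)
```

In practice the `counts` matrix is assembled by counting, for each DFT spin configuration, the parallel and antiparallel bonds in each neighbor shell of the (doubled) primitive cell.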
\ss{Our theoretical findings discussed above, when combined with the experimental results reported in Ref.~\cite{pramanik2019magnetic}, provide a firm foundation for the magnetic properties of GCO. Hereafter, we focus on the lattice dynamics and vibrational properties of GCO. }
\subsection{Lattice dynamics and vibrational spectroscopy in G\MakeLowercase{e}C\MakeLowercase{o}$_2$O$_4$}
The vibrational spectroscopy of AB\textsubscript{2}O\textsubscript{4} cubic spinels was first studied by Waldron, who analyzed the phonon modes of simple ferrites (AFe\textsubscript{2}O\textsubscript{4}) using the structural primitive cell having 14 atoms per cell~\cite{waldron1955infrared}.
Later, White and DeAngelis presented a group-theoretical approach to analyze the Raman spectra of cubic spinels by considering the rhombohedral lattice as the smallest Bravais cell~\cite{white1967interpretation}.
In their study, they considered the body-diagonal elements consisting of two AO\textsubscript{4} tetrahedra and one B\textsubscript{4} tetrahedron, comprising a total of 14 atoms~\cite{white1967interpretation}, as shown in Fig.~\ref{fig:fig1struct}(a).
According to group theory, the $\emph{Fd}\bar{3}\emph{m}$ space group belongs to the O\textsuperscript{7}\textsubscript{h} spectroscopic symmetry, where Ge\textsuperscript{4+}, Co\textsuperscript{2+}, and O\textsuperscript{2-} ions belong to the \emph{T}\textsubscript{d}, \emph{D}\textsubscript{3d}, and \emph{C}\textsubscript{3v} (32\emph{e} sites) point groups, respectively~\cite{white1967interpretation}. All the
allowed optical phonon modes at the Brillouin-zone center $\Gamma$
(\(\overrightarrow{k} = 0\)) arising from the atomic displacements in the structural primitive cell can be denoted as~\cite{white1967interpretation, ChanPRB2007}:
\begin{equation}
\begin{aligned}
\Gamma\textsubscript{vib} = \emph{A}\textsubscript{1g} \oplus
2\emph{A}\textsubscript{2u} \oplus \emph{E}\textsubscript{g} \oplus
2\emph{E}\textsubscript{u}
\oplus \emph{T}\textsubscript{1g} \\
\oplus\,
4\emph{T}\textsubscript{1u} \oplus 3\emph{T}\textsubscript{2g}
\oplus 2\emph{T}\textsubscript{2u}.
\end{aligned}
\end{equation}
Out of the 39 optical phonon modes, only five modes are Raman active (\emph{A}\textsubscript{1g} $\oplus$ \emph{E}\textsubscript{g}
$\oplus$ 3\emph{T}\textsubscript{2g}), four modes (4\emph{T}\textsubscript{1u}) are IR active, and the remaining modes are inactive in simple Raman and IR experiments. We note that the acoustic modes transform according to the $T_{1u}$ irreducible representation of the $O_{h}$ point group. The atomic vibration patterns corresponding to the IR-active modes are shown in Fig.~\ref{fig:IRmodes} and those of the Raman-active modes are shown in Fig.~\ref{fig:RAmodes}.
These vibrational patterns, $i.e.,$ the phonon eigenvectors at $\Gamma$ depicted using green arrows, were obtained using the {\sc phonopy} package~\cite{phonopy}.
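As a quick bookkeeping check (an illustration added here, not part of the original analysis), the irreducible-representation dimensions can be summed in a few lines of Python to confirm the mode counts quoted above:

```python
# Sanity check of the zone-center mode count for the 14-atom primitive
# cell of a cubic spinel (Fd-3m).  Irrep dimensions: A = 1, E = 2, T = 3.
dim = {"A": 1, "E": 2, "T": 3}

# Optical decomposition quoted in the text: (multiplicity, irrep label).
optical = [(1, "A"),  # A1g
           (2, "A"),  # 2 A2u
           (1, "E"),  # Eg
           (2, "E"),  # 2 Eu
           (1, "T"),  # T1g
           (4, "T"),  # 4 T1u
           (3, "T"),  # 3 T2g
           (2, "T")]  # 2 T2u

n_optical = sum(m * dim[label] for m, label in optical)
n_acoustic = dim["T"]                      # one acoustic T1u triplet
assert n_optical == 39
assert n_optical + n_acoustic == 3 * 14    # 3N degrees of freedom, N = 14

# Raman-active: A1g + Eg + 3 T2g; IR-active: 4 T1u.
n_raman = dim["A"] + dim["E"] + 3 * dim["T"]   # 12 Raman branches in 5 modes
n_ir = 4 * dim["T"]                            # 12 IR branches in 4 modes
print(n_optical, n_raman, n_ir)                # prints: 39 12 12
```

The three acoustic branches (one additional $T_{1u}$ triplet) bring the total to $3N = 42$ degrees of freedom for the 14-atom cell, consistent with the 39 optical modes listed above.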
\ss{Besides, we note that in the case of the cubic-to-tetragonal phase transition, splitting of some phonon degeneracies occurs due to the reduction in the crystal symmetry. For instance, a triply-degenerate $T_{1u}$ phonon mode splits into a doublet ($E_{u}$) and a singlet ($A_{2u}$) during the cubic to tetragonal phase transition in GCO. However, the total number of phonon modes remains the same since the cubic-to-tetragonal phase transition is primarily driven by a zone-center $\Gamma_{3}^{+}$ mode.
}
\begin{figure}[htb!]
\centering
\includegraphics[width=0.84\columnwidth]{figure_IR_eigenmodes_b.pdf}
\caption{Atomic vibration patterns for all four IR-active phonon modes: (i) $T\textsubscript{1u}(1)$,
(ii) $T\textsubscript{1u}(2)$, (iii) $T\textsubscript{1u}(3)$, and (iv) $T\textsubscript{1u}(4)$. The color coding of atoms is the same as in Fig.~\ref{fig:fig1struct}(a). These modes are listed here in the order of decreasing frequency (see Table~\ref{tab:table1}).
}
\label{fig:IRmodes}
\end{figure}
\begin{figure}[htb!]
\centering
\includegraphics[width=0.80\columnwidth]{figure_Raman_eigenmodes_b.pdf}
\caption{Atomic vibration patterns for all five Raman-active phonon modes:
$A$\textsubscript{1g}, $T$\textsubscript{2g}(1), $T$\textsubscript{2g}(2),
$E$\textsubscript{g}, and $T$\textsubscript{2g}(3).
These modes are listed here in the order of decreasing frequency (see Table~\ref{tab:table2}).
}
\label{fig:RAmodes}
\end{figure}
As mentioned earlier, the magnetic structure of GCO is quite complex due to the $\rm{\bf{q}}$ = ($\frac{1}{2}, \frac{1}{2}, \frac{1}{2}$) AFM ordering, and a first-principles DFT+$U$ calculation of the full phonon dispersion for the actual magnetic cell would be computationally very demanding. However, DFT+$U$ calculation for the structural primitive cell (14 atoms/cell) considering various different spin configurations can provide useful insights about the Raman/IR-active phonon modes at the zone-center $\Gamma$ (which is required for this study), and the strength of the spin-phonon coupling in GCO.
To simulate the high-temperature paramagnetic phonon frequencies (at the infinite temperature limit of spin fluctuations), we follow the method proposed by Kumar-Fennie-Rabe for magnetic spinels~\cite{KFR2012}.
In this method, we take the statistical average of the interatomic force constants calculated for all the possible spin configurations such that each Co-Co bond has an equal fraction of parallel and antiparallel spins.
This method assumes that the time scale of the phonons is much longer than that of the spin fluctuations and that the spins in the paramagnetic phase are uncorrelated, which are reasonable approximations in the high-temperature limit.
In the case of GCO, we have 4 magnetic Co atoms yielding a total of $2^{4}$ (=16) collinear spin configurations, which can be reduced to 8 spin configurations using the time-reversal symmetry.
A further consideration of the cubic crystal symmetry reduces the total number of non-equivalent spin configurations to three, which are: ${++++}$, ${++--}$, and ${+---}$, each with a statistical weight of $\frac{1}{8}$, $\frac{3}{8}$, and $\frac{1}{2}$, respectively. Here $+/-$ denotes the up/down spin moment at each Co site.
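The counting argument above can be reproduced by brute-force enumeration (a minimal sketch; the identification of all two-up configurations as a single class relies on the cubic-symmetry argument made in the text):

```python
from itertools import product
from fractions import Fraction
from collections import Counter

# Enumerate all 2^4 collinear spin configurations of the 4 Co sites and
# classify them by the number of "up" spins, folding in time reversal
# (a global spin flip maps n_up -> 4 - n_up).  Representatives:
#   4 or 0 up -> "++++",  2 up -> "++--",  3 or 1 up -> "+---".
counts = Counter()
for spins in product((+1, -1), repeat=4):
    n_up = spins.count(+1)
    key = {4: "++++", 0: "++++", 2: "++--"}.get(n_up, "+---")
    counts[key] += 1

weights = {k: Fraction(v, 2 ** 4) for k, v in counts.items()}
# weights: "++++" -> 1/8, "++--" -> 3/8, "+---" -> 1/2
print(sorted(weights.items()))
```

The recovered statistical weights $\frac{1}{8}$, $\frac{3}{8}$, and $\frac{1}{2}$ match those used in the interatomic-force-constant averaging above.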
The phonon frequencies thus computed for the IR-active and Raman-active modes are given in Tables~\ref{tab:table1} and~\ref{tab:table2}, respectively.
Owing to the fact that the PBEsol functional describes the lattice parameters and bond lengths in GCO better compared to the PBE functional, we find that the PBEsol predicted phonon frequencies are in better agreement with the experimental data as compared to the PBE predictions.
\subsubsection{{\bf{IR-active modes}}}
\vspace{-0.2cm}
The frequencies of the four allowed IR-active modes in GCO along with \ss{those} for some other normal spinels \ss{are} listed in Table~\ref{tab:table1}.
The Fourier-transform infrared (FTIR) spectrum of GCO recorded at 300\,K in the transmission mode, shown in Fig.~\ref{fig:irspectra},
displays the observation of $T$\textsubscript{1u}(1), $T$\textsubscript{1u}(2), and
$T$\textsubscript{1u}(3) modes at frequencies 680, 413, and 325 \(\text{cm}^{-1}\), respectively, which are in decent agreement with our DFT+$U$ calculated frequencies.
Since the experimental limitations did not allow us to measure modes below 300 \(\text{cm}^{-1}\), the $T$\textsubscript{1u}(4) mode predicted to occur at 189 \(\text{cm}^{-1}\) (see Table~\ref{tab:table1}) could not be observed.
However, the predicted frequency of the $T$\textsubscript{1u}(4) mode is in good agreement with the experimental value (186 \(\text{cm}^{-1}\)) reported by Preudhomme and Tarte~\cite{preudhomme1972infrared}.
Overall there is a good agreement between the observed and predicted values for the IR-active modes at room temperature, \ss{as shown in Figure~\ref{fig:compare_ir_raman}. }
\begin{figure}[htb]
\centering
\includegraphics[trim=1.7cm 0.2cm 1cm 2cm, clip=true,scale=0.37]{fig-IR.pdf}
\caption{Fourier-transform infrared spectrum of GCO polycrystalline sample recorded at room temperature. \ss{The DFT+$U$ simulated IR spectrum is given in the SM~\cite{SM}. }}
\label{fig:irspectra}
\end{figure}
In addition to the above-listed IR-active modes, Fig.~\ref{fig:irspectra} shows two satellite modes at 608 and 459 \(\text{cm}^{-1}\), marked as \(v_{1}\) and \(v_{2}\), respectively.
Although the crystal symmetry allows the observation of only four $T$\textsubscript{1u} modes, these additional satellite modes likely arise from the splitting of the $T$\textsubscript{1u} modes due to induced local electric fields~\cite{ChanPRB2007}.
The presence of any impurity or crystallite domains \ss{in a powder sample} breaks the local crystal symmetry distorting the local potential, which in turn relaxes the selection rules governing the observation of the allowed IR-active modes, and it may lead to the appearance of the satellite modes in the IR spectrum. Such satellite modes have been previously observed in lithium-cobalt oxides~\cite{BURBA2009248, Ahamed_2020}.
Our DFT+$U$ calculations predict moderate spin-phonon coupling in the IR-active T$_{1u}$ modes of GCO. We notice that each triply-degenerate T$_{1u}$ mode of the $O_h$ point group splits into two modes, one doublet and one singlet, when the magnetic symmetry is changed from FM to AFM, which is consistent with the work of Wysocki and Birol~\cite{WysockiBirol2016}. The magnitude of the frequency splitting between the doublet and singlet modes ($\Delta\omega_{ds}$) provides a good qualitative estimate of the strength of the spin-phonon coupling in magnetic spinels~\cite{FenniePRL2006, KFR2012, ChanPRB2007, WysockiBirol2016}. In the case of GCO, the PBEsol+$U$ (PBE+$U$) calculated $\Delta\omega_{ds}$ is 1 (1), 4 (2), 6 (10), 2 (2) \(\text{cm}^{-1}\) for the
\emph{T}\textsubscript{1u}(1), \emph{T}\textsubscript{1u}(2), \emph{T}\textsubscript{1u}(3), and \emph{T}\textsubscript{1u}(4) modes, respectively.
These values are consistent with the previously reported data on other magnetic spinels~\cite{FenniePRL2006, KFR2012, ChanPRB2007, WysockiBirol2016}. The maximum frequency splitting is predicted for the \emph{T}\textsubscript{1u}(3) mode, which is evident since the \emph{T}\textsubscript{1u}(3) mode involves the vibration of the magnetic Co sites, as shown in Fig.~\ref{fig:IRmodes}.
An experimental validation of the aforementioned frequency-splitting values requires low temperature IR measurements, which, unfortunately, could not be carried out because of the limitations of our experimental facilities.
The high frequency IR-active modes \emph{T}\textsubscript{1u}(1) and \emph{T}\textsubscript{1u}(2), as shown in Fig.~\ref{fig:IRmodes}, involve the symmetric and asymmetric bending of oxygen ions present at the tetrahedral and octahedral sites, whereas the low frequency IR-active modes, \emph{T}\textsubscript{1u}(3) and \emph{T}\textsubscript{1u}(4), are associated with the vibrations of the relatively heavier Ge and Co ions situated at the tetrahedral and octahedral sites, respectively.
Generally, the frequency of a mode varies as \(\sqrt{k/m},\) where $k$ is the stiffness constant of the bond and $m$ is the effective mass of the associated ions.
From the magnitudes of the four IR-active modes for various spinels listed in Table~\ref{tab:table1},
one can argue that $T$\textsubscript{1u}(1) and \emph{T}\textsubscript{1u}(2) modes are due to the vibrations of the tetrahedral group (\(\text{GeO}_{4}\) or \(\text{SiO}_{4}\)) whereas $T$\textsubscript{1u}(3) and
$T$\textsubscript{1u}(4) also involve the vibrations of the octahedral group (\(\text{MgO}_{6}\) and \(\text{CoO}_{6}\)).
Our reasoning is as follows:
When Co in GeCo\textsubscript{2}O\textsubscript{4} is replaced by
lighter Mg in GeMg\textsubscript{2}O\textsubscript{4}, there is
about a 50\% increase in the frequencies of the $T$\textsubscript{1u}(3) and $T$\textsubscript{1u}(4) modes, whereas the increase in the frequencies of the $T$\textsubscript{1u}(1) and $T$\textsubscript{1u}(2) modes is only a few percent. When lighter Si in SiCo\textsubscript{2}O\textsubscript{4} replaces heavier Ge in GeCo\textsubscript{2}O\textsubscript{4}, the frequencies of the $T$\textsubscript{1u}(1) and $T$\textsubscript{1u}(2) modes go up by about 25\%, whereas the changes in the $T$\textsubscript{1u}(3) and $T$\textsubscript{1u}(4) mode frequencies are only about 5\%. Therefore, the $T$\textsubscript{1u}(1) and $T$\textsubscript{1u}(2) modes primarily represent the
vibrations of the tetrahedral group, while $T$\textsubscript{1u}(3) and $T$\textsubscript{1u}(4) modes represent the
vibrations of the octahedral group.
This qualitative description is consistent with the schematic phonon eigenvectors plot shown in Fig.~\ref{fig:IRmodes}.
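As a rough illustration of this scaling (our own back-of-the-envelope estimate, not from the cited references), one can compare bare cation masses under the crude assumptions that the bond stiffness $k$ is unchanged by substitution and that the mode is dominated by the cation motion:

```python
import math

# Crude cation-substitution check of the omega ~ sqrt(k/m) scaling.
# Standard atomic masses (amu):
m = {"Mg": 24.305, "Co": 58.933, "Si": 28.086, "Ge": 72.630}

# Co -> Mg at the octahedral site (GeCo2O4 -> GeMg2O4):
ratio_oct = math.sqrt(m["Co"] / m["Mg"])   # ~1.56, i.e. ~ +56%
# Ge -> Si at the tetrahedral site (GeCo2O4 -> SiCo2O4),
# pretending the cation alone carries the effective mass:
ratio_tet = math.sqrt(m["Ge"] / m["Si"])   # ~1.61

print(round(ratio_oct, 2), round(ratio_tet, 2))   # prints: 1.56 1.61
```

The octahedral estimate ($\sim$56\%) is close to the observed $\sim$50\% shift of the $T$\textsubscript{1u}(3,4) modes on Co$\rightarrow$Mg substitution, while the tetrahedral estimate ($\sim$61\%) overshoots the observed $\sim$25\% for $T$\textsubscript{1u}(1,2) on Ge$\rightarrow$Si, consistent with the oxygen atoms carrying much of the effective mass in the tetrahedral modes.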
\subsubsection{{\bf{Raman-active modes }}}
\vspace{-0.2cm}
The frequencies of the Raman-active modes
in GCO (at 300\,K) are listed in Table~\ref{tab:table2} along with their
calculated values. As done in Table~\ref{tab:table1} for the IR modes, we have also
listed the frequencies of the Raman-active modes in Table~\ref{tab:table2} reported for several other isostructural spinels \emph{e.g.,} SiCo\textsubscript{2}O\textsubscript{4},
GeMg\textsubscript{2}O\textsubscript{4},
MgTi\textsubscript{2}O\textsubscript{4}, and
SiMg\textsubscript{2}O\textsubscript{4}.
Our observed values of the frequencies of $A$\textsubscript{1g}, $T$\textsubscript{2g}(1), and $E$\textsubscript{g} modes in GCO are nearly identical to those
reported by Koningstein \emph{et al.}~\cite{koningstein1972light}.
The frequency of the $T$\textsubscript{2g}(2) mode in GCO is reported for the first time in this work.
Our DFT+$U$ calculated phonon frequencies of Raman-active modes are in good agreement with the experimental observations. The $T$\textsubscript{2g}(3) mode could not be detected in our experiments since this mode is predicted to occur below the lowest frequency of our Raman measurements. However, the predicted frequency of the $T$\textsubscript{2g}(3) mode is consistent with that of reported values for other isostructural spinel oxides (see Table~\ref{tab:table2}).
\begin{figure}[htb!]
\centering
\includegraphics[trim=2cm 30cm 1cm 9.5cm,clip=true,scale=0.115]{fig-6.pdf}
\caption{Raman spectra of GCO recorded at temperatures T = 5, 10, 12, 14, 18, 20, 21, 22, 24, 26, 30, 40, 60, 80, 100, and 300\,K.
}
\label{fig:raman_temp5_300}
\end{figure}
Our calculations reveal that the strength of the spin-phonon coupling is the largest for the $T$\textsubscript{2g}(3) mode since this mode is associated with the vibration of the heavy cations. The values of the frequency splitting $\Delta\omega_{ds}$ for the triply-degenerate $T$\textsubscript{2g}(1), $T$\textsubscript{2g}(2), and $T$\textsubscript{2g}(3) modes are 3 (1), 2 (2), 5 (3) \(\text{cm}^{- 1}\), respectively, as obtained using the PBEsol+$U$ (PBE+$U$) method.
We note that these values are in the same range as the observed frequency shifts of the associated Raman peaks at \emph{T\textsubscript{N}}, as discussed below.
To better understand the Raman modes in GCO, a systematic comparison of their frequencies with those reported for SiCo\textsubscript{2}O\textsubscript{4},
GeMg\textsubscript{2}O\textsubscript{4},
SiMg\textsubscript{2}O\textsubscript{4}, and MgTi\textsubscript{2}O\textsubscript{4} is given in Table~\ref{tab:table2}. Comparing
SiCo\textsubscript{2}O\textsubscript{4} with
GeCo\textsubscript{2}O\textsubscript{4} for which lighter Si atom
replaces heavier Ge atom at the tetrahedral site, the frequencies of the $A$\textsubscript{1g}, $E$\textsubscript{g}, and $T$\textsubscript{2g}(1) modes in
SiCo\textsubscript{2}O\textsubscript{4} are increased by about 10--20\%.
This suggests that these modes likely involve some motion of the
tetrahedral cation in addition to the O atoms.
This is further confirmed by comparing the mode frequencies of
GeMg\textsubscript{2}O\textsubscript{4} with those in
SiMg\textsubscript{2}O\textsubscript{4}, where the frequencies of the
$A$\textsubscript{1g}, $E$\textsubscript{g}, and $T$\textsubscript{2g}(1) modes in
SiMg\textsubscript{2}O\textsubscript{4} are higher by about 10--20\%.
For the $T$\textsubscript{2g}(2) mode, the observed differences in the
frequencies for GCO {\it vis-a-vis} SiCo\textsubscript{2}O\textsubscript{4},
GeMg\textsubscript{2}O\textsubscript{4}, and
SiMg\textsubscript{2}O\textsubscript{4} do not show a systematic pattern.
To further understand the role of the Co-O octahedra on the Raman modes, we compare the mode frequencies of GeMg\textsubscript{2}O\textsubscript{4} and
GeCo\textsubscript{2}O\textsubscript{4}, in which the lighter Mg replaces the heavier Co.
In this case, the frequencies of the $A$\textsubscript{1g} and $T$\textsubscript{2g}(1) modes increase by only about 2\%.
However, the frequency of the $E$\textsubscript{g} mode in GeMg\textsubscript{2}O\textsubscript{4} is enhanced by about 11\%.
This suggests that the $E$\textsubscript{g} mode also involves some vibrations of the cations on the octahedral site.
In summary, for GCO, the $A$\textsubscript{1g} and $T$\textsubscript{2g}(1) modes involve some vibrations of Ge at the tetrahedral site in addition to the vibrations of the O atoms, whereas for the $E$\textsubscript{g} modes, the vibrations of GeO\textsubscript{4} and CoO\textsubscript{6} are also involved.
\begin{table*}[htb]
\begin{center}
\caption{List of IR-active modes and
their frequencies (in cm$^{- 1}$) at room temperature for several cubic spinels}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
& $T$\textsubscript{1u}(1) & $T$\textsubscript{1u}(2) & $T$\textsubscript{1u}(3) & $T$\textsubscript{1u}(4) & Reference \\
\hline
GeCo\textsubscript{2}O\textsubscript{4} & 680 & 413 & 325 & & This work (Experiment) \\
\hline
GeCo\textsubscript{2}O\textsubscript{4} & $^{\#}$\,640 & 407 & 312 & 189 & This work (Calculation) \\
& *\,(615) & (379) & (294) & (168) & \footnotesize{$^{\#}$\,PBEsol+$U$; *\,(PBE+$U$)} \\
\hline
GeCo\textsubscript{2}O\textsubscript{4} & 679 & 427 & 321 & 186 & \cite{preudhomme1972infrared} \\
\hline
GeNi\textsubscript{2}O\textsubscript{4} & 690 & 453 & 335 & 199 & \cite{preudhomme1972infrared} \\
\hline
GeMg\textsubscript{2}O\textsubscript{4} & 694 & 450 & 485 & 274 & \cite{preudhomme1972infrared} \\
\hline
SiCo\textsubscript{2}O\textsubscript{4} & 815 & 504 & 354 & 161 & \cite{kushwaha2018vibrational} \\
\hline
SiMg\textsubscript{2}O\textsubscript{4} & 834 & 547 & 444 & 348 & \cite{kushwaha2018vibrational} \\
\hline
\end{tabular}
\label{tab:table1}
\end{center}
\end{table*}
\begin{table*}[htb]
\begin{center}
\caption{List of Raman-active phonon modes and their frequencies (in cm$^{- 1}$) at room temperature for several cubic spinels}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
& $A$\textsubscript{1g} & $T$\textsubscript{2g}(1) & $T$\textsubscript{2g}(2) & $E$\textsubscript{g} & $T$\textsubscript{2g}(3) & Reference \\
\hline
GeCo\textsubscript{2}O\textsubscript{4} & 760 & 647 & 550 & 308 & & This work (Experiment) \\
\hline
GeCo\textsubscript{2}O\textsubscript{4} & $^{\#}$\,720 & 649 & 475 & 323 & 204 & This work (Calculation) \\
& *\,(695) & (610) & (461) & (311) & (203) & \footnotesize{$^{\#}$\,PBEsol+$U$; *\,(PBE+$U$)} \\
\hline
GeCo\textsubscript{2}O\textsubscript{4} &757 & 643 & & 302 & & \cite{koningstein1972light} \\
\hline
SiCo\textsubscript{2}O\textsubscript{4} & 833 & 788 & 521 & 373 & 270 & \cite{kushwaha2018vibrational} \\
\hline
GeMg\textsubscript{2}O\textsubscript{4} & 777 & 669 & 520 & 341 & 213 & \cite{ross1987mg} \\
\hline
SiMg\textsubscript{2}O\textsubscript{4} & 834 & 798 & 599 & 373 & 300 & \cite{kushwaha2018vibrational} \\
\hline
MgTi\textsubscript{2}O\textsubscript{4} &628 & 493 & 335 & 448 & & \cite{popovic2003phonon} \\
\hline
\end{tabular}
\label{tab:table2}
\end{center}
\end{table*}
\ss{In Figure~\ref{fig:compare_ir_raman} we compare the DFT+$U$ predicted phonon frequencies calculated for the paramagnetic phase using the statistical averaging method, as mentioned above, with the experimental data recorded at 300\,K from the IR and Raman measurements. The experimental frequency of the $T$\textsubscript{1u}(4) mode was obtained from Ref.~\cite{preudhomme1972infrared}. We observe a good agreement between theory and experiment. In particular, the PBEsol+$U$ predicted frequencies are in better agreement with the experimental data compared to the PBE+$U$ predictions. }
\begin{figure}[h!]
\centering
\includegraphics[width=0.96\columnwidth]{compare_ir_raman_fig.pdf}
\caption{ \ss{Comparison of the DFT+$U$ predicted phonon frequencies ($\omega$\textsubscript{theory}) for the simulated paramagnetic phase with the experimentally measured frequencies at 300\,K ($\omega$\textsubscript{expt}) for the IR and Raman-active modes. The data plotted in this figure were obtained from Table~\ref{tab:table1} and~\ref{tab:table2}. }
}
\label{fig:compare_ir_raman}
\end{figure}
\subsubsection{{\bf{Temperature dependence of the Raman-active modes}}}
\vspace{-0.2cm}
\begin{figure}[htb!]
\centering
\includegraphics[trim=3cm 0.7cm 2cm 1cm, clip=true,scale=0.6]{temp-dept-Raman-modes-21-Dec-2020.pdf}
\caption{The temperature dependence of the Raman intensity (I$\times$10$^{4}$),
full width at half maximum ($\Delta F$), and Raman-peak position (RPP) for the $A$\textsubscript{1g}, $T$\textsubscript{2g}(1), $T$\textsubscript{2g}(2), and $E$\textsubscript{g} Raman-active modes. The lines connecting the data points are visual guides. The \emph{T}\textsubscript{N} and \emph{T}\textsubscript{S} mark the transition temperatures corresponding to the antiferromagnetic and the cubic-to-tetragonal structural phase transitions, respectively.
}
\label{fig:raman_peaktemp}
\end{figure}
A brief summary of the temperature dependence of the structural
properties of GCO is first presented in order to place the data on the
Raman-active modes in proper context. Using x-ray synchrotron data on polycrystalline GCO, Barton \emph{et al.}~\cite{barton2014structural} determined changes in the lattice parameters and in the Co-O and Ge-O bond lengths as a function of temperature, including the regions around \emph{T}\textsubscript{N} = 21\,K and \emph{T}\textsubscript{S} = 16\,K. For \emph{T} $<$ \emph{T}\textsubscript{S}, the crystal symmetry changes from cubic to tetragonal with $c/a > 1$, the degree of tetragonality increasing with decreasing \emph{T}. An elongation of the CoO\textsubscript{6} octahedron is observed below \emph{T}\textsubscript{S}: the Co-O bond length of 2.09\,\AA~above \emph{T}\textsubscript{N} increases to 2.13\,\AA~along the $c$-axis but decreases to 2.07\,\AA~normal to the $c$-axis for \emph{T} $<$ \emph{T}\textsubscript{S}. However, there is no change in the Ge-O bond length in the GeO\textsubscript{4} tetrahedron as the symmetry changes from the cubic to the tetragonal phase below \emph{T}\textsubscript{S}. Considering these results, changes around \emph{T}\textsubscript{S} should be expected in the Raman- and IR-active modes that involve vibrations of the atoms in the CoO\textsubscript{6} octahedron.
The structural transition at low temperature is expected due to the possible Jahn-Teller distortions and spin-orbit coupling effects in the 3$d^7$ state of Co$^{2+}$, in which the orbital degeneracy is lifted due to the stabilization of the $t_{2g}$ orbitals.
Following our earlier discussion on the comparison of the Raman-active modes for different spinels listed in Table~\ref{tab:table2}, significant changes around
\emph{T\textsubscript{S}} should be expected for the $E$\textsubscript{g} mode.
Another relevant and important result from the paper by Barton {\it et al.}~\cite{barton2014structural} is the presence of magneto-dielectric coupling,
which is evident from the fitting of the temperature-dependent dielectric-constant data of GCO with the Barrett equation for \emph{T} \textgreater{} \emph{T}\textsubscript{N} (similar to previous reports on MnO and MnF\textsubscript{2}~\cite{seehra1981dielectric,seehra1984anomalous,seehra1986effect}),
yielding 339\,cm$^{-1}$ as the frequency of the coupling mode. This frequency is close to that of the \emph{E\textsubscript{g}} mode determined in this work.
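For reference, the functional form commonly used in such fits is the standard Barrett expression (quoted here for the reader's convenience; the GCO fit parameters themselves are given in Ref.~\cite{barton2014structural}):

```latex
\begin{equation*}
\varepsilon(T) \;=\; A \;+\;
\frac{C}{\left(\tfrac{T_{1}}{2}\right)\coth\!\left(\tfrac{T_{1}}{2T}\right) \;-\; T_{0}},
\end{equation*}
```

where $A$, $C$, and $T_{0}$ are fit constants and $T_{1}$ sets the energy scale of the coupling mode through $k_{B}T_{1} = hc\,\tilde{\nu}$; the quoted $\tilde{\nu} = 339$\,cm$^{-1}$ corresponds to $T_{1} \approx 488$\,K.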
Keeping the above comments in mind, the Raman spectra of GCO recorded at
various temperatures between 5 and 300\,K are shown in Fig.~\ref{fig:raman_temp5_300} with each line identified with one of the five Raman-active modes. For each line, except
the $T$\textsubscript{2g}(3) mode whose intensity is too weak for accurate measurements, we measured its position, full width at half maximum
(FWHM) and line intensity (area under the peak),
and plotted these quantities as a function of temperature in Fig.~\ref{fig:raman_peaktemp}. The
positions of \emph{T}\textsubscript{N} = 21\,K and
\emph{T}\textsubscript{S} = 16\,K are also marked by vertical dashed
lines in these plots.
Qualitative interpretations of these results are presented below.
\begin{figure}[htb!]
\centering
\includegraphics[trim=0.2cm 0cm 0cm 1.8cm, clip=true,scale=0.36]{fig-line-seperation.pdf}
\caption{The temperature dependence of the line separation $\Delta \omega$ of the low-frequency shoulder from the position of the $T$\textsubscript{2g}(1) peak. Inset marks the shoulder appearing on the low-frequency side of the $T$\textsubscript{2g}(1) peak. }
\label{fig:temp_t2g1}
\end{figure}
A detailed examination of the plots shown in Fig.~\ref{fig:raman_peaktemp} reveals some interesting features. First, for all the four observed Raman modes {\it viz.,} $A$\textsubscript{1g}, $E$\textsubscript{g}, $T$\textsubscript{2g}(1), and
$T$\textsubscript{2g}(2), the intensity of the Raman lines increases with
decreasing temperature below \emph{T}\textsubscript{S}, which is somewhat similar to the variation of the order parameter.
According to the Suzuki and Kamimura theory~\cite{suzuki1973theory} for the spin-dependent Raman scattering, the magnetic order significantly influences the phonon Raman efficiency through the dependence of the optical dipole transitions on the relative orientation of the adjacent spins.
Generally, the temperature dependence of the integrated Raman intensity is proportional to the nearest neighbor spin correlation function~\cite{balkanski1987magnetic}.
Also, the emergence of an AFM order below the $T_N$ enhances the Raman intensity due to the Brillouin-zone folding since the magnetic unit cell would be doubled in size compared to the structural unit cell~\cite{suzuki1973theory, balkanski1987magnetic}.
As a result, the Raman intensity is always enhanced below the magnetic transition in both FM and AFM systems.
The second noticeable effect is the dramatic changes observed in the FWHM for the
$T$\textsubscript{2g}(1), $T$\textsubscript{2g}(2) and \emph{E}\textsubscript{g}
modes between \emph{T}\textsubscript{N} and \emph{T}\textsubscript{S} along with weaker anomalies in the line positions of these modes.
As argued earlier based on the comparison
with data on other spinels, significant changes due to the structural
transition at \emph{T}\textsubscript{S} were expected in the line
parameters of the \emph{E}\textsubscript{g} mode.
Results presented in Fig.~\ref{fig:raman_peaktemp} show that the
effects of magnetic ordering at \emph{T}\textsubscript{N} and structural transition at
\emph{T}\textsubscript{S} for the $T$\textsubscript{2g}(1), $T$\textsubscript{2g}(2)
and \emph{E}\textsubscript{g} modes are significant.
The Raman linewidth is expected to decrease with decreasing temperature since phonon scattering is usually suppressed at low temperatures.
As can be seen from Fig.~\ref{fig:raman_peaktemp}, the FWHM ($\Delta F$) indeed decreases below \emph{T}\textsubscript{N} down to \emph{T}\textsubscript{S}, which clearly indicates that the structural transition is independent of the magnetic transition.
Also, only for T \textless \emph{T}\textsubscript{S}, there is (roughly) an overall increase in the FWHM of all four Raman-active modes.
This could be associated with the cubic-to-tetragonal structural distortion occurring at \emph{T}\textsubscript{S}, since this distortion can lift the degeneracy of the degenerate Raman-active modes, with the exception of the nondegenerate \emph{A}\textsubscript{1g} mode. It is possible that the distortion-split modes do not show up distinctly in our Raman measurements because of the small magnitude of the frequency shift; instead, they may form a convoluted peak with a larger FWHM.
Another possible explanation could be local structural disorder driven by the randomly distributed Ge atoms, which may increase the linewidth at \emph{T} $<$ \emph{T}\textsubscript{S}.
Another noteworthy feature evident from the Raman spectra at low temperatures is the separation of a shoulder, marked by an arrow in the inset of Fig.~\ref{fig:temp_t2g1}, on the low-frequency side of the $T$\textsubscript{2g}(1) line.
The origin of this shoulder is not yet well understood. However, we think it could be attributed to the magnon-induced excitations~\cite{Zhang_NatComm2016}.
In Fig.~\ref{fig:temp_t2g1}, we plot the temperature dependence of the frequency shift of this shoulder $\Delta \omega$ from the $T$\textsubscript{2g}(1) line.
We note that $\Delta \omega$ increases with lowering temperature and attains a maximum value at \emph{T}\textsubscript{N}. With a further decrease in temperature (\emph{T}\textsubscript{S} $<$ \emph{T} $<$ \emph{T}\textsubscript{N}), $\Delta \omega$ starts decreasing, and it shows an upturn at \emph{T}\textsubscript{S}.
Such a temperature dependence of $\Delta \omega$ implies the presence of two distinct phase transitions, one magnetic and another structural, in GCO, thus, validating the claim of Barton \emph{et al.}~\cite{barton2014structural} that the structural phase transition in GCO does not occur exactly at \emph{T}\textsubscript{N}, rather it follows the magnetic phase transitions at 21\,K and occurs at 16\,K.
Our DFT+$U$ calculations further support this argument as we do not notice any phonon instability when the magnetic order is changed from FM to AFM.
This suggests that no structural phase transition should occur exactly at \emph{T}\textsubscript{N}.
However, below \emph{T}\textsubscript{N} the system could undergo a structural phase transition due to the relaxation of stress and forces on atoms within the AFM phase~\cite{KFR2012}.
\section{Conclusions}
\label{sec:conclusions}
Results from our \ss{combined} experimental and computational investigations of the IR and Raman-active modes of the normal spinel GeCo\textsubscript{2}O\textsubscript{4}
\ss{with the effective spin $S$=1/2 ground state}
have been presented here with the following major conclusions: (i) The measured frequencies of the IR and Raman-active modes at room temperature are in good agreement with the results obtained from our DFT+$U$ calculations. (ii) All the IR and Raman-active modes exhibit moderate spin-phonon coupling in GeCo\textsubscript{2}O\textsubscript{4}. (iii) The temperature dependence of the Raman-active modes carried out between 5\,K and 100\,K, with special attention given to the region between \emph{T}\textsubscript{N} ($\sim$ 21\,K) and \emph{T}\textsubscript{S} ($\sim$ 16\,K), shows noticeable anomalies in the line parameters of the Raman-active modes.
(iv) The temperature-dependent frequency shift of a shoulder appearing near the peak of the Raman-active mode $T$\textsubscript{2g}(1) validates that the structural phase transition in GeCo\textsubscript{2}O\textsubscript{4} is distinct from the magnetic phase transition occurring at \emph{T}\textsubscript{N}.
Investigation of the temperature dependence of the IR modes covering the region below \emph{T}\textsubscript{N} is recommended since it is likely to provide significant information on the transitions at \emph{T}\textsubscript{N} and \emph{T}\textsubscript{S}.
Our DFT+$U$ calculations reveal that exchange interactions up to at least the third neighbors are required to correctly describe the low-temperature antiferromagnetic ordering in GeCo\textsubscript{2}O\textsubscript{4}.
\ss{We find that the nearest-neighbor magnetic exchange interaction has a ferromagnetic nature and is a superexchange interaction mediated {\it via} an intermediate oxygen ion with a Co-O-Co bond angle of $\theta = 90^{\circ}$.
Instead, the second and third near-neighbor exchange interactions are antiferromagnetic in nature, and they involve more than one ion along the exchange interaction path corresponding to the super-super exchange interaction.
These interactions play a vital role in stabilizing the ($\textbf{q}$= $\frac{1}{2},\frac{1}{2},\frac{1}{2}$) antiferromagnetic order in GeCo\textsubscript{2}O\textsubscript{4} at low temperatures.}
\ss{The presence of the spin $S$=1/2 ground state in GeCo\textsubscript{2}O\textsubscript{4} due to spin-orbit coupling and local Jahn-Teller distortion effects, discussed in detail in Ref.~\cite{pramanik2019magnetic}, gets additional support from the recently reported results in an Ising linear chain system CoNb\textsubscript{2}O\textsubscript{6} having a similar $S$ =1/2 ground state of Co\textsuperscript{2+} ions~\cite{thota2021}. Lastly, we note that inclusion of the spin-orbit coupling and local Jahn-Teller distortion effects in DFT+$U$ calculations may slightly change the quantitative values reported in this work without affecting the overall physics of the studied system.}
\section*{ACKNOWLEDGEMENTS}
S.S., K.R., and D.V. acknowledge the support from Office of Naval Research (ONR) Grants N00014-16-1-2951 and N00014-19-1-2073. P.P. and S.G. acknowledge the FIST program of Department of Science and Technology, India for partial support of this work (Ref. No. SR/FST/PSII-020/2009 and SR/FST/PSII-037/2016).
\vspace{0.3 cm}
\textsuperscript{*}\,These authors contributed equally to this work.\\
\vspace{-0.2 cm}
Corresponding author(s):
\textsuperscript{\textdagger}\,sobhit.singh@rutgers.edu,
\textsuperscript{\ddag} \,subhasht@iitg.ac.in
The next generation wireless communication systems are expected to provide $1000\times$ increase in data traffic and support billions of internet-of-things (IoT) devices. However, the limitation of battery capacity will be a bottleneck and it is of vital importance to prolong the lifetime of energy-constrained wireless devices. To this end, wireless information and power transfer (WIPT) has been regarded as a promising solution to achieve two-way communications and at the same time provide cost-effective energy supplies for low-power IoT devices. Rather than relying solely on the batteries, IoT devices are also able to replenish energy by WIPT in a sustainable and controllable way \cite{Lu2015}. In general, there are two solutions to implement radio-frequency (RF) based WIPT in practice: wireless powered communication networks (WPCNs) and simultaneous wireless information and power transfer (SWIPT). In WPCNs, wireless nodes are first powered by an energy transmitter and then use the harvested energy to transmit data \cite{Ju2014c,Clerckx2017,Ju2014a}, while SWIPT uses the same RF signal to convey energy and information simultaneously \cite{Liu2016,Zhang2013,Liu2016a,Liu2013,Zhang2016c,Liu2017a,Clerckx2018,Liu2017}.
However, WIPT suffers from the fast decay of wireless energy transfer over distance. The traditional way to deal with this problem is energy beamforming using multi-antenna techniques, which is still inefficient due to the distance limitation of WIPT. Interestingly, this problem can be alleviated in distributed antenna systems (DAS) \cite{Zhou2003,hu2007,He2013,Li2016}. Different from conventional base stations with co-located antennas, in DAS the role of the base station is taken by a central processor (CP) and a set of distributed antenna (DA) ports. Specifically, the CP handles the computationally intensive baseband signal processing, while the DA ports, geographically distributed throughout the area and connected to the CP via high capacity backhaul links, perform all RF operations. Thus DAS can substantially improve the system's coverage and throughput. More importantly, as the access distances between the users and the DA ports are substantially reduced in DAS, WIPT becomes more flexible and efficient. As a result, many studies have integrated WIPT into DAS. For instance, the security of WIPT based DAS has been investigated in \cite{Ng2015}. WIPT in massive DAS has been considered in \cite{Yuan2015}, while SWIPT for multiple-input single-output (MISO) DAS has been investigated in \cite{Yuan2017}.
On the other hand, due to the rapidly increasing energy cost in communication systems, energy efficiency (EE) has become an important system performance metric \cite{Zhang2017,Miao2013}. EE optimization has been widely studied in WIPT \cite{Ng2013a,Xiong2014,Ng2013,Huang2018a} from a network-centric (NC) perspective, namely, NC-EE. As shown in \cite{Huang2018a}, the NC-EE of WIPT usually leads to unbalanced or unfair energy consumption, i.e., some users consume most of the network resources while others may be idle. Such NC-EE optimization is suitable for overall system design. Besides that, improving the EEs of individual users is equally important for improving users' qualities of experience (QoE), because users have different battery capacities and heterogeneous QoE requirements. Therefore, some works \cite{Yu2015,Yu2015a,Wu2016b,Ding2018} have considered the weighted sum of the individual users' EEs as the performance metric, i.e., user-centric EE (UC-EE). For example, joint downlink and uplink resource allocation for UC-EE maximization has been studied in \cite{Yu2015}. UC-EE in multiple radio access technologies (RATs) heterogeneous networks (HetNets) has been considered in \cite{Yu2015a}. Joint time allocation and power control for UC-EE maximization has been studied for WPCNs in \cite{Wu2016b}, where the users first harvest energy from a dedicated energy station and then transmit information to an access point using the harvested energy in a time-division multiple access (TDMA) manner. UC-EE in WPCNs has also been investigated in \cite{Ding2018}, where users are allowed to transmit information simultaneously in the uplink channel.
In this paper, we study both UC-EE and NC-EE in WIPT based DAS as shown in Fig. \ref{fig:system}, where TDMA is adopted for downlink multiuser information transmission. In the considered system, when a user is scheduled for receiving information, such as user 2 in the figure, the remaining users, such as user 1, harvest energy from the same RF signal at the same time. Different from \cite{Wu2016b,Ding2018}, in the considered system there is no need for an extra dedicated energy signal to charge the users, since each user harvests energy whenever it is not scheduled for receiving information. Additionally, rather than being served by a single energy station and a single access point as in \cite{Wu2016b,Ding2018}, each user can receive information from a different group of DA ports due to the geographical distribution, so that the DA ports used for information decoding and energy harvesting may differ across users, making the system more flexible.
The main contributions of this paper are summarized as follows:
\begin{itemize}
\item We study energy-efficient resource allocation in WIPT based DAS for both UC-EE and NC-EE maximization. Each un-scheduled user is allowed to harvest energy from the information bearing signals conveyed for other users. We jointly optimize transmit time and power for TDMA-based multiuser transmission while satisfying the minimum harvested energy requirement for each user and the maximum transmit power budget for each DA port.
\item For the UC-EE maximization problem, the objective function has the structure of the sum-of-ratios. Therefore, we convert it into an equivalent subtractive form by introducing a set of auxiliary parameters, and then propose an iterative algorithm to solve the equivalent optimization problem in two layers. In the inner layer, the subtractive formed problem is optimally solved by Lagrangian duality method because of the concavity of the transformed problem. In the outer layer, we update the auxiliary parameters with the damped Newton method, which ensures global convergence to the optimal solution of the original problem.
\item We also investigate the NC-EE maximization problem in the same system, which is a fractional programming problem. We also develop a two-layer iterative algorithm to find the optimal solution. In the inner layer, two block coordinate descent (BCD) optimization loops are proposed to find the optimal time and power allocation. In the outer layer, the Dinkelbach method is used for solving the fractional structure of the original problem.
\end{itemize}
The rest of this paper is organized as follows. Section \ref{se1} introduces the system model and formulates the UC-EE maximization problem and NC-EE maximization problem, respectively. Sections \ref{se2} and \ref{se4} solve the UC-EE and NC-EE maximization problems, respectively. Section \ref{se5} provides extensive simulation results and discussions. Finally, Section \ref{se6} concludes the paper.
\section{System Model and Problem Formulation}\label{se1}
In this section, we first introduce the system model of the WIPT based DAS. Then we formulate the UC-EE and NC-EE maximization problems, respectively.
\subsection{System Model}
\begin{figure}[t]
\begin{centering}
\includegraphics[scale=0.6]{JSAC-1570440702-Fig1.eps}
\vspace{-0.1cm}
\caption{ An example of system model of DAS. }\label{fig:system}
\end{centering}
\vspace{-0.1cm}
\end{figure}
As shown in Fig. \ref{fig:system}, we consider a WIPT based downlink DAS consisting of a CP, $K$ users, and $N$ DA ports with independent power supplies. For ease of implementation, both the DA ports and the users are equipped with a single antenna. TDMA is adopted for the downlink multiuser transmission. That is, each frame is divided into $K$ slots, and user $k$ is scheduled in slot $k$ with time duration $\tau_{k}$. We model the channel power gains as $h_{i,k}=cd_{i,k}^{-\phi}\rho^{2}_{i,k}, \forall i,k$, where $c$ is the pathloss at a reference distance of 1 m, $d_{i,k}$ denotes the distance between DA port $i$ and user $k$, $\phi$ is the pathloss exponent, and the $\rho_{i,k}$'s are independent and identically distributed (i.i.d.) with zero mean and unit variance. Note that in DAS, the transmission to each user $k$ takes place over a MISO channel, and the received signal at user $k$ can be expressed as
\begin{align}\label{eqn:r88}
y_{k}=\bm{g}_{k}^{T}\bm{x}_{k}+z_{k},
\end{align}
where $\bm{g}_{k}=[\sqrt{h_{1,k}}, \ldots, \sqrt{h_{N,k}}]^{T}$ denotes the channel coefficient vector between the DA ports and user $k$, $\bm{x}_{k}=[x_{1,k}, \ldots, x_{N,k}]^{T}$ denotes the transmitted signal vector for user $k$, and $z_{k}$ is the additive white Gaussian noise (AWGN) with zero mean and variance $\sigma^{2}$. We assume that global channel state information (CSI) is available at the CP. It is also assumed that the CSI remains unchanged within each frame but may vary from one frame to another. Denote $\bm{Q}_{k}=E[\bm{x}_{k}\bm{x}_{k}^{\dag}]$ as the covariance of the Gaussian input, where $(\cdot)^{\dag}$ denotes the conjugate transpose. The achievable rate at user $k$ can then be written as
\begin{align}\label{eqn:r89}
R_{k}=\tau_{k}\log\left(1+\frac{1}{\sigma^{2}}\bm{g}_{k}^{T}\bm{Q}_{k}(\bm{g}^{\dag}_{k})^{T}\right).
\end{align}
According to \cite{Vu2011}, in DAS the DA ports are distributed throughout the area with independent power budgets and act independently. Thus the DA ports do not perform joint coding or joint signal processing. Therefore, the transmitted signals at the $N$ DA ports are independent and the input covariance is $\bm{Q}_{k}=\text{diag} \{p_{1,k}, \ldots,p_{N, k}\}$, where $p_{i,k}$ is the transmit power from DA port $i$ to user $k$. Then the achievable rate at user $k$ can be further derived as
\begin{eqnarray}
\label{eqn:r02}
R_{k}&=\tau_{k}\log\left(1+\frac{\sum^{N}_{i=1}|g_{i,k}|^{2}p_{i,k}}{\sigma^{2}}\right)\nonumber \\&=\tau_{k}\log\left(1+\frac{\sum^{N}_{i=1}h_{i,k}p_{i,k}}{\sigma^{2}}\right).
\end{eqnarray}
Note that $R_{k}$ in \eqref{eqn:r02} provides a lower bound on the achievable rate of the corresponding MISO channel with maximum ratio transmission (MRT). It is also worth noting that \eqref{eqn:r02} implicitly captures the DA port selection (or antenna selection/clustering) issue. That is, $p_{i,k}$ is positive if DA port $i$ is selected to transmit information to user $k$, and $p_{i,k}$ is zero otherwise.
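As a toy numerical illustration of the rate expression \eqref{eqn:r02}, the following sketch evaluates $R_k$ for a small instance with $N=2$ DA ports and $K=2$ users; all channel and power values are hypothetical.

```python
import math

# Toy instance of the rate expression: N = 2 DA ports, K = 2 users.
# All channel and power values below are hypothetical.
c, phi, sigma2 = 1e-3, 3.0, 1e-9            # reference pathloss, exponent, noise power
d = [[10.0, 25.0], [30.0, 8.0]]             # d[i][k]: distance (m) from DA port i to user k
rho2 = [[1.2, 0.7], [0.5, 1.5]]             # squared small-scale fading gains rho_{i,k}^2
h = [[c * d[i][k] ** (-phi) * rho2[i][k] for k in range(2)] for i in range(2)]

def rate(k, tau_k, p):
    """R_k = tau_k * ln(1 + sum_i h_{i,k} p_{i,k} / sigma^2)."""
    snr = sum(h[i][k] * p[i][k] for i in range(2)) / sigma2
    return tau_k * math.log(1.0 + snr)

p = [[0.5, 0.2], [0.1, 0.8]]                # p[i][k]: transmit power (W), assumed
print(rate(0, 0.5, p), rate(1, 0.5, p))
```

Doubling all transmit powers strictly increases the rate, consistent with the monotonicity of \eqref{eqn:r02}.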
We assume that each user is capable of energy harvesting, so that when a particular user $k'$ is scheduled to receive information, all the other users $k\neq k'$ can harvest energy from the same RF signal conveying information to user $k'$. Denoting $\zeta$ as the energy conversion efficiency, the harvested energy at user $k$ is given by
\begin{align}\label{eqn:r01}
E_{k}=\zeta\sum^{N}_{i=1}h_{i,k}\sum^{K}_{k'\neq k}\tau_{k'}p_{i,k'}.
\end{align}
In \eqref{eqn:r01}, $\sum^{K}_{k'\neq k}\tau_{k'}p_{i,k'}$ is the total energy used for transmitting information from DA port $i$ to all users except user $k$.
Denote $p^{c}_{k}$ as user $k$'s circuit power consumption, e.g., for signal processing and mixers. Then the total energy consumption for transmitting information to user $k$ can be written as $\tau_{k}(\sum^{N}_{i=1}p_{i,k}+p^{c}_{k})$, which consists of two parts: the energy consumed by the power amplifiers at all DA ports and the circuit energy consumption of user $k$.
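To make the energy bookkeeping concrete, a minimal sketch (all numbers hypothetical) evaluates the harvested energy \eqref{eqn:r01} and the per-user energy consumption $\tau_{k}(\sum_{i}p_{i,k}+p^{c}_{k})$:

```python
# Energy bookkeeping for a toy instance with N = 2 DA ports and K = 2 users;
# all numbers are hypothetical.
zeta = 0.6                                 # energy conversion efficiency
h = [[1.0e-6, 3.0e-7], [2.0e-7, 1.5e-6]]   # h[i][k]: channel power gains
tau = [0.4, 0.6]                           # slot durations (normalized)
p = [[0.5, 0.2], [0.1, 0.8]]               # p[i][k]: transmit powers (W)
pc = [0.05, 0.05]                          # circuit powers of the users (W)

def harvested(k):
    """E_k: user k harvests during all slots in which the other users are served."""
    return zeta * sum(h[i][k] * sum(tau[kp] * p[i][kp] for kp in range(2) if kp != k)
                      for i in range(2))

def consumption(k):
    """Energy used for serving user k: tau_k * (sum_i p_{i,k} + p^c_k)."""
    return tau[k] * (sum(p[i][k] for i in range(2)) + pc[k])

print([harvested(k) for k in range(2)], [consumption(k) for k in range(2)])
```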
\subsection{UC-EE Maximization Problem Formulation}
From the UC-EE perspective, the EE of user $k$ is defined as the ratio of its achievable rate and its total energy consumption, which (in bits/Hz/Joule) can be written as
\begin{align}\label{eqn:r03}
\eta_{k}=\frac{\tau_{k}\ln\bigl(1+\frac{\sum_{i=1}^N h_{i,k}p_{i,k}}{\sigma^{2}}\bigr)}{\tau_{k}\bigl(\sum^{N}_{i=1}p_{i,k}+p^{c}_{k}\bigr)}=\frac{\ln\bigl(1+\frac{\sum_{i=1}^N h_{i,k}p_{i,k}}{\sigma^{2}}\bigr)}{\sum^{N}_{i=1}p_{i,k}+p^{c}_{k}},
\end{align}
where $\tau_{k}$ cancels out in the individual EE of user $k$.
The objective of UC-EE optimization is to balance the EEs among users. We adopt the weighted sum of the users' EEs as the objective function. We maximize the UC-EE of all users by varying the transmit power of the DA ports and the time duration of each user, subject to the minimum harvested energy requirement $\bar{E}_k$ of each user $k$ and the maximum transmit power constraint $\bar{P}_{i}$ of each DA port $i$. Denoting $\bm{p}=\{p_{i,k}\}$ and $\bm{\tau}=\{\tau_{k}\}$, the UC-EE maximization problem can thus be formulated as
\begin{eqnarray}\label{eqn:r04}
{\rm (P1):}\nonumber
\max_{\bm{\tau},\bm{p}}&&\sum^{K}_{k=1}w_{k}\eta_{k} \\
{\rm s.t.}&&E_{k}\ge \bar{E}_k,\forall k,\label{eqn:r05}\\
&& 0\leq \tau_{k}\leq 1,\forall k,\label{eqn:r06}\\
&& 0\leq p_{i,k}\leq \bar{P_{i}},\forall i, k,\label{eqn:r07}\\
&& \sum^{K}_{k=1}\tau_{k}\leq 1,
\label{eqn:r08}
\end{eqnarray}
where $w_{k}$ is a non-negative weight assigned to user $k$'s EE. The weights are system parameters that reflect the priorities among users.
\subsection{NC-EE Maximization Problem Formulation}
We also consider the EE from the network's perspective. In this case, the network's total energy consumption is
\begin{align}\label{eqn:r84}
P_{\mathrm{total}} = \sum^{K}_{k=1}\tau_{k}\biggl(\sum^{N}_{i=1}p_{i,k}+p^{c}_{k}\biggr),
\end{align}
which includes the total transmit energy consumption at all DA ports and the total circuit energy consumption at all users. The NC-EE (in bits/Hz/Joule) for this network can be expressed as
\begin{align}\label{eqn:r09}
\eta=\frac{\sum^{K}_{k=1}w_{k}R_{k}}{P_{\mathrm{total}}}.
\end{align}
Similar to the UC-EE problem in (P1), the NC-EE maximization problem can be formulated as
\begin{eqnarray}\label{eqn:r10}
{\rm (P2):}\nonumber
\max_{\bm{\tau},\bm{p}}&&\eta \\
{\rm s.t.}&&E_{k}\ge \bar{E}_k,\forall k,\label{eqn:r11}\\
&& 0\leq \tau_{k}\leq 1,\forall k,\label{eqn:r12}\\
&& 0\leq p_{i,k}\leq \bar{P_{i}},\forall i, k,\label{eqn:r13}\\
&& \sum^{K}_{k=1}\tau_{k}\leq 1.
\label{eqn:r14}
\end{eqnarray}
\begin{lemma}\label{Lm0}
The optimal value of the UC-EE maximization problem is always no smaller than that of the NC-EE maximization problem.
\end{lemma}
\emph{Proof:}
Please refer to Appendix \ref{AP0}.$\hfill\blacksquare$
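The inequality behind Lemma \ref{Lm0} can also be checked numerically: at any fixed feasible point, each user's energy consumption is at most the network total, so the weighted sum of per-user ratios dominates the single network-level ratio, and the same ordering carries over to the maxima. A toy random check (all values hypothetical):

```python
import random

random.seed(0)
# For nonnegative rates R_k, positive per-user energies P_k, and weights w_k:
# sum_k w_k R_k / P_k >= (sum_k w_k R_k) / (sum_k P_k), since each P_k <= sum_j P_j.
for _ in range(1000):
    K = random.randint(1, 6)
    R = [random.uniform(0.0, 5.0) for _ in range(K)]
    P = [random.uniform(0.1, 2.0) for _ in range(K)]
    w = [random.uniform(0.1, 1.0) for _ in range(K)]
    uc = sum(w[k] * R[k] / P[k] for k in range(K))
    nc = sum(w[k] * R[k] for k in range(K)) / sum(P)
    assert uc >= nc - 1e-12
print("UC-EE >= NC-EE held on all random trials")
```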
\section{Optimal Solution for UC-EE Maximization Problem}\label{se2}
In this section, we address the UC-EE maximization problem (P1). We first transform it into a concave form via a variable substitution and a sum-of-ratios transformation, and then develop an iterative algorithm to obtain the globally optimal solution.
\subsection{Problem Transformation}
The constraint \eqref{eqn:r05} is non-convex since it contains the product of two optimization variables. To make problem (P1) more tractable, we introduce a set of variables $\bm{s}=\{s_{i,k}\}$ with $s_{i,k}=\tau_{k}p_{i,k}$, which represent the allocated energies. Then user $k$'s EE $\eta_{k}$ in \eqref{eqn:r03} becomes
\begin{align}
\eta_{k}=\frac{\ln\biggl(1+\frac{\sum_{i=1}^N h_{i,k}s_{i,k}}{\sigma^{2}\tau_{k}}\biggr)}{\sum^{N}_{i=1}\frac{s_{i,k}}{\tau_{k}}+p^{c}_{k}}=\frac{\tau_{k}\ln\biggl(1+\frac{\sum_{i=1}^N h_{i,k}s_{i,k}}{\sigma^{2}\tau_{k}}\biggr)}{\sum^{N}_{i=1}s_{i,k}+\tau_{k}p^{c}_{k}}.
\end{align}
In addition, substituting $\bm{s}$ into constraints \eqref{eqn:r05} and \eqref{eqn:r07}, problem (P1) can be rewritten as
\begin{eqnarray}
{\rm (P1'):}\nonumber\max_{\bm{\tau},\bm{s}}&&\sum^{K}_{k=1}\frac{w_{k}\tau_{k}\ln\biggl(1+\frac{\sum_{i=1}^N h_{i,k}s_{i,k}}{\sigma^{2}\tau_{k}}\biggr)}{\sum^{N}_{i=1}s_{i,k}+\tau_{k}p^{c}_{k}}\nonumber\\
{\rm s.t.}&&\zeta\sum_{i=1}^{N}h_{i,k}\sum^{K}_{k'\neq k}s_{i,k'} \ge \bar{E}_k,\forall k, \label{C1}\\
&& 0\leq \tau_{k}\leq 1,\forall k,\label{C7} \\
&& 0\leq s_{i,k}\leq \tau_{k}\bar{P_{i}},\forall i, k,\label{C2} \\
&& \sum^{K}_{k=1}\tau_{k}\leq 1.\label{C3}
\end{eqnarray}
Note that once we obtain the optimal energy and time variables $(\bm{s}^*, \bm{\tau}^*)$ by solving problem (P1'), we can recover the optimal power allocation $\bm{p}^*$ by
\begin{gather}
p_{i,k}^*=\left\{\begin{array}{ll}
s^*_{i,k}/\tau_{k}^* &\text{if } \tau_{k}^*>0,\\
0&
\text{if } \tau^*_{k}=0.\label{eqn:r87}
\end{array}\right.
\end{gather}
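The recovery step \eqref{eqn:r87} is an element-wise division guarded against zero-duration slots; a minimal sketch (shapes and values hypothetical):

```python
def recover_power(s, tau):
    """Recover p_{i,k} = s_{i,k}/tau_k when tau_k > 0, and 0 when tau_k = 0."""
    N, K = len(s), len(tau)
    return [[s[i][k] / tau[k] if tau[k] > 0 else 0.0 for k in range(K)]
            for i in range(N)]

p = recover_power([[0.2, 0.0], [0.1, 0.0]], [0.5, 0.0])
print(p)  # [[0.4, 0.0], [0.2, 0.0]]
```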
The objective function of problem (P1') has the sum-of-ratios structure and is thus non-concave. Based on \cite{Jong2012}, the sum-of-ratios optimization problem can be transformed into a parameterized subtractive-form problem as follows.
Denote $\bm{\alpha}=(\alpha_{1},\ldots,\alpha_{K})$ and $\bm{\beta}=(\beta_{1},\ldots,\beta_{K})$. If $(\bm{\tau}^*,\bm{s}^*)$ is a solution of problem (P1'), then there always exist $\bm{\alpha}^*$ and $\bm{\beta}^*$ such that $(\bm{\tau}^*,\bm{s}^*)$ is also a solution of the following problem with $\bm{\alpha}=\bm{\alpha}^*$ and $\bm{\beta}=\bm{\beta}^*$.
\begin{eqnarray}\label{eqn:r15}
\max && \sum^{K}_{k=1}\alpha_{k}\biggl(w_{k}R_{k}-\beta_{k}\bigl(\sum^{N}_{i=1}s_{i,k}+\tau_{k}p^{c}_{k}\bigr)\biggr)\nonumber\\
{\rm s.t.}&& \eqref{C1}, \eqref{C7}, \eqref{C2}, \eqref{C3},
\end{eqnarray}
where $R_{k}$ is obtained from \eqref{eqn:r02} by substituting $p_{i,k}=s_{i,k}/\tau_{k}$ as in \eqref{eqn:r87}. Additionally, $(\bm{\tau}^*,\bm{s}^*)$ also satisfies the following conditions with $\bm{\alpha}=\bm{\alpha}^*$ and $\bm{\beta}=\bm{\beta}^*$.
\begin{eqnarray}
&&1-\alpha_{k}\biggl(\sum^{N}_{i=1}s_{i,k}^*+\tau_{k}^*p^{c}_{k}\biggr)=0, \forall k,\label{eqn:r16}\\
&&w_{k}R_{k}^*-\beta_{k}\biggl(\sum^{N}_{i=1}s^*_{i,k}+\tau^*_{k}p^{c}_{k}\biggr)=0, \forall k.\label{eqn:r17}
\end{eqnarray}
By the above transformation, problem (P1') can be solved in the equivalent parameterized form \eqref{eqn:r15}, where the objective function is in subtractive form with the extra parameters $\bm{\alpha}$ and $\bm{\beta}$. As a result, problem (P1') can be solved in two layers: in the inner layer, the optimal time and energy variables $(\bm{\tau}^*, \bm{s}^*)$ are obtained by solving the subtractive-form problem \eqref{eqn:r15} with given $(\bm{\alpha},\bm{\beta})$; in the outer layer, we find the optimal $(\bm{\alpha}^*,\bm{\beta}^*)$ satisfying \eqref{eqn:r16} and \eqref{eqn:r17}.
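For a fixed candidate $(\bm{\tau}^*,\bm{s}^*)$, the conditions \eqref{eqn:r16} and \eqref{eqn:r17} pin down the parameters as $\alpha_k=1/(\sum_i s^*_{i,k}+\tau^*_k p^c_k)$ and $\beta_k=w_k R^*_k/(\sum_i s^*_{i,k}+\tau^*_k p^c_k)$, i.e., $\beta_k$ equals user $k$'s weighted EE at the candidate point. A quick numerical sanity check with hypothetical values:

```python
# Candidate solution with hypothetical per-user rates R_k, weights w_k, and
# denominators D_k = sum_i s_{i,k} + tau_k * p^c_k.  The two optimality
# conditions fix alpha_k = 1/D_k and beta_k = w_k R_k / D_k, so beta_k is
# exactly user k's weighted EE at the candidate point.
w, R, D = [1.0, 2.0], [3.1, 1.7], [0.8, 1.3]
alpha = [1.0 / D[k] for k in range(2)]
beta = [w[k] * R[k] / D[k] for k in range(2)]
for k in range(2):
    assert abs(1.0 - alpha[k] * D[k]) < 1e-12           # first condition
    assert abs(w[k] * R[k] - beta[k] * D[k]) < 1e-12    # second condition
print(alpha, beta)
```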
\subsection{Finding Optimal $(\bm{\tau}^*, \bm{s}^*)$ for Given $(\bm{\alpha},\bm{\beta})$}
\begin{lemma}\label{Lm1}
The objective function of problem \eqref{eqn:r15} is jointly concave over $\bm{s}$ and $\bm{\tau}$.
\end{lemma}
\emph{Proof:}
Please refer to Appendix \ref{AP1}.$\hfill\blacksquare$
Since all constraints in problem \eqref{eqn:r15} are affine, problem \eqref{eqn:r15} is convex and we can use the Lagrangian duality method to solve this subtractive-form maximization problem optimally. The Lagrangian function of problem \eqref{eqn:r15} can be written as
\begin{align}\label{eqn:r24}
&L_1(\bm{s},\bm{\tau},\bm{\mu},\bm{\upsilon},\lambda)=\nonumber\\&\sum^{K}_{k=1}\alpha_{k}\biggl(w_{k}\tau_{k}\ln\bigl(1+\frac{\sum_{i=1}^N h_{i,k}s_{i,k}}{\sigma^{2}\tau_{k}}\bigr)-\beta_{k}\bigl(\sum^{N}_{i=1}s_{i,k}+\tau_{k}p^{c}_{k}\bigr)\biggr)\nonumber\\&+\sum^{K}_{k=1}\mu_{k}\biggl(\zeta\sum_{i=1}^{N}h_{i,k}\sum^{K}_{k'\neq k}s_{i,k'}-\bar{E}_k\biggr)+\lambda\biggl(1-\sum^{K}_{k=1}\tau_{k}\biggr)\nonumber\\& +\sum^{K}_{k=1}\sum_{i=1}^{N}\upsilon_{i,k}\biggl(\bar{P_{i}}\tau_{k}-s_{i,k}\biggr),
\end{align}
where $\bm{\mu}=\{\mu_{k}\}$, $\bm{\upsilon}=\{\upsilon_{i,k}\}$ and $\lambda$ are the Lagrangian multipliers associated with the constraints \eqref{C1}, \eqref{C2} and \eqref{C3}, respectively. Then the Lagrangian dual function is given by
\begin{align}\label{eqn:r25}
g_1(\bm{\mu},\bm{\upsilon},\lambda)=\max_{\substack{ \{0\leq \tau_{k}\leq 1\} \\ \{s_{i,k}\geq 0\} }}L_1(\bm{s},\bm{\tau},\bm{\mu},\bm{\upsilon},\lambda).
\end{align}
The dual problem is given by
\begin{align}\label{eqn:r26}
\min_{\{\bm{\mu}, \bm{\upsilon}, \lambda\}\geq\bm{0} }g_1(\bm{\mu},\bm{\upsilon},\lambda).
\end{align}
Now we solve problem \eqref{eqn:r25} for given Lagrangian multipliers $\{\bm{\mu},\bm{\upsilon},\lambda\}$. The BCD method \cite{Richtarik2014} can be adopted, where we alternately optimize one of $\bm{\tau}$ and $\bm{s}$ with the other fixed. Note that the Lagrangian function $L_{1}$ is jointly concave in $\bm{\tau}$ and $\bm{s}$ as shown before, which ensures that the BCD method converges to the globally optimal solution.
Given $\bm{s}$, there are two cases for $\tau_{k}^*$. First, if $\sum^{N}_{i=1}s_{i,k}=0$, i.e., $s_{i,k}=0$ for all $i$, no energy is transmitted to user $k$ and we set $\tau_{k}^{*}=0$. Otherwise, with $\sum^{N}_{i=1}s_{i,k}>0$, the partial derivative of the Lagrangian function $L_{1}$ with respect to $\tau_{k}$ is
\begin{align}\label{eqn:r90}
\frac{\partial L_1}{\partial \tau_{k}}=\alpha_{k}w_{k}\ln\biggl(1+\frac{\sum_{i=1}^N h_{i,k}s_{i,k}}{\sigma^{2}\tau_{k}}\biggr)-\alpha_{k}\beta_{k}p^{c}_{k}-\lambda \nonumber\\-\frac{\alpha_{k}w_{k}\sum^{N}_{i=1}h_{i,k}s_{i,k}}{\sigma^{2}\tau_{k}+\sum^{N}_{i=1}h_{i,k}s_{i,k}}+\sum^{N}_{i=1}\bar{P}_{i}\upsilon_{i,k}.
\end{align}
Further, we have
\begin{align}\label{eqn:r30}
\frac{\partial^{2} L_1}{\partial \tau^{2}_{k}}=&-\frac{\alpha_{k}w_{k}\bigl(\sum^{N}_{i=1}h_{i,k}s_{i,k}\bigr)^{2}}{\tau_{k}\bigl(\tau_{k}\sigma^{2}+\sum^{N}_{i=1}h_{i,k}s_{i,k}\bigr)^{2}},
\end{align}
which is negative, so $L_{1}$ is concave in $\tau_{k}$. The optimal $\tau_{k}$ can thus be obtained by setting $\frac{\partial L_1}{\partial \tau_{k}}=0$ while accounting for the constraint $0\leq \tau_{k}\leq 1$. The closed-form solution of $\tau_k^*$ is
\begin{align}\label{eqn:r91}
\tau^{*}_{k} =\left[\tilde{\tau}_{k}\right]^{1}_{0},
\end{align}
where $\tilde{\tau}_{k}$ can be obtained by
\begin{gather}
\tilde{\tau}_{k}=\left\{\begin{array}{ll}\label{eqn:r92}
\frac{\sum^{N}_{i=1}h_{i,k}s_{i,k}}{\sigma^{2}(\exp\{\omega(-\exp\{\frac{A_{k}}{\alpha_{k}w_{k}}-1\})+1-\frac{A_{k}}{\alpha_{k}w_{k}}\}-1)}&A_{k}\leq 0,\\
0&
\text{otherwise,}
\end{array}\right.
\end{gather}
where $A_{k}=\sum^{N}_{i=1}\bar{P}_{i}\upsilon_{i,k}-\alpha_{k}\beta_{k}p^{c}_{k}-\lambda$, and $\omega(x)$ is the inverse function of $f(x) = xe^x$, i.e., the principal branch of the Lambert $W$ function \cite{Corless1996}. Note that in \eqref{eqn:r92} the domain of the $\omega$ function has to be taken into account. The details of obtaining $\tilde{\tau}_{k}$ by solving $\frac{\partial L_1}{\partial \tau_{k}}=0$ can be found in Appendix B of \cite{Kim2015}.
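A numerical sketch of \eqref{eqn:r92} (all per-user constants hypothetical; a simple bisection stands in for a library Lambert-$W$ routine) confirms that the resulting $\tilde{\tau}_{k}$ is a stationary point of $L_1$ in $\tau_k$, before the clipping in \eqref{eqn:r91}:

```python
import math

def lambert_w0(z):
    """Principal branch of w*exp(w) = z for z in [-1/e, 0), via bisection on
    [-1, 0] where w*exp(w) is increasing.  A stand-in for a library routine."""
    lo, hi = -1.0, 0.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid * math.exp(mid) < z:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical per-user constants: alpha_k * w_k = 1, combined constant
# A_k = -0.5 (<= 0 as required), sigma^2 = 1, and sum_i h_{i,k} s_{i,k} = 2.
aw, A, sigma2, hs = 1.0, -0.5, 1.0, 2.0
a = A / aw
u = math.exp(lambert_w0(-math.exp(a - 1.0)) + 1.0 - a)  # u = 1 + hs/(sigma2*tau)
tau = hs / (sigma2 * (u - 1.0))                         # unclipped tilde-tau

# Stationarity check: the tau-derivative of the Lagrangian vanishes at tilde-tau.
x = hs / (sigma2 * tau)
deriv = aw * math.log(1.0 + x) - aw * x / (1.0 + x) + A
print(tau, deriv)
```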
Next, for given $\bm{\tau}$, by the Karush-Kuhn-Tucker (KKT) conditions \cite{Boyd2004}, we have
\begin{align}\label{eqn:r27}
\frac{\partial L_1}{\partial s_{i,k}}=&\frac{\alpha_{k}w_{k}\tau_{k}h_{i,k}}{\tau_{k}\sigma^{2}+\sum^{N}_{i=1}s_{i,k}h_{i,k}}-\alpha_{k}\beta_{k}+\zeta\sum^{K}_{k'\neq k}\mu_{k'}h_{i,k'}\nonumber\\&-\upsilon_{i,k}.
\end{align}
By defining $B_{i,k}=\zeta\sum^{K}_{k'\neq k}\mu_{k'}h_{i,k'}-\alpha_{k}\beta_{k}-\upsilon_{i,k}$, the optimal $s_{i,k}^*$ can be obtained by setting $\frac{\partial L_1}{\partial s_{i,k}}=0$, since \eqref{eqn:r27} is decreasing in $s_{i,k}$, together with the non-negativity constraint $s_{i,k}\geq 0$. Thus $s^*_{i,k}$ is given by
\begin{align}\label{eqn:r28}
s_{i,k}^*=
\biggl[-\frac{\alpha_{k}w_{k}\tau_{k}}{B_{i,k}}-\frac{\tau_{k}\sigma^{2}+\sum^{N}_{j\neq i}s_{j,k}h_{j,k}}{h_{i,k}}\biggr]^{+},
\end{align}
where $[x]^{+}=\max(x,0)$. Note that we use the BCD method to obtain $s^*_{i,k}$: each $s_{i,k}$ is updated as in \eqref{eqn:r28} while the other $s_{j,k}$'s are fixed at their values from the last iteration.
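The coordinate updates \eqref{eqn:r28} can be sketched as follows (hypothetical constants, one user, $N=3$ DA ports; $B_{i,k}<0$ is assumed so that each per-coordinate maximizer is bounded). After convergence, the KKT conditions hold in each coordinate:

```python
# Coordinate-wise (BCD) maximization over the energy variables s_i for one user
# with tau fixed.  All constants are hypothetical; B[i] < 0 collects the linear
# coefficients so that the per-coordinate maximizer is bounded.
aw, tau, sigma2 = 1.0, 0.5, 1.0      # alpha_k * w_k, slot duration, noise power
h = [2.0, 1.0, 0.5]                  # channel gains of N = 3 DA ports
B = [-0.8, -0.9, -1.2]
s = [0.0, 0.0, 0.0]

for _ in range(200):                 # BCD sweeps until convergence
    for i in range(3):
        rest = tau * sigma2 + sum(h[j] * s[j] for j in range(3) if j != i)
        s[i] = max(0.0, -aw * tau / B[i] - rest / h[i])

# KKT check: for s_i > 0 the partial derivative is ~0; for s_i = 0 it is <= 0.
tot = tau * sigma2 + sum(h[j] * s[j] for j in range(3))
for i in range(3):
    g = aw * tau * h[i] / tot + B[i]
    assert (s[i] > 1e-9 and abs(g) < 1e-6) or (s[i] <= 1e-9 and g <= 1e-6)
print(s)
```

Only the most favorable DA port ends up with a positive energy allocation here, illustrating the implicit port selection noted earlier.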
After solving problem \eqref{eqn:r25} with given $\{\bm{\mu},\bm{\upsilon},\lambda\}$, we now address the dual minimization problem \eqref{eqn:r26}, which is convex. We use the ellipsoid method to simultaneously update $\{\bm{\mu}, \bm{\upsilon}, \lambda\}$ toward the optimal values. The subgradients used in the ellipsoid method are given by
\begin{align}
&\Delta \mu_k= \zeta\sum^{N}_{i=1}h_{i,k}\sum^{K}_{k'\neq k}s^*_{i,k'}-\bar{E}_{k}, \forall k,\label{eqn:r85}\\
&\Delta \upsilon_{i,k}=\bar{P}_{i}\tau_{k}^* -s^*_{i,k}, \forall i, k,\label{eqn:r82}\\
&\Delta \lambda=1-\sum^{K}_{k=1}\tau^*_{k}.\label{eqn:r81}
\end{align}
\subsection{Finding Optimal $(\bm{\alpha}^*,\bm{\beta}^*)$ for Given $(\bm{\tau}^*, \bm{s}^*)$}
After solving problem \eqref{eqn:r15} with given $\bm{\alpha}$ and $\bm{\beta}$, we now develop an algorithm to update $\bm{\alpha}$ and $\bm{\beta}$ according to \cite{Jong2012}. To begin with, we define $\bm{\psi}(\bm{\alpha, \beta})=(\psi_{1},\ldots,\psi_{2K})$ as
\begin{align}
&\psi_{k}(\alpha_{k})=\alpha_{k}\biggl(\sum^{N}_{i=1}s_{i,k}+\tau_{k}p^{c}_{k}\biggr)-1,\forall k,\label{eqn:r32}\\
&\psi_{k+K}(\beta_{k})=w_{k}R_{k}-\beta_{k}\biggl(\sum^{N}_{i=1}s_{i,k}+\tau_{k}p^{c}_{k}\biggr), \forall k. \label{eqn:r33}
\end{align}
As shown in \cite{Jong2012}, if $\bm{\psi}(\bm{\alpha, \beta})=\bm{0}$, then $(\bm{s}^*, \bm{\tau}^*)$ is the global optimal solution for the problem (P1') and the iteration stops. Otherwise, we need to update $\bm{\alpha}$ and $\bm{\beta}$ as
\begin{align}
&\bm{\alpha}^{n+1}=\bm{\alpha}^{n}+\gamma^{n}\bm{q}^{n},\label{eqn:r34}\\
&\bm{\beta}^{n+1}=\bm{\beta}^{n}+\gamma^{n}\bm{q}^{n},\label{eqn:r35}\\
&\bm{q}^{n}=-[\bm{\psi}'(\bm{\alpha, \beta})]^{-1}\bm{\psi}(\bm{\alpha, \beta}),\label{eqn:r36}
\end{align}
where $\bm{\psi}'(\bm{\alpha, \beta})$ is the Jacobian matrix of $\bm{\psi}(\bm{\alpha, \beta})$ and $n$ is the iteration index. Let $m_{k}$ denote the smallest integer among $m\in \{0,1,2,\ldots\}$ which satisfies
\begin{align}\label{eqn:r37}
\lVert \bm{\psi}(\bm{\alpha}^{n}+\xi^{m}\bm{q}^{n}, \bm{\beta}^{n}+\xi^{m}\bm{q}^{n})\rVert \leq (1-\epsilon\xi^{m})\lVert\bm{\psi}(\bm{\alpha, \beta})\rVert,
\end{align}
where $\epsilon\in (0,1)$, $\xi\in (0,1)$, and $\lVert \cdot \rVert$ is the standard Euclidean norm. Then $\gamma^{n}$ can be obtained as $\xi^{m_{k}}$.
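Since $\psi_{k}$ depends only on $\alpha_{k}$ and $\psi_{k+K}$ only on $\beta_{k}$, the Jacobian $\bm{\psi}'$ is diagonal and the Newton direction can be computed componentwise; moreover, for fixed $(\bm{s}^*,\bm{\tau}^*)$ the system is affine, so a full step lands on \eqref{eqn:r16} and \eqref{eqn:r17} exactly. A sketch with hypothetical values, using the standard Newton direction $-[\bm{\psi}']^{-1}\bm{\psi}$ and the backtracking rule of \eqref{eqn:r37}:

```python
# Damped Newton update of the sum-of-ratios parameters (alpha, beta) for a fixed
# candidate (s*, tau*).  All numbers are hypothetical.  psi is affine in
# (alpha, beta) with a diagonal Jacobian, so the direction is computed
# componentwise and a full step zeroes psi in one iteration.
K = 2
w, R = [1.0, 2.0], [3.1, 1.7]        # weights and rates at the candidate point
D = [0.8, 1.3]                       # D_k = sum_i s_{i,k} + tau_k * p^c_k
alpha, beta = [1.0, 1.0], [1.0, 1.0]
eps, xi = 0.1, 0.5                   # backtracking parameters in (0, 1)

def psi(al, be):
    return ([al[k] * D[k] - 1.0 for k in range(K)] +
            [w[k] * R[k] - be[k] * D[k] for k in range(K)])

def norm(v):
    return sum(x * x for x in v) ** 0.5

for _ in range(20):
    f = psi(alpha, beta)
    if norm(f) < 1e-10:
        break
    # Diagonal Jacobian: d psi_k / d alpha_k = D_k, d psi_{K+k} / d beta_k = -D_k,
    # so the Newton direction -J^{-1} f is componentwise.
    q = [-f[k] / D[k] for k in range(K)] + [f[K + k] / D[k] for k in range(K)]
    m = 0
    while m < 60 and (norm(psi([alpha[k] + xi ** m * q[k] for k in range(K)],
                               [beta[k] + xi ** m * q[K + k] for k in range(K)]))
                      > (1.0 - eps * xi ** m) * norm(f)):
        m += 1
    alpha = [alpha[k] + xi ** m * q[k] for k in range(K)]
    beta = [beta[k] + xi ** m * q[K + k] for k in range(K)]

print(alpha, beta, norm(psi(alpha, beta)))
```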
To summarize, the whole algorithm solving problem (P1) optimally is presented in Algorithm \ref{alg:A1}.
The complexity of Algorithm \ref{alg:A1} is evaluated as follows. The complexity for solving $\bm{s}$ and $\bm{\tau}$ with the BCD method is $\mathcal{O}(K^{2}N)$. The complexity of the ellipsoid method is $\mathcal{O}((NK+K+1)^{2})$. The complexity for updating $\bm{\alpha}$ and $\bm{\beta}$ is independent of $K$ \cite{Jong2012}. So the total complexity of Algorithm \ref{alg:A1} is $\mathcal{O}((NK+K+1)^{2}K^{3}N)$.
\begin{algorithm}[tb]
\caption{Optimal algorithm for problem (P1) }\label{alg:A1}
\begin{algorithmic}[1]
\STATE Initialize $\bm{\alpha}$ and $\bm{\beta}$.
\REPEAT
\STATE Initialize $\{\bm{\mu}, \bm{\upsilon}, \lambda\}$.
\REPEAT
\STATE Initialize $\bm{s}$ and $\bm{\tau}$.
\REPEAT
\STATE Compute $\tau_{k}$ that maximizes $L_1$ by \eqref{eqn:r91}.
\STATE Compute $\bm{s}$ using \eqref{eqn:r28} with fixed $\bm{\tau}$.
\UNTIL{The improvement of $L_{1}$ stops.}
\STATE Update $\{\bm{\mu}, \bm{\upsilon}, \lambda\}$ by the ellipsoid method using subgradients \eqref{eqn:r85}-\eqref{eqn:r81}.
\UNTIL{$\{\bm{\mu}, \bm{\upsilon}, \lambda\}$ converge to a prescribed accuracy.}
\STATE Denote $m_{k}$ as the smallest $m$ meeting \eqref{eqn:r37}.
\STATE Let $\gamma^{n}=\xi^{m_{k}}$, update $\bm{\alpha}$ and $\bm{\beta}$ by \eqref{eqn:r34} and \eqref{eqn:r35}, respectively.
\UNTIL{$\lVert \bm{\psi}(\bm{\alpha, \beta})\rVert$ is smaller than a prescribed accuracy.}
\STATE Obtain $\bm{p}^{*}$ by \eqref{eqn:r87}.
\end{algorithmic}
\end{algorithm}
\section{Optimal Solution for NC-EE Maximization Problem}\label{se4}
In this section, we solve the NC-EE maximization problem (P2). Since problem (P2) is a fractional programming problem, we develop an iterative algorithm to obtain the globally optimal solution.
To begin with, we define $\mathcal{F}$ as the feasible set of problem (P2) specified by constraints \eqref{eqn:r11}-\eqref{eqn:r14}. Denoting $q^{*}$ as the optimal value of problem (P2), we have
\begin{align}\label{eqn:r55}
q^{*}=\max_{(\bm{p},\bm{\tau})\in\mathcal{F}} \frac{\sum^{K}_{k=1}w_{k}R_{k}}{P_{\mathrm{total}}}.
\end{align}
This is a fractional programming problem, and we introduce the following theorem, proved in \cite{W.Dinkelbach1967}, to transform it into an equivalent subtractive form.
\begin{theorem}\label{T1}
The NC-EE maximization problem (P2) can be solved in the following subtractive form with parameter $q$.
\begin{align}\label{eqn:r56}
\max_{(\bm{p},\bm{\tau})\in\mathcal{F}} \sum^{K}_{k=1}w_{k}R_{k}-qP_{\mathrm{total}}.
\end{align}
Problem \eqref{eqn:r55} and problem \eqref{eqn:r56} are equivalent under the optimal $q^{*}$ if and only if
\begin{align}\label{eqn:r57}
T(q^{*})=\max_{(\bm{p},\bm{\tau})\in\mathcal{F}} \left\{ \sum^{K}_{k=1}w_{k}R_{k}-q^{*}P_{\mathrm{total}} \right\}=0.
\end{align}
\end{theorem}
From Theorem \ref{T1}, the two problems in \eqref{eqn:r55} and \eqref{eqn:r56} lead to the same optimal solution. Moreover, \eqref{eqn:r57} can be utilized to verify the optimality of the solution in the subtractive formed problem.
Based on the above, we can now solve problem (P2) optimally in an equivalent form. We adopt the Dinkelbach method \cite{W.Dinkelbach1967} to iteratively obtain $q^*$. In particular, the solution again has a two-layer structure: for a given parameter $q$, we solve the subtractive-form problem in the inner layer, and then update $q$ by \eqref{eqn:r55} in the outer layer. This iterative process continues until the solution satisfies condition \eqref{eqn:r57} in Theorem \ref{T1}. The convergence of this algorithm is guaranteed as long as the subtractive-form problem \eqref{eqn:r56} is solved globally optimally in each iteration.
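The Dinkelbach iteration is easiest to see on a toy single-user instance (all constants hypothetical), where the inner subtractive problem admits a closed-form maximizer:

```python
import math

# Dinkelbach iteration on a toy single-user instance (constants hypothetical):
# maximize ln(1 + g*p) / (p + pc) over 0 <= p <= pmax.  For a given q, the
# inner subtractive problem max ln(1 + g*p) - q*(p + pc) has the closed-form
# maximizer p* = clip(1/q - 1/g, 0, pmax).
g, pc, pmax = 50.0, 0.1, 2.0

q = 0.0                                   # any starting ratio works
p = pmax
for _ in range(50):
    p = min(max(1.0 / q - 1.0 / g, 0.0), pmax) if q > 0.0 else pmax
    q_new = math.log(1.0 + g * p) / (p + pc)
    if abs(q_new - q) < 1e-12:
        break
    q = q_new

T = math.log(1.0 + g * p) - q * (p + pc)  # optimality certificate: T(q*) = 0
print(q, p, T)
```

At convergence, $T(q^{*})\approx 0$, which is exactly the optimality condition of Theorem \ref{T1}.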
To make problem \eqref{eqn:r56} more tractable, we set $s_{i,k}=\tau_{k}p_{i,k}$ as in the previous section. As a result, with $\bm{s}=\{s_{i,k}\}$ and given $q$, problem \eqref{eqn:r56} can be reformulated as
\begin{eqnarray}\label{eqn:r58}
{\rm (P2'):}\max_{\bm{\tau},\bm{s}}&&\sum^{K}_{k=1}w_{k}R_{k}-qP_{\mathrm{total}} \nonumber\\
{\rm s.t.}&&\zeta\sum^{N}_{i=1}h_{i,k}\sum^{K}_{k'\neq k}s_{i,k'}\ge \bar{E}_k,\forall k,\label{C4}\\
&& 0\leq \tau_{k}\leq 1,\forall k, \\
&& 0\leq s_{i,k}\leq \tau_{k}\bar{P_{i}},\forall i, k, \label{C5}\\
&& \sum^{K}_{k=1}\tau_{k}\leq 1\label{C6}.
\end{eqnarray}
Similar to the previous section, after solving problem (P2') optimally with $q^{*}$, we can recover the optimal power allocation $\bm{p}^*$ by \eqref{eqn:r87}.
Problem (P2') is convex, so the Lagrangian dual method can be used to solve it optimally. The Lagrangian function of problem (P2') for a given $q$ can be written as
\begin{align}\label{eqn:r59}
&L_2(\bm{s},\bm{\tau},\bm{\mu},\bm{\upsilon},\lambda)=\nonumber\\&\sum^{K}_{k=1}w_{k}\tau_{k}\ln\biggl(1+\frac{\sum_{i=1}^N h_{i,k}s_{i,k}}{\sigma^{2}\tau_{k}}\biggr)-q\sum^{K}_{k=1}\sum^{N}_{i=1}s_{i,k}\nonumber\\&-q\sum^{K}_{k=1}\tau_{k}p^{c}_{k}+\sum^{K}_{k=1}\mu_{k}\biggl(\zeta\sum_{i=1}^{N}h_{i,k}\sum^{K}_{k'\neq k}s_{i,k'}-\bar{E}_k\biggr)\nonumber\\&+\lambda\biggl(1-\sum^{K}_{k=1}\tau_{k}\biggr) +\sum^{K}_{k=1}\sum_{i=1}^{N}\upsilon_{i,k}\biggl(\bar{P_{i}}\tau_{k}-s_{i,k}\biggr),
\end{align}
where $\bm{\mu}=\{\mu_{k}\}$, $\bm{\upsilon}=\{\upsilon_{i,k}\}$ and $\lambda$ are the Lagrangian multipliers with respect to the constraints \eqref{C4}, \eqref{C5} and \eqref{C6}, respectively. Then the corresponding Lagrangian dual function $g_2(\bm{\mu},\bm{\upsilon},\lambda)$ is expressed as
\begin{align}\label{eqn:r60}
g_2(\bm{\mu},\bm{\upsilon},\lambda)=\max_{\substack{ \{0\leq \tau_{k}\leq 1\} \\ \{s_{i,k}\geq 0\} }}L_2(\bm{s},\bm{\tau},\bm{\mu},\bm{\upsilon},\lambda).
\end{align}
The dual problem is written as
\begin{align}\label{eqn:r61}
\min_{\{\bm{\mu}, \bm{\upsilon}, \lambda\}\geq\bm{0} }g_2(\bm{\mu},\bm{\upsilon},\lambda).
\end{align}
Note again that the Lagrangian function $L_{2}$ in \eqref{eqn:r60} is jointly concave in the variables $\bm{s}$ and $\bm{\tau}$, as explained in the previous section. Thus we use the BCD method to obtain the optimal solution with guaranteed convergence. Similar to the previous section, for given $\bm{s}$, we have $\tau^*_{k}=0$ if $s_{i,k}=0$ for all $i$. Otherwise, we have $\frac{\partial^{2} L_2}{\partial \tau^{2}_{k}}<0$, so $L_{2}$ is concave in each $\tau_{k}$, and we find the zero of $\frac{\partial L_2}{\partial \tau_{k}}$ within $0\leq \tau_{k}\leq 1$ to obtain the optimal $\tau_{k}^*$. We have
\begin{gather}
\tau^*_{k}=\left\{\begin{array}{ll}\label{eqn:r94}
\left[\frac{\sum^{N}_{i=1}h_{i,k}s_{i,k}}{\sigma^{2}(\exp\{\omega(-\exp\{\frac{C_{k}}{w_{k}}-1\})+1-\frac{C_{k}}{w_{k}}\}-1)}\right]^{1}_{0}&C_{k}\leq 0,\\
0&
\text{otherwise,}
\end{array}\right.
\end{gather}
where $C_{k}=\sum^{N}_{i=1}\bar{P}_{i}\upsilon_{i,k}-qp^{c}_{k}-\lambda$. It is worth noting that in \eqref{eqn:r94} the domain of the $\omega$ function also has to be taken into account. The details of obtaining $\tau^*_{k}$ by solving the zero of $\frac{\partial L_2}{\partial \tau_{k}}$ can be found in Appendix B of \cite{Kim2015}.
With $\bm{\tau}$ obtained, we can also use the BCD method to optimize $\bm{s}$, i.e., we alternately optimize each $s_{i, k}$ with the others fixed. The partial derivative of $L_{2}$ with respect to $s_{i,k}$ is
\begin{align}\label{eqn:r64}
\frac{\partial L_2}{\partial s_{i,k}}=&\frac{w_{k}\tau_{k}h_{i,k}}{\tau_{k}\sigma^{2}+\sum^{N}_{j=1}s_{j,k}h_{j,k}}+D_{i,k},
\end{align}
where $
D_{i,k}=-q+\zeta\sum^{K}_{k'\neq k}\mu_{k'}h_{i,k'}-\upsilon_{i,k}$.
From \eqref{eqn:r64}, we find that $\frac{\partial L_2}{\partial s_{i,k}}$ is decreasing with $s_{i,k}$. Thus $s_{i,k}^*$ can be uniquely determined through setting $\frac{\partial L_2}{\partial s_{i,k}}=0$ under the non-negative constraint $s_{i,k}\geq 0$. As a result, to maximize $L_{2}$, we have
\begin{align}\label{eqn:r66}
s_{i,k}^*=
\left[-\frac{w_{k}\tau_{k}}{D_{i,k}}-\frac{\tau_{k}\sigma^2+\sum^{N}_{j\neq i}h_{j,k}s_{j,k}}{h_{i,k}}\right]^{+}.
\end{align}
We also note that for given $\bm{\tau}$, the BCD optimization of $\bm{s}$ via \eqref{eqn:r66} is guaranteed to converge owing to the concavity of $L_2$.
In summary, problem \eqref{eqn:r60} can be solved optimally by two nested BCD loops. In the outer loop, $\bm{s}$ and $\bm{\tau}$ are alternately optimized. In the inner loop, with $\bm{\tau}$ given, each $s_{i,k}$ is optimized in turn while the other entries of $\bm{s}$ are fixed. The iterative process stops when $L_{2}$ no longer improves.
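The two-loop BCD idea can be illustrated on a deliberately tiny toy objective (this is not the full $L_2$; the closed-form coordinate updates below hold for the toy function only):

```python
import math

def bcd_toy(iters=50):
    # Maximize f(s, tau) = tau*ln(1 + s/tau) - 0.5*s over s >= 0, 0 <= tau <= 1
    # by alternating exact coordinate maximizations (a stand-in for the L2 loops).
    s, tau = 0.5, 0.5
    for _ in range(iters):
        # df/ds = tau/(tau + s) - 0.5 = 0  ->  s = tau
        s = max(tau, 0.0)
        # df/dtau = ln(1 + s/tau) - (s/tau)/(1 + s/tau) > 0 for s > 0,
        # so tau hits its upper bound 1.
        tau = 1.0
    return s, tau, tau * math.log(1.0 + s / tau) - 0.5 * s

s, tau, f = bcd_toy()
```

Each coordinate update is an exact maximization, so the objective is monotonically non-decreasing and the alternation converges, mirroring the convergence argument used for $L_2$.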
Next we obtain the optimal values of the Lagrangian multipliers through the ellipsoid method, where the subgradients used to update $\{\bm{\mu}, \bm{\upsilon}, \lambda\}$ are given by
\begin{align}
&\Delta \mu_k=\zeta\sum^{N}_{i=1}h_{i,k}\sum^{K}_{k'\neq k}s^*_{i,k'}-\bar{E}_{k}, \forall k,\label{eqn:r86}\\
&\Delta \upsilon_{i,k}=\bar{P}_{i}\tau_{k}^* -s^*_{i,k}, \forall i, k,\label{eqn:r83}\\
&\Delta \lambda=1-\sum^{K}_{k=1}\tau^*_{k}\label{eqn:r80}.
\end{align}
Finally, after solving the dual function in the previous steps, we update $q$ as in \eqref{eqn:r55}. Then problem (P2') is solved again until $q$ converges to the optimal value $q^*$, which is also the optimal NC-EE $\eta^*$. The algorithm for solving problem (P2) is summarized in Algorithm \ref{alg:A3}.
The complexity of the BCD method is $\mathcal{O}(K^{2}N)$ and the complexity of the ellipsoid method is $\mathcal{O}((NK+K+1)^{2})$. Thus the total complexity of Algorithm \ref {alg:A3} is $\mathcal{O}(\kappa (NK+K+1)^{2}K^{2}N)$, where $\kappa$ is the number of iterations for updating $q$.
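The outer Dinkelbach iteration on $q$ can be sketched on a one-dimensional toy fractional program (hypothetical $R(x)=\ln(1+x)$ and $E(x)=x+0.5$, not the actual (P2) objective):

```python
import math

def dinkelbach(eps=1e-9, x_max=10.0):
    # Maximize R(x)/E(x) with R(x) = ln(1+x), E(x) = x + 0.5 on [0, x_max].
    q = 0.0
    x = x_max
    for _ in range(100):
        # Inner problem: argmax_x R(x) - q*E(x); stationarity gives 1/(1+x) = q.
        x = min(max(1.0 / q - 1.0, 0.0), x_max) if q > 0 else x_max
        T = math.log(1.0 + x) - q * (x + 0.5)   # T(q), the Dinkelbach residual
        if abs(T) < eps:
            break                                # T(q*) ~ 0 at the optimum
        q = math.log(1.0 + x) / (x + 0.5)        # q update, as in the role of (r55)
    return q, x

q, x = dinkelbach()
```

The iteration stops when the residual $T(q)$ vanishes, at which point $q$ equals the maximum ratio, exactly the stopping rule used in Algorithm \ref{alg:A3}.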
\begin{algorithm}[tb]
\caption{Optimal algorithm for problem (P2)}\label{alg:A3}
\begin{algorithmic}[1]
\STATE Initialize $q$.
\REPEAT
\STATE Initialize $\{\bm{\mu}, \bm{\upsilon}, \lambda\}$.
\REPEAT
\STATE Initialize $\bm{s}$ and $\bm{\tau}$.
\REPEAT
\STATE Compute each $\tau_{k}$ that maximizes $L_2$ by \eqref{eqn:r94} with fixed $\bm{s}$.
\STATE Compute $\bm{s}$ using \eqref{eqn:r66} with fixed $\bm{\tau}$.
\UNTIL{The improvement of $L_{2}$ stops.}
\STATE Update $\{\bm{\mu}, \bm{\upsilon}, \lambda\}$ by the ellipsoid method using subgradients \eqref{eqn:r86}-\eqref{eqn:r80}.
\UNTIL{$\{\bm{\mu}, \bm{\upsilon}, \lambda\}$ converge to a prescribed accuracy.}
\STATE Update $q$ as \eqref{eqn:r55}.
\UNTIL{$T(q^{*})$ in \eqref{eqn:r57} is smaller than a prescribed accuracy.}
\STATE Obtain $\bm{p}^{*}$ by \eqref{eqn:r87}.
\end{algorithmic}
\end{algorithm}
\section{Simulation Results}\label{se5}
\begin{table}[t]
\renewcommand\arraystretch{1.5}
\caption{Simulation Parameters}
\centering
\begin{tabular}{|c|c|}
\hline
Noise power $\sigma^{2}$ & $-$104 dBm \\
\hline
Pathloss at a reference distance of 1m & $10^{-3}$ \\
\hline
Pathloss exponent & 2 \\
\hline
Length of the square & 10 m \\ %
\hline
Power constraint for the $i$-th DA port & $\bar{P_{i}}=\bar{P}$\\ %
\hline
Harvested energy constraint for the $k$-th user & $\bar{E}_k=\bar{E}$\\ %
\hline
Circuit power consumption & $p^{c}_{k}=0.5 \text{W}, \forall k$ \\
\hline
Weight of users & $w_{k}=1,\forall k$ \\ %
\hline
DA port deployment & Square layout \\ %
\hline
Energy conversion efficiency $\zeta$ & 0.6 \\ %
\hline
\end{tabular}
\label{table:sim_para}
\end{table}
In this section, we present simulation results to verify the effectiveness of the proposed optimal algorithms for the UC-EE and NC-EE maximization problems, referred to as UC-OPT and NC-OPT, respectively. The main system parameters are listed in Table \ref{table:sim_para}. In the proposed DAS, we assume that the $N$ DA ports are distributed uniformly in a square area of $100$ square meters and the users are randomly distributed throughout the area. For comparison, we also evaluate the performance of the following benchmark schemes:
\begin{enumerate}
\item \textbf{UC-EE maximization problem with fixed time allocation (UC-FT)}. In this scheme, the information transmission time for each user is fixed as $\tau_{k}=1/K, \forall k$. Note that this is a special case of problem (P1) and the proposed Algorithm \ref{alg:A1} is also applicable for this case. The overall complexity for this benchmark scheme is $\mathcal{O}(K^{4}N)$.
\item \textbf{UC-EE maximization problem with fixed power allocation (UC-FP)}. The transmit power in each DA port is fixed as $p_{i,k}=\bar{P}_{i}, \forall i, k$ in this case. As a result, the user's EE becomes $\eta_{k}=\frac{\ln\bigl(1+\frac{\sum_{i=1}^N h_{i,k}\bar{P}_{i}}{\sigma^{2}}\bigr)}{\sum^{N}_{i=1}\bar{P}_{i}+p^{c}_{k}}$ which is a constant. Thus in this case we need to optimize the transmit time $\bm{\tau}$ to meet the minimum harvested energy constraints.
\item \textbf{NC-EE maximization problem with fixed time allocation (NC-FT)}. With $\tau_{k}=1/K, \forall k$, Algorithm \ref{alg:A3} is also applicable for solving this simplified NC-EE maximization problem. The total complexity is $\mathcal{O}(\kappa K^{3}N)$.
\item \textbf{NC-EE maximization problem with fixed power allocation (NC-FP)}. Given $p_{i,k}=\bar{P}_{i}, \forall i, k$, we adopt the Dinkelbach method to transform this time allocation problem into a linear programming problem. Therefore, we can apply some standard linear optimization methods, such as the simplex method \cite{Boyd2004}, to obtain the solution efficiently. The total complexity is $\mathcal{O}(\kappa K)$.
\end{enumerate}
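For the UC-FP benchmark, each user's EE is the constant given above and can be evaluated directly; the channel gains below are illustrative placeholders, while $\sigma^{2}$ and $p^{c}_{k}$ follow Table \ref{table:sim_para}:

```python
import math

def fixed_power_ee(h, P_bar, sigma2, p_c):
    # UC-FP: eta_k = ln(1 + sum_i h_{i,k} Pbar_i / sigma^2) / (sum_i Pbar_i + p_c),
    # a constant once the transmit powers are fixed at Pbar_i.
    num = math.log(1.0 + sum(hi * Pi for hi, Pi in zip(h, P_bar)) / sigma2)
    return num / (sum(P_bar) + p_c)

# Hypothetical gains for two DA ports; sigma^2 = -104 dBm = 10^(-13.4) W, p_c = 0.5 W.
eta = fixed_power_ee(h=[1e-6, 5e-7], P_bar=[6.0, 6.0],
                     sigma2=10 ** (-13.4), p_c=0.5)
```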
\begin{figure}[t]
\begin{centering}
\includegraphics[scale=0.4]{JSAC-1570440702-Fig2.eps}
\vspace{-0.1cm}
\caption{ UC-EE and NC-EE versus the minimum harvested energy constraint $\bar{E}$. }\label{fig:E_min}
\end{centering}
\vspace{-0.1cm}
\end{figure}
Fig. \ref{fig:E_min} illustrates the impact of the minimum harvested energy requirement on the EE of the considered schemes with 4 users, 7 DA ports, and $\bar{P}=6 \text{W}$. The figure confirms the optimality of the proposed schemes (UC-OPT and NC-OPT). As the minimum harvested energy constraint $\bar{E}$ increases, the EE performance of all considered schemes declines for two reasons. First, each user needs more time for energy harvesting to meet the growing minimum harvested energy demand, which leaves less time for information decoding and hence lowers the throughput. Second, with increasing $\bar{E}$, the DA ports tend to transmit at higher power so that the users can harvest more energy, which raises the energy consumption. Moreover, we observe that the benchmark scheme with fixed time allocation outperforms the one with fixed power allocation in both the UC and NC cases, which demonstrates that power allocation plays a more important role in the optimization than time allocation. Furthermore, the UC-OPT scheme achieves a much higher EE than the NC-OPT scheme. The reason for this gap is that the UC-OPT scheme adopts the weighted sum of the users' individual EEs as its performance metric, which has a sum-of-ratios structure, while the NC-OPT scheme uses the ratio of the system throughput to the total energy consumption, which has a single-fraction structure.
\begin{figure}[t]
\begin{centering}
\includegraphics[scale=0.4]{JSAC-1570440702-Fig3.eps}
\vspace{-0.1cm}
\caption{ UC-EE and NC-EE versus the maximum transmit power constraint $\bar{P}$. }\label{fig:P_max}
\end{centering}
\vspace{-0.1cm}
\end{figure}
In Fig. \ref{fig:P_max}, we compare the EE performance of the above-mentioned schemes with respect to the maximum transmit power constraint $\bar{P}$. The numbers of DA ports and users are 7 and 4, respectively, and $\bar{E}$ is fixed at $0.1 \text{mW}$. First, the results again confirm the effectiveness of our proposed optimal schemes (UC-OPT and NC-OPT). We observe that as the maximum transmit power constraint grows, the EE of the UC-OPT, NC-OPT, UC-FT and NC-FT schemes increases. In particular, the EE of the UC-OPT and UC-FT schemes rises sharply at first but gradually saturates in the high transmit power region. This is because the minimum harvested energy requirement $\bar{E}$ is relatively demanding when $\bar{P}$ is small, i.e., the requirements are strict and the network resources are scarce, so a small increase in $\bar{P}$ yields a significant EE improvement. When $\bar{P}$ is large, the network resources are ample relative to the minimum harvested energy demand; there is then high flexibility in resource allocation and $\bar{P}$ no longer has a large impact on the EE performance. In contrast, as $\bar{P}$ increases, the EE of the UC-FP and NC-FP schemes declines. Since the transmit power is fixed at $\bar{P}$ in these two benchmark schemes, the numerators of the individual EE $\eta_{k}$ in \eqref{eqn:r03} and the NC-EE $\eta$ in \eqref{eqn:r09} grow only logarithmically with $\bar{P}$ while the denominators grow linearly. Hence the gap between the optimal schemes (UC-OPT and NC-OPT) and the fixed-power benchmarks (UC-FP and NC-FP) widens in the high-$\bar{P}$ region.
\begin{figure}[t]
\begin{centering}
\includegraphics[scale=0.4]{JSAC-1570440702-Fig4.eps}
\vspace{-0.1cm}
\caption{ UC-EE and NC-EE versus the number of DA ports $N$. }\label{fig:Number_DA}
\end{centering}
\vspace{-0.1cm}
\end{figure}
In Fig. \ref{fig:Number_DA}, we show the relationship between the number of DA ports and the EE of the considered schemes with 4 users and $\bar{E}=0.2 \text{mW}$. The UC-OPT scheme achieves the highest UC-EE among the UC-EE schemes (UC-OPT, UC-FT, UC-FP), and the same holds for NC-OPT among the NC-EE maximization schemes. With a growing number of DA ports, the EE performance of the optimal and fixed-time schemes improves as expected. More DA ports mean more network resources and thus higher flexibility in time and power allocation. Moreover, since the DA ports are uniformly distributed in a fixed area, more DA ports imply shorter average access distances between the users and the DA ports. Both effects lead to better EE performance. However, for the benchmark schemes with fixed power allocation, we observe a modest decrease in EE performance as the number of DA ports grows. This is because in these two benchmark schemes all DA ports are turned on and transmit at the maximum power $\bar{P}$; consequently, the denominators of the individual EE $\eta_{k}$ in \eqref{eqn:r03} and the NC-EE $\eta$ in \eqref{eqn:r09} increase linearly while the numerators increase only logarithmically.
\begin{figure}[t]
\begin{centering}
\includegraphics[scale=0.4]{JSAC-1570440702-Fig5.eps}
\vspace{-0.1cm}
\caption{ UC-EE and NC-EE versus the number of users. }\label{fig:user}
\end{centering}
\vspace{-0.1cm}
\end{figure}
The impact of the number of users on the considered schemes is shown in Fig. \ref{fig:user}. As can be seen, the EE performance of the UC-EE maximization schemes (UC-OPT, UC-FT and UC-FP) improves with more users. This is because the UC-EE maximization schemes all take the weighted sum of the users' EEs as their objective function, which grows with the number of users. We also observe that the EE of the UC-EE maximization schemes tends to saturate as the number of users increases. The reason for this trend is that each user has a minimum harvested energy requirement: with more users, the network resources are limited while the overall minimum harvested energy demand grows, so the improvement of the UC-EE eventually saturates. The same reason explains the decrease of the NC-EE maximization schemes (NC-OPT, NC-FT and NC-FP).
\begin{figure}[t]
\begin{centering}
\includegraphics[scale=0.4]{JSAC-1570440702-Fig6.eps}
\vspace{-0.1cm}
\caption{ UC-EE versus the weight of user 1. }\label{fig:weight}
\end{centering}
\vspace{-0.1cm}
\end{figure}
Fig. \ref{fig:weight} illustrates the EE tradeoff among four users, where users 2, 3 and 4 are assigned the same weights, i.e., $w_{2}=w_{3}=w_{4}=1$, while the weight of user 1, $w_{1}$, is varied between 1 and 8. There are 7 DA ports and $\bar{E}=0.2 \text{mW}$. From Fig. \ref{fig:weight}, we can see that with growing $w_{1}$, the EE of user 1 shows an upward trend while the EEs of the other users fall. This trend shows that a user's EE performance can be improved by assigning it a higher weight, which offers flexibility in customizing the EE performance of different users. Notably, with increasing $w_{1}$, the EE of user 1 first increases considerably but eventually approaches a maximum value, indicating that assigning a higher weight to a user has a limited effect on improving its EE performance.
\section{Conclusions}\label{se6}
In this paper, we have investigated energy-efficient resource allocation in a WIPT-based DAS. Two EE metrics have been studied, namely, NC-EE and UC-EE. We have formulated the UC-EE and NC-EE maximization problems in which the transmit power and time are jointly optimized. As both problems are non-convex nonlinear programs, we have proposed iterative algorithms that find the optimal solutions by means of suitable mathematical transformations.
Some valuable insights have been provided through extensive simulations: First, UC-EE always outperforms NC-EE. Second, power allocation is more important than time allocation for improving EE. Third, more users benefit UC-EE but harm NC-EE.
\begin{appendices}
\section{Proof of Lemma \ref{Lm0}} \label{AP0}
First, we define a set of new variables $T_{k}=\tau_{k}(\sum^{N}_{i=1}p_{i,k}+p^{c}_{k}), \forall k$, where $T_{k}$ denotes the total energy consumed in transmitting information to user $k$. To avoid confusion, for the NC-EE maximization problem we denote the optimal values of $T_{k}$ and $R_{k}$ by $T^{NC}_{k}$ and $R^{NC}_{k}$, respectively, $\forall k$. Similarly, for the UC-EE maximization problem we denote the optimal values of $T_{k}$ and $R_{k}$ by $T^{UC}_{k}$ and $R^{UC}_{k}$, respectively, $\forall k$. Then we have
\begin{align}\label{eqn:r22}
\eta=\frac{\sum_{k=1}^{K}{w_{k}R^{NC}_{k}}}{\sum_{k=1}^{K}{T^{NC}_{k}}}\leq \max_{k}\frac{w_{k}R^{NC}_{k}}{T^{NC}_{k}}\leq\sum_{k=1}^{K} \frac{w_{k}R^{NC}_{k}}{T^{NC}_{k}}.
\end{align}
Because $T^{NC}_{k}$ and $R^{NC}_{k}$, $\forall k$ are the optimal values of the NC-EE maximization problem, for the UC-EE maximization problem with the same constraints, we can further derive that
\begin{align}\label{eqn:r29}
\sum_{k=1}^{K} \frac{w_{k}R^{NC}_{k}}{T^{NC}_{k}}\leq \sum_{k=1}^{K} \frac{w_{k}R^{UC}_{k}}{T^{UC}_{k}}=\sum_{k=1}^{K}w_{k}\eta_{k}.
\end{align}
From \eqref{eqn:r29} and \eqref{eqn:r22} we can conclude that UC-EE always outperforms NC-EE.
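The inequality chain in \eqref{eqn:r22}, namely that a weighted mean of ratios never exceeds the largest ratio, which in turn never exceeds the sum of the ratios, can be sanity-checked on random positive data:

```python
import random

def ratio_chain(R, T, w):
    # Returns (sum(w*R)/sum(T), max_k w_k R_k / T_k, sum_k w_k R_k / T_k),
    # which should be non-decreasing for positive T_k (mediant inequality).
    mean_ratio = sum(wk * Rk for wk, Rk in zip(w, R)) / sum(T)
    ratios = [wk * Rk / Tk for wk, Rk, Tk in zip(w, R, T)]
    return mean_ratio, max(ratios), sum(ratios)

random.seed(0)
samples = [ratio_chain([random.uniform(0.1, 5.0) for _ in range(4)],
                       [random.uniform(0.1, 5.0) for _ in range(4)],
                       [1.0] * 4)
           for _ in range(100)]
```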
\section{Proof of Lemma \ref{Lm1}} \label{AP1}
With $\bm{s}$, user $k$'s achievable rate $R_{k}$ becomes
\begin{align}
R_{k}=\tau_{k}\ln\biggl(1+\frac{\sum^{N}_{i=1}h_{i,k}s_{i,k}}{\sigma^{2}\tau_{k}}\biggr).
\end{align}
To keep $R_{k}$ continuous over $0\leq\tau_{k}\leq1, \forall k$, we define $R_{k}=0$ when $\tau_{k}=0$ for all $k$.
The objective function of problem \eqref{eqn:r15} can be written as $\sum^{K}_{k=1}\alpha_{k}f_{1}(\bm{s},\tau_{k})$, where
\begin{align}
f_1(\bm{s},\tau_{k})=\left\{\begin{array}{ll}\label{eqn:r23}
w_{k}R_{k} -\beta_{k}\biggl(\sum^{N}_{i=1}s_{i,k}+\tau_{k}p^{c}_{k}\biggr)&\tau_{k}>0,\\
-\beta_{k}\sum^{N}_{i=1}s_{i,k}&
\tau_{k}=0.
\end{array}\right.
\end{align}
According to \cite{Boyd2004}, to prove the concavity of $\sum^{K}_{k=1}\alpha_{k}f_{1}(\bm{s},\tau_{k})$, we need to show that for $(\hat{\bm{s}},\hat{\tau}_{k})=\theta(\dot{\bm{s}},\dot{\tau}_{k})+(1-\theta)(\ddot{\bm{s}},\ddot{\tau}_{k})$ with $0<\theta< 1$, the inequality $f_1(\hat{\bm{s}},\hat{\tau}_{k})\geq\theta f_1(\dot{\bm{s}},\dot{\tau}_{k})+(1-\theta)f_1(\ddot{\bm{s}},\ddot{\tau}_{k})$ always holds. We consider the following four mutually exclusive and exhaustive cases for $\dot{\tau}_{k}$ and $\ddot{\tau}_{k}$.
\begin{enumerate}
\item $\dot{\tau}_{k}>0$ and $\ddot{\tau}_{k}>0$: In this case, $\hat{\tau}_{k}$ is also positive. According to \cite{Boyd2004}, $w_{k}\tau_{k}\ln\biggl(1+\frac{\sum_{i=1}^N h_{i,k}s_{i,k}}{\sigma^{2}\tau_{k}}\biggr)$ is jointly concave over $\bm{s}$ and $\tau_{k}$. Then we can further derive that $f_1(\bm{s},\tau_{k})=w_{k}\tau_{k}\ln\biggl(1+\frac{\sum_{i=1}^N h_{i,k}s_{i,k}}{\sigma^{2}\tau_{k}}\biggr) -\beta_{k}\biggl(\sum^{N}_{i=1}s_{i,k}+\tau_{k}p^{c}_{k}\biggr)$ is also jointly concave over $\bm{s}$ and $\tau_{k}$. As a result, we have $f_1(\hat{\bm{s}},\hat{\tau}_{k})\geq\theta f_1(\dot{\bm{s}},\dot{\tau}_{k})+(1-\theta)f_1(\ddot{\bm{s}},\ddot{\tau}_{k})$ in this case.
\item $\dot{\tau}_{k}>0$ and $\ddot{\tau}_{k}=0$: In this case, $f_1(\ddot{\bm{s}},\ddot{\tau}_{k})=-\beta_{k}\sum^{N}_{i=1}\ddot{s}_{i,k}$ and $f_1(\hat{\bm{s}},\hat{\tau}_{k})$ can be expressed as
\begin{align}\label{eqn:r24}
&f_{1}(\hat{\bm{s}},\hat{\tau}_{k})=-\beta_{k}\theta\bigl(\sum^{N}_{i=1}\dot{s}_{i,k}+\dot{\tau}_{k}p^{c}_{k}\bigr)-\beta_{k}(1-\theta)\sum^{N}_{i=1}\ddot{s}_{i,k}\nonumber\\&+\!\theta \dot{\tau}_{k}w_{k}\ln\biggl(\!1\!+\!\frac{\sum_{i=1}^N h_{i,k}\dot{s}_{i,k}}{\sigma^{2}\dot{\tau}_{k}}\!+\!\frac{(1\!-\!\theta)\sum_{i=1}^N h_{i,k}\ddot{s}_{i,k}}{\sigma^{2}\theta\dot{\tau}_{k}}\!\biggr).
\end{align}
Since the additional non-negative term $\frac{(1-\theta)\sum_{i=1}^N h_{i,k}\ddot{s}_{i,k}}{\sigma^{2}\theta\dot{\tau}_{k}}$ inside the logarithm can only increase its value, the right-hand side of \eqref{eqn:r24} is no smaller than $\theta f_1(\dot{\bm{s}},\dot{\tau}_{k})+(1-\theta)f_1(\ddot{\bm{s}},\ddot{\tau}_{k})$. So $f_1(\hat{\bm{s}},\hat{\tau}_{k})\geq\theta f_1(\dot{\bm{s}},\dot{\tau}_{k})+(1-\theta)f_1(\ddot{\bm{s}},\ddot{\tau}_{k})$ is proved in this case.
\item $\dot{\tau}_{k}=0$ and $\ddot{\tau}_{k}>0$: Since this case is similar to the second case, we can draw the same conclusion based on the previous analysis.
\item $\dot{\tau}_{k}=0$ and $\ddot{\tau}_{k}=0$: In this case, $f_1(\dot{\bm{s}},\dot{\tau}_{k})$ and $f_1(\ddot{\bm{s}},\ddot{\tau}_{k})$ equal $-\beta_{k}\sum^{N}_{i=1}\dot{s}_{i,k}$ and $-\beta_{k}\sum^{N}_{i=1}\ddot{s}_{i,k}$, respectively. Note that both are linear functions, so $f_1(\hat{\bm{s}},\hat{\tau}_{k})=\theta f_1(\dot{\bm{s}},\dot{\tau}_{k})+(1-\theta)f_1(\ddot{\bm{s}},\ddot{\tau}_{k})$ and the inequality is satisfied with equality.
\end{enumerate}
\end{appendices}
\begin{footnotesize}
\bibliographystyle{IEEEtran}
\end{footnotesize}
\section{Introduction}
Recent observations of gamma ray bursts (GRBs) support the idea that these
objects may be distributed over cosmological ranges\cite{fishman94}. The GRBs
may be among the most luminous objects in the universe, with peak gamma ray
luminosities $L\sim 10^{51}\,{\rm ergs}/\sec $. There is evidence that the
range of intrinsic luminosities is very narrow, which might allow distance
determinations out to red--shifts of 2 or more\cite{horack94}. It is plausible
that when such large amounts of energy are released in objects which are
evidently quite compact, pions will be produced in hadronic collisions, and
that the GRBs may serve as standard candles in bursts of neutrinos as well. The
($\sim 1/E^2$) power law spectrum of gamma rays, which is observed to extend at
least to several $GeV$, suggests the possibility of particle acceleration and
radiation from processes other than electromagnetic interactions of electrons. Paczynski
and Xu \cite{paczynski94} have recently proposed a specific GRB\ model in which
the gamma rays and neutrinos arise from decay of pions produced in shock front
collisions. In a second class of models (e.g. Plaga \cite{plaga94}) gamma rays
are hypothesized to come from superconducting cosmic strings (SCSs); the
luminosities are very high and one expects neutrino emission as well.
Models of the first kind\cite{paczynski94} will have the following generic
features for the neutrinos emitted: the neutrino energies range over MeV to (a
few) GeV, or perhaps much higher, and since the source is $\pi $-decay, the
flavor content has the proportions $\nu _\mu :\nu _e :\nu _\tau =2:1:0$. We
also note that due to gamma ray absorption within the source, the luminosity in
neutrinos can, in principle, exceed the observed luminosity in gamma rays,
potentially by a large factor\footnote{ Recent emphasis has centered on neutron
star models\cite{nemiroff94}, motivated in part by the observation that the
gamma--ray energy released by GRBs at cosmological distances is of order of a
per cent of the binding energy of a neutron star if the emission is isotropic,
and less if the emission is beamed. In analogy to a supernova, one might expect
99 to 99.9\% of the binding energy to emerge as neutrinos, yielding
a neutrino--to--photon energy emission ratio of $10^2$ to $10^3$. }. In
models of the second kind\cite{plaga94}, the neutrino flavor content is
expected to be $\nu_e: \nu_\mu:\nu_\tau \ = 1:1:1$ and the energies can reach
up to $10~TeV$.
In this paper we suppose that the GRBs are indeed cosmological standard candles
(at least in an ensemble average sense) for neutrinos and gammas, in order to
maximize the physical inferences to be drawn from coincident neutrino and
photon detection. We investigate the opportunities for neutrino astronomy,
for cosmology, and for particle physics. Some of this analysis may also be
applicable to other astrophysical neutrino sources, such as AGNs which are
expected to emit neutrinos over a very large energy range; AGNs will probably
have observational data exceeding what is possible from GRBs, but are unlikely
to have such short time pulse emission\footnote{ However, Markarian 421 has
been seen by the Whipple Observatory very--high--energy $\gamma$-ray telescope
to have a flux increase of a factor of ten on a time scale of one
day\cite{kerrick95}.}. We hope that our analysis will be useful to designers
of neutrino telescopes and inspire them to consider GRB neutrino detection as
an important goal for future instruments.
\section{Neutrino Mass Mixing and Oscillations}
One of the unique properties of three generations of neutrinos, not shared with
photons, is that they may carry with them two additional scale lengths
associated with flavor oscillations. The oscillation lengths are related to the
neutrino mass differences according to
\begin{equation}
l_{ij}({\rm km})\simeq 2.5[\frac{p({\rm GeV})}{\delta m_{ij}^2({\rm
eV}^2)}],
\label{one}
\end{equation}
where i and j are flavors and $\delta m_{ij}$ is the, currently unknown,
mass difference between neutrinos of the relevant flavors. The probability of
observing neutrinos of a given flavor at any distance from their source is a
well known function of the distance, the oscillation length and mixing
parameters, as well as the initial conditions. Therefore, if the neutrinos are
produced in the same flavor ratio in each GRB, then the measured flavor ratios
can, in principle, determine the relative distances between various sources, at
least for the case of small mass differences and consequent long oscillation
lengths. Absolute distances will not be given directly by the measured flavor
ratios, because the oscillation length itself will be difficult to measure
independently.
However, if GRBs are standard candles whose distances can be calibrated (e.g.,
by the time dilation of their spectra, or by gravity waveforms of inspiraling
objects\cite{schutz86}), then the measured flavor ratios may provide a direct
determination of very small neutrino mass differences. Competing models place
the GRBs (i) within the solar neighborhood ($D\stackrel{<}{\sim} pc$), (ii) in the galactic
halo ($10^{-2} kpc \stackrel{<}{\sim} D \stackrel{<}{\sim} 10^2 kpc$), and (iii) at cosmic distances
($D\stackrel{>}{\sim} Mpc$) as we assume herein. Assuming detection of a correlated neutrino
with GeV energy, the $\delta m^2$'s which are probed at each distance scale are
$\stackrel{>}{\sim} 10^{-13} eV^2$, $\stackrel{>}{\sim} 10^{-14}$ to $\stackrel{>}{\sim} 10^{-18} eV^2$, and $\stackrel{>}{\sim}
10^{-19} eV^2$, respectively.
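These order-of-magnitude estimates follow directly from \eqref{one}; a quick numerical check (unit conversion only, with 1 pc $\approx 3.086\times 10^{13}$ km):

```python
PC_KM = 3.0857e13  # kilometres per parsec

def osc_length_km(p_gev, dm2_ev2):
    # Eq. (1): l_ij(km) ~ 2.5 * p(GeV) / dm^2(eV^2)
    return 2.5 * p_gev / dm2_ev2

def dm2_probed(distance_km, p_gev=1.0):
    # Smallest dm^2 whose oscillation length is comparable to the source distance.
    return 2.5 * p_gev / distance_km

dm2_solar = dm2_probed(1.0 * PC_KM)          # D ~ 1 pc   -> ~1e-13 eV^2
dm2_halo = dm2_probed(100.0e3 * PC_KM)       # D ~ 100 kpc -> ~1e-18 eV^2
dm2_cosmic = dm2_probed(1.0e6 * PC_KM)       # D ~ 1 Mpc  -> ~1e-19 eV^2
```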
If the neutrino oscillation indications from the atmospheric neutrino
observations are correct ($\delta m^2_{\mu x}\sim 0.01~eV^2$, and $\theta_{\mu
x}$ large, $\sim 20^{\circ}-40^{\circ}$ \cite{fukuda94}), then the observed
flavor content of the neutrinos from pion--producing GRBs should show the same
effect, {\it viz.} a $\nu _\mu :\nu _e$ flavor content of 1.2:1 rather than
2:1.
This applies to models of the Paczynski-Xu kind. Furthermore, because the
atmospheric solution has a short oscillation length $l_{\mu x} \simeq 250
p({\rm GeV})$ km, this flavor ratio should be universal for all GRBs
independent of their distance; individual GRBs might have hot source spots
smaller in size than the oscillation length $l$, but individual oscillations
will sum to zero in an ensemble average. For the models like that of Plaga,
based on SCS's, the democratic flavor mixture is unchanged by oscillations and
remains $\nu_\mu: \nu_e: \ \nu_\tau \ = 1:1:1$. It may be thus possible to
distinguish between these two classes of source models.
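The distance-averaged flavor ratios for the two source classes can be sketched numerically; $\sin^2 2\theta = 0.8$ below is an illustrative value within the quoted atmospheric mixing range, not a measured parameter:

```python
def two_flavor_average(n_a, n_b, sin2_2theta):
    # Distance-averaged two-flavor mixing between fluxes n_a and n_b:
    # the averaged transition probability is sin^2(2*theta)/2.
    t = 0.5 * sin2_2theta
    return (1 - t) * n_a + t * n_b, (1 - t) * n_b + t * n_a

# Pion-decay source: nu_mu : nu_x = 2 : 0 mixes; nu_e is untouched, so the
# observed nu_mu : nu_e ratio becomes 1.2 : 1 for sin^2(2*theta) = 0.8.
mu, x = two_flavor_average(2.0, 0.0, 0.8)

# Democratic (SCS-like) source: equal fluxes are invariant under mixing.
a, b = two_flavor_average(1.0, 1.0, 0.8)
```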
If the atmospheric neutrino anomaly is not due to neutrino oscillations, then
there is a more interesting possibility for GRB neutrinos. According to the
see-saw mechanism \cite{yanagida79},
$$
m_\nu \sim m_D^2/M
$$
where $m_D$ is the generation--dependent Dirac mass of either the up quarks,
down quarks, or charged leptons $m_l$. If we use $m_D\sim m_l$ and $M\sim $\
Planck Mass, the neutrino masses\footnote {Neutrino mass differences induced by
scattering on the inter-galactic medium are expected to be smaller than these
numbers\cite{learned94a}.} are $m_{\nu _e}\sim 2\cdot 10^{-17}eV$,
$m_{\nu _\mu }\sim 10^{-12}eV$, $m_{\nu _\tau }\sim 3\cdot 10^{-10}eV$, and
hence $\delta m_{e\mu }^2\sim 10^{-24}{\rm eV}^2$ and $\delta m_{\mu \tau }^2
\sim 10^{-19}{\rm eV}^2$.
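These see-saw estimates can be reproduced by inserting the charged-lepton masses and the Planck mass (all in eV; an order-of-magnitude check only):

```python
M_PLANCK = 1.22e28                               # Planck mass in eV
m_e, m_mu, m_tau = 0.511e6, 105.7e6, 1.777e9     # charged-lepton masses in eV

# See-saw: m_nu ~ m_D^2 / M with m_D taken as the charged-lepton mass.
m_nu_e = m_e**2 / M_PLANCK      # ~ 2e-17 eV
m_nu_mu = m_mu**2 / M_PLANCK    # ~ 1e-12 eV
m_nu_tau = m_tau**2 / M_PLANCK  # ~ 3e-10 eV

dm2_e_mu = m_nu_mu**2 - m_nu_e**2      # ~ 1e-24 eV^2
dm2_mu_tau = m_nu_tau**2 - m_nu_mu**2  # ~ 1e-19 eV^2
```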
In this see--saw case, the oscillation lengths are in the cosmic range 1 to
$10^5$ Mpc for neutrino energies in the $GeV$ range\footnote{ For most
purposes in this paper, it is sufficient to equate time and distance as if the
universe were static. The generalization to an expanding cosmology is reserved
for \S 4. }. There is a maximum distance over which a mixture of mass
eigenstates can be expected to remain coherent\cite{nussinov76}. However, for
the tiny mass differences here, the coherence length appears to exceed $10^5$
Mpc.
The longer oscillation length possibilities exceed the size of the visible
universe $\sim H_0^{-1} = 3000 h^{-1}$ Mpc, where $H_0 = 100 h$ km/sec/Mpc is
the present value of the Hubble parameter. If $l$ exceeds $H_0^{-1}$, then the
neutrinos do not oscillate, and the flavor ratios observed at earth are just
those established at emission. On the other hand, for $l$ less than $\sim 10^4$
Mpc, the flavor content of the neutrinos from pion--producing GRBs will vary
strongly with distance; and again, for Plaga-like models the flux will remain
universal with no dependence on distance.
Alternatively, if mass differences are large (of order $0.01~eV^2$ or
larger), then the separation of the mass eigenstates in time offers
another handle on $\delta m^2$. At a fixed energy E, the arrival times of the
$\nu$'s are separated by
$$
\delta t =
5\times 10^{-3} \frac{(L/100~Mpc)~(\delta m^2/10^{-2}~eV^2)}{(E/100~MeV)^2}
sec,
$$
assuming much smaller time difference at the source. For low energies and
large $\delta m^2$, $\delta t$ can range from $10^{-3}~sec$ to several seconds.
With a spectrum of energies, the neutrino burst would be spread more than the
accompanying gamma ray burst.
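In natural units this separation is $\delta t \simeq (L/c)\,\delta m^{2}/(2E^{2})$; the quoted coefficient can be checked directly:

```python
MPC_M = 3.0857e22   # metres per megaparsec
C = 2.9979e8        # speed of light, m/s

def arrival_spread(L_mpc, dm2_ev2, E_ev):
    # dt = (L/c) * dm^2 / (2 E^2), with dm^2 in eV^2 and E in eV.
    return (L_mpc * MPC_M / C) * dm2_ev2 / (2.0 * E_ev**2)

# Reference values from the text: L = 100 Mpc, dm^2 = 1e-2 eV^2, E = 100 MeV.
dt = arrival_spread(100.0, 1e-2, 100e6)   # ~ 5e-3 s
```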
\section{Other Neutrino Properties}
As with the observations of Supernova 1987A, the distance, short emission time
and trajectory through varying gravitational fields lead to the potential for
some fundamental tests of neutrino properties which are not possible in
terrestrial laboratories\cite{pakvasa90}. In the following we enumerate some
of
the possibilities.
\subsection{Neutrino Lifetime}
From the detection of neutrinos arriving from the $50kpc$ distant SN1987A it
was possible to place a limit on the (laboratory frame) lifetime of the
$\bar{\nu_e}$ of $\tau (\bar{\nu_e}) > 5 \cdot 10^{12}\ sec$ for $E_\nu$ of the
order of $10~MeV$. In the same way, observation of $\nu$'s from GRBs would
place bounds on $\tau (\nu) > 10^{17} (D/Gpc)\ sec$, which for a $10~Mpc$
distant source is some $200$ times stronger than the bounds on $\bar{\nu_e}$
from SN1987A. Of course this is really a bound on the lifetime of the dominant
mass eigenstate.
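The bound is simply the light travel time, and the factor of roughly 200 relative to the SN1987A bound follows immediately:

```python
MPC_M = 3.0857e22   # metres per megaparsec
C = 2.9979e8        # speed of light, m/s

def travel_time_s(D_mpc):
    # Lab-frame lifetime bound ~ light travel time D/c from the source.
    return D_mpc * MPC_M / C

tau_gpc = travel_time_s(1000.0)        # ~ 1e17 s for D = 1 Gpc
gain = travel_time_s(10.0) / 5e12      # vs. SN1987A bound of 5e12 s -> ~200
```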
\subsection{Neutrino Electric Charge}
If neutrinos carry even a tiny electric charge (as favored in some theoretical
models\cite{ignatev94}), then the passage through magnetic field regions
enroute to the earth from a distant source creates an additional dispersion in
arrival time, $\delta t$. From the observed upper bound on any additional
dispersion beyond the gamma pulse time distribution one can then derive a limit
on the neutrino charge from the Barbiellini--Cocconi
formula\cite{barbillieni87}:
$$
\frac{Q_\nu}{|e|} < \frac{\delta t}{D} \sqrt{\frac{<E>}{0.6 D B}}
\left(\delta E / E \right) ^{-1/2},
$$
where $\delta t$ is the burst duration, $D$ is the flight distance, $B$ is the
root--mean--squared average magnetic field experienced by the neutrinos
enroute, and $\delta E/E$ is the relative spread in energies. For $\delta t
\sim 10^{-3}~sec$, $D \sim 10~Mpc$, $<E> \sim 1~GeV$, $B \sim 10^{-12}$~Tesla
and $\delta E /E \sim 1$, one has $Q_\nu /|e| < 10^{-27}$. Hence it may be
possible to improve considerably on the SN1987A limit of $Q_{\nu_e} < 10^{-14}
|e|$, and the somewhat better laboratory limit of $Q_{\nu_e} < 10^{-19} |e|$.
Moreover, one could place the first limit on $\nu_\mu$, and possibly on
$\nu_\tau$.
\subsection{Neutrino Speed}
To the extent that the $\gamma$-ray and neutrino pulses coincide, new limits on
neutrino speed relative to the speed of light may be placed. If
$\delta t = t_{\nu} - t_\gamma$, then
$$
1-v_{\nu}/c \le \delta t / D .
$$
For $\delta t$ as large as 1 second this can place upper bounds of the order of
$10^{-15} (10 Mpc/D)$ on $1-\beta_{\nu}$, to be compared with $10^{-9}$ (for
$\nu_e$) from SN1987A\cite{stodolsky88}.
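The bound again follows from the light travel time (numbers from the text):

```python
MPC_M = 3.0857e22   # metres per megaparsec
C = 2.9979e8        # speed of light, m/s

def speed_bound(dt_s, D_mpc):
    # 1 - v/c <= dt / (D/c)
    return dt_s / (D_mpc * MPC_M / C)

b = speed_bound(1.0, 10.0)   # ~ 1e-15 for dt = 1 s, D = 10 Mpc
```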
\subsection{Equivalence principle}
The neutrinos from SN1987A were used to establish limits on parameters in
various theories of gravitation and to test whether the Weak Equivalence
Principle (WEP) is symmetric with respect to bosons and fermions, and with
respect to matter and anti-matter. Specifically, the Shapiro delays for
gammas, neutrinos, and anti-neutrinos passing the Galactic nucleus were
compared and found to be the same, within errors \cite{longo88,pakvasa89}.
Also, SN1987A provided tests for the presence of proposed new forces in nature
and tests of the equality of parameters when applied to photons, neutrinos, or
anti-neutrinos \cite{pakvasa89,grifols94}. The same methods would apply to
neutrinos from GRBs. However, under our assumptions, the distances would be
greater and the impact parameter with the Galactic nucleus would vary from
event to event, offering much improved sensitivity and a new distance scale, as
well as a large increase in statistics.
To test the possibility that the neutrino and antineutrino gravity couplings
differ requires tagging of neutrino and antineutrino events separately.
Distinguishing between $\nu_e$ and $\bar{\nu_e}$ interactions at low energy may
be possible because of the charged current capture by protons. At higher
energies distinguishing between $\nu_\mu$ and $\bar{\nu_\mu}$ interactions is in
principle possible if one employed a magnetic field, or if one could adequately
detect the muon capture by nuclei; but in practice it cannot be done in
instruments proposed at this time. A $\nu - \bar{\nu}$ separation would also
be useful for studying possible CP violation in neutrino
oscillations\cite{cabibbo78}.
A flavor violating gravitational coupling has been proposed as a possible
mechanism for accounting for atmospheric neutrino as well as solar
anomalies\cite{pantaleone93}. It is remarkable that one choice of parameters
($\sin^2 2\theta_G \sim 1$, $\delta f \sim O(10^{-15})$) can account for both
problems. The transition probability for $\nu_\mu \leftrightarrow \nu_e$ is
$\sin^2(2\theta_G) \sin^2 (\frac{1}{2} L E \phi (L) \delta f )$, where $\phi$ is
the gravitational potential and $\delta f$ is a measure of the degree of
violation of the WEP. Without a knowledge of $\phi$ along the path it is not
possible to calculate the net effect on the mixing, but it is very likely
similar to the expectation in the oscillation case ({\it i.e.}, $\nu_e:\nu_\mu$
becomes $1.2:1$ from $2:1$).
\section{Cosmology}
Norris, et al. \cite{norris94}, have analyzed the experimental data assuming
that the GRBs are standard candles in gamma rays and have found evidence for a
cosmological time dilation. If the GRBs are neutrino sources, then the same
analysis can provide, for the first time, non-electromagnetic evidence for the
expansion of the universe\footnote{ We note that such an analysis would exclude
any cosmological model which attempts to explain the observed red--shifts by
``tired photons''; earlier arguments against the tired photon hypothesis were
given by Zeldovich, {\it et al.}\cite{zeldovich63}. }.
The prevailing view, that the expansion of the universe is of a universal
nature, leads to the expectation that the gamma ray and neutrino time dilations
will be identical. Unfortunately, a nearly identical dilation may occur if the
cause is evolutionary effects. This is because the charged and neutral pions
of the pion--production model, and the emitted photons and neutrinos of the SCS
model arise from a common mechanism. Still, it is possible that the neutrino
and photon opacities evolve differently, in which case studies of the differing
dilations may yield information on cosmic evolution.
An important probe of the universe at an early time is the oscillation phase
itself, $\phi_{ij}$. In Minkowski space, the quantum mechanical phase is just
$E t$. However, it is a bit more complicated in the expanding universe. In the
adiabatic approximation, the phase is
\begin{equation}
\phi = R_0 \int_0^{\tau} \frac{d\tau'}{R(\tau')}E .
\label{phase}
\end{equation}
We have assumed a time--dependent Robertson--Walker metric, with scale factor
$R(\tau)$; $\tau$ is the lookback time to the source emission, and $R(0)\equiv
R_0$. The red--shift factor $R_0/R(\tau)$ accounts for the time dilation of our
clocks compared to early universe clocks. Taking the difference of two
mass--eigenstate phases, expanding $E_i\simeq p + m_i^2/2p$, and red--shifting
the momentum to its present--time observed value via $p(\tau) = (R_0/R(\tau))
p_0$, one obtains from Eq. (\ref{phase}) the very simple result for the
oscillation phase:
\begin{equation}
\phi_{ij}=\frac{\tau \delta m^2_{ij}}{2 p_0}.
\label{oscphase}
\end{equation}
The blue--shift of the inverse momentum exactly cancels the red--shift from
time dilation. Thus, the correct generalization of the oscillation phase from
Minkowski space to an expanding cosmology is obtained by replacing laboratory
time with cosmic lookback time, or equivalently, replacing $l_{ij}$ in Eq.
(\ref{one}) with $c\tau_{ij}$.
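The cancellation leading to Eq. (\ref{oscphase}) can also be verified numerically by integrating the mass-dependent part of Eq. (\ref{phase}) directly. In the sketch below the matter-dominated scale factor and all parameter values are illustrative choices of ours:

```python
# Numerically integrate the mass-dependent part of Eq. (phase) and compare it
# with the closed form of Eq. (oscphase).  The matter-dominated scale factor
# and all parameter values below are illustrative choices.
N = 100000
TAU = 0.5                       # lookback time, in units of the present age
DM2, P0, R0 = 1.0, 1.0, 1.0     # delta m^2, observed momentum, present scale factor

def scale(tp):                  # R(tau'), matter dominated, R(0) = R0
    return R0 * (1.0 - tp) ** (2.0 / 3.0)

def momentum(tp):               # p(tau') = (R0 / R(tau')) * p0
    return R0 / scale(tp) * P0

dtau = TAU / N
phase = sum(R0 / scale(tp) * DM2 / (2.0 * momentum(tp))
            for tp in ((k + 0.5) * dtau for k in range(N))) * dtau

analytic = TAU * DM2 / (2.0 * P0)     # Eq. (oscphase)
# the red-shift of the clock and the blue-shift of 1/p cancel exactly,
# so phase equals analytic up to rounding
```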
This deceptively simple result is rich in cosmological information. In
particular, if $\delta m^2$ were {\it a priori} known, then a measurement of
the flavor ratio and $E_{\nu}$ would directly yield $\tau$. This is analogous
to obtaining the cosmic red--shift $z$ directly from a measurement of a
photon's energy, when the unshifted spectral line of the photon source is {\it
a priori} known. Furthermore, $\tau$ contains as much information as is
conveyed by $z$. In fact, $\tau$, $z$, and the distance $D$ are linearly
related for small $z, \tau, D$, and nonlinearly related for large $z, \tau,
D$. For example, to first nonlinear order, some Taylor series expansions
relating these three variables are\cite{kolb90}
$z(\tau)=H_0\tau+(1+\frac{q_0}{2})(H_0\tau)^2+\ldots$, and its inverse relation
$\tau(z)=H_0^{-1}\left[z-(1+\frac{q_0}{2})z^2+\ldots \right]$; and $D_L =
H_0^{-1} \left[z+\frac{1}{2}(1-q_0)z^2 +\ldots\right]$. Here, $D_L$ is the
``luminosity'' distance defined as $D_L^2 = {\cal L}/4\pi{\cal F}$, where
${\cal L}$ is the absolute luminosity at the source (energy/time), and ${\cal
F}$ is the fluence measured at earth (energy/time/area). Results for other
distance definitions are similar; {\it e.g.}, the ``proper'' distance ($R_0\times$
comoving coordinate distance) is given as $D_P = H_0^{-1}
\left[z-\frac{1}{2}(1+q_0)z^2 +\ldots\right]$. In all of these relations,
$q_0$ is the present value of the deceleration parameter.
The value of $q_0$ is unknown; since the GRBs seem to exist at cosmic
distances, it is possible that oscillation measurements with GRB neutrinos may
shed some ``neutrino light'' on this important parameter. An independent
measurement of even two of the three variables $z, \tau$, and $D$ would
potentially determine $q_0$ and test cosmological models, because of the
nonlinear relations. The series expansions make it clear that the linear
Hubble relation fails by a fractional amount $z$ in red--shift, $H_0\tau$ in
lookback time, and $D/H_0^{-1}$ in distance. Interpreting the burst dilation of
fainter GRBs as due to time dilation leads to the estimate $z\sim$ 1 to 2 for
these GRBs, so there is indeed hope that oscillation measurements may yield a
value for $q_0$. We have seen that for small neutrino masses, it may be
possible to measure the phase of oscillations from pion--producing GRBs. Of
course, one would require a much better understanding of the structure and
mechanisms of GRBs than we have today, in order to draw useful conclusions.
Let us assume for the moment that such measurements can be made with enough
precision to yield values for $\tau \delta m^2_{ij}$. If an independent
measurement of $z$ or $D$ is available, then a single GRB would suffice to fix
$\delta m^2$, and a second GRB measurement would yield $q_0$. In this idealized
situation, we may also inquire about the nature of higher order terms in the
nonlinear expansions relating $z, \tau$, and $D$. In fact, given a
cosmological model, the nonlinear relations among $z, \tau$, and $D$ are
exactly calculable. For example, $\tau = \int^{R_0}_{R(\tau)}dR/ \dot{R} =
\int^{R_0}_{R(\tau)} d\ln R/ H$ will yield $\tau(1+z=R_0/R(\tau))$ once
$\dot{R}(\tau)$ or equivalently, $H\equiv \dot{R}/R$ are determined as a
function of $R$ (or $z$) from the Friedmann equation. Ignoring the radiation
energy density of the recent universe compared to the matter density, the
Friedmann equation reads
$H(z)^2 = H_0^2\left[(1+z)^2(1+z\Omega_0)-z(2+z)\Omega_{\Lambda}\right]$,
where $\Omega_0$ is the present matter density compared to the critical value
$\rho_{c}=3H_0^2/8\pi G$, and $\Omega_{\Lambda}=\Lambda/3H_0^2$; $\Lambda$ is
the cosmological constant. (The Friedmann universe is open or closed
according to whether $\Omega_0 +\Omega_{\Lambda}$ is less than, or greater
than, unity.) The Friedmann equation may be manipulated to yield
\begin{equation}
\tau=H_0^{-1}\int^{1+z}_1 \frac{d\, \omega}{\omega}
(\Omega_0 \omega^3 +\Omega_\Lambda -\Omega_k \omega^2)^{-\frac{1}{2}},
\label{tau}
\end{equation}
where the curvature term $\Omega_k = \Omega_0 +\Omega_{\Lambda} -1$ is
constrained by the Friedmann equation itself. A thorough set of numerical
solutions to this equation may be found in ref.\cite{felten86}. With
$\Omega_{\Lambda}$ omitted, the integral is easily solved analytically. The
form of the solution for $\tau(z)$ depends on whether the universe is closed
($\Omega_0 > 1$), critical ($\Omega_0 = 1$) or open ($\Omega_0 < 1$). For the
critical case, motivated by inflation, the result is
\begin{equation}
\tau=\frac{2}{3} H_0^{-1}\left[1-(1+z)^{-3/2}\right].
\label{taucrit}
\end{equation}
For small $z$, the integral is just $z$, and the linear Hubble relation $H_0
\tau\simeq z$ results. A related calculation yields the simple and useful
relation between the red--shift and the luminosity distance\cite{weinberg72},
valid for $\Omega_{\Lambda} \ll \Omega_0$: $D_L =(2 H_0^{-1}/\Omega_0^2) \left[
z \Omega_0 + (\Omega_0 -2) (\sqrt{z\Omega_0 +1} -1) \right]$. Then for
$\Omega_0 =1$ as suggested in inflationary cosmologies, $D_L =
2H_0^{-1}\left[z+1-\sqrt{z+1}\right]$. Given a cosmological model,
measurements of a second GRB would potentially validate or invalidate the
model, by fitting or not fitting the nonlinear $\tau(z)$ or $\tau(D)$
relation.
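As a consistency check, Eq. (\ref{tau}) can be integrated numerically and compared with the closed form of Eq. (\ref{taucrit}). The midpoint-rule quadrature below is an illustrative sketch, with lookback times in units of $H_0^{-1}$:

```python
# Midpoint-rule evaluation of Eq. (tau) for the critical case Omega_0 = 1,
# Omega_Lambda = 0 (hence Omega_k = 0), compared with Eq. (taucrit).
# Lookback times are in units of 1/H_0.
def tau_numeric(z, omega0=1.0, omega_lam=0.0, n=100000):
    omega_k = omega0 + omega_lam - 1.0
    a, b = 1.0, 1.0 + z
    h = (b - a) / n
    total = 0.0
    for k in range(n):
        w = a + (k + 0.5) * h
        total += h / (w * (omega0 * w**3 + omega_lam - omega_k * w**2) ** 0.5)
    return total

def tau_critical(z):            # Eq. (taucrit)
    return (2.0 / 3.0) * (1.0 - (1.0 + z) ** -1.5)

# tau_numeric and tau_critical agree to the quadrature accuracy for any z
```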
In static Minkowski space, a neutrino source is most useful for oscillation
studies if its distance is comparable to the oscillation length; a shorter
distance does not provide sufficient path length for oscillations to develop,
and in a longer distance the information is effectively averaged over many
oscillations. However, in an expanding metric, distance must be carefully
defined, and the situation is different. We may calculate the lookback times
and luminosity distances to typical GRBs using the $\tau(z)$ or $D_L(z)$
formulae just given. The results are $\tau(z=1)=0.43 H_0^{-1}$ and
$\tau(z=2)=0.54 H_0^{-1}$, and $D_L(z=1)=1.18 H_0^{-1}$ and $D_L(z=2)= 2.54
H_0^{-1}$. Note that for this matter--dominated, critical--density example,
$D_L(z)$ is unbounded as $z$ increases, whereas the lookback time $\tau(z)$
has approached $\frac{2}{3} H_0^{-1}$ for large $z$. The lookback time is the
relevant variable for neutrino oscillation, since it is directly proportional
to the oscillation phase, according to Eq. (\ref{oscphase}). The fact that
$\tau$ has become asymptotic for the red--shift values typical of GRBs has an
important implication: the value of $\tau$ in the oscillation phase is nearly
$\tau(z=\infty)$, which in any cosmological model is a known fraction of
$H_0^{-1}$. This fact means that the uncertainty in $\tau$ is dominated by the
uncertainty in $H_0$, which is only a factor of two. Thus, a single
measurement of the oscillation phase and neutrino energy may permit a
determination of $\delta m^2$ to a factor of two!
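These numbers follow directly from the closed-form expressions for the critical universe; the short sketch below reproduces them (in units of $H_0^{-1}$):

```python
# Reproduce the quoted lookback times and luminosity distances for the
# matter-dominated critical universe (Omega_0 = 1), in units of 1/H_0.
def tau(z):                     # Eq. (taucrit)
    return (2.0 / 3.0) * (1.0 - (1.0 + z) ** -1.5)

def d_lum(z):                   # luminosity distance evaluated at Omega_0 = 1
    return 2.0 * (z + 1.0 - (z + 1.0) ** 0.5)

values = {"tau(1)": tau(1.0), "tau(2)": tau(2.0),
          "DL(1)": d_lum(1.0), "DL(2)": d_lum(2.0)}
# tau(1) ~ 0.43 and tau(2) ~ 0.54 approach the asymptote 2/3, while
# DL(1) ~ 1.2 and DL(2) ~ 2.5 keep growing with z
```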
Comparing an exact cosmological solution for $z, \tau$, or $D$ to the
appropriate Taylor series approximation relates each term in the series to more
fundamental quantities. The simplest relation is between the parameter $q_0$
multiplying the quadratic term, and the parameters $\Omega_0$ and
$\Omega_{\Lambda}$ in the Friedmann equation. The relation is\cite{felten86}
$q_0 = \Omega_0 /2 - \Omega_{\Lambda}$. Thus, a neutrino oscillation
determination of $q_0$ would establish a fundamental constraint between the
two parameters of the Friedmann universe.
Studying cosmology by measuring neutrino flavor ratios has one tremendous
advantage over other methods, namely that flavor ratios should be independent
of any evolutionary effects in the ensemble of sources. Absolute neutrino
luminosities may evolve, but the initial neutrino flavor ratios are fixed by
microphysics; it is hard to imagine that these ratios will change with cosmic
history. This is in sharp contrast to the use of distant ``candle''
luminosities to infer deviations from the linear Hubble Law, which may be due
to the deceleration of the universe or to evolutionary effects in the candles.
\section{Physical Constants - Time Dependence}
There is a long tradition in physics of asking whether various constants are,
in fact, constant over cosmological time. The best known of these questions was
raised by Dirac. In 1981, Barrow\cite{barrow81} reviewed these ideas. Among
his conclusions is the idea that only dimensionless constants can have a
meaningful time dependence. In fact there are some new strong limits upon the
time variation of the fine structure constant and the electron to proton mass
ratio, and upon possible variations across causally disconnected regions of
space\cite{cowie94}.
However, any possible time variation of the dimensionless parameters upon which
neutrino oscillations depend (mass ratios and mixing angles, or equivalently,
the Yukawa couplings at the origin of fermion mass generation) are without
constraint. If the distances to the GRBs can be measured by independent means,
and the neutrino mass differences turn out to be very small, then a time
dependence of the mixing angles over cosmological times may be detectable as
deviations from the expected flavor ratios.
It has been speculated that dimensionful cosmological parameters may have a
time--dependence. After all, the Hubble parameter itself has a complicated
dependence on time, varying inversely during power law expansion, and
exponentially during inflationary expansion. The cosmological constant
$\Lambda(t)$ (or equivalently, the energy density of the vacuum), has been
hypothesized to relax to zero asymptotically with time \cite{antoniadis84}. And
the Hubble parameter and Newton's gravitational constant have been hypothesized
to consist of two terms each, one standard and one oscillating periodically in
time \cite{hill90}. Motivation for the relaxation of $\Lambda$ is that the
present value of $\Lambda/8\pi G$ is known to be less than
$10^{-47}~GeV^4$\cite{kolb90}, whereas there is no good theoretical
understanding of why this should be so small. Motivation for the periodic
oscillation in $H$ and $G$ is that periodic modulation offers an explanation
for the controversial observation\cite{broadhurst90} of a $128/h$ Mpc
quasi--period in deep ``pencil beam'' surveys of galaxy positions. If
$\Lambda(\tau), G(\tau)$, and/or $H(\tau)$ are time--dependent, then the
Friedmann equation is modified. As discussed in the previous section, the
oscillation phases of neutrinos emitted at large lookback time $\tau$ are
sensitive to the parameters in the (now modified) Friedmann equation. Thus,
measured flavor ratios can offer information on the time dependences of these
cosmic parameters.
\section{Implications for Neutrino Telescopes}
We discuss some implications of our analysis for designers of neutrino
telescopes. The spectrum of gammas from GRBs has been seen out to energies of a
few $GeV$, so lacking a detailed GRB model one would do well to focus upon
neutrino detection in a similar energy range. There are already weak limits
on the neutrino flux associated with GRBs from the IMB
experiment\cite{becker94} and others. The largest deep mine experiment yet
planned, the SuperKamiokande detector with a 50 kiloton sensitive volume
(scheduled for operation in 1996) will have about ten times the sensitivity for
events with energies between $5~MeV$ and a few $GeV$ compared to previous
instruments.
Much further progress for sensitivity down to the few $MeV$ region is not
presently on the horizon. One possibility would be parasitic use of a megaton
size detector constructed for observation of supernova neutrinos out to a few
$Mpc$. The capability to sense neutrino versus antineutrino interactions via
interaction characteristics, muon absorption, or magnetic fields to distinguish
charge, would provide a powerful tool for many of the tests described above.
However, if the GRB neutrino spectrum extends to energies of many $GeV$ or
even $TeV$, then there is more hope for the near future. This is because the
detectability of signals rises strongly with energy. For example, the AMANDA,
Baikal, DUMAND and NESTOR ice/water instruments now under construction have
effective volumes for $100~GeV$ neutrinos of order $10^6~tonnes$, up by two
orders of magnitude from underground instruments\cite{learned94b}. Observation
of the ratio of muon charged current events to muonless events would
potentially allow for discrimination between the putative neutrino--flavor
democratic source and the pion source with $\pi \rightarrow \mu \rightarrow e$
decay and $\nu_{\mu}$/$\nu_e$ = 2.
We can make a rough estimate of counting rates for neutrinos by assuming a
spectral shape which we take to be $1/E^2$, and a gamma flux which we take to
be $1~\gamma/cm^2/burst$ with energy greater than $1~MeV$. The ratio of
neutrinos to gamma rays, labeled as $\eta$, could be zero in the case of a
purely electromagnetic origin of the gammas, in which case most of the
foregoing is irrelevant except for the important constraint upon the source
model. For the situation of particle acceleration with power law spectra, as
discussed by Paczynski, $\eta \stackrel{>}{\sim} 1$. $\eta \simeq 1$ holds if the source
is not heavily shielded. Even better for our purposes, in situations similar to
that expected near AGN\cite{stenger92} the attenuation of the gamma rays can be
severe, while the neutrinos flow freely and so $\eta >> 1$, possibly even as
large as the $10^3$ energy emission ratio expected in supernovae.
We take a simplistic model of a standard $10,000~m^2$ effective area (for muon
detection) instrument ({\it e.g.}, AMANDA, Baikal, DUMAND or NESTOR). In
underwater (or ice) experiments the area grows with energy somewhat, but for
simplicity we make the conservative assumption that this area is independent of
energy (as in mine detectors). We take the effective detector area for
neutrinos then as the muon range, times the effective area for muons, times
the density of the medium, times Avogadro's Number, times the
neutrino--nucleon cross section. This turns out to be $90~cm^2$ for the
nominal detector at $1~TeV$, and it scales roughly as ${E_\nu}^2$ from cross
section and range, for energies from $1~GeV$ to $10~TeV$.
A given detector will have some threshold detection energy, which in practice
is not a step function, though for simplicity we take it to be so, at $20~GeV$.
We also take an arbitrary maximum neutrino energy of $1~TeV$. With these
assumptions we find for the expected number of neutrino interactions per burst
which are below the neutrino detector horizon, the value $9 \cdot 10^{-5} \eta
/ burst$ in muon neutrinos. If we take the rate of GRB (as now detected) as
about $1/day$, then the total expected number of correlated muons for this
standard neutrino detector is $1.5 \cdot 10^{-2} \eta /year$. Since the input
assumptions are certainly imprecise to a factor of ten, this could easily be
well detectable or beyond experimental reach.
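The event-rate arithmetic of the last two paragraphs can be reproduced in a few lines. The integrand $(K/E^2)\,A(E)$ is constant because the $1/E^2$ spectrum and the $E^2$ area scaling cancel. In this sketch the factor of two for bursts below the detector horizon is our reading of the text, and $\eta = 1$ is assumed:

```python
# Sketch of the per-burst and per-year event estimate: 1/E^2 neutrino
# spectrum normalized to eta particles/cm^2 above 1 MeV, 90 cm^2 effective
# area at 1 TeV scaling as E^2, 20 GeV threshold and 1 TeV cutoff.
MEV, GEV, TEV = 1.0, 1.0e3, 1.0e6        # energies in MeV
ETA = 1.0                                 # neutrino-to-gamma ratio
K = ETA * 1.0 * MEV                       # spectrum K/E^2: 1/cm^2 above 1 MeV
E_MIN, E_MAX = 20.0 * GEV, 1.0 * TEV

def a_eff(e_mev):                         # effective area in cm^2
    return 90.0 * (e_mev / TEV) ** 2

n = 10000
h = (E_MAX - E_MIN) / n
events_per_burst = sum((K / e**2) * a_eff(e)
                       for e in (E_MIN + (k + 0.5) * h for k in range(n))) * h
# ~9e-5 per burst; with ~365 bursts/yr and about half of them below the
# horizon, this gives ~1.5e-2 correlated muons per year for eta = 1
events_per_year = 365.0 * events_per_burst / 2.0
```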
If $\eta = 1$, it is easy to see why existing underground instruments have not
seen such correlations as yet, despite lower thresholds. The IMB detector had
$400~m^2$ area, 25 times less than our assumed instrument. In fact, one can
turn this around and ask what limit on $\eta$ is implied by the non--detections
in present underground instruments, under our assumptions of spectrum. In
Figure \ref{fig:eta}, we show the combinations of maximum GRB neutrino energy
and $\eta$ which would lead to one event detected per year in IMB ($400~m^2$),
a next generation instrument ($10^4~m^2$), and a hypothetical $km^3$ detector
(KM3). One sees that for a $TeV$ GRB neutrino cutoff energy, the IMB limit
on the neutrino to gamma ratio is of the order of a few thousand.
Another approach is to consider the brightest GRBs, and ask whether or not
multiple events from a single source are possible. If we assume that the
distribution of GRB gamma fluxes are dominated by spatial distribution, then
0.1\% of the GRBs will be at 0.1 of the distance of the typical burst, and
offer 100 times the neutrino flux at earth; this is the famous ``3/2 law''.
Thus in one year of operation during which there will be about 1000 GRBs, one
might find a GRB with $9 \cdot 10^{-3} \eta $ muons. One can conclude that
detection of multiple muons per GRB is likely in the standard next generation
instrument only if $\eta \gg 1$. This is illustrated in Figure
\ref{fig:brightest}, where the diagonal lines indicate the $\eta$ and maximum
neutrino energy combinations needed to see an event with ten muons once per
two years in the various classes of detectors (and roughly one in three such
GRBs would have a coincident GRO observation).
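The ``3/2 law'' scaling behind this estimate is simple to verify. The per-burst yield of $9 \cdot 10^{-5} \eta$ muons is taken from the preceding estimate, with $\eta = 1$:

```python
# For a uniform spatial distribution, the brightest fraction f of bursts lies
# within f^(1/3) of the typical distance and is brighter by f^(-2/3).
typical_yield = 9.0e-5                 # muons per typical burst (eta = 1)
f = 1.0e-3                             # brightest 0.1% of ~1000 bursts/year

distance_ratio = f ** (1.0 / 3.0)      # 0.1 of the typical distance
flux_boost = distance_ratio ** -2.0    # 100x the neutrino flux at earth
brightest_yield = typical_yield * flux_boost   # ~9e-3 muons per burst
```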
Better possibilities exist with a $km^3$ scale instrument. Even though current
design discussions center around optimization for detection of $TeV$ to $PeV$
neutrinos from AGN's, neutrinos from GRB's (as well as other opportunities such
as dark matter searching and atmospheric neutrino oscillation measurement)
argue in favor of reducing the threshold to as low a value as feasible, say of
the order of $10~GeV$. Given the lack of understanding of the process which
generates the GRB's we cannot do much in terms of {\it a priori} optimization
of detector design. Good timing is obviously important, but high angular
resolution is not as demanding as for neutrino point source searches.
If we take the energy threshold to again be $20~GeV$ for the ($km^3$) neutrino
detector with $10^6~m^2$ effective muon area, then under the same assumptions
we employed for the standard next generation instrument we get an expected
event number of $9 \cdot 10^{-3} \eta /burst$, or about $1.5 \eta /year$ GRO
coincident detections. More encouraging
yet is the rate of multiples: the brightest 0.1\% of GRBs might produce the
spectacular signature of $90 \mu$'s once in two years if $\eta$ should be 100
and the maximum energy $1~TeV$, in which case one could begin the studies
outlined above.
\section{Conclusion}
Any detection of GRBs in neutrinos would have great significance in
understanding these enigmatic objects. What we advertise herein is that
moreover such a detection can lead to fundamental exploration of neutrino
physics, astrophysics, and maybe even cosmology. This exploration is not
possible by any other means we know.
The implications for the telescope designers then are fairly obvious: make
instruments with as low an energy threshold as practical, and allow for
upgrades (in terms of energy sensitivity and capability to resolve neutrino
flavors) to follow the path of discovery.
\section*{Acknowledgements}
We want to thank Xerxes Tata for many useful discussions. We also acknowledge
Jack VanderVelde for a useful suggestion about calculating rates. We also want
to thank Andy Szentgyorgyi for help with the BATSE data. This work was
supported in part by the U.S. Department of Energy grants no. DE-FG05-85ER40226
and no. DE-FG03-94ER40833.
\section{Introduction}
The detection and characterization of extrasolar planets is one of the most dynamic research fields in modern astrophysics. To date more than $5'200$ exoplanets and exoplanet candidates\footnote{\url{http://exoplanets.org} (as of July 17, 2017)} have been identified by various detection techniques and instruments on ground and in space. In particular the NASA \emph{Kepler} mission \citep{borucki2010} revolutionized the field by providing, for the first time, robust statistics for the occurrence rate of exoplanets as a function of their size, orbital period, and stellar host type, revealing that small-sized planets with radii $< 2~R_\text{Earth}$ are ubiquitous \citep[e.g.,][]{howard2012, dressing2013, fressin2013, mulders2015, dressing2015, burke2015, gaidos2016}. Most of these studies make use of various pipeline completeness models to estimate the distribution of exoplanets in the radius-orbital period space depending on the spectral type of the host star and give their results in either continuous probability density functions or binned occurrence rates. While \emph{Kepler} was an unprecedented success in constraining the occurrence rate of exoplanets even for planets with radii down to $1~R_\text{Earth}$, most of the \emph{Kepler} systems are too distant for in-depth follow-up observations for atmospheric characterization. With the upcoming \emph{TESS} mission from NASA and then later the ESA \emph{PLATO} mission, the number of transiting planets detected around bright, nearby stars will significantly increase. Furthermore, a few transiting systems in the immediate solar neighborhood have already been identified, some of which are excellent targets for follow-up observations with the James Webb Space Telescope (\emph{JWST}) even for the search of biomarkers \citep[e.g.,][]{bertathompson2015, gillon2017a, gillon2017b}. 
However, it is important to keep in mind that transit observations will always only probe a small subset of the exoplanet population as the transit probability is inversely proportional to the star-planet separation. So, overall, it is unclear how many small-sized planets will be uncovered from transit surveys that are suitable for in-depth follow-up observations via transit or secondary eclipse spectroscopy and, at the same time, cover a broad range of host star spectral types and orbital separations.
In this context, a few studies investigated to what extent the next generation of extremely large ground-based telescopes (ELTs) can be used to detect and characterize small-sized exoplanets \citep[e.g.,][]{snellen2015}. \citet{crossfield2013} and \citet{quanz2015} used statistical results from the \emph{Kepler} mission to model exoplanetary systems around nearby stars with the goal of estimating the number and characteristics of exoplanets directly detectable by different ground-based high-contrast imaging instruments such as the mid-infrared E-ELT imager and spectrograph (E-ELT/METIS). Unlike the transit method, direct imaging, which aims at spatially separating the flux received from an exoplanet and the flux received from its host star, is primarily sensitive to widely separated planets as the contrast achievable by the instrument increases with increasing angular separation. It turns out that the spatial resolution and the sensitivity of the next generation of ground-based telescopes will enable the detection of a few small-sized planets around the very nearest stars maybe even in the habitable zone \citep{quanz2015}, but the characterization of a sizeable sample of small-sized exoplanets very likely remains out of reach even in the era of the ELTs.
Already more than a decade ago mid-infrared (MIR) space-based interferometers, providing unprecedented spatial resolution and sensitivity for the search of exoplanets, were proposed to both NASA \citep[Terrestrial Planet Finder, \emph{TPF};][]{lawson2001} and ESA \citep[\emph{Darwin};][]{leger2007}. Unfortunately, neither of the two missions was implemented; besides several technical challenges, one scientific reason for the lack of implementation was the uncertainty in their expected exoplanet yield. \emph{Kepler} released its first data only in 2010 \citep{borucki2010}, and ground-based radial velocity surveys had hardly begun to uncover exoplanets in the super-Earth mass regime at that time. In consequence, the occurrence rate of small, terrestrial planets was largely unconstrained when \emph{TPF} and \emph{Darwin} were discarded. More recently, several papers have investigated the capabilities of space-based direct imaging missions to detect and characterize exoplanets, focusing on Earth twins in environments appropriate for extrasolar life \citep[e.g.,][]{stark2014, brown2015, leger2015, stark2015}. These studies indeed made use of a statistical framework for Earth twins inferred from \emph{Kepler} data. However, all of these studies focused solely on planets that are essentially identical to our own Earth. This is reflected in their choice of the physical planet parameters that they assign to the simulated exoplanet samples needed to estimate the scientific yield of their hypothetical space observatories. It is beyond debate that the search for extrasolar life is one main goal of exoplanet research, but to understand the complicated processes involved in the formation and evolution of planets, a diverse and comprehensive sample of exoplanets is needed.
Furthermore, the occurrence of Earth-like planets in the habitable zone of their parent stars is still affected by high uncertainties, varying from a factor of $\sim2$ around M-type stars \citep[e.g.,][]{dressing2015} to a factor of $\sim10$ around G- and K-type stars \citep[e.g.,][]{burke2015}.
Here, for the first time to our knowledge, we quantify the general scientific yield of a large space-based MIR interferometer by combining the technical specifications of the \emph{Darwin} mission, as proposed a decade ago, with the most recent planet occurrence statistics from the \emph{Kepler} mission. Using Monte Carlo simulations and a stellar sample of $326$ nearby stars within $20~\text{pc}$, we make predictions for the detection rates of exoplanets in different bands between $5.6$ and $15~\text{\textmu m}$ and over a large range of planet radii ($0.5$ -- $6~R_\text{Earth}$) and orbital periods ($0.5$ -- $418~\text{d}$). Our analysis allows us to identify the best target stars and to estimate what percentage of exoplanets might also be detectable with future, high-precision radial velocity or space-based optical/near-infrared (NIR) instruments because fully characterizing a planet, and in particular assessing its habitability, will require data from multiple techniques and instruments over a wavelength range that is as broad as possible. We also quantify what scientific gain/loss, in terms of planet detections, one can expect depending on the achievable spatial resolution and sensitivity of the space interferometer. As such, this paper aims at providing a \emph{scientific} basis for reinitiating and inspiring the discussion for the need for such a mission in the long term; we do not discuss the technical readiness or related challenges.
\section{Methods}
In our Monte Carlo simulations we model $2'000$ exoplanetary systems around each of $326$ nearby stars within $20~\text{pc}$ based on planet occurrence statistics from \emph{Kepler}. All astrophysical and instrumental parameters for our baseline scenario can be found in Table~\ref{baseline} and in the following subsections.
\begin{table}[!h]
\caption{\label{baseline}Astrophysical and instrumental parameters for our baseline scenario.}
\centering
\begin{tabular}{lll}
\hline\hline\noalign{\smallskip}
Parameter & Value & Description \\
\noalign{\smallskip}\hline\noalign{\smallskip}
$\cos i$ & $[-1,~1)$ & Cosine of inclination \\
$\Omega$ & $[0,~2\pi)$ & Longitude of ascending node \\
$\omega$ & $[0,~2\pi)$ & Argument of periapsis \\
$\vartheta$ & $[0,~2\pi)$ & True anomaly \\
$e$ & $0$ & Eccentricity\tablefootmark{a} \\
$A_\text{B}$ & $[0,~0.8)$ & Bond albedo\tablefootmark{b} \\
$A_\text{g}$ & $[0,~0.1)$ & Geometric albedo\tablefootmark{b} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
IWA & $5.0~\text{mas}$ & IWA @ $\lambda_\text{eff} = 10~\text{\textmu m}$\tablefootmark{c, d} \\
OWA & $1''$ & OWA @ $\lambda_\text{eff} = 10~\text{\textmu m}$\tablefootmark{c, e} \\
$F_\text{lim,~F560W}$ & $0.16~\text{\textmu Jy}$ & Sensitivity limit ($\lambda_\text{eff} = 5.6~\text{\textmu m}$)\tablefootmark{f} \\
$F_\text{lim,~F1000W}$ & $0.54~\text{\textmu Jy}$ & Sensitivity limit ($\lambda_\text{eff} = 10.0~\text{\textmu m}$)\tablefootmark{f} \\
$F_\text{lim,~F1500W}$ & $1.39~\text{\textmu Jy}$ & Sensitivity limit ($\lambda_\text{eff} = 15.0~\text{\textmu m}$)\tablefootmark{f} \\
\noalign{\smallskip}\hline
\end{tabular}
\tablefoot{Values are distributed uniformly in the given ranges. \\
\tablefoottext{a}{see discussion in Section~\ref{eccentricity}} \\
\tablefoottext{b}{see discussion in Section~\ref{albedos}} \\
\tablefoottext{c}{from \citet{leger2007}} \\
\tablefoottext{d}{IWA = inner working angle; scales with $\lambda_\text{eff}$} \\
\tablefoottext{e}{OWA = outer working angle; scales with $\lambda_\text{eff}$} \\
\tablefoottext{f}{$10\sigma$ detection limit in $35'000~\text{s}$; modified from \citet{glasse2015} who assumed $10'000~\text{s}$ to achieve these limits with \emph{JWST}/MIRI}
}
\end{table}
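The uniform draws of Table~\ref{baseline} can be sketched as follows (illustrative Python; not the pipeline actually used for the simulations):

```python
# Draw the randomized system parameters of the baseline scenario: angles and
# albedos are uniform in the tabulated ranges, eccentricity is fixed to zero.
import math
import random

def draw_system():
    return {
        "cos_i":  random.uniform(-1.0, 1.0),         # cosine of inclination
        "Omega":  random.uniform(0.0, 2 * math.pi),  # longitude of ascending node
        "omega":  random.uniform(0.0, 2 * math.pi),  # argument of periapsis
        "theta":  random.uniform(0.0, 2 * math.pi),  # true anomaly
        "e":      0.0,                               # circular orbits (baseline)
        "A_bond": random.uniform(0.0, 0.8),          # Bond albedo
        "A_geom": random.uniform(0.0, 0.1),          # geometric albedo
    }

# 2000 Monte Carlo realizations per star, as in the simulations
systems = [draw_system() for _ in range(2000)]
```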
\begin{figure*}[h!]
\centering
\includegraphics[width=9cm]{figures/distances.pdf}
\includegraphics[width=9cm]{figures/magnitudes.pdf}
\caption{Left: Distribution of the stellar distances in parsec including all stars from our star catalog. Right: Distribution of the measured Ks-band magnitudes from \emph{2MASS} for the same stars.}
\label{newfig_01}
\end{figure*}
\subsection{Stellar sample}
We adopted the star catalog from the E-ELT/METIS study presented in \citet{quanz2015} containing $328$ stars of spectral types A, F, G, K and M out to a distance of $21.4~\text{pc}$, but we removed two stars with parallaxes $< 50~\text{mas}$ to obtain a stellar sample limited to a distance of $20~\text{pc}$. The basis for this catalog was originally established by \citet{kirkpatrick2012} and contained objects within $8~\text{pc}$. \citet{crossfield2013} then removed white dwarfs, stars with spectral types later than M7, and binaries with apparent angular separations $< 5''$; stars with fainter companions were retained. Finally, \citet{quanz2015} extended the catalog with all dwarf stars with K-band magnitudes $< 7^\text{mag}$ out to a distance of $10~\text{pc}$ and $< 5^\text{mag}$ out to a distance of $20~\text{pc}$ from the SIMBAD Astronomical Database\footnote{\url{http://simbad.u-strasbg.fr/simbad/}}. Close binaries were again removed and empirical relations were used to convert magnitudes and spectral types into stellar parameters. The final catalog contains stellar properties (e.g., parallax, spectral type, radius, effective temperature, and mass) for $326$ stars ($8$ A stars, $54$ F stars, $72$ G stars, $71$ K stars, and $121$ M stars) distributed over the whole sky. Figure~\ref{newfig_01} shows, for illustrative purposes, histograms of the stellar distances and the Ks-band magnitudes as measured by \emph{2MASS} \citep{skrutskie2006}.
\subsection{Planet population}
\label{planet_population}
To create a random planet population we used planet occurrence statistics from \citet{fressin2013}, \citet{dressing2015}, and \citet{burke2015}. \citet{fressin2013} covered the broadest range of planet radii from $0.8$ to $22~R_\text{Earth}$ and orbital periods from $0.8$ to $418~\text{d}$ corresponding to $0.017$ to $1.094~\text{au}$ around a Sun-like star. As \citet{fressin2013} did not find a significant dependence of the planet occurrence on the spectral type we applied these statistics to all AFGKM stars in our star catalog.
As we are especially interested in terrestrial exoplanets, whenever possible we used the planet occurrence statistics from \citet{dressing2015} and \citet{burke2015} instead of those from \citet{fressin2013} for small exoplanets. Compared to \citet{fressin2013}, who only used Q1 -- Q6 data to identify \emph{Kepler} planet candidates, \citet{dressing2015} and \citet{burke2015} considered Q0 -- Q17 and Q1 -- Q16 data, respectively. We therefore expected their statistics to be more complete. Furthermore, they concentrated on certain spectral types, which allowed us to take into account a spectral type dependence of the planet occurrence rates. This patchwork-like approach is summarized in Table~\ref{ragrug}.
\begin{table}[!h]
\caption[]{\label{ragrug}Applied planet occurrence statistics with the planet radius $R_\text{p}$ and orbital period $P_\text{orb}$ ranges for each spectral type appearing in our star catalog.}
\centering
\begin{tabular}{llll}
\hline\hline\noalign{\smallskip}
Spec. type & Statistics & $R_\text{p}$ [$R_\text{Earth}$] & $P_\text{orb}$ [d] \\
\noalign{\smallskip}\hline\noalign{\smallskip}
A & Fressin & 0.8 -- 22 & 0.8 -- 418 \\
\noalign{\smallskip}\hline\noalign{\smallskip}
F & Fressin & 0.8 -- 22 & 0.8 -- 418 \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\multirow{4}{*}{G} & Burke & 0.75 -- 2.5 & 50 -- 300 \\
& Fressin & 0.8 -- 22 & 0.8 -- 50 \\
& Fressin & 0.8 -- 22 & 300 -- 418 \\
& Fressin & 2.5 -- 22 & 50 -- 300 \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\multirow{4}{*}{K} & Burke & 0.75 -- 2.5 & 50 -- 300 \\
& Fressin & 0.8 -- 22 & 0.8 -- 50 \\
& Fressin & 0.8 -- 22 & 300 -- 418 \\
& Fressin & 2.5 -- 22 & 50 -- 300 \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\multirow{3}{*}{M} & Dressing & 0.5 -- 4 & 0.5 -- 200 \\
& Fressin & 0.8 -- 22 & 200 -- 418 \\
& Fressin & 4 -- 22 & 0.8 -- 200 \\
\noalign{\smallskip}
\hline
\end{tabular}
\tablefoot{Planet radii and orbital periods are not simulated in the same range throughout all spectral types.
}
\end{table}
The planet occurrence statistics from \citet{fressin2013} and \citet{dressing2015} were binned in planet radius and orbital period space. We applied a Poisson distribution for the number of planets per star and drew planet radii uniformly in linear space inside a single bin \citep[e.g.][]{crossfield2013} and orbital periods uniformly in logarithmic space inside a single bin, in agreement with \citet{petigura2013} and \citet{silburt2015}. We conservatively assumed that the planet occurrence rate is zero inside bins for which no statistics are provided. If only $1\sigma$ upper limits are given instead of the statistical occurrence of exoplanets $r$, we proceeded as follows: We calculated the missing $r$ using the cumulative number of planets per star versus orbital period from Table~5 of \citet{dressing2015}. For $3.5~R_\text{Earth} \leq R_\text{p} \leq 4~R_\text{Earth}$, where $r$ is missing for two orbital period bins, we assumed that the statistical occurrence of exoplanets is the same for the $3.5~R_\text{Earth} \leq R_\text{p} \leq 4~R_\text{Earth}$ and $0.5~\text{d} \leq P_\text{orb} \leq 1.7~\text{d}$ bin and the $3~R_\text{Earth} \leq R_\text{p} \leq 3.5~R_\text{Earth}$ and $0.5~\text{d} \leq P_\text{orb} \leq 1.7~\text{d}$ bin (including its errors). We further assumed the upper error to be the difference between our calculated $r$ and the given $1\sigma$ upper limit. If the obtained upper error is smaller than $r$, we set the lower error equal to the upper error. However, if the obtained upper error is larger than $r$, we set the lower error equal to $r$ itself to avoid negative planet occurrence rates.
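As an illustration, the sampling scheme for a single occurrence-rate bin can be sketched in a few lines of Python (this is our own minimal sketch, not the original analysis code; the function name and the example bin values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)

def draw_planets_in_bin(rate, rp_edges, porb_edges, rng):
    """Draw planets for one occurrence-rate bin around a single star.

    rate       -- mean number of planets per star in this bin
    rp_edges   -- (min, max) planet radius bin edges [R_Earth]
    porb_edges -- (min, max) orbital period bin edges [d]
    """
    # Number of planets per star follows a Poisson distribution.
    n = rng.poisson(rate)
    # Radii are drawn uniformly in linear space inside the bin ...
    radii = rng.uniform(rp_edges[0], rp_edges[1], n)
    # ... while periods are drawn uniformly in logarithmic space.
    log_p = rng.uniform(np.log10(porb_edges[0]), np.log10(porb_edges[1]), n)
    return radii, 10.0 ** log_p

# Example: a hypothetical bin with an occurrence rate of 0.1 planets per star.
radii, periods = draw_planets_in_bin(0.1, (1.25, 2.0), (50.0, 85.0), rng)
```

Repeating this draw over all bins of Table~\ref{ragrug} for every star yields one realization of the planet population.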
\citet{burke2015} provided their planet occurrence statistics in terms of a continuous probability density function,
\begin{equation}
\label{eq01}
\frac{d^2f}{dR_\text{p}dP_\text{orb}} = F_0C_\text{n} \cdot \begin{cases} \left( \frac{R_\text{p}}{R_0} \right)^{\alpha_1}\left( \frac{P_\text{orb}}{P_0} \right)^\beta &\quad R_\text{p} < R_\text{brk}, \\ \left( \frac{R_\text{p}}{R_0} \right)^{\alpha_2}\left( \frac{R_\text{brk}}{R_0} \right)^{\alpha_1-\alpha_2}\left( \frac{P_\text{orb}}{P_0} \right)^\beta &\quad R_\text{p} \geq R_\text{brk}, \end{cases}
\end{equation}
where we also applied a Poisson distribution with mean value $F_0$ for the number of planets per star since
\begin{equation}
\label{eq02}
\int_{R_\text{p,~min}}^{R_\text{p,~max}}\int_{P_\text{orb,~min}}^{P_\text{orb,~max}}\frac{f}{F_0}dR_\text{p}dP_\text{orb} = 1,
\end{equation}
and thus $F_0$ represents the average number of planets per star. We separated the continuous probability density function into a planet radius and an orbital period part and calculated their inverse cumulative distribution functions (normalized to one) into which a uniformly distributed random variable between 0 and 1 can be inserted yielding a planet radius and an orbital period distribution according to Equation~\ref{eq01}.
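The inverse-CDF step for the orbital period part of Equation~\ref{eq01}, a truncated power law $\propto P_\text{orb}^{\beta}$, can be sketched as follows (the $\beta$ value and period range below are illustrative placeholders, not the fitted parameters of \citet{burke2015}):

```python
import numpy as np

def sample_power_law(beta, x_min, x_max, u):
    """Inverse-CDF sampling from a truncated power law p(x) ~ x**beta.

    u is uniform in [0, 1); beta != -1 is assumed for simplicity.
    """
    a = x_min ** (beta + 1.0)
    b = x_max ** (beta + 1.0)
    return (a + u * (b - a)) ** (1.0 / (beta + 1.0))

rng = np.random.default_rng(0)
u = rng.uniform(size=10_000)
periods = sample_power_law(-0.7, 50.0, 300.0, u)  # all within [50, 300) d
```

Inserting $u = 0$ and $u \to 1$ recovers the lower and upper bin edges, as required of a normalized inverse cumulative distribution function.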
We note that orbital periods $\leq 418~\text{d}$ are too short to probe the habitable zones around A- and F-type stars. Furthermore, we disregard Earths ($0.8$ -- $1.25~R_\text{Earth}$) and super-Earths ($1.25$ -- $2~R_\text{Earth}$) in the conservative habitable zone around G-type stars \citep{kopparapu2013}, since \citet{fressin2013} provided empty bins above $P_\text{orb} = 145~\text{d}$ and the data from \citet{burke2015} extended to $P_\text{orb} = 300~\text{d}$ only. However, as mentioned in the Introduction, we do not solely focus on exoplanets in the habitable zone here, and we refrain from extrapolating the orbital period distribution to longer periods for our baseline scenario. We come back to the question of potentially habitable exoplanets in Section~\ref{exo_earths}.
We assigned a randomly distributed orbit and position on this orbit and a randomly drawn Bond and geometric albedo to each generated exoplanet. We distributed the Bond albedos $A_\text{B}$ uniformly in $[0,~0.8)$ considering the Bond albedos of the solar system planets, which range from $0.068$ (Mercury) to $0.77$ (Venus)\footnote{\url{http://nssdc.gsfc.nasa.gov/planetary/factsheet/}}, and the geometric albedos $A_\text{g}$ uniformly in $[0,~0.1)$, because many gases that dominate the atmospheres of our solar system planets absorb strongly in the infrared \citep[e.g., $\text{H}_2\text{O}$, $\text{CO}_2$, $\text{O}_3$, $\text{CH}_4$, $\text{N}_2\text{O}$;][]{green1964}. As we discuss further below, the geometric albedos have no significant influence on our results since the percentage of exoplanets that can be detected only due to their reflected host star light is negligible longward of $\sim3~\text{\textmu m}$. We then calculated the apparent angular separation between exoplanet and host star and the effective temperature of the exoplanet $T_\text{eff,~p}$, which we assumed to be equal to its equilibrium temperature,
\begin{equation}
\label{eq03}
T_\text{eff,~p} = T_\text{eq,~p} = \left[ \frac{R_*^2(1-A_\text{B})}{4r_\text{p}^2} \right]^\frac{1}{4}T_{\text{eff},~*},
\end{equation}
where $R_*$ is the radius of the host star, $r_\text{p}$ is the physical separation between exoplanet and host star, and $T_{\text{eff},~*}$ is the effective temperature of the host star. We estimated the emitted thermal flux $F_\text{therm,~p}$ of the exoplanet, assuming it emits similar to a blackbody with temperature $T_\text{eff,~p}$. Finally, we computed the reflected host star flux from the exoplanet according to
\begin{equation}
\label{eq04}
F_\text{refl,~p} = A_\text{g}f(\alpha)\frac{R_\text{p}^2}{d^2}F_{\text{inc},~*},
\end{equation}
where $f(\alpha)$ is the exoplanet's phase curve, $d$ is the distance to Earth, and $F_{\text{inc},~*}$ is the host star flux, which is incident on the exoplanet, assuming Lambertian scattering \citep[cf., e.g.,][]{seager2010} and that the host stars are spherical blackbodies as well.
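For illustration, Equation~\ref{eq03} can be evaluated directly; the constants and the Earth-analog check below are our own additions, not values from the simulations:

```python
R_SUN = 6.957e8          # solar radius [m]
AU = 1.495978707e11      # astronomical unit [m]

def equilibrium_temperature(r_star, t_eff_star, a_bond, r_p):
    """Equation (3): T_eq = [R_*^2 (1 - A_B) / (4 r_p^2)]^(1/4) * T_eff,*.

    r_star and r_p must share the same length unit.
    """
    return (r_star**2 * (1.0 - a_bond) / (4.0 * r_p**2)) ** 0.25 * t_eff_star

# Earth-like sanity check: A_B = 0.306 at 1 au around the Sun gives ~254 K.
t_eq = equilibrium_temperature(R_SUN, 5772.0, 0.306, AU)
```

The thermal flux then follows from a blackbody at $T_\text{eq,~p}$, and the reflected flux from Equation~\ref{eq04} in an analogous one-line evaluation.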
\subsection{Instrument properties}
Unlike other studies \citep[e.g.,][]{crossfield2013, stark2014, brown2015, stark2015}, which have investigated the exoplanet yield for a broad range of instrument parameters, we focused on a specific instrument configuration that was proposed to ESA in 2007. We took the technical specifications (see bottom half of Table~\ref{baseline}) of this space-based nulling interferometer from the proposal published by \citet{cockell2009}. This proposal suggested an instrument consisting of four formation-flying mirrors that focus their light onto a fifth beam combiner spacecraft (BCS), where detectors and communication devices are located. The spacecraft would be arranged in an X-like architecture with the four mirrors flying in a single plane at the tips of the X and effectively forming part of a large synthetic paraboloid. The BCS would fly $\sim1.2~\text{km}$ above this plane in the center of the X. The whole setup could be launched to the Earth-Sun L2 point in a single Ariane 5 rocket and would be able to observe an annular region between $46^\circ$ and $83^\circ$ from antisolar direction and thus reach complete sky coverage ($> 99\%$) throughout the whole year.
Besides an imaging baseline $B_\text{I}$ of up to $500~\text{m}$, a nulling baseline $B_\text{N}$ from $7$ to $168~\text{m}$, resulting in a best possible IWA (inner working angle) of $\lambda_\text{eff}/(4B_\text{N}) \approx 3~\text{mas}$ at $10~\text{\textmu m}$, would be possible. In our analysis we assumed a conservative IWA of $5.0~\text{mas}$ at $\lambda_\text{eff} = 10~\text{\textmu m}$. However, we also investigated the impact of shorter nulling baselines, i.e., worse IWAs. For the OWA (outer working angle) we assumed $1~\lambda_\text{eff}/D$, where $D$ is the aperture diameter, which is $\sim1''$ at $\lambda_\text{eff} = 10~\text{\textmu m}$ for a $2.8~\text{m}$ aperture\footnote{To achieve the same collecting area as the \emph{JWST}, each of the four mirror spacecraft would need to have a diameter of $D \approx 2.8~\text{m}$.}. Both IWA and OWA scale with the corresponding effective wavelength $\lambda_\text{eff}$. \citet{cockell2009} further required the instrument to achieve a sensitivity comparable to the \emph{JWST}. In our analysis we focused on the F560W ($5.6~\text{\textmu m}$), F1000W ($10~\text{\textmu m}$), and F1500W ($15~\text{\textmu m}$) filters and their respective sensitivities from the mid-infrared instrument (MIRI) of the \emph{JWST}. We therefore used the MIRI filter curves and the faint source low background detection limits\footnote{They differ by less than $4\%$ from the high background values for our filters.} from Table~3 of \citet{glasse2015}, who performed an extensive study on the detector sensitivity taking into account zodiacal background at the L2 point, thermal instrument background, photon conversion efficiency, image broadening, scattering, and crosstalk. Since our three filters have effectively no overlap, we assumed that all three bands can be observed simultaneously.
However, comparing the instrument throughput of MIRI, which is approximately $35\%$ for the F1000W filter\footnote{\url{https://jwst.etc.stsci.edu/}; the F560W and the F1500W filters have lower throughputs.}, to those of a nulling interferometer \citep{dubovitsky2004}, which is approximately $10\%$ only, we scaled the necessary integration time for a $10\sigma$ detection by a factor of $3.5$, resulting in $35'000~\text{s}$.
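The throughput scaling of the integration time amounts to a simple proportionality (a one-line sketch; the function name is ours):

```python
def scaled_integration_time(t_ref, throughput_ref, throughput_instr):
    """Scale a reference integration time by the throughput ratio.

    Fewer photons per unit time are assumed to require proportionally
    longer integration to reach the same sensitivity limit.
    """
    return t_ref * throughput_ref / throughput_instr

# MIRI-like 10'000 s at ~35% throughput vs. ~10% for the interferometer:
t_int = scaled_integration_time(10_000.0, 0.35, 0.10)  # 35'000 s
```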
The high spatial resolution in combination with the superb sensitivity is key for the direct detection of the thermal emission of close-in extrasolar planets. An exoplanet is considered detectable in the simulations if its apparent angular separation exceeds the instrument's IWA and is smaller than the OWA, and if its total flux observed in a particular band exceeds $F_\text{lim}$ (cf. Table~\ref{baseline}). We do not consider null depth in our simulations\footnote{\citet{cockell2009} require a null depth of $1\mathrm{e}{-5}$ stable over a duration of $5~\text{d}$.}, but discuss various noise effects in Section~\ref{observing_strategy}.
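This detection criterion can be paraphrased as a small predicate (a sketch using the baseline values of Table~\ref{baseline}; the function and its defaults are our own formulation):

```python
def is_detectable(sep_mas, flux_ujy, lam_um,
                  iwa_mas_10um=5.0, owa_mas_10um=1000.0, f_lim_ujy=0.54):
    """Detection criterion: IWA < separation < OWA and flux > F_lim.

    IWA and OWA scale linearly with the effective wavelength; the
    quoted angles refer to 10 micron, F_lim to the chosen filter.
    """
    scale = lam_um / 10.0
    return (iwa_mas_10um * scale < sep_mas < owa_mas_10um * scale
            and flux_ujy > f_lim_ujy)

detectable = is_detectable(10.0, 1.0, 10.0)  # True for this example planet
```

Note that at $5.6~\text{\textmu m}$ the IWA shrinks to $2.8~\text{mas}$, so closer-in planets become accessible there, at the price of a different sensitivity limit.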
\subsection{Radial velocity detectability}
\label{radial_velocity}
In several cases we also analyzed the feasibility of additional radial velocity measurements. However, one needs to know the mass of the planet to estimate the semi-amplitude of the radial velocity signal
\begin{equation}
\label{eq05}
K_* = \sqrt{\frac{G}{1-e^2}}M_\text{p}|\sin(i)|(M_*+M_\text{p})^{-1/2}a^{-1/2},
\end{equation}
where $G$ is the gravitational constant, $e$ is the orbital eccentricity, $M_\text{p}$ is the mass of the planet, $i$ is the orbital inclination, $M_*$ is the host star mass, and $a$ is the semi-major axis of the orbit of the exoplanet around its host star. Since the planet radius is already available (simulated from the \emph{Kepler} planet occurrence statistics), we approximated the mass of the planet by assuming a density of $\rho_\text{p} = 5'000~\text{kg}~\text{m}^{-3}$ for all planets with radii below $2~R_\text{Earth}$. The density is motivated by the mean value of the densities of the four terrestrial planets of our own solar system, which is $5'029~\text{kg}~\text{m}^{-3}$. However, the assumption that all planets with radii below $2~R_\text{Earth}$ are rocky is a rough simplification. The transition from rocky planets to planets with a significant gas envelope probably lies between $1.2$ and $1.8~R_\text{Earth}$ \citep{wolfgang2015}. Similarly, \citet{rogers2015} found that most planets with radii above $1.6~R_\text{Earth}$ are not rocky considering a \emph{Kepler} subsample of planets with radial velocity mass constraints. We thus keep in mind that our analysis might overestimate the radial velocity signal for a fraction of the exoplanets between $\sim1.5$ and $2~R_\text{Earth}$. As baseline detection threshold we adopted a value of $K_* \geq 10~\text{cm}~\text{s}^{-1}$, motivated by the targeted precision of the new Echelle spectrograph for rocky exoplanets and stable spectroscopic observations (ESPRESSO) currently installed at ESO's Very Large Telescope in Chile \citep{pepe2010}; we acknowledge that this precision will not necessarily be achieved for all stars in our sample. In particular, detecting exoplanets around A- and early F-type stars is challenging because of their sparse spectra, which might require additional efforts \citep[e.g., Fourier space correlation;][]{lagrange2009}.
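Equation~\ref{eq05}, together with the density-based mass estimate, can be sketched as follows (our own illustration; an Earth analog seen edge-on gives $K_* \approx 8~\text{cm}~\text{s}^{-1}$ with the adopted density, consistent with the $10~\text{cm}~\text{s}^{-1}$ threshold being marginal for such planets):

```python
import math

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
R_EARTH = 6.371e6    # Earth radius [m]
M_SUN = 1.989e30     # solar mass [kg]
AU = 1.496e11        # astronomical unit [m]

def rv_semi_amplitude(r_p_earth, m_star_sun, a_au, inc, ecc=0.0, rho=5000.0):
    """Equation (5) with the planet mass estimated from its radius and an
    assumed bulk density rho [kg m^-3]; returns K_* in m/s."""
    m_p = rho * 4.0 / 3.0 * math.pi * (r_p_earth * R_EARTH) ** 3
    m_star = m_star_sun * M_SUN
    a = a_au * AU
    return (math.sqrt(G / (1.0 - ecc**2)) * m_p * abs(math.sin(inc))
            / math.sqrt((m_star + m_p) * a))

# Earth analog seen edge-on (rho = 5000 kg/m^3 slightly underestimates
# Earth's true mean density of 5514 kg/m^3):
k_star = rv_semi_amplitude(1.0, 1.0, 1.0, math.pi / 2)  # ~0.08 m/s
```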
\section{Results}
We present all of our results in terms of the expected number of detectable exoplanets ($\eta$). A real observation can be understood as a random experiment where a single value is drawn from a Poisson-like probability function with mean value $\eta$. All results are therefore affected by statistical Poisson errors $\Delta\eta$. Since we simulated $2'000$ exoplanetary systems around each host star, these errors can be calculated for each expectation value of detectable exoplanets $\eta$ via
\begin{equation}
\label{eq06}
\Delta\eta = \frac{\sqrt{\eta \cdot 2000}}{2000} \approx 0.022 \cdot \sqrt{\eta}
\end{equation}
(error of a Poisson distribution).
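Equation~\ref{eq06} as code (a two-line sketch; function name is ours):

```python
import math

N_SYSTEMS = 2000   # simulated exoplanetary systems per host star

def yield_error(eta, n_systems=N_SYSTEMS):
    """Equation (6): Poisson error on the expected planet yield eta."""
    return math.sqrt(eta * n_systems) / n_systems

err = yield_error(261.0)  # ~0.36 planets for the 10-micron filter yield
```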
\begin{figure}[h!]
\centering
\includegraphics[width=\hsize]{figures/COR_MIRI_F560W_baseline_hist2d_rv.pdf}
\includegraphics[width=\hsize]{figures/COR_MIRI_F1000W_baseline_hist2d_rv.pdf}
\includegraphics[width=\hsize]{figures/COR_MIRI_F1500W_baseline_hist2d_rv.pdf}
\caption{Expected number of detectable exoplanets binned in the radius-equilibrium temperature plane for the $5.6~\text{\textmu m}$ filter (F560W, top), $10~\text{\textmu m}$ filter (F1000W, middle), and $15~\text{\textmu m}$ filter (F1500W, bottom). The histograms in each subfigure show the projected equilibrium temperature distribution (top) and the projected radius distribution (right) of the detectable exoplanets. The numbers in percent state the percentage of planets, which is also detectable with radial velocity observations assuming a detection threshold of $K_* \geq 10~\text{cm}~\text{s}^{-1}$ and a planet density of $\rho_\text{p} = 5'000~\text{kg}~\text{m}^{-3}$. Color shade and figure axes are scaled equally in all three subfigures for better comparison. A negligible percentage of detectable exoplanets ($\lessapprox 1\%$) have equilibrium temperatures below $100~\text{K}$ or above $1'300~\text{K}$ and thus lie outside the depicted bins.}
\label{fig01}
\end{figure}
\subsection{Single filter observations}
\subsubsection{Number of detectable exoplanets -- Baseline scenario}
The expected numbers of detectable exoplanets for each individual filter are given in Figure~\ref{fig01}. We restrict the presented parameter space to planets with radii up to $6~R_\text{Earth}$ to stick to the planet radius bins determined by \citet{fressin2013}\footnote{The occurrence rate of planets with radii between $6$ and $22~R_\text{Earth}$ is comparatively low.}. Integrating over the whole depicted radius-equilibrium temperature plane results in $\sim243$ expected planets at $5.6~\text{\textmu m}$, $\sim261$ expected planets at $10~\text{\textmu m}$, and $\sim194$ expected planets at $15~\text{\textmu m}$. The majority of these planets are smaller than Neptune ($R_\text{p} \leq 4~R_\text{Earth}$) and have equilibrium temperatures between $200$ and $700~\text{K}$.
The highest number of exoplanets can be detected with the $10~\text{\textmu m}$ filter although it is inferior to the $5.6~\text{\textmu m}$ filter in terms of spatial resolution (i.e., the IWA, which scales with $\lambda_\text{eff}$) and sensitivity. However, the simulated planet population consists of a significant number of planets with equilibrium temperatures around $300~\text{K}$. Their thermal emission peaks at roughly $10~\text{\textmu m}$, but their flux decreases sharply toward shorter wavelengths so that they remain undetected in the $5.6~\text{\textmu m}$ filter. Going from shorter to longer wavelengths shows that the average equilibrium temperatures of the detectable exoplanets shift slightly toward cooler planets because the planet equilibrium temperature is proportional to the inverse square root of the physical separation between planet and host star; hotter planets are located closer to their host stars and cannot be resolved at longer wavelengths.
The percentage of exoplanets that can also be detected with radial velocity measurements (as defined in Section~\ref{radial_velocity}) is very high for planets between $1.25$ and $2~R_\text{Earth}$, typically close to $100\%$. For these planets the true mass and orbit could be reconstructed by combining data from both observing techniques, including a constraint on the radii of the planets from their measured luminosity (provided that their effective temperature can be reliably estimated). Because of their expected lower mass, planets with radii between $0.5$ and $1.25~R_\text{Earth}$ are less frequently detectable with radial velocity. However, the percentage of such planets that is detectable is still mostly above $70\%$.
\subsubsection{Instrument performance}
We investigated how strongly the scientific output, i.e., the expected number of detectable exoplanets, depends on the assumed instrument performance, focusing on the $10~\text{\textmu m}$ filter because it promises the highest planet yield in our baseline scenario. Figure~\ref{fig02} (top panel) shows the expected number of detectable exoplanets as a function of the assumed faint source detection limit $F_\text{lim}$ for five different nulling baselines resulting in IWAs ranging from $5.0$ to $20.6~\text{mas}$. The slopes of the individual curves are a measure of how stable the expected exoplanet yield is with respect to the sensitivity of the instrument. For all presented IWAs the curves have their steepest slope at roughly our baseline scenario of $F_\text{lim,~F1000W} = 0.54~\text{\textmu Jy}$, implying that the instrument is operating in a regime where its scientific yield is particularly sensitive to variations in the achievable detection limit (see also Section~\ref{observing_strategy}).
\begin{figure}[h!]
\centering
\includegraphics[width=\hsize]{figures/COR_MIRI_F1000W_baseline_performance1.pdf}
\includegraphics[width=\hsize]{figures/COR_MIRI_F1000W_baseline_performance2.pdf}
\caption{Top: Expected number of detectable exoplanets (with radii $< 6~R_\text{Earth}$) as a function of the faint source detection limit $F_\text{lim}$ for the $10~\text{\textmu m}$ filter. The five curves represent various nulling baselines, i.e., various IWAs ranging from $5.0~\text{mas}$ (baseline value) to $20.6~\text{mas}$. The red vertical line indicates our baseline detection limit of $F_\text{lim,~F1000W} = 0.54~\text{\textmu Jy}$. Bottom: Ratios of the other curves to the $5.0~\text{mas}$ curve as a function of the faint source detection limit $F_\text{lim}$.}
\label{fig02}
\end{figure}
\begin{figure*}[h!]
\centering
\includegraphics[width=9cm]{figures/COR_Darwin_baseline_hist_sort.pdf}
\includegraphics[width=9cm]{figures/COR_Darwin_baseline_hist_comp.pdf}
\caption{Left: Expected number of detectable exoplanets (with radii $< 6~R_\text{Earth}$) for each individual host star from our star catalog. Each of the $326$ host stars is represented by one bin of the histogram (see color code for the different spectral types in the figure legend). This plot contains all exoplanets that can be detected in at least one of the three filters used in this work ($5.6~\text{\textmu m}$, $10~\text{\textmu m}$, $15~\text{\textmu m}$). Right: Expected number of detectable exoplanets for each individual host star from our star catalog (left axis, as in the left panel), but sorted by completeness (right axis) overplotted as the transparent gray curve on the histogram.}
\label{fig03}
\end{figure*}
\begin{figure}[h!]
\centering
\includegraphics[width=9.0cm]{figures/COR_Darwin_baseline_piechart.pdf}
\caption{Expected number of exoplanets (with radii $< 6~R_\text{Earth}$) that are detectable in only one, two, or all three filters (see color code for the different filters in the figure legend). There are no planets expected to be detectable simultaneously at $5.6$ and $15~\text{\textmu m}$ alone. In total $\sim315$ exoplanets can be detected in at least one of our simulated filters. Any discrepancies in the numbers are due to rounding errors.}
\label{fig04}
\end{figure}
The ratios of the expected exoplanet yield of the four smaller nulling baselines, i.e., worse IWAs, relative to our baseline value of $5.0~\text{mas}$ are plotted in the bottom panel of Figure~\ref{fig02}. For detection limits that are more sensitive than our baseline value, these curves are roughly constant, in particular the green curve, which depicts the ratio between $8.3$ and $5.0~\text{mas}$. In this regime, for an IWA of $8.3~\text{mas}$, the planet yield is $\sim20\%$ lower; for an IWA of $20.6~\text{mas}$ only $\sim40\%$ of the planets are detected compared to our baseline scenario.
Toward the right side of the presented parameter space all four curves in the bottom panel of Figure~\ref{fig02} show a small dip. This implies that the expected exoplanet yield is most sensitive to variations in the IWA toward faint source detection limits roughly one order of magnitude worse than our baseline assumption.
\subsection{Multi-filter observations}
\subsubsection{Spectroscopic characterization}
For a more comprehensive characterization of their properties (e.g., atmospheric composition) the exoplanets should be analyzed spectroscopically, and hence they must be detectable over a broad wavelength range in a reasonable amount of time. To estimate the percentage of exoplanets for which a spectroscopic characterization would be possible, Figure~\ref{fig04} shows the expected number of exoplanets that can be detected in only one, two, or in all three filters. Out of the expected $\sim315$ individual exoplanets, which can be detected in at least one filter, $\sim144$ exoplanets can be detected in all three filters.
Therefore, as a significant percentage of exoplanets are detected between $5.6$ and $15~\text{\textmu m}$ within a reasonable amount of observing time, there would be sufficient time left for detailed follow-up observations of the most interesting targets. The MIR wavelength regime contains numerous absorption bands of key molecules for atmospheric characterization \citep[e.g., $\text{O}_3$, at $9.6~\text{\textmu m}$; $\text{CO}_2$, at $15~\text{\textmu m}$; $\text{H}_2\text{O}$, at $7$ and $20.5~\text{\textmu m}$; and $\text{CH}_4$, at $3.3$ and $8~\text{\textmu m}$; cf.][]{desmarais2002} and bands from additional biosignatures \citep[such as $\text{CH}_3\text{Cl}$, $\text{DMS}$, $\text{N}_2$, and $\text{NH}_3$; cf.][]{seager2013}. A detailed analysis concerning the required S/N, spectral resolution, and wavelength range to identify certain molecules and robustly characterize atmospheric properties as a function of planet parameters is, however, beyond the scope of the present paper and left for future investigations.
\subsubsection{Planet yield per star}
\begin{table}[h!]
\caption[]{\label{yield} Name, distance $d$, spectral type, completeness $C$, and expected number of detectable exoplanets (planet yield) for the best targets.}
\centering
\begin{tabular}{lllll}
\hline\hline
\noalign{\smallskip}
Name & $d$ [pc] & Spec. type & $C$ & Planet yield \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
Gl 887 & 3.28 & M2V & 0.85 & 2.7 \\
Gl 15 A & 3.59 & M1.0V & 0.85 & 2.6 \\
Gl 411 & 2.55 & M1.5V & 0.82 & 2.5 \\
Gl 1 & 4.34 & M2V & 0.79 & 2.5 \\
Gl 725 A & 3.57 & M3.0V & 0.76 & 2.4 \\
Gl 338 A & 6.14 & M0.0V & 0.76 & 2.3 \\
Gl 832 & 4.95 & M2/3V & 0.76 & 2.3 \\
Gl 784 & 6.20 & M0V & 0.74 & 2.3 \\
Gl 526 & 5.39 & M3V & 0.72 & 2.3 \\
Gl 628 & 4.29 & M3V & 0.73 & 2.3 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
$\alpha$ Cen A & 1.32 & G2V & 1.00 & 1.4 \\
$\alpha$ Cen B & 1.32 & K1V & 1.00 & 1.4 \\
Sirius A & 2.64 & A1V & 1.00 & 0.8 \\
Procyon A & 3.51 & F5IV-V & 1.00 & 0.8 \\
Altair & 5.13 & A7Vn & 0.99 & 0.9 \\
Fomalhaut & 7.70 & A4V & 0.99 & 0.8 \\
$\delta$ Pav & 6.11 & G8IV & 0.99 & 1.4 \\
$\eta$ Cas A & 5.95 & F9V & 0.99 & 0.8 \\
Vega & 7.68 & A0Va & 0.99 & 0.8 \\
$\zeta$ Tuc & 8.59 & F9.5V & 0.98 & 0.8 \\
\noalign{\smallskip}
\hline
\end{tabular}
\tablefoot{The top half of the table shows the best 10 targets according to the left panel of Figure~\ref{fig03} (highest planet yield), and the bottom half shows the best 10 targets according to the right panel of Figure~\ref{fig03} (highest completeness). Spectral types taken from the SIMBAD Astronomical Database (\url{http://simbad.u-strasbg.fr/simbad/}).
}
\end{table}
From the point of view of the observer, it is not only interesting to know how many exoplanets one can expect to detect, but also around which targets one could find these exoplanets. For this purpose we summarize the expected number of detectable exoplanets for each individual host star from our star catalog in Figure~\ref{fig03} (left panel). We sort the stars by planet yield to show that most of the stars have expectation values of $\sim1$ detectable exoplanet. For roughly $30\%$ of the host stars from our star catalog our simulations predict an expected planet yield of $\geq 1$, ultimately ranging up to $\sim2.7$.
The top half of Table~\ref{yield} reveals that the 10 stars with the highest expected exoplanet yield are all nearby M-type stars. However, the exoplanet yield alone does not quantify the detection completeness, i.e., what percentage of the generated planets in each individual exoplanetary system is detectable. To decouple the underlying planet population from our analysis, we calculated the completeness for each host star as the fraction of exoplanets that can be directly detected relative to all exoplanets that are simulated around this star. Figure~\ref{fig03} (right panel) shows the expected number of detectable exoplanets for each target (as in the left panel), but this time the stars are sorted by their completeness rather than their expected exoplanet yield. The bottom half of Table~\ref{yield} reveals that the completeness is close to $100\%$ for nearby solar-type and intermediate-mass stars. This trend of higher completeness toward earlier spectral types can be explained by the comparatively higher temperatures of their planets.
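The per-star completeness reduces to a simple ratio (the numbers below are a hypothetical illustration, not values taken from the simulations):

```python
def completeness(n_detectable, n_generated):
    """Fraction of all planets simulated around one star that pass the
    detection criteria in at least one filter."""
    return n_detectable / n_generated

# E.g., 5400 detectable out of 6350 generated planets over the 2000
# simulated systems of one star would give C ~ 0.85, as for the best
# M dwarfs in Table 4.
c = completeness(5400, 6350)
```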
\subsection{Distribution of spectral types}
\begin{table}[h!]
\caption{\label{types}Total expected exoplanet yield, expected exoplanet yield per star, and average completeness $C$, sorted by host star spectral type.}
\centering
\begin{tabular}{llll}
\hline\hline
\noalign{\smallskip}
\multirow{2}{*}{Spec. type} & \multicolumn{2}{l}{Planet yield} & \multirow{2}{*}{$C$} \\
& total & per star & \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
A & 6.4 & 0.80 & 0.96 \\
F & 40.8 & 0.76 & 0.92 \\
G & 66.4 & 0.92 & 0.67 \\
K & 55.0 & 0.78 & 0.56 \\
M & 146.8 & 1.21 & 0.39 \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{table}
If one aims to compare the properties of exoplanets as a function of their host star spectral type, for example from a planet formation perspective, Table~\ref{types} shows that planets (with radii $< 6~R_\text{Earth}$) around all types of host stars are detected in our baseline scenario. Roughly one-half of the detectable planets orbit M stars and the other half orbit FGK stars; A stars, of which we have only eight in our sample, contribute a negligible number.
\section{Discussion}
As motivated in the Introduction the goal of this paper is to quantify the expected exoplanet yield of a space-based nulling interferometer to provide a framework for deriving robust science requirements for a possible future mission. Therefore, we discuss in the following how the expected exoplanet yield depends on our astrophysical assumptions such as planetary albedos, orbital eccentricity, and (exo-)zodiacal light. We further investigate the planet occurrence rate around binary stars, assess the impact of the statistical errors in the underlying planet population, review our observing strategy in the context of stellar leakage, and discuss the stellar properties of our M dwarf host stars. We end our discussion section by specifying the expected exo-Earth yield of our nulling interferometer in comparison to that of a large space-based optical/NIR telescope such as proposed in the NASA \emph{HabEx} and \emph{LUVOIR} missions.
\subsection{Choice of albedos}
\label{albedos}
The expected number of detectable exoplanets is strongly dependent on the choice of the planetary albedos, particularly on the choice of the Bond albedos. \citet{crossfield2013} and \citet{quanz2015} assume a uniform distribution of the Bond (and the geometric) albedos between $0$ and $0.4$. However, considering the Bond albedos of the solar system planets, we distributed these albedos uniformly between $0$ and $0.8$ for our simulated planets. This increase in the Bond albedo range reduces the scientific yield since higher Bond albedos translate into lower equilibrium temperatures and therefore less thermal emission.
A reasonable choice of the geometric albedos is more difficult since they are wavelength dependent, but hardly any information is available for the wavelength range we are interested in. We decided to distribute the geometric albedos uniformly between $0$ and $0.1$ because many gases that dominate the atmospheres of the solar system planets absorb strongly in the MIR. However, we find that in our simulations the contribution of the reflected host star light is negligible for the detection of the vast majority of exoplanets. Switching off the reflected host star light contribution reduces the expected number of detectable exoplanets in the $5.6~\text{\textmu m}$ filter, where one would expect the strongest effect, by no more than $\sim1\%$. On the other hand, even if we distributed the geometric albedos uniformly between $0$ and $1$ instead of $0$ and $0.1$, the expected exoplanet yield would increase by no more than $\sim1\%$. More specifically, the expected number of detectable exoplanets would increase by $\sim4.4$ planets with equilibrium temperatures $T_\text{eq,~p} \leq 300~\text{K}$. Therefore, the choice of the geometric albedos has only a minor impact on our results.
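The statement that reflected host star light is negligible in the MIR can be checked with a simple order-of-magnitude estimate. The following sketch (not the paper's pipeline; the planet parameters are illustrative assumptions for an Earth-like planet around a Sun-like star) compares the thermal and reflected flux at $5.6~\text{\textmu m}$; the common factor $\pi R_\text{p}^2/d^2$ cancels in the ratio:

```python
import math

# Physical constants
H = 6.626e-34   # Planck constant [J s]
C = 2.998e8     # speed of light [m/s]
KB = 1.381e-23  # Boltzmann constant [J/K]

def planck(lam, T):
    """Spectral radiance B_lambda(T) [W m^-3 sr^-1]."""
    x = H * C / (lam * KB * T)
    return 2.0 * H * C**2 / lam**5 / math.expm1(x)

lam = 5.6e-6           # wavelength [m]
T_planet = 255.0       # assumed planetary equilibrium temperature [K]
T_star = 5778.0        # Sun-like host star [K]
R_star = 6.96e8        # stellar radius [m]
r_orbit = 1.496e11     # orbital separation, 1 au [m]
A_g = 0.1              # geometric albedo (the paper's upper bound)
phase = 1.0 / math.pi  # Lambertian phase function at quadrature

# F_thermal / F_reflected = B(T_p) / (A_g * phase * B(T_*) * (R_star/r)^2)
ratio = planck(lam, T_planet) / (
    A_g * phase * planck(lam, T_star) * (R_star / r_orbit) ** 2)
print(f"thermal / reflected at 5.6 um: {ratio:.1f}")
```

Even at $5.6~\text{\textmu m}$, where the reflected contribution is strongest, the thermal emission dominates by over an order of magnitude for these assumed parameters, consistent with the $\lesssim1\%$ yield effect quoted above.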
\subsection{Choice of eccentricity}
\label{eccentricity}
For our baseline scenario we assumed circular orbits (eccentricity $e = 0$; see Table~\ref{baseline}); however, our Monte Carlo simulations can also handle eccentric orbits. On the one hand, planets on eccentric orbits spend more time around their apoapsis, where the separation between exoplanet and host star is largest. Therefore, one would expect more detections because (statistically) more exoplanets can be spatially resolved from their host stars. On the other hand, an exoplanet that is located farther away from its host star is cooler and emits less thermal blackbody radiation and less reflected light. Therefore, one would expect fewer detections because (statistically) fewer exoplanets pass the detection threshold of the instrument. Here we assumed that the timescales of the thermodynamic processes in the atmospheres of the exoplanets are short compared to their orbital periods so that the equilibrium temperatures of the exoplanets are always determined by the instantaneous physical separations between exoplanets and host stars $r_\text{p}$. Comparing the expected number of detectable exoplanets for circular orbits ($e = 0$) and highly eccentric orbits ($e$ distributed uniformly in $[0,~1)$), we conclude that the second effect has a larger impact: only $\sim293$ detections are expected in the case of highly eccentric orbits compared to $\sim315$ in the non-eccentric case. However, as the difference is small (roughly $-7\%$) we consider our results robust with respect to variations in the orbital eccentricity. If we instead assumed a delay in the cooling of the exoplanets while they move from smaller to larger physical separations from their host stars, the exoplanets would still have a higher temperature and emit more thermal radiation when they reach their apoapsis. This would lead to an increase in the expected exoplanet yield, but only of the same order of magnitude as the decrease assuming instantaneous cooling and heating.
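The competition between the two effects can be illustrated with a minimal sketch of the sampling step described above (assumed values, not the paper's code): drawing the mean anomaly uniformly in time, solving Kepler's equation, and scaling $T_\text{eq} \propto r_\text{p}^{-1/2}$ shows that a planet on an eccentric orbit statistically favors apoapsis, where it is cooler.

```python
import math
import random

def solve_kepler(M, e, tol=1e-10):
    """Solve Kepler's equation M = E - e*sin(E) for E by Newton iteration."""
    E = M
    for _ in range(100):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

def sample_separation(a, e, rng):
    """Instantaneous star-planet separation for a time-uniform orbital phase."""
    M = rng.uniform(0.0, 2.0 * math.pi)  # uniform in time
    E = solve_kepler(M, e)
    return a * (1.0 - e * math.cos(E))

rng = random.Random(42)
a, e = 1.0, 0.6          # semi-major axis [au], eccentricity (illustrative)
T_circ = 255.0           # assumed T_eq at r_p = a [K]
seps = [sample_separation(a, e, rng) for _ in range(20000)]
mean_sep = sum(seps) / len(seps)
# Time-averaged separation of a Kepler orbit is a*(1 + e^2/2) > a, i.e.,
# the planet spends most of its time beyond a, where it is cooler.
temps = [T_circ * (a / r) ** 0.5 for r in seps]
mean_T = sum(temps) / len(temps)
print(f"<r_p> = {mean_sep:.3f} au (analytic: {a * (1 + e**2 / 2):.3f}), "
      f"<T_eq> = {mean_T:.1f} K")
```

The time-averaged equilibrium temperature falls below the circular-orbit value, which is the cooling effect that reduces the eccentric-orbit yield in our simulations.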
\subsection{(Exo-)zodiacal Light}
Planetary systems can be filled with dust particles, which not only reflect stellar light but also emit thermal radiation. In our solar system this emission is called zodiacal light; around other stars it is referred to as exozodiacal light. This emission, both from within our own solar system and from within other stellar systems, gives rise to a significant radiation background in the thermal infrared. In fact, several projects are ongoing to characterize the exozodiacal light in nearby stellar systems in preparation for future planet searches \citep[e.g.,][]{danchi2016, defrere2016}.
In our simulations the impact of the zodiacal light on our faint source detection limits is fully taken into account because the scattered and emissive components of the interplanetary dust are included in the MIRI sensitivity model of \citet{glasse2015}. These authors predicted that zodiacal light is the dominant background source shortward of $\sim17~\text{\textmu m}$ and that the reflected and thermal emission of the instrument becomes dominant only longward of $\sim17~\text{\textmu m}$. The zodiacal light contribution depends on the orbit of the observatory, but we assume that our MIR interferometer is operated at the Earth-Sun L2 point similar to the \emph{JWST}.
The contamination from the exozodiacal light is not taken into account in our simulations. If these emissions were uniform and without significant structure, for example, from a uniform face-on dust disk, then they could be canceled out by the nulling interferometer. According to \citet{cockell2009}, by chopping the outputs of two Bracewell interferometers via shifting their phase by $\pm\pi/2$, centro-symmetric sources can be suppressed significantly. This procedure not only eliminates exozodiacal light but also stellar leakage from the host star. Yet, even if the emission itself can be canceled out, there will still be an increase in background noise and hence a loss in sensitivity. For an exozodiacal dust cloud, which is less than five times as dense as our solar system zodiacal cloud, \citet{dubovitsky2004} find that its photon noise is negligible compared to that originating from stellar leakage (see Section~\ref{observing_strategy}) and zodiacal light. However, at higher dust densities, a significant portion of the photon noise can originate from the exozodiacal dust cloud so that our simulations overestimate the number of detectable exoplanets in this case.
While including estimations for exozodiacal light in our simulations is certainly one of the next steps, we point out that, as discussed in Section~\ref{observing_strategy}, at the moment we assume a constant integration time of $35'000~\text{s}$ per star in our baseline scenario and we could easily increase (and optimize) the observing time for individual targets.
\subsection{Planet occurrence around binaries}
Our simulations predict the number of detectable exoplanets within a distance of $20~\text{pc}$ only excluding close binaries. However, many processes, such as disk truncation \citep{jang-condell2015}, enhanced accretion \citep{jensen2007}, and enhanced photoevaporation \citep{alexander2012}, can influence the formation and evolution of planets in binary star systems. \citet{kraus2016} have shown that the occurrence of planets around close-in binaries with separations below $a_\text{cut} = 47_{-23}^{+59}~\text{au}$ differs from that around single stars or wider separated binaries by a suppression factor of $S_\text{cut} = 0.34_{-0.15}^{+0.14}$. They have used high-resolution imaging follow-up observations of a subsample of $382$ Kepler Objects of Interest (KOIs) to distinguish single star hosts from multiple star hosts. Since the \emph{Kepler} target stars were chosen almost blindly with respect to stellar multiplicity \citep{kraus2016} their findings translate into a higher planet occurrence for a stellar sample that systematically excludes close-in binaries.
Assuming a Gaussian distribution of binary companions in $\log(P)$ space according to \citet{raghavan2010}, we find that $f_{< 47~\text{au}} = 49_{-7}^{+9}\%$ of all binary stars have separations below $a_\text{cut}$. Moreover, we find a scaling factor of $r_\text{single}/r = 1.14_{-0.07}^{+0.07}$ for the planet occurrence around single stars or wider separated binaries via
\begin{equation}
S_\text{cut} \cdot f_\text{bin} \cdot f_{< 47~\text{au}} \cdot r_\text{single} + (1-f_\text{bin} \cdot f_{< 47~\text{au}}) \cdot r_\text{single} = r,
\end{equation}
where $f_\text{bin}$ is the percentage of binary stars among \emph{Kepler} targets and $r$ is the occurrence rate of planets predicted by the \emph{Kepler} data. We took $f_\text{bin}$ from Table~16 of \citet{raghavan2010}, who investigated a sample of $454$ solar-type stars for stellar companions ($f_\text{bin} = N_\text{bin}/(N_\text{single}+N_\text{bin})$). Their findings are valid for stellar masses of roughly $0.5~M_\odot \leq M_* \leq 1.5~M_\odot$, which means that for the low-mass stars from our star catalog a similar scaling factor would have to be calculated on the basis of a different distribution of binary companions \citep[e.g.,][]{janson2012}.
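The equation above can be solved directly for the scaling factor $r_\text{single}/r$. A minimal sketch (the central values $S_\text{cut} = 0.34$ and $f_{<47~\text{au}} = 0.49$ are from the text, while the $f_\text{bin}$ used here is a placeholder; the paper takes it from Table~16 of \citet{raghavan2010}):

```python
def yield_scaling(s_cut, f_bin, f_close):
    """r_single / r from
    S_cut*f_bin*f_close*r_single + (1 - f_bin*f_close)*r_single = r."""
    return 1.0 / (1.0 - f_bin * f_close * (1.0 - s_cut))

# Sanity checks: no suppression (S_cut = 1) or no close binaries (f_bin = 0)
# leaves the single-star occurrence rate equal to the Kepler rate.
assert yield_scaling(1.0, 0.4, 0.49) == 1.0
assert yield_scaling(0.34, 0.0, 0.49) == 1.0

factor = yield_scaling(0.34, 0.4, 0.49)  # placeholder f_bin = 0.4
print(f"r_single / r = {factor:.3f}")
```

With an $f_\text{bin}$ of order $40\%$ the factor comes out close to the $1.14$ quoted above.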
However, we decided not to scale our planet occurrence rates with this factor of $r_\text{single}/r = 1.14_{-0.07}^{+0.07}$ and instead regard the total expected exoplanet yield as conservative with respect to stellar multiplicity. Nevertheless, we point out that some of the best targets (e.g., $\alpha$ Cen A/B, Sirius A, Procyon A) are in binary systems with separations below $a_\text{cut}$, to which the suppressed planet occurrence rate would apply.
\subsection{Underlying planet population}
The underlying planet population has a high impact on the properties of the detectable exoplanets. The peak observed at radii of $1.25~R_\text{Earth} \leq R_\text{p} \leq 4~R_\text{Earth}$ and equilibrium temperatures of $200~\text{K} \leq T_\text{eq,~p} \leq 500~\text{K}$ is a direct fingerprint of the planet radius and orbital period distribution used to generate the exoplanetary systems. The spatial resolution and sensitivity of our space-based MIR interferometer allow for the detection of a broad range of planet types so that the underlying planet population itself is, to some extent, a limiting factor as also shown by the completeness analysis above. Another important point is our patchwork approach of merging planet occurrence statistics from different authors. Each statistic rests on different assumptions and models to estimate the completeness of the \emph{Kepler} pipeline and infer the true population of exoplanets from the highly biased sample of transiting exoplanets. We implicitly assumed a spectral type dependence of the planet occurrence by applying the statistics from \citet{dressing2015} to M-type stars only and those from \citet{burke2015} to G- and K-type stars only. This choice is reflected in our results (Table~\ref{yield} and Table~\ref{types}) as the targets with the highest planet yield are M-type stars.
\begin{figure}[h!]
\centering
\includegraphics[width=\hsize]{figures/COR_Darwin_max_hist2d_rv.pdf}
\includegraphics[width=\hsize]{figures/COR_Darwin_baseline_hist2d_rv.pdf}
\includegraphics[width=\hsize]{figures/COR_Darwin_min_hist2d_rv.pdf}
\caption{Expected number of detectable exoplanets combined for all three simulated filters for our best-case scenario (top; $r+\Delta r$), for our baseline scenario (middle), and for our worst-case scenario (bottom; $r-\Delta r$), binned in the radius-equilibrium temperature plane (like in Figure~\ref{fig01}).}
\label{fig05}
\end{figure}
Moreover, the underlying planet population is affected by statistical errors $\Delta r$. \citet{dressing2015} and \citet{fressin2013} presented these errors as upper ($r+\Delta r$) and lower ($r-\Delta r$) bounds for the planet occurrence rate $r$ within each individual planet radius-orbital period bin. \citet{burke2015} presented an optimistic and a pessimistic efficiency scenario for their two-power-law model. We performed an error analysis by comparing the expected exoplanet yield of our baseline scenario (occurrence rates $r$) with that of best-case and worst-case scenarios with planet occurrence rates $r+\Delta r$ and $r-\Delta r$, respectively (Figure~\ref{fig05}). We find an expected exoplanet yield of $\sim428$ in the best-case and $\sim238$ in the worst-case scenario, revealing that the Poisson errors $315.4_{-0.4}^{+0.4}$ from our Monte Carlo simulation (cf. Equation~\ref{eq06}) are negligible compared to the statistical errors in the underlying planet population. Hence, we finally state the expected number of detectable exoplanets for our simulated mission as $315_{-77}^{+113}$.
\subsection{Observing strategy and integration time per target}
\label{observing_strategy}
The mission schedule for a space-based interferometer would very likely consist of two phases: (1) a detection phase to identify as many exoplanets as possible and (2) a characterization phase to investigate the atmospheric properties of a promising subset of those planets in detail. In our simulations we are primarily concerned with the first phase, which we assume to be background-limited. This means that an exoplanet is considered to be detectable as soon as its observed flux exceeds the faint source detection limit of the instrument and its apparent angular separation from the host star exceeds the IWA and is smaller than the OWA of the instrument. In our calculations, we explicitly take into account neither the planet-host star flux contrast nor additional photon noise from exozodiacal light or stellar leakage. The detection limits from \citet{glasse2015} are given for a $10\sigma$ detection significance in $10'000~\text{s}$ observing time. Taking into account the roughly $3.5$ times worse throughput of a nulling interferometer \citep{dubovitsky2004} if compared to MIRI\footnote{\url{https://jwst.etc.stsci.edu/}} and overheads of $\sim40\%$\footnote{\citet{cockell2009} assume that $70\%$ of the mission lifetime is spent taking data, which results in overheads of $0.3/0.7 \approx 40\%$ if compared to the pure observing time.}, this would translate into a total of $\sim189~\text{d}$ or $\sim0.52$ years for the full target sample if all three bands could be observed simultaneously.
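The quoted detection-phase duration follows from a short back-of-the-envelope calculation (a sketch based on the numbers in the text; $326$ targets is the size of our input star catalog):

```python
# Reproduce the ~189 d detection-phase duration quoted in the text.
t_ref = 10_000            # s, reference integration of the MIRI detection limits
throughput_penalty = 3.5  # nulling interferometer vs. MIRI
n_stars = 326             # size of the input star catalog
observing_efficiency = 0.7  # 70% of mission time on source -> ~40% overheads

t_per_star = t_ref * throughput_penalty  # 35'000 s per star
total_days = n_stars * t_per_star / 86_400 / observing_efficiency
print(f"{t_per_star:.0f} s per star, {total_days:.0f} d detection phase")
```

This recovers both the $35'000~\text{s}$ per-star integration time of our baseline scenario and the $\sim189~\text{d}$ total.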
However, according to \citet{dubovitsky2004}, photon noise from stellar leakage can be a serious issue for stars closer than $\sim15~\text{pc}$. Their Figure~8 shows that the integration times necessary to detect an Earth twin around some of the nearby stars can increase by a factor of up to $\sim10$ owing to stellar leakage. Based on this it appears more realistic that the detection phase of our simulated mission will last somewhere between two and three years, which is slightly more than the two years proposed by \citet{cockell2009}. While beyond the scope of this paper, this clearly shows that detailed investigations regarding an optimized observing strategy, including quantitative estimates for stellar leakage, are warranted.
Regarding the observing strategy, another aspect is that for our Monte Carlo simulations we assumed that all exoplanets are observed at a randomly chosen instant of time by placing the exoplanets at a random position on a randomly oriented orbit. Similar to \citet{quanz2015} we could assume that radial velocity surveys would discover nearby exoplanets in advance and put constraints on their orbits. These constraints could then be used to directly image the exoplanets at quadrature, when the apparent angular separation between exoplanet and host star is largest. If we observed all exoplanets at quadrature, the expected exoplanet yield would increase by $\sim8\%$ from $\sim315$ to $\sim341$ in our baseline scenario. Figure~\ref{fig06} reveals that a significant percentage of exoplanets could be detected not only by direct imaging with our space-based MIR interferometer, but also by radial velocity if future spectrographs reach a precision of $\lessapprox 1~\text{m}~\text{s}^{-1}$.
\begin{figure}
\centering
\includegraphics[width=\hsize]{figures/COR_Darwin_quad_rv_hist.pdf}
\caption{Expected number of exoplanets binned by radial velocity semi-amplitude $K_*$ induced on their host star for all rocky exoplanets ($R_\text{p} \leq 2~R_\text{Earth}$) assuming a planet density of $\rho_\text{p} = 5'000~\text{kg}~\text{m}^{-3}$ (blue histogram). The percentage of exoplanets that is detectable with our space-based MIR interferometer at quadrature (in at least one of the three simulated filters) in addition to radial velocity is shown by the green histogram. The red vertical line indicates our baseline radial velocity detection threshold of $K_* \geq 10~\text{cm}~\text{s}^{-1}$.}
\label{fig06}
\end{figure}
Finally, in our baseline scenario we chose a very straightforward observing strategy by looking at each star for the same amount of time. However, by implementing more sophisticated algorithms to optimize the order of targets and observation time per target \citep[e.g., the altruistic yield optimization;][]{stark2014} the expected exoplanet yield could be further increased. Also, one could allow for and optimize revisits \citep[e.g.][]{stark2015} because in some cases it may be more useful to image promising targets multiple times to reach a higher completeness instead of searching for planets around stars from which one would only expect a poor planet yield (cf. Figure~\ref{fig03}).
\subsection{Stellar sample and stellar properties}
Recently, \citet{dressing2017} presented stellar parameters for low-mass dwarfs identified as candidate planet hosts during the \emph{Kepler K2} mission. We compare their derived parameters for stars of spectral types M1V -- M4V to those from our stellar sample to verify the properties we used for the M dwarfs in our simulations. We find that the mean value of our stellar effective temperatures $T_{\text{eff},~*}$ deviates by between $-0.30\%$ (M2V stars) and $-7.65\%$ (M1V stars). Moreover, our stellar radii $R_*$ are significantly underestimated for M1V (mean value $-18.97\%$) and M4V ($-23.01\%$) stars but slightly overestimated for M2V ($+12.07\%$) and M3V ($+4.00\%$) stars. Consequently, on average, we underestimated the luminosity of the M-type stars from our stellar sample, which translates into underestimated planet equilibrium temperatures and therefore underestimated thermal blackbody emission from the exoplanets. Hence, our simulations set a lower limit for the expected number of detectable exoplanets around M dwarfs.
\subsection{Exo-Earths and spectroscopic characterization}
\label{exo_earths}
In our baseline scenario we would detect $\sim9$ Earth twins, which are here defined as planets with $0.5~R_\text{Earth} \leq R_\text{p} \leq 1.25~R_\text{Earth}$ and $200~\text{K} \leq T_\text{eq,~p} \leq 300~\text{K}$. This translates into a $99.99\%$, $95.14\%$, and $43.89\%$ chance of finding at least $1$, $5$, and $10$ Earth twin(s), respectively, around stars from our stellar sample. Our planet population, however, contains neither Earth-like planets orbiting G-type stars with orbital periods $> 300~\text{d}$ nor planets with orbital periods $> 418~\text{d}$ orbiting A- and F-type stars (see Section~\ref{planet_population}). Hence, the number of detectable Earth twins must be seen as a lower limit. Furthermore, as indicated in Section~\ref{observing_strategy}, a higher number can be achieved with longer observation times per target and by further optimizing the observing strategy. For instance, if we had 10 times longer on-source time per target in the detection phase compared to our baseline scenario, the total number of detectable exoplanets would increase from $\sim315$ to $\sim486$ (i.e., an increase by roughly $50\%$). The additional planets that could be discovered would primarily be cool ($T_\text{eq,~p} \leq 400~\text{K}$) and therefore faint. More importantly, the expected number of Earth twins detectable in at least one of the three filters simulated in this work would increase from $\sim9$ in our baseline scenario to $\sim48$. This again emphasizes the need for an optimized observing strategy that takes into account flexible integration times per target star, but also a more realistic noise model.
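The quoted detection chances follow from Poisson statistics. With a mean Earth-twin yield of $\lambda = 9.2$ (consistent with the "$\sim9$" above), the probabilities of at least $1$, $5$, and $10$ detections reproduce the quoted values:

```python
import math

def p_at_least(n, lam):
    """P(X >= n) for X ~ Poisson(lam), via the complementary CDF."""
    cdf = sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(n))
    return 1.0 - cdf

lam = 9.2  # mean number of detectable Earth twins
for n in (1, 5, 10):
    print(f"P(>= {n:2d} Earth twins) = {100 * p_at_least(n, lam):.2f}%")
```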
However, if looking for habitable (or even inhabited) planets is one of the major science goals, then one should keep in mind that the parameter space of interesting planets might actually be larger. Life may exist even under significantly higher temperatures of up to $\sim450~\text{K}$ and one might even find biosignatures in the MIR in $\text{H}_2$-dominated atmospheres of super-Earths with radii of up to $1.75~R_\text{Earth}$ \citep[e.g.,][]{seager2013}. Hence, if we add up all detectable exoplanets with equilibrium temperatures between $200$ and $450~\text{K}$\footnote{We acknowledge that the surface temperatures of planets can be significantly higher than their equilibrium temperatures and hence planets at the high temperature end of this range may be too hot for life to exist.} and radii between $0.5$ and $1.75~R_\text{Earth}$ even in our baseline scenario, we end up with $\sim85$ prime targets for in-depth characterization.
As discussed in Section~\ref{observing_strategy}, in our baseline scenario we spend at most three years in the detection phase so that there would be at least two years left for spectroscopic observations in the characterization phase assuming a nominal mission duration of five years. This would translate into an average time of $6~\text{d}$, accounting for $\sim40\%$ overheads again, available for follow-up observations of each of the $85$ prime targets. This seems to be sufficient for a robust characterization of a few dozen prime targets, but a more detailed analysis concerning the required S/N, spectral resolution, and wavelength range is also necessary\footnote{\citet{cockell2009} suggest a wavelength range from $6$ -- $20~\text{\textmu m}$ and a spectral resolution of at least $25$ (possibly $300$).}.
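The $\sim6~\text{d}$ per prime target can be reproduced from the numbers above (a sketch: two-year characterization phase, $70\%$ on-source efficiency, $85$ prime targets):

```python
# Average follow-up time per prime target in the characterization phase.
phase_days = 2 * 365.25     # two-year characterization phase [d]
observing_efficiency = 0.7  # 70% on-source time -> ~40% overheads
n_targets = 85              # prime targets from the baseline scenario

days_per_target = phase_days * observing_efficiency / n_targets
print(f"{days_per_target:.1f} d per target")
```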
\subsection{Mid-infrared interferometer versus optical/near-infrared telescope}
While it seems clear that the atmospheric characterization of a large sample of (small) nearby exoplanets is one of the long-term goals of exoplanet science, a space-based MIR interferometer is just one possible mission concept. However, even the next generation of ground-based ELTs will only enable the detection of a dozen small to mid-sized exoplanets around the most nearby stars \citep[for a quantitative analysis see, e.g.,][]{crossfield2013, quanz2015}. One interesting and competitive alternative would be a large-aperture, space-based optical/NIR telescope optimized for high-contrast exoplanet imaging observations in reflected host star light. Concepts for such a mission are currently being studied by NASA (e.g., \emph{HabEx} \citep{mennesson2016} and \emph{LUVOIR} \citep{peterson2017}).
\begin{table}[h!]
\caption{\label{habex_baseline} Instrumental parameters for our baseline scenario for \emph{HabEx}/\emph{LUVOIR}.}
\centering
\begin{tabular}{lll}
\hline\hline
\noalign{\smallskip}
Parameter & Value & Description \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
$D$ & $12~\text{m}$ & Aperture size \\
IWA & $2~\lambda_\text{eff}/D$ & Inner working angle \\
$C_\text{ref}$ & $1\mathrm{e}{-10}$ & Achievable contrast performance \\
$\lambda_\text{cen,~V}$ & $554~\text{nm}$ & Central wavelength of V-band filter \\
$\lambda_\text{cen,~J}$ & $1245~\text{nm}$ & Central wavelength of J-band filter \\
$\lambda_\text{cen,~H}$ & $1625~\text{nm}$ & Central wavelength of H-band filter \\
$F_\text{lim,~V}$ & $3.31\mathrm{e}{-10}~\text{Jy}$ & Sensitivity limit (V-band)\tablefootmark{a} \\
$F_\text{lim,~J}$ & $9.12\mathrm{e}{-10}~\text{Jy}$ & Sensitivity limit (J-band)\tablefootmark{a} \\
$F_\text{lim,~H}$ & $8.32\mathrm{e}{-10}~\text{Jy}$ & Sensitivity limit (H-band)\tablefootmark{a} \\
\noalign{\smallskip}
\hline
\end{tabular}
\tablefoot{\\
\tablefoottext{a}{from \url{http://jt-astro.science:5101/hdi_etc} (as of 17 July 2017) assuming an exposure time of $9.7~\text{h}$, a S/N of $\sim10$ and a filter zero point in the AB system of $3631~\text{Jy}$}
}
\end{table}
\begin{figure}[h!]
\centering
\includegraphics[width=\hsize]{figures/Vband_hist2d_habex.pdf}
\includegraphics[width=\hsize]{figures/Jband_hist2d_habex.pdf}
\includegraphics[width=\hsize]{figures/Hband_hist2d_habex.pdf}
\caption{Expected number of detectable exoplanets binned in the radius-equilibrium temperature plane (like in Figure~\ref{fig01}) for a large aperture, space-based optical/NIR telescope assuming the instrument parameters presented in Table~\ref{habex_baseline} observing in the V band (top), J band (middle) and H band (bottom). The geometric albedos of our planet population are now distributed uniformly between $0$ and $0.6$ and the x-axis is shifted by $100~\text{K}$ toward cooler equilibrium temperatures.}
\label{fig07}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=\hsize]{figures/Vband_performance_habex.pdf}
\caption{Expected number of detectable exoplanets (with radii $< 6~R_\text{Earth}$) as a function of the achievable contrast performance $C_\text{ref}$ in the V band. The five curves represent different aperture diameters $D$ from $20~\text{m}$ to $4.0~\text{m}$. We adapt the sensitivity limit $F_\text{lim,~V}$ according to the aperture size $D$. The red vertical line indicates our baseline contrast performance of $C_\text{ref} = 1\mathrm{e}{-10}$.}
\label{fig08}
\end{figure}
In order to compare, to first order, the expected exoplanet yield of both approaches, we also quantified the scientific yield of \emph{HabEx}/\emph{LUVOIR} using our Monte Carlo simulations with the same underlying planet population. The instrument parameters that we assumed for our baseline scenario are presented in Table~\ref{habex_baseline}. However, since we are now interested in host star light reflected by the exoplanets, the choice of the geometric albedos has a large impact on our results. Considering the optical/NIR albedos of the solar system planets \citep[cf., e.g.,][]{seager2010}, we assigned new geometric albedos, distributed uniformly between $0$ and $0.6$, to our exoplanet population. We did not assume any OWA, but we adopted a sensitivity limit corresponding to the $10\sigma$ detection limit in $35'000~\text{s}$ observing time, as for the MIR interferometer (see Table~\ref{habex_baseline}). For a fair comparison, we also ignored the impact of (exo-)zodiacal light. A planet is considered detectable if its apparent angular separation is larger than the IWA of the instrument, its flux contrast relative to the host star is larger than $C_\text{ref}$, and its total flux observed in a particular band exceeds $F_\text{lim}$ (cf. Table~\ref{habex_baseline}).
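The three detection criteria for the reflected-light case can be sketched as a single predicate (the numerical thresholds are the V-band values from Table~\ref{habex_baseline}; the example planet at the end is hypothetical):

```python
import math

def detectable(sep_mas, contrast, flux_jy,
               iwa_mas, c_ref=1e-10, f_lim=3.31e-10):
    """A planet counts as detected if it clears the IWA, the contrast
    threshold C_ref, and the band sensitivity limit F_lim."""
    return sep_mas > iwa_mas and contrast > c_ref and flux_jy > f_lim

# IWA = 2 * lambda_eff / D for the 12 m aperture in the V band (554 nm),
# converted from radians to milliarcseconds.
iwa_mas = math.degrees(2 * 554e-9 / 12.0) * 3.6e6

# Hypothetical planet: 90 mas separation, 3e-10 contrast, 5e-10 Jy flux.
print(f"IWA = {iwa_mas:.1f} mas, "
      f"detected: {detectable(90.0, 3e-10, 5e-10, iwa_mas)}")
```

An inner working angle of $\sim19~\text{mas}$ in the V band illustrates why the yield depends mainly on the IWA (i.e., the aperture) once the contrast floor of $1\mathrm{e}{-10}$ is reached.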
The expected numbers of detectable exoplanets for \emph{HabEx}/\emph{LUVOIR} are shown in Figure~\ref{fig07}. In our baseline scenario $\sim207$ planets could be detected in the V band, but only $\sim70$ and $\sim38$ in the J band and H band, respectively. This has a significant impact on the potential to characterize the detectable exoplanets over a broader wavelength range. However, while the overall number of detectable exoplanets is smaller, more planets are detected at lower equilibrium temperatures compared to our space-based MIR interferometer. In particular, $\sim22$ Earth twins can be detected in the V band, compared to $\sim9$ detectable Earth twins in the baseline scenario of our MIR interferometer. However, looking again at the expected number of detectable exoplanets with equilibrium temperatures between $200$ and $450~\text{K}$ and radii between $0.5$ and $1.75~R_\text{Earth}$, \emph{HabEx}/\emph{LUVOIR} ($\sim63$ detections) performs worse than our MIR interferometer ($\sim85$ detections).
One of the most important findings, however, is the impact of the contrast performance. Figure~\ref{fig08} shows the expected exoplanet yield as a function of the achievable contrast performance for different aperture diameters; this figure reveals that the expected number of detectable exoplanets with a $12~\text{m}$ aperture does not increase significantly if instead of $1\mathrm{e}{-10}$ we assume $1\mathrm{e}{-11}$ ($\sim214$ detections in the V band) or even $1\mathrm{e}{-12}$ (also $\sim214$ detections in the V band). Lowering the contrast performance, however, has a large impact. In case of $C_\text{ref} = 1\mathrm{e}{-9}$ or $C_\text{ref} = 1\mathrm{e}{-8}$ the expected number of detectable exoplanets in the V band would decrease to $\sim161$ and $\sim66$, respectively. This reveals that a contrast performance of $1\mathrm{e}{-10}$ is close to the optimal regime and further gains in planet yield could only be achieved by decreasing the IWA, i.e., by increasing the aperture. Enhancing the sensitivity does not have the same effect as for the MIR interferometer: even in the case of no sensitivity limit at all, the expected number of detectable exoplanets in the V band is only $\sim234$.
Looking at the split of detectable exoplanets by host star spectral type shows that \emph{HabEx}/\emph{LUVOIR} has a similar preference for planets around M-type stars, which also make up $\sim50\%$ of all detectable exoplanets.
\section{Summary and conclusions}
We carry out Monte Carlo simulations predicting, for the first time, the expected number of directly detectable (small) exoplanets for a space-based MIR nulling interferometer. On the basis of planet occurrence statistics from the NASA \emph{Kepler} mission, an input catalog of $326$ host stars within $20~\text{pc}$, and the technical specifications from the original mission proposal of the \emph{Darwin} mission \citep{cockell2009}, we find a total expected exoplanet yield of $315_{-77}^{+113}$ (only counting planets with radii $0.5~R_\text{Earth} \leq R_\text{p} \leq 6~R_\text{Earth}$), where the uncertainties are dominated by statistical errors in the underlying planet population. Our baseline scenario assumes an initial detection phase observing in three bands ($5.6$, $10$ and $15~\text{\textmu m}$); roughly $244$ exoplanets are detected in at least two bands. Slightly less than half of all detectable planets orbit low-mass M-type stars, whereas the other half orbit F-, G-, and K-type stars.
By optimizing the observing strategy and assuming a nominal mission duration of five years such a mission should allow for the spectroscopic characterization of a few dozen habitable or even inhabited planets with effective temperatures between $200$ and $450~\text{K}$ and radii between $0.5$ and $1.75~R_\text{Earth}$. More quantitative investigations regarding the required S/N, spectral resolution, and wavelength range for such a characterization phase are foreseen for a future publication.
The huge sample of directly detectable exoplanets would offer unprecedented opportunities for exoplanet science. The atmospheric composition of planets covering a large range of effective temperatures, radii, and host star spectral types would be a unique dataset for planet formation and evolution studies. This is particularly the case when combined with empirical mass estimates or constraints, for example, from radial velocity, which seems feasible for a significant percentage of the detectable exoplanets. For a considerable subset of these planets the MIR spectra could be probed for indications of biosignatures in their atmospheres, addressing the fundamental question of exoplanet habitability. Comparing our MIR nulling interferometer with an optical/NIR telescope further shows that both approaches are extremely promising in themselves and in fact only data from both missions combined could yield planetary radii and albedos for a comprehensive sample of exoplanets.
While finding habitable or even inhabited planets is one of the major objectives of exoplanet research, it seems that only a large/flagship space mission is able to characterize a considerable sample of exoplanets in the most interesting and promising size and temperature range. Even the next generation of $30$ to $40~\text{m}$ ground-based telescopes or \emph{WFIRST-AFTA} will likely only detect a handful, maybe a dozen, of objects \citep[e.g.,][]{crossfield2013, quanz2015}. These projects are a tremendous challenge and an absolutely necessary and ground-breaking step for a variety of astrophysical research topics, but in the long run the exoplanet community will have to push for a large, and maybe even dedicated, space mission to address the (most) relevant questions properly. As demonstrated above, a space-based MIR interferometer mission seems to be, as expected, a very promising concept, and with our Monte Carlo tool at hand we have the possibility to quantify the scientific yield of certain design choices as we continue to tackle the technical challenges. The next key steps in this direction will be, first, implementing a more realistic noise model including the effects of exozodiacal light and stellar leakage and, second, looking in more detail into the null depth, stability, and throughput performance of different array configurations \citep[cf.][]{guyon2013}.
In addition, because we know the targets we want to look at, we should start characterizing these stellar systems with available instruments. This includes (a) robust stellar parameters such as metallicity, rotation period, and rotation axis, (b) the search for (massive) planets and unknown substellar companions via radial velocity and high-contrast imaging, and (c) the derivation of at least constraints on the level of exozodiacal dust. Parts of these investigations are already ongoing or will, to some extent, be addressed in the near future. However, if a space-based direct imaging mission is the goal, better coordinated and systematic efforts are required.
The findings from \emph{Kepler} have taught us that moving forward with \emph{Darwin} would not have been in vain. Now it is time to reinitiate the discussion about the next big steps. The lead times are long and in case of Europe the first three slots for large missions within the ESA Cosmic Vision program are already taken by other research fields\footnote{\url{http://sci.esa.int/cosmic-vision/42369-l-class-timeline/}}. However, this is also an opportunity and a coordinated attempt from within the exoplanet community speaking with one voice would be a viable way forward.
\begin{acknowledgements}
This work has been carried out within the frame of the National Center for Competence in Research PlanetS supported by the Swiss National Science Foundation. SPQ acknowledges the financial support of the SNSF. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France, and of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. The authors gladly thank Ian Crossfield for sharing the stellar catalog, Denis Defr\`ere and Olivier Absil for helpful comments, and the referee for a timely and constructive report that helped to improve the original manuscript.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
Prime numbers are generators of the multiplicative semigroup
${\mathbb N}^{*}$ (where ${\mathbb
N}^{*}=\left\{1,2,3,...\right\}$). It is well known, that it is
impossible to distinguish two different prime numbers using only
the "language of multiplication". If one wants to distinguish some
particular prime number from the others, one must consider an
additional structure in ${\mathbb N}^{*}$, like for example the
natural order in ${\mathbb N}$. The prime counting function is an
example of such order properties. In this paper we define a
property of prime numbers with respect to their position on the
graph of the prime counting function $x\longrightarrow \pi(x)$.
Some properties related to the graph of the function $\pi$ were
studied se\-veral years ago in 1979 by Carl Pomerance \cite{Pomm}
and recently (2006) by H.~L. Montgomery and S.~Wagon \cite{MonWa} in
considerations concerning the Prime Number Theorem (PNT for
short).
Let $\mathbb P$ denote the sequence of prime numbers, i.e.
$\mathbb P = \left\{2,3,5,7,11,...\right\}$. Usually one defines
the function $\pi:[2,\infty)\longrightarrow [1,\infty)$ by the
formula
\be\label{wzor na pi} \pi(x)= \sum_{p\in \mathbb P, p\leq x}
1
\ee
For our purposes it will be a little more convenient to
consider a function $\pi^{*}:[2,\infty)\longrightarrow [1,\infty)$
defined as follows. First we define a continuous function $\eta:
[1,\infty)\longrightarrow [2,\infty)$ setting: $\eta(n)= p_n$,
where $p_n$ is the $n-th$ prime number, and $\eta$ is affine (and
continuous) in the intervals $[n,n+1]$ for each $n\in \mathbb N$.
Obviously $\eta$ is strictly increasing, continuous and
surjective. Thus $\eta$ is invertible and we define $\pi^{*}$ as
the inverse of $\eta$. Let $[x]$ denote the integral part of the
real number $x$. One can easily check, that $\pi$ and $\pi^{*}$
have the same values at prime numbers, and that
\be\label{wzor na pi*} \pi(x)= [\pi^{*}(x)]
\ee
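The identity (\ref{wzor na pi*}) is easy to confirm numerically. The following sketch (in Python; the helper names are ours, not from the text) tabulates the primes, evaluates $\pi^{*}$ by linear interpolation between consecutive primes, and checks $\pi(x)=[\pi^{*}(x)]$ on a range of integers.

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

PRIMES = primes_up_to(10_000)

def prime_pi(x):
    """pi(x): the number of primes p <= x."""
    return sum(1 for p in PRIMES if p <= x)

def pi_star(x):
    """Inverse of eta, where eta(n) = p_n and eta is affine on [n, n+1]."""
    for k in range(len(PRIMES) - 1):
        if PRIMES[k] <= x < PRIMES[k + 1]:
            # pi*(p_{k+1}) = k+1 (1-based indexing), affine in between
            return (k + 1) + (x - PRIMES[k]) / (PRIMES[k + 1] - PRIMES[k])
    raise ValueError("x outside the tabulated range")

# pi(x) = [pi*(x)] for every integer x in [2, p_max)
assert all(prime_pi(x) == math.floor(pi_star(x)) for x in range(2, 9_000))
```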
\vspace{5mm}
\section{Part I}
\subsection{Definition of extremal primes.}
\vspace{3mm}
The function $\pi^{*}$ is increasing, continuous, but it is
"visibly" not concave. However there are many concave functions
$\varphi: [2,\infty)\longrightarrow [1,\infty)$, such that for
each $x\in [2,\infty)$ we have $\varphi(x)\geq \pi^{*}(x)$. This
follows for example from the Chebyshev theorem, which gives the
inequality
\be\label{Czebyszew}
A\cdot\frac{x}{\ln(x)}<\pi(x)<B\cdot\frac{x}{\ln(x)}
\ee
for some $A<1$ and $B>1$, (obviously $\frac{x}{\ln x }$ is a
concave function).
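For illustration, such Chebyshev-type bounds are easy to observe numerically. In the sketch below (Python; the range $17\leq x\leq 10^4$ and the sample constants $1$ and $1.3$ are our choices for this finite range, not the constants $A$ and $B$ of the theorem) we check the ratio $\pi(x)/(x/\ln x)$:

```python
import math

def prime_pi_table(n):
    """pi(x) for all 0 <= x <= n, via a sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    table, count = [0] * (n + 1), 0
    for x in range(n + 1):
        count += sieve[x]
        table[x] = count
    return table

N = 10_000
PI = prime_pi_table(N)

# On 17 <= x <= N the ratio pi(x) / (x / ln x) stays within (1, 1.3);
# its maximum (about 1.255) is attained at x = 113.
ratios = [PI[x] * math.log(x) / x for x in range(17, N + 1)]
assert 1.0 < min(ratios) and max(ratios) < 1.3
```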
\vspace{2mm}
Let us consider the set
\be
\Omega = \left\{f:[2,\infty)\longrightarrow [1,\infty): f\geq
\pi^{*}, f - {\rm concave}\right\}
\ee and let us observe, although this will play no role in our
consideration, that $\Omega$ is a subset of the vector cone of all
positive and concave real functions on $[2,\infty)$.
We put for $x\in [2,\infty)$
\be
\epsilon(x)= \inf\left\{f(x): f\in \Omega\right\}
\ee i.e. the function $\epsilon$ is the lower envelope of the
family $\Omega$. In other words the function $\epsilon$ is the
smallest concave function, which is greater than $\pi^{*}$
(equivalently than $\pi$). Since $\pi^{*}$ is piecewise affine,
then $\epsilon$ is also the lower envelope of those functions from
$\Omega$, which are piecewise affine. Then it is clear, that the
function $\epsilon$ is concave and it is also piecewise affine.
Thus the set
\be
\Gamma=\left\{(x,y)\in \mathbb R^{2}:x\in[2,\infty), 0\leq
y\leq \epsilon(x)\right\}
\ee is a convex set. Let us recall, that if $U$ is a convex set
and $b\in U$, then $b$ is said to be an {\it extremal point } of
$U$ iff $b$ is not an interior point of any non-trivial segment
lying in $U$.
Now we are ready to formulate the following:
\vspace{3mm}
\begin{definition}\label{definicja liczb ekstremalnych}
{\it The prime number $p\in \mathbb P$ is said to be an extremal
prime number when the point $(p,\pi(p))$ is an extremal point of the
convex set $\Gamma$}.
\end{definition}
\subsection{Properties of the set of extremal primes}
Let $\mathbb E$ denote the set of all extremal primes. Sometimes
we will think rather about the sequence of extremal primes
$\mathbb E = \left\{e_1,e_2,...,\right\},$ where $e_1<e_2<e_3...$,
i.e. the sequence $(e_k)_1^{\infty}$ is strictly increasing.
\vspace{3mm}
Now we will present some easy properties of the set $\mathbb E$.
\vspace{3mm}
\begin{proposition}
{\it The set $\mathbb E$ is not empty.}
\end{proposition}
\vspace{3mm}
Indeed, it is easy to check, that $2\in \mathbb E$. \vspace{2mm}
\begin{proposition}
{\it The set $\mathbb P\setminus \mathbb E$ is not empty.}
\end{proposition}
\vspace{3mm}
One can check, that $3\in \mathbb E$, $7\in \mathbb E$, but
$5\notin \mathbb E$.
\vspace{3mm}
\begin{proposition}\label{rekurencja}
{\it The set $\mathbb E$ is infinite}.
\end{proposition}
\vspace{3mm}
\begin{proof}
Let $l_k$ denote the straight line (the affine function)
passing through the points $(e_{k-1},\pi(e_{k-1}))$ and
$(e_k,\pi(e_k))$. It follows from Definition 1 that the graph of
the function $\epsilon$ lies below the line $l_k$. This gives a
simple inductive method of finding the next extremal prime
$e_{k+1}$ providing, that we know $e_1, e_2,..., e_{k-1}, e_k$ (in
fact it is sufficient to know only $e_{k-1}$ and $e_k$). We can do
it as follows. We consider the difference quotients of the form
\be\label{iloraz1}
I_k(p)= \frac{\pi(p) - \pi(e_k)}{p-e_k}
\ee
for $p\in \mathbb P, p>e_k.$ It follows from the remark made
above, that for each $p>e_k$ we have: \be\label{iloraz2}
0<I_k(p)<\frac{\pi(e_k)-\pi(e_{k-1})}{e_k-e_{k-1}}=I_{k-1}(e_k)
\ee
Using the commonly known fact
\be\label{Legendre}
\lim_{p\rightarrow\infty}\frac{\pi(p)}{p}=0
\ee we have
$\lim_{p\rightarrow\infty} I_k(p)=0$.
Then there exists a finite set $\mathbb P_k\subset \mathbb P$ of
primes, all greater than $e_k$, on which the quotient $I_k$ attains
its maximum, i.e. such that $I_k(p)\leq I_k(p_o)$ for every
$p_o\in \mathbb P_k$ and every prime $p>e_k$. We then set
$e_{k+1}=\max {\mathbb {P}_k}$. This implies that the set
$\mathbb E$ is infinite.
\end{proof}
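The inductive procedure just described is equivalent to computing the vertices of the upper convex hull of the points $(p,\pi(p))$: a prime lying in the interior of a segment of the graph of $\epsilon$ (such as $5$, which lies on the segment joining $(3,2)$ and $(7,4)$) is not extremal and is discarded. A possible sketch in Python (the helper names are ours; popping on collinear turns removes exactly those non-extremal boundary points):

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def extremal_primes(limit):
    """Vertices of the upper convex hull of {(p, pi(p)) : p <= limit}.

    Vertices close to `limit` may be spurious (the true hull could
    bypass them using primes beyond the limit), so the tail of the
    returned list should be discarded by the caller.
    """
    pts = [(p, k + 1) for k, p in enumerate(primes_up_to(limit))]
    hull = []
    for q in pts:
        # pop while the turn is counter-clockwise or straight, so that
        # collinear (non-extremal) points are removed as well
        while len(hull) >= 2:
            (ox, oy), (ax, ay) = hull[-2], hull[-1]
            if (ax - ox) * (q[1] - oy) - (ay - oy) * (q[0] - ox) >= 0:
                hull.pop()
            else:
                break
        hull.append(q)
    return [p for p, _ in hull]

# First extremal primes (cf. the table in the text); the last true
# extremal prime below 1000 is 887, so the first entries are reliable.
print(extremal_primes(1000)[:10])  # -> [2, 3, 7, 19, 47, 73, 113, 199, 283, 467]
```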
\vspace{3mm}
\begin{proposition}\label{pochodna pi-e}
{\it The derivative $x\longrightarrow {\epsilon}'(x)$ is
strictly decreasing and tends to 0 at infinity.}
\end{proposition}
\vspace{3mm}
\begin{proof}
Let
\be\label{definicja delta k}
\delta_k=\frac{\pi(e_{k+1})-\pi(e_k)}{e_{k+1}-e_k}
\ee i.e. $\delta_k$ is the slope of the $k$-th segment lying on the
graph of the function $\epsilon$. Since $\epsilon$ is increasing
and concave, then the sequence $(\delta_k)_{1}^{\infty}$ is
positive and strictly decreasing. Let us observe, that the
sequence $(\delta_k)_{1}^{\infty}$ may be identified with the
derivative of the function $\epsilon$. Hence the limit $\delta =
\lim_{k\rightarrow \infty} \delta_k \geq 0$ exists and it must be
$\delta=0$, which follows once more from (\ref{Legendre}).
\end{proof}
The number $\alpha_k = {\delta_k}^{-1}$ is a measure of the
density of prime numbers in the interval $[e_k,e_{k+1})$ and may
be interpreted as an {\it average gap} between primes in
$[e_k,e_{k+1})$. By the remark made above, the sequence
$(\alpha_k)_1^{\infty}$ is strictly increasing.
\vspace{3mm}
It is natural to ask now about the cardinality of the set $\mathbb
P \setminus \mathbb E$. We have
\vspace{5mm}
\begin{proposition}\label{Zhang}
{\it The set $\mathbb P\setminus \mathbb E$ is infinite.}
\end{proposition}
\begin{proof}
This is related to the study of {\it small gaps between
primes}. Let us first observe that the finiteness of $\mathbb P\setminus \mathbb E$
would be impossible if the {\it twin primes} conjecture is true. Moreover, we
now know from the recent result of Zhang \cite{Zhang} that
$\liminf_{n\rightarrow\infty}(p_{n+1}-p_n)<7\cdot 10^7$. It follows from Proposition \ref{pochodna pi-e}
that this is sufficient for
the set $\mathbb P\setminus \mathbb E$ to be infinite.
\end{proof}
It appears that the set $\mathbb E$ is in some sense minimal with respect to
the concavity property described below. Namely, suppose that $\mathbb G = (g_i)_1^{\infty}$ is a subsequence of
the sequence
$\mathbb P$ of prime numbers
such that $g_1=2$. Let \be
\delta_k(\mathbb G)=\frac{\pi(g_{k+1})-\pi(g_k)}{g_{k+1}-g_k}
\ee
We will say, that $\mathbb{G}$ is concave, when
$\delta_k(\mathbb{G})$ is strictly decreasing. For example the
sequence $\mathbb E$ is concave, while the sequence $\mathbb P$ is
not concave. A subsequence of a concave sequence is also concave.
The sequence $\mathbb E$ of extremal primes has the following
property: {\it if $\mathbb E$ is a subsequence of a concave
sequence $\mathbb G$, then $\mathbb E = \mathbb G$.} More exactly:
\vspace{3mm}
\begin{proposition}\label{minimality}
{\it Let us suppose that a sequence $(g_k)_{1}^{\infty}$ is
concave and the sequence $\mathbb E$ is a subsequence of $\mathbb
G$. Then $\mathbb E = \mathbb G$.}
\end{proposition}
\vspace{3mm}
\begin{proof}
Clearly $e_1=g_1=2$. Since there are no primes between 2 and 3
and
$e_2\in \mathbb G$ then also $e_2=g_2 =3$.
Suppose now that $e_i=g_i$ for $1\leq i \leq k$.
We wish to prove, that $e_{k+1}=g_{k+1}$. Assume then, that $e_{k+1}\neq g_{k+1}$
and that $g_{k+m}=e_{k+1}$ i.e. that $$e_k=g_k <g_{k+1}< g_{k+2}<
...< g_{k+m} = e_{k+1}.$$
Now, using the notations from Proposition \ref{rekurencja} and the
definition of $e_{k+1}$ we have for $i<m$: \be
\delta_{k}(\mathbb G)=I_k(g_{k+1})<\delta_k(\mathbb E)
\ee
Let us consider a function $H:[e_k,e_{k+1}]\longrightarrow
{\mathbb R}$
such that $H(g_{k+i})= \pi(g_{k+i})$ and $H$ is affine and continuous in each
interval $[g_{k+i},g_{k+i+1}]$. We see, that the function $H$ is
continuous and differentiable except in the points $x=g_{k+i}$
and its derivative in the
intervals $ (g_{k+i},g_{k+i+1})$ is constant and equal
$\delta_{k+i}(\mathbb G)$. It follows from our assumptions (since
$\mathbb G$ is concave), that
\be \sup\left\{H^{'}(x): x\in [e_k,e_{k+1}]\right\}=
\delta_{k}({\mathbb G}) < \delta_k(\mathbb E). \ee
Let us observe, that since the function $H$
is continuous and differentiable except for a finite set of
arguments, we can apply the mean value theorem. Hence we have:
\begin{eqnarray*}
\pi(e_{k+1})-\pi(e_k)= \pi(g_{k+m})-\pi(g_k)&\leq&
\sup\left\{H^{'}(x): x\in [e_k,e_{k+1}]\right\}\cdot
(g_{k+m}-g_{k})\\&\leq &\delta_k(\mathbb G)\cdot(e_{k+1}-e_k)<
\delta_k(\mathbb E)\cdot(e_{k+1}-e_k) = \pi(e_{k+1})-\pi(e_k),
\end{eqnarray*}
but this is impossible and this ends the
proof of Proposition \ref{minimality}.
\end{proof}
\subsection{Some numerical data and the questions they evoke}
The observations about the extremal primes made above are rather
trivial. We will prove later some deeper, however conditional,
results. We have calculated the first 2200 extremal primes and
after studying these numerical data, we can formulate a number of
more or less interesting questions. It is impossible to give here
the complete list of the first 2200 extremal primes, but we will
present some selected data:
\vspace{3mm}
The first twenty eight terms of the sequence $\mathbb E$ are:
\begin{center}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
$n$ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 \\
\hline
$e_n$ & 2 & 3 & 7 & 19 & 47 & 73 & 113 & 199 & 283 & 467 & 661 & 887 & 1129 & 1327 \\
\hline
\end{tabular}
\end{center}
\begin{center}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
$n$ & 15 & 16 & 17 & 18 & 19 & 20 & 21 & 22 & 23 & 24 & 25 & 26 & 27 & 28 \\
\hline
$e_n$ & 1627 & 2803 & 3947 & 4297 & 5881 & 6379 & 7043 & 9949 & 10343 & 13187 & 15823 & 18461 & 24137 & 33647 \\
\hline
\end{tabular}
\end{center}
\vspace{3mm}
The list of $e_k$ for $k\leq 2200$ and $k\equiv 0 \pmod{100}$:
\begin{center}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|c|r|}
\hline
$e_{100}$& 5253173 \\
\hline
$e_{200}$ & 67596937 \\
\hline
$e_{300}$ & 314451367\\
\hline
$e_{400}$ & 883127303\\
\hline
$e_{500}$ & 2122481761\\
\hline
$e_{600}$ & 4205505103\\
\hline
$e_{700}$ & 7274424463\\
\hline
$e_{800}$ & 12251434927\\
\hline
$e_{900}$ & 19505255383\\
\hline
$e_{1000}$ & 28636137347\\
\hline
$e_{1100}$ & 40001601779\\
\hline
$e_{1200}$ & 55036621907\\
\hline
$e_{1300}$ & 73753659461\\
\hline
$e_{1400}$ & 97381385771\\
\hline
$e_{1500}$ & 125232859691\\
\hline
$e_{1600}$ & 157169830847\\
\hline
$e_{1700}$ & 196062395777\\
\hline
$e_{1800}$ & 241861008029\\
\hline
$e_{1900}$ & 296478801431\\
\hline
$e_{2000}$ & 365234091199\\
\hline
$e_{2100}$ & 435006680401\\
\hline
$e_{2200}$ & 524320812671\\
\hline
\end{tabular}
\end{center}
The examination of the sequence of the first 2200 extremal primes
allows us to formulate a number of questions. First of all it
seems to be interesting to say something about the "density" of
the sequence $\mathbb E$. Our "experimental" data support some
conjectures. Namely
\begin{conjecture}
{\it The series
$$\sum_{k=1}^{\infty}\frac{1}{e_k}$$ is convergent.}
\end{conjecture}
It follows from our data that
$$\sum_{k=1}^{2000}\frac{1}{e_k}\approx 1.090\ldots$$
\begin{conjecture}
{\it The series
$$\sum_{k=1}^{\infty}\frac{1}{\ln e_k}$$
is divergent.}
\end{conjecture}
Our data gives: $$\sum_{k=1}^{2000}\frac{1}{\ln e_k} > 100.$$
Since the set $\mathbb E$ of extremal prime numbers is infinite
and, clearly, the problem of finding any reasonable explicit
formula describing the correspondence $\mathbb N \ni
n\longrightarrow e_n$ is rather hopeless, we may define and try
to study a function, which may be called {\it extremal primes
counting function} $\pi_{\epsilon}$. The formula for
$\pi_{\epsilon}$ is analogous to the Formula (\ref{wzor na pi}).
We set
\be\label{wzor na pi e}
\pi_{\epsilon}(x)= \sum_{p\in \mathbb E, p\leq x} 1
\ee
Unfortunately we know only $2200$ values of
$\pi_{\epsilon}(x)$ for $x\leq 5\cdot10^{11}$. However it seems to
be possible to formulate some conjectures about $\pi_\epsilon$.
Clearly $\pi_{\epsilon}(x)\leq \pi(x)$ and the growth of $\pi_\epsilon$ is
much slower than the growth of $\pi$.
For example $\pi_\epsilon(x_o)=1700$, when $x_o = 196 062 395 777$ and for the same $x_o$ we have
$\pi(x_o)= 7 855 721 212$. In particular we may try to find the
best $\alpha < 1$ such that $\pi_{\epsilon}(x)=o(x^{\alpha})$
observing the ratio $\frac{\ln n}{\ln e_n}$ when $n$ tends to
infinity (in our case only for $n\leq 2200$, i.e. $e_n\leq 5\cdot
10^{11}$). Maybe only accidentally, but the best $\alpha$ obtained
from our data is close to $\frac{\gamma}{2}$, where $\gamma$ is the Euler
constant. Hence we formulate:
\begin{conjecture}
{\it The infimum
$$\inf\left\{\alpha>0: \pi_{\epsilon}(x)=o(x^{\alpha})\right\}$$ exists and is
positive.}
\end{conjecture}
Our numerical data support strongly also the following interesting
conjecture:
\begin{conjecture}\label{conjectura 11}
{\it In the notations as above, we have:
$$\lim_{k\rightarrow \infty}\frac{e_{k+1}}{e_k} = 1.$$}
\end{conjecture}
We will prove below, in Part II, that the Riemann Hypothesis
implies the Conjecture \ref{conjectura 11}. This conjecture is
interesting itself, but also because of the following:
\begin{proposition}\label{PNT}
{\it If $$\lim_{k\rightarrow \infty}\frac{e_{k+1}}{e_k} =
1$$then$$\lim_{n\rightarrow \infty}\frac{p_{n+1}}{p_n} = 1.$$}
\end{proposition}
\begin{proof}
For each $n\in \mathbb N$ there exists $k(n)\in \mathbb N$ such
that
$$e_{k(n)}\leq p_n < p_{n+1}\leq e_{k(n)+1}.$$ Thus
$$\frac{p_{n+1}}{p_n}\leq \frac{e_{k(n)+1}}{e_{k(n)}}$$and the last sequence tends by our assumption to 1.
Let us recall here, that $\lim_{n\rightarrow\infty}
\frac{p_{n+1}}{p_n} =1 $ implies PNT.
\end{proof}
It follows directly from the definitions of the functions $\pi$
and $\pi_{\epsilon}$ that $\pi(e_{k+1})-\pi(e_k)\geq 1$ and the
equality may occur. Except for the trivial $e_1=2$ and $e_2=3$, we have
found two such "twin extremal primes", for $k=116$ and $k=976$.
Namely: $e_{116}=8 787 901$, $e_{117}= 8 787 917$ and
$\pi(e_{116})=589 274$; $e_{976}=26 554 262 369$, $e_{977}= 26 554
262 393$ and $\pi(e_{976})= 1 156 822 345$. We ask:
\begin{question}\label{pytanie}
{\it Do there exist infinitely many $k\in \mathbb N$ such that
$\pi(e_{k+1})-\pi(e_k) = 1$?}
\end{question}
Some additional remarks about the "small" gaps between extremal
primes are in Part III.
Another exception is related to the inequality $I_k(p)\leq
I_k(p_o)$
described in Proposition \ref{rekurencja}. One may ask whether
the number of primes $p>e_k$ such that $I_k(p)=\delta_k$ can be
greater than 1. In our numerical data we have only two such
examples, namely for $k=2$ we have $I_2(5)=I_2(7)$, and also
$I_4(23)=I_4(31)=I_4(43)=I_4(47) = \frac{1}{4}=\delta_4$; but in
fact our programme searching for the "next extremal primes" was not
written to "catch" such exceptions.
\section{Part II}
\subsection{Definition of lenses}
With the notation as in Part I, the intervals $[e_k,e_{k+1})$ (in
$\mathbb N$) will be called {\it lenses}. More exactly:
\begin{definition}
{\it Given a positive integer $k\in \mathbb N$, the lens
$S_k$ is the set $$S_k= \left\{n\in \mathbb N: e_k\leq n
<e_{k+1}\right\}.$$ The difference $e_{k+1}- e_k$ will be called {\it the length} of the lens
$S_k$ and will be denoted by $|S_k|$.}
\end{definition}
Sometimes we will use the name "lens" for a part of graph of
$\pi^{*}$ for $x\in [e_k,e_{k+1})$. Our aim is to study the order
of magnitude of $|S_k|$ when $k\rightarrow \infty$. Since we
will apply the language of differential calculus, it will be more
comfortable to work with the function $[2,\infty)\ni
x\rightarrow S(x)\in [1,\infty)$ where
$$x\in [e_k,e_{k+1})\Longrightarrow S(x)=|S_k|.$$
The typical lenses and the graph of $\epsilon(x)$ for $x\leq 113$
are illustrated on the pictures 1-3 at the end of this paper.
\subsection{The integral logarithm and error term}
We shall consider the following well-known functions
$L:[2,\infty)\longrightarrow [0,\infty)$ and
$\varepsilon:[2,\infty)\longrightarrow [0,\infty)$, defined by the
following formulas: \be\label{definicja Li}
L(x)=\int_{2}^{x}\frac{1}{\ln t}\,dt
\ee
and
\be\label{definicja error}
\varepsilon(x)=\sqrt{x}\cdot\ln
x
\ee
The first is called {\it integral logarithm} (we will write
also $L(x)=Li(x)$), and the se\-cond is called {\it error term}.
Together with $L$ and $\varepsilon$ we will consider the functions
\be
\varphi(x)=L(x)-\varepsilon(x)
\ee
and for $x\in
(2,\infty)$ and $h\in \mathbb R$ \be l(x,h)= \varphi'(x)\cdot h +
\varphi(x)\ee
Clearly all these functions are analytic at least in $(2,\infty)$.
We will use the derivatives of the considered functions up to
order four and we shall write $y$ instead of $\ln x$ to present
some formulas in a more compact form. Hence we have:
\be\label{lp}
L^{(1)}(x)= \frac{1}{\ln x}= \frac{1}{y}
\end{equation}
\begin{equation}\label{ld}
L^{(2)}(x)= \frac{-1}{x\cdot \ln^2 x}= \frac{-1}{x\cdot
y^2},
\end{equation}
\begin{equation}\label{lt}
L^{(3)}(x)= \frac{\ln x +2}{x^2\cdot {\ln^3 x}}
= \frac{y+2}{x^2\cdot y^3},
\end{equation}
\begin{equation}\label{lc}
L^{(4)}(x)=\frac{-(2\cdot {\ln^2 x}+6 \ln x +6)}{x^3\cdot \ln^4 x}
= \frac{-(2\cdot y^2 + 6y +6)}{x^3\cdot y^4}
\end{equation}
The derivatives of error term function, written in an analogous manner, run as follows:
\be\label{ez}
\varepsilon(x)= \sqrt{x}\cdot \ln x = \sqrt{x}\cdot y
\ee
\be\label{ep}
\varepsilon^{(1)}(x)= \frac{\ln x
+2}{2\sqrt{x}}=\frac{y+2}{2\sqrt{x}}
\ee
\be\label{ed}
\varepsilon^{(2)}(x)= \frac{-\ln
x}{4x\sqrt{x}}=\frac{-y}{4x\sqrt{x}}
\ee
\be
\varepsilon^{(3)}=\frac{3\ln x - 2}{8x^2\sqrt{x}}=
\frac{3y-2}{8x^2\sqrt{x}}
\ee
\be
\varepsilon^{(4)}(x)=\frac{-15\ln x
+16}{16x^3\sqrt{x}}=\frac{-15y+16}{16x^3\sqrt{x}}
\ee
Let us observe, that the second derivatives of the functions $L$
and $\varepsilon$ are negative, so both these functions are
concave.
The second derivative of the function $\varphi$ has the form
$$\varphi^{(2)}(x)=\frac{-4\sqrt{x}+\ln^3 x}{4x\sqrt{x}\ln^2
x}=\frac{-4\sqrt{x}+y^3}{4x\sqrt{x}y^2}$$
then taking into account that
$$\lim_{x\rightarrow \infty}(-4\sqrt{x}+\ln^3 x)= -\infty$$
we can state :
\vspace{3mm}
\begin{proposition}\label{wypuklosc}
{\it There exists $x_o\in(2,\infty)$ such, that the function
$\varphi$ is concave in the interval $[x_o,\infty)$.}
\end{proposition}
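The derivative formulas (\ref{lp})--(\ref{lc}), those for $\varepsilon$, and the expression for $\varphi^{(2)}$ above can all be verified symbolically. A small check (a sketch, assuming the sympy library is available):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.log(x)

# L'(x) = 1/ln x by the fundamental theorem of calculus;
# eps(x) = sqrt(x) * ln x is the error term.
L1 = 1 / y
eps = sp.sqrt(x) * y

# Higher derivatives as claimed in the text
L2 = -1 / (x * y**2)
L3 = (y + 2) / (x**2 * y**3)
L4 = -(2*y**2 + 6*y + 6) / (x**3 * y**4)
e1 = (y + 2) / (2 * sp.sqrt(x))
e2 = -y / (4 * x * sp.sqrt(x))
e3 = (3*y - 2) / (8 * x**2 * sp.sqrt(x))
e4 = (-15*y + 16) / (16 * x**3 * sp.sqrt(x))

# Each claimed derivative is the derivative of the previous one
for f, g in [(L1, L2), (L2, L3), (L3, L4),
             (eps, e1), (e1, e2), (e2, e3), (e3, e4)]:
    assert sp.simplify(sp.diff(f, x) - g) == 0

# Second derivative of phi = L - eps, as used for the concavity claim
phi2 = (-4*sp.sqrt(x) + y**3) / (4 * x * sp.sqrt(x) * y**2)
assert sp.simplify((L2 - e2) - phi2) == 0
```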
\subsection{A remark on Taylor polynomials of considered functions}
Let us fix a point $x\in (2,\infty)$. Let $T^{(3)}_{x,L}$
denote the Taylor polynomial of order three of the function
$L$ with the center at $x$. Hence
\be
T^{(3)}_{x,L}(h)= L(x) + L^{(1)}(x)\cdot h+\frac{1}{2}\cdot
L^{(2)}(x)\cdot h^2 + \frac{1}{6}\cdot
L^{(3)}(x)\cdot h^3
\ee
The remainder $R^{(3)}_x(h)=L(x+h)-T^{(3)}_{x,L}(h)$, written in
the Lagrange form, is given by the formula:
\be R^{(3)}_x(h)= \frac{1}{24}L^{(4)}(\xi)\cdot h^4,\ee
where
$\xi$ is a point lying between $x$ and $x+h$. Since $L^{(4)}<0$ in all its
domain, we have the inequality:
\vspace{3mm}
\begin{proposition}
{\it For each $x\in (2,\infty)$ and for each $h\in (2-x,\infty)$
the following inequality is true:
$$L(x+h)\leq T^{(3)}_{x,L}(h).$$}
\end{proposition}
Let $ T^{(3)}_{x,\varepsilon}$ denote the Taylor polynomial of
order three of the function $\varepsilon$ with the center at $x$,
i.e. \be T^{(3)}_{x,\varepsilon}(h)= \varepsilon(x) +
\varepsilon^{(1)}(x)\cdot h+\frac{1}{2}\cdot \varepsilon^{(2)}(x)\cdot
h^2 + \frac{1}{6}\cdot \varepsilon^{(3)}(x)\cdot h^3.\ee
Using an analogous argumentation as in the case of the function
$L$ we have:
\vspace{3mm}
\begin{proposition}\label{nierownosc z Taylorem}
{\it For each $x\in (2,\infty)$ and for each $h\in
(2-x,\infty)$ the following inequality is true:
$$\varepsilon(x+h)\leq T^{(3)}_{x,\varepsilon}(h),$$}
\end{proposition}
and in consequence we have the inequality (true for all $h\in
(2-x,\infty)$):
\be\label{rownanie glowne}
L(x+h)+\varepsilon(x+h)\leq T^{(3)}_{x,L}(h)+T^{(3)}_{x,\varepsilon}(h)
\ee
\vspace{3mm}
\subsection{Definition of two functions}
In this section we shall define two functions
$h_{+}:(x_o,\infty)\ni x \rightarrow h_{+}(x)\in \mathbb R$ and $
h_{-}:(x_o,\infty)\ni x\rightarrow h_{-}(x) \in \mathbb R$, where
$x_o$ is the point defined in Proposition \ref{wypuklosc}. First
we will describe in detail the definition of the function
$h_{+}$. The definition of $h_{-}$ will be similar.
Let us fix a point $x\in (x_o,\infty)$ and consider the
tangent line $l(x,h)$ to the graph of the function $\varphi$ at
the point $(x,\varphi(x))$. Its equation for $h\in \mathbb R$ is
given by:
\be\label{wzor 34}
l(x,h)=\varphi'(x)\cdot h + \varphi(x)=
L'(x)h-\varepsilon'(x)h+L(x)-\varepsilon(x)
\ee
The "tangent half-lines" obtained, when we restrict ourselves
in the Formula (\ref{wzor 34}) to $h\in [0,\infty)$ or $h\in
(-\infty,0]$ will be denoted by $l_{+}(x,h)$ or $l_{-}(x,h)$
respectively.
For
$h=0$ we have the inequality:
$$l(x,0)=\varphi(x)=L(x)-\varepsilon(x)<L(x)+\varepsilon(x).$$
This means that the half-line $l_{+}$ "starts" from the interior
point $(x,\varphi(x))$ of the subgraph of the function
$L+\varepsilon$, which is a convex set. Since
$$\frac{d}{dh}L(x+h)=\frac{1}{\ln(x+h)}$$and
$$\frac{d}{dh}\varepsilon(x+h)=\frac{\ln(x+h)+2}{2\sqrt{x+h}}$$
then $$\lim_{h\rightarrow
\infty}\frac{d}{dh}(L(x+h)+\varepsilon(x+h)) = 0.$$
On the other hand $$\frac{d}{dh}l(x,h) = \varphi'(x)
>0,$$hence the half-line $l_{+}(x,h)$ must intersect the graph of the strictly
concave function $L(x+h)+\varepsilon(x+h)$ in exactly one point.
Hence we have proved the following:
\vspace{3mm}
\begin{proposition}
{ \it For each $x\in (x_o,\infty)$ there exists exactly one
positive number $h_{+}(x)$ such that
$$L(x+h_{+}(x))+\varepsilon(x+h_{+}(x))=\varphi'(x)\cdot h_{+}(x) +
\varphi(x).$$}
\end{proposition}
In other words for each $x\in (x_o,\infty)$ the equation (with
unknown $h$):
\be\label{rownanie33} L(x+h)+\varepsilon(x+h)=\varphi'(x)\cdot
h+\varphi(x)
\ee
has exactly one positive solution, which we will denote by
$h_{+}(x)$.
\vspace{3mm}
If one replaces the half-line $l_{+}(x,h)$, by the half line
$l_{-}(x,h)$, then applying the same arguments as above, we
obtain:
\begin{proposition}
{ \it For each $x\in (x_o,\infty)$ there exists exactly one
negative number $h_{-}(x)$ such that
$$L(x+h_{-}(x))+\varepsilon(x+h_{-}(x))=\varphi'(x)\cdot h_{-}(x) +
\varphi(x).$$}
\end{proposition}
In other words equation (\ref{rownanie33}) has exactly one
negative solution, which we will denote by $h_{-}(x)$.
\subsection{An auxiliary equation}
\vspace{3mm}
In this paper we would like to establish the order of magnitude of the functions
$x\rightarrow h_{+}(x)$ and $x\rightarrow h_{-}(x)$ (in fact of the difference
$h_{+}(x)-h_{-}(x)$),
when $x$ tends to $+\infty$. Since the equation (\ref{rownanie33}) is rather
hard to solve, we will consider
an auxiliary equation:
\be\label{rownanie pomocnicze1}
T^{(3)}_{x,L}(h) +
T^{(3)}_{x,\varepsilon}(h)=\varphi'(x)\cdot h
+\varphi(x)
\ee
which can be written in the form:
\be\label{rownanie pomocnicze2}
W_{x}(h):=\frac{1}{6}(L^{(3)}(x)+\varepsilon^{(3)}(x))\cdot h^3
+ \frac{1}{2}(L^{(2)}(x)+\varepsilon^{(2)}(x)) \cdot
h^2+2\varepsilon^{(1)}(x)\cdot h +2\varepsilon(x)=0
\ee
As we see, equation (\ref{rownanie pomocnicze2}) is an algebraic equation of
degree three. It has at least one real root. We will see that it
can have (and has) more than one real root. We will be interested
not only in the existence of roots of equation (\ref{rownanie
pomocnicze2}), but also in their signs.
Let us observe that, since $W_x(0)= 2\varepsilon(x)>0$, the number $h=0$
cannot be a root of the considered equation. Let us also observe that,
in fact, equation (\ref{rownanie pomocnicze2}) is not a single
algebraic equation, but a one-parameter family of algebraic
equations, where the parameter is $x\in (x_o,\infty)$.
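The reduction of (\ref{rownanie pomocnicze1}) to (\ref{rownanie pomocnicze2}) is pure bookkeeping: the zeroth- and first-order terms of $L$ cancel against the tangent line. With abstract symbols standing for the derivatives at $x$, this can be confirmed mechanically (a sketch, assuming sympy):

```python
import sympy as sp

h, L0, L1, L2, L3, e0, e1, e2, e3 = sp.symbols('h L0 L1 L2 L3 e0 e1 e2 e3')

# Taylor polynomials of order three of L and eps, centred at x,
# with abstract values L0..L3 and e0..e3 for the derivatives at x
T_L = L0 + L1*h + sp.Rational(1, 2)*L2*h**2 + sp.Rational(1, 6)*L3*h**3
T_e = e0 + e1*h + sp.Rational(1, 2)*e2*h**2 + sp.Rational(1, 6)*e3*h**3

# The tangent line of phi = L - eps at x
line = (L1 - e1)*h + (L0 - e0)

# W_x(h) as defined in the text
W = (sp.Rational(1, 6)*(L3 + e3)*h**3
     + sp.Rational(1, 2)*(L2 + e2)*h**2
     + 2*e1*h + 2*e0)

# The auxiliary equation T_L + T_e = line is exactly W = 0
assert sp.expand(T_L + T_e - line - W) == 0
```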
\vspace{3mm}
We will prove the following :
\vspace{3mm}
\begin{lemma}\label{Lemma1}
{\it i). There exists $x_{+}\in (x_o,\infty)$, such that for each
$x>x_{+}$ the equation $W_{x}(h)=0$ has a positive root.}
{\it ii). There exists $x_{-}\in (x_o,\infty)$, such that for
each $x>x_{-}$ the equation $W_{x}(h)=0$ has a negative root.}
\end{lemma}
The proof of the lemma will be given together with the proof of
Proposition \ref{glownapropozycja}. Assume now that Lemma
\ref{Lemma1} is true. This allows us to define two new functions
$h^{*}_{+}$ and $h^{*}_{-}$. We will describe in detail the
definition of $h^{*}_{+}$. We set
\vspace{3mm}
\begin{definition}
{ \it Let $x\in (x_{+},\infty)$. Then the set of positive roots of
equation (\ref{rownanie pomocnicze2}) is not empty and we
set:$$h^{*}_{+}(x)= \min\left\{h>0: W_x(h)=0\right\}.$$}
\end{definition}
The relation between the functions $h_{+}$ and $h^{*}_{+}$ is the
following:
\vspace{3mm}
\begin{proposition}\label{propozycja22}
{\it If Lemma \ref{Lemma1} is
true, then for $x\in (x_{+},\infty)$ we have the inequality:
$h_{+}(x)<h^{*}_{+}(x)$.}
\end{proposition}
\vspace{3mm}
\begin{proof}
Let us fix $x\in (x_{+},\infty)$. In the interval
$[x,x+h_{+}(x)]$, i.e. for $h\in [0,h_{+}(x)]$ the line $l(x,h)$
lies below the graph of the function $L+\varepsilon$. This
follows directly from the definition of the function $h_{+}(x)$.
Hence in this interval the line $l(x,h)$ cannot intersect the
graph of the function $T^{(3)}_{x,\varepsilon} + T^{(3)}_{x,L}$
because of inequality (\ref{rownanie glowne}). Hence the equation
$W_x(h)=0$ has no roots in the interval $h\in [0,h_{+}(x)]$. But
this means that
$h_{+}(x)<h^{*}_{+}(x)$, which ends the proof of Proposition
\ref{propozycja22}.
\end{proof}
\vspace{3mm}
Assume once more, that Lemma \ref{Lemma1} is true. We have
\vspace{3mm}
\begin{definition}
{ \it Let $x\in (x_{-},\infty)$. Then the set of negative roots of
equation (\ref{rownanie pomocnicze2}) is not empty and we set:$$h^{*}_{-}(x)=
\max\left\{h<0: W_x(h)=0\right\}.$$}
\end{definition}
\vspace{3mm}
The relation between the functions $h_{-}$ and $h^{*}_{-}$ is as
follows:
\vspace{3mm}
\begin{proposition}\label{proposition 24}
{\it If Lemma (\ref{Lemma1}) is
true, then for $x\in (x_{-},\infty)$ we have the inequality:
$h_{-}(x)>h^{*}_{-}(x)$.}
\end{proposition}
\vspace{3mm} The proof of Proposition \ref{proposition 24} is
similar to the proof of Proposition \ref{propozycja22}.
\vspace{3mm}
\subsection{The proof of the main lemma}
Now we will prove Lemma \ref{Lemma1}. Equation
(\ref{rownanie pomocnicze2}),
which we are interested in, can be written in the form:
\be\label{rownanie z A} A_3(x)\cdot h^3 + A_2(x)\cdot h^2 +
A_1(x)\cdot h +
A_o(x)=0
\ee
where, using the formulas for the derivatives of $L$ and $\varepsilon$ given above, we have:
\be A_3(x)=
\frac{1}{6}(L^{(3)}(x)+\varepsilon^{(3)}(x))=\frac{1}{48}\cdot\frac{8\sqrt{x}(y+2)+y^3(3y-2)}
{x^2\sqrt{x}y^3}, \ee \be
A_2(x)=\frac{1}{2}(L^{(2)}(x)+\varepsilon^{(2)}(x))=
\frac{-1}{8}\cdot\frac{4\sqrt{x}+y^3}{x\sqrt{x}y^2}, \ee \be
A_1(x)= \frac{y+2}{\sqrt{x}}, \ee
\be A_o(x)=2\sqrt{x} y.\ee
Now, taking into account the fact, that for $x$ sufficiently
large
$A_3(x)>0$, we divide equation (\ref{rownanie z A})
by $A_3(x)$
in order to obtain the form:
\be\label{rownanie z B}
h^3+ B_2(x)\cdot h^2 +B_1(x)\cdot h + B_o(x)=0
\ee
where
\be B_2(x)=\frac{A_2(x)}{A_3(x)}=-6x\frac{4\sqrt{x}y+y^4}
{8\sqrt{x}y+16\sqrt{x}+3y^4-2y^3},
\ee
\be B_1(x)= \frac{A_1(x)}{A_3(x)}=
48x^2\frac{y^3(y+2)}{8\sqrt{x}y+16\sqrt{x}+3y^4-2y^3}, \ee
\be B_o(x)=\frac{A_o(x)}{A_3(x)}=
96x^3\frac{y^4}{8\sqrt{x}y+16\sqrt{x}+3y^4-2y^3}.
\ee
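The quotients $B_i=A_i/A_3$ can be checked symbolically; note in particular the factor $(y+2)$ arising in the numerator of $A_1/A_3$. A sketch, assuming sympy:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.log(x)
s = sp.sqrt(x)

# Coefficients A_i of the cubic, as given in the text
A3 = sp.Rational(1, 48) * (8*s*(y + 2) + y**3*(3*y - 2)) / (x**2 * s * y**3)
A2 = -sp.Rational(1, 8) * (4*s + y**3) / (x * s * y**2)
A1 = (y + 2) / s
A0 = 2 * s * y

# Common denominator 8*sqrt(x)*(y+2) + y^3*(3y-2), expanded
D = 8*s*y + 16*s + 3*y**4 - 2*y**3

B2 = -6*x*(4*s*y + y**4) / D
B1 = 48*x**2*y**3*(y + 2) / D
B0 = 96*x**3*y**4 / D

assert sp.simplify(A2/A3 - B2) == 0
assert sp.simplify(A1/A3 - B1) == 0
assert sp.simplify(A0/A3 - B0) == 0
```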
For further analysis of equation (\ref{rownanie z B}) it will be
convenient to use the Landau symbol $o(1)$. Let us recall that for a
function $g$ defined in a neighbourhood of $+\infty$ one
writes $g=o(1)$ if and only if $\lim_{x\rightarrow
+\infty}g(x)=0$. Using this convention, we can write:
\be
B_2(x)=-6x\frac{\frac{1}{2}+o(1)}{1+o(1)}
\ee
\be B_1(x)=48x^2\frac{o(1)}{1+o(1)}, \ee
\be B_o(x)=96x^3\frac{o(1)}{1+o(1)}. \ee
This makes it possible to write equation (\ref{rownanie z B}) in the
form:
\be h^3 - 6x\frac{\frac{1}{2}+o(1)}{1+o(1)}h^2
+48x^2\frac{o(1)}{1+o(1)}h+96x^3\frac{o(1)}{1+o(1)}=0.\ee
Now we apply the substitution $h=\theta x$, which leads to the
form: \be
\theta^3x^3-6x\frac{\frac{1}{2}+o(1)}{1+o(1)}\theta^2x^2
+48x^2\frac{o(1)}{1+o(1)}\theta x + 96x^3\frac{o(1)}{1+o(1)}=0.\ee
Since we work only with $x>0$, we can divide the last equation by
$x^3$, and we obtain the following equation (with unknown $
\theta$):
\be\label{eq50} \theta^3-6\frac{\frac{1}{2}+o(1)}{1+o(1)}
\theta^2+48\frac{o(1)}{1+o(1)}\theta+96\frac{o(1)}{1+o(1)}=0
\ee
Finally, taking into account the equality:
$$\frac{\frac{1}{2}+o(1)}{1+o(1)}=\frac{1}{2}+o(1)$$
we can write equation (\ref{eq50}) in the form: \be\label{eq51}
\theta^3 - 3\theta^2 +
v_{2}(x)\theta^2 +v_{1}(x)\theta +v_{o}(x)=0
\ee
where $v_1(x)$, $v_2(x)$, $v_o(x)$ are three positive functions
defined in a neighbourhood of $+\infty$ and tending to 0 when $x$
tends to $+\infty$. If for a fixed $x'$ we find a number
$\theta'$ that is a root of equation (\ref{eq51}), then the number
$h'=\theta'\cdot x'$ is a root of equation (\ref{rownanie z B}).
It is then enough to study equation (\ref{eq51}). We shall prove
much more. Namely we have the following:
\begin{proposition}\label{glownapropozycja}
For each $\alpha>0$ there exists a point $x_{2}$ such
that for each $x>x_2$ equation (\ref{eq51}) has in the interval
$[-\alpha,\alpha]$ exactly two roots $\theta_{-}$ and
$\theta_{+}$, and moreover $\theta_{-}<0<\theta_{+}$.
\end{proposition}
\vspace{3mm}
\begin{proof}
Indeed, Proposition \ref{glownapropozycja} is stronger than Lemma \ref{Lemma1}, where we need only the
existence of a negative root and of a positive root. In
Proposition \ref{glownapropozycja} we prove not only that the
roots exist, but also that we can find the solutions in an
arbitrary open interval containing the origin. Without loss of
generality we may assume that $\alpha\leq 1$. Let us then fix a
positive number $\alpha$ with $0<\alpha\leq 1$ and choose $x_2$ so large that
for $x>x_2$ we have:
\be\label{in52} v_2(x)\cdot \alpha^2+v_1(x)\cdot \alpha +
v_o(x)<2\alpha^2\ee
and \be\label{in53} v_2(x)\cdot \alpha^2-v_1(x)\cdot \alpha +
v_o(x)
<2\alpha^2.\ee
Such an $x_2$ exists since all three functions $v_2$, $v_1$, $v_o$
are $o(1)$ when $x$ tends to $+\infty$. Let us fix $x>x_2$. We
rewrite equation (\ref{eq51}) in the form: $f(\theta)=g(\theta)$,
where
\be f(\theta)= \theta^3 + v_2(x)\cdot \theta^2+v_1(x)\cdot \theta
+ v_o(x),\ee
and
\be\label{eq55} g(\theta)=3\cdot \theta^2.\ee
Let us set
$h(\theta)=f(\theta)-g(\theta)$ and let us consider the interval
$[0,\alpha]$. We have $h(0)= f(0)-g(0)=v_o(x)
>0$ and, since $\alpha\leq 1$ and using inequality (\ref{in52}), we obtain:
$$h(\alpha)=f(\alpha)-g(\alpha)= \alpha^3 + v_2(x)\cdot \alpha^2+v_1(x)\cdot
\alpha + v_o(x)-3\alpha^2<\alpha^2+2\alpha^2-3\alpha^2=0.$$ Thus equation
(\ref{eq51}) has a root $\theta_{+}\in (0,\alpha)$.
Now we will consider the interval $[-\alpha,0]$. For $\theta=0$ we
have, as above, $h(0)=v_o(x)>0$. For $\theta=-\alpha$ we have
(since $-\alpha^3<0$ and using inequality (\ref{in53})):
\be
h(-\alpha)=f(-\alpha)-g(-\alpha)= -\alpha^3 + v_2(x)\cdot
\alpha^2-v_1(x)\cdot \alpha + v_o(x)- 3\alpha^2 <\ee
\be <v_2(x)\cdot \alpha^2-v_1(x)\cdot \alpha +
v_o(x)-3\alpha^2<2\alpha^2-3\alpha^2<0.\ee
Once more the continuity argument implies the existence of the
root $\theta_{-}$ of the equation (\ref{eq51}) in the interval
$(-\alpha,0)$. Let us remark that $\theta_{-}\cdot
x=h^{*}_{-}(x)$ and $\theta_{+}\cdot x=h^{*}_{+}(x)$. This ends
the proof of Proposition \ref{glownapropozycja}, and hence also of
Lemma \ref{Lemma1}.
\end{proof}
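The behaviour described in Proposition \ref{glownapropozycja} can be illustrated numerically. The following Python sketch is not part of the proof; the values $v_2=v_1=v_o=0.01$ and $\alpha=0.5$ are arbitrary stand-ins for the $o(1)$ coefficients at a large fixed $x$. It finds the roots of the perturbed cubic and confirms that exactly two of them lie in $[-\alpha,\alpha]$, one on each side of the origin.

```python
import numpy as np

# Perturbed cubic from equation (eq51):
#   theta^3 - 3*theta^2 + v2*theta^2 + v1*theta + v0 = 0,
# with small positive v's standing in for the o(1) coefficients.
v2, v1, v0 = 0.01, 0.01, 0.01
alpha = 0.5

roots = np.roots([1.0, v2 - 3.0, v1, v0])
real = roots[np.abs(np.imag(roots)) < 1e-9].real
small = sorted(r for r in real if -alpha <= r <= alpha)

print(small)  # two roots near 0: one negative, one positive
```

The third root of the cubic sits near $3$, far outside $[-\alpha,\alpha]$, exactly as the factorization $\theta^3-3\theta^2=\theta^2(\theta-3)$ suggests.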
\vspace{3mm}
\subsection{The order of magnitude of lenses}
\vspace{3mm}
By the results of the previous subsection, we can consider four functions:
$h_{-}$, $h_{+}$, $h^{*}_{-}$ and $h^{*}_{+}$, which are defined in
an interval $(M,\infty)$, and such that the following inequalities
hold (for each $x\in(M,\infty)$):
\be h^{*}_{-}(x)<h_{-}(x)<0<h_{+}(x)<h^{*}_{+}(x).\ee
Our aim is
to establish the order of magnitude at $+\infty$ of the difference
$H(x)=h_{+}(x)-h_{-}(x)$. We will prove the following:
\vspace{3mm}
\begin{proposition}\label{Prop o male}
{\it The function $H$ satisfies the relation
$$H(x) = o(x)$$ when $x$ tends to $+\infty$.}
\end{proposition}
\vspace{3mm}
\begin{proof}
This follows directly from the property formulated in
Proposition \ref{glownapropozycja}. Indeed, since
$0<h_{+}(x)<h^{*}_{+}(x)$ and $h^{*}_{-}(x)<h_{-}(x)<0$, it is
sufficient to show separately that $h^{*}_{+}(x)=o(x)$ and
$|h^{*}_{-}(x)|=o(x)$. To prove the first
relation, let us fix a positive number $\epsilon>0$. It follows
from Proposition \ref{glownapropozycja} (setting $\alpha=\epsilon$) that there exists $M_1>M$
such that $x>M_1$ implies that there exists a number $\theta\in(0,\epsilon)$
($\theta$ depending on $x$) such that $h^{*}_{+}(x)=\theta \cdot
x$. But this means that $$\frac{h^{*}_{+}(x)}{x}<\epsilon$$ for
$x>M_1$. The proof for $h^{*}_{-}$ is similar.
\end{proof}
\vspace{3mm}
Now we can prove a theorem on the order of magnitude of the length
of lenses $S_k$ using Proposition \ref{Prop o male}. First we
shall prove the following lemma about sequences tending to
$+\infty$.
\vspace{3mm}
\begin{lemma}\label{ciagi}
{\it Suppose that we have four sequences
$(x^{-}_k)_1^{\infty}$,$(x^{+}_k)_1^{\infty}$,$(z_k)_1^{\infty}$,
and $(e_k)_1^{\infty}$ such that:
\be 0<x^{-}_k\leq e_k<e_{k+1}\leq x^{+}_k, \ee
\be x^{-}_k\leq z_k \leq x^{+}_k, \ee
\be \lim_{k\rightarrow \infty} e_k=+\infty,\ee
\be \lim_{k\rightarrow
\infty}\frac{x^{+}_k-x^{-}_k}{z_k}=0.\ee
Then $$\lim_{k\rightarrow \infty}\frac{e_{k+1}-e_k}{e_k} =0.$$}
\end{lemma}
\vspace{3mm}
\begin{proof}
From (60) and (62) we deduce that $$\lim_{k\rightarrow \infty}x^{+}_k = +
\infty.$$ We must also have $$\lim_{k\rightarrow \infty}x^{-}_k = +
\infty.$$ Indeed, suppose that there exist an infinite subset
$\mathbb L\subset \mathbb N$ and a constant $K>0$ such that $0\leq
x^{-}_n\leq K$ for $n\in \mathbb L$. Then for $n\in \mathbb L$ we
have:
$$0\leq \frac{x^{+}_n-K}{z_n}\leq\frac{x^{+}_n-x^{-}_n}{z_n}.$$
Hence by (63) $$ \frac{x^{+}_n-K}{z_n}\rightarrow 0,\quad n\in \mathbb
L.$$ This implies that $\lim_{n\in \mathbb L}z_n = +\infty$.
Consequently $$\lim_{n\in \mathbb L}\frac{x^{+}_n}{z_n} = 0,$$ thus
there exists $n\in \mathbb L$ such that $x^{+}_n<z_n$, but this is
impossible.
From the inequality $$\frac{x^{+}_k-x^{-}_k}{x^{+}_k}\leq
\frac{x^{+}_k-x^{-}_k}{z_k}$$ we deduce that
$$\lim_{k\rightarrow +\infty}\frac{x^{-}_k}{x^{+}_k}=1,$$ and this gives
$$\lim_{k\rightarrow +\infty}\frac{x^{+}_k-x^{-}_k}{x^{-}_k}=0.$$
But
$$\frac{x^{+}_k-x^{-}_k}{e_k}\leq
\frac{x^{+}_k-x^{-}_k}{x^{-}_k},$$ so
$$\lim_{k\rightarrow \infty}\frac{x^{+}_k-x^{-}_k}{e_k}=0.$$
Since $$\frac{e_{k+1}-e_k}{e_k}\leq \frac{x^{+}_{k}-x^{-}_k}{e_k},$$
it follows that $$\lim_{k\rightarrow \infty}\frac{e_{k+1}-e_k}{e_k} =0,$$ and
this ends the proof of Lemma \ref{ciagi}.
\end{proof}
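The hypotheses and the conclusion of Lemma \ref{ciagi} can be checked on a concrete family of sequences; the choices $x^{-}_k=z_k=e_k=k^2$ and $x^{+}_k=(k+1)^2$ in the Python sketch below are merely illustrative.

```python
# Illustrative sequences for the lemma:
#   x_minus_k = k^2, x_plus_k = (k+1)^2, z_k = k^2, e_k = k^2.
# Then (x_plus - x_minus)/z = (2k+1)/k^2 -> 0, and the conclusion
# (e_{k+1} - e_k)/e_k = (2k+1)/k^2 -> 0 as well.
def x_minus(k): return k * k
def x_plus(k):  return (k + 1) ** 2
def z(k):       return k * k
def e(k):       return k * k

for k in range(1, 10**4):
    assert 0 < x_minus(k) <= e(k) < e(k + 1) <= x_plus(k)
    assert x_minus(k) <= z(k) <= x_plus(k)

K = 10**5
gap_ratio = (e(K + 1) - e(K)) / e(K)
print(gap_ratio)  # (2K+1)/K^2, about 2e-5
```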
\vspace{3mm}
\begin{lemma}\label{graph}
{ \it The graph of the function $\pi^{*}$ lies between the graphs
of the functions $Li-\varepsilon$ and $Li+\varepsilon$}.
\end{lemma}
\vspace{3mm}
\begin{proof}
Suppose the opposite. Then there exist two consecutive prime
numbers $p_n$ and $p_{n+1}$ such that the points $A=(p_n,n)$ and
$B=(p_{n+1},n+1)$ lie between $Li-\varepsilon$ and
$Li+\varepsilon$ and the segment $[A;B]$ cuts the graph of
$Li-\varepsilon$ or $Li+\varepsilon$. But the subgraph of
$Li+\varepsilon$ is convex, so $[A;B]$ can cut only the graph of
$Li-\varepsilon$. This means that there exists a point $x\in
(p_n,p_{n+1})$ such that the point $X=(x,n)$ lies below the graph
of $Li-\varepsilon$. But $X=(x,\pi(x))$, so by the definition
of the error term $X$ lies between the graphs of $Li-\varepsilon$
and $Li+\varepsilon$, a contradiction. This ends the proof of Lemma \ref{graph}.
\end{proof}
\vspace{3mm}
\begin{lemma}\label{lemat29}
{\it Let $S_k$ be a lens defined by the extremal prime numbers
$e_k$ and $e_{k+1}$. Then the straight line joining the points
$U=(e_k,\pi(e_k))$ and $V=(e_{k+1},\pi(e_{k+1}))$ cannot cut the
graph of $Li-\varepsilon$ in two distinct points.}
\end{lemma}
\vspace{3mm}
\begin{proof}
This follows from Lemma \ref{graph} since, by the definition of
extremal points, the whole graph of $\pi^{*}$ lies below the
straight line joining the points $U$ and $V$.
\end{proof}
\vspace{3mm}
The main theorem of this section is the following:
\vspace{3mm}
\begin{theorem}
{\it With the notations as above, if the Riemann Hypothesis is
true, then $$\lim_{k\rightarrow +\infty}\frac{e_{k+1}}{e_k} =
1.$$}
\end{theorem}
\vspace{3mm}
\begin{proof}
Let $U$ and $V$ be as in Lemma \ref{lemat29}. Take the straight
line $l(U,V)$ joining $U$ and $V$ and translate it to the
position $l^{*}$ where the straight line $l^{*}$ is parallel to
$l(U,V)$ and tangent to the graph of $Li-\varepsilon$. This line
$l^{*}$ cuts the graph of $Li+\varepsilon$ in points $U^{*}$ and
$V^{*}$, whose first coordinates are $x^{-}_k$ and $x^{+}_k$
respectively, and the tangent point is $z_k$. It is not hard to
check that the sequences
$(x^{-}_k)_1^{\infty}$, $(x^{+}_k)_1^{\infty}$, $(z_k)_1^{\infty}$,
and $(e_k)_1^{\infty}$ satisfy the assumptions of Lemma
\ref{ciagi}, and this ends the proof of the theorem.
\end{proof}
\vspace{3mm}
We have an equivalent formulation.
\vspace{3mm}
\begin{corollary}
{ \it The length of lenses $x\rightarrow
S(x)$ satisfies the equality $S(x)=o(x)$.}
\end{corollary}
\section{Part III}
\subsection{Final remarks}
It is natural to ask if one can prove results like Theorem 30
or Corollary 31 without assuming the Riemann Hypothesis. Maybe
this is possible, but it seems that the method used in this paper
is insufficient. In particular, an analogous argumentation applied
to $L(x)=\frac{x}{\ln x}$ and $\varepsilon(x)=
C\cdot\frac{x}{\ln^{2}x}$ gives only $S(x)=O(x)$. I was also not
able to prove Theorem 30 using $L(x)= Li(x)$ and
$$\varepsilon(x)=O\left(x\cdot \exp\left(-\frac{A(\ln x)^{\frac{3}{5}}}{(\ln(\ln
x))^{\frac{1}{5}}}\right)\right).$$ On the other hand, for
$L(x)=Li(x)$ the error term $\varepsilon(x)=O(x^{\alpha}\cdot
\ln^{k}x)$ (with $\alpha
>\frac{1}{2}$ and $k\in \mathbb Z$) is sufficient.
If one assumes the Riemann Hypothesis, then some naive
argumentation leads to an estimate like $S(x)= O(\sqrt{x}
\ln^{2}x)$, which seems to be supported by the experimental data.
This may suggest that the problem of determining the right order
of magnitude of $S(x)$ at infinity is close to the problem of
determining the right order of magnitude of the difference
$|Li(x)-\pi(x)|$.
I have no idea about ``the small gaps between extremal primes''. As
was mentioned in Part I, Question \ref{pytanie}, small gaps
between extremal primes (i.e.\ small $S_k$) may occur, but theorems
like $$\liminf\frac{e_{k+1}-e_k}{\ln e_k}=0$$
or at least $$\liminf\frac{e_{k+1}-e_k}{\sqrt{e_k}}=0$$ seem to
be out of reach.
As was mentioned in the Introduction, Montgomery and Wagon in
\cite{MonWa} considered the function $M: x\mapsto
\frac{x}{\pi(x)}$. I used an algorithm analogous to the one in Proposition
\ref{rekurencja} to obtain about 1500 ``other'' extremal prime
numbers $(m_k)_1^{\infty}$, ``generated'' by the function $M(x)$
instead of $\pi(x)$. Generated by $M(x)$ means that the points
$(m_k,M(m_k))$ are extremal points of the convex hull of the
subgraph of the function $M(x)$. Clearly $(m_k)_1^{\infty}$ and
$(e_k)_1^{\infty}$ are not the same sequences; there are many
differences, but on the other hand they behave similarly in an
asymptotic sense.
\begin{figure}[h]
\begin{center}
\scalebox{0.6}{\includegraphics{rys01a.eps}}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\scalebox{0.5}{\includegraphics{rys01b.eps}}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\scalebox{0.6}{\includegraphics{rys01c.eps}}
\end{center}
\end{figure}
\section{Introduction}
Methods for \emph{online convex optimization} (OCO)
\citep{ShalevShwartz2012,Hazan2016} make it possible to optimize
parameters sequentially, by processing convex functions in a streaming
fashion. This is important in time series prediction where the data are
inherently online; but it may also be convenient to process offline data
sets sequentially, for instance if the data do not all fit into memory
at the same time or if parameters need to be updated quickly when extra
data become available.
The difficulty of an OCO task depends on the convex functions
$f_1,f_2,\ldots,f_T$ that need to be optimized. The argument of these
functions is a $d$-dimensional parameter vector $\w$ from a convex
domain $\U$. Although this is abstracted away in the general framework,
each function $f_t$ usually measures the loss of the parameters on an
underlying example $(\x_t,y_t)$ in a machine learning task. For example,
in classification $f_t$ might be the \emph{hinge loss} $f_t(\w) =
\max\{0,1-y_t \ip{\w}{\x_t}\}$ or the \emph{logistic loss} $f_t(\w) =
\log\del*{1 + e^{-y_t \ip{\w}{\x_t}}}$, with $y_t \in \{-1,+1\}$. Thus
the difficulty depends both on the choice of loss and on the observed
data.
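For concreteness, the two example losses and their (sub)gradients with respect to $\w$ can be sketched as follows; the vectors below are arbitrary illustrative values.

```python
import numpy as np

def hinge_loss(w, x, y):
    # f_t(w) = max(0, 1 - y <w, x>); a subgradient is -y*x where
    # the margin is violated, and 0 elsewhere.
    margin = y * np.dot(w, x)
    loss = max(0.0, 1.0 - margin)
    grad = -y * x if margin < 1.0 else np.zeros_like(w)
    return loss, grad

def logistic_loss(w, x, y):
    # f_t(w) = log(1 + exp(-y <w, x>)); the gradient is
    # -y*x / (1 + exp(y <w, x>)).
    m = y * np.dot(w, x)
    return np.log1p(np.exp(-m)), -y * x / (1.0 + np.exp(m))

w = np.array([0.5, -0.2])
x = np.array([1.0, 2.0])
print(hinge_loss(w, x, +1)[0], logistic_loss(w, x, +1)[0])
```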
There are different methods for OCO, depending on assumptions that can
be made about the functions. The simplest and most commonly used
strategy is \emph{online gradient descent} (GD), which does not require
any assumptions beyond convexity. GD updates parameters $\w_{t+1} = \w_t
- \eta_t \nabla f_t(\w_t)$ by taking a step in the direction of the
negative gradient, where the step size is determined by a parameter
$\eta_t$ called the \emph{learning rate}. For learning rates $\eta_t
\propto 1/\sqrt{t}$, GD guarantees that the \emph{regret} over $T$
rounds, which measures the difference in cumulative loss between the
online iterates $\w_t$ and the best offline parameters $\u$, is bounded
by $O(\sqrt{T})$ \citep{Zinkevich2003}. Alternatively, if it is known
beforehand that the functions are of an easier type, then better regret
rates are sometimes possible. For instance, if the functions are
\emph{strongly convex}, then logarithmic regret $O(\log T)$ can be
achieved by GD with much smaller learning rates $\eta_t \propto 1/t$
\citep{ons}, and, if they are \emph{exp-concave}, then logarithmic
regret $O(d \log T)$ can be achieved by the \emph{Online Newton Step}
(ONS) algorithm \citep{ons}.
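The $O(\sqrt{T})$ guarantee for GD with $\eta_t \propto 1/\sqrt{t}$ can be illustrated on a toy one-dimensional stream. In the sketch below, the absolute-loss targets alternating between $0.2$ and $0.4$ are an arbitrary choice; any comparator $u\in[0.2,0.4]$ is optimal in hindsight with cumulative loss exactly $0.1\,T$.

```python
import math

# Online (sub)gradient descent with eta_t = D/(G*sqrt(t)) on the
# 1-D stream f_t(w) = |w - c_t|, with c_t alternating 0.2, 0.4.
D, G, T = 1.0, 1.0, 10_000
c = [0.2 if t % 2 == 0 else 0.4 for t in range(T)]

w, alg_loss = 0.0, 0.0
for t in range(1, T + 1):
    ct = c[t - 1]
    alg_loss += abs(w - ct)
    g = 0.0 if w == ct else math.copysign(1.0, w - ct)  # subgradient
    w = min(1.0, max(0.0, w - (D / (G * math.sqrt(t))) * g))

# Any fixed u in [0.2, 0.4] gives total loss exactly 0.1*T here.
best_loss = 0.1 * T
avg_regret = (alg_loss - best_loss) / T
print(avg_regret)  # small, shrinking like O(1/sqrt(T))
```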
This partitions OCO tasks into categories, leaving it to the user to
choose the appropriate algorithm for their setting. Such a strict
partition, apart from being a burden on the user, depends on an
extensive cataloguing of all types of easier functions that might occur
in practice. (See Section~\ref{sec:fastRateExamples} for several ways in
which the existing list of easy functions can be extended.) It also
immediately raises the question of whether there are cases in between
logarithmic and square-root regret (there are, see
Theorem~\ref{thm:Bernstein} in Section~\ref{sec:fastRateExamples}), and
which algorithm to use then. And, third, it presents the problem that
the appropriate algorithm might depend on (the distribution of) the data
(again see Section~\ref{sec:fastRateExamples}), which makes it entirely
impossible to select the right algorithm beforehand.
These issues motivate the development of \emph{adaptive} methods, which
are no worse than $O(\sqrt{T})$ for general convex functions, but also
automatically take advantage of easier functions whenever possible. An
important step in this direction are the adaptive GD algorithm of
\citeauthornumber{BartlettHazanRakhlin2007} and its proximal improvement by
\citeauthornumber{Do2009}, which are able to interpolate between strongly convex
and general convex functions if they are provided with a data-dependent
strong convexity parameter in each round, and significantly outperform
the main non-adaptive method (i.e.\ Pegasos,
\citep{Shalev-ShwartzEtAl2011Pegasos}) in
the experiments of \citeauthor{Do2009}. Here we consider a significantly richer
class of functions, which includes exp-concave functions, strongly
convex functions, general convex functions that do not change between
rounds (even if they have no curvature), and stochastic functions whose
gradients satisfy the so-called Bernstein condition, which is well-known
to enable fast rates in offline statistical learning
\citep{BartlettMendelson2006,VanErven2015FastRates,AndereNIPSpaper2016}.
The latter group can again include functions without curvature, like the
unregularized hinge loss. All these cases are covered simultaneously by
a new adaptive method we call \emph{MetaGrad}, for \underbar{m}ultiple
\underbar{eta} \underbar{grad}ient algorithm. MetaGrad maintains a
covariance matrix of size $d \times d$ where $d$ is the parameter
dimension. In the remainder of the paper we call this version \emph{full
MetaGrad}. A reference implementation is available from~\cite{MetaGradCode}. We also design and analyze a faster approximation that only
maintains the $d$ diagonal elements, called \emph{diagonal MetaGrad}.
Theorem~\ref{thm:mainbound} below implies the following:
\begin{theorem}\label{thm:roughthm}
Let $\grad_t = \nabla f_t(\w_t)$ and $V_T^\u = \sum_{t=1}^T \del*{(\u - \w_t)^\top \grad_t}^2$. Then the regret of full MetaGrad is
simultaneously bounded by $O(\sqrt{T \log \log T})$, and by
\begin{equation}\label{eqn:roughmainbound}
\sum_{t=1}^T f_t(\w_t) - \sum_{t=1}^T f_t(\u)
~\le~
\sum_{t=1}^T (\w_t - \u)^\top \grad_t
~\le~
O\del*{
\sqrt{
V_T^\u\,
d \ln T
}
+ d \ln T
}
\end{equation}
for any $\u \in \U$.
\end{theorem}
Theorem~\ref{thm:roughthm} bounds the regret in terms of a measure of variance
$V_T^\u$ that depends on the distance of the algorithm's choices $\w_t$
to the optimum $\u$, and which, in favourable cases, may be
significantly smaller than $T$. Intuitively, this happens, for instance,
when there is a stable optimum $\u$ that the algorithm's choices $\w_t$
converge to. Formal consequences are given in
Section~\ref{sec:fastRateExamples}, which shows that this bound implies
faster than $O(\sqrt{T})$ regret rates, often logarithmic in $T$, for
all functions in the rich class mentioned above. In all cases the
dependence on $T$ in the rates matches what we would expect based on
related work in the literature, and in most cases the dependence on the
dimension $d$ is also what we would expect. Only for strongly convex
functions is there an extra factor $d$. It is an open question whether
this is a fundamental obstacle for which an even more general adaptive
method is needed, or whether it is an artefact of our analysis.
The main difficulty in achieving the regret guarantee from
Theorem~\ref{thm:roughthm} is tuning a learning rate parameter $\eta$.
In theory, $\eta$ should be roughly $1/\sqrt{V_T^\u}$, but this is not
possible using any existing techniques, because the optimum $\u$ is unknown in
advance, and tuning in terms of a uniform upper bound $\max_\u V_T^\u$
ruins all desired benefits. MetaGrad therefore runs multiple slave
algorithms, each with a different learning rate, and combines them with
a novel master algorithm that learns the empirically best learning rate
for the OCO task in hand. The slaves are instances of exponential
weights on the continuous parameters $\u$ with a suitable surrogate loss
function, which in particular causes the exponential weights
distributions to be multivariate Gaussians. For the full version of
MetaGrad, the slaves are closely related to the ONS algorithm on the original
losses, where each slave receives the master's gradients instead
of its own. It is shown that $\ceil{\half \origlog_2 T} + 1$ slaves
suffice, which is at most $16$ as long as $T \leq 10^9$, and therefore
seems computationally acceptable. If not, then the number of slaves can
be further reduced at the cost of slightly worse constants in the bound.
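The claimed number of slaves is a one-line computation:

```python
import math

# Number of MetaGrad slaves: ceil(log2(T)/2) + 1.
# For T up to 1e9 this is at most 16, matching the text.
def num_slaves(T):
    return math.ceil(0.5 * math.log2(T)) + 1

print(num_slaves(10**9))  # 16
```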
\paragraph{Related Work}
If we disregard computational efficiency, then the result of
Theorem~\ref{thm:roughthm} can be achieved by finely discretizing the
domain $\U$ and running the Squint algorithm for prediction with experts
with each discretization point as an expert \citep{squint}. MetaGrad may
therefore also be seen as a computationally efficient extension of
Squint to the OCO setting.
Our focus in this work is on adapting to sequences of functions $f_t$
that are easier than general convex functions. A different direction in
which faster rates are possible is by adapting to the domain $\U$. As we
assume $\U$ to be fixed, we consider an upper bound $D$ on the norm of the
optimum $\u$ to be known. In contrast,
\citeauthor*{OrabonaPal2016} \cite{Orabona2014,OrabonaPal2016} design methods that can
adapt to the norm of $\u$. One may also look at the shape of $\U$. As
can be seen in the analysis of the slaves, MetaGrad is based on a spherical
Gaussian prior on $\reals^d$, which favours $\u$ with small
$\ell_2$-norm. This is appropriate for $\U$ that are similar to the
Euclidean ball, but less so if $\U$ is more like a box
($\ell_\infty$-ball). In this case, it would be better to run a copy of
MetaGrad for each dimension separately, similarly to how the diagonal
version of the AdaGrad algorithm \citep{adagrad,McMahanStreeter2010} may
be interpreted as running a separate copy of GD with a separate learning
rate for each dimension. AdaGrad further uses an adaptive tuning of the
learning rates that is able to take advantage of sparse gradient
vectors, as can happen on data with rarely observed features. We briefly
compare to AdaGrad in some very simple simulations in
Appendix~\ref{app:simulations}.
Another notion of adaptivity is explored in a series of work
\cite{hazan2010extracting,GradualVariationInCosts2012,SteinhardtLiang14}
obtaining tighter bounds for
linear functions $f_t$ that vary little between rounds (as measured
either by their deviation from the mean function or by successive
differences). Such bounds imply super fast rates for optimizing a fixed
linear function, but reduce to slow $O(\sqrt{T})$ rates in the other
cases of easy functions that we consider.
Finally, the way MetaGrad's slaves maintain a Gaussian distribution on
parameters $\u$ is similar in spirit to AROW and related confidence
weighted methods, as analyzed by \citeauthornumber{CrammerEtAl2009AROW}
in the mistake bound model.
\paragraph{Outline}
We start with the main definitions in the next section. Then
Section~\ref{sec:fastRateExamples} contains an extensive set of
examples where Theorem~\ref{thm:roughthm} leads to fast rates,
Section~\ref{sec:metagrad} presents the MetaGrad algorithm, and
Section~\ref{sec:analysis} provides the analysis leading to
Theorem~\ref{thm:mainbound}, which is a more detailed statement of
Theorem~\ref{thm:roughthm} with an improved dependence on the dimension
in some particular cases and with exact constants. The details of the
proofs can be found in the appendix.
\section{Setup}
\begin{algorithm2e}[t]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmiccomment}[1]{\hfill $\triangleright$~\textit{#1}}
\begin{algorithmic}[1]
\REQUIRE Convex set $\U$
\FOR{$t=1,2,\ldots$}
\STATE Learner plays $\w_t \in \U$
\STATE Environment reveals convex loss function $f_t : \U \to \reals$
\STATE Learner incurs loss $f_t(\w_t)$ and observes (sub)gradient $\grad_t = \nabla f_t(\w_t)$
\ENDFOR
\end{algorithmic}
\SetAlgorithmName{Protocol}{Protocol}{List of Protocols}
\caption{Online Convex Optimization from First-order Information}\label{alg:OCOprotocol}
\end{algorithm2e}
\setcounter{algocf}{0}
Let $\U \subseteq \reals^d$ be a closed convex set, which we assume
contains the origin $\zeros$ (if not, it can always be translated). We
consider algorithms for Online Convex Optimization over $\U$, which
operate according to the protocol displayed in
Protocol~\ref{alg:OCOprotocol}. Let $\w_t \in \U$ be the iterate
produced by the algorithm in round $t$, let $f_t : \U \to \reals$ be the
convex loss function produced by the environment and let $\grad_t =
\nabla f_t(\w_t)$ be the (sub)gradient, which is the feedback given to the
algorithm.\footnote{If $f_t$ is not differentiable at $\w_t$, any choice of subgradient $\grad_t \in \partial
f_t(\w_t)$ is allowed.} We abbreviate the
\emph{regret} with respect to $\u \in \U$ as $R_T^\u = \sum_{t=1}^T
\del*{f_t(\w_t) - f_t(\u)}$, and define our measure of variance as
$V_T^\u = \sum_{t=1}^T \del*{(\u - \w_t)^\top \grad_t}^2$ for the full
version of MetaGrad and $V_T^\u = \sum_{t=1}^T \sum_{i=1}^d (u_i -
w_{t,i})^2 \grads_{t,i}^2$ for the diagonal version. By convexity of
$f_t$, we always have $f_t(\w_t) - f_t(\u) \leq (\w_t - \u)^\top
\grad_t$. Defining $\Rtrick_T^\u =
\sum_{t=1}^T(\w_t - \u)^\top \grad_t$, this
implies the first inequality in
Theorem~\ref{thm:roughthm}: $R_T^\u \leq \Rtrick_T^\u$. A stronger requirement than
convexity is that a function $f$ is \emph{exp-concave}, which (for
exp-concavity parameter $1$) means that $e^{-f}$ is concave.
Finally, we impose the following standard boundedness assumptions,
distinguishing between the full version of MetaGrad (left column) and
the diagonal version (right column): for all $\u, \v \in \U$, all
dimensions $i$ and all times $t$,
\begin{align}
\notag
& \text{full} & & \text{diag}
\\
\label{eq:B}
\norm{\u - \v} &~\leq~ \Dfull
& |u_i - v_i| &~\leq~ \Ddiag
\\
\notag
\norm{\grad_t} &~\leq~ \Gfull
& |\grads_{t,i}| &~\leq~ \Gdiag.
\end{align}
Here, and throughout the paper, the norm of a vector (e.g.\
$\|\grad_t\|$) will always refer to the $\ell_2$-norm. For the full
version of MetaGrad, the Cauchy-Schwarz inequality further implies that
$(\u - \v)^\top \grad_t \leq \|\u - \v\| \cdot \|\grad_t\| \leq \Dfull
\Gfull$.
\section{Fast Rate Examples}\label{sec:fastRateExamples}
In this section, we motivate our interest in the adaptive bound
\eqref{eqn:roughmainbound} by giving a series of examples in which it
provides fast rates. These fast rates are all derived from two general
sufficient conditions: one based on the directional derivative of the
functions $f_t$ and one for stochastic gradients that satisfy the
\emph{Bernstein condition}, which is the standard condition for fast
rates in off-line statistical learning. Simple simulations that
illustrate the conditions are provided in Appendix~\ref{app:simulations}
and proofs are also postponed to
Appendix~\ref{app:MoreFastRateExamplesAndProofs}.
\paragraph{Directional Derivative Condition}
In order to control the regret with respect to some point $\u$, the
first condition requires a quadratic lower bound on the curvature of the
functions $f_t$ in the direction of $\u$:
\begin{theorem}\label{thm:curvedfunctions}
Suppose, for a given $\u \in \U$, there exist constants $a,b > 0$ such
that the functions $f_t$ all satisfy
\begin{equation}\label{eqn:curvedfunctions}
f_t(\u) \geq f_t(\w) + a (\u - \w)^\top \nabla f_t(\w) + b \del*{(\u - \w)^\top \nabla f_t(\w)}^2
\qquad \text{for all $\w \in \U$.}
\end{equation}
Then any method with regret bound \eqref{eqn:roughmainbound} incurs
logarithmic regret, $R_T^\u = O(d \ln T)$, with respect to $\u$.
\end{theorem}
The case $a=1$ of this condition was introduced by \citeauthornumber{ons}, who show
that it is satisfied for all $\u \in \U$ by exp-concave and strongly
convex functions. The rate $O(d \log T)$ is also what we would expect by
summing the asymptotic offline rate obtained by ridge regression on the
squared loss \citep[Section~5.2]{SrebroEtAl2010}, which is exp-concave.
Our extension to $a > 1$ is technically a minor step, but it makes the
condition much more liberal, because it may then also be satisfied by
functions that do \emph{not} have any curvature. For example, suppose
that $f_t = f$ is a fixed convex function that does not change with $t$.
Then, when $\u^* = \argmin_\u f(\u)$ is the offline minimizer, we have
$(\u^* - \w)^\top \nabla f(\w) \in \intcc{-\Gfull \Dfull,0}$, so that
\begin{align*}
f(\u^*) - f(\w)
&\geq (\u^* - \w)^\top \nabla f(\w)
\\
&\geq 2 (\u^* - \w)^\top \nabla f(\w) + \frac{1}{\Dfull \Gfull} \del*{(\u^* - \w)^\top
\nabla f(\w)}^2,
\end{align*}
where the first inequality uses only convexity of $f$. Thus condition
\eqref{eqn:curvedfunctions} is satisfied by \emph{any fixed convex
function}, even if it does not have any curvature at all, with $a =
2$ and $b=1/(\Gfull \Dfull)$.
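The inequality derived above for a fixed convex function can be verified numerically on a grid. In the sketch below, $f(w)=|w-0.3|$ on $\U=[-1,1]$ (so $D=2$, $G=1$, $u^*=0.3$) is an arbitrary curvature-free example; we check condition \eqref{eqn:curvedfunctions} with $a=2$ and $b=1/(DG)$.

```python
# Grid check of the quadratic lower bound with a = 2, b = 1/(D*G)
# for the fixed convex function f(w) = |w - 0.3| on U = [-1, 1].
D, G = 2.0, 1.0
u_star = 0.3
f = lambda w: abs(w - u_star)
grad = lambda w: 1.0 if w > u_star else -1.0   # a valid subgradient

for i in range(201):
    w = -1.0 + i * 0.01
    d = (u_star - w) * grad(w)                 # directional term
    lhs = f(u_star) - f(w)
    rhs = 2.0 * d + (1.0 / (D * G)) * d * d
    assert lhs >= rhs - 1e-12
print("condition holds on the grid")
```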
\paragraph{Bernstein Stochastic Gradients}
The possibility of getting fast rates even without any curvature is
intriguing, because it goes beyond the usual strong convexity or
exp-concavity conditions. In the online setting, the case of fixed
functions $f_t = f$ seems rather restricted, however, and may in fact be
handled by offline optimization methods. We therefore seek to loosen
this requirement by replacing it by a stochastic condition on the
distribution of the functions $f_t$. The relation between variance
bounds like Theorem~\ref{thm:roughthm} and fast rates in the stochastic
setting is studied in depth by \citeauthornumber{AndereNIPSpaper2016}, who obtain
fast rate results both in expectation and in probability. Here we
provide a direct proof only for the expected regret, which allows a
simplified analysis.
Suppose the functions $f_t$ are independent and identically
distributed (i.i.d.), with common distribution $\pr$. Then we say that
the gradients satisfy the \emph{$(B,\beta)$-Bernstein condition} with
respect to the stochastic optimum
\[
\u^* = \argmin_{\u \in \U} \E_{f \sim
\pr}[f(\u)]
\]
if, for all $\w \in \U$,
\begin{equation}\label{eqn:bernstein}
(\w - \u^*)^\top
\ex_f \sbr*{
\nabla f(\w) \nabla f(\w)^\top
}
(\w - \u^*)
~\le~
B
\big((\w - \u^*)^\top \ex_f \sbr*{\nabla f(\w)}\big)^\beta
.
\end{equation}
This is an instance of the well-known Bernstein condition from offline
statistical learning
\citep{BartlettMendelson2006,VanErven2015FastRates}, applied to the
linearized excess loss $(\w - \u^*)^\top \nabla f(\w)$.
As shown in Appendix~\ref{sec:bnst}, imposing the condition for the
linearized excess loss is a weaker requirement than imposing it for the
original excess loss $f(\w) - f(\u^*)$.
\begin{theorem}\label{thm:Bernstein}
If the gradients satisfy the $(B,\beta)$-Bernstein condition for $B > 0$
and $\beta \in (0,1]$ with respect to $\u^* = \argmin_{\u \in \U} \E_{f
\sim \pr}[f(\u)]$, then any method with regret bound
\eqref{eqn:roughmainbound} incurs expected regret
\[
\E\sbr{R_T^{\u^*}} =
O\del*{\del*{B d \ln T}^{1/(2-\beta)} T^{(1-\beta)/(2-\beta)}
+ d\ln T}.
\]
\end{theorem}
\noindent
For $\beta=1$, the rate becomes $O(d
\ln T)$, just like for fixed functions, and for smaller $\beta$ it is in
between logarithmic and $O(\sqrt{d T})$.
For instance, the hinge loss on the unit ball with i.i.d.\ data satisfies the Bernstein condition with $\beta = 1$, which implies
an $O(d \log T)$ rate. (See Appendix~\ref{app:hingeLossExample}.) It is
common to add $\ell_2$-regularization to the hinge loss to make it
strongly convex, but this example shows that that is not necessary
to get logarithmic regret.
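The interpolation between the regimes is visible directly in the exponent of $T$ in Theorem~\ref{thm:Bernstein}:

```python
# Exponent of T in the Bernstein bound: (1-beta)/(2-beta).
# beta = 1 gives exponent 0 (logarithmic regret), while
# beta -> 0 recovers the slow-rate exponent 1/2.
for beta in (1.0, 0.5, 0.0):
    print(beta, (1 - beta) / (2 - beta))
```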
\section{MetaGrad Algorithm}\label{sec:metagrad}
In this section we explain the two versions (full and diagonal) of the
MetaGrad algorithm. We will make use of the following definitions:
\begin{align}
\notag
& \text{full} & & \text{diag}
\\
\label{eq:M.choices}
\Mfull_t &~\df~ \grad_t \grad_t^\top
& \Mdiag_t &~\df~ \diag(\grads_{t,1}^2, \ldots, \grads_{t,d}^2)
\\
\notag
\alphafull &~\df~ 1
& \alphadiag &~\df~ 1/d.
\end{align}
Depending on context, $\w_t \in \U$ will refer to the full or diagonal
MetaGrad prediction in round $t$. In the remainder we will drop the
superscript from the letters above, which will always be clear from
context.
MetaGrad will be defined by means of the following \emph{surrogate loss}
$\surr_t^\eta(\u)$, which depends on a parameter $\eta > 0$ that trades off \emph{regret} compared to $\u$ with
the square of the scaled directional derivative towards $\u$ (full case)
or its approximation (diag case):
\begin{equation}\label{eq:surrogate}
\surr_t^\eta(\u)
~\df~
- \eta(\w_t-\u)^\top \grad_t
+ \eta^2 (\u - \w_t)^\top \M_t (\u - \w_t)
.
\end{equation}
Our surrogate loss consists of a linear and a quadratic part.
Using the
language of \citeauthornumber{Orabona2015}, the data-dependent quadratic part
causes a ``time-varying regularizer'' and \citeauthornumber{adagrad} would call
it ``temporal adaptation of the proximal function''.
The
sum of quadratic terms in our surrogate is what appears in the regret
bound of Theorem~\ref{thm:roughthm}.
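For concreteness, the surrogate loss \eqref{eq:surrogate} is simple enough to evaluate directly. The following sketch (our own NumPy helper, not part of the paper's code) computes $\surr_t^\eta(\u)$ for both choices of $\M_t$ in \eqref{eq:M.choices}:

```python
import numpy as np

def surrogate_loss(u, w_t, g_t, eta, diag=False):
    """ell_t^eta(u) = -eta (w_t - u)^T g_t + eta^2 (u - w_t)^T M_t (u - w_t),
    with M_t = g_t g_t^T (full) or M_t = diag(g_t**2) (diag)."""
    delta = u - w_t
    linear = -eta * np.dot(w_t - u, g_t)
    if diag:
        quad = eta**2 * np.sum(delta**2 * g_t**2)
    else:
        # delta^T (g g^T) delta = (g^T delta)^2
        quad = eta**2 * np.dot(delta, g_t)**2
    return linear + quad
```

In one dimension the full and diagonal surrogates coincide, which gives a convenient sanity check for an implementation.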
The MetaGrad algorithm is a two-level hierarchical construction,
displayed as Algorithms~\ref{alg:MetaGradMaster} (master algorithm that
learns the learning rate) and~\ref{alg:MetaGradSlave} (sub-module, a
copy running for each learning rate $\eta$ from a finite grid). Based on our
analysis in the next section, we recommend using the grid in
\eqref{eqn:grid}.
\begin{algorithm2e}[t]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmiccomment}[1]{\hfill $\triangleright$~\textit{#1}~~~~~~}
\begin{algorithmic}[1]
\REQUIRE Grid of learning rates $\frac{1}{5 D G} \geq \eta_1 \ge \eta_2 \ge \ldots$ with prior weights $\pi_1^{\eta_1}, \pi_1^{\eta_2}, \ldots$ \COMMENT{As in \eqref{eqn:grid}}
\FOR{$t=1,2,\ldots$}
\STATE Get prediction $\w_t^\eta \in \U$ of slave (Algorithm~\ref{alg:MetaGradSlave}) for each $\eta$
\STATE\label{line:tilted.ewa}
Play
$
\w_t
=
\frac{
\sum_\eta \pi_t^\eta \eta \w^\eta_t
}{
\sum_\eta \pi_t^\eta \eta \phantom{\w^\eta_t}
}
\in \U
$
\COMMENT{Tilted Exponentially Weighted Average}
\STATE\label{lin:gradien.trick}
Observe gradient $\grad_t = \nabla f_t(\w_t)$
\STATE\label{line:expw}
Update $\pi_{t+1}^\eta = \frac{\pi_t^\eta e^{-\alpha
\surr_t^\eta(\w_t^\eta)}}{\sum_\eta \pi_t^\eta e^{-\alpha\surr_t^\eta(\w_t^\eta)}}$ for all $\eta$
\COMMENT{\parbox[t]{\widthof{Exponential Weights with}}{Exponential Weights with surrogate loss \eqref{eq:surrogate}}}
\ENDFOR
\end{algorithmic}
\caption{MetaGrad Master}\label{alg:MetaGradMaster}
\end{algorithm2e}
\begin{algorithm2e}
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmiccomment}[1]{\hfill $\triangleright$~\textit{#1}~~~~~~}
\begin{algorithmic}[1]
\REQUIRE Learning rate $0 < \eta \leq \frac{1}{5 D G}$, domain size $D > 0$
\STATE $\w^\eta_1 = \zeros$ and $\Sigma^\eta_1 = D^2 \I$
\FOR{$t=1,2,\ldots$}
\STATE Issue $\w_t^\eta$ to master (Algorithm~\ref{alg:MetaGradMaster})
\STATE Observe gradient $\grad_t = \nabla f_t(\w_t)$ \COMMENT{Gradient at \emph{master} point $\w_t$}
\STATE\label{line:md}
Update
$
\begin{aligned}[t]
\Sigma^\eta_{t+1} &= \textstyle \del*{\frac{1}{D^2} \I + 2 \eta^2 \sum_{s=1}^t \M_s}^{-1}
\\
\widetilde{\w}_{t+1}^\eta &= \w^\eta_t - \Sigma^\eta_{t+1} \del*{\eta \grad_t + 2 \eta^2 \M_t (\w_t^\eta - \w_t)}
\\
\w_{t+1}^\eta &= \Pi_{\U}^{\Sigma_{t+1}^\eta} \del*{ \widetilde{\w}^\eta_{t+1}}
\end{aligned}$
\vspace{.3em}\linebreak
\mbox{}~~~with projection $\Pi_\U^{\Sigma}(\w) = \argmin_{\u \in \U} (\u - \w)^\top
\Sigma^{-1} (\u - \w)$
\ENDFOR
\end{algorithmic}
\smallskip
Implementation: For $\M_t = \Mdiag_t$ only maintain diagonal of $\Sigma_t^\eta$.
For $\M_t = \Mfull_t$ use rank-one update $\Sigma^\eta_{t+1} =
\Sigma^\eta_t - \frac{2 \eta^2\Sigma_t^\eta \grad_t \grad_t^\top
\Sigma_t^\eta}{1 + 2 \eta^2 \grad_t^\top \Sigma_t^\eta \grad_t}$ and simplify
$\widetilde{\w}_{t+1}^\eta = \w^\eta_t - \eta \Sigma^\eta_{t+1} \grad_t \del*{1 + 2 \eta\grad_t^\top (\w_t^\eta - \w_t)}$.
\caption{MetaGrad Slave}\label{alg:MetaGradSlave}
\end{algorithm2e}
\paragraph{Master}
The task of the Master Algorithm~\ref{alg:MetaGradMaster} is to learn
the empirically best learning rate $\eta$ (parameter of the surrogate
loss $\surr_t^\eta$), which is notoriously difficult to
track online because the regret is non-monotonic over rounds and may
have multiple local minima as a function of $\eta$ (see
\citep{learning.learning.rate} for a study in the expert setting). The
standard technique is therefore to derive a monotonic upper bound on the
regret and tune the learning rate optimally \emph{for the bound}. In
contrast, our approach, inspired by the approach for combinatorial games
of \citeauthornumber[Section~4]{squint}, is to have our master aggregate the
predictions of a discrete grid of learning rates. Although we provide a
formal analysis of the regret, the master algorithm does not depend on
the outcome of this analysis, so any slack in our bounds does not feed
back into the algorithm. The master is in fact very similar to the
well-known exponential weights method (line~\ref{line:expw}), run on the
surrogate losses, except that in the predictions the weights of the
slaves are \emph{tilted} by their learning rates
(line~\ref{line:tilted.ewa}), having the effect of giving a larger
weight to larger $\eta$. The internal parameter $\alpha$ is set to
$\alphafull$ from \eqref{eq:M.choices} for the full version of the
algorithm, and to $\alphadiag$ for the diagonal version.
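The two Master steps, the tilted average on line~\ref{line:tilted.ewa} and the exponential-weights update on line~\ref{line:expw}, are short enough to sketch directly. A hedged NumPy illustration (function and variable names are ours; `W` holds one slave prediction per row):

```python
import numpy as np

def master_predict(pi, etas, W):
    """Tilted EWA: w_t = (sum_eta pi^eta * eta * w_t^eta) / (sum_eta pi^eta * eta)."""
    tilt = pi * etas                       # each slave weighted by pi^eta * eta
    return (tilt[:, None] * W).sum(axis=0) / tilt.sum()

def master_update(pi, surr, alpha):
    """Exponential weights on the surrogate losses ell_t^eta(w_t^eta) of the slaves."""
    w = pi * np.exp(-alpha * surr)
    return w / w.sum()
```

The tilting gives larger effective weight to slaves with larger $\eta$, as described above.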
\paragraph{Slaves}
The role of the Slave Algorithm~\ref{alg:MetaGradSlave} is to guarantee
small surrogate regret for a fixed learning rate $\eta$. We consider two
versions, corresponding to whether we take rank-one or diagonal matrices
$\M_t$ (see \eqref{eq:M.choices}) in the surrogate \eqref{eq:surrogate}.
The first version maintains a \emph{full} $d \times d$ covariance matrix
and has the best regret bound. The second version uses only
\emph{diagonal} matrices (with $d$ non-zero entries), thus trading off a
weaker bound with a better run-time in high dimensions.
Algorithm~\ref{alg:MetaGradSlave} presents the update equations in a computationally efficient form. Their intuitive motivation is given in the proof of Lemma~\ref{lem:surrogateregret}, where we show that the standard exponential weights method with Gaussian prior and surrogate losses $\surr_t^\eta(\u)$ yields Gaussian posterior with mean $\w_t^\eta$ and covariance matrix $\Sigma_t^\eta$.
The full version of MetaGrad is closely related to the Online Newton Step
algorithm \citep{ons} running on the original losses $f_t$: the differences are that
each Slave receives the Master's gradients $\grad_t = \nabla f_t(\w_t)$
instead of its own $\nabla f_t(\w_t^\eta)$, and that an additional term $2 \eta^2 \M_t (\w_t^\eta - \w_t)$ in line~\ref{line:md} adjusts for the difference between the Slave's parameters $\w_t^\eta$ and the Master's parameters $\w_t$. MetaGrad is
therefore a bona fide first-order algorithm that only accesses $f_t$
through $\grad_t$. We also note that we have chosen the Mirror Descent
version that iteratively updates and projects (see line~\ref{line:md}).
One might alternatively consider the Lazy Projection version (as in
\cite{Zinkevich2004,Nesterov2009,Xiao2010}) that forgets past
projections when updating on new data. Since projections are typically
computationally expensive, we have opted for the Mirror Descent version,
which we expect to project less often, since a projected point seems
less likely to update to a point outside of the domain than an
unprojected point.
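The rank-one update stated under Algorithm~\ref{alg:MetaGradSlave} is the Sherman--Morrison identity applied to $(\Sigma^\eta_{t+1})^{-1} = (\Sigma^\eta_t)^{-1} + 2\eta^2 \grad_t\grad_t^\top$. A small sketch (our own code) that checks it against the direct inverse:

```python
import numpy as np

def sigma_rank_one_update(Sigma, g, eta):
    """Sherman-Morrison form of (Sigma^{-1} + 2 eta^2 g g^T)^{-1}."""
    Sg = Sigma @ g
    return Sigma - (2 * eta**2) * np.outer(Sg, Sg) / (1 + 2 * eta**2 * g @ Sg)

# sanity check against the O(d^3) direct inverse
rng = np.random.default_rng(0)
d, eta, D = 4, 0.1, 1.0
Sigma = D**2 * np.eye(d)                   # initial covariance D^2 I
g = rng.standard_normal(d)
direct = np.linalg.inv(np.linalg.inv(Sigma) + 2 * eta**2 * np.outer(g, g))
assert np.allclose(sigma_rank_one_update(Sigma, g, eta), direct)
```

This is what keeps the full slave at $O(d^2)$ per round instead of $O(d^3)$.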
\paragraph{Total run time}
As mentioned, the running time is dominated by the slaves.
Ignoring the projection, a slave with full covariance matrix takes
$O(d^2)$ time to update, while slaves with diagonal covariance matrix
take $O(d)$ time. If there are $m$ slaves, this makes the overall
computational effort respectively $O(md^2)$ and $O(md)$, both in time
per round and in memory.
Our analysis below indicates that $m = 1 + \ceil{\half \origlog_2 T}$
slaves suffice, so $m \leq 16$ as long as $T \leq 10^9$. In addition, each slave may incur the cost of a projection, which
depends on the shape of the domain $\U$. To get a sense for the
projection cost we consider a typical example. For the Euclidean ball a
diagonal projection can be performed using a few iterations of Newton's
method to get the desired precision. Each such iteration costs $O(d)$
time. This is generally considered affordable. For full projections the
story is starkly different. We typically reduce to the diagonal case by
a basis transformation, which takes $O(d^3)$ to compute using SVD. Hence
here the projection dwarfs the other run time by an order of magnitude.
We refer to \cite{adagrad} for examples of how to compute projections
for various domains $\U$. Finally, we remark that a potential speed-up
is possible by running the slaves in parallel.
\section{Analysis}\label{sec:analysis}
We conduct the analysis in three parts. We first discuss the master,
then the slaves and finally their composition. The idea is the
following. The master guarantees for all $\eta$ simultaneously that
\begin{subequations}
\label{eq:plan}
\begin{equation}\label{eq:master.plan}
0
~=~
\sum_{t=1}^T \surr_t^\eta(\w_t)
~\le~
\sum_{t=1}^T \surr_t^\eta(\w_t^\eta)
+ \text{master regret compared to $\eta$-slave}
.
\end{equation}
Then each $\eta$-slave takes care of learning $\u$, with regret $O(d
\log T)$:
\begin{equation}\label{eq:slave.plan}
\sum_{t=1}^T \surr_t^\eta(\w_t^\eta)
~\le~
\sum_{t=1}^T \surr_t^\eta(\u)
+
\text{$\eta$-slave regret compared to $\u$}
.
\end{equation}
These two statements combine to
\begin{equation}\label{eq:overall.plan}
\eta \sum_{t=1}^T (\w_t-\u)^\top \grad_t
- \eta^2 V_T^\u
~=~
- \sum_{t=1}^T \surr_t^\eta(\u)
~\le~
\text{sum of regrets above}
\end{equation}
\end{subequations}
and the overall result follows by optimizing $\eta$.
\subsection{Master}
To show that we can aggregate the slave predictions, we consider the
potential $\Phi_T \df \sum_\eta \pi_1^\eta e^{- \alpha \sum_{t=1}^T
\surr_t^\eta(\w_t^\eta)}$. In Appendix~\ref{app:masterProof}, we bound
the last factor $e^{- \alpha \surr_T^\eta(\w_T^\eta)}$ above by its
tangent at $\w_T^\eta = \w_T$ and obtain an objective that can be shown
to be equal to $\Phi_{T-1}$ regardless of the gradient $\grad_T$ if
$\w_T$ is chosen according to the Master algorithm. It follows that the
potential is non-increasing:
\begin{lemma}[Master combines slaves]\label{lem:pot.is.small}
The Master Algorithm guarantees $1 = \Phi_0 \ge \Phi_1 \ge \ldots \ge
\Phi_T$.
\end{lemma}
As $0 \le - \frac{1}{\alpha} \ln \Phi_T \le \sum_{t=1}^T
\surr_t^\eta(\w_t^\eta) + \frac{-1}{\alpha} \ln \pi_1^\eta$, this
implements step \eqref{eq:master.plan} of our overall proof strategy,
with master regret $\frac{-1}{\alpha} \ln \pi_1^\eta$. We further remark
that we may view our potential function $\Phi_T$ as a
\emph{game-theoretic supermartingale} in the sense of
\citeauthornumber{supermartingales}, and this lemma as establishing that
the MetaGrad Master is the corresponding \emph{defensive forecasting}
strategy.
\subsection{Slaves}
Next we implement step \eqref{eq:slave.plan}, which requires proving an
$O(d \log T)$ regret bound in terms of the surrogate loss for each
MetaGrad slave. In the full case, the surrogate loss is jointly
exp-concave, and in light of the analysis of ONS by
\citeauthornumber{ons} such a result is not surprising. For the diagonal
case, the surrogate loss lacks joint exp-concavity, but we can use
exp-concavity in each direction separately, and verify that the
projections that tie the dimensions together do not cause any trouble.
In Appendix~\ref{appx:surrogateregret} we analyze both cases
simultaneously, and obtain the following bound on the regret:
\begin{lemma}[Surrogate regret bound]\label{lem:surrogateregret}
For $0 < \eta \leq \frac{1}{5 D G}$, let $\surr_t^\eta(\u)$ be the
surrogate losses as defined in~\eqref{eq:surrogate} (either the full
or the diagonal version). Then the regret of
Slave Algorithm~\ref{alg:MetaGradSlave} is bounded by
\begin{equation*}
\sum_{t=1}^T \surr_t^\eta(\w_t^\eta)
\leq \sum_{t=1}^T \surr_t^\eta(\u)
+ \frac{1}{2 D^2} \norm{\u}^2
+ \frac{1}{2} \ln \det \del*{\I + 2 \eta^2 D^2 \sum_{t=1}^T \M_t}
\end{equation*}
for all $\u \in \U$.
\end{lemma}
\subsection{Composition}
To complete the analysis of MetaGrad, we first put the regret bounds for the master and slaves together as in \eqref{eq:overall.plan}. We then discuss how to choose the grid of $\eta$s, and optimize $\eta$ over this grid to get our main result. Proofs are postponed to Appendix~\ref{appx:composition}.
\begin{theorem}[Grid point regret]\label{thm:untuned.regret}
The full and diagonal versions of MetaGrad, with corresponding
definitions from \eqref{eq:B} and \eqref{eq:M.choices}, guarantee that,
for any grid point $\eta$ with prior weight $\pi_1^\eta$,
\[
\Rtrick_T^\u
~\le~
\eta V_T^\u
+
\frac{
\frac{1}{2 D^2} \norm{\u}^2
- \frac{1}{\alpha} \ln \pi_1^\eta
+ \frac{1}{2}
\ln \det \del*{\I + 2 \eta^2 D^2 \sum_{t=1}^T \M_t}
}{
\eta
}
\]
for all $\u \in \U$.
\end{theorem}
\paragraph{Grid}
We now specify the grid points and corresponding prior.
Theorem~\ref{thm:untuned.regret} above implies that any two $\eta$ that
are within a constant factor of each other will guarantee the same bound
up to essentially the same constant factor. We therefore choose an
exponentially spaced grid with a heavy tailed prior (see Appendix~\ref{appx:grid}):
\begin{equation}\label{eqn:grid}
\eta_i
~\df~
\frac{2^{-i}}{5 D G}
\quad
\text{and}
\quad
\pi_1^{\eta_i}
~\df~
\frac{C}{(i+1)(i+2)}
\quad
\text{for $i=0,1,2,\ldots,\ceil{\half \origlog_2 T}$,}
\end{equation}
with normalization $C = 1 + \wfrac{1}{(1 + \ceil{\half \origlog_2 T})}$. At the cost of a worse
constant factor in the bounds, the number of slaves can be reduced
by using a larger spacing factor, or by omitting some of the
smallest learning rates. The net effect of \eqref{eqn:grid} is that, for
any $\eta \in
[\frac{1}{5 D G \sqrt{T}},\frac{2}{5 D G}]$ there is an $\eta_i \in
[\half \eta, \eta]$, for which $- \ln \pi_1^{\eta_i} \leq 2\log(i+2) = O(\ln \ln (1/\eta_i)) = O(\ln \ln (1/\eta))$.
As these costs are independent of $T$, our regret guarantees still hold if the grid~\eqref{eqn:grid} is instantiated with $T$ replaced by any upper bound.
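As a sanity check on \eqref{eqn:grid}, note that $\sum_{i=0}^m \frac{1}{(i+1)(i+2)} = 1 - \frac{1}{m+2}$, so with $C = \frac{m+2}{m+1}$ the prior sums to exactly $1$ for $m = \ceil{\half \origlog_2 T}$. A minimal sketch constructing the grid (our own helper):

```python
import math

def metagrad_grid(T, D, G):
    """Grid from eq. (10): eta_i = 2^{-i}/(5 D G), pi_i = C/((i+1)(i+2)),
    for i = 0, ..., ceil(log2(T)/2), with C = 1 + 1/(1 + ceil(log2(T)/2))."""
    m = math.ceil(0.5 * math.log2(T))
    C = 1 + 1 / (1 + m)
    etas = [2.0**(-i) / (5 * D * G) for i in range(m + 1)]
    prior = [C / ((i + 1) * (i + 2)) for i in range(m + 1)]
    return etas, prior
```

For $T = 10^9$ this yields $16$ slaves, matching the count quoted in the run-time discussion.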
The final step is to apply
Theorem~\ref{thm:untuned.regret} to this grid, and to
properly select the learning rate $\eta_i$ in the bound. This leads to our main result:
\begin{theorem}[MetaGrad Regret Bound]\label{thm:mainbound}
Let $\S_T = \sum_{t=1}^T \M_t$ and $V_{T,i}^\u = \sum_{t=1}^T (u_i -
w_{t,i})^2 \grads_{t,i}^2$. Then the regret of
MetaGrad, with corresponding definitions from \eqref{eq:B} and
\eqref{eq:M.choices} and with grid and prior as in \eqref{eqn:grid}, is
bounded by
\begin{equation*}
\Rtrick_T^\u ~\le~ \sqrt{ 8 V_T^\u \del*{\frac{1}{D^2} \norm{\u}^2 + \Xi_T +
\frac{1}{\alpha}C_T}} + 5
D G \del*{\frac{1}{D^2} \norm{\u}^2 + \Xi_T + \frac{1}{\alpha}C_T}
\end{equation*}
for all $\u \in \U$,
where
\begin{align*}
\Xi_T &\le \min \set*{\ln \det \del*{\I + \frac{D^2 \rk(\S_T)}{V_T^\u} \S_T},
\rk(\S_T) \ln \del*{\frac{D^2}{V_T^\u} \sum_{t=1}^T \|\grad_t\|^2}}
\\
&= O(d \log(D^2 G^2 T))
\end{align*}
for the full version of the algorithm,
\begin{equation*}
\Xi_T = \sum_{i=1}^d \log \del*{\frac{D^2 \sum_{t=1}^T
\grads_{t,i}^2}{V_{T,i}^\u}} = O(d \log(D^2 G^2 T))
\end{equation*}
for the diagonal version, and $C_T = 4 \log\del*{3 + \half \origlog_2 T}
= O(\log \log T)$ in both cases. Moreover, for both versions of the
algorithm, the regret is simultaneously bounded by
\begin{equation*}
\Rtrick_T^\u
\leq
\sqrt{
8 D^2 \del*{\sum_{t=1}^T \|\grad_t\|^2}
\del*{
\frac{1}{D^2} \norm{\u}^2
+ \frac{1}{\alpha} C_T
}}
+
5 D G
\del*{\frac{1}{D^2} \norm{\u}^2
+ \frac{1}{\alpha} C_T}
\end{equation*}
for all $\u \in \U$.
\end{theorem}
These two bounds together show that the full version of MetaGrad
achieves the new adaptive guarantee of Theorem~\ref{thm:roughthm}. The
diagonal version behaves like running the full version separately per
dimension, but with a single shared learning rate.
\section{Discussion and Future Work}
One may consider extending MetaGrad in various directions. In particular
it would be interesting to speed up the method in high dimensions, for instance by sketching \cite{SON16}.
A broader question is to identify and be adaptive to more types of easy
functions that are of practical interest. One may suspect there to be a
price (in regret overhead and in computation) for broader adaptivity,
but based on our results for MetaGrad it does not seem like we are
already approaching the point where this price is no longer worth
paying.
\paragraph{Acknowledgments}
We would like to thank Haipeng Luo and the anonymous reviewers (in particular Reviewer 6)
for valuable comments.
Koolen acknowledges support by the Netherlands Organization for Scientific Research (NWO, Veni grant 639.021.439).
\DeclareRobustCommand{\VAN}[3]{#3}
\bibliographystyle{abbrvnat}
\section{Introduction}
In the recent past, there has been an ever-increasing interest in studying Wireless Networked Control Systems (WNCS) that support time-critical-control applications, which include, among many others, autonomous vehicle systems, automation of manufacturing processes, smart grids, Internet-of-Things (IoT), sensor networks and augmented reality.
A basic building block in WNCS is depicted in Figure~\ref{fig:NCS}. A sensor samples a plant/process of interest and transmits the status updates or packets over a wireless channel (link $1$) to a controller. The controller computes a control input using the received status update and transmits it to an actuator, using another communication channel (link $2$).
A status update that is received at the controller a certain duration after its generation time may become stale, and a control decision based on this stale sample may result in untimely actuation, affecting the performance of a time-critical-control application in a WNCS. Similarly, the same effect could result from a control decision (based on a fresh status update) reaching the actuator after its delay deadline.
In this respect, the traditional goal of maximizing throughput becomes less relevant as freshness of the status updates not only depends on queuing and transmission delays in the network, but also on the frequency of generating updates at the source.
\begin{figure}
\centering
\includegraphics[width = 3.2in]{NCS.eps}
\caption{A networked control system with a remote controller.}
\label{fig:NCS}
\vspace{-.5cm}
\end{figure}
Age of Information (AoI), proposed in~\cite{kaul_2011a}, has emerged as a relevant performance metric in quantifying the freshness of the status updates at a destination. It is defined as the time elapsed since the generation of the latest status update received at the destination. AoI accounts for the frequency of generation of updates by the source, since it linearly increases with time until a status update with latest generation time is received at the destination. Whenever such an update is received, AoI resets to the system delay of the update indicating its age.
\jpcolor{Motivated by the fact that having access to fresher status updates improves the control performance in WNCS, we model the control network by a two-hop FCFS queuing system and formulate the problem of computing the optimal sampling rate that minimizes AoI in this system\footnote{A preliminary version of this work, considering only the single-hop scenario, appeared in~\cite{Champati_DG1_2018}.}. Several research works in the recent past addressed the problem of optimizing the sampling rate in different queuing systems under various settings. However, as we explain in Section~\ref{sec:related}, these works either consider a single-hop system, memoryless arrivals, or some form of ``average age'' function. In contrast, we consider two novel aspects that are relevant to time-critical-control applications. First, we consider a periodic arrival process by assuming that the process of interest is sampled at a constant rate $R$. This is motivated by the fact that sensors in practice are typically configured to generate samples periodically. Second, since optimizing average statistics of AoI may not meet stringent QoS requirements, for instance, in a safety-critical system~\cite{SafetyRequirements}, we consider optimizing the \textit{AoI violation probability}, i.e., the probability that the AoI at the actuator violates a given \textit{age limit} $d$. This metric represents, for example, a reliability constraint that is required at the actuator to ensure that the state of the plant remains within a predetermined safety boundary. Furthermore, in a WNCS, an absolute guarantee (i.e., reliability of 1) may not be possible due to the variability of the wireless channel, and only probabilistic guarantees can be provided. This motivated us to use the distribution of AoI as a metric rather than other frequently used metrics in the literature, e.g., peak AoI and average AoI.
We consider a heterogeneous network, i.e., the server at the first queue and the server at the second queue may have different service-time distributions. The queues operate under the First-Come-First-Serve (FCFS) scheduling discipline. \jpcolor{We note that, in the AoI literature, different scheduling disciplines are considered: for example~\cite{kaul_2012b,Huang_2015,Talak_2018b} considered FCFS,~\cite{kaul_2012a,Bedewy_2017a,Yates2018} considered LCFS, and~\cite{Costa_2016,Champati_GG1_2019} considered packet management schemes such as using a unit capacity queue with packet replacement.
{\color{black}
Our motivation for considering the FCFS discipline in this work is the following. First, the analysis and optimization of the AoI violation probability under FCFS is an open problem. Second, it is an interesting problem not only from an academic (queuing-theoretic) point of view, since FCFS is more intuitive and its analysis therefore more comprehensible, but also from a practical point of view, as most queues in practice operate under FCFS.
Third, key insights, e.g., as the sampling rate $R$ increases, in contrast to \textit{delay}, AoI first decreases and then increases~\cite{kaul_2012b}, that are established under FCFS discipline may be extended to other disciplines as well.
Lastly, the analysis of many important queuing disciplines can be based on, and sometimes directly derived from, that of FCFS, i.e., by introducing a queue reordering stage based on arrival instance, priority, or some fairness parameter before serving the head of the queue. We believe that our analysis can potentially be extended to such queuing disciplines in future works.
}
}
Assuming that the processing time at the controller is negligible, we aim to compute $R$ that minimizes the AoI \textit{violation probability} at the egress of the second queue. As we will see shortly, an exact expression for the AoI violation probability in a two-hop network with periodic arrivals and general service-time distributions is intractable.} Therefore, we resort to working with tractable upper bounds, which facilitate the computation of ``good'' heuristic solutions. In particular, we first compute the upper bounds for the single-hop case, i.e., the D/G/1 queue, due to its relevance in applications where both controller and actuator are collocated. We formulate the Upper Bound Minimization Problems (UBMP) and \jpcolor{use them to compute heuristic rate solutions for AoI violation probability minimization}. We then extend the results to two-hop and $N$-hop tandem queuing systems using max-plus convolution for the service processes.
The main contributions of this work are summarized below:
\begin{itemize}
\item \jpcolor{We characterize the probability that the AoI violates a given {age limit}~$d$ for a single-source single-destination multi-hop network under FCFS, assuming a periodic source where packets are generated at a constant rate $R$.}
\item We formulate the AoI violation probability minimization problem $\mathcal{P}$, and show that it is equivalent to minimizing the violation probability of the departure instant of a \jpcolor{tagged packet (defined in Section V)} over the rate region $[\frac{1}{d},\mu)$, where $\mu$ is the service capacity of the network.
\item Using the above characterization, we first propose a UBMP for the single-hop scenario, i.e., the D/G/1 queue. Noting that the objective function in the UBMP can be intractable, we propose a Chernoff-UBMP, that has a closed-form objective, and an $\alpha$-relaxed UBMP the solution of which has $\alpha > 1$ approximation ratio \jpcolor{(worst-case ratio)} with respect to \jpcolor{the objective function of the} UBMP.
\item We extend the derived results and formulations to the two-hop queuing system and the $N$-hop tandem queuing system, and present example computations of the expressions for the two-hop case with geometric, exponential, and Erlang service-time distributions.
\item We demonstrate the efficacy of the heuristic solutions provided by Chernoff-UBMP and $\alpha$-relaxed UBMP using simulation for different service-time distributions. \jpcol{Finally, we present simulation results comparing the performance of FCFS with queue management policies that use unit buffer and packet replacement.}
\end{itemize}
The rest of the paper is organized as follows. In Section~\ref{sec:related}, we present the related work. In Section~\ref{sec:model}, we present the problem formulation. Analysis of the AoI violation probability is presented in Section~\ref{sec:vioprob}. The UBMP formulations for single-hop, two-hop and N-hop scenarios are presented in Sections~\ref{sec:singleHop} and~\ref{sec:twohop}, respectively. We present the computation of the upper bounds for different service-time distributions in Section~\ref{sec:exampleDis}. Numerical results are presented in Section~\ref{sec:numerical} and we finally conclude in Section~\ref{sec:conclusion}.
\section{Related Work}\label{sec:related}
Several works in the AoI literature have focused on analyzing and providing expressions for average AoI statistics in different queuing systems, e.g., see~\cite{kaul_2012a,Chen2016,Najm_2016,Najm_2017,Soysal2019}. \jpcolor{The authors in~\cite{Costa_2016} studied the M/M/1/1 and M/M/1/2*\footnote{A unit capacity queue that holds the latest update.} systems, and computed the average AoI and the distribution of the peak AoI. In contrast, the authors in~\cite{Yoshiaki2018,Champati_GG1_2019} provided expressions for the distribution of AoI. However, for the case of periodic arrivals, closed-form expressions are provided only for single-hop scenario and for exponential service times in~\cite{Yoshiaki2018}, and for the case of no queue in~\cite{Champati_GG1_2019}. Next, we summarize works that consider optimizing AoI under different system settings. An interested reader may also refer to~\cite{KostaMonograph_2017} and \cite{SunMonograph2020} for a comprehensive survey of recent work in this area.
In~\cite{kaul_2012b}, the authors have addressed the problem of computing the optimal arrival rate to minimize the \textit{time-average age} for M/M/1, M/D/1 and D/M/1 queuing systems. This problem was addressed for M/M/1 with multiple sources in~\cite{yates_2012a}. Several research works that followed considered different design choices including the arrival rate~\cite{Huang_2015}, inter-arrival time distribution for a given arrival rate and/or service-time distribution for a given service rate~\cite{Talak_2018a,Soysal2019,Talak2019a,Talak2019b}, under different scheduling disciplines and optimized average AoI or average peak AoI in a single-source-single-server system. \jpcol{In~\cite{Bedewy2019}, preemptive Last Generated First Served (LGFS) policy was shown to minimize the age process in a multi-server single-hop system with exponential service times.}
An alternative approach to the above works, the generate-at-will source model was studied in~\cite{yates_2015a,Sun_2017,Champati_2020}, where generation of a status update can be completely controlled. While the authors in~\cite{yates_2015a} solved for optimal-waiting times between generation times to minimize the average AoI, the authors in~\cite{Sun_2017} solved the problem for any non-decreasing function of AoI, and the authors in~\cite{Champati_2020} solved the problem of minimum achievable peak AoI in any single-source-single-server system. The authors in~\cite{Talak_2018b} studied average AoI and average peak AoI minimization for multiple source-destination links in a wireless network with interference constraints. They used the method of minimizing upper bounds as a means to show that optimal rate design and optimal link scheduling can be separated and provided performance guarantees for the proposed solutions.
In addition to the above, the following literature considered multi-hop settings. \jpcol{For a line network with a single source and no queues, under Poisson arrivals and exponential service times, expressions were derived in~\cite{Yates2018} for moments, Moment Generating Function (MGF), and stationary distribution of AoI for preemptive last-come-first-served policy. In~\cite{Bedewy_2017a}, optimal queuing policies were investigated for a multi-hop network for any arrival sequence and service-time distributions. It was shown that, among non-preemptive policies, LGFS minimizes age processes, in stochastic ordering sense, at all the nodes.} The authors in~\cite{Talak2017,Farazi2019} studied average AoI and average peak AoI minimization in a multi-hop wireless network with interference constraints and with packet flows between multiple source-destination pairs assuming that transmission time of a packet equals a unit time slot. The authors in~\cite{Buyukates2018} studied average AoI for $L$-hop multicast network with a single source, $n$ nodes in the first hop, $n^2$ nodes in the second hop, and so on $n^L$ nodes in the the last hop, with each node having a shifted exponential service time.}
Optimizing AoI was also extensively studied for the systems with energy-harvesting source, e.g., see~\cite{yates_2015a,Bacinoglu_2015a,Bacinoglu_2019}. In the context of a cloud gaming system the authors in~\cite{Yates2017} used the D/G/1 system model to study the effect of freshness on video frame rendering to the client. Specifically, they have analyzed the average age by considering the aspect of missing frames. In contrast to all the above works, with motivations from the sensor-controller-actuator system in WNCS we study the problem of AoI violation probability minimization in a two-hop queuing system with periodic arrivals.
\section{System Model and Problem Statement}\label{sec:model}
\label{model}
Motivated by the sensor-controller-actuator system communicating over wireless channels, we study a two-hop queuing system, shown in Figure~\ref{fig:model}, under FCFS scheduling. The source generates packets (status updates) at a constant rate $R$.
Thus, $R$ models the sampling rate of a process under observation. Let $T = \frac{1}{R}$ denote the inter-arrival time between any two packets. We index the nodes by $k \in \{1,2\}$, and the packets by $n \in \{0,1,2\ldots\}$.
Let $A_{{k}}(n,R)$ denote the arrival instant of packet $n$ and $D_{{k}}(n,R)$ the corresponding departure instant at node $k$. For notational simplicity, we use $A(n,R) = A_1(n,R)$ and $D(n,R) = D_{{2}}(n,R)$ to denote the arrivals and departures of the system, respectively. Also, we have $A_{{2}}(n,R) = D_{{1}}(n,R)$.
The arrival time of packet $n$ to the system is given by $A(n,R) = \frac{n}{R}$.
The service time for packet $n$ at node $k$ is given by a random variable $X_{k}^n$. For $k \in \{1,2\}$, we assume $X_{k}^n$ are i.i.d., for all $n$, with mean service rate $\mu_{k} = \frac{1}{\E[X_{k}^1]} > 0$.
Also, we assume that $X_{1}^n$ and $X_{2}^n$ are independent, for all $n$, but may have non-identical distributions, i.e., the servers could be heterogeneous. We define $\mu \triangleq \min(\mu_{1},\mu_{2})$. Later, in Section~\ref{sec:multihop}, we show how the results can be extended to $N$-hop tandem queuing network.
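Under FCFS, the departure times in this tandem satisfy the standard Lindley-type recursions $D_{1}(n) = \max(A(n), D_{1}(n-1)) + X_{1}^n$ and $D(n) = \max(D_{1}(n), D(n-1)) + X_{2}^n$, with $A_{2}(n) = D_{1}(n)$, which makes simulation straightforward. A hedged sketch (our own helper; the exponential servers in the usage line are only an example):

```python
import random

def tandem_departures(R, n_pkts, sample_X1, sample_X2):
    """FCFS two-hop tandem with periodic arrivals A(n) = n/R.
    D1(n) = max(A(n), D1(n-1)) + X1^n ;  D2(n) = max(D1(n), D2(n-1)) + X2^n."""
    D1 = D2 = 0.0
    arrivals, departures = [], []
    for n in range(n_pkts):
        a = n / R
        D1 = max(a, D1) + sample_X1()    # departure from node 1 = arrival at node 2
        D2 = max(D1, D2) + sample_X2()
        arrivals.append(a)
        departures.append(D2)
    return arrivals, departures

# e.g. exponential servers with rates mu1 = 1 and mu2 = 2, sampling rate R = 0.5:
A, D = tandem_departures(0.5, 10_000, lambda: random.expovariate(1.0),
                         lambda: random.expovariate(2.0))
```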
\begin{figure}
\centering
\includegraphics[width = 3.2in]{model.eps}
\caption{Model of the two-hop network.}
\label{fig:model}
\end{figure}
At the destination, we are interested in maintaining timely state information of the process.
We are thus interested in the AoI metric, denoted by $\Delta(t,R)$, which is defined as:
\begin{align}\label{eq:AoI-Definition}
\Delta(t,R) \triangleq t - \max_n\{A(n,R): D(n,R) \leq t\}.
\end{align}
For a given \textit{age limit} requirement $d > 0$, in the following we study the distribution of AoI by characterizing its violation probability, i.e., $\P(\Delta(t,R) > d)$, both in the transient and the steady states of the system.
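Definition \eqref{eq:AoI-Definition} yields a direct sample-path estimate of the violation probability: under FCFS, departures are in order, so $\Delta(t) = t - A(n)$ for $t \in [D(n), D(n+1))$. A minimal sketch (our own helper, assuming the arrival and departure sequences are given, e.g.\ by a simulation):

```python
def aoi_violation_fraction(A, D, d):
    """Time-average fraction of t with Delta(t) > d.  Assumes in-order (FCFS)
    departures: for t in [D[n], D[n+1]) the freshest delivered update is
    packet n, so Delta(t) = t - A[n]."""
    above = 0.0
    for n in range(len(D) - 1):
        # Delta(t) > d on the sub-interval (max(D[n], A[n] + d), D[n+1])
        above += max(0.0, D[n + 1] - max(D[n], A[n] + d))
    return above / (D[-1] - D[0])
```

For a stationary ergodic system this time average estimates $\lim_{t\to\infty}\P(\Delta(t,R) > d)$.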
Given the age limit $d$, we are interested in solving the following problem $\mathcal{P}$:
\begin{equation*}
\begin{aligned}
& \underset{R}{\text{min}}\: \lim_{t\rightarrow \infty} \P(\Delta(t,R) > d).
\end{aligned}
\end{equation*}
Let $R^*(d)$ denote an optimal rate solution for $\mathcal{P}$. \jpcolor{In the sequel, we refer to $\lim_{t\rightarrow \infty} \P(\Delta(t,R) > d)$ as \textit{AoI violation probability}.}
Henceforth, for notational simplicity, we drop $R$ from the notation when it is obvious from the context. For $k \in \{1,2\}$, the MGF of $X^n_{k}$ is given by $M_{k}(s) = \mathbb{E}[e^{sX^n_{k}}]$.
We now state the Chernoff bound, which will be used extensively to formulate the upper bound minimization problems in the sequel. \jpcolor{Assuming that the moment generating function of a random variable $Y$ exists, the \textbf{Chernoff bound} for its distribution is given by
\begin{align*}
\P\{Y > y\} \leq \min_{s > 0}\, e^{-sy} \mathbb{E}[e^{sY}].
\end{align*}}
Note that the upper bounds derived using the Chernoff bound involve minimization over the parameter $s$. We shall see that, for the two-hop network, these bounds attain finite values only when there exists $s > 0$ such that $\max(M_{1}(s),M_{2}(s)) < e^{s/R}$. To this end, we formulate the minimization problems over the set $\mathcal{S} \subseteq \mathbb{R}^+$ which characterizes the $s$ values for which $\max(M_{1}(s),M_{2}(s)) < e^{s/R}$, i.e.,
\begin{align}\label{eq:calS}
\mathcal{S} \triangleq \{s > 0: \max(M_{1}(s),M_{2}(s)) < e^{s/R}\}.
\end{align}
We assume that $\mathcal{S}$ is non-empty. In the following lemma we show that this assumption is in fact a sufficient condition for the stability of the system.
\begin{lemma}\label{lem:queueStability}
If there exists $s > 0$ such that
\begin{align*}
\max(M_{1}(s),M_{2}(s)) < e^{s/R},
\end{align*}
then the queues are stable.
\end{lemma}
\begin{proof}
Recall that the queues are stable if $\min(\mu_{1},\mu_{2}) > R $. Consider the case $M_{1}(s) < e^{s/R}$, which implies
\begin{align*}
\mathbb{E}[e^{sX^n_{1}}] < e^{s/R}
\Rightarrow e^{s\mathbb{E}[X^1_{1}]} < e^{s/R}
\Rightarrow \mu_{1} > R,
\end{align*}
for any $s > 0$. In the second step above we have used Jensen's inequality. Similarly, if $M_{2}(s) < e^{s/R}$, then $\mu_{2} > R$. Therefore, \jpcolor{if there exists $s > 0$ such that} $\max(M_{1}(s),M_{2}(s)) < e^{s/R}$, then $\min(\mu_{1},\mu_{2}) > R $, and the lemma follows.
\end{proof}
We define
\begin{align}\label{eq:beta}
\beta_{k}(s) \triangleq \frac{M_{k}(s)}{e^{s/R}},\, k \in \{1,2\}.
\end{align}
By definition, for all $s \in \mathcal{S}$, $\beta_{k}(s) < 1$. \jpcolor{The list of symbols used in the paper is summarized in Table~\ref{tabel1}. }
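For concreteness, consider the illustrative special case of exponentially distributed service times with rate $\mu_k$ at node $k$ (this distributional assumption is used only in examples, not in the general development). Then $M_{k}(s) = \frac{\mu_k}{\mu_k - s}$ for $0 < s < \mu_k$, and
\begin{align*}
\beta_{k}(s) = \frac{\mu_k}{(\mu_k - s)\, e^{s/R}}.
\end{align*}
Since $\frac{d}{ds}\left[M_{k}(s)e^{-s/R}\right]\big|_{s=0} = \frac{1}{\mu_k} - \frac{1}{R} < 0$ whenever $R < \mu_k$, we have $\beta_{k}(s) < 1$ for all sufficiently small $s > 0$; hence, in this case, $\mathcal{S}$ is non-empty for any $R < \mu$.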
\begin{table}[ht]
\renewcommand{\arraystretch}{1.2}
\caption{\jpcolor{List of Symbols}}
\centering
\begin{tabular}{|l|l|}
\hline
$k$ & Node/link index \\
\hline
$N$ & Number of nodes \\
\hline
$n$ & Packet index \\
\hline
$R$ & Sampling rate \\
\hline
$T$ & Inter-arrival time ($\frac{1}{R}$) \\
\hline
$A_k(n,R)$ & Arrival time of packet $n$ at node $k$\\
\hline
$D_k(n,R)$ & Departure time of packet $n$ at node $k$\\
\hline
$A(n,R)$ & Arrival time of packet $n$ in the system ($\frac{n}{R}$)\\
\hline
$D(n,R)$ & Departure time of packet $n$ from the system\\
\hline
$X^n_k$ & Service time of packet $n$ at node $k$\\
\hline
$\mu_k$ & Service rate at node $k$\\
\hline
$\Delta(t,R)$ & Age of information at time $t$\\
\hline
$d$ & Age limit\\
\hline
$M_k(\cdot)$ & MGF of service time at node $k$\\
\hline
$\hat{n}_R$ & Index of the first arrival on or after time $t-d$\\
\hline
\end{tabular}
\label{tabel1}
\end{table}
\section{AoI Violation Probability Analysis}\label{sec:vioprob}
\jpcolor{In this section, we study properties of the distribution of AoI -- the derived results are valid for any number of nodes in tandem between the source and the destination, provided that packets are injected into the network by the source at a constant rate $R$ and the network uses FCFS.}
We start by investigating structural characteristics of the stochastic behaviour of AoI. Toward this end, we use the max-plus representation of Reich's equation to model the evolution of the queues.
For any realization of the service times at node $k$, the relation between $D_{{k}}(n,R)$, $A_{{k}}(n,R)$ and $\{X_{{k}}^n\}$, is given by~\cite{JorgBook2017}:
\begin{align}\label{eq:departureTime}
D_{{k}}(n,R) = \max_{0 \leq v \leq n} \{A_{{k}}(n-v,R) + \sum_{i=0}^{v}X_{{k}}^{n-i}\}.
\end{align}
\jpcolor{We note that equation~\eqref{eq:departureTime} is a direct consequence of recursively applying a fundamental relation in queuing systems: $D_{{k}}(n,R) = \max\{D_{{k}}(n-1,R), A_k(n,R)\} + X^n_k$, which states that packet $n$ begins service at the later of its own arrival time and the departure time of packet $n-1$, and departs one service time $X^n_k$ later.}
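As a numerical sanity check (outside the formal development), the recursion and the max-plus form in~\eqref{eq:departureTime} can be verified to coincide path-by-path. The sketch below assumes deterministic arrivals and exponential service times, both chosen only for illustration; the function names are ours.

```python
import math
import random

def departures_recursive(arrivals, services):
    """D(n) = max(D(n-1), A(n)) + X^n: packet n begins service at the later of
    its own arrival and the previous departure, and leaves one service later."""
    out, prev = [], -math.inf
    for a, x in zip(arrivals, services):
        prev = max(prev, a) + x
        out.append(prev)
    return out

def departures_maxplus(arrivals, services):
    """Reich's form: D(n) = max_{0<=v<=n} { A(n-v) + sum_{i=0}^{v} X^{n-i} }."""
    out = []
    for n in range(len(arrivals)):
        best, acc = -math.inf, 0.0
        for v in range(n + 1):
            acc += services[n - v]                # running sum_{i=0}^{v} X^{n-i}
            best = max(best, arrivals[n - v] + acc)
        out.append(best)
    return out

random.seed(0)
R, mu = 1.0, 2.0
A = [n / R for n in range(50)]                    # deterministic arrivals A(n) = n/R
X = [random.expovariate(mu) for _ in range(50)]   # exponential service (assumption)
D_rec = departures_recursive(A, X)
D_max = departures_maxplus(A, X)
assert all(abs(a - b) < 1e-9 for a, b in zip(D_rec, D_max))
```

The recursive form runs in linear time, while the max-plus form is the one amenable to the union-bound analysis that follows.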
Considering the definition in \eqref{eq:AoI-Definition}, for $\Delta(t,R)$ not to exceed the age limit $d$, the latest packet to depart by time $t$ must have arrived no earlier than $t-d$. Therefore, to study the distribution of $\Delta(t,R)$, we tag the packet arriving on or immediately after $t-d$ and use it to characterize this probability.
Given rate $R$, let ${\hat{n}_\text{R}}$ denote the \jpcolor{index of the first arrival on or after time $t-d$}, given by
\begin{align}\label{eq:nR}
{\hat{n}_\text{R}} \triangleq \lceil R(t-d) \rceil.
\end{align}
The tagged packet\footnote{${\hat{n}_\text{R}}$ is a function of $t-d$ as well. We omit $t-d$ from the notation here for ease of exposition. } ${\hat{n}_\text{R}}$ plays a key role in characterizing the violation probability as we will show next.
In the following lemma we present a key insight regarding the transient characterization of the AoI violation probability.
\begin{lemma}
\label{lem1}
Given the input arrival rate $R$, age limit $d$, and $t < \infty$, if there exists $n$ such that $t-d \leq \frac{n}{R} < t$, then $\P\{\Delta(t,R) > d\} = \P\{D({\hat{n}_\text{R}}) > t\}$, otherwise, $\P\{\Delta(t,R) > d\} = 1$.
\end{lemma}
\begin{proof}
Let $n^*_\text{R}$ denote the index of the latest packet to depart by time $t$, i.e., $n^*_\text{R} = \argmax_{n} \{D(n,R) \leq t\}$. Thus,
$\Delta(t,R) = t - A(n^*_\text{R}).$
\textbf{Case 1:}
If an $n$ such that $t-d \leq \frac{n}{R} < t$ does not exist, i.e., there is no arrival during the time interval $[t-d,t)$, then the arrival time of $n^*_\text{R}$ must be strictly less than $t-d$, i.e., $A(n^*_\text{R}) < t-d$. Therefore,
\begin{align*}
\P(\Delta(t,R) > d) = \P(t - A(n^*_\text{R}) > d) = 1.
\end{align*}
\begin{figure}
\centering
\includegraphics[width = 2.8in]{lemma1.eps}
\caption{Time-line of events for Case 2 in Lemma \ref{lem1} proof. }
\label{fig:Case2}
\end{figure}
\textbf{Case 2:} If there exists $n$ such that $t-d \leq \frac{n}{R} < t$, then $t-d \leq \frac{{\hat{n}_\text{R}}}{R} < t$, since ${\hat{n}_\text{R}}$ is the first arrival on or after time $t-d$, see Figure~\ref{fig:Case2}. In this case, we show that the event $\{\Delta(t,R) \leq d\}$ is equivalent to the event $\{D({\hat{n}_\text{R}}) \leq t\}$. Suppose that the event $\{\Delta(t,R) \leq d\}$ occurred; then $A(n^*_\text{R}) \geq t - d$. By the definition of ${\hat{n}_\text{R}}$, we must have $A({\hat{n}_\text{R}}) \leq A(n^*_\text{R})$, which implies $D({\hat{n}_\text{R}}) \leq D(n^*_\text{R}) \leq t$, due to the FCFS assumption.
Therefore,
\begin{equation}\label{eq:Lemma1-case2}
\{\Delta(t,R) \leq d\} \subseteq \{D({\hat{n}_\text{R}}) \leq t\}.
\end{equation}
To prove equivalence of the two events, we show that the relation above also holds the other way around.
Suppose that the event $\{D({\hat{n}_\text{R}}) \leq t\}$ occurred. Again, it should be true that $A(n^*_\text{R}) \geq A({\hat{n}_\text{R}})$. Otherwise, $D(n^*_\text{R}) < D({\hat{n}_\text{R}}) \leq t$ which contradicts the definition of $n^*_\text{R}$ that it is the latest departure before $t$. Therefore,
\begin{align*}
\Delta(t,R) = t - A(n^*_\text{R}) \leq t - A({\hat{n}_\text{R}}) \leq t - (t-d) = d.
\end{align*}
This implies that $\{D({\hat{n}_\text{R}}) \leq t\} \subseteq \{\Delta(t,R) \leq d\}$. Therefore, the equivalence holds and the result is proven.
\end{proof}
\begin{comment}
\begin{theorem}\label{thm:transient}
For a D/G/1 queue in the transient state, given $d$ and $t < \infty$, $R = \frac{1}{t-d}$ minimizes $\P\{\Delta(t,R) > d\}$.
\end{theorem}
\begin{proof}
If $R < \frac{1}{t}$, then there does not exist an $n$ such that $t-d \leq \frac{n}{R} < t$. Therefore, from Lemma~\ref{lem1}
\end{proof}
\end{comment}
\jpcolor{The intuition behind the result in Lemma~\ref{lem1} is that the AoI exceeds $d$ at time $t$ if either no packet generated in the time interval $[t-d,t]$ has reached the destination by time $t$, or no packet is generated in this interval at all. Note that \textbf{Case 1} in the above proof essentially represents under-sampling of the process under observation, i.e., at the current time $t$ the sampling rate $R$ is simply too low, such that no packet is generated in the time interval $[t-d,t]$.}
We next present the steady-state results for the two-hop system based on the result obtained in Lemma \ref{lem1}.
\begin{theorem}\label{thm:steadystate}
Given age limit $d$, the steady state distribution of AoI is characterized as follows:
\begin{enumerate}
\item If $R \geq \frac{1}{d}$, then
\begin{align}\label{eq:steadystateUB}
\!\!\!\lim_{t \rightarrow \infty} \P \{\Delta(t,R) > d\} = \lim_{t \rightarrow \infty} \P\{D({\hat{n}_\text{R}}) > t\}.
\end{align}
\item Else if $R < \frac{1}{d}$, then
\begin{align*}
& \limsup_{t \rightarrow \infty} \P\{\Delta(t,R) > d\} = 1, \\
& \liminf_{t \rightarrow \infty} \P\{\Delta(t,R) > d\} = \lim_{t \rightarrow \infty} \P\{D({\hat{n}_\text{R}}) > t\}.
\end{align*}
\end{enumerate}
\end{theorem}
\begin{proof}
For the two cases above consider the following:
\textbf{Case 1 ($\mathbf{R \geq \frac{1}{d}}$):} Since the samples are generated at a constant rate, for $R \geq \frac{1}{d}$ we claim that there exists an $n$ such that $t-d \leq \frac{n}{R} < t$, for all $t$. We first prove this claim for $R > \frac{1}{d}$. We have
\begin{align*}
A({\hat{n}_\text{R}}) = \frac{\lceil R(t-d) \rceil}{R} \leq \frac{R(t-d) + 1}{R} = t - d + \frac{1}{R} < t\, ,
\end{align*}
where the last inequality holds because $\frac{1}{R} < d$ when $R > \frac{1}{d}$. Furthermore, since $t-d \leq A({\hat{n}_\text{R}})$ for any $t$ by definition, the claim holds at least for ${\hat{n}_\text{R}}$, for $R > \frac{1}{d}$. To prove the claim for $R = \frac{1}{d}$, we consider
\begin{align*}
t-d \leq\! \frac{n}{R} \! <\! t \, \Leftrightarrow \, \frac{n}{R}\! < \! t \leq d + \frac{n}{R} \, \Leftrightarrow \, n \! < Rt \leq \! n + 1.
\end{align*}
Note that for any $R$ and $t$ there always exists an $n$ such that the last inequality above holds. Therefore, the claim is true and Case~1 follows from Lemma~\ref{lem1} by letting $t$ go to infinity.
\textbf{Case 2 ($\mathbf{R < \frac{1}{d}}$):} In this case, the existence of $n$ such that $t-d \leq \frac{n}{R} < t$ depends on $t$. \jpcolor{To see this, for a given $n$ consider the two time intervals $(\frac{n}{R},\frac{n}{R}+d]$ and $(\frac{n}{R}+d,\frac{n+1}{R})$. Note that the latter time interval is non-empty because $d < \frac{1}{R}$. Now, for time instants $t \in (\frac{n}{R},\frac{n}{R}+d]$ we have $t-d \leq \frac{n}{R} < t$, and therefore for such $t$ using Lemma~\ref{lem1} we have $\P\{\Delta(t,R) > d\} = \P\{D({\hat{n}_\text{R}}) > t\}$. On the other hand, for time instants $t \in (\frac{n}{R}+d,\frac{n+1}{R})$, there is no $n$ value such that $t-d \leq \frac{n}{R} < t$ is true, and therefore from Lemma~\ref{lem1} we have $\P\{\Delta(t,R) > d\} = 1$. This implies that as $t$ goes to infinity the violation probability either equals $\P\{D({\hat{n}_\text{R}}) > t\}$ or $1$ depending on the value of $t$.} Thus, we obtain the limit supremum and the limit infimum.
\end{proof}
{\color{black} Intuitively, given $R$, the support of the steady-state AoI distribution should be $[\frac{1}{R},\infty)$, because AoI cannot be less than $\frac{1}{R}$ when the samples are generated at rate $R$. Theorem~\ref{thm:steadystate} not only confirms this intuition, but also characterizes the limit infimum and limit supremum in the region $d < \frac{1}{R}$, where the limit of the AoI violation probability does not exist.}
Therefore, to ensure the existence of the AoI violation probability
we consider the feasible rate region $[\frac{1}{d},\mu)$, where
$\mu = \min(\mu_1,\mu_2)$,
and $R < \mu$ ensures queue stability.
In light of this, and using~\eqref{eq:steadystateUB} from Theorem~\ref{thm:steadystate}, we formulate an equivalent problem
$\mathcal{\tilde P}$ as follows:
\begin{equation}\label{equivalentProb}
\begin{aligned}
& \underset{\frac{1}{d} \leq R < \mu}
{\text{min}}\: \lim_{t\rightarrow \infty} \P(D({\hat{n}_\text{R}}) > t).
\end{aligned}
\end{equation}
\textbf{\textit{Remark 1:}} \jpcolor{The results in Lemma~\ref{lem1} and Theorem~\ref{thm:steadystate} are valid for an arbitrary single-source, single-destination network with constant arrival rate $R$ and the FCFS queuing discipline, i.e.,
packets are received at the destination in the same order as they are transmitted by the source.} For an arbitrary network topology, one can formulate problem $\tilde{\mathcal{P}}$ given in~\eqref{equivalentProb} with the following constraints on $R$: 1) $R \geq \frac{1}{d}$, and 2) $R$ belongs to the rate region in which the network is stable.
Next, we present our solution approach for solving $\tilde{\mathcal{P}}$ for a single-hop case and then show how the approach can be extended for the two-hop system in Section~\ref{sec:twohop}.
\section{Single-Hop Scenario}\label{sec:singleHop}
In this section we solve $\mathcal {\tilde P}$ by assuming that $X^n_{2} = 0$ for all $n$. This implies that $D(n) = D_{{1}}(n)$, $\mu_{2} = \infty$, and the system is equivalent to a D/GI/1 system. Our motivation for presenting the single-hop case stems from its importance in solving the two-hop case, and from its relevance to practical scenarios, where only estimation of the process is required, or the controller and the actuator are collocated.
In order to find a solution for $\mathcal {\tilde P}$, we must first evaluate the probability $\P\{D({\hat{n}_\text{R}}) > t\}$, where $D(n)$ is given by \eqref{eq:departureTime}.
Note that $D(n)$ is random, since the service process $\{X_{1}^n,n \geq 0\}$ is random, and is given in terms of the maximum of $n+1$ random variables.
Hence, obtaining an exact expression is tedious.
Therefore, we opt for a more tractable approach by using probabilistic inequalities to obtain bounds on the distribution of $D({\hat{n}_\text{R}})$.
Consequently, we propose the Upper Bound Minimization Problem (UBMP) and its more computationally tractable counterparts, {$\alpha$-UBMP}~and Chernoff-UBMP, to obtain near-optimal heuristic solutions for $\mathcal{\tilde P}$.
\subsection{A Bound for the Distribution of $D$}
As mentioned earlier, the evaluation of the distribution function of $D(n)$ requires the computation of the distribution of the maximum of random variables. Fortunately, several approaches have been used in the literature to estimate this probability. One such approach approximates the probability of the maximum by the maximum probability, i.e., $\P\{\max_i Y_i > y\} \approx \max_i \, \P\{ Y_i>y \}$. However, this approximation is not always accurate and in some cases may result in large deviations from the actual distribution. Hence, it cannot be used when the reliability of the solution must be well defined, as is the case here. An alternative approach is to use extreme value theory. However, the resulting extreme value distributions are not always tractable. A more promising approach is to use Boole's inequality, commonly known as the ``union bound,'' where the probability of a union of events is bounded by the sum of their probabilities. The bound obtained in our case is not only tractable, but also provides good heuristic solutions for $\mathcal {\tilde P}$.
In the following lemma, we present this upper bound for the distribution function $\lim_{t \rightarrow \infty} \P\{D({\hat{n}_\text{R}}) > t\}$.
\begin{lemma}\label{lem:singleHopUB}
Given $d$, we have
\vspace{-.3cm}
\begin{align*}
\lim_{t\rightarrow \infty} \P(D({\hat{n}_\text{R}}) > t) \leq \sum_{v=0}^{\infty} \Phi(v,R),
\vspace{-.5cm}
\end{align*}
where
\vspace{-.3cm}
\begin{align}\label{eq:Phi:singlehop}
\Phi(v,R) \triangleq \P\left\{\sum_{i=0}^{v} X_{1}^i > d + \frac{v-1}{R}\right\}.
\end{align}
\end{lemma}
\begin{proof}
Using~\eqref{eq:departureTime}, we have
\allowdisplaybreaks {\begin{align*}
&\P\{D({\hat{n}_\text{R}})\hspace{-.1cm} > \hspace{-.1cm}t\}\! = \!\P \hspace{-.1cm}\left\{\!\max_{0 \leq v \leq {\hat{n}_\text{R}}} \hspace{-.1cm} \left (\!\!A({\hat{n}_\text{R}}-v)\hspace{-.1cm} + \hspace{-.1cm}\sum_{i=0}^{v}X_{1}^{{\hat{n}_\text{R}}-i}\right) \hspace{-.1cm} > \hspace{-.1cm} t \right\} \\
&\overset{(a)}{\leq} \sum_{v=0}^{{\hat{n}_\text{R}}} \P\left\{\sum_{i=0}^{v} X_{1}^{{\hat{n}_\text{R}}-i} > t - \frac{{\hat{n}_\text{R}}-v}{R}\right\}\\
& \overset{(b)}{\leq} \sum_{v=0}^{{\hat{n}_\text{R}}} \P\left\{\sum_{i=0}^{v} X_{1}^{{\hat{n}_\text{R}}-i} > t - \frac{R(t-d) + 1-v}{R}\right\} \\
&= \sum_{v=0}^{{\hat{n}_\text{R}}} \,\,\underbrace{ \P\left\{\sum_{i=0}^{v} X_{1}^i > d + \frac{v-1}{R}\right\} }_{ \triangleq \, \Phi(v,R)}.
\end{align*}}
\noindent In step (a) we have applied the union bound, and used ${\hat{n}_\text{R}} = \lceil R(t - d) \rceil \leq R(t-d) + 1$ in step (b). The result follows by noting that ${\hat{n}_\text{R}}$ goes to infinity as $t$ goes to infinity.
\end{proof}
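When the service times are exponential, the sum in $\Phi(v,R)$ is Erlang distributed, so each term of the bound has a closed form. The sketch below (assuming exponential service with rate $\mu_1$, an assumption made only for this example) evaluates a truncation of the bound; the function names are ours.

```python
import math

def erlang_tail(k, mu, x):
    """P{Erlang(k, mu) > x} = e^{-mu x} * sum_{j=0}^{k-1} (mu x)^j / j!, x >= 0."""
    if x <= 0:
        return 1.0
    term, total = 1.0, 0.0
    for j in range(k):
        total += term
        term *= mu * x / (j + 1)
    return math.exp(-mu * x) * total

def phi(v, R, d, mu1):
    """Phi(v, R) = P{ sum_{i=0}^{v} X_1^i > d + (v-1)/R } for exponential
    service with rate mu1 (the sum of v+1 i.i.d. terms is Erlang(v+1, mu1))."""
    return erlang_tail(v + 1, mu1, d + (v - 1) / R)

R, d, mu1 = 1.2, 2.0, 2.0
partial_bound = sum(phi(v, R, d, mu1) for v in range(50))  # first 50 terms
assert abs(phi(0, R, d, mu1) - math.exp(-mu1 * (d - 1 / R))) < 1e-12
```

Since the queue is stable, the terms decay geometrically in $v$, so a moderate truncation already captures most of the sum.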
\subsection{UBMP Formulations}\label{subsec:singleHop:alphaUB}
Using~\eqref{equivalentProb}, Lemma~\ref{lem:singleHopUB}, and $\mu_{2} = \infty$, we obtain the following UBMP:
\begin{equation}\label{UBMP:singleHop}
\begin{aligned}
& \underset{\frac{1}{d} \leq R < \mu_1}{\text{min}} \quad \sum_{v=0}^{\infty} \Phi(v,R).
\end{aligned}
\end{equation}
It is worth noting that the function $\Phi(0,R)$ is non-increasing in $R$, while the functions $\{\Phi(v,R): v \geq 1\}$ are non-decreasing in $R$, since the threshold $d + \frac{v-1}{R}$ is non-increasing in $R$ for $v \geq 1$.
A shortcoming of UBMP is that its objective function is, in general, intractable: it involves an infinite sum in which each term requires the distribution of a sum of service times.
To this end, we formulate Chernoff-UBMP, obtained by applying the Chernoff bound to $\Phi(v,R)$ in Lemma~\ref{lem:singleHopUB}.
\subsubsection{Chernoff-UBMP}
Since $X^n_{1}$ are i.i.d., the Chernoff bound for $\Phi(v,R)$, defined in~\eqref{eq:Phi:singlehop}, is given by
{\allowdisplaybreaks \begin{align}\label{eq:ChernoffPhi}
\Phi(v,R) &\leq \min_{s \in \mathcal{S}}\; e^{-s(d+\frac{v-1}{R})} \mathbb{E}[e^{s \sum_{i=0}^{v} X_{1}^i}] \nonumber\\
&= \min_{s \in \mathcal{S}}\; e^{-s(d+\frac{v-1}{R})} M^{v+1}_{1}(s) \nonumber \\
&= \min_{s \in \mathcal{S}}\; e^{-s(d-\frac{1}{R})} M_{1}(s) \beta^v_1(s),
\end{align}
where \jpcolor{$\beta_1(s) = \frac{M_1(s)}{e^{s/R}}$ (defined in~\eqref{eq:beta}),} \jpcolor{and $\mathcal{S}$ is defined in~\eqref{eq:calS}}. Recall that $\beta_1(s) < 1$ for all $s \in \mathcal{S}$. Therefore, using~\eqref{eq:ChernoffPhi} in the result of Lemma~\ref{lem:singleHopUB}, we obtain
\begin{align}\label{eq:ChernoffSinglehop}
&\sum_{v=0}^{\infty} \Phi(v,R) \leq \sum_{v=0}^{\infty} \min_{s \in \mathcal{S}}\; e^{-s(d-\frac{1}{R})} M_{1}(s)\beta^v_1(s) \nonumber\\
&\leq \min_{s \in \mathcal{S}}\; e^{-s(d-\frac{1}{R})} M_{1}(s) \sum_{v=0}^{\infty} \beta^v_1(s)\nonumber \\
& = \min_{s\in \mathcal{S}}\; \underbrace{e^{-s(d-\frac{1}{R})}\cdot \frac{M_{1}(s)}{(1 - \beta_1(s))}}_{\triangleq \, \Psi_1(s,d,R)}.
\end{align}}
Even though the Chernoff bound further relaxes the upper bound in Lemma~\ref{lem:singleHopUB}, the resulting objective function has a closed-form expression and can be computed numerically.
The following theorem immediately follows from~\eqref{eq:ChernoffSinglehop} and Lemma~\ref{lem:singleHopUB}.
\begin{theorem}\label{thm:singlehop}
Given $d$, an upper bound for the violation probability for a single hop is given by
\begin{align*}
\lim_{t \rightarrow \infty} \P\{D({\hat{n}_\text{R}}) > t\} \leq \min_{s\in \mathcal{S}} \; \Psi_1(s,d,R),
\end{align*}
where $\Psi_1(s,d,R)$ is defined in~\eqref{eq:ChernoffSinglehop}.
\end{theorem}
With a slight abuse of terminology, we refer to the bound given in Theorem~\ref{thm:singlehop} as the \textit{Chernoff bound}.
In the following, we formulate the Chernoff-UBMP for the single-hop scenario:
\begin{equation}\label{Chernoff-UBMP:singleHop}
\underset{ \frac{1}{d} \leq R < \mu_{1} }{\text{min}} \, \underset{s\in \mathcal{S}}{\text{min}} \; \Psi_1(s,d,R).
\end{equation}
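For instance, assuming exponential service with rate $\mu_1$ (so that $M_1(s) = \frac{\mu_1}{\mu_1 - s}$ for $0 < s < \mu_1$; this assumption is made only for this example), the objective takes the explicit form
\begin{align*}
\Psi_1(s,d,R) = \frac{\mu_1\, e^{-s\left(d - \frac{2}{R}\right)}}{(\mu_1 - s)\, e^{s/R} - \mu_1},
\end{align*}
so the inner minimization over $s$ in~\eqref{Chernoff-UBMP:singleHop} becomes a one-dimensional problem with a closed-form objective.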
\begin{lemma}\label{lem:singlehop:convexR}
The function $\Psi_1(s,d,R)$ is strictly convex with respect to $\frac{1}{R}$.
\end{lemma}
\begin{proof}
Recall that $T = \frac{1}{R}$. We prove that $\frac{\partial^2 \Psi_1(s,d,T)}{\partial T^2} > 0$ for all $s\in \mathcal{S}$. Let us define $f(T)$ as follows:
\begin{align*}
f(T) = \frac{e^{2sT}}{(e^{sT}-M_{1}(s))}.
\end{align*}
Then, we rewrite $\Psi_1(s,d,T)$ as follows:
\begin{align*}
\Psi_1(s,d,T) = e^{-sd}[M_{1}(s)] f(T).
\end{align*}
From the above equation we infer that it is sufficient to prove $\frac{\partial^2 f(T)}{\partial T^2} > 0$. Taking first derivative $f'(T) = \frac{\partial f(T)}{\partial T}$, we obtain
\begin{align}\label{eq:firstder}
f'(T) &= \frac{2se^{2sT}}{(e^{sT}-M_{1}(s))} - \frac{e^{2sT}2se^{sT}}{(e^{sT}-M_{1}(s))^2} \nonumber \\
&= s f(T)\left[1-\frac{M_{1}(s)}{e^{sT}-M_{1}(s)}\right].
\end{align}
Taking the second derivative $f''(T) = \frac{\partial^2 f(T)}{\partial T^2}$, we obtain
\begin{align*}
&f''(T)\! =\! sf'(T) \left[1 \! -\frac{M_{1}(s)}{e^{sT}-M_{1}(s)}\right]\!\! +\! \frac{s^2 f(T)M_{1}(s)e^{sT}}{(e^{sT}-M_{1}(s))^2} \\
&= \! s^2\! f(T) \left[1\! -\frac{M_{1}(s)}{e^{sT}-M_{1}(s)}\right]^2\!\! +\! \frac{s^2 f(T)M_{1}(s)e^{sT}}{(e^{sT}-M_{1}(s))^2} > 0.
\end{align*}
In the second step above we have used~\eqref{eq:firstder}. The last step follows by noting that $e^{sT} > M_{1}(s)$ for all $s\in \mathcal{S}$, $M_{1}(s)>0$ for all $s$, and $f(T) > 0$.
\end{proof}
\begin{lemma}\label{lem:singlehop:convexs}
For $s > 0$, the function $\Psi_1(s,d,R)$ is convex in $s$.
\end{lemma}
\begin{proof}
We have
\begin{align*}
&\Psi_1(s,d,R) =\frac{e^{-s(d-\frac{1}{R})} M_{1}(s)}{(1 - \beta_1(s))}\\
& = e^{-s(d-\frac{1}{R})} \sum_{v=0}^{\infty} M_{1}(s) \beta^v_1(s)\\
&= \sum_{v=0}^{\infty} e^{-s(d+\frac{v-1}{R})} M^{v+1}_{1}(s) = \sum_{v=0}^{\infty} (\mathbb{E}[e^{-s\hat{X}}])^{v+1},
\end{align*}
where $\hat{X} = (d+\frac{v-1}{R})/(v+1)-X^1_1$ (note that $\hat{X}$ depends on $v$). Recall that a sum of convex functions is convex. Therefore, from the last step above, we infer that $\Psi_1(s,d,R)$ is convex if $(\mathbb{E}[e^{-s\hat{X}}])^{v+1}$ is convex for every $v \in \{0,1,\ldots\}$. For $s>0$, $e^{-s\hat{X}}$ is convex in $s$ for any $v$ and any realization of $X^1_1$. Therefore, $\mathbb{E}[e^{-s\hat{X}}]$ is convex in $s$, and since $x^{v+1}$ is convex and non-decreasing in $x$ for $x \geq 0$, the composition $(\mathbb{E}[e^{-s\hat{X}}])^{v+1}$ is convex. Hence the result is proven.
\end{proof}
Both Lemmas~\ref{lem:singlehop:convexR} and~\ref{lem:singlehop:convexs} can be leveraged to efficiently solve~\eqref{Chernoff-UBMP:singleHop}.
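As an illustration of how the two convexity properties can be exploited, the sketch below solves the single-hop Chernoff-UBMP by a simple two-dimensional search, assuming exponential service (so that $M_1(s) = \mu_1/(\mu_1 - s)$; an illustrative assumption). A grid is used for brevity, though one-dimensional searches in $\frac{1}{R}$ and $s$ would also be justified by the convexity; the function names are ours.

```python
import math

def psi1(s, d, R, mu1):
    """Psi_1(s, d, R), assuming exponential service so that
    M_1(s) = mu1 / (mu1 - s) for 0 < s < mu1 (illustrative assumption)."""
    M = mu1 / (mu1 - s)
    beta = M * math.exp(-s / R)          # beta_1(s) = M_1(s) / e^{s/R}
    if beta >= 1.0:                      # s lies outside the feasible set S
        return math.inf
    return math.exp(-s * (d - 1.0 / R)) * M / (1.0 - beta)

def chernoff_ubmp(d, mu1, grid=300):
    """Minimize Psi_1 over R in [1/d, mu1) and s in (0, mu1) by grid search;
    convexity in 1/R and in s justifies simple one-dimensional refinements."""
    best = math.inf
    for i in range(1, grid):
        R = 1.0 / d + (mu1 - 1.0 / d) * i / grid
        for j in range(1, grid):
            s = mu1 * j / grid
            best = min(best, psi1(s, d, R, mu1))
    return best

b_d2 = chernoff_ubmp(d=2.0, mu1=2.0)
b_d4 = chernoff_ubmp(d=4.0, mu1=2.0)
assert 0.0 < b_d4 < b_d2 < math.inf   # a larger age limit yields a smaller bound
```

The infeasible region $\beta_1(s) \geq 1$ is handled by returning $\infty$, so the search automatically restricts itself to $s \in \mathcal{S}$.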
The heuristic solutions we obtain by solving the Chernoff-UBMP can be improved further for service distributions for which the distribution of a finite sum of service times can be computed exactly. Therefore, we next propose a relatively tight upper bound called $\alpha$-relaxed upper bound and formulate $\alpha$-UBMP.
\subsubsection{$\alpha$-UBMP}
In the upper bound provided in Lemma~\ref{lem:singleHopUB}, we propose to compute the first $K < \infty$ terms of the summation exactly and to use the Chernoff bound for the remaining terms. In the following, we make this precise. We first present a bound on the summation starting from $K$.
\begin{lemma}\label{lem:singlehop:alpharelaxed}
For any $K \geq 0$, we have
\begin{align*}
\sum_{v = K}^{\infty} \Phi(v,R) \leq \min_{s \in \mathcal{S}} \Psi_1(s,d,R)\beta^K_1(s).
\end{align*}
\end{lemma}
\begin{proof}
The result follows by using the upper bound for $\Phi(v,R)$ given in~\eqref{eq:ChernoffPhi} and repeating the steps in~\eqref{eq:ChernoffSinglehop} for the summation over $v$ from $K$ to infinity.
\end{proof}
For the single-hop scenario, we define $\alpha$ as follows:
\begin{align*}
\alpha = 1 + \frac{\min_{s \in \mathcal{S}} \Psi_1(s,d,R)\beta^K_1(s)}{\sum_{v = 0}^{K-1} \Phi(v,R)}.
\end{align*}
Note that $\alpha$ depends on the value of $K$. Using Lemmas~\ref{lem:singleHopUB} and~\ref{lem:singlehop:alpharelaxed}, we next state the $\alpha$-relaxed upper bound without proof.
\begin{theorem}
Given $d$, the $\alpha$-relaxed upper bound for the violation probability for a single hop is given by
\begin{align*}
\lim_{t \rightarrow \infty}\! \P\{\!D({\hat{n}_\text{R}})\! >\! t\}\! \leq\!\! \sum_{v = 0}^{K-1}\!\! \Phi(v,R)\! +\! \min_{s\in \mathcal{S}} \! \Psi_1(s,d,R)\beta^K_1\!(s).
\end{align*}
\end{theorem}
Note that, by definition, the $\alpha$-relaxed upper bound is at most $\alpha$ times the upper bound $\sum_{v = 0}^{\infty} \Phi(v,R)$. More precisely, the $\alpha$-relaxed upper bound has an approximation factor of $\alpha$ with respect to $\sum_{v = 0}^{\infty} \Phi(v,R)$. To see this,
{\allowdisplaybreaks\begin{align*}
&\sum_{v = 0}^{K-1} \Phi(v,R) + \min_{s\in \mathcal{S}} \; \Psi_1(s,d,R)\beta^K_1(s)\! \\
&= \! \sum_{v = 0}^{K-1} \Phi(v,R)\!\left(1 + \frac{\min_{s \in \mathcal{S}} \Psi_1(s,d,R)\beta^K_1(s)}{\sum_{v = 0}^{K-1} \Phi(v,R)} \right) \\
&\leq \! \alpha \! \sum_{v = 0}^{\infty} \Phi(v,R).
\end{align*}}
Note that $\alpha > 1$, and it is easy to see that as $K$ increases, the value of $\alpha$ approaches $1$ from above. In this work, we choose $K$ to be the largest value that remains computationally tractable in numerical evaluations.
Now, we formulate {$\alpha$-UBMP}~as follows:
\begin{equation*}
\begin{aligned}
\underset{\frac{1}{d} \leq R < \mu_{1}}{\text{min}} \sum_{v = 0}^{K-1} \Phi(v,R) + \min_{s\in \mathcal{S}} \Psi_1(s,d,R)\beta^K_1(s).
\end{aligned}
\end{equation*}
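To make the role of $K$ concrete, the sketch below computes $\alpha$ for an exponential-service instance (an illustrative assumption; the helper names are ours). As expected, $\alpha$ decreases toward $1$ as $K$ grows.

```python
import math

def alpha_factor(K, d, R, mu1, s_grid=400):
    """alpha = 1 + (min_{s in S} Psi_1(s,d,R) beta_1^K(s)) / sum_{v<K} Phi(v,R),
    for exponential service with rate mu1 (illustrative assumption)."""
    def erl_tail(k, mu, x):              # P{Erlang(k, mu) > x}
        if x <= 0:
            return 1.0
        term, tot = 1.0, 0.0
        for j in range(k):
            tot += term
            term *= mu * x / (j + 1)
        return math.exp(-mu * x) * tot
    # exact head: sum_{v=0}^{K-1} Phi(v,R), using the Erlang closed form
    head = sum(erl_tail(v + 1, mu1, d + (v - 1) / R) for v in range(K))
    # Chernoff tail: min over a grid of s in the feasible set S
    tail = math.inf
    for j in range(1, s_grid):
        s = mu1 * j / s_grid
        M = mu1 / (mu1 - s)
        beta = M * math.exp(-s / R)
        if beta < 1.0:                   # s is in the feasible set S
            psi = math.exp(-s * (d - 1.0 / R)) * M / (1.0 - beta)
            tail = min(tail, psi * beta ** K)
    return 1.0 + tail / head

a_K5 = alpha_factor(K=5, d=2.0, R=1.0, mu1=2.0)
a_K20 = alpha_factor(K=20, d=2.0, R=1.0, mu1=2.0)
assert 1.0 < a_K20 < a_K5   # alpha approaches 1 from above as K grows
```

The numerator shrinks geometrically in $K$ (through $\beta_1^K$) while the denominator grows, which is exactly why the approximation factor tightens.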
\section{Extensions to Two-Hop and N-Hop Scenarios}\label{sec:twohop}
In this section, we present Chernoff-UBMP and {$\alpha$-UBMP}~for the two-hop scenario, and also present Chernoff-UBMP for the $N$-hop tandem queuing network.
In the following, we first focus on the two-hop scenario. As in the single-hop scenario, we use Reich's equation and apply the union bound to obtain an upper bound for the AoI violation probability, which is presented in the following lemma.
\begin{lemma}\label{lem:twoHop}
Given $d$, and ${\hat{n}_\text{R}}$ as defined in \eqref{eq:nR}, we have
\begin{align*}
\lim_{t\rightarrow \infty} \P(D({\hat{n}_\text{R}}) > t)\! \leq \! \lim_{{\hat{n}_\text{R}}\rightarrow \infty} \sum_{v_0=0}^{{\hat{n}_\text{R}}} \sum_{v_1=0}^{{\hat{n}_\text{R}}-v_0} \Phi(v_0,v_1,R),
\end{align*}
where
\begin{align*}
\Phi(v_0,v_1,R) \triangleq \P\left\{\!\sum_{i=0}^{v_0}\! X_{2}^{i} +\! \sum_{i=0}^{v_1}\! X_{1}^{i} \! >\! d\! +\! \frac{v_0 + v_1 - 1}{R}\!\right\}.
\end{align*}
\end{lemma}
\begin{proof}
The proof is given in Appendix~\ref{lem:twoHop:proof}.
\end{proof}
\subsection{Chernoff-UBMP for Two-Hop Scenario}
\begin{theorem}\label{thm:twoHop:Chernoff}
For the two-hop network with deterministic arrivals, the violation probability is upper bounded as follows:
\begin{align*}
\lim_{t \rightarrow \infty} \P\{D({\hat{n}_\text{R}}) > t\} \leq \min_{s\in \mathcal{S}} \Psi_2(s,d,R),
\end{align*}
where
\begin{align}\label{eq:Psi2}
\Psi_2(s,d,R) = \frac{e^{-s(d-\frac{1}{R})} M_{1}(s) M_{2}(s)}{(1 - \beta_1(s))(1 - \beta_2(s))}.
\end{align}
\end{theorem}
\begin{proof}
We use the relation between departure times, arrival times and the service times given by~\eqref{eq:departureTime} iteratively and apply union bound and Chernoff bound to obtain the result. The details of the proof are given in Appendix~\ref{thm:twoHop:Chernoff:proof}.
\end{proof}
The Chernoff-UBMP problem for the two-hop network is stated below:
\begin{equation}\label{prob:twoHopUBMP}
\begin{aligned}
\underset{\frac{1}{d} \leq R < \mu}{\text{min}}\, \min_{s \in \mathcal{S}}\; \Psi_2(s,d,R).
\end{aligned}
\end{equation}
The lemmas below provide convexity properties of $\Psi_2(s,d,R)$. Since their proofs are similar to those for the single-hop scenario (Lemmas~\ref{lem:singlehop:convexR} and~\ref{lem:singlehop:convexs}), we omit them here.
\begin{lemma}\label{lem:twHopConvexR}
For the two-hop network with deterministic arrivals, given $s \in \mathcal{S}$ and $d > 0$, $\Psi_2(s,d,R)$ is convex with respect to $\frac{1}{R}$.
\end{lemma}
\begin{lemma}\label{lem:twHopConvexs}
For the two-hop network with deterministic arrivals, given $s \in \mathcal{S}$ and $d > 0$, $\Psi_2(s,d,R)$ is convex with respect to $s$.
\end{lemma}
\subsection{$\alpha$-UBMP for Two-Hop Scenario}
In the following theorem we present the $\alpha$-relaxed upper bound.
\begin{theorem}\label{thm:twohop:alphaUB}
For the two-hop network with deterministic arrivals, for any $K \geq 1$, the $\alpha$-relaxed upper bound is given by
\begin{align*}
\sum_{v_0=0}^{K - 1} \sum_{v_{1}=0}^{K-1} \Phi(v_0,v_1,R) + \min_{s\in \mathcal{S}} \Psi(s,d,R,K),
\end{align*}
where
\begin{align*}
&\Psi(s,d,R,K) \nonumber\\
&= \! e^{-s(d-\frac{1}{R}\!)} M_{1}(s)M_{2}(s) \frac{(\beta_1^K\!(s)\! +\! \beta_2^K\!(s)\!-\!\beta_1^K\!(s)\beta^K_2\!(s)\!)}{(1\!-\!\beta_1(s))(1\!-\!\beta_2(s))}.
\end{align*}
\end{theorem}
\begin{proof}
The proof is given in Appendix~\ref{thm:twohop:alphaUB:proof}.
\end{proof}
We note that the $\alpha$-relaxed upper bound is computationally expensive when compared to that in the single-hop scenario because of the nested sum.
{\color{black}\subsection{N-hop Scenario}\label{sec:multihop}}
For an $N$-hop tandem network we have $k \in \{1,2,\ldots,N\}$ and $D(n) = D_N(n)$. For simplicity of presentation, in this section, we assume that the $X^n_{k}$ are identically distributed. Therefore, we have $\mu = \mu_{k}$ for all $k$, and $M_{k}(s) = M_{1}(s)$ for all $k$. We now define the set $\mathcal{S}$ as follows.
\begin{align*}
\mathcal{S} = \{s > 0: M_{1}(s) < e^{s/R}\}.
\end{align*}
\begin{lemma}\label{lem:NHop}
Given $d$, and ${\hat{n}_\text{R}}$ as defined in \eqref{eq:nR}, we have
\vspace{-.3cm}
\begin{align*}
\lim_{t\rightarrow \infty} \P(D({\hat{n}_\text{R}})\! > \! t) \! \leq \! \lim_{{\hat{n}_\text{R}}\rightarrow \infty}\!\! \sum_{v_0=0}^{{\hat{n}_\text{R}}}\!\! \sum_{v_1=0}^{{\hat{n}_\text{R}}-v_0}\!\!\!\ldots \!\!\!\!\sum_{v_{N-1}=0}^{{\hat{n}_\text{R}}-v_{N-2}} \!\!\!\!\Phi(v_0^{N-1}\!\!,\!R),
\vspace{-.5cm}
\end{align*}
where
\vspace{-.3cm}
\begin{align}\label{eq:Phi:Nhop}
\Phi(v_0^{N-1}\!,R)\! \triangleq \! \P\left\{\!\sum_{k=0}^{N-1}\! \sum_{i=0}^{v_k} X_{N-k}^{i} > d \!+\! \frac{\sum_{k=0}^{N-1} v_k\! -\! 1}{R}\!\right\},
\end{align}
and $v_0^{N-1} = (v_0,v_1,\ldots,v_{N-1})$.
\end{lemma}
\begin{proof}
The proof follows similar steps as the proof of Lemma~\ref{lem:twoHop} and is omitted.
\end{proof}
\begin{theorem}\label{thm:multihop}
For the $N$-hop network with deterministic arrivals, the violation probability is upper bounded as follows:
\begin{align*}
\lim_{t \rightarrow \infty} \P\{D({\hat{n}_\text{R}}) > t\} \leq \min_{s\in \mathcal{S}} \Psi_N(s,d,R),
\end{align*}
where
\begin{align}\label{eq:Psi}
\Psi_N(s,d,R) = \frac{e^{-s(d-\frac{1}{R})}[M_{1}(s)]^{N}}{[1 - \frac{M_{1}(s)}{e^{s/R}}]^{N}}.
\end{align}
\end{theorem}
\begin{proof}
We use the relation between departure times, arrival times and the service times given by~\eqref{eq:departureTime} recursively starting from the last node $N$, and apply union bound and Chernoff bound to obtain the result. The proof follows similar steps as in the proof of Theorem~\ref{thm:twoHop:Chernoff} and therefore it is omitted.
\end{proof}
Therefore, an upper bound minimization problem for the N-hop network can be stated as follows:
\begin{equation}\label{prob:NHopUBMP}
\underset{\frac{1}{d} \leq R < \mu}{\text{min}} \, \underset{s \in \mathcal{S}}{\text{min}} \, \Psi_N(s,d,R).
\end{equation}
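As an illustration (not part of the paper's tooling), the joint minimization in \eqref{prob:NHopUBMP} can be sketched numerically for exponential service, for which $M_1(s) = \mu/(\mu - s)$ with $s < \mu$. The function names and the grid resolution below are our own choices:

```python
import math

def psi_N(s, d, R, mgf, N):
    """Chernoff upper bound Psi_N(s, d, R) from Eq. (Psi)."""
    ratio = mgf(s) / math.exp(s / R)
    if ratio >= 1.0:                      # s lies outside the feasible set S
        return math.inf
    return math.exp(-s * (d - 1.0 / R)) * mgf(s) ** N / (1.0 - ratio) ** N

def chernoff_ubmp_exponential(d, mu, N, grid=400):
    """Brute-force the joint minimization over R in (1/d, mu) and s in S,
    assuming exponential service with MGF M_1(s) = mu / (mu - s)."""
    mgf = lambda s: mu / (mu - s)
    best_val, best_R, best_s = math.inf, None, None
    for i in range(1, grid):
        R = 1.0 / d + (mu - 1.0 / d) * i / grid
        for j in range(1, grid):
            s = mu * j / (grid + 1)       # keep s strictly below mu
            val = psi_N(s, d, R, mgf, N)
            if val < best_val:
                best_val, best_R, best_s = val, R, s
    return best_val, best_R, best_s
```

Since $\Psi_N$ is convex in $s$ and in $1/R$, the grid search here could be replaced by nested bisection as done in the paper's numerical evaluation.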
\noindent \textbf{\textit{Discussion:}} We note that, similar to the single-hop and two-hop scenarios, $\Psi_N(s,d,R)$ is also convex with respect to $\frac{1}{R}$ and with respect to $s$. One may also obtain $\alpha$-UBMP for the $N$-hop scenario. However, the $\alpha$-relaxed upper bound involves a nested sum that becomes computationally expensive as $N$ increases. Furthermore, we note that as $N$ increases the upper bounds become more relaxed, and therefore the heuristic solutions provided by Chernoff-UBMP may not be close to the optimal solution. Nevertheless, these heuristic solutions could potentially be used as starting points. {\color{black}For example, when the controller has non-negligible processing time, the sensor-controller-actuator chain can be modelled as a three-hop tandem queuing system, and one may use the heuristic solutions provided by the Chernoff-UBMP for the three-hop scenario}.
Next, we present an independent result for service-time distributions with bounded support.
\subsubsection*{Service Distributions with Bounded Support}
Note that, in practice, service-time distributions typically have bounded support. For example, the channel capacity for transmissions is always upper bounded due to bandwidth limitations. Considering that the service time is upper bounded by $b \in \mathbb{R}_{>0}$, in the following corollary we present a result for computing an optimal rate for age limits above a certain threshold.
\jpcolor{\begin{corollary}
For an $N$-hop network, if the support of the service-time distribution is upper bounded by $b < \infty$, then for all $d \geq (N+1)b$ the AoI violation probability is zero at \jpcolor{$R = (N+1)/d$}, i.e., this rate solution is optimal for~\eqref{equivalentProb}.
\end{corollary}
\begin{proof}
We rewrite $\Phi(v_0^{N-1},R)$ (defined in~\eqref{eq:Phi:Nhop}) as follows:
\begin{align*}
\Phi(v_0^{N-1}\!,R)\! =\! \P\left\{\!\sum_{k=0}^{N-1}\!\sum_{i=0}^{v_k}\! \left(\!X_{N-k}^{i} - \frac{1}{R}\right)\! > \!d \! -\! \frac{N+1}{R}\!\right\}.
\end{align*}
For $R = (N+1)/d$, we have $\frac{1}{R} = \frac{d}{N+1} \geq b \geq X^n_k$ for all $k \geq 1$ and all $n$, and also $d - \frac{N+1}{R} = 0$. We therefore obtain
\begin{align*}
\Phi(v_0^{N-1}\!,R) &= \P\left\{\sum_{k=0}^{N-1}\sum_{i=0}^{v_k} \left(X_{N-k}^{i} - \frac{1}{R}\right) > 0\right\}\! = 0.
\end{align*}
Therefore, from Lemma~\ref{lem:NHop} we conclude that the AoI violation probability $\lim_{t\rightarrow \infty} \P(D(\hat{n}_\text{R}) > t)$ is equal to zero when $R = (N+1)/d$.
\end{proof}
}
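The argument in the proof can be spot-checked with a tiny Monte Carlo sketch of our own (parameter values hypothetical): with $d = (N+1)b$ and the rate choice $R = (N+1)/d$, so that $1/R = b$ and $d - (N+1)/R = 0$, every term $X - 1/R$ is nonpositive and the event inside $\Phi$ never fires.

```python
import random

def corollary_check(N=3, b=2.0, trials=5000, seed=0):
    """Spot-check: with service samples bounded by b, d = (N+1)*b and
    R = (N+1)/d = 1/b, the sum of (X - 1/R) terms never exceeds
    d - (N+1)/R = 0, so the Phi event has probability zero."""
    rng = random.Random(seed)
    d, R = (N + 1) * b, 1.0 / b
    for _ in range(trials):
        v = [rng.randrange(0, 5) for _ in range(N)]      # arbitrary v_k values
        s = sum(rng.uniform(0, b) - 1.0 / R
                for vk in v for _ in range(vk + 1))
        if s > d - (N + 1) / R:                          # event inside Phi
            return False
    return True
```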
\section{Application Examples: Geometric, Exponential and Erlang Service}\label{sec:exampleDis}
In the following we show the computation of the upper bounds for typical service distributions, namely, geometric, exponential and Erlang. These distributions are commonly used in queuing analysis, and they also serve as good models for several practical service-time processes.
Note that for these distributions, the distribution of the sum of service times is known and thus the $\alpha$-relaxed upper bound can be computed. Later, in Section~\ref{sec:numerical}, we will evaluate the performance of the computed heuristic solutions for these service distributions. To shorten the expressions, in the sequel we denote
\begin{align*}
Y_1 = \sum_{i=0}^{v_{1}} X_{1}^{i}, \text{ }
Y_2 = \sum_{i=0}^{v_0} X_{2}^{i}, \text{ and }
\kappa = d + \frac{v_0 + v_1 - 1}{R}.
\end{align*}
\subsection{Geometric Service: Wireless Links with Packet Errors}
Consider that each packet generated by the sensor is of fixed length, and the packets that carry actuator commands are also of fixed length, possibly different from the sensor packet length. To account for packet transmission errors on the wireless links, we use the geometric distribution to model the number of time slots required to transmit a packet successfully. In particular, we consider the service distributions at link $1$ and link $2$ to be geometric with success probabilities $p_1$ and $p_2$, respectively. Given an age limit $d$ at the actuator, we compute $R$ heuristically.
In the following we compute the first term of the $\alpha$-relaxed upper bound given in Theorem~\ref{thm:twohop:alphaUB}. Since $Y_1$ and $Y_2$ are integers, we have
\begin{align}\label{eq:probterm}
&\sum_{v_0=0}^{K - 1} \sum_{v_{1}=0}^{K-1} \Phi(v_0,v_1,R) = \sum_{v_0=0}^{K - 1} \sum_{v_{1}=0}^{K-1} \P \left\{Y_1 + Y_2 \! > \kappa \!\right\} \nonumber\\
&= \sum_{v_0=0}^{K - 1} \sum_{v_{1}=0}^{K-1} \P \left\{Y_1 + Y_2 \! > \lfloor \kappa \rfloor\right\}.
\end{align}
Since for the geometric distribution $X^{i}_k \geq 1$ for all $i$ and $k \in \{1,2\}$, we have $Y_1 \geq v_1 + 1$ and $Y_2 \geq v_0 + 1$. Therefore, for $\lfloor \kappa \rfloor \leq v_0 + v_1 + 1$, we have $\P \{Y_1 + Y_2 > \lfloor \kappa \rfloor\} = 1$. For $\lfloor \kappa \rfloor \geq v_0 + v_1 + 2$ we compute the probability by conditioning on $Y_2=y$ for positive integers $y \geq v_0 + 1$.
\begin{align*}
& \P \left\{Y_1 \! + \! Y_2 \! > \! \lfloor \kappa \rfloor \right\}\\
& = \!\! \sum_{y = v_0 + 1}^{ \infty} \!\! \P \left\{Y_1 \! + \! Y_2 \! > \! \lfloor \kappa \rfloor | Y_2\! =\! y \right\}\!\P\{Y_2 \!=\! y\} \\
&=\!\! \sum_{y = v_0 + 1}^{\lfloor \kappa \rfloor - v_1 - 1}\!\!\!\!\!\! \P \left\{Y_1 \! >\! \lfloor \kappa \rfloor \! - \! y \right\}\!\P\{Y_2 = y\}\! + \! \P\{Y_2 \! \geq \! \lfloor \kappa \rfloor \! - \! v_1\}.
\end{align*}
In the last step above we have used $\P \left\{Y_1 > \lfloor \kappa \rfloor - y \right\} = 1$ for $y \geq \lfloor \kappa \rfloor - v_1$.
Noting that the sum of i.i.d. geometric random variables has a negative binomial distribution, we have
\begin{align*}
\P\{Y_2 = y\} &= \P\left\{\sum_{i=0}^{v_0} X_{2}^{i} = y\right\} \\
& = \binom{y-1}{v_0}p_2^{v_0+1}(1-p_2)^{y-v_0-1},
\end{align*}
and
\begin{align*}
\P \left\{Y_1 \! > \lfloor \kappa \rfloor - y \right\} = \frac{B(1-p_1; \lfloor \kappa \rfloor - y - v_1,v_1 + 1)}{B(\lfloor \kappa \rfloor - y - v_1,v_1 + 1)},
\end{align*}
where $B(\cdot)$ is the incomplete beta function given by
\begin{align*}
B(z;a,b) &= \int_{0}^{z} x^{a-1} (1-x)^{b-1} dx, \\
B(a,b) &= \int_{0}^{1} x^{a-1} (1-x)^{b-1} dx .
\end{align*}
Similarly, we compute $\P\{Y_2 \geq \lfloor \kappa \rfloor -v_1\}$. Finally, using $\P \left\{Y_1 + Y_2 \! > \lfloor \kappa \rfloor \right\}$ we compute~\eqref{eq:probterm}. For computing the Chernoff bound we require the moment generating function, which for geometric service is given below.
\begin{align*}
M_{k}(s) = \frac{p_k e^s}{1-(1-p_k)e^s}.
\end{align*}
Since the Chernoff bound is convex in $s$, we compute its minimum value using a bisection algorithm.
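As a sanity check on these expressions (illustrative only, not the paper's code), the negative-binomial survival function used above can be evaluated directly from its pmf with the standard library; here $r$ plays the role of $v_k+1$ and $p$ the link success probability:

```python
from math import comb

def nb_trials_pmf(y, r, p):
    """P{exactly y slots are needed to complete r successful transmissions},
    i.e. the pmf of a sum of r i.i.d. Geometric(p) slot counts."""
    return comb(y - 1, r - 1) * p**r * (1 - p)**(y - r)

def nb_survival(m, r, p, tail=4000):
    """P{Y > m} for Y = number of slots needed to complete r successes.
    Y >= r always, matching the floor(kappa) <= v0 + v1 + 1 case above."""
    if m < r:
        return 1.0
    return sum(nb_trials_pmf(y, r, p) for y in range(m + 1, m + tail))
```

Here `nb_survival(m, v_1 + 1, p_1)` plays the role of $\P\{Y_1 > m\}$; the incomplete-beta form yields the same numbers.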
\begin{comment}
In Figure~\ref{fig:twoHop_Geometric_varR}, we present the bounds and the simulated violation probability for the two-hop network with geometric service times. Note that for $d = 10$, the rate that minimizes the relaxed upper bound and the Chernoff bound is $0.4$, while the optimal rate is $0.5$. For $d = 10$, the rate that minimizes Chernoff bound is $0.4$, while the optimal rate is $0.5$ which is the solution provided by relaxed upper bound. This demonstrates that the proposed solution approach can potentially be used as a first-hand tool in computing desired sampling rates given the empirical service time distributions.
\begin{figure}
\centering
\includegraphics[width = 3in]{../figures/20181229_twoHop_Geometric_varR.eps}
\caption{Upper bounds for varying arrival rate $R$. \textbf{Two-hop}, geometric service with $p_1 = .9$ and $p_2 = .85$, $K = 20$, and $\alpha \approx 1$.}
\label{fig:twoHop_Geometric_varR}
\end{figure}
\end{comment}
\subsection{Exponential Service}
In this subsection, we study the two-hop system with exponentially distributed service times with rates $\mu_{1}$ and $\mu_{2}$ at links $1$ and $2$, respectively. In this case, $Y_1$ is a sum of $v_1+1$ i.i.d. exponential random variables and therefore follows a Gamma distribution with shape parameter $v_1+1$ and rate parameter $\mu_{1}$. Similarly, $Y_2$ follows a Gamma distribution with shape parameter $v_0+1$ and rate parameter $\mu_{2}$. Therefore, we compute $\Phi(v_0,v_1,R)$ as follows.
\jpcol{\begin{align}\label{eq:phiExp}
&\Phi(v_0,v_1,R) = \int_{0}^{\infty} \P\{Y_1 > \kappa - y\}f_{Y_2}(y) dy \nonumber \\
&= \int_{0}^{\kappa} \P\{Y_1 > \kappa - y\}f_{Y_2}(y) dy + \int_{\kappa}^{\infty} f_{Y_2}(y) dy,
\end{align}}
where $f_{Y_2}(\cdot)$ is the PDF of $Y_2$, given by
\begin{align*}
f_{Y_2}(y) &= \frac{\mu^{v_0+1}_{2} y^{v_0} e^{-\mu_{2} y}}{v_0!} \, ,\\
\P\{Y_1 > \kappa - y\} &= \frac{\Gamma(v_1+1,\mu_{1}(\kappa - y))}{v_1!},
\end{align*}
and $\Gamma(x,a)$ is the upper incomplete gamma function:
\begin{align*}
\Gamma(x,a) = \int_{a}^{\infty}y^{x-1}e^{-y} dy.
\end{align*}
Further, if $\mu_1 = \mu_2 = \mu$, then
\begin{align*}
\Phi(v_0,v_1,R)\big|_{\mu_1 = \mu_2} = \frac{\Gamma(v_0+v_1+2,\mu\kappa)}{(v_0+v_1+1)!}.
\end{align*}
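For the equal-rate case, the regularized upper incomplete gamma function with integer shape reduces to a Poisson tail, so $\Phi$ can be computed without special-function libraries. The sketch below is our own illustration (parameter values hypothetical):

```python
from math import exp

def phi_equal_rates(v0, v1, R, d, mu):
    """Phi(v0, v1, R) for mu1 = mu2 = mu, using the identity
    Gamma(n, x) / (n-1)! = sum_{k=0}^{n-1} e^{-x} x^k / k!   (integer n),
    i.e. the Erlang/Poisson duality."""
    kappa = d + (v0 + v1 - 1) / R
    n = v0 + v1 + 2                   # integer shape parameter
    x = mu * kappa
    term = exp(-x)
    total = term
    for k in range(1, n):
        term *= x / k                 # e^{-x} x^k / k! built up recursively
        total += term
    return total
```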
For computing the Chernoff bound we use the MGF of the exponential distribution which is given below.
\begin{align*}
M_{k}(s) = \frac{\mu_{k}}{\mu_{k} - s}, \text{ for }s < \mu_{k}.
\end{align*}
\subsection{Erlang Service}
Consider an Erlang service distribution at link $k$ with shape parameter $b_k$ and rate parameter $\lambda_k$; this implies $\mu_k = \lambda_k/b_k$. We note that, in this case, $Y_1$ has a Gamma distribution with shape parameter $(v_1+1)b_1$ and rate parameter $\lambda_1$, and $Y_2$ has a Gamma distribution with shape parameter $(v_0+1)b_2$ and rate parameter $\lambda_2$. Therefore, we compute the bounds using expressions similar to those given in the previous subsection.
\textit{\textbf{Remark 2:}} We note that the Chernoff upper bound and the $\alpha$-relaxed upper bound presented above may take values greater than $1$. It is natural to cap these upper bounds at $1$, because for probability values an upper bound greater than $1$ is, in general, of no use. However, somewhat to our surprise, in our simulations we found that allowing the proposed bounds to take values greater than $1$ provides good heuristic solutions for the sampling rate, especially for parameter settings where the upper bounds are always greater than $1$. Since our primary objective is to find upper bounds that provide good heuristic solutions, and not necessarily tight upper bounds, we allow values greater than $1$ for the bounds in our numerical evaluation. This should not be confused with the violation probability itself, which never exceeds $1$.
\section{Numerical Evaluation}\label{sec:numerical}
In this section, we evaluate the performance of {$\alpha$-UBMP}~solutions and {Chernoff-UBMP}~solutions for geometric, exponential and Erlang service distributions. We first study the trends of the proposed upper bounds in comparison to the AoI violation probability obtained using simulation for both single-hop and two-hop scenarios. We then evaluate the quality of numerically computed solutions using the UBMPs in comparison with that of the simulation-based estimate of the optimum violation probability. \jpcol{Finally, we present simulation results comparing the performance of FCFS with queue management policies that use unit buffer and packet replacement.}
\jpcolor{Since Chernoff-UBMP is a convex optimization problem we used bisection search, and for $\alpha$-UBMP we used brute-force search to compute the respective optimal rates}. The numerical computations are done using MATLAB, and the simulation is implemented in C, where we run $10^{10}$ iterations for each data point. The default parameters are as follows. For the exponential distribution, $\mu_1$ and $\mu_2$ equal one packet/ms; for the Erlang distribution, we use shape parameters $b_1 = b_2 = 3$ and rate parameters $\lambda_1 = \lambda_2 = 3$, and therefore the mean rates $\mu_1$ and $\mu_2$ equal one packet/ms; for geometric service we choose success probabilities $p_1 = 0.85$ and $p_2 = 0.9$, \jpcol{and the slot duration is $1$ ms}. The minimum value for $R$ is chosen to be $0.2$ packets/ms and its maximum value is chosen to be $0.75\min(\mu_1,\mu_2)$ packets/ms. \jpcol{For all the figures with varying rate $R$ on the x-axis, a constant resolution of $0.025$ is used.} The minimum value for $d$ is chosen to be $5$ ms and its maximum value is chosen to be $15$ ms.
We use $K = 30$ for computing $\alpha$-relaxed upper bound for all the distributions because for Geometric service MATLAB does not provide precision guarantees for higher $K$ values for computing $\Phi(v_0,v_1,R)$, and for other service distributions, choosing $K = 30$ is sufficient to obtain $\alpha$ values close to $1$.
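The compiled C simulation itself is not shown here, but a miniature Python analogue for the single-hop case (deterministic arrivals, exponential service, FCFS) conveys the mechanics of estimating the stationary AoI violation probability by time averaging. The helper name and sample sizes are our own, and the $10^{10}$ iterations used in the paper would of course require the compiled implementation:

```python
import random

def aoi_violation_fcfs(R, mu, d, n_pkts=200_000, seed=1):
    """Monte Carlo estimate of stationary P{AoI > d} for a single-hop
    D/M/1 FCFS queue: deterministic arrivals at rate R, exponential
    service at rate mu. Under FCFS every departure resets the age."""
    rng = random.Random(seed)
    dep_prev = 0.0
    above = total = 0.0
    age_prev = None
    for n in range(n_pkts):
        arr = n / R
        dep = max(arr, dep_prev) + rng.expovariate(mu)   # D(n) recursion
        if age_prev is not None:
            L = dep - dep_prev           # age grows linearly over this interval
            above += max(0.0, L - max(0.0, d - age_prev))
            total += L
        age_prev = dep - arr             # age right after packet n departs
        dep_prev = dep
    return above / total
```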
\begin{figure*}[ht!]
\centering
\begin{subfigure}[b]{0.30\textwidth}
\includegraphics[width=\textwidth]{20190329_singleHop_Geometric_varR.eps}
\caption{Geometric service with success probability $p_1 = 0.85$.}
\vspace{-0.0cm}
\label{fig:singleHop_Geometric_varR}
\end{subfigure} \quad
\begin{subfigure}[b]{0.30\textwidth}
\includegraphics[width=\textwidth]{20181015_singleHop_Exp_varR.eps}
\caption{Exponential service with mean rate $\mu_1 = 1$.}
\vspace{-0.0cm}
\label{fig:singleHop_Exp_varR}
\end{subfigure} \quad
\begin{subfigure}[b]{0.30\textwidth}
\includegraphics[width=\textwidth]{20181107_singleHop_Erlang3_varR.eps}
\caption{Erlang service with $b_1 = 3$, $\lambda_1 = 3$ and $\mu_1 = 1$.}
\vspace{-0.0cm}
\label{fig:singleHop_Erlang3_varR}
\end{subfigure}
\caption{Comparison of the upper bounds for varying arrival rate $R$ in a \textit{single hop} for different service time distributions.}\label{fig:singleHop_varR}
\end{figure*}
\begin{figure*}[ht!]
\centering
\begin{subfigure}[b]{0.30\textwidth}
\includegraphics[width=\textwidth]{20190329_singleHop_Geometric_vard.eps}
\caption{Geometric service with success probability $p_1 = 0.85$.}
\vspace{-0.0cm}
\label{fig:singleHop_Geometric_vard}
\end{subfigure} \quad
\begin{subfigure}[b]{0.30\textwidth}
\includegraphics[width=\textwidth]{20181015_singleHop_Exp_vard.eps}
\caption{Exponential service with mean rate $\mu_1 = 1$.}
\vspace{-0.0cm}
\label{fig:singleHop_Exp_vard}
\end{subfigure} \quad
\begin{subfigure}[b]{0.30\textwidth}
\includegraphics[width=\textwidth]{20190220_singleHop_Erlang3_vard.eps}
\caption{Erlang service with $b_1 = 3$, $\lambda_1 = 3$ and $\mu_1 = 1$.}
\vspace{-0.0cm}
\label{fig:singleHop_Erlang3_vard}
\end{subfigure}
\caption{Comparison of the upper bounds for varying age limit $d$ in a \textit{single hop} for different service time distributions.}\label{fig:singleHop_vard}
\end{figure*}
\begin{figure*}[ht!]
\centering
\begin{subfigure}[b]{0.30\textwidth}
\includegraphics[width=\textwidth]{20190221_twoHop_Geometric_varR.eps}
\caption{Geometric service with $p_1 = 0.85$ and $p_2 = 0.9$.}
\vspace{-0.0cm}
\label{fig:twoHop_Geometric_varR}
\end{subfigure} \quad
\begin{subfigure}[b]{0.30\textwidth}
\includegraphics[width=\textwidth]{20190218_twoHop_Exp_varR.eps}
\caption{Exponential service with $\mu_1 = \mu_2 = 1$.}
\vspace{-0.0cm}
\label{fig:twoHop_Exp_varR}
\end{subfigure} \quad
\begin{subfigure}[b]{0.30\textwidth}
\includegraphics[width=\textwidth]{20181107_twoHop_Erlang3_varR.eps}
\caption{Erlang service with $b_1 = b_2 = 3$, $\lambda_1 = \lambda_2 = 3$, and $\mu_1 \! = \! \mu_2 =\! 1$.}
\vspace{-0.0cm}
\label{fig:twoHop_Erlang3_varR}
\end{subfigure}
\caption{Comparison of the upper bounds for varying arrival rate $R$ in a \textit{two hop} network for different service time distributions.}\label{fig:twoHop_varR}
\end{figure*}
\begin{figure*}[ht!]
\centering
\begin{subfigure}[b]{0.30\textwidth}
\includegraphics[width=\textwidth]{20190222_twoHop_Geometric_vard.eps}
\caption{Geometric service with $p_1 = 0.85$ and $p_2 = 0.9$}
\vspace{-0.0cm}
\label{fig:twoHop_Geometric_vard}
\end{subfigure} \quad
\begin{subfigure}[b]{0.30\textwidth}
\includegraphics[width=\textwidth]{20190218_twoHop_Exp_vard.eps}
\caption{Exponential service with $\mu_1 = \mu_2 = 1$.}
\vspace{-0.0cm}
\label{fig:twoHop_Exp_vard}
\end{subfigure} \quad
\begin{subfigure}[b]{0.30\textwidth}
\includegraphics[width=\textwidth]{20190220_twoHop_Erlang3_vard.eps}
\caption{Erlang service with $b_1 = b_2 = 3$, $\lambda_1 = \lambda_2 = 3$, and $\mu_1 \! = \! \mu_2 =\! 1$.}
\vspace{-0.0cm}
\label{fig:twoHop_Erlang3_vard}
\end{subfigure}
\caption{Comparison of the upper bounds for varying age limit $d$ in a \textit{two hop} network for different service time distributions.}\label{fig:twoHop_vard}
\end{figure*}
\begin{figure*}[ht!]
\centering
\begin{subfigure}[b]{0.30\textwidth}
\includegraphics[width = \textwidth]{20190329_comparisonSol_Geometric.eps}
\caption{Geometric service with $p_1 = 0.85$ and $p_2 = 0.9$.}
\vspace{-0.0cm}
\label{fig:comparisonSol_Geometric}
\end{subfigure} \quad
\begin{subfigure}[b]{0.30\textwidth}
\includegraphics[width = \textwidth]{20190219_comparisonSol_exp.eps}
\caption{Exponential service with $\mu_1 = \mu_2 = 1$ packets/ms.}
\vspace{-0.0cm}
\label{fig:comparisonSol_exp}
\end{subfigure} \quad
\begin{subfigure}[b]{0.30\textwidth}
\includegraphics[width = \textwidth]{20190222_comparisonSol_Erlang3.eps}
\caption{Erlang service with $b_1 = b_2 = 3$, $\lambda_1 = \lambda_2 = 3$, and $\mu_1 \! = \! \mu_2 =\! 1$.}
\vspace{-0.0cm}
\label{fig:comparisonSol_Erlang3}
\end{subfigure}
\caption{Evaluation of the rate solutions obtained using upper bound minimization for different service time distributions.}\label{fig:comparisonSol}
\end{figure*}
\subsection{Properties of Upper Bounds}
\subsubsection{Single Hop}
In Figures~\ref{fig:singleHop_varR} and~\ref{fig:singleHop_vard}, we present the upper bounds and the simulated AoI violation probability for varying arrival rate $R$ and varying age limit $d$ for different distributions in the single-hop scenario. From Figure~\ref{fig:singleHop_varR}, we observe that the upper bounds and the violation probability exhibit a convex shape with a global minimum \jpcolor{(highlighted with black circles)} in the chosen range of $R$. Further, observe that the curvature of the upper bounds approximately follows the curvature of the simulated violation probability around its minimum value and deviates only at higher sampling rates. This is an interesting property, as it suggests that a rate that minimizes the upper bound \jpcolor{will be a ``good'' rate solution for minimizing} the violation probability.
We note that the $\alpha$-relaxed upper bound curves are not continuous because the probability terms $\Phi(v_0,v_1,R)$ involve a floor function, namely, $\lfloor \kappa \rfloor$.
From Figure~\ref{fig:singleHop_vard}, we observe that the decay rates of the upper bounds match closely the decay rate of the violation probability. This further strengthens our statement above that minimizing the upper bounds results in good heuristic rate solutions for the considered range of age limits.
\subsubsection{Two Hop}
In Figures~\ref{fig:twoHop_varR} and~\ref{fig:twoHop_vard}, we present the upper bounds and the simulated AoI violation probability for varying arrival rate $R$ and varying age limit $d$ for different distributions in the two-hop scenario. We observe trends similar to those in the single-hop scenario. Nevertheless, the bounds become relatively looser. This can be attributed to the fact that the union bound is applied twice in the two-hop scenario.
Note that for both the single-hop and two-hop scenarios, the $\alpha$-relaxed bound is much lower than the Chernoff bound. Nevertheless, the Chernoff bound can be useful for cases where the exact distribution of the sum of service times is intractable.
\jpcol{\subsubsection{Service Times with Higher Variance}
In this section, we study how the upper bounds perform for service time with higher variance. In Figure~\ref{fig:twoHop_Exp_varR_heterogenous}, we consider heterogeneous exponential service times, with $\mu_1 = 0.75$ and $\mu_2 = 1$. We have chosen $\mu_1 = 0.75$ so that the variance in this case is higher than that of the homogeneous case where both $\mu_1$ and $\mu_2$ are equal to $1$.
We note that the trend persists, and that the mismatch in the minima among the three curves becomes greater when a lower age limit is considered, i.e., $d=5$ ms. However, the mismatch is much smaller at higher $d$. We also note that, compared to the homogeneous-server case in Figure~\ref{fig:twoHop_Exp_varR}, heterogeneity does not affect the conclusions regarding system behaviour with respect to AoI. Nevertheless, the performance becomes more dependent on the bottleneck link in this case.
In Figure~\ref{fig:variance}, we consider a hyper-exponential service-time distribution with probability density function given by $p\lambda_1e^{-\lambda_1 x} + (1-p)\lambda_2e^{-\lambda_2 x}$. We choose $p = 0.91$, $\lambda_1 = 0.95$, and $\lambda_2 = 2$ such that the mean value is equal to $1$ ms. We note that this distribution has higher variance compared to the exponential service-time distribution with mean $1$. For computing the $\alpha$-relaxed upper bound, we set $K=6$ and numerically evaluated the convolution of the hyper-exponential probability density functions to obtain values for $\Phi(v_0,v_1,R)$.
From both Figures~\ref{fig:twoHop_Exp_varR_heterogenous} and~\ref{fig:variance}, we observe that for the two-hop scenario, for $d = 5$, the mismatch between the heuristic rate solution provided by $\alpha$-UBMP and the optimal rate solution is relatively bigger. Nevertheless, under these settings, it should be noted that the value of the minimum AoI violation probability is not significantly lower than that achieved by the heuristic rate solution. Again, the main trends noticed with the other three service distributions, see Figure~\ref{fig:twoHop_varR}, are prevailing here as well.
}
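For reference, the first two moments of a two-phase hyper-exponential follow directly from its mixture form. The quick check below (our own, not from the paper) confirms that the chosen parameters give a mean of approximately $1$ ms and a variance exceeding that of the unit-mean exponential:

```python
def hyperexp_moments(p, l1, l2):
    """Mean and variance of a two-phase hyper-exponential with density
    f(x) = p*l1*exp(-l1*x) + (1-p)*l2*exp(-l2*x)."""
    mean = p / l1 + (1 - p) / l2
    second = 2 * p / l1**2 + 2 * (1 - p) / l2**2   # mixture of E[X^2] = 2/l^2
    return mean, second - mean**2
```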
\begin{figure}
\centering
\includegraphics[width = 3in]{20200718_twoHop_Exp_varR_heterogenous_mu1_threeFourth_mu2_1.eps}
\caption{Comparison of the upper bounds for varying arrival rate $R$ in a \textit{two hop} network for heterogeneous exponential service time distributions, with $\mu_1 = 0.75$ packet/ms and $\mu_2=1$ packets/ms.}
\label{fig:twoHop_Exp_varR_heterogenous}
\end{figure}
\begin{figure*}[ht!]
\centering
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{20200718_singleHop_HyperExp_varR.eps}
\caption{Single hop}
\vspace{-0.0cm}
\label{fig:singleHop_HyperExp_varR}
\end{subfigure} \quad \quad \quad \quad
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{20200718_twoHop_HyperExp_varR.eps}
\caption{Two hop}
\vspace{-0.0cm}
\label{fig:twoHop_HyperExp_varR}
\end{subfigure}
\caption{Comparison of the upper bounds for varying arrival rate $R$ for hyper-exponential service-time distribution.}\label{fig:variance}
\end{figure*}
\subsection{Quality of the Heuristic Solution}
In Figure~\ref{fig:comparisonSol}, we compare the violation probabilities for rate solutions obtained by solving the UBMPs and the estimated minimum/optimum violation probability obtained by exhaustive search using simulation, for both single-hop and two-hop scenarios. Note that the difference between the violation probabilities achieved by the heuristic rate solutions and the optimum violation probability is negligible. This suggests that the solutions of the UBMPs are near optimal for $\mathcal{P}$ \jpcolor{for the considered service-time distributions}.
This can be attributed to the fact that the upper bounds have decay rate that matches the decay rate of the violation probability as stated before.
Although the $\alpha$-relaxed upper bound is much lower than the Chernoff bound, the solutions of $\alpha$-UBMP provide only slightly lower violation probability than the Chernoff-UBMP solutions. Thus, Chernoff-UBMP is relatively tractable, and the rate solutions it provides can be used as a first step toward computing close-to-optimal solutions by utilizing additional information about the service distributions.
\textbf{\textit{Remark 3:}} We note that unlike the time-average age objective, which is minimized at a $0.515$ utilization factor ($R/\mu_1$) for the D/M/1 queue~\cite{kaul_2012b}, the optimal rate solution, and in turn the utilization factor that minimizes the AoI violation probability, depends on the age limit $d$. For comparison, in Figure~\ref{fig:comparisonSol_exp} the single-hop scenario is equivalent to a D/M/1 system, and in this case the optimal utilization factors are $\{0.425,0.4,0.4,0.35,0.35,0.35,0.35,0.35,0.35\}$.
\begin{figure*}[ht!]
\centering
\begin{subfigure}[b]{0.42\textwidth}
\includegraphics[width=\textwidth]{20200714_singleHop_unitBuffer.eps}
\caption{Single hop.}
\vspace{-0.0cm}
\label{fig:singleHop_unitBuffer}
\end{subfigure} \quad \quad \quad
\begin{subfigure}[b]{0.42\textwidth}
\includegraphics[width=\textwidth]{20200714_twoHop_unitBuffer.eps}
\caption{Two hop.}
\vspace{-0.0cm}
\label{fig:twoHop_unitBuffer}
\end{subfigure}
\caption{Comparison of AoI violation probability achieved under different queue management policies under exponential-service times with rates equal to one packet/sec.}\label{fig:unitBuffer}
\end{figure*}
\jpcol{\subsection{Queue Management Policies with Unit Buffer}
Although this work is dedicated to study AoI distribution for WNCS under FCFS\footnote{By FCFS we mean First-Come-First-Serve with infinite buffer.} scheduling,
recent research results have shown that considering a unit buffer with queue management policies provide lower AoI statistics in comparison to FCFS with infinite buffer~\cite{Costa_2016,Champati_GG1_2019,Bedewy_2017a,Bedewy2019}. In this section, we investigate this effect by applying two widely referenced (in the context of AoI research) queue management policies, namely, FCFS-Unit Buffer and LGFS-Unit Buffer, to every queue in our network and then compare the achieved AoI violation probability to that we obtained earlier under FCFS.
Both policies employ a one-packet buffer; they differ in that, whenever the buffer is occupied and a new packet arrives, FCFS-Unit Buffer keeps the existing packet (dropping the new arrival), whereas LGFS-Unit Buffer replaces the existing packet with the newly arriving one.
In~\cite{Bedewy_2017a}, it was shown that LGFS-Unit Buffer minimizes the AoI process, in the stochastic ordering sense, among all non-preemptive service policies for any arrival process and service-time distributions. This implies that, for the two-queue tandem system we consider, LGFS-Unit Buffer achieves the minimum AoI violation probability. Hence, it provides a good reference against which to measure the performance of other queue management policies.
In Figures~\ref{fig:singleHop_unitBuffer} and~\ref{fig:twoHop_unitBuffer},
the AoI violation probability is plotted against the arrival rate $R$ for the FCFS as well as the two unit buffer queue management policies mentioned above, assuming exponential-service times with rate $\mu = 1$ packet/sec and for two age limits $d=\{5, 10\}$ ms.
We observe that the minimum AoI violation probability under FCFS-Unit Buffer and LGFS-Unit Buffer is comparable to that under FCFS in the single-hop scenario. However, the performance of FCFS-Unit Buffer deteriorates drastically in the two-hop case compared to the other two, see Figure~\ref{fig:twoHop_unitBuffer}. This can be attributed to the fact that, under FCFS-Unit Buffer, packets that are served at the first link may still be dropped upon arriving at the second link if the buffer there is already occupied. This effect may be exacerbated when more links (hops) are added to the tandem, and even more so when heterogeneous service processes are present along the cascade, where later links experience higher utilization. This example shows that in tandem-queuing systems it is not always true that FCFS with a finite buffer has lower AoI statistics than FCFS with an infinite buffer. We contrast this with the performance trends of these policies for a system with parallel servers between the source and the destination~\cite{Bedewy2019}, where it was demonstrated that, under Poisson arrivals and exponential service times, the minimum values of average AoI and average peak AoI achieved under FCFS with an infinite buffer are much higher than those for the finite-buffer case.
Furthermore, in both scenarios, we observe that the AoI violation probability under FCFS with infinite queue is quite close to that of LGFS-Unit Buffer at low utilization (i.e., low $R$), and more importantly the minima for both cases are reasonably close and are achieved around the same arrival rate $R$. The reason for such behaviour can be attributed to the fact that at such low arrival rate the buffer would be empty most of the time mimicking the unit buffer behaviour. In the two-hop scenario, the FCFS would still have buffer space at the second hop to hold (and not drop as FCFS-Unit Buffer may do) a packet that is successfully forwarded by the first hop, hence, countering the performance deterioration experienced by FCFS-Unit Buffer due to packet drops that we highlighted in the two-hop scenario above.
The above observations are quite interesting. They suggest that, at least for deterministic arrivals in a tandem-queuing system, using FCFS with an infinite queue may achieve a minimum AoI violation probability that is reasonably close to the optimum (achieved by LGFS-Unit Buffer among the set of non-preemptive policies~\cite{Bedewy_2017a}). This also opens an interesting research question for future work: how far can the minimum AoI violation probability achieved under FCFS be from the optimum?
}
\section{Conclusion and Future Work}\label{sec:conclusion}
We provide a general characterization of AoI violation probability for a network with periodic input arrivals. Using this characterization, we formulate an optimization problem $\mathcal P$ to find the optimal input rate which minimizes the AoI violation probability. Further, we show that $\mathcal{P}$ is equivalent to the problem of minimizing the violation probability of the departure time of a tagged arrival ${\hat{n}_\text{R}}$ over the rate region $[\frac{1}{d},\mu)$.
Noting that computing an exact expression for the violation probability is hard,
we propose an Upper Bound Minimization Problem (UBMP) and its more computationally tractable versions, Chernoff-UBMP and {$\alpha$-UBMP}, which yield heuristic rate solutions. We also present the Chernoff-UBMP for the $N$-hop tandem queuing system.
We solve Chernoff-UBMP and {$\alpha$-UBMP}~for single-hop and two-hop scenarios for three service-time distributions, namely, geometric, exponential and Erlang. Numerical results suggest that the rate solutions of {$\alpha$-UBMP}~are near optimal for $\mathcal{P}$, demonstrating the efficacy of our method.
{\color{black} Furthermore, our simulation results suggest that, FCFS performs close to the optimum achieved by LGFS-Unit Buffer and drastically outperforms FCFS-Unit Buffer in a two-hop network with respect to AoI violation probability.
This opens up an interesting research question: how far can the minimum AoI violation probability achieved under FCFS be from the optimum?
}
Another interesting research direction for future work would be to extend our results to stochastic arrivals.
We are also studying the computational complexity of solving {$\alpha$-UBMP}~and investigating more efficient solution methods, e.g., by identifying the range of $\alpha$ for which a good heuristic solution for $\mathcal{P}$ can be obtained.
{\color{black} Finally, we would like to investigate different queuing disciplines relevant to AoI.}
\section{Conclusion}
\label{sec:conclusion}
We have presented a framework for active learning of a library of Probabilistic Movement Primitives from demonstration. Our method leverages existing active learning techniques while utilizing the information encoded in the ProMPs to compute the active learning measure guiding sample selection. We demonstrated with real-robot experiments that our method provides an advantage over randomly choosing demonstrations over the space in which generalization is desired. Our method provides an uncertainty estimate of task success over a given region, enabling the robot to be deployed to situations where a teacher may not be available, e.g., remote missions in space.
In this paper, we only considered task generalization over a static environment. In future work, we will explore adapting our methods to dynamic environments in which task constraints vary over time, such as obstacles that are not fixed features of the environment. Additional future work could examine incorporating a more informed prior for classifying feasible and infeasible regions that either leverages knowledge of an environmental map or could be learned and transferred from previous tasks.
\section{Discussion}
\label{sec:discussion}
Our formulation considers a specific combination of state and context variables that admits a reasonably straightforward transformation between the context variables and the ProMP state. We deliberately avoid over-generalizing our approach; however, if an analytic transformation function between the state and context variables is available, our method can accommodate it.
\section{Experimental Results}
\label{sec:experiment_results}
\subsection{Qualitative Comparison of Uncertainty Sampling}
\label{sec:experiment_results:qualitative}
We perform a qualitative comparison of the four uncertainty sampling methods described in Section~\ref{sec:methods:active_learning}. We analyze the progression of the uncertainty sampling metrics over the grid data space described in Section~\ref{sec:experiment_setup} as more demonstrations are acquired. As seen in Figure \ref{fig:sampling}, Least Confident, Minimum Margin, and Maximum Entropy each tend to fixate selection on the boundaries between ProMPs. Once at least two neighboring ProMPs become well-estimated enough to produce meaningful probability measures, they begin to ``compete'' over the territory covered in part by both ProMPs. This behavior is not desirable for the purpose of promoting task generalization over the entire space. As such, we found that while these measures are the go-to objective functions for uncertainty-sampling approaches in supervised active learning \cite{settles2012active}, they do not provide a suitable mechanism for encouraging the creation of a ProMP library that can generalize well over a given space.
We propose the Greatest Mahalanobis Distance, described in Section~\ref{sec:methods:active_learning}, as an alternative to these standard measures. As seen in Figure \ref{fig:mahal_random}, the Mahalanobis distance objective tends to converge to low values instead of becoming heightened on boundaries between ProMPs. Even if a task instance can be achieved by multiple ProMPs (i.e. the instance exists near a boundary between two ProMPs), its minimum Mahalanobis distance is unaffected by such competition. We submit that this behavior makes Greatest Mahalanobis Distance the most suitable measure among the four compared for active learning of ProMPs, as it will tend to drive the learning into regions that have not been explored, instead of fixating on boundaries between already well-estimated regions of the task space.
\subsection{Task Success on Execution}
\label{sec:experiment_results:execution}
In order to demonstrate the efficacy of the Greatest Mahalanobis Distance measure for active learning, we compare our method against randomly selecting task instances on task executions on the robot as described in Section~\ref{sec:experiment_setup}.
In order to account for randomness in the learning process, we perform ten trials of learning ProMP libraries over the space. We then chose the ProMP library that achieved the median performance on a validation metric for testing on the robot.
We use the recorded demonstrations over the discretized space to perform the ten learning trials. In each trial we use a random seed for the random sample generation, and we use the same seed to generate a small set of initial samples to initialize our active learning method. For each trial, we generated task instances and collected 25 demonstrations for each method. We then ranked the capability of the ProMP libraries by the value of the Greatest Mahalanobis Distance computed over all task instances for use as our validation metric.
We generated a test set of ten random planar object poses to attempt with each comparison method. We emphasize that the test poses were generated from a continuous set, i.e. they are not selected from candidates in the discretized space, and as such they are not likely to be identical to any instances the methods received demonstrations for. The object was tracked and placed on the table by the user to align with the coordinate frame of the generated instance, as described in Section~\ref{sec:experiment_setup}. For each method, the most likely ProMP and condition point to produce task success were selected based on the Greatest Mahalanobis Distance measure for that object pose. The resulting ProMP policy was then executed. We used task completion as our metric of success, where the task is considered successfully completed if the object remains in the robot's grasp after the lifting phase described in Section~\ref{sec:experiment_setup} has completed.
Random selection of task instances resulted in only 2 out of 10 successful grasps. 3 of the instances were attempted but quickly failed due to the robot knocking the object off the table or pushing the object away as the fingers started closing around it. The other 5 instances could not even be attempted due to safety concerns raised by the execution previews in rviz. These were primarily cases where the hand was clearly going to collide with the object or table at a high velocity, risking potential damage to the robot hand. In summary, random selection resulted in 20\% success, 30\% failure, and 50\% infeasible due to safety concerns.
Our Greatest Mahalanobis Distance approach resulted in 6 successful grasps, 3 failed grasps, and only 1 infeasible instance due to safety concerns. Only 1 of the failed grasps was caused by the robot missing the grasp entirely, pushing the object away as the fingers closed. The other 2 failures were attempted overhead grasps in which the robot reached a suitable pre-grasp and closed the fingers around the upper portion of the drill, but then proceeded to drop the object on the lifting phase. The infeasible instance was due to what was a clear collision between the fingers and the object at a high velocity. To summarize, our method resulted in 60\% task success, 30\% failure, and only 10\% were infeasible.
We note that the recorded demonstrations were generally either an overhead grasp towards the head of the drill, or a side grasp radially located about the drill handle. Overhead grasps were more suitable when the object was located closer to the base of the robot, whereas side grasps were more appropriate the further the object was located from the base. However, from the user's perspective, overhead grasps were significantly more difficult to demonstrate successfully. This is primarily due to the weight of the drill requiring a precise grasp pre-shape from above to fully enclose the drill head without losing grip on the lifting phase.
\subsection{Learning Feasible and Infeasible Task Regions}
\label{sec:experiment_results:regions}
In some situations the boundaries of the context region \(\mathcal{C}_d\) to generalize over may not be explicitly known a priori. For such cases we propose a minor extension to our approach enabling the robot to learn an explicit infeasible region \(\mathcal{R}\) to avoid. We propose modeling this region using a Gaussian mixture model defined on the context space \(\mathcal{C}\).
To formulate this as an active learning problem we treat the learned mixture of ProMPs as a single positive class, with class probability defined by Eq.~\ref{eqn:promp_mixture}, and the GMM to represent the negative class. When asked to provide a sample the user provides a demonstration as before if the sample represents a point in the feasible region; otherwise the user simply labels the point infeasible and the active learner provides a new sample. In two-class cases, the Least-Confident and Minimum-Margin uncertainty sampling methods are equivalent to Maximum Entropy~\cite{settles2012active}.
Figure~\ref{fig:sampling:two_class} visualizes the maximum entropy associated with a feasible-infeasible learning trial. In the case pictured an obstacle sits in the center of the table, which the robot should not collide with. We see that the points of highest entropy (lighter colors) lie near the boundary between this infeasible center region and the surrounding areas, known to be feasible from example demonstrations. Thus the maximum entropy metric proves useful in this scenario, selecting samples to refine the boundary between the neighboring feasible and infeasible regions.
\section{Experimental Setup}
\label{sec:experiment_setup}
We illustrate the qualitative differences of the active learning strategies under consideration using a simple grasping task. The goal is for the robot to be able to pick up a drill placed in an arbitrary planar pose on a table in the robot's reachable workspace, as illustrated in Figures~\ref{fig:cover} and~\ref{fig:sequences}. We chose this task because it affords an easily discernible comparison of the different methods while providing a non-trivial space to optimize over. In order to maintain consistency in the demonstrations available to each comparison method, we discretized the sampling space into a grid with planar positions in 5cm intervals and planar orientations in increments of 45 degrees. The result is a total of approximately 700 possible planar poses for selection. We provided one demonstration for each of these samples through kinesthetic teaching of the robot in gravity compensation mode.
We provide a qualitative characterization of the three uncertainty sampling methods discussed in Section \ref{sec:methods:active_learning}; namely, Least Confident, Minimum Margin, and Maximum Entropy. We show that each of these measures computed over the ProMP probabilities exhibits undesired behavior in the context of active learning for ProMPs. We then provide a more rigorous quantitative analysis comparing our proposed method of Greatest Mahalanobis Distance to a random-selection strategy. We present results of executing the grasping task using both methods and show that our method provides better task generalization over the space with fewer demonstrations required from the teacher.
We performed our experiments\footnote{Data is available at \texttt{\url{http://bit.ly/al_promp_data}}.}\footnote{Code is available at \texttt{\url{http://bit.ly/al_promp_code}}.}\footnote{Video is available at \texttt{\url{https://youtu.be/3H4pLdiR8CI}}.} on a KUKA LBR4+ robot arm equipped with a ReFlex TakkTile hand \cite{odhner2014compliant,reflex_website}. Given the Cartesian waypoints generated from a ProMP policy, we formulate a Sequential Quadratic Program to obtain a joint trajectory by minimizing the L2 squared error between the end-effector pose and the Cartesian waypoints~\cite{sundaralingam2019relaxed}. We tracked the resulting trajectory with a real-time Orocos~\cite{bruyninckx2001open} joint space PD controller operated at 500Hz. Grasps were performed by assuming a canonical preshape and closing the hand until contact was made (as detected by the TakkTile~\cite{tenzer2014takktile} pressure sensors on the ReFlex fingers). We then drove the motors a small additional amount to achieve a firm grasp, following the control approach from~\cite{jentoft2014limits}. Once grasped, a pre-defined lifting sequence was executed to lift the object approximately 20cm above the table. A grasp is considered successful if the object is still in the robot's grasp at the end of the lifting sequence.
Prior to executing any trajectory on the physical robot, we perform a kinematic simulation of the robot with the environment model overlaid in rviz. We do not execute any trajectory that is clearly dangerous in terms of colliding with the environment at a non-trivial velocity.
We use the drill from the YCB dataset \cite{calli2015ycb} as the object to be grasped by the robot, as shown in Figure~\ref{fig:cover}. We track the pose of the object using the Bayesian object tracker described in \cite{wuthrich2013probabilistic}. The pose is visualized in rviz and overlaid on the camera feed coming from an ASUS X-tion Pro RGB-D camera. Selected task poses for the object are also displayed in this way, and the human user utilizes the displays to align the object pose with the generated task instance pose.
\section{Introduction}
\label{sec:intro}
Learning from demonstration~\cite{atkeson1997robot,billard2008robot,argall2009survey} offers a promising approach for robot users untrained in programming to command robots to perform common manipulation tasks. By teaching the robot through demonstration, the user can provide manipulation expertise without needing to be an expert in robotics.
Probabilistic Movement Primitives (ProMPs) provide a useful policy representation for generating adaptable robot motion learned from demonstration~\cite{paraschos2018using}. A ProMP encodes a distribution over trajectories and is typically initialized with several demonstrations from a human teacher. Task generalization to new goals and contexts is primarily achieved by conditioning the trajectory distribution on desired trajectory waypoints. This generalization mechanism has been successfully applied in a variety of applications including grasping objects while avoiding obstacles~\cite{paraschos2017prioritization}, relocating objects of unknown weight~\cite{paraschos2018probabilistic}, collaborative assembly tasks~\cite{ewerton2015learning}, and robot table tennis~\cite{gomez-gonzalez2016using}.
However, ProMPs require an indeterminate number of demonstrations to confidently generalize over the desired task space and appropriately estimate the associated task covariance~\cite{maeda2017active}. Many real-world tasks require hundreds of demonstrations to fully estimate the demonstration covariance~\cite{colome2017demonstration}. If too few demonstrations are provided, numerical issues arise in the form of singular covariance matrices, and it is common to use a non-informative prior for the covariance in order to sidestep this issue~\cite{paraschos2018using,wilbers2017context,havoutis2017supervisory}. However, the generalization capability of a ProMP can be compromised when using parameters that do not adequately estimate the true distribution associated with a task~\cite{paraschos2018using,havoutis2017supervisory}. The current alternatives available to the teacher are to either expend undue effort to exhaustively demonstrate a task to the robot, or to attempt to capture, in only a small number of demonstrations, the task variation necessary to achieve the desired generalization. There is a need for a third option that guides the teacher to provide only those demonstrations necessary to ensure the desired task generalization is achievable.
\input{cover_fig.tex}
In this paper, we present a novel active learning procedure for learning a library of ProMPs from demonstration that is capable of task generalization over a desired region. We frame this as an active learning problem by conceiving of each ProMP in the library as its own class, with the guiding intuition that we want to fully ``classify'' the space, i.e. achieve full ProMP coverage of the space. We adopt an uncertainty sampling approach~\cite{settles2012active} that enables the robot to generate the task sample for which it is least likely to ``predict'' correctly, i.e. generalize to with a ProMP. By allowing the learner to generate task samples to be ``labeled'', i.e. demonstrated by the teacher, we remove the burden of the teacher to decide which demonstration to provide next. Additionally, by informing task selection with uncertainty measures, we reduce the total number of demonstrations necessary to achieve a task compared to when demonstrations are given in an ad hoc manner.
We provide a qualitative comparison of different uncertainty sampling measures commonly used for active learning in supervised learning settings: Least Confident, Maximum Entropy, and Minimum Margin. We show that these measures are not suitable for promoting task generalization with ProMPs. We propose a new measure we call Greatest Mahalanobis Distance that effectively generates task instances that are not in close proximity to any existing ProMP distribution. We demonstrate with grasping experiments on a real KUKA robot that our method achieves task generalization more effectively, and with fewer demonstrations, than randomly sampling over the space.
To briefly summarize our contributions, in this paper we:
\begin{enumerate}
\item Formalize an active learning approach for learning manipulation tasks from demonstration using Probabilistic Movement Primitives.
\item Provide qualitative comparisons of the three most common uncertainty sampling techniques: Least Confident, Minimum Margin, and Maximum Entropy.
\item Present a novel uncertainty sampling function suitable for building a mixture of ProMPs capable of generalizing a manipulation task within a given region.
\item Leverage the probabilistic information encoded in ProMP policies to automatically determine which ProMP in the mixture a new demonstration should be incorporated into, and which ProMP to execute for a new task instance.
\end{enumerate}
We structure the remainder of the paper as follows. We first review related work in active learning from demonstration in Section~\ref{sec:related_work}. We then provide a brief technical overview of ProMPs in Section \ref{sec:methods:background} and define our novel approach to learning a library of ProMPs through active learning in Sections~\ref{sec:methods:mixture}--\ref{sec:methods:promp_context}. We present an overview of our experimental setup in Section~\ref{sec:experiment_setup} and describe the corresponding results in Section~\ref{sec:experiment_results} for grasping experiments performed on a physical KUKA LBR4+ robot. We conclude in Section~\ref{sec:conclusion} with some final remarks and directions for future work.
\section{Methods}
\label{sec:methods}
We first provide a brief background on ProMPs in Section~\ref{sec:methods:background} to introduce the concepts relevant to our contributions. We describe our approach to learning a mixture of ProMPs in Section~\ref{sec:methods:mixture}. In Section~\ref{sec:methods:active_learning} we present our novel approach for active learning of ProMPs and discuss the methods we compare. Finally, we provide details in Section~\ref{sec:methods:promp_context} on how to use our active learning method for the concrete task we consider in our experiments: reaching to grasp an object.
\subsection{Background}
\label{sec:methods:background}
We utilize a formulation of ProMPs that closely parallels that of \cite{paraschos2018using}. The ProMP trajectory distribution has the general form
\begin{equation}
\label{eqn:prob_trajectory}
p(\bm{\tau} \mid \bm{w}, \bm{\Sigma}_{\bm{y}}) = \prod_t p(\bm{y}_t \mid \bm{\Psi}_t \bm{w}, \bm{\Sigma}_{\bm{y}})
\end{equation}
where $\bm{\tau} = [\bm{y}_0, \dots, \bm{y}_T]$ is a trajectory of the state $\bm{y}_t \in \mathcal{S}$ for state space $\mathcal{S}\subseteq\mathbb{R}^d$, $\bm{\Psi}_t \in \mathbb{R}^{d \times dn}$ is a block-diagonal matrix of $n$ basis functions for each dimension of the state, $\bm{w} \in \mathbb{R}^{dn}$ is a weight vector, and $\bm{\Sigma}_{\bm{y}}$ is the observation noise. We assume, as in \cite{paraschos2018using}, that the time-dependent distributions are Gaussian, i.e. $p(\bm{y}_t \mid \bm{\Psi}_t \bm{w}, \bm{\Sigma}_{\bm{y}}) = \mathcal{N}(\bm{y}_t \mid \bm{\Psi}_t \bm{w}, \bm{\Sigma}_{\bm{y}})$. This results in $p(\bm{\tau} \mid \bm{w}, \bm{\Sigma}_{\bm{y}})$ being Gaussian as well since it is a product of Gaussian distributions.
We parameterize the distribution with $\bm{\theta} = \{\bm{\mu}_{\bm{w}}, \bm{\Sigma}_{\bm{w}}\}$ and marginalize out the weights such that
\begin{align}
p(\bm{\tau} \mid \bm{\theta}, \bm{\Sigma}_{\bm{y}}) &= \int p(\bm{\tau} \mid \bm{w}, \bm{\Sigma}_{\bm{y}}) p(\bm{w} \mid \bm{\theta}) d\bm{w}
\end{align}
Task generalization is achieved by conditioning $p(\bm{w} \mid \bm{\theta})$ on a desired trajectory waypoint $\bm{y}_t^*$ with covariance $\bm{\Sigma}_{\bm{y}_t}^*$. The updated parameters $\bm{\theta}^* = \{\bm{\mu}_{\bm{w}}^*, \bm{\Sigma}_{\bm{w}}^*\}$ are computed by
\begin{align}
\bm{\mu}_{\bm{w}}^* &= \bm{\mu}_{\bm{w}} + \bm{\Sigma}_{\bm{w}} \bm{\Psi}_t\left(\bm{\Sigma}_{\bm{y}_t}^*
+ \bm{\Psi}_t^T\bm{\Sigma}_{\bm{w}}\bm{\Psi}_t\right)^{-1}(\bm{y}_t^*
- \bm{\Psi}_t^T\bm{\mu}_{\bm{w}}) \label{eqn:condition_mu} \\
\bm{\Sigma}_{\bm{w}}^* &= \bm{\Sigma}_{\bm{w}} - \bm{\Sigma}_{\bm{w}} \bm{\Psi}_t\left(\bm{\Sigma}_{\bm{y}_t}^*
+ \bm{\Psi}_t^T\bm{\Sigma}_{\bm{w}}\bm{\Psi}_t\right)^{-1}\bm{\Psi}_t^T\bm{\Sigma}_{\bm{w}} \label{eqn:condition_sigma}
\end{align}
This closed-form update is possible because we assume, as in \cite{paraschos2018using}, that $p(\bm{w} \mid \bm{\theta})$ is Gaussian.
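As a concrete illustration, the conditioning step of Equations \ref{eqn:condition_mu} and \ref{eqn:condition_sigma} is an ordinary Gaussian conditioning update and can be sketched in a few lines of NumPy. The sketch below uses illustrative names (not part of our implementation) and adopts the convention of those equations, in which $\bm{\Psi}_t$ maps weights to the state via $\bm{\Psi}_t^T\bm{w}$:

```python
import numpy as np

def condition_promp(mu_w, Sigma_w, Psi_t, y_star, Sigma_y_star):
    """Condition a ProMP weight distribution on a desired waypoint y* at time t.

    mu_w:         (dn,)    prior weight mean
    Sigma_w:      (dn, dn) prior weight covariance
    Psi_t:        (dn, d)  basis matrix at time t (state = Psi_t.T @ w)
    y_star:       (d,)     desired waypoint
    Sigma_y_star: (d, d)   waypoint covariance (how tightly to condition)
    """
    # Gain K = Sigma_w Psi_t (Sigma_y* + Psi_t^T Sigma_w Psi_t)^{-1}
    S = Sigma_y_star + Psi_t.T @ Sigma_w @ Psi_t
    K = Sigma_w @ Psi_t @ np.linalg.inv(S)
    mu_new = mu_w + K @ (y_star - Psi_t.T @ mu_w)       # Eq. (3)
    Sigma_new = Sigma_w - K @ Psi_t.T @ Sigma_w          # Eq. (4)
    return mu_new, Sigma_new
```

As $\bm{\Sigma}_{\bm{y}_t}^* \rightarrow \bm{0}$, the conditioned mean satisfies the waypoint exactly, while the posterior covariance shrinks along the conditioned directions.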
\subsection{Learning a Mixture of ProMPs from Demonstration}
\label{sec:methods:mixture}
We employ a mixture of multiple ProMPs parameterized as $\mathcal{M} = \{(\bm{\theta}_1,\pi_1), \dots, (\bm{\theta}_J,\pi_J)\}$ where $\bm{\theta}_j = \{\bm{\mu}_{\bm{w}}^j, \bm{\Sigma}_{\bm{w}}^j\}$, since it is known that a single ProMP is not sufficient to properly characterize a given space~\cite{ewerton2015learning}. We formalize the mixture as
\begin{equation}
\label{eqn:promp_mixture}
p(\bm{\tau} \mid \mathcal{M}) = \sum_{j=1}^J \pi_j \mathcal{N}(\bm{\tau} \mid \bm{\Psi} \bm{\mu}_{\bm{w}}^j, \bm{\Psi}^T\bm{\Sigma}_{\bm{w}}^j\bm{\Psi}+\bm{\Sigma}_{\bm{y}})
\end{equation}
where $\pi_j \in [0,1]$ are mixture coefficients and $\bm{\mu}_{\bm{w}}^j$, $\bm{\Sigma}_{\bm{w}}^j$ are the mean and covariance associated with the $j^{th}$ ProMP.
The mixture $\mathcal{M}$ is learned incrementally over time as new demonstrations are acquired. In order to incorporate a new demonstration, we first learn a weight vector $\bm{w}$ from the demonstration using Ridge regression as in~\cite{paraschos2018using}. For the first demonstration received, a new ProMP is created with mean $\bm{\mu}_{\bm{w}}^1 = \bm{w}$ and covariance $\bm{\Sigma}_{\bm{w}}^1 = \gamma \mathbf{I}$, where $\mathbf{I}$ is the identity matrix and $\gamma \in \mathbb{R}^+$ is a scaling parameter. This serves as a non-informative prior for the covariance \cite{paraschos2018using}. For subsequent demonstrations, we must determine which ProMP in the mixture the new demonstration should be incorporated into. In contrast to previous work~\cite{koert2018incremental} that learns a separate model for a gating function to the mixture components, we directly utilize the probabilistic information encoded in the learned ProMPs to determine which ProMP a new demonstration should be incorporated into. We use the Mahalanobis distance~\cite{mahalanobis1936generalized} as a measure of disparity between $\bm{w}$ and each ProMP distribution $\bm{\theta}_j$ given by
\begin{equation}
\label{eqn:mahalanobis}
d(\bm{w}, \bm{\theta}_j) = \sqrt{(\bm{w} - \bm{\mu}_{\bm{w}}^j)^T (\bm{\Sigma}_{\bm{w}}^j)^{-1}(\bm{w} - \bm{\mu}_{\bm{w}}^j)}
\end{equation}
A demonstration is incorporated into the $j^{th}$ ProMP if the Mahalanobis distance between the learned weight vector and the ProMP distribution falls below a disparity threshold $\delta \in \mathbb{R}^+$, i.e. if $d(\bm{w}, \bm{\theta}_j) \leq \delta$. Instead of choosing a fixed threshold, we compute a robust measure of a disparity threshold for each ProMP utilizing the ProMP generative model. For each ProMP, we create a set of weight vector samples $\mathcal{W}_j$ and compute the value of Equation (\ref{eqn:mahalanobis}) for each $\bm{w}_i \in \mathcal{W}_j$. We use Median Absolute Deviation outlier filtering~\cite{leys2013detecting} to compute the threshold
\begin{equation}
\label{eqn:mad_filter}
\delta_j = \max\left\{d(\bm{w}_i, \bm{\theta}_j) : \frac{d(\bm{w}_i, \bm{\theta}_j) - M_j}{MAD_j} < \beta\right\}
\end{equation}
where $M_j$ is the median Mahalanobis distance of the sample weights $\bm{w}_i \in \mathcal{W}_j$ to the ProMP distribution $\bm{\theta}_j$ and $MAD_j$ is the Median Absolute Deviation computed by $MAD_j = \text{med}\left(|d(\bm{w}_i, \bm{\theta}_j) - M_j|\right)$. The parameter $\beta$ is an easily tuned parameter that governs how many outliers are discarded and has standard associated values ranging from approximately 3 (few outliers discarded) to 2 (many outliers discarded) \cite{leys2013detecting}.
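The disparity test of Equations \ref{eqn:mahalanobis} and \ref{eqn:mad_filter} can be sketched as follows; the helper names and the sample count are illustrative assumptions, not part of our implementation:

```python
import numpy as np

def mahalanobis(w, mu, Sigma_inv):
    """Mahalanobis distance d(w, theta_j) of Eq. (7)."""
    diff = w - mu
    return float(np.sqrt(diff @ Sigma_inv @ diff))

def mad_threshold(mu, Sigma, n_samples=500, beta=2.5, rng=None):
    """Robust disparity threshold delta_j of Eq. (8) for one ProMP.

    Samples weight vectors from the ProMP's own generative model, computes
    their Mahalanobis distances, and keeps the largest distance not flagged
    as an outlier by Median Absolute Deviation filtering.
    """
    rng = np.random.default_rng(rng)
    Sigma_inv = np.linalg.inv(Sigma)
    W = rng.multivariate_normal(mu, Sigma, size=n_samples)
    d = np.array([mahalanobis(w, mu, Sigma_inv) for w in W])
    M = np.median(d)                              # median distance M_j
    mad = np.median(np.abs(d - M))                # MAD_j
    inliers = d[(d - M) / mad < beta]
    return float(inliers.max())
```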
Once it is determined that $d(\bm{w}, \bm{\theta}_j) \leq \delta_j$, the new demonstration is incorporated into the $j^{th}$ ProMP by updating the ProMP's distribution parameters as
\begin{align}
\bm{\mu_w}^j &= \frac{1}{N} \sum_{i=1}^N \bm{w}_i \\
\bm{\Sigma_w}^j &= \lambda \bm{\Sigma}_0 + \frac{(1-\lambda)}{N} \sum_{i=1}^N (\bm{w}_i - \bm{\mu_w}^j)(\bm{w}_i - \bm{\mu_w}^j)^T
\end{align}
The mean $\bm{\mu_w}^j$ is computed as the Maximum Likelihood Estimate (MLE) where $N$ is the number of samples the ProMP is learned from, including the newly acquired sample. $\bm{\Sigma_w}^j$ is updated as the Maximum A Posteriori (MAP) estimate under an Inverse Wishart Prior, which amounts to a convex combination of a positive semi-definite prior $\bm{\Sigma}_0$ and the MLE of the sample covariance~\cite{murphy2012machine}. We adopt the method of \cite{wilbers2017context} and set $\bm{\Sigma}_0$ to be the estimate of $\bm{\Sigma_w}^j$ from the previous learning iteration. This ensures that $\bm{\Sigma_w}^j$ is always full rank (due to the initial diagonal prior) and that the parameter estimate is not unduly influenced by a new sample. We found this to be important in our experiments since, in general, the number of demonstrations each ProMP is learned from is considerably smaller than the dimensionality of the weight space; using an ill-conditioned matrix in the probability computations can result in non-informative values.
If it happens that $d(\bm{w}, \bm{\theta}_j) > \delta_j$ for every ProMP in the mixture, then we create a new ProMP with an uninformative prior as described previously. When initializing all ProMPs we set the initial $\bm{\Sigma_{0}}=\sigma \bm{I}$ for a small value of $\sigma \in \mathbb{R}^+$.
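The incremental parameter update above can be sketched as the following function; the names are illustrative, and the value of the blending factor $\lambda$ is a design choice:

```python
import numpy as np

def update_promp(weights, Sigma_prev, lam=0.5):
    """MLE mean and MAP covariance update for one ProMP.

    weights:    (N, dn) all weight vectors the ProMP is learned from,
                including the newly incorporated demonstration
    Sigma_prev: previous covariance estimate, used as the prior Sigma_0
    lam:        blending factor lambda in [0, 1]
    """
    W = np.asarray(weights)
    mu = W.mean(axis=0)                           # MLE mean
    centered = W - mu
    mle_cov = centered.T @ centered / len(W)      # MLE sample covariance
    # MAP estimate: convex combination with the (full-rank) prior
    Sigma = lam * Sigma_prev + (1.0 - lam) * mle_cov
    return mu, Sigma
```

Starting the recursion from $\bm{\Sigma}_0 = \sigma\bm{I}$ keeps every subsequent estimate full rank even when $N$ is much smaller than the weight dimensionality.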
\subsection{Active Learning of ProMPs}
\label{sec:methods:active_learning}
The active learner's objective is to learn a mixture of ProMPs $\mathcal{M}$ that achieves task generalization over some region of its environment. We formalize this region by defining a continuous context space $\mathcal{C}$ that specifies the task to be performed in terms of context variables \cite{kober2011reinforcement} (e.g. the pose of an object to be grasped). We assume there is a subset $\mathcal{C}_d \subseteq \mathcal{C}$ over which task generalization is desired.
We estimate the achievable feasible region by the coverage achieved by the mixture of ProMPs at the timestep relevant for the task context $\bm{\eta}$. Because the context variable is not, in general, a direct subset of the ProMP state, we allow for a mapping $g:\mathcal{C} \rightarrow \mathcal{S}$ between the context space $\mathcal{C}$ and the ProMP state space $\mathcal{S}$. We define one such mapping below in Section~\ref{sec:methods:promp_context} suitable for our experimental grasping task.
We formalize our active learning problem by conceiving of each ProMP in the mixture to be its own class. We then employ active learning through uncertainty sampling~\cite{settles2012active}, in which the learner generates a new task instance for which the teacher can provide a demonstration governed by
\begin{equation}
\bm{\eta}^* = \argmax_{\bm{\eta} \in \mathcal{C}_d} U(\bm{\eta})
\end{equation}
where $\bm{\eta} \in \mathcal{C}_d$ is a context variable sufficient to describe the task and $U(\bm{\eta})$ is an uncertainty sampling function that measures the uncertainty the learner has about characterizing a given task instance as being a member of one of the available classes. We qualitatively compare the three most common uncertainty sampling measures \cite{settles2012active}:
\noindent\textbf{Least Confident:}
\begin{equation}
\label{eqn:least_confident}
U_{lc}(\bm{\eta}) = 1 - p(z_1 \mid \bm{\eta})
\end{equation}
\noindent\textbf{Minimum Margin:}
\begin{equation}
\label{eqn:min_margin}
U_{mm}(\bm{\eta}) = p(z_2 \mid \bm{\eta}) - p(z_1 \mid \bm{\eta})
\end{equation}
\noindent\textbf{Maximum Entropy:}
\begin{equation}
\label{eqn:max_entropy}
U_{me}(\bm{\eta}) = -\sum_{z = 1}^{J} p(z \mid \bm{\eta}) \log p(z \mid \bm{\eta})
\end{equation}
In Equations \ref{eqn:least_confident}--\ref{eqn:max_entropy}, $p(z \mid \bm{\eta})$ indicates the probability of a class label $z$ being given to instance $\bm{\eta}$, where the class label corresponds to any one of the $J$ ProMPs. In Equations \ref{eqn:least_confident} and \ref{eqn:min_margin}, $z_1 = \argmax_z p(z \mid \bm{\eta})$ is the most likely label for instance $\bm{\eta}$ while $z_2$ in Eq.~\ref{eqn:min_margin} is the second most likely label. Intuitively, the Least Confident measure (Eq.~\ref{eqn:least_confident}) selects the task instance \(\bm{\eta}^*\) whose highest probability over all labels $z \in \mathcal{Z}$ is lowest compared to all other instances $\bm{\eta} \in \mathcal{C}_d$. The Minimum Margin measure (Eq.~\ref{eqn:min_margin}) chooses the instance with the greatest ambiguity between its two most likely classifications. The Maximum Entropy measure (Eq.~\ref{eqn:max_entropy}) identifies the instance with the highest label uncertainty over all classes.
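Given a matrix of class probabilities over a set of candidate instances, the three measures reduce to a few array operations. The following is a minimal sketch with illustrative names, where each row of \texttt{P} holds $p(z \mid \bm{\eta})$ over the $J$ ProMPs for one candidate:

```python
import numpy as np

def least_confident(P):
    """Eq. (10): high score when even the best label is improbable."""
    return 1.0 - P.max(axis=1)

def minimum_margin(P):
    """Eq. (11): high score when the top two labels are nearly tied."""
    top2 = np.sort(P, axis=1)[:, -2:]       # two largest per row, ascending
    return -(top2[:, 1] - top2[:, 0])       # negated margin

def max_entropy(P):
    """Eq. (12): high score when label uncertainty is spread over classes."""
    with np.errstate(divide="ignore", invalid="ignore"):
        t = np.where(P > 0, P * np.log(P), 0.0)
    return -t.sum(axis=1)

# The learner then queries the candidate maximizing the chosen measure, e.g.:
# eta_star = candidates[np.argmax(max_entropy(P))]
```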
We define an additional, novel uncertainty sampling function:
\noindent\textbf{Greatest Mahalanobis Distance}
\begin{equation}
\label{eqn:greatest_mahalanobis}
U_{gm}(\bm{\eta}) = \min_{j} d\left(\bm{\eta}, \bm{\theta}_{\bm{\eta}}^j\right)
\end{equation}
where $d(\cdot)$ is the Mahalanobis distance defined in Equation~\ref{eqn:mahalanobis}, $j$ indexes over ProMPs, and $\bm{\theta}_{\bm{\eta}}^j = \{\bm{\mu}_{\bm{\eta}}^j, \bm{\Sigma}_{\bm{\eta}}^j\}$ defines a distribution over the context variable achieved by mapping the $j^{th}$ ProMP distribution parameters to the context space. We provide details for the specific mapping we utilize in this paper below in Section~\ref{sec:methods:promp_context}. Our Greatest Mahalanobis Distance approach is similar to Least Confident, but instead of choosing the instance with the lowest probability over classes (ProMPs), it selects the instance whose closest ProMP distribution is the farthest away.
We found that in practice, the Mahalanobis distance is less susceptible to computational issues than the probability values computed for the other uncertainty sampling functions. The density function for a Gaussian distribution requires dividing by the determinant of the covariance matrix, which is equivalent to dividing by the product of the eigenvalues of the covariance matrix. This value can be extremely small when the covariance is estimated from a small sample set, causing the computation to become unstable. We show in our experiments in Section~\ref{sec:experiment_results} that the Greatest Mahalanobis Distance objective encourages the learner to select instances far away from instances it has already received demonstrations for, while the other uncertainty sampling functions tend to ``compete'' along the boundaries of the regions covered by adjacent ProMPs.
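To make the selection rule concrete, the Greatest Mahalanobis Distance query can be sketched over a finite candidate set (e.g. a discretization of $\mathcal{C}_d$, as in our experiments). Names are illustrative:

```python
import numpy as np

def greatest_mahalanobis_query(candidates, promps):
    """Select the candidate context farthest from its nearest ProMP (Eq. 14).

    candidates: (K, m) candidate context variables eta in C_d
    promps:     list of (mu_eta, Sigma_eta) pairs, the per-ProMP
                distributions already mapped into the context space
    """
    invs = [np.linalg.inv(S) for _, S in promps]
    scores = []
    for eta in candidates:
        dists = [np.sqrt((eta - mu) @ Sinv @ (eta - mu))
                 for (mu, _), Sinv in zip(promps, invs)]
        scores.append(min(dists))               # distance to nearest ProMP
    return candidates[int(np.argmax(scores))]   # farthest such candidate
```

Because the inner minimum ignores how many ProMPs are nearby, candidates on boundaries between two well-estimated ProMPs score low, and the query is driven toward unexplored regions instead.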
Given the new task instance $\bm{\eta}^*$ generated by the uncertainty sampling optimization, the teacher provides a demonstration. The demonstration is then incorporated into the mixture of ProMPs as described in Section~\ref{sec:methods:mixture}. The procedure iterates until a stopping criterion is met, e.g. the task success rate over a validation set reaches an acceptable percentage.
\subsection{Example ProMP Context}
\label{sec:methods:promp_context}
To be concrete in our formulation, we present a context mapping for the task of grasping an object placed arbitrarily on a surface. We use this mapping in our experiments presented later in Section~\ref{sec:experiment_results}. The task requires the robot to pick up an object located arbitrarily on a table surface. The ProMP state consists of the end-effector pose with respect to the robot's base frame $^{0}T_{ee}$ (e.g. from forward kinematics of the joint state), while the context space is the pose of the object with respect to the base frame $^{0}T_{obj}$ (e.g. from an object tracker using an RGB-D camera~\cite{wuthrich2013probabilistic}). Once a desired end-effector pose in the object frame $^{obj}T_{ee}$ is known, the mapping $g:\mathcal{C}\rightarrow\mathcal{S}$ from context space to state space, as described in Section~\ref{sec:methods:active_learning}, is achieved by a simple coordinate frame transformation:
\begin{equation}
g\left(^{0}T_{obj}\right) = {^{0}T_{obj}} \cdot {^{obj}T_{ee}} = {^{0}T_{ee}}
\end{equation}
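As a minimal sketch (the poses below are made up for illustration), the mapping is a single product of $4\times4$ homogeneous transforms:

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# ^0 T_obj: object pose in the robot base frame (here: a pure translation)
T_0_obj = make_transform(np.eye(3), np.array([0.5, 0.2, 0.1]))
# ^obj T_ee: desired end-effector pose relative to the object
T_obj_ee = make_transform(np.eye(3), np.array([0.0, 0.0, 0.15]))

# g(^0 T_obj) = ^0 T_obj . ^obj T_ee = ^0 T_ee
T_0_ee = T_0_obj @ T_obj_ee
print(T_0_ee[:3, 3])   # -> [0.5  0.2  0.25]
```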
The pose $^{obj}T_{ee}$ could be specified manually or from the output of a grasp planner; however, we instead employ a Gaussian Mixture Model (GMM) over successful end-effector poses in the object frame. The GMM is defined by
\begin{equation}
\label{eqn:ee_gmm}
p(\bm{y}_t) = \sum_{r=1}^R \beta_r \mathcal{N}(\bm{y}_t \mid \bm{\mu}_{\bm{y}_t}^r, \bm{\Sigma}_{\bm{y}_t}^r)
\end{equation}
where $\beta_r \in [0,1]$ are the mixture coefficients and $\bm{\mu}_{\bm{y}_t}^r, \bm{\Sigma}_{\bm{y}_t}^r$ are the mean and covariance of the end-effector pose in the object frame for the $r^{th}$ component. A visualization of the mean components learned from the demonstrations given in our experiments can be seen in Figure~\ref{fig:ee_gmm}. Using the known pose of the object in the base frame, we transform each $\bm{\mu}_{\bm{y}_t}^r$, $\bm{\Sigma}_{\bm{y}_t}^r$ to get $\bm{\tilde \mu}_{\bm{y}_t}^r$, $\bm{\tilde \Sigma}_{\bm{y}_t}^r$, which are the mean and covariance of the end-effector with respect to the base frame.
We then leverage these parameters as the condition points for the ProMP, i.e. we set $\bm{y}_t^* = \bm{\tilde \mu}_{\bm{y}_t}^r$ and $\bm{\Sigma}_{\bm{y}_t}^* = \bm{\tilde \Sigma}_{\bm{y}_t}^r$ in Equations \ref{eqn:condition_mu} and \ref{eqn:condition_sigma}. We then compute the probability of a particular task being achievable by the ProMP mixture as
\begin{equation}
\label{eqn:p_feasible}
p(\bm{\eta} \mid z=j) = \sum_{r=1}^R \beta_r \mathcal{N}(\bm{\tilde y}_t \mid \bm{\Psi}_t \bm{\tilde{\mu}}_{\bm{w}}^j, \bm{\Psi}_t^T \bm{\tilde{\Sigma}}_{\bm{w}}^j \bm{\Psi}_t+\bm{\Sigma}_y)
\end{equation}
where $z=j$ indicates the $j^{th}$ ProMP in the mixture; $\bm{\tilde y}$ is the ProMP state generated from the transformation of context variable; $\bm{\tilde{\mu}}_{\bm{w}}^j$ and $\bm{\tilde{\Sigma}}_{\bm{w}}^j$ are the posterior distribution parameters in weight space computed from Equations \ref{eqn:condition_mu} and \ref{eqn:condition_sigma}; and $\beta_r$ are the same as in Equation~\ref{eqn:ee_gmm}.
\input{ee_gmm_fig.tex}
We interpret Equation~\ref{eqn:p_feasible} as a measure of how capable the ProMP is of achieving the task when conditioned on the task-relevant pose determined by the context variable. There is little guidance in the literature for how to set $\bm{\Sigma}_{\bm{y}_t}^*$ and it is typically taken to be a scaled identity matrix~\cite{gomez-gonzalez2018adaptation}. We highlight this key advantage of our choice to learn the GMM: we obtain meaningful values for both the mean and the covariance for use in this conditioning operation.
We note that we are not able to directly compute the probabilities $p(z \mid \bm{\eta})$ for the uncertainty sampling measures (Eqs.~\ref{eqn:least_confident}--\ref{eqn:max_entropy}). Thus we use Bayes theorem and Eq.~\ref{eqn:p_feasible} giving
\begin{align}
p(z \mid \bm{\eta}) &= \frac{p(\bm{\eta} \mid z)p(z)}{p(\bm{\eta})}
= \frac{p(\bm{\eta} \mid z)p(z)}{\sum_{z_i}p(\bm{\eta} \mid z_i)p(z_i)}
\end{align}
where $z_i$ ranges over all possible classes. We use a uniform, uninformative prior for $p(z)$ to reflect our assumption that without further knowledge, any ProMP in the mixture might potentially be used to execute a task. More intelligent priors are worth exploring and we leave this for future work.
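With a uniform prior, the posterior reduces to a simple normalization of the per-ProMP likelihoods; a minimal sketch with hypothetical likelihood values:

```python
import numpy as np

def promp_posterior(likelihoods):
    """p(z | eta) from per-ProMP likelihoods p(eta | z_j) under a uniform
    prior: the prior p(z) cancels, leaving a normalization over classes."""
    likelihoods = np.asarray(likelihoods, dtype=float)
    return likelihoods / likelihoods.sum()

# Hypothetical likelihoods p(eta | z_j) for a 3-ProMP mixture
post = promp_posterior([0.02, 0.06, 0.12])
print(post)   # -> [0.1 0.3 0.6]
```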
\section{Related Work}
\label{sec:related_work}
Active learning, where a learner actively poses queries to a teacher for input to reduce sample complexity, has been widely applied in supervised learning settings \cite{settles2012active}. Our approach is most suitably situated in the literature on \textit{active learning from demonstration} \cite{maeda2017active, silver2012active, grollman2007dogged, kroemer2010combining,chen2018active}, also referred to as \textit{active imitation learning}~\cite{shon2007active,judah2014active}, in which the learner generates task instances for which the teacher may provide a demonstration. Active learning from demonstration has been applied to autonomous navigation \cite{silver2012active}, object seeking with a quadruped robot \cite{grollman2007dogged}, grasping objects \cite{kroemer2010combining}, reaching to task space positions with a manipulator \cite{maeda2017active}, and generating smooth robot motion from a latent-space encoding~\cite{chen2018active}. Also included in this area are approaches where the learner does not request full task demonstrations, but instead asks for the action to take in the particular state that it is in \cite{shon2007active,judah2014active,chernova2009interactive}. These approaches, however, are only applicable to finite action spaces where actions are easily specified by a teacher.
The approaches most closely related to ours are those using active learning to learn Dynamic Movement Primitives (DMPs) for grasping \cite{kroemer2010combining} and reaching tasks \cite{maeda2017active}. In \cite{kroemer2010combining}, a hybrid system is presented such that a high-level active learner generates grasp configurations based on a variant of Upper Confidence Bound (UCB) policies \cite{sutton2018reinforcement}, and a low-level reactive DMP controller executes the grasp motion based on a task demonstration. In \cite{maeda2017active}, the robot incrementally learns DMPs for reaching pre-defined positions in its workspace. A Gaussian process is used for sampling trajectories with an associated variance. If a function of the variance is below an uncertainty threshold, then an existing DMP is used with the goal appropriately adapted. Otherwise, the human user is asked for a new demonstration to reach the new goal position. By utilizing ProMPs in our method instead of DMPs, we are able to achieve greater generalization capabilities \cite{paraschos2018using} while leveraging the probabilistic information already encoded in the policy representation to compute confidence measures. Additionally, we are able to provide a probabilistic measure of the robot's ability to generalize a task in a given region, as opposed to \cite{maeda2017active} which can only say for a given instance whether or not the robot is confident it can execute the motion. Our approach therefore has the advantage that the robot, after learning a task, can be deployed with an associated uncertainty estimate that it will succeed on any task instance it is given to perform.
Also relevant to our approach is work in the area of active learning for parameterized skills \cite{daSilva2014active}. In \cite{daSilva2014active} the agent selects tasks to practice in a reinforcement learning setting with the objective of optimizing for expected improvement in skill performance. Task competency is measured over a recursively split goal space in \cite{baranes2013active} for an intrinsically-motivated agent. Active Contextual Policy Search \cite{fabisch2014active} considers a learner that generates task contexts to condition a high-level policy on, such that a lower-level policy can be optimized to maximize an intrinsic reward function. These works are each applied in reinforcement learning settings and are agnostic to any particular policy representation. Our approach, on the other hand, makes use of human demonstrations and, by committing to a particular policy representation (ProMPs), we are able to compute task competency in a unified manner utilizing information from the policy representation itself.
\title{\LARGE \bf Active Learning of Probabilistic Movement
Primitives }
\IEEEoverridecommandlockouts
\author{Adam Conkey$^1$ and Tucker Hermans$^{1,2}$%
\thanks{$^{1}$School of Computing; Robotics Center; University of Utah, USA. $^{2}$NVIDIA, USA. \emph{Email: adam.conkey@utah.edu, thermans@cs.utah.edu}}}
\newcommand{\numValidation}{10}
\newcommand{\numTrials}{5}
\newcommand{\numValidationInterval}{5}
\newcommand{\numHoldout}{100}
\newcommand{\liftHeight}{30}
\begin{document}
\def\baselinestretch{0.993}\large\normalsize
\maketitle
\def\baselinestretch{0.993}\large\normalsize
\begin{abstract}
A Probabilistic Movement Primitive (ProMP) defines a distribution over trajectories with an associated feedback policy. ProMPs are typically initialized from human demonstrations and achieve task generalization through probabilistic operations. However, there is currently no principled guidance in the literature to determine how many demonstrations a teacher should provide and what constitutes a ``good'' demonstration for promoting generalization. In this paper, we present an active learning approach to learning a library of ProMPs capable of task generalization over a given space. We utilize uncertainty sampling techniques to generate a task instance for which a teacher should provide a demonstration.
The provided demonstration is incorporated into an existing ProMP if possible, or a new ProMP is created from the demonstration if it is determined that it is too dissimilar from existing demonstrations. We provide a qualitative comparison between common active learning metrics; motivated by this comparison we present a novel uncertainty sampling approach named ``Greatest Mahalanobis Distance.'' We perform grasping experiments on a real KUKA robot and show our novel active learning measure achieves better task generalization with fewer demonstrations than a random sampling over the space.
\end{abstract}
\input{intro.tex}
\input{related_work.tex}
\input{methods.tex}
\input{experiment_setup.tex}
\input{experiment_results.tex}
\input{conclusion.tex}
\bibliographystyle{IEEEtran}
\section{Introduction}
\begin{figure}[h]
\centering
\includegraphics[width=0.93\linewidth]{lm_eg.jpg}
\caption{The question ``Did Arsenal play Man United?'' cannot be answered because the predicate ``obliterate'' from the text snippet isn't in the Entailment Graph. A Language Model embeds ``obliterate'' so a nearest neighbor in the EG can be found, completing the directional inference.}
\label{fig:method-illustration}
\vspace{-0.25cm}
\end{figure}
An Entailment Graph (EG) is a learned structure for making natural language inferences of the form [premise] \textit{entails} [hypothesis], such as ``if Arsenal \textbf{defeated} Man United, then Arsenal \textbf{played} Man United.'' An EG consists of a set of vertices (typed natural language predicates), and a set of edges (directional entailments between predicates). They are constructed in an unsupervised manner using the Distributional Inclusion Hypothesis \cite{geffet-dagan-2005-distributional}: a representation is generated for each predicate based on its distribution with arguments in a training corpus, and these representations are used in learning directional entailments.
EGs are useful in tasks like knowledge graph link prediction \cite{hosseini2019-duality, hosseini-etal-2021-open-domain} and question-answering from text \cite{lewis_semantics_2013, mckenna-etal-2021-multivalent}; and as an unsupervised method, to build them only requires a parser and entity linker for a new language domain \cite{li-etal-2022-cross-lingual}. Further, EGs are fully explainable, because model decisions can be traced back to sentences in training data.
However, EGs suffer from sparsity of two kinds. One kind is \textit{edge sparsity}, arising from the fact that authors usually omit facts that the reader can be expected to infer for themselves, making it hard to learn edges. Recent work has improved on EG connectivity \cite{berant-etal-2015-efficient, hosseini2021unsupervised, chen-etal-2022-transitivity} but little attention has been paid to the related problem of \textit{vertex sparsity}, arising from predicates that are unseen at all in training. Because EGs are learned structures of predicates, they cannot reason about novel queries: in an inference task, if \textit{either} the premise or hypothesis predicate has not been seen in training (thus is missing from the graph), there is no possibility to have learned an edge, and the model will have no chance to report an entailment. In fact, many EG demonstrations don't achieve more than 50\% of task recall.
Like words, predicates occur in a Zipfian frequency distribution with an unboundedly long tail of rare predicates, so it is impractical to solve vertex sparsity by scaling up distributional learning.
Instead, we present a method for smoothing an Entailment Graph using a Language Model to search within the graph for approximations of a missing target predicate, completing otherwise impossible EG inferences. We illustrate the method in Figure~\ref{fig:method-illustration}. The paper offers three contributions:
\begin{enumerate}
\item A novel method for unsupervised smoothing of Entailment Graph vertices using a Language Model to find approximations of missing predicates.
\item An analysis of Language Model embedding space and a discussion of why this method is naturally suited to premise smoothing, but not hypothesis smoothing.
\item A theory for smoothing with high directional precision by constructing transitive inference chains, demonstrated on both premise and hypothesis.
\end{enumerate}
\section{Background}
Unsupervised Entailment Graph research has mainly oriented toward edges: overcoming edge sparsity using graph properties like transitivity \cite{berant-etal-2010-global, berant-etal-2015-efficient, hosseini2018}, incorporating contextual or extralinguistic information to improve edge precision \cite{hosseini-etal-2021-open-domain, guillou-etal-2020-incorporating}, and research into the underlying theory of the Distributional Inclusion Hypothesis \cite{kartsaklis-sadrzadeh-2016-distributional}. Recently, \citet{mckenna-etal-2021-multivalent} interpret the DIH in terms of eventualities which may have variable argument numbers, learning edges between predicates of different valencies. Though this work expands the kinds of graph vertices, it does not address the problem of vertex sparsity, which is especially severe for binary predicates. To our knowledge, no other work in unsupervised entailment models has approached this issue of vertex sparsity.
Older language models like word2vec \cite{mikolov2013word2vec} learned representations for a fixed vocabulary of words, and couldn't be used to estimate probabilities for unseen words. Earlier methods like those based on n-grams smoothed the distribution using mathematical re-estimation. However, recent sub-symbolic character-based models like ELMo \cite{peters-etal-2018-deep} and WordPiece models like BERT \cite{devlin-etal-2019-bert} prove effective at generalizing from seen words to unseen ones. We leverage sub-symbolic encoding in this work as our means of smoothing, to generalize beyond a fixed vocabulary of predicates.
\section{Smoothing an Entailment Graph using a Language Model}
In this work we consider Entailment Graphs of typed binary predicates, as is the common mode of EG research. An Entailment Graph is defined $G = (V, E)$, consisting of a set of vertices $V$ of natural language predicates (with argument types in the set $\mathcal{T}$), and directed edges $E$ indicating entailments.
Binary predicates in $V$ have two argument slots labeled with their types. For example, the predicate $\textsc{travel.to}$(:person, :location) $\in V$, and the types :person, :location $\in \mathcal{T}$. An example directional entailment $\textsc{travel.to}$(:person, :location) $\vDash$ $\textsc{arrive.at}$(:person, :location) $\in E$.
Our smoothing method may be applied to any EG. In this work we show the complementary benefits of vertex-smoothing with existing methods in improving edge sparsity by comparing to two related baseline models, described in \S\ref{ssec:smoothing_exps}. These EGs are learned from the same set of vertices, but are constructed differently so have different edges. The FIGER type system is used for these experiments \cite{ling-figer}, where $|\mathcal{T}| = 49$. Typing aids EG precision by grouping predicates and their entailments by type-pair into $\mathcal{G}$ subgraphs: these models have up to $|\mathcal{T}|^2 = 49^2$ typed subgraphs $g \in \mathcal{G}$ in which learning is distributed. For example, the predicate $\textsc{kill}$(:medicine, :disease) in the subgraph $g^{(\textit{medicine-disease})}$ has different learned entailments than $\textsc{kill}$(:person, :person).
\begin{table*}[h]
\centering
\begin{tabular}{@{}ll@{}}
\toprule
\textbf{Typed Predicate} & \textbf{Constructed Sentence} \\
\midrule
\relation{(join.1,join.2)\#person\#organization} & ``person \predicatetext{join} organization'' \\
\relation{(give.2,give.to.2)\#medicine\#person} & ``\predicatetext{give} medicine \predicatetext{to} person'' \\
\relation{(export.1,export.to.2)\#location\_1\#location\_2} & ``location\_1 \predicatetext{export to} location\_2'' \\
\bottomrule
\end{tabular}
\caption{For an input typed predicate $x$, $L(x)$ constructs a pseudo-sentence and encodes it with a Language Model. The output representation is the average of the sentence vectors corresponding to the \predicatetext{predicate}.}
\label{tab:sents}
\end{table*}
\subsection{Smoothing Method}
Our method rests on the assumption that existing Entailment Graphs contain enough information to enable discovery of suitable replacements for an unseen target predicate that are already present in the graph, using a Language Model. For example, in the sports domain, the EG may be missing a rare predicate \textsc{obliterate} but contain similar predicates \textsc{beat} and \textsc{defeat} which can be found as close neighbors in Language Model embedding space. These nearby predicates are expected to have similar semantics (and entailments) to the unseen target predicate, and will thus be suitable replacements. See Figure~\ref{fig:method-illustration} for an illustration.
We define the smoothed retrieval function $S$, which replaces the typical method for retrieving a target predicate vertex $x$ from a typed subgraph $g^{(t)} = (V^{(t)}, E^{(t)})$, with typing $t \in \{\mathcal{T} \times \mathcal{T}\}$.
Ahead of test-time, for each typed subgraph $g^{(t)}$ we encode the EG predicate vertices $V^{(t)}$ as a matrix $\textbf{V}^{(t)}$. For each predicate $v^{(t)}_i \in V^{(t)}$, we encode $L(v_i^{(t)})=\textbf{v}_i^{(t)}$, a row vector $\textbf{v}^{(t)}_{i} \in \textbf{V}^{(t)}$.
At test-time we encode a corresponding vector for the target predicate $x$, $L(x) = \textbf{x}$. Then $S$ retrieves the K-nearest neighbors of $x$ in $g^{(t)}$:
\begin{align*}
& S(x, g^{(t)}, K) = \\
& \{v^{(t)}_{i} \mid v^{(t)}_{i} \in V^{(t)}, \text{ if } \textbf{v}^{(t)}_{i} \in \textit{KNN}(\textbf{x}, \textbf{V}^{(t)}, K)\}
\end{align*}
We define $L(\cdot)$ and configure $\textit{KNN}(\cdot)$ as follows.
$L(\cdot)$ is an unsupervised encoder for any typed natural language predicate using a pretrained Language Model. We first construct a short sentence from the typed predicate using each type as a stand-in argument in a CCG argument structure \cite{syntactic-process}, and then the sentence is encoded by the Language Model. For these experiments we use RoBERTa \cite{liu2019roberta}, a general-purpose contextual Language Model which shares a transformer architecture with other popular LMs but has robustly pretrained on 160GB of unlabeled text. We extract the embeddings of WordPieces corresponding to the predicate only, and average them to make the resulting predicate vector. See Table~\ref{tab:sents} for examples.
For the K-nearest neighbors search metric we use Euclidean Distance (l2 norm) from the target vector $\textbf{x}$ in embedding space. We precompute a BallTree which spatially organizes the EG vectors to speed up search \cite{scikit-learn}. At best, this reduces search time from linear in the number of vertices $|V^{(t)}|$ to log $|V^{(t)}|$.
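A brute-force version of the retrieval step is sketched below (the actual system precomputes a scikit-learn BallTree over each subgraph; the 2-d embeddings here are toy values chosen for illustration):

```python
import numpy as np

def knn_retrieve(x, V, predicates, K):
    """S(x, g, K): return the K predicates whose embeddings are nearest to x
    by Euclidean distance (brute force; the paper precomputes a BallTree)."""
    dists = np.linalg.norm(V - x, axis=1)
    idx = np.argsort(dists)[:K]
    return [predicates[i] for i in idx]

# Toy subgraph: 2-d embeddings for illustration only
predicates = ["beat", "defeat", "play", "visit"]
V = np.array([[1.0, 1.0], [1.1, 0.9], [0.0, 0.0], [5.0, 5.0]])
x = np.array([1.06, 0.94])           # embedding of the unseen "obliterate"
neighbors = knn_retrieve(x, V, predicates, K=2)
print(neighbors)   # -> ['defeat', 'beat']
```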
\subsection{Testing Datasets}
Several datasets now exist for testing general predicate paraphrase and entailment, but we argue that the most important consideration when modifying Entailment Graph predictions is maintaining the capability for strong directional inference. A \textit{directional inference} is stricter than paraphrase or similarity, in that it is true only in one direction, but not both, e.g. $\textsc{defeat} \vDash \textsc{play}$ but $\textsc{play} \nvDash \textsc{defeat}$. Making these inferences is difficult, but crucial for nuanced language understanding. Therefore, we demonstrate our smoothing method on two fully directional datasets, which test both directions of these kinds of inferences, creating a 50\% positive/50\% negative class balance.
\textbf{Levy/Holt Dataset.} The Levy/Holt dataset has been explored thoroughly in previous work \cite{hosseini2021unsupervised, guillou-etal-2021-blindness, li-etal-2022-cross-lingual, chen-etal-2022-transitivity}. This dataset has the distinction of including inverses for all items, allowing systematic investigation of directionality, although due to its construction method it contains a high proportion of reversible entailments (paraphrases) and selection-bias artifacts that can be picked up by fine-tuning in supervised models. We focus on the 1,784 questions forming the purely directional subset, which is more challenging.
\textbf{ANT Dataset.} ANT\footnote{To be released soon in a separate paper.} is a new, high-quality dataset improving on Levy/Holt, which tests predicate entailment in the general domain. It was created by expert annotation of entailment relations between predicate clusters, expanded automatically using WordNet and other dictionary resources into thousands of test questions of the format ``given [premise], is [hypothesis] true?'' We test on the purely directional subset of 2,930 questions.
See Table~\ref{tab:datasets} for dataset examples. Each dataset comes preprocessed to identify argument types using CoreNLP \cite{CoreNLP, finkel_NER} which roughly align with the EG's FIGER types. Typed relations are then extracted by the MoNTEE system \cite{devroe2021modality}, which are used as queries to our models.
\begin{table}[t]
\centering
\begin{tabular}{@{}p{0.9\linewidth}@{}}
\toprule
``The audience applauded the comedian'' $\vDash$ ``The audience observed the comedian'' \\[0.6cm]
``Apple supported Samsung'' $\vDash$ ``Apple had an opinion on Samsung'' \\[0.6cm]
``The laptop was assessed against the criteria'' $\nvDash$ ``The laptop satisfied the criteria'' \\
\bottomrule
\end{tabular}
\caption{Example queries from the (development) directional subset of ANT.}
\label{tab:datasets}
\end{table}
\subsection{Experiments with P and H smoothing}
\label{ssec:smoothing_exps}
We experiment by smoothing two recent Entailment Graphs: the graph of \citet{hosseini2018} (we refer to this model as \textbf{GBL} for short) and the state-of-the-art graph in \citet{hosseini-etal-2021-open-domain} (\textbf{CTX} for short). Importantly, these graphs are constructed from the same set of predicate vertices, but CTX improves upon the number of learned edges over GBL. GBL introduces a global edge-learning step after local learning, and CTX later improves on the local edge-learning step using a contextual link-prediction objective, then also globalizes. Both have previously scored highly amongst unsupervised models on the full Levy/Holt dataset.
We run two experiments on each dataset. (1) We apply our unsupervised smoothing method to augment the \textit{premise} of each test entailment relation, generating $K$ new target premises for each relation. Separately, (2) we smooth the \textit{hypothesis} of each test relation the same way. For both we try different values of the hyperparameter $K \in$ \{2, 3, 4\}.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{p_h_lm_smoothing_ant.png}
\caption{LM smoothing on ANT. Comparison of P(remise) and H(ypothesis) smoothing on the CTX model. We explored $K \in \{2,3,4\}$ and show the best $K_{premise}=4$ and $K_{hypothesis}=2$.}
\label{fig:p_h_lm_ant_ctx}
\end{figure}
\subsection{Results}
Plots for model performances are shown in Figure~\ref{fig:p_h_lm_ant_ctx}, in which we compare P-smoothing vs. H-smoothing of the CTX graph using the best $K_{premise}=4$ and $K_{hypothesis}=2$. In Appendix~\ref{sec:gbl_appendix} we also show P-smoothing in particular of the CTX graph vs. the GBL graph. For all models (best K selected) on both datasets we show summary statistics in Table~\ref{tab:results}, including area under the precision-recall curve (AUC) and average precision (AP) across the range of recall achieved. A sample of model outputs is given in Table~\ref{tab:sample}.
Our method of selecting nearest neighbors of a target predicate in an EG by their LM embedding distance behaves very differently when smoothing the premise vs. the hypothesis. We observe that P-smoothing is very effective at extending both the recall and precision of both Entailment Graphs it is applied to, with a slight advantage in AUC to higher values of K. When applied to the SOTA model CTX on the ANT dataset, our smoothing method increases maximum recall by 25.1 absolute percentage points to 74.3\% while increasing average precision from 66\% to 68\%. On the Levy/Holt dataset we similarly increase maximum recall by 16.3 absolute pp to 62.7\% while exceeding average precision. However, H-smoothing is actually detrimental: despite improving recall, average precision on ANT is severely cut to 59\%, with the lowest confidence predictions no better than chance (50\% precision).
We also note that P-smoothing greatly improves recall and precision when applied to \textit{both} GBL and CTX graphs. This shows the complementary nature of improving vertex sparsity with improving edge sparsity in Entailment Graphs: these techniques improve different aspects of the graph and improvements can be applied together. Since effects are similar for both Entailment Graphs, from now on we show results only for CTX, and report additional results for the weaker GBL in Appendix~\ref{sec:gbl_appendix}.
\begin{table}[tb]
\small
\begin{tabular}{@{}lcccc@{}}
\toprule
& \multicolumn{2}{c}{\bf ANT} & \multicolumn{2}{c}{\bf Levy/Holt} \\
\cmidrule(lr){2-3} \cmidrule(lr){4-5}
\textbf{Model} & AUC & AP & AUC & AP \\
\midrule
GBL & 0.134 & 0.584 & 0.158 & 0.558 \\
GBL-Smooth-P$_{K=4}$ & \textbf{0.310} & \textbf{0.647} & \textbf{0.289} & \textbf{0.607} \\
GBL-Smooth-H$_{K=2}$ & 0.160 & 0.526 & 0.173 & 0.521 \\
\midrule
CTX & 0.324 & 0.657 & 0.279 & 0.602 \\
CTX-Smooth-P$_{K=4}$ & \textbf{0.501} & \textbf{0.675} & \textbf{0.381} & \textbf{0.608} \\
CTX-Smooth-H$_{K=2}$ & 0.345 & 0.585 & 0.303 & 0.580 \\
\bottomrule
\end{tabular}
\caption{The results of our smoothing method on the premise and hypothesis of inference queries, as compared to unsmoothed models on the ANT and Levy/Holt directional datasets. We report both area under the precision-recall curve (AUC) and average precision (AP) across the recall range.}
\label{tab:results}
\end{table}
\begin{table}[tb]
\centering
\small
\begin{tabular}{@{}>{\raggedright\arraybackslash}p{0.53\linewidth}>{\raggedright\arraybackslash}p{0.33\linewidth}@{}}
\toprule
\multicolumn{1}{>{\centering\arraybackslash}p{0.38\linewidth}}{\normalsize{\textbf{Predicate Missing\newline from EG}}} & \multicolumn{1}{>{\centering\arraybackslash}p{0.39\linewidth}}{\normalsize{\textbf{Nearest Neighbors \newline by Embed. Dist.}}} \\%\normalsize{\textbf{LM Nearest \newline Neighbors in EG}} \\
\midrule
\textsc{discredit}(:person, :thing) & \textsc{probe}, \textsc{accuse} \\[0.15cm]
\textsc{crack.up.at}(:person, :written\_work) & \textsc{make.joke.at}, \textsc{yell.at} \\[0.6cm]
\textsc{minimize}(:organization, :thing) & \textsc{soften}, \textsc{evade} \\[0.6cm]
\textsc{rebuke}(:person, :person) & \textsc{oppose}, \textsc{remind} \\
\bottomrule
\end{tabular}
\caption{Sample of CTX outputs on ANT. Given a target predicate shown as \textsc{predicate}(type1, type2) where \textsc{predicate} may be missing from the EG, we show the top K=2 closest EG predicates in LM embedding space. The missing \textsc{predicate} may appear as either premise or hypothesis.}
\label{tab:sample}
\end{table}
\section{Discussion: The Asymmetry of LM Embeddings for Smoothing}
When used in nearest-neighbor search, LM embeddings perform differently when searching for a premise vs. hypothesis. We attribute this performance difference to a Language Model's fundamental bias toward producing more frequent observations from training corpora, coupled with the natural correlation of frequency with semantic generality in text. Combined, these conditions result in predicted vertices which are semantically more generalized, which is good for P-smoothing, but bad for H-smoothing.
\subsection{Language Model Frequency Bias}
As statistical learners, Language Models are biased toward high frequency words, since they are trained on a corpus to return the most probable outputs. Frequency bias has been studied in detail: LSTM-based LMs produce a Zipfian frequency distribution of words \cite{takahashi2017zipf}, and recent models for generation like GPT-2 and XLNet overfit to reporting bias \cite{shwartz-choi-2020-neural}. Overproduction of majority cases in training data causes known side-effects with ethical implications, like gender and racial bias \cite{mehrabi2021bias-survey}.
Research in Machine Translation has specifically studied this frequency bias as it relates to a semantic generalizing effect from translation input to output \cite{vanmassenhove-etal-2021-machine}. Across neural and phrase-based MT, systems produce translation outputs using words with higher training frequencies, which correlates with quantifiable lower lexical and syntactic richness than their inputs. This generalized output has long been colloquially called ``Machine Translationese'' due to its artificially non-specific tone.
\subsection{Frequency and Generality in Language}
Frequency has long been known to correlate with the semantic generality of a word \cite{caraballo-charniak-1999-determining}, and this property is used in fundamental algorithms like TF-IDF \cite{jones1972statistical}.
To relate frequency and generality for our purposes, we invoke for illustration a hierarchical taxonomy of predicates ordered by specificity, following from the theories of natural categories and prototype instances \cite{rosch1975family, rosch1976basic}. We conceptualize very general predicate categories at the top of this taxonomy such as ``act'' and ``move,'' with more concrete subcategories underneath, and highly specific ones at the bottom, like ``inoculate'' and ``perambulate.'' Rosch et al.\ define a level of ``basic level categories'' which lie in the middle of their taxonomy, containing everyday concepts like ``dog'' and ``table'', which are learned early by humans and are used most commonly amongst all categories, even by adults \cite{mervis1976relationships}. We assume an analogous basic level in a predicate taxonomy, too, illustrated in Figure~\ref{fig:taxonomy}.
\begin{figure}[h]
\includegraphics[width=\linewidth]{abstractness_taxonomy.jpg}
\caption{The specificity taxonomy. The basic level contains ``everyday'' predicates. Those above the basic level become more general, and below become more concrete and specific. Usage frequency decreases moving away from the basic level.}
\label{fig:taxonomy}
\end{figure}
Critically, there are relatively few general categories at the top and very many specific ones at the bottom (consider for example, all the different ways you might ``move'' such as ``walk,'' ``run,'' ``sprint,'' ``circumnavigate''). However, with basic level categories being the most frequently used, moving in either direction in the taxonomy away from the basic level accompanies a decrease in usage frequency. Above the basic level, predicates are fewer and more abstract, and can be infelicitous in daily use (e.g. saying ``mammal'' when discussing a ``cat'' in Rosch's case or predicates like ``actuate'' in ours). Below the basic level, predicates are highly specialized and are typically used in specific contexts, so they are both numerous and lower-frequency (e.g. ``divebomb,'' ``defenestrate'').
This implies that a randomly sampled predicate $z$ is likely to be highly specific as there are very many of them. Fixing $z$ and randomly sampling another predicate $z'$ neighboring $z$, but sampled \textit{proportional} to observed frequencies, is likely to return a predicate of higher frequency, toward the basic level, which is usually higher in the specificity hierarchy. Thus given $z$, a frequency-proportional sample $z'$ is likely to be more general than $z$.
We claim that this applies to Language Models, and that LM embedding space is learned in a way that makes high frequency, generalized predicates easiest to find ``nearby'' target inputs. When Entailment Graph vertices are embedded in LM space, the neighborhood structure of a predicate is based on similarity, with general, frequent predicates embedded more centrally so that they often appear as a neighbor to the many, more specific predicates. In effect, traversing this neighborhood structure moves \textit{up} the specificity taxonomy.
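The frequency-proportional sampling argument above can be sketched in a minimal simulation. The taxonomy, neighbor list, and frequency counts below are invented toy values for illustration, not data from our experiments:

```python
import random

# Toy predicate taxonomy with invented usage frequencies: basic-level
# predicates ("walk", "run") are most frequent, the general root ("move")
# less so, and highly specific predicates are rare.
FREQ = {"move": 30, "walk": 100, "run": 100,
        "sprint": 5, "perambulate": 1, "circumnavigate": 1}

# Assumed embedding-space neighborhood of the specific predicate "sprint":
# it contains predicates at several levels of specificity.
NEIGHBORS = {"sprint": ["run", "walk", "circumnavigate", "move"]}

# Predicates strictly more general than "sprint" in the toy taxonomy.
MORE_GENERAL = {"run", "walk", "move"}

def sample_neighbor(z, rng):
    """Sample a neighbor z' of z proportional to observed frequency."""
    cands = NEIGHBORS[z]
    return rng.choices(cands, weights=[FREQ[c] for c in cands], k=1)[0]

rng = random.Random(0)
draws = [sample_neighbor("sprint", rng) for _ in range(1000)]
frac = sum(d in MORE_GENERAL for d in draws) / len(draws)
# Frequency-proportional sampling almost always moves up the taxonomy.
print(round(frac, 3))
```

Because the rare, specific neighbor carries negligible weight, nearly all samples land on higher-frequency (and here, more general) predicates, mirroring the claimed generalizing effect of neighborhood search.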
We now test this claim by demonstrating a theory for vertex smoothing, showing how to smooth the premise and hypothesis by manipulating the specificity of smoothing predictions.
\section{Directionality by Transitive Chaining}
Applying the same nearest-neighbor search to the premise and hypothesis respectively yields drastically different results, because of a fundamental difference in the \textit{role} of a proposition as a premise or hypothesis. An optimal smoothing algorithm can be formalized as follows for symbolic inference models such as Entailment Graphs, taking into account the role of the proposition we are smoothing by construction of transitive inference chains.
\subsection{Constructing a Transitive Chain}
We formalize vertex smoothing as a search for optimal replacements. Experiments in \S\ref{ssec:smoothing_exps} show that recall may be improved by finding already-learned predicates to approximate missing target predicates. The problem is in maintaining high precision. We start with a target entailment relation $Q: p \vDash h$, with unknown truth value to be verified by a model which is \dashuline{missing} entries for at least $p$ or $h$. We claim that searching for \uline{replacement} predicates $p'$ and/or $h'$ to build a $Q_s$ suitable for the model must be done as follows:
\begin{table*}[htb]
\centering
\begin{tabular}{@{}lllcl@{}}
\toprule
\thead{Relation Category} & \multicolumn{2}{l}{\thead{Entailment Rules}} & \thead{WordNet Demo Relation} & \thead{WordNet Demo Example} \\
\cmidrule(lr){1-1} \cmidrule(lr){2-3} \cmidrule(lr){4-4} \cmidrule(lr){5-5}
$x$ entails $x'$ & x $\vDash$ x' & x' $\nvDash$ x & Hypernym & sprint $\Rightarrow$ move \\[0.2cm]
$x$ entailed-by $x'$ & x $\nvDash$ x' & x' $\vDash$ x & Hyponym & play $\Rightarrow$ fumble \\[0.2cm]
$x$ paraphrases $x'$ & x $\vDash$ x' & x' $\vDash$ x & Synonym & assault $\Rightarrow$ attack \\[0.2cm]
$x$ mutually non-entails $x'$ & x $\nvDash$ x' & x' $\nvDash$ x & Antonym & win $\Rightarrow$ lose \\ \bottomrule
\end{tabular}
\caption{The four categorical relations $\mathcal{C}$ between a predicate $x$ and its replacement $x'$, defined in terms of entailment, such that $x' \in c(x), c \in \mathcal{C}$. We empirically demonstrate using a WordNet relation $r \subset c$.}
\label{tab:relations}
\end{table*}
\begin{enumerate}[leftmargin=*]
\item \textbf{Generalize P}. Insert a more general premise $p'$ such that $p \vDash p'$, yielding a $Q_s: p' \vDash h$. \\[0.25cm]
\begin{tabular}{lccc}
($Q$) & \text{``$a$ \dashuline{obliterated} $b$''} & $\vDash$ & \text{``$a$ played $b$''} \\
& \rotatebox[origin=c]{270}{$\vDash$} & & \\
($Q_s$) & \text{``$a$ \uline{{beat}} $b$''} & $\vDash$ & \text{``$a$ played $b$''}
\end{tabular} \\[0.1cm]
\item \textbf{Specialize H}. Insert a more specialized hypothesis $h'$ such that $h' \vDash h$, yielding a $Q_s: p \vDash h'$. \\[0.25cm]
\begin{tabular}{lccc}
($Q$) & \text{``$a$ bought $b$''} & $\vDash$ & \text{``$a$ \dashuline{shopped for} $b$''} \\
& & & \rotatebox[origin=c]{90}{$\vDash$} \\
($Q_s$) & \text{``$a$ bought $b$''} & $\vDash$ &
\text{``$a$ \uline{{paid for}} $b$''}
\end{tabular} \\[0.1cm]
\item \textbf{Generalize P and Specialize H}. Insert new $p'$ and $h'$ as above, yielding a $Q_s: p' \vDash h'$.
\end{enumerate}
Because both $Q$ and $Q_s$ are test relations they each have unknown truth value. However, we construct $Q_s$ by ensuring that $p$ entails $p'$ and $h'$ entails $h$, for the purpose of completing a transitive inference chain from $p$ to $h$. By insertion of $p'$ and/or $h'$ in the intermediary steps of the chain, we can thus leverage confirmation of $Q_s$ to confirm $Q$.
\vspace{0.25cm}
\textbf{Case 1.} $p \vDash p'$ is known, so if a model confirms $p' \vDash h$, then $p \vDash h$ is confirmed by transitivity.
\textbf{Case 2.} If a model confirms $p \vDash h'$, already knowing $h' \vDash h$ confirms $p \vDash h$ by transitivity.
\textbf{Case 3.} This is a combination of the above. Knowing $p \vDash p'$ and $h' \vDash h$, if a model confirms $p' \vDash h'$, then $p \vDash h$ is confirmed by transitivity.
\vspace{0.25cm}
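The three chaining cases can be sketched as a single confirmation routine. The helper name and \texttt{model\_confirms} interface below are hypothetical, and the caller must guarantee the side conditions $p \vDash p'$ and $h' \vDash h$ (e.g.\ via a lexical resource such as WordNet):

```python
def confirm_by_chain(model_confirms, p, h, p_prime=None, h_prime=None):
    """Try to confirm Q: p |= h via a transitive chain.

    model_confirms(x, y) is the base model's (possibly incomplete) check
    for x |= y.  p_prime must be a known generalization of p (p |= p'),
    and h_prime a known specialization of h (h' |= h); the caller is
    responsible for those side conditions.
    """
    if model_confirms(p, h):                                  # no smoothing needed
        return True
    if p_prime is not None and model_confirms(p_prime, h):    # Case 1
        return True
    if h_prime is not None and model_confirms(p, h_prime):    # Case 2
        return True
    if p_prime is not None and h_prime is not None \
            and model_confirms(p_prime, h_prime):             # Case 3
        return True
    return False

# A base model that only knows the edge "a beat b" |= "a played b":
known = {("beat", "play")}
model = lambda x, y: (x, y) in known

# Q: "a obliterated b" |= "a played b" is confirmed through p' = "beat",
# since "obliterate" |= "beat" is known from the lexical resource.
print(confirm_by_chain(model, "obliterate", "play", p_prime="beat"))  # True
```

Note that when no suitable replacement exists the routine simply declines to confirm, which is the source of the imperfect recall discussed below.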
Restricting the generation of replacement predicates means that a model is not always guaranteed to find a suitable insertion leading to a transitive chain; therefore we cannot expect to attain perfect recall. However, when an additional inference is found this way, it is likely to be correct, aiding model precision.
Alternative smoothing methods which generate a replacement $Q_s$ in a different way (such as with a Language Model) provide no such guarantee of transitivity or correctness. A model will thus generate false positives by mistakenly confirming $Q_s$ when in fact $Q$ is not true, harming overall precision. For instance, suppose we generalized $h$ instead of specializing it, so that we know $h \vDash h'$, and constructed $Q_s: p \vDash h'$. Confirming $Q_s$ would then tell us nothing about entailment between the original $p$ and $h$, so it does not confirm $Q$.
\subsection{Demonstration using WordNet Relations}
\label{ssec:wordnet}
We now demonstrate these ideas empirically using WordNet \cite{wordnet}, a handcrafted resource of English lexical relations such as synonymy and hypernymy. We aim to show that explicitly guiding the search for replacement predicates by constructing transitive chains provides a means for smoothing both premise and hypothesis. For completeness, we explore all possible entailment configurations between a predicate $x$ and its smoothed replacement $x'$. The four relation categories $\mathcal{C}$ (shown in Table~\ref{tab:relations}) are ``entailment,'' ``reverse entailment,'' ``mutual entailment'' (paraphrase), and ``mutual non-entailment.'' We test all four categories to demonstrate the theory.
\begin{figure*}[h!]
\includegraphics[width=\linewidth]{ctx_wordnet_smoothing_comparison.png}
\caption{Comparison of WordNet demo relations used in smoothing P(remise), H(ypothesis), and P+H. We compare smoothing effects on the entailment graph CTX \citep{hosseini-etal-2021-open-domain}. Hypernyms are shown useful for P-smoothing, and hyponyms less so for H-smoothing.}
\label{fig:ctx_wn_comparison}
\end{figure*}
We re-run the experiment of \S\ref{ssec:smoothing_exps} by smoothing the CTX \citep{hosseini-etal-2021-open-domain} model on the ANT directional dataset (we also test GBL, see appendix). However, in this design the target premise or hypothesis is augmented without using the Language Model. Instead, we generate replacements from each category in $\mathcal{C}$ using WordNet. These entailment categories are broad, so we choose a specific WordNet lexical relation as an instance of each category, then at test-time generate smoothing predictions from the WN database. To illustrate, we choose \textit{$x$ has hypernym $x'$} as our instance of the ``entails'' category. At test-time, given a predicate such as ``elect,'' we retrieve WN hypernyms like ``choose.'' Besides \textit{hypernymy}, entailment comprises many relations often missing from WordNet, such as \textit{precondition} (e.g.\ ``elect'' entails ``be a candidate''), so enumerating all kinds of entailment for this experiment is not possible.
We note that WordNet was used as part of ANT's construction, so this demonstration is meant to explain our model's behavior rather than claim a new dataset score.
To produce smoothing predictions for a predicate, we query WordNet for the predicate head with desired relation $c \in \mathcal{C}$ and extract all results from the first word sense, then insert each into the predicate. For example, given a target predicate \texttt{(receive.2,receive.from.2)} we use the WordNet relation \textit{hyponym}(``receive'') $\Rightarrow$ ``inherit'' to form \texttt{(inherit.2,inherit.from.2)}.
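A minimal sketch of this replacement step follows. The tiny relation table stands in for the WordNet query (a real implementation would retrieve lemmas of the first word sense from the WN database):

```python
# Stand-in for the WordNet lookup: hyponyms of the first word sense.
HYPONYMS = {"receive": ["inherit"]}

def smooth_predicate(pred, relation_table):
    """Swap the head lemma of an EG-style predicate, keeping its slots.

    pred is a string like "(receive.2,receive.from.2)"; the head is the
    lemma before the first '.'.
    """
    head = pred.strip("()").split(",")[0].split(".")[0]
    return [pred.replace(head, repl) for repl in relation_table.get(head, [])]

print(smooth_predicate("(receive.2,receive.from.2)", HYPONYMS))
# -> ['(inherit.2,inherit.from.2)']
```

Each retrieved relatum is inserted into every slot of the predicate, preserving the original argument structure.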
We test all four WordNet demo relations for P-smoothing and separately for H-smoothing in order to compare their effects.
\subsection{Results}
We show the results of this experiment in Figure~\ref{fig:ctx_wn_comparison}. In analysis we noted that synonyms and antonyms always performed in between hyponyms and hypernyms (even sometimes outperforming the base EG). As extremes, it is most interesting to focus on hypernymy and hyponymy, so we omit synonyms and antonyms from the plots for clarity.
Importantly, from these plots we note a switch in performance of hypernyms and hyponyms between P- and H-smoothing on the CTX Entailment Graph (similar results for GBL, see appendix). It is clear that generalizing the premise using hypernyms is highly effective in terms of recall and precision, and that specializing the premise with hyponyms is extremely damaging to precision. For the hypothesis, the reverse is true: specializing with hyponyms can lead to some performance gains, while generalizing with hypernyms worsens it.
These results nearly replicate the behavior of our KNN model experiments discussed earlier in \S\ref{ssec:smoothing_exps}, verifying that nearest neighbor search in embedding space has a semantically generalizing effect. This result is reflected in Table~\ref{tab:sample}, which shows examples of these generalized predictions.
We note two phenomena of interest. (1) In both models, performance is boosted in the low-recall/high-precision range when using both optimal smoothers ($P_{hyper} + H_{hypo}$), higher than when using either smoother individually. (2) Additionally, $H_{hypo}$ is the best of the four $H$ smoothers tested, though on its own it appears unreliable: without $P$-smoothing it is not useful for smoothing CTX (though it does improve the weaker Entailment Graph, GBL, see appendix).
We suggest that both of these phenomena are related to data frequency. Generalized hypernyms such as \textsc{beat} and \textsc{use} are quite common in training data, and therefore have more learned edges in the Entailment Graph with higher quality edge weights. However, highly specialized hyponyms like \textsc{elongate} can be extremely sparse in training data, leading to poorer representations with fewer edges. Phenomenon (1) shows that involving a frequently-occurring smoothed premise of high-quality makes it more likely to find an edge to a smoothed hypothesis, leading to some performance gains over either smoother individually. Phenomenon (2) shows that hypothesis smoothing may itself be more challenging than premise smoothing, and less stable due to relative sparsity of hyponyms (specializations) in corpora. If $h$ is missing from the Entailment Graph (meaning that it wasn't seen in training) then deriving a candidate $h'$ specialized from $h$ will also be unlikely to occur in training data, thus if found in the EG it may have few or poorly learned edges. Although beneficial in the low-recall setting, differences in data sparsity make hypothesis smoothing fundamentally harder.
\section{Conclusions}
It is clear from these experiments that smoothing target predicates at inference time calls for guiding the search for replacement predicates differently for premise and hypothesis. P-smoothing must be performed by generalizing, while H-smoothing requires specialization in order to maintain or improve directional precision.
We have shown an unsupervised method for P-smoothing an Entailment Graph using Language Model embeddings, which improves both recall and precision on two difficult directional entailment datasets. We improve over a SOTA Entailment Graph on Levy/Holt (directional) by 16.3 absolute percentage points in recall (to 62.7\%), and on ANT (directional) by 25.1 absolute (to 74.3\%) in recall, both while exceeding average precision.
Further, we developed a smoothing theory by controlling the search for smoothing predictions for both premise and hypothesis in order to build transitive inference chains, and demonstrated it using gold standard WordNet relations. Our experiments replicated the behavior of the unsupervised LM-based smoother, explaining that LM embeddings are useful for premise smoothing, but not hypothesis smoothing due to a semantic generalizing effect in embedding space neighborhood search.
\newcommand\sect[1] {\section{#1}\setcounter{equation}{0}}
\def{\rp\sigma} {{\bar\sigma}}
\def{[\sigmab]} {{[{\rp\sigma}]}}
\def{[\sigmab,\psu_\sigma]} {{[{\rp\sigma},{\hat\psi}_\sigma]}}
\def{[\sigmab_3]} {{[{\rp\sigma}_3]}}
\def{[\sigmab_3,\phu_3]} {{[{\rp\sigma}_3,{\hat\varphi}_3]}}
\def{[\sigmab_3,\phu_3]^+_{\phantom i}} {{[{\rp\sigma}_3,{\hat\varphi}_3]^+_{\phantom i}}}
\def{[\sigmab_3^+]} {{[{\rp\sigma}_3^+]}}
\def{[\sigmab_1]} {{[{\rp\sigma}_1]}}
\def{[\sigmab_1,\phu_1]} {{[{\rp\sigma}_1,{\hat\varphi}_1]}}
\def{[\sigmab_1^+]} {{[{\rp\sigma}_1^+]}}
\def{[\sigmab_1,\phu_1]^+_{\phantom i}} {{[{\rp\sigma}_1,{\hat\varphi}_1]^+_{\phantom i}}}
\def{[\sigmab]\oei} {{[{\rp\sigma}]'}}
\def{[\sigmab,\psu_\sigma]\oei} {{[{\rp\sigma},{\hat\psi}_\sigma]'}}
\def{[\sigmab]^\oo_{}} {{[{\rp\sigma}]^\circ_{}}}
\def{[\sigmab,\psu_\sigma]^\oo_{}} {{[{\rp\sigma},{\hat\psi}_\sigma]^\circ_{}}}
\def{[\sigmab,\phu_\sigma]} {{[{\rp\sigma},{\hat\varphi}_\sigma]}}
\def{[\sigmab^+]} {{[{\rp\sigma}^+]}}
\def{[\sigmab,\phu]} {{[{\rp\sigma},{\hat\varphi}]}}
\def{[\sigmab',\phu']} {{[{\rp\sigma}',{\hat\varphi}']}}
\def{[\sigmab']} {{[{\rp\sigma}']}}
\def{[\sigmab_2]} {{[{\rp\sigma}_2]}}
\def{[\sigmab_2,\phu_2]} {{[{\rp\sigma}_2,{\hat\varphi}_2]}}
\def{[\sigmab_2^+]} {{[{\rp\sigma}_2^+]}}
\newcommand\Sj[2] {\bar S_{{\rm J}\bar{#1},\bar{#2}}}
\defS^\J {S^{\rm J}}
\def\mbox{SL$(2{,}{\dl Z})$} {\mbox{SL$(2{,}{\dl Z})$}}
\def{\rm so} {{\rm so}}
\defS^\oo {S^\circ}
\newcommand\sO[1] {{\rm s}_{#1}^\circ}
\def{\rm SO} {{\rm SO}}
\renewcommand\sp[1]{{\rm s}_{#1}'}
\defS\oei {S'}
\def{\rm Spin} {{\rm Spin}}
\def\scriptscriptstyle {\scriptscriptstyle}
\def\scriptstyle {\scriptstyle}
\defstring theories {string theories}
\def{\rm SU} {{\rm SU}}
\defsuperconformal {superconformal }
\newcommand\sumbo[1]{\sum_{\scriptstyle\bar #1 \atop Q_{\cal G}(#1)=0}}
\newcommand\sumBo[1]{\sum_{\scriptstyle[\bar #1]\atop Q_{\cal G}(#1)=0}}
\newcommand\SumBo[1]{\sum_{\scriptstyle[\bar #1,\psi_{#1}]\atop Q_{\cal G}(#1)=0}}
\newcommand\sumBoo[1]{\sum_{\scriptstyle[\bar #1]^\circ\atop Q_{\cal G}(#1)=0}}
\newcommand\sumBop[1]{\sum_{\scriptstyle[\bar #1]'\atop Q_{\Gs'}(#1)=0}}
\newcommand\sumBoP[1]{\sum_{\scriptstyle[\bar #1]'\atop Q_{\cal G}(#1)=0}}
\newcommand\sumBoh[1]{\sum_{\scriptstyle[\bar #1]_<\atop Q_{H_<}(#1)=0}}
\newcommand\sume[1]{\sum_{#1\in\M_{1/2}}}
\newcommand\sumeb[1]{\sum_{\bar #1\in\Mb_{1/2}}}
\newcommand\sumf[1]{\!\sum_{#1\inM_{\rm f}}\!\!}
\newcommand\sumfb[1]{\sum_{\bar #1\in\Mb_{\rm f}}}
\newcommand\sumFb[1]{\sum_{\bar #1\in\MFb}}
\def\sum_{\J\in\Gs/\cals_\lambda} {\sum_{{\rm J}\in{\cal G}/{\cal S}_\lambda}}
\def\sum_{\J'\in\Gs/\cals_\mu} {\sum_{{\rm J}'\in{\cal G}/{\cal S}_\mu}}
\newcommand\summ[1]{\sum_{#1\inM}}
\newcommand\sumb[1]{\sum_{\bar #1\in\bar\M}}
\newcommand\sumo[1]{\sum_{#1\in\Mo}}
\newcommand\sumob[1]{\sum_{\bar #1\in\bar\Mo}}
\newcommand\sumofb[1]{\sum_{\bar #1\in\bar\Mo\cup\Mb_{\rm f}}}
\newcommand\Sumphiphu[1]{\sum_{\scriptstyle\varphi\in{\cal S}_{#1}^* \atop\varphi\succ{\hat\varphi}}}
\newcommand\sumphipsu[1]{\sum_{\scriptstyle\varphi_{#1}\in{\cal S}_{#1}^* \atop
\varphi_{#1}\succ{\hat\psi}_{#1}}}
\newcommand\Sumphipsu[1]{\sum_{\scriptstyle\varphi\in{\cal S}_{#1}^* \atop\varphi\succ{\hat\psi}}}
\newcommand\Sumpsiphu[1]{\sum_{\scriptstyle\psi\in{\cal S}_{#1}^* \atop \psi\succ{\hat\varphi}}}
\newcommand\sumpsipsu[1]{\sum_{\scriptstyle\psi_{#1}\in{\cal S}_{#1}^* \atop
\psi_{#1}\succ{\hat\psi}_{#1}}}
\newcommand\Sumpsipsu[1]{\sum_{\scriptstyle\psi\in{\cal S}_{#1}^* \atop \psi\succ{\hat\psi}}}
\newcommand\sumpsipsuo[1]{\sum_{\scriptstyle\psi_{#1}\in{\cal S}_{#1}^* \atop
\psi_{#1}\succ{\hat\psi^\oo}_{#1}}}
\newcommand\sumpsipsup[1]{\sum_{\scriptstyle\psi_{#1}\in{\cal S}_{#1}^* \atop
\psi_{#1}\succ{\hat\psi\oei}_{#1}}}
\newcommand\sumt[1]{\!\sum_{#1\inM\cup\M_{1/2}}\!\!\!}
\def\mbox{$\liefont{sl}(2)$} {\mbox{$\mathfrak {sl}(2)$}}
\defsym\-me\-tries {sym\-me\-tries}
\def\Theta_{\cal Q} {\Theta_{\cal Q}}
\def{[\tau,\psu_\tau]\oei} {{[\tau,{\hat\psi}_\tau]'}}
\def{\tilde\Beta} {{\tilde{\rm B}}}
\defC^B {C^B}
\def{\tilde{\calu}} {{\tilde{{\cal U}}}}
\def\tilde d {\tilde d}
\def\tildE {\tilde}
\def\,{\times}\, {\,{\times}\,}
\def{\times} {{\times}}
\def\oT {\,\ot\,}
\def{\tildE{\J}} {{\tilde{{\rm J}}}}
\def\tilde\Kappa {\tilde\kappa_\lambda}
\newcommand\tN[3] {\tilde{\rm N}_{#1,#2}^{\ \ \ \ \ \ \ \ \ #3}}
\def\Tilde{\rm N} {\tilde{\rm N}}
\newcommand\tNl[3] {\tilde{\rm N}_{#1,#2,#3}^{}}
\def\Theta_{\omm} {\Theta_{\omm}}
\def\Tilde\Phi {\tilde\Phi}
\def\tildE\psi {\tilde\psi}
\def{\rm Tr}\, {{\rm Tr}\,}
\def{\rm tr} {{\rm tr}}
\def{\rm tr}\, {{\rm tr}\,}
\def{[\rhob]\oei} {{[{\bar\rho}]'}}
\def{[\rhob_3]\oei} {{[{\bar\rho}_3]'}}
\def{[\rhob_1]\oei} {{[{\bar\rho}_1]'}}
\def{[\rhob_2]\oei} {{[{\bar\rho}_2]'}}
\def\tildE s {\tilde s}
\def\Tilde S {\tilde S}
\defs\oei {s'}
\def{\dot C} {{\dot C}}
\def\tildE u {\tilde u}
\deftwo-dimensional {two-dimensional}
\renewcommand\u[1] {{\rm u}_{#1}}
\newcommand\uc[1] {\check{\rm u}_{#1}}
\def{\rm U} {{\rm U}}
\def{\U{\cap}\Gp} {{{\rm U}{\cap}{\G'}}}
\newcommand\uo[1] {{\rm u}_{#1}^\circ}
\def\mbox{$\liefont{u}(1)$} {\mbox{$\mathfrak {u}(1)$}}
\newcommand\up[1] {{\rm u}_{#1}'}
\def{\U'} {{{\rm U}'}}
\def{{\U'}^*} {{{{\rm U}'}^*}}
\def{\U^*} {{{\rm U}^*}}
\defuntwisted stabilizer {untwisted stabilizer}
\def{\cal V} {{\cal V}}
\def\Omega {\Omega}
\def{\rp\vac} {{\bar\Omega}}
\def{[\vacb]} {{[{\rp\vac}]}}
\def{\vac^\oo_{}} {{\Omega^\circ_{}}}
\def{\vac\oei} {{\Omega'}}
\newcommand\version[1] {\typeout{}\typeout{#1}\typeout{}
\vskip3mm \centerline{\fbox{{\tt DRAFT -- #1 -- }
{\small\draftdate}}} \vskip3mm}
\def\mbox{$\mathfrak V${\sl ir}} {\mbox{$\mathfrak V${\sl ir}}}
\defvertex operator algebra {vertex operator algebra}
\defVertex operator algebra {Vertex operator algebra}
\defvertex operator {vertex operator}
\def{\varphi} {{\varphi}}
\def\V_{\!\psu} {{\cal V}_{\!{\hat\psi}}}
\def\V_{\!\psu^+} {{\cal V}_{\!{\hat\psi}^+}}
\def\V_{\!\psu^+}^* {{\cal V}_{\!{\hat\psi}^+}^*}
\def\V_{\!\psu}^* {{\cal V}_{\!{\hat\psi}}^*}
\defwith respect to {with respect to }
\defwith respect to the {with respect to the }
\defWZW model {WZW model}
\defWZW theory {WZW theory}
\defWZW theories {WZW theories}
\newcommand\x[1] {\xi_{#1}}
\def{\cal X} {{\cal X}}
\def{\bf Y} {{\bf Y}}
\defextended classifying algebra {extended classifying algebra}
\newcommand\yN[3] {\tilde{\rm N}_{#1,#2}^{\;\;\ #3}}
\newcommand\YS[2] {\Tilde S_{{#1},{#2}}}
\def{\dl Z} {{\mathbb Z}}
\def${\dl Z}$ {${\mathbb Z}$}
\def\zeta_Y {\zeta_Y}
\def{\dl Z}_{>0} {{\mathbb Z}_{>0}}
\def{\dl Z}_{\ge0} {{\mathbb Z}_{\ge0}}
\newcommand\zmatrix[4]{\left(\begin{array}{cc}#1 \\ #3\end{array}\right)}
\documentclass[12pt]{article} \usepackage{amssymb,amsfonts,latexsym}
\def{\thesection.\arabic{equation}}{{\thesection.\arabic{equation}}}
\setlength{\textheight}{22.7cm} \topmargin= -5mm
\setlength{\textwidth}{17cm} \hoffset -22mm \raggedbottom
\begin{document}
\begin{flushright} {~} \\[-1cm] {\sf hep-th/9902132} \\[1mm]
{\sf CERN-TH/99-35} \\[1 mm]
{\sf ETH-TH/99-03} \\[1 mm]
{\sf February 1999} \end{flushright}
\begin{center} \vskip 14mm
{\Large\bf SYMMETRY BREAKING BOUNDARIES}\\[4mm]
{\Large\bf I.\ GENERAL THEORY}\\[17mm]
{\large J\"urgen Fuchs}\\[3mm] CERN \\[.6mm] CH -- 1211~~Gen\`eve 23\\[7mm]
and\\[7mm]
{\large Christoph Schweigert}\\[3mm] Institut f\"ur Theoretische Physik \\
ETH H\"onggerberg \\[.2em] CH -- 8093~~Z\"urich
\end{center}
\vskip 17mm
\begin{quote}{\bf Abstract}\\[1mm]
We study conformally invariant boundary conditions that break part
of the bulk symmetries. A general theory is developed for those boundary
conditions for which the preserved subalgebra is the fixed algebra under an
abelian orbifold group. We explicitly construct the boundary states and
reflection coefficients as well as the annulus amplitudes.
Integrality of the annulus coefficients is proven in full generality.
\end{quote}
\newpage
\sect{Introduction and summary}\label{s.1}
The space of conformally invariant boundary conditions
of two-dimensional\ conformal field theories\ is of interest in statistical mechanics, e.g.\
for the description of the Kondo effect and the theory of critical percolation,
as well as in open string theory, where particular attention followed the
observation \cite{polc3} that string perturbation theory in solitonic sectors
can be formulated in terms of world sheets with boundaries. In these
applications it is crucial that the boundary conditions preserve conformal invariance;
in contrast, additional symmetries that the bulk theory may possess typically
need not be respected.
The special case of boundary conditions that preserve the full bulk symmetry
was already considered a long time ago \cite{card9}. In this case the
consistent conformal boundary conditions are in one-to-one correspondence with
the irreducible representations of the fusion rule algebra of the theory,
the so-called (generalized) quantum dimensions. To be precise, this result
holds when the torus partition function\ is given by charge conjugation. More recently, it
has been observed \cite{prss3,fuSc5} that also in the case when the torus
partition function\ corresponds to some simple current automorphism of the fusion rules,
one can find a relative of the fusion algebra whose irreducible representations precisely
correspond to the boundary conditions that preserve the full bulk symmetry.
This algebra has been dubbed the {\em classifying algebra\/}.
The consideration of Dirichlet boundary conditions for a free boson conformal
field theory brought yet another insight. Namely, for every conformal field
theory, say with charge conjugation modular invariant, one should also study
boundary conditions that relate left and right movers by some
automorphism of the fusion rules \cite{fuSc6,reSC} that preserves conformal
weights. For a given fusion rule automorphism $\gg^\star$, respectively\ the
corresponding automorphism ${g}$ of the chiral algebra, there will typically
exist several distinct conformally invariant boundary conditions. They
constitute the possible {\em Chan$\mbox{-\hspace{-.66 mm}-}$Paton types\/} for the fixed
{\em automorphism type\/} ${g}$. Again, it is natural to construct
these boundary conditions as irreducible representations of some classifying algebra\ that generalizes
the fusion rule algebra \cite{fuSc6}.
One goal of this paper is to identify these algebra s; but to do so, it turns
out to be convenient to solve a more general problem. The boundary
conditions of automorphism type ${g}$ respect only a subset of the
bulk symmetries ${\mathfrak A}$, namely the subalgebra ${\mathfrak A}^{({g})}_{}$ of those
elements that are fixed under ${g}$. More generally, one may therefore
address the following question. Given a subalgebra ${\bar\cala}$ of the chiral algebra\
${\mathfrak A}$, determine all those boundary conditions that preserve (at least)
${\bar\cala}$, but not necessarily all of ${\mathfrak A}$. It should be appreciated that even
when we ask this question for the subalgebra ${\bar\cala}\,{=}\,{\mathfrak A}^{({g})}_{}$
associated to some automorphism ${g}$ of finite order, it is by no means
clear that {\em all\/} the boundary conditions preserving ${\bar\cala}$ possess
a definite automorphism type. It will be a
non-trivial result of our analysis that this is indeed true.
As long as ${\bar\cala}$ is
completely arbitrary, at present this problem is still too general to be
tractable. We will therefore restrict our attention to a particular subclass
of (consistent) subalgebras. Namely, we require that ${\bar\cala}$ be the {\em fixed
algebra\/} ${\mathfrak A}^G$ of some group $G$ of automorphisms
of the chiral algebra ${\mathfrak A}$. In other words, ${\bar\cala}\,{=}\,{\mathfrak A}^G$ is the
chiral algebra of an {\em orbifold\/} of the theory that has chiral algebra\
${\mathfrak A}$. The orbifold group $G$ need not necessarily be a finite group; one
may even study orbifolds with respect to finite-dimensional\ Lie groups. But for the
present purposes we assume that $G$ is indeed finite, and we still
specialize further to the situation that $G$ is a finite abelian group.
In this case the original chiral algebra ${\mathfrak A}$ can be reassembled
from its subalgebra\ ${\bar\cala}$ by an {\em integer spin simple current extension\/}.
This allows us to utilize simple current technology
\cite{scya,intr,scya6,krSc,fusS3,fusS6}. This way we have several nice
structures at our disposal, which have passed various rather non-trivial checks
in chiral conformal field theory\ (see e.g.\ \cite{fusS6,fusS4,bant7,bant6}). They allow us
to write down a natural candidate for a classifying algebra. We then take this
ansatz and compute reflection coefficients and annulus coefficients, and
afterwards show that these quantities pass the usual consistency checks.
In particular, the annulus coefficients are proven to be integral; it
should be noticed that this property is an outcome of our
analysis rather than a requirement we impose.
For the convenience of the reader, we now present a brief summary of
our main results. We assume full reducibility (which is satisfied in all
known examples), i.e.\ that we can decompose the representation\ spaces
${\cal H}_\lambda$ for all primary fields of the ${\mathfrak A}$-theory as
\begin{equation} {\cal H}_\lambda= \bigoplus_{{\bar\mu}} V_\lambda^{\bar\mu}
\otimes \bar{\cal H}_{\bar\mu} \labl1
into irreducible ${\bar\cala}$-modules $\bar{\cal H}_{\bar\mu}$. The degeneracy spaces
$V_\lambda^{\bar\mu}$ introduced this way are modules of suitable subgroups
$U_\lambda$ of the orbifold group $G$. We make the mild assumption that
each of these $G$-modules is irreducible, so that $V_\lambda^{\bar\mu}
\,{\cong}\,V_\Psi$ where $\Psi\,{\in}\, U_\lambda^*$. As a consequence, we can
label the primary fields of the orbifold theory by pairs $(\lambda,\Psi)$ where
$\lambda$ is an ${\mathfrak A}$-primary and $\Psi\,{\in}\, U_\lambda^*$. Actually, at this
point we have somewhat oversimplified the story. Indeed, by assumption we have
an action of $G$ on the chiral algebra, and thus on the vacuum primary field
$\lambda\,{=}\,\Omega$. While this does induce an action of $U_\lambda$ on the
degeneracy spaces that arise in the decomposition for other primaries
as well, that action is in general only {\em projective\/}. Thus in general
we must allow for $V_\Psi$ to be only a projective
module. Note that projective modules of an abelian group
do not necessarily have dimension one; accordingly, additional multiplicities
will occur in our analysis. That this effect is indeed realized in concrete
models can already be seen for orbifolds of a free boson, compactified at
self-dual radius; when one orbifoldizes by the dihedral group $D_2$, then
the dihedral group acts on the primary field of conformal dimension
$\Delta\eq1/4$ only projectively, and those projective irreducible representations are, of
course, irreducible representations of the universal central extension of
$D_2$, the quaternion group (for more details see \cite{dvvv}).
Technically, we will proceed in this work in a manner that is opposite to the
orbifold philosophy, i.e.\ we
express ${\mathfrak A}$-quantities in terms of quantities of the ${\bar\cala}$-theory
instead of the other way round. It can be seen that under the above-mentioned
non-degeneracy assumption the primaries ${\rm J}\,{=}\,(\Omega,\Psi)$
of the ${\bar\cala}$-theory that come from the vacuum sector $\Omega$ of the original
theory form an abelian group ${\cal G}$ under fusion; in other words,
they are {\em simple currents\/}. This group is actually isomorphic to
the character group of the orbifold group $G$, i.e.\ ${\cal G}\,{=}\,{G^*}$.
Equipped with this information, we are then in a position to apply simple
current technology. First, by its action through the fusion product, the
simple current group ${\cal G}$ organizes the ${\bar\cala}$-primaries ${\bar\lambda}$
into orbits. Generically this action is not free, so one associates to
every primary field
${\bar\lambda}$ its stabilizer, i.e.\ the subgroup ${\cal S}_\lambda$ of ${\cal G}$ whose
elements leave ${\bar\lambda}$ fixed (as ${\cal G}$ is abelian, the stabilizer is the
same for all fields on the same ${\cal G}$-orbit). Further, for every simple current
${\rm J}\,{\in}\,{\cal G}$ we associate to each primary field ${\bar\lambda}$ the rational number
\begin{equation} Q_{\rm J}(\lambda):= \Delta_{\bar\lambda}+ \Delta_{\rm J} - \Delta_{{\rm J}\star{\bar\lambda}}
\bmod{\dl Z} \,, \Labl QJ
called the monodromy charge of ${\bar\lambda}$, which is constant on ${\cal G}$-orbits.
In orbifold terminology, the fields whose monodromy charge vanishes for
every ${\rm J}\,{\in}\,{\cal G}$ are those in the untwisted sector of the orbifold. More
generally, the function ${g}_\lambda^{}\,{\equiv}\,{g}_\lambda^{(Q)}
{:}\ {\cal G}\,{\to}\,{\mathbb C}$ with
\begin{equation} {g}_\lambda^{}({\rm J}):= \exp(2\pi{\rm i} Q_{\rm J}(\lambda)) \end{equation}
for all ${\rm J}\,{\in}\,{\cal G}$ is an element of the character group ${\cal G}^*\,{=}\,({G^*})^*_{}
\,{\cong}\,G$ and can be identified with an element of the orbifold group;
${g}_\lambda$ characterizes the twist sector to which the field ${\bar\lambda}$
belongs.
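The multiplicativity required for ${g}_\lambda^{}$ to be a character is a consequence of the additivity of monodromy charges under the fusion product of simple currents, $Q_{{\rm J}{\rm J}'}(\lambda)\,{=}\,Q_{\rm J}(\lambda)\,{+}\,Q_{{\rm J}'}(\lambda) \bmod{\dl Z}$, a standard simple current property; explicitly,
\begin{equation} {g}_\lambda^{}({\rm J}{\rm J}')= \exp(2\pi{\rm i}\,(Q_{\rm J}(\lambda)+Q_{{\rm J}'}(\lambda)))
= {g}_\lambda^{}({\rm J})\,{g}_\lambda^{}({\rm J}') \end{equation}
for all ${\rm J},{\rm J}'\,{\in}\,{\cal G}$.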
Based on this description one might expect that it is possible to
express the ${\mathfrak A}$-primaries $\lambda$ in terms of ${\bar\cala}$-quantities
as follows. The label $\lambda$ is interpreted as a pair $({[\lambdab]},\psi)$,
consisting of a ${\cal G}$-orbit ${[\lambdab]}$ and a character $\psi$
of the stabilizer ${\cal S}_\lambda$. This would correspond to the decomposition
\begin{equation} {\cal H}_\lambda \leadsto
\bigoplus_{{\rm J}\in{\cal G}/{\cal S}_\lambda} \bar{\cal H}_{{\rm J}\star{\bar\lambda}} \labl{s3.1}
of irreducible ${\mathfrak A}$-modules, with
the character $\psi\,{\in}\,{\cal S}_\lambda^*$ accounting for the fact
that inequivalent ${\mathfrak A}$-modules can be equivalent as ${\bar\cala}$-modules.
However, as established in \cite{fusS6}, this ansatz is too naive.
The origin of the failure was actually already mentioned above; namely,
the decomposition \erf{s3.1} would exclude the possibility of having only
a projective action of the orbifold group on sectors other than the
vacuum. In contrast, the formalism developed in \cite{fusS6}, which is
briefly reviewed in appendix \ref{s.a}, correctly takes this effect into
account. What is required as an additional ingredient is to introduce
for each ${\bar\lambda}$ a certain subgroup ${\cal U}_\lambda$ of ${\cal S}_\lambda$,
called the {\em untwisted stabilizer\/} of ${\bar\lambda}$. This subgroup is
of quadratic index; the positive integer
\begin{equation} d_\lambda := \sqrt{|{\cal S}_\lambda|\,/\,|{\cal U}_\lambda|} \Labl dl
is just the dimension of the relevant projective representation.
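As an illustration, in the $D_2$ orbifold of the self-dual free boson mentioned in the introduction, the field of conformal weight $\Delta\eq1/4$ carries the two-dimensional irreducible projective representation (an ordinary representation of the quaternion group), so that there $|{\cal S}_\lambda|/|{\cal U}_\lambda|\eq4$ and formula \Erf dl yields
\begin{equation} d_\lambda= \sqrt{4}= 2 \,. \end{equation}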
The analysis of \cite{fusS6} shows that the ${\bar\cala}$-primaries are in fact
described by pairs ${[\lambdab,{\hat\psi}]}$, where ${\hat\psi}$ is a character of the untwisted
stabilizer rather than of the full stabilizer. The action of
${\cal G}/{\cal S}_\lambda$ is then implemented by an equivalence relation
that also involves the character ${\hat\psi}$ (see formula \Erf eq).
In \cite{fuSc10}, where some of our results were announced, we have
concentrated on the case where for all fields ${\bar\lambda}$ the untwisted
stabilizer coincides with the full stabilizer; in the present work, the
whole structure is displayed for the most general situation.
We can now exhibit the boundary conditions that preserve
only the subalgebra\ ${\bar\cala}$ of the bulk symmetries. Owing to factorization,
boundary conditions are characterized \cite{card9,cale} by the one-point
correlation functions of bulk fields on the disk. The corresponding chiral blocks are
two-point blocks on the sphere. However, as only the symmetries in ${\bar\cala}$
are preserved, these blocks are not the ordinary chiral two-point
blocks of the ${\mathfrak A}$-theory; rather, we should take the chiral blocks of the
${\bar\cala}$-theory and combine them in a way compatible with
the decomposition of the spaces ${\cal H}_\lambda$. Since states in
different ${\bar\cala}$-modules that occur in such a decomposition
are possibly reflected differently at the boundary, this way
we arrive at an independent chiral two-point block for each pair
$({\bar\lambda},{\hat\psi}_\lambda)$, where ${\bar\lambda}$ is a field in the untwisted
sector of the orbifold theory and ${\hat\psi}_\lambda$ is a character of the
untwisted stabilizer of ${\bar\lambda}$. We must still be somewhat more careful,
though. The chiral blocks of our interest are linear forms
\begin{equation} {\cal H}_\lambda^{}\otimes{\cal H}_{\lambda^{\!+}_{\phantom i}} \to {\mathbb C}\,. \end{equation}
However, when the degeneracy space has dimension $d_\lambda\,{>}\,1$, then
we cannot simply obtain boundary blocks for the ${\mathfrak A}$-theory by composing
the corresponding boundary blocks of the ${\bar\cala}$-theory (which are linear
forms $\bar{\cal H}_{\bar\lambda}^{}\,{\otimes}\,\bar{\cal H}_{\lambdab^{\!+}_{\phantom i}}\,{\to}\,{\mathbb C}$);
rather, the construction of a boundary block then
requires in addition a linear form on the tensor product of the
$d_\lambda$-dimensional degeneracy spaces. There are
$d_\lambda^2\,{=}\,|{\cal S}_\lambda|/|{\cal U}_\lambda|$ such forms.
As a consequence, for each primary ${\bar\lambda}$ in the untwisted sector
of the ${\bar\cala}$-theory we get
\begin{equation} N_{\rm block}({\bar\lambda}) = d_\lambda^2\,|{\cal U}_\lambda|
= |{\cal S}_\lambda| \Labl Nb
many independent chiral two-point blocks. As we will demonstrate in section
\ref{s.4}, the labels characterizing these blocks naturally combine into a pair
$({\bar\lambda},\psi_\lambda)$, where $\psi_\lambda$ is now a character of the
{\em full\/} stabilizer.
Next we analyze also the way in which these blocks combine to correlation functions,
whereby we effectively characterize the boundary conditions. We first observe
that in the case where the full bulk symmetry ${\mathfrak A}$ is conserved and the
torus partition function is given by charge conjugation, the boundary conditions correspond to the
(generalized) quantum dimensions of the ${\mathfrak A}$-theory. Quantum dimensions, in
turn, are related to primary fields via the modular S-matrix of the theory.
(Actually in this simple case the structure is somewhat obscured by the fact
that the modular S-matrix is symmetric so that there exists a natural
identification between quantum dimensions and primary fields.) The fact that a
modular transformation
relates boundary {\em blocks\/} to boundary {\em conditions\/}
has become even more apparent in the example considered in \cite{fuSc5}.
It is therefore not too surprising that also in the more general situation
considered here, boundary blocks and boundary conditions are connected by a
modular transformation. Let us further explore this idea heuristically. The
labels ${\bar\lambda}$ of the boundary blocks are subject to ${g}_\lambda\,{\equiv}
\,1$. In orbifold language, this means that we are only dealing with the
untwisted sector of the orbifold. Thus along the `space' direction of the
torus only the twist by the identity occurs. It follows that after
a modular S-transformation, only the identity appears as twist in the `time'
direction of the torus, which in turn means that the usual orbifold projection
does not take place. In simple current language, the corresponding
statement is that the boundary conditions are labelled by ${\cal G}$-{\em orbits\/}
${[\rhob]}$ of ${\bar\cala}$-primaries rather than by individual primary
fields. On the other hand, after the modular S-transformation arbitrary twists
in the `space' direction occur in the orbifold; this means that
in the labelling of the boundary conditions {\em all\/} ${\cal G}$-orbits ${[\rhob]}$ appear, not just
those with vanishing monodromy charges, i.e.\ not just the ones in the
untwisted sector. In fact in \cite{fuSc12}\ we will show that the
character ${g}_\rho\,{\in}\,{\cal G}^*\cong G$ furnished by the monodromy charges
of ${\bar\rho}$ can be naturally identified with the
automorphism type of the boundary condition. This in turn allows us to {\em derive\/}
(rather than to assume ad hoc) that every boundary condition of the form
considered here possesses a definite automorphism type. Finally, for the
boundary conditions there is an additional degeneracy, too, this time governed
by the {\em untwisted\/} stabilizer. As a matter of fact, in the structures
we are going to exhibit, consistency is achieved through a rather subtle (and
beautiful) interplay between the untwisted stabilizer and the full stabilizer.
For the convenience of the reader, we now collect a few explicit formul\ae.
They are most conveniently presented in terms of a certain square matrix $\Tilde S$.
This matrix diagonalizes the structure constants of the classifying algebra; accordingly
its first index refers to a boundary block $({\bar\lambda},\psi_\lambda)$, while
the second index corresponds to a boundary condition ${[\rhob,\psu_\rho]}$. The formula
for $\Tilde S$ is
\begin{equation} \Tilde S_{({\bar\lambda},\psi_\lambda),{[\rhob,\psu_\rho]}}
= \Frac{|{\cal G}|}{(|{\cal S}_\lambda|\,|{\cal U}_\lambda|\,|{\cal S}_\rho|\,|{\cal U}_\rho|)
^{1/2}_{}} \sum_{{\rm J}\in{\cal S}_\lambda\cap{\cal U}_\rho} \psi_\lambda({\rm J})\,
{\hat\psi}_\rho({\rm J})^*\, S^{\rm J}_{{\bar\lambda},{\bar\rho}} \,. \end{equation}
Roughly, one has to sandwich certain matrices $S^{\rm J}$ between the characters
$\psi_\lambda\,{\in}\,{\cal S}_\lambda^*$ and ${\hat\psi}_\rho\,{\in}\,{\cal U}_\rho^*$; these
matrices are the modular transformation
matrices for one-point chiral blocks with insertion of the simple current ${\rm J}$ on
the torus \cite{bant6} and appear naturally in the study of simple current
extensions \cite{fusS6}. In terms of the matrix $\Tilde S$ the one-dimen\-sional\ irreducible representations
of the classifying algebra\ which provide the reflection coefficients read
\begin{equation} R^{}_{[\rhob,\psu_\rho]} (\Tilde\Phi_{({\bar\lambda},\psi_\lambda)}) =
\frac{\Tilde S_{({\bar\lambda},\psi_\lambda),{[\rhob,\psu_\rho]}}} {\Tilde S_{{\rp\vac},{[\rhob,\psu_\rho]}}} \,. \end{equation}
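As a simple cross-check of these expressions (an observation added for orientation; it is not needed in the sequel), consider the case of a trivial orbifold group, ${\cal G}\,{=}\,\{{\rp\vac}\}$. Then all stabilizers and untwisted stabilizers reduce to $\{{\rp\vac}\}$, the only matrix $S^{\rm J}$ that appears is the ordinary modular S-matrix $S$, and the prefactor equals one, so that
\begin{equation} \Tilde S_{{\bar\lambda},{\bar\rho}} = S_{{\bar\lambda},{\bar\rho}}
\qquad\mbox{and}\qquad
R^{}_{{\bar\rho}} (\Tilde\Phi_{{\bar\lambda}}) = S_{{\bar\lambda},{\bar\rho}}\,/\,S_{{\rp\vac},{\bar\rho}} \,. \end{equation}
These are precisely the (generalized) quantum dimensions which, as mentioned above, characterize the boundary conditions when the full bulk symmetry ${\mathfrak A}$ is preserved.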
We will also see that there is a natural conjugation on the boundary conditions,
a map of order two that implements the reversal of the orientation
of the boundary.
Finally, we display the annulus amplitude for an annulus with boundary
conditions ${[\rhob_1,\psu_1]}$ and ${[\rhob_2,\psu_2]}$.
As we will see, it is natural to express the annulus amplitude as a linear
combination
\begin{equation} {\rm A}_{{[\rhob_1,\psu_1]}\,{[\rhob_2,\psu_2]}} = \sum_{{[\sigmab,\psu_\sigma]\oei}}
{\rm A}_{{[\rhob_1,\psu_1]}\,{[\rhob_2,\psu_2]}}^{{[\sigmab,\psu_\sigma]\oei}}\,{\cal X}'_{{[\sigmab,\psu_\sigma]\oei}} \end{equation}
of characters ${\cal X}'_{{[\sigmab,\psu_\sigma]\oei}}$ of the conformal field theory\ that is obtained by extending the
${\bar\cala}$-theory by the simple currents in the subgroup
\begin{equation} \Gs' \equiv \Gs'_{\rho_1\rho_2}
:=\{ {\rm J}\,{\in}\, {\cal G} \,|\, Q_{\rm J}(\rho_1)\eq0\,{=}\, Q_{\rm J}(\rho_2) \} \Labl Hp
of ${\cal G}$. The annulus coefficients ${\rm A}_{{[\rhob_1,\psu_1]}\,{[\rhob_2,\psu_2]}}^{{[\sigmab,\psu_\sigma]\oei}}$ can then
be written, up to a prefactor, as a sum of the fusion rule coefficients
$\Ne{[\rhob,\psu_\rho]\oei}{[\sigmab,\psu_\sigma]\oei}{[\tau,\psu_\tau]\oei}$ of the $\Gs'$-extension:
\begin{equation} {\rm A}_{{[\rhob_1,\psu_1]}\,{[\rhob_2,\psu_2]}}^{{[\sigmab,\psu_\sigma]\oei}}
= N \sum_{\scriptstyle{\hat\psi\oei}_1\in\calu'_1 \atop{\hat\psi\oei}_1\succ\Check\psi_1}
\sum_{\scriptstyle{\hat\psi\oei}_2\in\calu'_2 \atop{\hat\psi\oei}_2\succ\Check\psi_2}
\sum_{{\rm J}\in{\cal G}/\Gs''} \Ne{{[\rhob_2,\psu_2]\oei}}{{\rm J}{[\sigmab,\psu_\sigma]\oei}}{{[\rhob_1,\psu_1]\oei}} \,. \end{equation}
Here $\Gs''$ is a certain subgroup of ${\cal G}$ which is intermediate between
$\Gs'$ and ${\cal G}$, i.e.\ $\Gs'\,{\subseteq}\,\Gs''\,{\subseteq}\,{\cal G}$, and
$\Check\psi_i\,{=}\,{\hat\psi}_i|_{\caluhp i}^{}$. As an important consistency check we
will present a general proof that the prefactor
$N\,{\equiv}\,N_{\rho_1\rho_2}$, which is a quite complicated ratio
involving the sizes of various subgroups of ${\cal G}$ (see formula \erf{dud}), is
always a non-negative integer, so that the annulus amplitude can
be consistently interpreted as a partition function for open string states.
More precisely, the number $N$ can be written as a product of three
separate integral factors, each of which possesses a natural
group theoretic respectively representation theoretic interpretation
(see formul\ae\ \erf{dudo'} and \erf{dudu'}).
While this definitely implies that the annulus coefficients are non-negative
integers (as befits the coefficients of a partition function), the
interpretation of the prefactor $N$ should
also play a role in the non-chiral field-state correspondence. One expects to be
able to associate to every open string state a field operator in the
full conformal field theory. Now the partition function for these states is the
annulus amplitude, and the presence of additional multiplicities in the latter
means that several distinct operators in the {\em full\/} conformal field theory\ must be built
from one and the same {\em chiral\/} vertex operator. An explicit construction
of these operators is not known for the moment, but in any case the information
that the multiplicities all possess an interpretation in terms of fusion
rules and other representation theoretic objects seems to be highly relevant.
The rest of this paper is organized as follows.
We start in section \ref{s.2} with a description of our setup, i.e.\
boundary conditions which preserve a subalgebra of the bulk symmetries that is fixed under
a finite abelian group of automorphisms. The analysis
of such boundary conditions proceeds in two steps, where in the first step one works
exclusively at the chiral level, while in the second non-chiral quantities
enter. The general features of the chiral part are collected in section
\ref{s.3}, while in section \ref{s.4} a natural basis for the basic chiral
ingredients, the {\em boundary blocks\/}, is constructed. Section \ref{s.5}
is devoted to the non-chiral level. First, in subsection \ref{s.51}, we
show that the boundary conditions of interest to us are governed by a {\em classifying algebra\/}; in
the rest of this section we establish the precise form of this algebra\ and
investigate its properties. While we regard our arguments leading to these
results as convincing, they are not mathematically rigorous. As further evidence
we therefore perform, in section \ref{s.6}, several additional consistency
checks based on properties of the annulus amplitudes, the most important one
(subsection \ref{s.64}) being a general proof of the
integrality of the annulus coefficients that appear in the open string channel.
In a follow-up paper \cite{fuSc12}, we will address several complementary issues which
concern the structure of the space of symmetry breaking boundary conditions and display
the boundary conditions for various classes of conformal field theories\ explicitly.
More concretely, we start by associating to each boundary condition\ its {\em automorphism type\/},
which arises as a direct consequence of the general structure.
Then we show that boundary conditions of definite automorphism type\ can be naturally
formulated with the help of certain twisted boundary blocks, obeying
twisted Ward identities, and that they carry their own individual classifying algebra, which
is a suitable quotient of the total classifying algebra.
Further we study the realization of T-duality on the space of boundary conditions,
show that this space carries an action of the orbifold group (`boundary
homogeneity'), and introduce the concept of a {\em universal\/} classifying
algebra for all conformally invariant boundary conditions.
Finally we will exhibit in a large number of examples the
concrete realization of various structures that we have uncovered.
\bigskip\noindent{\small {\bf Acknowledgement} \\
We are grateful to Peter Bantay and Bert Schellekens for very helpful
correspondence.}
\sect{Broken bulk symmetries}\label{s.2}
In this paper we analyze the following situation. We start with some
prescribed conformal field theory\ that is consistently defined on all closed orientable
surfaces, and choose the charge conjugation modular invariant as the
partition function on the torus. This theory is, moreover, assumed to be
non-heterotic, i.e.\ for left and right movers we deal with one and
the same symmetry algebra ${\mathfrak A}$, the {\em chiral algebra\/}, which contains
the Virasoro algebra $\mbox{$\mathfrak V${\sl ir}}$. (This condition in fact refers to the oriented
Schottky cover of the surface, which for a closed orientable surface consists
of two isomorphic disjoint sheets; the requirement is that we deal with one
and the same chiral conformal field theory\ on both sheets.) We call ${\mathfrak A}$ the algebra of
{\em bulk symmetries\/}. For technical reasons we will assume that the theory
is rational, i.e.\ that it contains only finitely many ${\mathfrak A}$-primaries.
Boundary conditions that respect the full bulk symmetry have been studied
for quite a while \cite{card9,cale}.
In contrast, in the present work we are interested in boundary conditions
that do not preserve all bulk symmetries, but only a subalgebra ${\bar\cala}$
of ${\mathfrak A}$. This does not yet restrict at all the kind of boundary condition
we consider, since to any arbitrary boundary condition one may associate
the subalgebra of ${\mathfrak A}$ that is preserved. Further, we are interested
in conformally invariant boundary conditions only, so that ${\bar\cala}$ must
in particular contain the Virasoro subalgebra of ${\mathfrak A}$. Moreover, ${\bar\cala}$
must be `consistent'; by this qualification we understand that the algebra
is closed under charge conjugation and
allows for the definition of sheaves of chiral blocks which come with a
projectively flat Knizhnik\hy Zamolodchikov connection\ and which obey consistent factorization rules.
A typical chiral algebra ${\mathfrak A}$ will, however, possess
very many, if not infinitely many, consistent subalgebras ${\bar\cala}$.
Accordingly, the first step towards a classification of all conformal boundary
conditions would be to classify all those subalgebras. This problem
depends largely on the specific bulk conformal field theory\ under consideration, and (except for
a discussion of a possible limiting algebra\ of an inductive system of classifying algebras
in \cite{fuSc12}) we will not have to say much about it.
On the other hand, for sufficiently simple theories, such as the
Virasoro minimal models or the free boson or its
${\dl Z}_2$-orbifold, all consistent subalgebras are known. More
generally, once the problem of classifying the consistent subalgebras has
been solved for any single model, the methods presented below provide us
(possibly modulo the existence of so-called complex charges, compare
\cite{sasT2}) with all conformal boundary conditions of that model.
Here we rather concentrate on the task of classifying all boundary
conditions that preserve some specified consistent subalgebra ${\bar\cala}$.
As long as ${\bar\cala}$ is a completely arbitrary subalgebra, this problem is
still too general and cannot be solved with the methods that are available
at present. We will therefore restrict our attention
to a particular subclass of consistent subalgebras. Namely, we require
that ${\bar\cala}$ be the {\em fixed algebra\/} of some group $G$ of automorphisms
of the chiral algebra ${\mathfrak A}$. In other words,
\begin{equation} {\bar\cala} = {\mathfrak A}^G \end{equation}
is the chiral algebra of an {\em orbifold\/}
of the original theory. In principle the orbifold group $G$ can be quite
arbitrary; for instance, it need not even be finite, but rather could be some
finite-dimensional\ Lie group. Still, for the purpose of the present paper we restrict
our attention to the case when $G$ is finite, and when moreover it is abelian.
This situation may seem rather special compared to the
general problem sketched above, but it nevertheless covers a variety of
cases of practical interest. Examples are provided by
the critical three-state Potts model and, more generally, by Virasoro
minimal models of $(A,D_{\rm even})$ type, by
Dirichlet boundary conditions for a free boson for which only the chiral algebra
of the ${\dl Z}_2$-orbifold of the boson theory is preserved, by D-branes in
toroidal compactifications at generic positions, by charge conjugation in
WZW theories, and by those boundary conditions for a free boson that correspond to a change
in the compactification radius. (For a more extensive list, see the final
sections in the follow-up \cite{fuSc12}\ of this paper.) Moreover, already with this
restriction we can gain a number of additional physical insights,
e.g.\ concerning the relation between boundary conditions that preserve subalgebras
${\bar\cala}_1$ and ${\bar\cala}_2$ of ${\mathfrak A}$ that are contained in each other.
Let us briefly recall how to describe boundary conditions in conformal field theory\
on surfaces $\cal C$ with boundaries. First \cite{fuSc6},
one must set up a chiral conformal field theory\ on a closed oriented twofold covering surface
$\tilde{\cal C}$ of $\cal C$, the {\em Schottky cover\/} \cite{ales},
from which $\cal C$ is obtained by dividing out an anti-conformal
involution. This amounts to specifying a system of {\em chiral blocks\/}
that has a Knizhnik\hy Zamolodchikov connection\ and obeys factorization rules;
as a consequence of factorization, the blocks most relevant to the boundary conditions
are the chiral blocks for a single bulk field insertion on the disk, which
are two-point blocks on the projective line ${\mathbb P}^1$. In an independent
second step we have to construct {\em correlation functions\/} as linear combinations
of these blocks that satisfy \cite{card9,cale,prss3,bppz,runk}
locality and factorization constraints. As was emphasized in
\cite{fuSc6}, these two conceptual levels should be carefully distinguished,
and accordingly we will divide our discussion in two parts. We start
by analyzing, in the next two sections, the chiral conformal field theory\ on the Schottky cover.
\sect{Chiral theory for symmetry breaking boundaries}\label{s.3}
As just pointed out, for the thorough investigation
of boundary conditions it is advisable
to distinguish clearly between the two conceptual levels of {\em chiral\/}
conformal field theory\ and {\em full\/} conformal field theory. In the chiral theory, boundaries are not yet
present explicitly; but as a prerequisite for analyzing the breaking of bulk
symmetries by boundary conditions in the full theory various structures
need to be understood already at this stage. These chiral concepts are the
topic of the present and the next section.
\subsection{Simple currents versus orbifolds}
As already outlined above, our general situation is as follows. We are given
a rational conformal field theory\ with chiral algebra ${\mathfrak A}$, and we consider boundary
conditions that preserve only a consistent subalgebra ${\bar\cala}$ of ${\mathfrak A}$.
Let us assume that ${\mathfrak A}$ can be obtained from its subalgebra ${\bar\cala}$ by the
extension with a simple current group ${\cal G}$, which is some finite abelian group.
Each simple current ${\rm J}\,{\in}\,{\cal G}$ corresponds to an irreducible infinite-dimensional\ representation\
space $\bar{\cal H}_{\rm J}$ of ${\bar\cala}$; moreover, the vacuum module $\bar{\cal H}_{\rp\vac}$
of ${\bar\cala}$ and the vacuum module ${\cal H}_\Omega$ of ${\mathfrak A}$ are related as
\begin{equation} {\cal H}_\Omega \cong \bigoplus_{{\rm J}\in{\cal G}} \bar{\cal H}_{\rm J} \,, \end{equation}
where the symbol `$\cong$' stands for isomorphism as ${\bar\cala}$-modules;
the identity element in ${\cal G}$ corresponds to $\bar{\cal H}_{\rp\vac}$.
In this situation the chiral algebra ${\bar\cala}$ is necessarily
an orbifold subalgebra of ${\mathfrak A}$. Namely,
we can obtain an action of the dual group $G\,{=}\,{\cal G}^*$ on ${\cal H}_\Omega$
as follows. For every ${g}\,{\in}\, G$, we define $R({g})$ to act on the subspace
$\bar{\cal H}_{\rm J}$ of ${\cal H}_\Omega$ as the multiple ${\rm J}({g})\,\mbox{\sl id}_{\bar{\cal H}_{\rm J}}^{}$ of
the identity map, where ${\rm J}$ is regarded
as a character on $G$. Field-state correspondence relates the vectors in
${\cal H}_\Omega$ to operators in the chiral algebra, and thus this prescription
provides us with a group of automorphisms of ${\mathfrak A}$ that is isomorphic to $G$.
Conversely, suppose we are given an action of a finite abelian group $G$ on
the chiral algebra ${\mathfrak A}$ that leaves the Virasoro subalgebra\ $\mbox{$\mathfrak V${\sl ir}}\,{\subseteq}
\,{\mathfrak A}$ pointwise fixed. Then ${\mathfrak A}$ contains as a subalgebra the algebra
${\mathfrak A}^G$ of all elements that are left pointwise fixed under the action
of the orbifold group $G$, and ${\mathfrak A}^G$ contains the Virasoro subalgebra\
of ${\mathfrak A}$. Again by field-state correspondence, we then also
have an action of $G$ on the vacuum module ${\cal H}_\Omega$,\
\futnote{Here we make the assumption that the action of $G$ on the vacuum
module ${\cal H}_\Omega$ is an honest action rather than only a projective one. This
condition should be regarded as part of the definition of the term orbifold.
If it were not satisfied, the structure to be divided out would no longer be
a group.}
and this action commutes with the action of ${\mathfrak A}^G$. It can then be shown
\cite{dolm5,doMa2} that ${\cal H}_\Omega$ is completely reducible as an
${\mathfrak A}^G$-module, so that we can
decompose ${\cal H}_\Omega$ into irreducible submodules of\,\,%
\futnote{Note that $G$ commutes with the Virasoro algebra\ so that it preserves
the grading of the infinite-dimensional\ space ${\cal H}_\Omega$. Thus each homogeneous
subspace of fixed conformal weight is a finite-dimensional\ $G$-module, which is
fully reducible. As $G$ even commutes with all of ${\mathfrak A}^G$, full reducibility
with respect to $G\,{\times}\,{\mathfrak A}^G$ then follows from full reducibility with respect to ${\mathfrak A}^G$.
Incidentally, a vertex operator algebra for which
every graded representation is fully reducible possesses only finitely many
inequivalent irreducible representations \cite{dolm3}, thus giving rise to a rational conformal field theory.\\
A decomposition of the vacuum module of the form \erf{chi1} is expected to
hold for arbitrary finite orbifold groups $G$, and has been proven for many
non-abelian groups in \cite{dolm5,doMa2}. Analogous decompositions are valid
\cite{doMa3} for other ${\mathfrak A}$-modules, including twisted modules.}
$G\,{\times}\,{\mathfrak A}^G$ as
\begin{equation} {\cal H}_\Omega \cong \bigoplus_{\bar\lambda} \bigoplus_{\Psi\in G^*_{}}
V_{\Psi}\otimes \bar{\cal H}_{\bar\lambda} \,, \labl{chi1}
where $\bar{\cal H}_{\bar\lambda}$ are irreducible ${\mathfrak A}^G$-modules and $V_\Psi$ are
irreducible modules of $G$ (and are thus one-dimen\-sional).
It follows in particular that all ${\mathfrak A}^G$-modules $\bar{\cal H}_{\bar\lambda}$ that
appear in \erf{chi1} are simple currents. This holds true because the fusion
of these modules must be compatible with the decomposition of tensor products
of irreducible $G$-representations. The latter decomposition, in turn, is just
described by the dual group ${\cal G}\,{=}\, G^*_{}$, and hence we conclude that
with respect to the fusion product the modules $\bar{\cal H}_{\bar\lambda}$
appearing in \erf{chi1} form a simple current group isomorphic to ${\cal G}$.
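For the sake of illustration, consider the simplest case $G\,{=}\,{\dl Z}_2$ (a generic sketch, not tied to any particular model). The decomposition \erf{chi1} then reads
\begin{equation} {\cal H}_\Omega \cong \bar{\cal H}_{\rp\vac} \oplus \bar{\cal H}_{\rm J} \,, \end{equation}
where $\bar{\cal H}_{\rp\vac}$ consists of the $G$-invariant vectors and $\bar{\cal H}_{\rm J}$ of those with $G$-eigenvalue $-1$. Compatibility of the fusion product with the tensor product of $G$-representations requires ${\rm J}\,{\star}\,{\rm J}\,{=}\,{\rp\vac}$, so that ${\rm J}$ is a simple current of order two and $\{{\rp\vac},{\rm J}\}$ is indeed a simple current group isomorphic to ${\cal G}\,{=}\,G^*\,{\cong}\,{\dl Z}_2$.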
\subsection{Simple current extensions}
That the boundary conditions of interest to us preserve only the subalgebra
${\bar\cala}$ implies that generically the fields corresponding to vectors in
different ${\bar\cala}$-submodules of a given ${\mathfrak A}$-module are reflected
differently at the boundary. Accordingly we need to decompose every sector
of the chiral conformal field theory, i.e.\ every irreducible representation\ of ${\mathfrak A}$, into irreducible
${\bar\cala}$-representations (again we impose full reducibility with respect to ${\bar\cala}$).
Fortunately, the decomposition of ${\mathfrak A}$-modules in terms of ${\bar\cala}$-modules
is a purely chiral issue, i.e.\ is in particular independent of any boundary
effects, and this chiral issue is well understood in the simple
current framework. We summarize some relevant information here; for more
details see appendix \ref{s.a} and \cite{scya6,fusS6}. The fusion product
provides an action of the simple current group ${\cal G}$ on the fields ${\bar\lambda}$
of the ${\bar\cala}$-theory. To each primary field ${\bar\lambda}$ one then associates
a subgroup of ${\cal G}$, the stabilizer
\begin{equation} {\cal S}_\lambda := \{{\rm J}\,{\in}\,{\cal G} \,|\, {\rm J}{\bar\lambda}\,{=}\,{\bar\lambda} \} \,. \end{equation}
Stabilizer subgroups are constant on ${\cal G}$-orbits (which we already anticipated
by writing ${\cal S}_\lambda$ in place of ${\cal S}_{\bar\lambda}$);
conjugate ${\bar\cala}$-fields have identical stabilizers, too.
In the decomposition of a given ${\mathfrak A}$-module ${\cal H}_\lambda$
only ${\bar\cala}$-modules on a single ${\cal G}$-orbit appear.
However, one and the same ${\cal G}$-orbit ${[\mub]}$ of primaries in the
${\bar\cala}$-theory can give rise to several distinct primaries
of the ${\mathfrak A}$-theory. In other words, inequivalent ${\mathfrak A}$-modules can be
isomorphic as ${\bar\cala}$-modules. This effect is controlled by
a subgroup of the stabilizer, the so-called
untwisted stabilizer, which in turn is obtained with the help of the following
structure. To each ${\bar\cala}$-primary ${\bar\lambda}$ one can associate a bi-homo\-mor\-phism\
\begin{equation} F_\lambda:\quad {\cal G}\,{\times}\,{\cal G} \to {\mathbb C}^\times \end{equation}
that is alternating in the sense that $F_\lambda({\rm J},{\rm J})\eq1$ for all ${\rm J}\,{\in}\,{\cal G}$,
that again depends only on the orbit, and that is the same for any two
conjugate orbits (for the precise definition see appendix \ref{s.b}).
Every alternating bi-homo\-mor\-phism\ is the commutator cocycle for some
two-cocycle ${\cal F}$ on ${\cal G}$, i.e.
\begin{equation} F_\lambda({\rm J}_1,{\rm J}_2)
= {\cal F}_\lambda({\rm J}_1,{\rm J}_2) \,/\, {\cal F}_\lambda({\rm J}_2,{\rm J}_1) \,, \Labl ff
and the cohomology class of ${\cal F}_\lambda$ is uniquely determined by $F_\lambda$.
\subsection{The untwisted stabilizer}
The commutator cocycle $F_\lambda$ allows us to single out the
{\em untwisted stabilizer\/} \cite{fusS6} as the subgroup
\begin{equation} {\cal U}_\lambda := \{ {\rm J}\,{\in}\,{\cal S}_\lambda \,|\, F_\lambda({\rm J},{\rm K})\eq1
\mbox{ for all } {\rm K}\,{\in}\,{\cal S}_\lambda \} \end{equation}
of the full stabilizer ${\cal S}_\lambda$. As shown in \cite{fusS6}, those
${\mathfrak A}$-primaries that are isomorphic as ${\bar\cala}$-modules
are naturally labelled by characters of the group ${\cal U}_\lambda$. As a
consequence, the ${\mathfrak A}$-primaries can be denoted by ${\cal G}$-orbits ${[\lambdab,{\hat\psi}_\lambda]}$
of pairs consisting of a primary label ${\bar\lambda}$ of the ${\bar\cala}$-theory and
a character ${\hat\psi}_\lambda$ of the untwisted stabilizer\ of ${\bar\lambda}$. (The equivalence
relation that defines the classes ${[\lambdab,{\hat\psi}_\lambda]}$ involves both constituents
of the pair, see formula \Erf eq.)
A second important piece of information that we can extract from the
commutator cocycle $F_\lambda$ is a collection of
projective representations of ${\cal S}_\lambda$. They are characterized by the
two-cocycle ${\cal F}_\lambda$, or rather its cohomology class.
The theory of projective representations of finite abelian groups
(for a brief summary see appendix \ref{s.b}) tells us that the projective
irreducible representations are labelled by the characters ${\hat\psi}$ of the {\em untwisted\/}
stabilizer ${\cal U}_\lambda$ and, moreover, that they all have the same dimension
$d_\lambda$ that was defined in \Erf dl, i.e.
\begin{equation} d_\lambda = \sqrt{\s\lambda\,/\,\u\lambda} \end{equation}
with
\begin{equation} \s\lambda := |{\cal S}_\lambda| \,,\qquad \u\lambda := |{\cal U}_\lambda| \,.
\end{equation}
Note that, even though not manifest from their definition,
the numbers $d_\lambda$ are indeed integral \cite{fusS6,bant7}; they constitute
the (additional) ground state
degeneracy of the resolved fixed point in the ${\mathfrak A}$-theory \cite{fusS6}.
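The smallest situation in which the untwisted stabilizer is a proper subgroup may serve as an illustration (a standard example, included here for orientation): take ${\cal S}_\lambda\,{\cong}\,{\dl Z}_2\,{\times}\,{\dl Z}_2$ with generators ${\rm J}_1,{\rm J}_2$ and the non-degenerate alternating bi-homo\-mor\-phism
\begin{equation} F_\lambda({\rm J}_1^{a_1}{\rm J}_2^{a_2},{\rm J}_1^{b_1}{\rm J}_2^{b_2})
= (-1)^{a_1^{} b_2^{} - a_2^{} b_1^{}} \,. \end{equation}
No non-trivial element of ${\cal S}_\lambda$ then pairs trivially with all others, so that ${\cal U}_\lambda\,{=}\,\{{\rp\vac}\}$ and $d_\lambda\,{=}\,\sqrt{4/1}\,{=}\,2$; accordingly ${\cal S}_\lambda$ possesses a single projective irreducible representation, of dimension two.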
Taking all this information together, we arrive at the decomposition
\begin{equation} {\cal H}_\lambda^{} \equiv {\cal H}_{[\lambdab,{\hat\psi}]} =
\bigoplus_{{\rm J}\in{\cal G}/{\cal S}_\lambda} \V_{\!\psu} \otimes \bar{\cal H}_{{\rm J}{\bar\lambda}} \,,
\labl{deco}
where $\V_{\!\psu}$ is an irreducible projective ${\cal S}_\lambda$-module.
In the special case of the vacuum $\lambda\,{=}\,\Omega$ of the ${\mathfrak A}$-theory
the stabilizer is trivial, ${\cal S}_\Omega\,{=}\,\{{\rp\vac}\}$; we then recover
\erf{chi1} with ${\bar\lambda}\,{=}\,{\rm J}{\rp\vac}$ and $\Psi\,{\in}\,{G^*}\,{=}\,{\cal G}$ identified with
the simple current ${\rm J}$. It should also be kept in mind that on the right hand side\ of
these decompositions only such
irreducible representations ${\bar\mu}$ of the ${\bar\cala}$-theory arise for which the monodromy charges
$Q_{\rm J}(\lambda)$ \Erf QJ with respect to all currents ${\rm J}\,{\in}\,{\cal G}$ vanish; this holds true
simply because it is satisfied \cite{scya6} for all irreducible representations $\lambda$ in
the extension and because monodromy charges are constant on ${\cal G}$-orbits.
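We finally recall, for orientation, the explicit form of the monodromy charge; in the standard simple current convention of \cite{scya6} (for the precise definition used here see \Erf QJ) it is expressed through conformal weights as
\begin{equation} Q_{\rm J}({\bar\lambda}) = \Delta_{\rm J}^{} + \Delta_{\bar\lambda}^{}
- \Delta_{{\rm J}{\bar\lambda}}^{} \,\;{\rm mod}\;1 \,, \end{equation}
so that the condition $Q_{\rm J}({\bar\lambda})\eq0$ for all ${\rm J}\,{\in}\,{\cal G}$ indeed singles out the untwisted sector of the orbifold theory.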
\sect{Boundary blocks}\label{s.4}
We have now collected sufficient background material so as to be able
to address in more detail
the basic ingredient needed for the analysis of boundaries at the chiral level.
As already mentioned, as a consequence of factorization this ingredient is
provided by the chiral blocks for a single bulk insertion on the disk.
We will refer to these basic objects as the {\em boundary blocks\/} for broken
bulk symmetries. (When all bulk symmetries are preserved, these blocks are
also called Ishibashi states.)
By definition, such boundary blocks are two-point chiral blocks on a world
sheet ${\mathbb P}^1$ with the topology of the sphere. In more technical terms,
they are elements of
$({\cal H}_\lambda^{}{\otimes}{\cal H}_{\lambda^{\!+}_{\phantom i}})^\star_{\phantom I}$ -- the algebraic
dual of ${\cal H}_\lambda^{}{\otimes}{\cal H}_{\lambda^{\!+}_{\phantom i}}$ -- i.e.\ linear forms
${\cal H}_\lambda^{}\,{\otimes}\,{\cal H}_{\lambda^{\!+}_{\phantom i}}\,{\to}\,{\mathbb C}$
on the tensor product spaces ${\cal H}_\lambda\,{\otimes}\,{\cal H}_{\lambda^{\!+}_{\phantom i}}$, which
satisfy the Ward identities for ${\bar\cala}$, i.e.\ are invariant under
the symmetries in the chiral algebra that are preserved.
In the special case when all bulk symmetries are respected,
every such two-point block is uniquely determined up to a scalar factor.
\subsection{Boundary blocks for broken symmetries}
We are interested in the situation where only the symmetries in the prescribed
subalgebra ${\bar\cala}$ of the chiral algebra\ are preserved. {}From the decomposition
\erf{deco} it follows that the linear forms we are after are forms on
\begin{equation} {\cal H}_\lambda^{}\otimes{\cal H}_{\lambda^{\!+}_{\phantom i}}
\equiv {\cal H}_{[\lambdab,{\hat\psi}]}\otimes{\cal H}_{[\lambdab,{\hat\psi}]^+_{\phantom i}}
\cong \V_{\!\psu}^{}\,{\otimes}\,\V_{\!\psu^+}\,\otimes\!\bigoplus_{{\rm J},{\rm K}\in{\cal G}/{\cal S}_\lambda}\!
\mbox{\large(} \bar{\cal H}_{{\rm J}{\bar\lambda}}^{}\,{\otimes}\, \bar{\cal H}_{{\rm K}{\lambdab^{\!+}_{\phantom i}}} \mbox{\large)} \,, \Labl72
and hence they are sums of tensor products of linear forms on the tensor
product spaces $\V_{\!\psu}^{}{\otimes}{\cal V}_{{\hat\psi}^+}$\, and
$\,\bigoplus_{{\rm J},{\rm K}}\bar{\cal H}_{{\rm J}{\bar\lambda}}^{}{\otimes}\bar{\cal H}_{{\rm K}{\lambdab^{\!+}_{\phantom i}}}$.
Moreover, since the ${\bar\cala}$-symmetries are preserved, the latter forms satisfy
the Ward identities of the ${\bar\cala}$-theory and hence are
precisely the two-point blocks for the relevant ${\bar\cala}$-fields. These
blocks in turn can be non-vanishing only when one deals with tensor products of
conjugate ${\bar\cala}$-modules, i.e.\ effectively we are working with forms on the
subspace
\begin{equation} \V_{\!\psu}^{}\,{\otimes}\,\V_{\!\psu^+}\,\otimes\!\bigoplus_{{\rm J}\in{\cal G}/{\cal S}_\lambda}\!
\mbox{\large(} \bar{\cal H}_{{\rm J}{\bar\lambda}}^{}\,{\otimes}\,\bar{\cal H}_{({\rm J}{\bar\lambda})^+_{\phantom i}} \mbox{\large)}
\end{equation}
of the space \Erf72. Moreover, when non-vanishing, these forms on the
subspaces ${\cal H}_{\bar\mu}^{}\,{\otimes}\,{\cal H}_{\bar\mu^+_{\phantom i}}$ are uniquely fixed up to
normalization, just as the two-point blocks for ${\mathfrak A}$ are. Thus, in short,
they are just the ordinary boundary blocks
\begin{equation} {\bar\Beta}_{\bar\mu}:\quad {\cal H}_{\bar\mu}^{}\otimes{\cal H}_{\bar\mu^+_{\phantom i}} \to{\mathbb C} \Labl73
of the ${\bar\cala}$-theory.
It follows that the boundary blocks can be written as linear combinations
\begin{equation} \sum_{\scriptstyle i\in\{1,2,...,d_\lambda^2\} \atop
\scriptstyle {\rm J}\in{\cal G}/{\cal S}_\lambda} \x{{\rm J}{\bar\lambda};i}^{}\,
{\rm b}_{{\hat\psi},(i)} \,\ot\, {\bar\Beta}_{{\rm J}{\bar\lambda}} \,, \Labl74
where the maps ${\rm b}_{{\hat\psi},(i)}$ constitute some basis of the linear forms
\begin{equation} \beta_{\hat\psi}:\quad \V_{\!\psu}\,{\otimes}\, \V_{\!\psu^+} \to {\mathbb C} \,. \end{equation}
The coefficients $\x{{\rm J}{\bar\lambda};i}\,{\in}\,{\mathbb C}$ appearing in \Erf74 are
undetermined at the level of chiral conformal field theory, simply because the Ward identities
for the unbroken symmetries are satisfied independently of the values of
these coefficients. At the level of full conformal field theory, however, we will be able to
determine them; each consistent set of values then corresponds to a
boundary condition that preserves ${\bar\cala}$.
Now for the ${\bar\cala}$-part we are already given a natural basis of linear forms,
namely the ordinary boundary blocks \Erf73. On the other hand, at this point
we are still lacking a concrete basis for the linear forms $\beta_{\hat\psi}$
on the degeneracy spaces. Therefore we now turn our attention to those forms
$\beta_{\hat\psi}$. As already mentioned, the degeneracy spaces are projective
modules of the stabilizer group ${\cal S}_\lambda$ or, more precisely,
ordinary modules of that twisted group algebra ${\mathbb C}_{{\cal F}_\lambda}
{\cal S}_\lambda$ which corresponds to (the cohomology class of)
the two-cocycle ${\cal F}_\lambda$ that was introduced in formula \Erf ff. Thus to
every ${\rm J}\,{\in}\,{\cal S}_\lambda$ is associated a linear map $R_{\hat\psi}({\rm J})$ on $\V_{\!\psu}$,
and these maps represent ${\cal S}_\lambda$ projectively in the sense that
\begin{equation} R_{\hat\psi}({\rm J})\, R_{\hat\psi}({\rm J}') = {\cal F}_\lambda({\rm J},{\rm J}')\, R_{\hat\psi}({\rm J}\J') \end{equation}
for all ${\rm J},{\rm J}'\,{\in}\,{\cal S}_\lambda$. When the simple current ${\rm J}$ even lies
in the untwisted stabilizer ${\cal U}_\lambda\,{\subseteq}\,
{\cal S}_\lambda$, whose group algebra\ coincides with the center
of the twisted group algebra ${\mathbb C}_{{\cal F}_\lambda}{\cal S}_\lambda$,
then it is represented by a multiple of the unit matrix:
\begin{equation} R_{\hat\psi}({\rm J}) = {\hat\psi}({\rm J})\, \mbox{\small $1\!\!$}1_{d_\lambda} \quad {\rm for}\
{\rm J}\,{\in}\,{\cal U}_\lambda \,. \Labl1d
We also note that for any set $\{{\rm K}\}$ of representatives of the quotient
${\cal S}_\lambda/{\cal U}_\lambda$, the matrices $R_{\hat\psi}({\rm K})$ form a basis of the
full matrix algebra $M_{d_\lambda}(\V_{\!\psu})$ on $\V_{\!\psu}$.
(For further properties of twisted group algebras and their representations,
see appendix \ref{s.b}.)
By employing these maps $R_{\hat\psi}({\rm J})$ we will now construct a natural basis for
the linear forms $\beta_{\hat\psi}$ and thereby for the boundary blocks. To this
end we first establish an underlying basis $\{\calo _\psi\}$ for the
endomorphisms of $\V_{\!\psu}$.
Before doing so, however, we pause for a remark about the character ${\hat\psi}^+_{}$
that first appeared in formula \Erf72 above. It arises via the formula
\begin{equation} {[\lambdab,{\hat\psi}]^+_{\phantom i}} = [{\lambdab^{\!+}_{\phantom i}},{\hat\psi}^+_{}] \end{equation}
for the conjugation of ${\mathfrak A}$-representations, and thus comes from the ${\cal G}$-orbit
that is conjugate to the ${\cal G}$-orbit of ${\bar\lambda}$. Now the commutator
cocycles of conjugate orbits are just each others' complex conjugates (see
relation \Erf Fs), so ${\hat\psi}^+_{}$ is a character of the same group
${\cal U}_\lambda$ as ${\hat\psi}$. However, it does not, in general, coincide with the
complex conjugate character ${\hat\psi}^*$ (see formula \Erf9n for the precise
definition). Thus in particular the irreducible projective
${\cal S}_\lambda$-module $\V_{\!\psu^+}$ in general neither coincides with $\V_{\!\psu}$
itself nor with the module $\V_{\!\psu}^\star$ that is dual to $\V_{\!\psu}$ in the sense
that the representation\ matrices are hermitian conjugate to those for $\V_{\!\psu}$.
\subsection{A natural basis for ${\rm End}(\V_{\!\psu})$}
As an intermediate step towards constructing the desired basis for the
linear forms on $\V_{\!\psu}\,{\otimes}\, \V_{\!\psu^+}$, we introduce in this subsection
a basis $\{\calo _\psi\}\,{\equiv}\,\{\calo _\psi^{({\hat\psi})}\}$
for the linear maps on the degeneracy space $\V_{\!\psu}$.
To this end we first introduce the following concept. For every character
$\psi$ of the full stabilizer, $\psi\,{\in}\,{\cal S}_\lambda^*$, the restriction
of $\psi$ to ${\cal U}_\lambda\,{\subseteq}\,{\cal S}_\lambda$ is an element
$\pi(\psi)\,{\in}\,{\cal U}_\lambda^*$. We write
\begin{equation} \psi\succ{\hat\psi} \qquad{\rm or }\qquad {\hat\psi}=\pi(\psi)
\equiv \psi|_{{\cal U}_\lambda} \Labl49
to characterize this situation.
Each ${\cal U}_\lambda$-character ${\hat\psi}$ has $d_\lambda^2$ pre-images under the
projection $\pi$. We will show how these pre-images label the desired
basis of the endomorphisms of $\V_{\!\psu}$, according to
$\{\calo _\psi\,|\,\psi\,{\succ}\,{\hat\psi}\}$.
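As a concrete illustration of this counting, consider a hypothetical toy situation (the choices below are ours, not taken from the text): ${\cal S}_\lambda\,{=}\,{\mathbb Z}_2{\times}{\mathbb Z}_2$ with trivial untwisted stabilizer, so that $d_\lambda\,{=}\,2$ and the single ${\cal U}_\lambda$-character has $d_\lambda^2\,{=}\,4$ pre-images under $\pi$:

```python
# Toy counting of pre-images under the restriction map pi (illustrative only):
# S_lambda = Z_2 x Z_2 with trivial untwisted stabilizer U_lambda = {e},
# so d_lambda = sqrt(|S|/|U|) = 2 and each U-character has d^2 = 4 pre-images.
from itertools import product
from collections import defaultdict

S = list(product([0, 1], repeat=2))        # the group Z_2 x Z_2, written additively
U = [(0, 0)]                               # trivial untwisted stabilizer
d = int((len(S) / len(U)) ** 0.5)          # degeneracy d_lambda = 2

def psi_val(a, x):                         # the character psi_a of Z_2 x Z_2
    return (-1) ** (a[0] * x[0] + a[1] * x[1])

preimages = defaultdict(list)
for a in S:                                # characters of Z_2^2 are labelled by S itself
    restriction = tuple(psi_val(a, u) for u in U)
    preimages[restriction].append(a)

for psihat, labels in preimages.items():
    print(psihat, len(labels))             # -> (1,) 4 : exactly d^2 pre-images
```

Any other pair ${\cal U}_\lambda\,{\subseteq}\,{\cal S}_\lambda$ with $\s\lambda/\u\lambda$ a perfect square can be substituted in the same way.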
We start with the observation that
for $\psi\,{\succ}\,{\hat\psi}$ the product $\psi^*({\rm J})R_{\hat\psi}({\rm J})$
does not depend on the choice of representative ${\rm J}$ of a class in the quotient
${\cal S}_\lambda/{\cal U}_\lambda$. Indeed, if ${\rm K}\,{\in}\,{\cal U}_\lambda$, then
\begin{equation} \begin{array}{ll} \psi^*({\rm J}{\rm K})\, R_{\hat\psi}({\rm J}{\rm K}) \!\!
&= \psi^*({\rm J})\psi^*({\rm K})\, R_{\hat\psi}({\rm J}) R_{\hat\psi}({\rm K}) \\[.6em]
&=\psi^*({\rm J}){\hat\psi}^*({\rm K})\,R_{\hat\psi}({\rm J})\, {\hat\psi}({\rm K})\,\mbox{\small $1\!\!$}1_{d_\lambda}
= \psi^*({\rm J})\, R_{\hat\psi}({\rm J}) \end{array} \end{equation}
owing to the identity \Erf1d.
Therefore for each $\psi\,{\succ}\,{\hat\psi}$ we can introduce the endomorphism
\begin{equation} \calo _\psi := \mbox{\large(}\Frac{\u\lambda}{\s\lambda}\mbox{\large)}^{3/4}\!\!
\sum_{{\rm J}\in{\cal S}_\lambda/{\cal U}_\lambda}\! \psi({\rm J})^* R_{{\hat\psi}}({\rm J})
\,\in {\rm End}(\V_{\!\psu}) \,, \labl{clo}
and these maps are well-defined. The following argument shows that the matrices
$\calo _\psi$ form a basis of the full matrix algebra
$M_{d_\lambda}\,{=}\,{\rm End}(\V_{\!\psu})$.\,%
\futnote{Naively one might also expect that these matrices are (proportional
to) idempotents. But the non-triviality of the cocycle ${\cal F}_\lambda$ spoils
this property.}
We use the fact, derived in the appendix after relation \Erf29,
that the partial sum over characters yields a non-zero result if and only
if ${\rm K}\,{\in}\,{\cal U}_\lambda$ or, more precisely,
\begin{equation} \Sumpsipsu\lambda \psi({\rm K}) = \Frac{\s\lambda}{\u\lambda}\,
\delta_{{\rm K}\in{\cal U}_\lambda}\, {\hat\psi}({\rm K}) \,. \labl{lem}
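As a quick sanity check, \erf{lem} can be verified numerically in a small hypothetical example, say ${\cal S}_\lambda\,{=}\,{\mathbb Z}_2^{\,3}$ with ${\cal U}_\lambda$ the third ${\mathbb Z}_2$ factor, so that $\s\lambda/\u\lambda\,{=}\,4$ (this choice is ours, purely for illustration):

```python
# Numerical check of the partial character sum (lem):
#   sum_{psi > psihat} psi(K) = (s/u) * delta_{K in U} * psihat(K).
# Toy data (illustrative, not from the text): S = Z_2^3, U = third Z_2 factor.
from itertools import product

S = list(product([0, 1], repeat=3))
U = [g for g in S if g[0] == 0 and g[1] == 0]      # {(0,0,0), (0,0,1)}
ratio = len(S) // len(U)                           # s/u = 4

def psi_val(a, x):                                 # character psi_a of Z_2^3
    return (-1) ** (a[0]*x[0] + a[1]*x[1] + a[2]*x[2])

for a2 in [0, 1]:                                  # the two U-characters
    extensions = [a for a in S if a[2] == a2]      # the 4 characters psi > psihat
    for K in S:
        total = sum(psi_val(a, K) for a in extensions)
        expected = ratio * psi_val((0, 0, a2), K) if K in U else 0
        assert total == expected
print("identity (lem) holds for every K")
```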
The identity \erf{lem} implies that
\begin{equation} \Sumpsipsu\lambda \psi({\rm K})\, \calo _\psi
= \mbox{\large(}\Frac{\s\lambda}{\u\lambda}\mbox{\large)}^{1/4}\!\!
\sum_{{\rm J}\in{\cal S}_\lambda/{\cal U}_\lambda} \delta^{}_{{\rm K}{\rm J}^{-1}\in{\cal U}_\lambda}
{\hat\psi}({\rm K}{\rm J}^{-1})\, R_{{\hat\psi}}({\rm J})
= d_\lambda^{1/2}\, {\hat\psi}({\rm K}{\rm J}^{-1}_{\rm K})\, R_{{\hat\psi}}({\rm J}_{\rm K}) \end{equation}
for all ${\rm K}\,{\in}\,{\cal S}_\lambda$. Here ${\rm J}_{\rm K}$ denotes the chosen representative
in ${\cal S}_\lambda/{\cal U}_\lambda$ that is in the same class as ${\rm K}$.
These sums, of course, depend on ${\rm K}$, and not just on ${\rm K}$ modulo
${\cal U}_\lambda$. However, for any set $\{{\rm K}\}$ of representatives of
${\cal S}_\lambda/{\cal U}_\lambda$, we recover all elements in a basis of the
space of endomorphisms of $\V_{\!\psu}$, because the operators
$R_{\hat\psi}({\rm J})$ span this space.
It follows that the operators $\calo _\psi$ span the space ${\rm End}(\V_{\!\psu})$ of
endomorphisms; for dimensional reasons, they therefore constitute a basis
of ${\rm End}(\V_{\!\psu})$, as claimed. Furthermore, since its construction is entirely
specified in terms of the character ${\hat\psi}$ and the simple currents in
${\cal S}_\lambda$, this basis is indeed completely natural.
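To see the construction \erf{clo} at work, the following sketch realizes it in a hypothetical Pauli-matrix situation, ${\cal S}_\lambda\,{=}\,{\mathbb Z}_2{\times}{\mathbb Z}_2$ with trivial ${\cal U}_\lambda$ and $d_\lambda\,{=}\,2$ (our own toy choice), and confirms numerically that the resulting four matrices $\calo _\psi$ span ${\rm End}(\V_{\!\psu})$:

```python
# Toy realization (not from the text) of the operators O_psi of eq. (clo):
# S_lambda = Z_2 x Z_2 realized projectively by Pauli matrices, with trivial
# untwisted stabilizer, so d_lambda = 2 and the prefactor is (u/s)^{3/4} = 4^{-3/4}.
import numpy as np

I2 = np.eye(2)
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])

S = [(0, 0), (1, 0), (0, 1), (1, 1)]
R = {(0, 0): I2, (1, 0): sx, (0, 1): sz, (1, 1): sx @ sz}  # projective rep

def psi(a, J):                                # real characters of Z_2 x Z_2
    return (-1) ** (a[0]*J[0] + a[1]*J[1])

prefactor = (1 / 4) ** 0.75                   # (u/s)^{3/4} with u = 1, s = 4
O = {a: prefactor * sum(psi(a, J) * R[J] for J in S) for a in S}

# flatten the four operators into rows; rank 4 <=> they span End(V) = M_2
M = np.array([O[a].flatten() for a in S])
print(np.linalg.matrix_rank(M))               # -> 4
```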
Later on we will also need the traces of the operators $\calo _\psi$ and of a
product of two of them, for two ${\cal S}_\lambda$-characters $\psi,\varphi$ that
restrict to the same ${\cal U}_\lambda$-character ${\hat\psi}$. To evaluate
these traces we observe that the trace of $R_{\hat\psi}({\rm J})$ is given by
\begin{equation} {\rm tr}\, R_{\hat\psi}({\rm J}) = \delta^{}_{{\rm J}\in{\cal U}_\lambda}\, {\hat\psi}({\rm J})\,
{\rm tr}\,\mbox{\small $1\!\!$}1_{d_\lambda} = d_\lambda\, {\hat\psi}({\rm J})\,\delta^{}_{{\rm J}\in{\cal U}_\lambda}
\Labl oR
(compare the relations \Erf1p and \Erf0t). We then find
\begin{equation} \begin{array}{ll} {\rm tr}\, \clo_\psi \!\!
&= d_\lambda^{-3/2} \displaystyle\sum_{{\rm J}\in{\cal S}_\lambda/{\cal U}_\lambda}\psi({\rm J})^*\,
{\rm tr}\, R_{\hat\psi}({\rm J})
\\{}\\[-.8em]
&= d_\lambda^{-1/2} \displaystyle\sum_{{\rm J}\in{\cal S}_\lambda/{\cal U}_\lambda}
\psi({\rm J})^*{\hat\psi}({\rm J})\, \delta^{}_{{\rm J}\in{\cal U}_\lambda}
= d_\lambda^{-1/2}\, \psi({\rm J}_1)^*{\hat\psi}({\rm J}_1) = d_\lambda^{-1/2}
\end{array} \end{equation}
(here ${\rm J}_1$ is the chosen representative of the class of the unit element
of ${\cal S}_\lambda/{\cal U}_\lambda$)
as well as\,%
\futnote{In the last line we assume that the cocycle ${\cal F}$ has been chosen to
be standard, which means (see formula \Erf44) that for elements of the basis
of the twisted group algebra the operations of forming the inverse and of
conjugating with an element of the center look the same as in the untwisted
case. This property of ${\cal F}$ can always be achieved by a suitable choice of
basis.}
\begin{equation} \begin{array}{ll} {\rm tr}\, \calo _\psi^\dagger \calo _\varphi \!\!
&= d_\lambda^{-3}\! \displaystyle\sum_{{\rm J}_1,{\rm J}_2\in{\cal S}_\lambda/{\cal U}_\lambda}
\psi({\rm J}_1)\,\varphi({\rm J}_2)^*\, {\rm tr}\, R_{\hat\psi}^\dagger({\rm J}_1)\,R_{\hat\psi}({\rm J}_2)
\\{}\\[-.8em]
&= d_\lambda^{-3}\! \displaystyle\sum_{{\rm J}_1,{\rm J}_2\in{\cal S}_\lambda/{\cal U}_\lambda}
\psi({\rm J}_1)\,\varphi({\rm J}_2)^*\, {\rm tr}\, R_{\hat\psi}({\rm J}_1^{-1}{\rm J}_2) \cdot
{\cal F}_\lambda({\rm J}_1^{-1},{\rm J}_2)
\\{}\\[-.8em]
&= d_\lambda^{-3+1}\! \displaystyle\sum_{{\rm J}_1,{\rm J}_2\in{\cal S}_\lambda/{\cal U}_\lambda}
\psi({\rm J}_1)\,\varphi({\rm J}_2)^*\,{\cal F}_\lambda({\rm J}_1^{-1},{\rm J}_2)\,
\delta_{{\rm J}_1^{-1}{\rm J}_2\in{\cal U}_\lambda}\, {\hat\psi}({\rm J}_1^{-1}{\rm J}_2)
\\{}\\[-.8em]
&= d_\lambda^{-2}\! \displaystyle \sum_{{\rm J}\in{\cal S}_\lambda/{\cal U}_\lambda}
\psi({\rm J})\,\varphi({\rm J})^*\,{\cal F}_\lambda({\rm J}^{-1}\!,{\rm J})\,
= \Frac{\u\lambda}{\s\lambda} \displaystyle \sum_{{\rm J}\in{\cal S}_\lambda/{\cal U}_\lambda}
\psi({\rm J})\,\varphi({\rm J})^*
= \delta_{\psi,\varphi} \,. \end{array} \Labl oo
Thus the basis $\{\calo _\psi\,|\,\psi\,{\succ}\,{\hat\psi}\}$ is orthonormal.
Also, combining these results we learn that the endomorphisms
$d_\lambda^{-1/2}\clo_\psi$ form a partition of unity:
\begin{equation} \Sumpsipsu\lambda\! \clo_\psi = d_\lambda^{1/2}\, \mbox{\small $1\!\!$}1_{d_\lambda}^{}
\,. \Labl2d
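The trace formula just derived, the orthonormality relation \erf{oo} and the partition of unity \Erf2d can all be verified numerically in a toy realization; the concrete group and matrices below are our own illustrative choice, namely ${\cal S}_\lambda\,{=}\,{\mathbb Z}_2{\times}{\mathbb Z}_2$ with trivial ${\cal U}_\lambda$, represented projectively by Pauli matrices, so that $d_\lambda\,{=}\,2$:

```python
# Toy numerical check (setup invented for illustration): S_lambda = Z_2 x Z_2
# with trivial U_lambda, realized projectively by Pauli matrices, d_lambda = 2.
# We verify tr O_psi = d^{-1/2}, orthonormality (oo) and partition of unity (2d).
import numpy as np

I2 = np.eye(2)
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])

S = [(0, 0), (1, 0), (0, 1), (1, 1)]
R = {(0, 0): I2, (1, 0): sx, (0, 1): sz, (1, 1): sx @ sz}   # projective rep

def psi(a, J):                                  # characters of Z_2 x Z_2
    return (-1) ** (a[0]*J[0] + a[1]*J[1])

d = 2
O = {a: (1 / 4) ** 0.75 * sum(psi(a, J) * R[J] for J in S) for a in S}

for a in S:                                     # trace formula: tr O_psi = d^{-1/2}
    assert np.isclose(np.trace(O[a]), d ** -0.5)

for a in S:                                     # orthonormality (oo)
    for b in S:
        val = np.trace(O[a].conj().T @ O[b])
        assert np.isclose(val, 1.0 if a == b else 0.0)

assert np.allclose(sum(O[a] for a in S), d ** 0.5 * I2)   # partition of unity (2d)
print("trace, orthonormality and partition of unity all verified")
```

Any other irreducible projective representation with the appropriate cocycle passes the same checks.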
\subsection{A natural basis for the boundary blocks}
Our next aim is to associate to each of the endomorphisms $\clo_\psi{:}\
\V_{\!\psu}\,{\to}\,\V_{\!\psu}$ with $\psi\,{\succ}\,{\hat\psi}$ and to each
${\rm J}\,{\in}\,{\cal G}/{\cal S}_\lambda$ a linear form
\begin{equation} {\rm b}_\psi^{}\equiv{\rm b}_\psi^{({\hat\psi};{\rm J}{\bar\lambda})}:\quad
\V_{\!\psu} \,{\otimes}\, \V_{\!\psu^+} \to {\mathbb C} \,, \end{equation}
in such a way that the collection of these forms constitutes a basis. (Thus
these maps are required to provide a concrete realization of the basis
elements ${\rm b}_{{\hat\psi},(i)}$ that were introduced in formula \Erf74; in
particular the label $i$ appearing there turns out to be nothing but a character
$\psi\,{\in}\,{\cal S}_\lambda^*$ with $\psi\,{\succ}\,{\hat\psi}$.)
This is achieved by first constructing a suitable non-degenerate linear form
\begin{equation} \beta_\circ\equiv\beta_\circ^{({\hat\psi};\J\lambdab)}:\quad \V_{\!\psu} \,{\otimes}\, \V_{\!\psu^+}\to {\mathbb C} \Labl:n
and then defining
\begin{equation} {\rm b}_\psi\equiv{\rm b}_\psi^{({\hat\psi};{\rm J}{\bar\lambda})}
:= \beta_\circ \circ (\clo_\psi\oT\mbox{\sl id}) \,, \Labl nO
i.e.\ ${\rm b}_\psi(v\raisebox{.07em}{$\scriptstyle\otimes$} w)\,{:=}\, \beta_\circ(\clo_\psi v\,\ot\, w)$
for all $v\,{\in}\,\V_{\!\psu}$ and all $w\,{\in}\,\V_{\!\psu^+}$.
To obtain $\beta_\circ$, we observe that when restricted to an isotypic component in
the decomposition \erf{deco}, the boundary block ${\rm B}_\lambda{:}\ {\cal H}_\lambda^{}{\otimes}
{\cal H}_{\lambda^{\!+}_{\phantom i}}\,{\to}\,{\mathbb C}$ of the ${\mathfrak A}$-theory satisfies the Ward identities of
the ${\bar\cala}$-theory and is therefore proportional to the corresponding boundary
block $\bar{\rm B}_{\bar\lambda}{:}\ \calhb_{\bar\lambda}^{}{\otimes}\calhb_{\lambdab^{\!+}_{\phantom i}}\,{\to}\,{\mathbb C}$ of the
${\bar\cala}$-theory. This implies that upon choosing any two fixed elements
$p_\circ$ and $q_\circ$ of $\calhb_{\bar\lambda}$ and $\calhb_{\lambdab^{\!+}_{\phantom i}}$ for which $\bar{\rm B}_{\bar\lambda}(p_\circ\,\ot\,q_\circ)$
is non-zero, the prescription
\begin{equation} \beta_\circ(v\,\ot\, w)
:= \frac{{\rm B}_\lambda(v\raisebox{.07em}{$\scriptstyle\otimes$}p_\circ\,\ot\, w\raisebox{.07em}{$\scriptstyle\otimes$}q_\circ)}{\bar{\rm B}_{\bar\lambda}(p_\circ\,\ot\,q_\circ)} \labl n
yields a well-defined linear form on $\V_{\!\psu}{\otimes}\V_{\!\psu^+}$. Moreover,
by the non-degeneracy and uniqueness of ${\rm B}_\lambda$ and
$\bar{\rm B}_{\bar\lambda}$, it is non-degenerate. Note that all forms in question are unique
up to a scalar. Of course, the scalar factor for $\beta_\circ$ can be different for
different isotypic components of ${\cal H}_\lambda$, so that for each
${\rm J}\,{\in}\,{\cal G}/{\cal S}_\lambda$ we obtain a different map $\beta_\circ^{({\hat\psi};\J\lambdab)}$. Hence for
arbitrary elements $p,q$ of $\bigoplus_{\J\in\Gs/\cals_\lambda}\calhb_{{\rm J}{\bar\lambda}}$ and $\bigoplus_{\J\in\Gs/\cals_\lambda}\calhb_{({\rm J}{\bar\lambda})^+_{\phantom I}}$, respectively, we have
\begin{equation} {\rm B}_\lambda(v\raisebox{.07em}{$\scriptstyle\otimes$}p^{\scriptscriptstyle(\J\lambdab)}\,\ot\, w\raisebox{.07em}{$\scriptstyle\otimes$}q^{\scriptscriptstyle(\J\lambdab)}) = \beta_\circ^{({\hat\psi};\J\lambdab)}(v\,\ot\, w) \cdot \bar{\rm B}_{{\rm J}{\bar\lambda}}(p^{\scriptscriptstyle(\J\lambdab)}\,\ot\,q^{\scriptscriptstyle(\J\lambdab)}) \end{equation}
for all ${\rm J}\,{\in}\,{\cal G}/{\cal S}_\lambda$, where $p\,{=:}\,\sum_{\J\in\Gs/\cals_\lambda} p^{\scriptscriptstyle(\J\lambdab)}$ with
$p^{\scriptscriptstyle(\J\lambdab)}\,{\in}\,\bar{\cal H}_{{\rm J}{\bar\lambda}}$ and analogously for $q$.
By the linear independence of the endomorphisms $\clo_\psi$, also the
forms \Erf nO are linearly independent; since there are $d_\lambda^2$ of
them, they therefore provide us indeed with a basis of the linear forms on
$\V_{\!\psu}{\otimes}\V_{\!\psu^+}$. We now combine this basis with the ${\bar\cala}$-blocks
\Erf73 with ${\bar\mu}\,{=}\,{\rm J}{\bar\lambda}$.
When doing so, we still have to allow for an arbitrary over-all
normalization of the blocks, which cannot be determined at the present
stage. We thus arrive at the linear forms
\begin{equation} {\tilde\Beta}^{}_{({\bar\mu},\psi)}\equiv{\tilde\Beta}_{({\bar\mu},\psi)}^\lambda
:= \norm{\bar\mu}\psi\,d_\lambda^{-1/2}\,{\rm b}_\psi \,\ot\, \bbb\mu \labl{tBeta}
on ${\cal H}_\lambda^{}{\otimes}{\cal H}_{\lambda^{\!+}_{\phantom i}}$ with some non-zero $\norm{\bar\mu}\psi\,{\in}\,{\mathbb C}$,
acting as
\begin{equation} {\tilde\Beta}_{({\rm J}{\bar\lambda},\psi)}(v\raisebox{.07em}{$\scriptstyle\otimes$} p\,\ot\, w\raisebox{.07em}{$\scriptstyle\otimes$} q)
:= \norm{{\rm J}{\bar\lambda}}\psi\,d_\lambda^{-1/2}\, {\rm b}_\psi(v\raisebox{.07em}{$\scriptstyle\otimes$} w)
\cdot \bar{\rm B}_{{\rm J}{\bar\lambda}}(p^{\scriptscriptstyle(\J\lambdab)}\raisebox{.07em}{$\scriptstyle\otimes$}q^{\scriptscriptstyle(\J\lambdab)}) \Labl bB
for all $v\,{\in}\,\V_{\!\psu}$, $w\,{\in}\,\V_{\!\psu^+}$, $p\,{\in}\,{\cal H}_\lambda$ and $q\,{\in}\,{\cal H}_{\lambda^{\!+}_{\phantom i}}$.
For $\lambda\,{=}\,{[\lambdab,{\hat\psi}]}$ and ${\bar\mu}\,{=}\,{\rm J}{\bar\lambda}$, with ${\rm J}$ ranging over
${\cal G}/{\cal S}_\lambda$ and $\psi$ over all $\psi\,{\succ}\,{\hat\psi}$, the forms
${\tilde\Beta}^{}_{({\bar\mu},\psi)}$ constitute a natural basis for the boundary
blocks on ${\cal H}_\lambda^{}{\otimes}{\cal H}_{\lambda^{\!+}_{\phantom i}}$ that preserve ${\bar\cala}$.
In short, the boundary blocks are naturally labelled by pairs
$({\bar\mu},\psi)$, one label referring to a primary field ${\bar\mu}$ of the
${\bar\cala}$-theory with vanishing monodromy charge $Q_{\cal G}(\mu)\eq0$, the
other to a character of the {\em full\/} stabilizer, $\psi\,{\in}\,{\cal S}_\mu^*$.
We remark that with the help of the identity \Erf2d\,%
\futnote{The explicit factor of $d_\lambda^{-1/2}$ in \erf{tBeta} was
introduced with hindsight, so as to cancel the corresponding
factor in \Erf2d.}
one checks that the
ordinary boundary blocks of the ${\mathfrak A}$-theory can be expressed in terms of
the blocks ${\tilde\Beta}_{({\bar\mu},\psi)}$ as
\begin{equation} {\rm B}_\lambda = \Plupsipsu\lambda \bigoplus_{\J\in\Gs/\cals_\lambda} (\norm{{\rm J}{\bar\lambda}}\psi)^{-1}\,
{\tilde\Beta}_{({\rm J}{\bar\lambda},\psi)} \,. \Labl BJ
Moreover, it is easy to see that the form $\beta_\circ$ satisfies a `degeneracy space
Ward identity', i.e.
\begin{equation} \beta_\circ \circ (y\,\ot\,{\bf1} - {\bf1}\,\ot\, y) = 0 \Labl10
for every $y\,{\in}\,{\rm End}(\V_{\!\psu})$.\,%
\futnote{Note that in the two terms $y$ acts on different spaces, i.e.\ in
more pedantic notation the identity reads $\beta_\circ\,{\circ}\,[R_{\hat\psi}^{}(y)\,\ot\,{\bf1}
- {\bf1}\,\ot\, R_{{\hat\psi}^+}(y)]\eq0$. Just like in the usual Ward identities,
in \Erf10 the representation\ symbols are suppressed.}
This result, in turn, when combined with the
Ward identities of the ${\bar\cala}$-theory, immediately implies that the
linear combination \Erf BJ indeed satisfies the Ward identities of
the ${\mathfrak A}$-theory, i.e.
\begin{equation} {\rm B}_\lambda \circ \mbox{\large(} Y_n \,\ot\,{\bf1} + (-1)^{\Delta_Y-1}\, {\bf1} \,\ot\,
Y_{-n} \mbox{\large)} = 0 \Labl WI
for all $Y\,{\in}\,{\mathfrak A}$ ($\Delta_Y$ denotes the conformal weight of $Y$).
\subsection{Scalar products}
For the computation of annulus amplitudes we need to deal with suitable
scalar products of the boundary blocks. As a matter of fact,
the boundary blocks are not normalizable; but for every $t\,{>}\,0$ there
exists a modified inner product
\begin{equation} \langle {\tilde\Beta}_{({\bar\lambda},\psi)}
|\, {\rm e}^{-(2\pi/t)(L_0\raisebox{.07em}{$\scriptscriptstyle\otimes$}{\bf1}+{\bf1}\raisebox{.07em}{$\scriptscriptstyle\otimes$} L_0-c/12)} \,|
{\tilde\Beta}_{({\bar\mu},\varphi)} \rangle \Labl mf
with respect to which they become normalizable.\,%
\futnote{While at this point this observation is a mere peculiarity without any
particular application, these modified inner products indeed appear
in the computation of annulus amplitudes, see subsection \ref{s.62} below.
In that context, $t$ is the modular parameter of the annulus.}
To substantiate this statement and perform the concrete calculation, we
compare it to the analogous computation for the boundary blocks of the
${\bar\cala}$-theory. As usual \cite{prss2,prss3}, we normalize the ordinary
boundary blocks of the ${\mathfrak A}$-theory by prescribing
the over-all factor in their modified inner product, according to
\begin{equation} \langle {\rm B}_\lambda
|\, {\rm e}^{-(2\pi/t)(L_0\raisebox{.07em}{$\scriptscriptstyle\otimes$}{\bf1}+{\bf1}\raisebox{.07em}{$\scriptscriptstyle\otimes$} L_0-c/12)} \,|
{\rm B}_\mu \rangle = \Frac1{S_{\lambda,\Omega}}\,
\raisebox{.15em}{$\chi$}_\lambda(2{\rm i}/ t)\, \delta_{\lambda,\mu}^{} \,, \labl{skp1}
and analogously for the boundary blocks of the ${\bar\cala}$-theory:
\begin{equation} \langle {\bar\Beta}_{\bar\lambda}
|\, {\rm e}^{-(2\pi/t)(L_0\raisebox{.07em}{$\scriptscriptstyle\otimes$}{\bf1}+{\bf1}\raisebox{.07em}{$\scriptscriptstyle\otimes$} L_0-c/12)} \,|
{\bar\Beta}_{\bar\mu} \rangle = \Frac1{\Sb\lambda\Omega\!\!\raisebox{.54em}{$\phantom.$}}\,
\bar{\raisebox{.15em}{$\chi$}}_{\bar\lambda}(2{\rm i}/ t)\, \delta_{{\bar\lambda},{\bar\mu}}^{} \,. \labl{skp2}
As an additional ingredient we need to construct an inner product
on the space of linear maps \Erf nO; this is achieved as follows.
First we construct a scalar product on the degeneracy space $\V_{\!\psu}$, i.e.\
a sesquilinear map
\begin{equation}
\kappa_{{\hat\psi}}:\quad \V_{\!\psu} \times \V_{\!\psu} \to {\mathbb C} \, . \end{equation}
The construction uses the invariant scalar products on the modules of
${\mathfrak A}$ and ${\bar\cala}$. Namely, on ${\mathfrak A}$ we have an antilinear conjugation map
$c{:}\ Y\,{\mapsto}\,Y^\dagger$, and on each ${\mathfrak A}$-module ${\cal H}_\lambda$ there is
a scalar product $\kappa_\lambda{:}\ {\cal H}_\lambda{\times}{\cal H}_\lambda\,{\to}\,{\mathbb C}$
which is invariant in the sense that
\begin{equation} \kappa_\lambda(Y_n p,p') + \kappa_\lambda(p,(Y_n)^\dagger p') = 0 \end{equation}
for all $p,p'\,{\in}\,{\cal H}_\lambda$ and all $Y_n\,{\in}\,{\mathfrak A}$.
Such a scalar product on an {\em irreducible\/} module is unique up to a scalar.
Moreover, since the subalgebra ${\bar\cala}$ must be consistent, it is closed under
$c$, and hence there is an analogous structure on ${\bar\cala}$-modules.
The scalar product on $\V_{\!\psu}$ can now be constructed as follows. The subspace
$\V_{\!\psu}\,{\otimes}\, \calhb_{\bar\lambda}$ of ${\cal H}_\lambda$ inherits a scalar product from the scalar
product $\kappa_\lambda$ of ${\cal H}_\lambda$. For any two fixed vectors $v,v'$ in the degeneracy
space $\V_{\!\psu}$, the mapping $(p,p')\,{\mapsto}\,\kappa_\lambda(v\raisebox{.07em}{$\scriptstyle\otimes$} p,v'\raisebox{.07em}{$\scriptstyle\otimes$} p')$
for $p,p'\,{\in}\,\calhb_{\bar\lambda}$ provides a sesquilinear form.
This sesquilinear form is still invariant with respect to the restriction of the
conjugation $c$ to the subalgebra ${\bar\cala}$. It must thus be proportional
to the standard scalar product $\bar\kappa_{\bar\lambda}$ on $\calhb_{\bar\lambda}$. We call the
constant of proportionality $\kappa_{{\hat\psi}}(v,v')$.
In formulae,
\begin{equation} \kappa_{{\hat\psi}}(v,v') = \frac{\kappa_\lambda(v\raisebox{.07em}{$\scriptstyle\otimes$} p,v'\raisebox{.07em}{$\scriptstyle\otimes$} p')}
{\bar\kappa_{\bar\lambda}(p,p')} \,, \labl{kappav}
where any pair $p,p'\,{\in}\,\calhb_{\bar\lambda}$ of vectors can be chosen that obeys
$\bar\kappa_{\bar\lambda}(p,p')\nE0$. One verifies that $\kappa_{{\hat\psi}}$ constitutes a
non-degenerate scalar product on the degeneracy space $\V_{\!\psu}$.
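The ratio construction \erf{kappav} can be illustrated by a small finite-dimensional sketch in which the scalar product on the subspace factorizes as a Kronecker product; the dimensions and matrices below are invented for the example and carry no meaning in the conformal field theory context:

```python
# Finite-dimensional sketch of the ratio construction (kappav): if the scalar
# product on V (x) Hbar factorizes as a Kronecker product, then the ratio
# kappa_lambda(v (x) p, v' (x) p') / kappabar(p, p') recovers the scalar
# product on the degeneracy space V, independently of the chosen pair p, p'.
import numpy as np

rng = np.random.default_rng(0)

def gram(n):                                   # random positive-definite Gram matrix
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return a.conj().T @ a + n * np.eye(n)

A = gram(2)                                    # kappa_psihat on the degeneracy space V
B = gram(3)                                    # kappabar on Hbar_lambda
G = np.kron(A, B)                              # induced product on V (x) Hbar

v, vp = rng.normal(size=2), rng.normal(size=2)
p, pp = rng.normal(size=3), rng.normal(size=3)

num = np.kron(v, p).conj() @ G @ np.kron(vp, pp)
den = p.conj() @ B @ pp
print(np.isclose(num / den, v.conj() @ A @ vp))   # -> True
```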
The scalar product $\kappa_{{\hat\psi}}$ possesses an invariance property as well. Consider
the elements of ${\mathfrak A}$ that commute with the subalgebra ${\bar\cala}$. Since the
conjugation $c$ is an automorphism of ${\mathfrak A}$, this commutant is mapped by
$c$ to itself, so that the commutant, too, comes with its own conjugation.
The scalar product is now invariant in the sense that
\begin{equation} \kappa_{{\hat\psi}}(yv,v') = \kappa_{{\hat\psi}}(v,c(y) v') \end{equation}
for all $y\,{\in}\,{\rm End}(\V_{\!\psu})$. (In case the commutant should be smaller than
${\rm End}(\V_{\!\psu})$, one simply extends $c$ to the rest of ${\rm End}(\V_{\!\psu})$.)
Now since the degeneracy spaces $\V_{\!\psu}$, and analogously also $\V_{\!\psu^+}$, carry
an invariant scalar product, the space of linear forms on
$\V_{\!\psu}\otimes\V_{\!\psu^+}$ and the space of endomorphisms ${\rm End}(\V_{\!\psu})\cong
(\V_{\!\psu})^*_{}\,{\otimes}\,\V_{\!\psu}$ inherit scalar products as well. For the latter, it is given
by $\kappa_{{\rm End}(\V_{\!\psu})}(y,y')\,{=}\,{\rm tr}\,(y^\dagger y')$. (Notice that the trace is
independent
of the scalar product on $\V_{\!\psu}$; the latter does enter, however, through the
hermitian conjugation.) On the space of linear forms, the scalar product reads
\begin{equation} \kappa_{\Bet}({\rm b}_\psi,{\rm b}_\varphi) :=
\sum_{i,j=1}^{d_\lambda} {\rm b}_\psi(v_i\raisebox{.07em}{$\scriptstyle\otimes$} w_j)^*_{} \cdot {\rm b}_\varphi(v_i\raisebox{.07em}{$\scriptstyle\otimes$} w_j)
\,, \end{equation}
where $\{v_i\}$ and $\{w_j\}$ are orthonormal bases of $\V_{\!\psu}$ and $\V_{\!\psu^+}$
with respect to the scalar products $\kappa_{{\hat\psi}}$ and $\kappa_{{\hat\psi}^+}$, respectively.
The non-degenerate form $\beta_\circ$ defined in \erf n provides us with an isomorphism
\begin{equation} y \,\mapsto\, \beta_\circ \circ (y \,\ot\, \mbox{\sl id}) \Labl35
between ${\rm End}(\V_{\!\psu})$ and the space of linear forms on $\V_{\!\psu}\,{\otimes}\,\V_{\!\psu^+}$.
We would like to check that this isomorphism is a homothety, i.e.\ that it
preserves
angles. With the orthonormal bases introduced above we need to show that
\begin{equation} \sum_{i,j=1}^{d_\lambda} \beta_\circ(yv_i\,\ot\, w_j)^*_{} \beta_\circ(y'v_i\,\ot\, w_j)
= \xi\, {\rm tr}\, y^\dagger y' \Labl36
for some non-zero number $\xi\,{\equiv}\,\xi_{[\lambdab,{\hat\psi}]}\,{\in}\,{\mathbb C}$, implying
in particular that
\begin{equation} \langle {\rm b}_{\varphi} | {\rm b}_\psi \rangle
\equiv \kappa_{\Bet}({\rm b}_\psi,{\rm b}_\varphi)
= \xi\, {\rm tr}\, \clo_\psi^\dagger \clo_\varphi
= \xi\, \delta_{\psi,\varphi} \,. \end{equation}
In components with respect to the two chosen orthonormal bases, the relation
\Erf36 amounts to $\sum_k (\beta_\circ)_{ki}^*(\beta_\circ)_{kj}^{}\,{=}\,\xi\delta_{ij}$, while
without reference to the basis of $\V_{\!\psu}$ it means
that for all $v,v'\,{\in}\,\V_{\!\psu}$ one has
\begin{equation} \sum_{j=1}^{d_\lambda} \beta_\circ(v\,\ot\, w_j)^*_{}\, \beta_\circ(v'\,\ot\, w_j) =
\xi\, \kappa_{{\hat\psi}}(v,v') \,, \Labl y1
where the sum runs over an arbitrary orthonormal basis $\{w_j\}$ of $\V_{\!\psu^+}$.
The validity of this relation can be established by showing that
$\bar\kappa_{\bar\lambda}(p,p')\sum_j\beta_\circ(v\raisebox{.07em}{$\scriptstyle\otimes$} w_j)^*_{}\beta_\circ(v'\raisebox{.07em}{$\scriptstyle\otimes$} w_j)$
provides a non-degenerate and invariant scalar product on ${\cal H}_\lambda$.
This is indeed possible; the details are presented in appendix \ref{s.c}.
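In components, relation \Erf y1 states that the matrix of $\beta_\circ$ with respect to orthonormal bases is proportional to a unitary matrix. The following sketch builds an arbitrary such matrix (chosen purely for illustration, under the assumption just stated) and confirms the summed relation numerically:

```python
# Sketch of the homothety property (y1): assume that in orthonormal bases the
# matrix B of beta_circ satisfies B^dagger-type orthogonality, i.e. is c times
# a unitary.  Then sum_j beta(v, w_j)^* beta(v', w_j) = xi <v, v'> with xi = |c|^2.
import numpy as np

rng = np.random.default_rng(1)
d = 3

q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
c = 1.7 - 0.4j
B = c * q                                       # beta_circ, proportional to a unitary

def beta(v, w):                                 # toy convention: beta(v (x) w) = v^T B w
    return v @ B @ w

v  = rng.normal(size=d) + 1j * rng.normal(size=d)
vp = rng.normal(size=d) + 1j * rng.normal(size=d)

e = np.eye(d)                                   # orthonormal basis {w_j}
lhs = sum(np.conj(beta(v, e[j])) * beta(vp, e[j]) for j in range(d))
xi = abs(c) ** 2
print(np.isclose(lhs, xi * np.conj(v) @ vp))    # -> True
```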
\subsection{The normalization of the boundary blocks}
We are now finally in a position to determine the value of the
over-all normalization constant $\norm{\bar\lambda}\psi$
that was left undetermined in the definition \erf{tBeta} of the boundary blocks
${\tilde\Beta}_{({\bar\lambda},\psi)}$. To this end we have to prescribe some
normalization of the modified inner product of these blocks,
much as was done in \erf{skp1} and \erf{skp2} for the ordinary
boundary blocks. As it turns out, a convenient prescription is
\begin{equation} \langle {\tilde\Beta}_{({\bar\lambda},\psi)}
|\,{\rm e}^{-(2\pi/t)(L_0\raisebox{.07em}{$\scriptscriptstyle\otimes$}{\bf1}+{\bf1}\raisebox{.07em}{$\scriptscriptstyle\otimes$} L_0-c/12)}\,| {\tilde\Beta}_{({\bar\mu},\varphi)}
\rangle
= \Frac1{(|{\cal G}|/\u\lambda)\, \Sb\lambda\Omega\!\!\raisebox{.54em}{$\phantom.$}}\, \bar{\raisebox{.15em}{$\chi$}}_{\bar\lambda}(2{\rm i}/ t)\,
\delta_{{\bar\lambda},{\bar\mu}}\, \delta_{\psi,\varphi} \,. \Labl ia
We also observe that relation \erf{oo} amounts to the statement
that the operators $\calo _\psi$ with $\psi\,{\succ}\,{\hat\psi}$ form an orthonormal
basis of the space of endomorphisms ${\rm End}(\V_{\!\psu})$. It follows that
the constants $d_\lambda^{1/2}(\norm{\bar\lambda}\psi)^{-1}$ are precisely
the constants of proportionality between
the scalar product on ${\rm End}(\V_{\!\psu})$ and on the space of linear
forms that appear in the relation \Erf36. This implies in particular that
$\norm{\bar\lambda}\psi$ actually depends only on the ${\cal U}_\lambda$-character
${\hat\psi}$ and not on the particular $\psi$ that extends it to a
character of ${\cal S}_\lambda$.
To proceed, we combine formula \Erf ia with the decomposition
\Erf BJ of the ordinary boundary blocks ${\rm B}_\lambda$ of the
${\mathfrak A}$-theory. We then find
\begin{equation} \begin{array}{ll} \langle {\rm B}_\lambda
|\, {\rm e}^{-(2\pi/t)(L_0\raisebox{.07em}{$\scriptscriptstyle\otimes$}{\bf1}+{\bf1}\raisebox{.07em}{$\scriptscriptstyle\otimes$} L_0-c/12)} \,|
{\rm B}_\mu \rangle \!\!
&= \displaystyle \Sumpsipsu\lambda \Sumphiphu\mu \sum_{\J\in\Gs/\cals_\lambda} \sum_{\J'\in\Gs/\cals_\mu}
(\norm{{\rm J}{\bar\lambda}}{\hat\psi}^*)^{-1}_{} (\norm{{\rm J}'{\bar\mu}}{\hat\varphi}^{})^{-1}_{}\,
\\{}\\[-1.32em] & \hsp{4.9}
\langle {\tilde\Beta}_{({\rm J}{\bar\lambda},\psi)}
|\, {\rm e}^{-(2\pi/t)(L_0\raisebox{.07em}{$\scriptscriptstyle\otimes$}{\bf1}+{\bf1}\raisebox{.07em}{$\scriptscriptstyle\otimes$} L_0-c/12)} \,|
{\tilde\Beta}_{({\rm J}'{\bar\mu},\varphi)} \rangle
\\{}\\[-.6em]
&= \delta_{{\hat\psi},{\hat\varphi}}^{}\,\delta_{{\bar\lambda},{\bar\mu}}^{}\,
{\displaystyle \Sumpsipsu\lambda \sum_{\J\in\Gs/\cals_\lambda}} |\norm{{\rm J}{\bar\lambda}}{\hat\psi}|^{-2}_{}\,
\Frac1{(|{\cal G}|/\u\lambda)\, \bar S_{{\rm J}{\bar\lambda},{\rp\vac}}\!\!\raisebox{.54em}{$\phantom.$}}\,
\bar{\raisebox{.15em}{$\chi$}}_{{\rm J}{\bar\lambda}}(\frac{2{\rm i}}t)
\\{}\\[-.8em]
&= \delta_{{\hat\psi},{\hat\varphi}}^{}\,\delta_{{\bar\lambda},{\bar\mu}}^{}\, d_\lambda^2\,
\Frac1{(|{\cal G}|/\u\lambda)\, \Sb\lambda\Omega \!\!\raisebox{.54em}{$\phantom.$}}\,
{\displaystyle\sum_{\J\in\Gs/\cals_\lambda}} |\norm{{\rm J}{\bar\lambda}}{\hat\psi}|^{-2}_{}\,
\bar{\raisebox{.15em}{$\chi$}}_{{\rm J}{\bar\lambda}}(\frac{2{\rm i}}t)
\\{}\\[-.8em]
&= \delta_{\lambda,\mu}^{}\, d_\lambda\,
\Frac1{(|{\cal G}|/\u\lambda)\, \Sb\lambda\Omega \!\!\raisebox{.54em}{$\phantom.$}}\,
|\norm{\bar\lambda}{\hat\psi}|^{-2}_{}\, \raisebox{.15em}{$\chi$}_\lambda(\frac{2{\rm i}}t)
\,. \end{array} \end{equation}
Here in the last step we have used the information that according to
formula \erf{skp1} the result must
be proportional to the ${\mathfrak A}$-character $\raisebox{.15em}{$\chi$}_\lambda$, so that the relation
\erf X between the ${\mathfrak A}$-characters and those of the ${\bar\cala}$-theory tells
us in particular that the normalizations $\norm{{\rm J}{\bar\lambda}}{\hat\psi}$
in fact do not depend on ${\rm J}$, and hence only on $\lambda\,{=}\,{[\lambdab,{\hat\psi}]}$.
Moreover, by inspection of formula \erf S for the modular matrix $S$ of the
${\mathfrak A}$-theory we have
\begin{equation} S_{{[\lambdab,{\hat\psi}]},\Omega}
= \Frac{|{\cal G}|}{\sqrt{\s\lambda\u\lambda}}\, \Sb\lambda\Omega \,. \Labl23
Thus the normalization condition \erf{skp1} also allows us to
determine the explicit value of the constants $\norm{\bar\lambda}{\hat\psi}$, namely
$|\norm{\bar\lambda}{\hat\psi}|^{-2}d_\lambda\u\lambda/|{\cal G}| \,{=}\, \sqrt{\s\lambda\u\lambda}
/|{\cal G}|$, and hence simply\,%
\futnote{We admit that our conventions were chosen with some hindsight.}
\begin{equation} |\norm{\bar\lambda}{\hat\psi}| = 1 \,. \end{equation}
Note that we determine these constants only up to a phase. Manifestly, we
cannot do better, because the relations \erf{skp1} and \erf{skp2} determine
the boundary blocks also only up to a phase.
To conclude this section, we summarize our results about the natural
basis of boundary blocks for symmetry breaking boundary conditions. For
every primary $\lambda\,{=}\,{[\lambdab,{\hat\psi}]}$ of the ${\mathfrak A}$-theory we have
$(|{\cal G}|/|{\cal S}_\lambda|)\,{\cdot}\,d_\lambda^2$ basis elements
${\tilde\Beta}_{({\bar\mu},\psi)}$ which
are labelled by those ${\bar\cala}$-primaries ${\bar\mu}$ that are on the ${\cal G}$-orbit
of ${\bar\lambda}$ and by the ${\cal S}_\lambda$-characters $\psi$ that restrict to
${\hat\psi}$. These boundary blocks obey the normalization condition \Erf ia.
\sect{The classifying algebra} \label{s.5}
In this section we turn to the level of {\em full\/} conformal field theory. We explain how
the representation theory of a classifying algebra allows us to determine the
boundary conditions for a given conformal field theory, and we explicitly construct the
classifying algebra for boundary conditions that preserve a prescribed
subalgebra\ ${\bar\cala}$ of the bulk symmetries.
\subsection{Boundary conditions and reflection coefficients}\label{s.51}
Because of factorization, a boundary condition\ should essentially be characterized by a
consistent collection of one-point correlation functions for all bulk fields
$\pho\lambda$ on the disk \cite{card9,cale,lewe3,prss3}. Thus in order to
classify the boundary conditions, one needs to find all consistent one-point correlation functions
$\langle\pho\lambda\rangle$ for those fields. As explained in \cite{fuSc6},
the correlation functions on a surface with boundaries are linear combinations
of blocks on its Schottky cover; for the disk
this oriented cover has the topology of the sphere, so that the relevant
chiral blocks are those studied in section \ref{s.4}. Our task is
now to determine the coefficients that give the correct physical correlators.
For a more detailed discussion it is convenient to use the language of vertex
operators and operator products.
For every vector $v\,\ot\,\tilde v\,{\in}\, {\cal H}_\lambda^{}\,{\otimes}\,{\cal H}_{\tilde\lambda}$
we have a vertex operator $\pho\lambda(v\raisebox{.07em}{$\scriptstyle\otimes$}\tilde v;z)$. Such vertex operators
are suitable linear combinations of pairs of chiral vertex operators, the
correlators of which are nothing but the boundary blocks discussed above.
In view of the description \Erf74 of the boundary blocks and their precise
definition \erf{tBeta}, we are thus looking
for coefficients $\xi_{{\bar\mu},\psi}$ such that the value of the one-point
correlator for $\pho\lambda(v\raisebox{.07em}{$\scriptstyle\otimes$}\tilde v;z)$ on the disk at $z\eq0$ is
\begin{equation} \langle \pho\lambda(v\raisebox{.07em}{$\scriptstyle\otimes$}\tilde v;z{=}0) \rangle
= \Sumpsipsu\lambda \sum_{{\rm J}\in{\cal G}/{\cal S}_\lambda} \x{{\rm J}{\bar\lambda},\psi}\,
{\tilde\Beta}_{({\rm J}{\bar\lambda},\psi)} (v\raisebox{.07em}{$\scriptstyle\otimes$}\tilde v) \,. \labl{oben}
Moreover, it follows from the results at the chiral level that this correlator
can be non-vanishing only for $\tilde\lambda\,{=}\,{\lambda^{\!+}_{\phantom i}}$, which we therefore
assume from now on.
Note, however, that the index structure of the vertex operator in formula
\erf{oben} is tailored to the case of a closed orientable surface, where the
bulk fields $\pho\lambda$ are the only fields present. In contrast, for
surfaces with boundaries, where there are also boundary fields $\Psi(x)$,
and allowing for boundary conditions that break part of the bulk symmetries,
the index structure can actually be more complicated. Accordingly
we have to be careful when interpreting formula \erf{oben}.
What we have to implement correctly is the fact that, while chiral vertex
operators can definitely be extracted from the three-point chiral blocks on
${\mathbb P}^1$, their concrete form does depend on which chiral symmetries are
preserved. In the situation of interest to us we are not allowed to employ all
symmetries of the bulk, but rather we must take the three-point blocks of
the orbifold theory with symmetry ${\bar\cala}$ for extracting the chiral vertex
operators. In other words, we must
take into account that states in different ${\bar\cala}$-submodules of
${\cal H}_\lambda^{}\,{=}\,\bigoplus_{{\rm J}\in{\cal G}/{\cal S}_\lambda}\V_{\!\psu}\,{\otimes}\,
\bar{\cal H}_{({\rm J}{\bar\lambda},{\hat\psi})}$ can cause different excitations on the boundary
and can thus be reflected differently. Note that, unlike in formula \erf{deco},
here we have attached the label ${\hat\psi}$ also to the ${\bar\cala}$-modules $\bar{\cal H}$,
so as to indicate that the reflection at the boundary may also depend on the
particular ${\mathfrak A}$-module into which a given ${\bar\cala}$-module is embedded.
In addition, we have to account for the dimensionality of the projective
${\cal S}_\lambda$-module $\V_{\!\psu}$, which amounts to using characters
$\psi\,{\in}\,{\cal S}_\lambda^*$ instead of ${\hat\psi}\,{\in}\,{\cal U}_\lambda^*$.
Accordingly, when studying the behavior of bulk fields close to the boundary,
for vectors $v\,\ot\,\tilde v\,{\in}\,(\V_{\!\psu}{\otimes}\bar{\cal H}_{({\bar\lambda},{\hat\psi})})\,{\otimes}\,
({\cal V}_{{\hat\psi}^+}{\otimes}\bar{\cal H}_{({\lambdab^{\!+}_{\phantom i}},{\hat\psi}^+)})\,{\subset}\,
{\cal H}_\lambda^{}\,{\otimes}\,{\cal H}_{\lambda^{\!+}_{\phantom i}}$
we must work with vertex operators that are labelled as
\begin{equation} \phi_{({\bar\lambda},\psi),({\lambdab^{\!+}_{\phantom i}},\psi^+)}(v\raisebox{.07em}{$\scriptstyle\otimes$}\tilde v;z) \,. \Labl52
As for the correlation functions, this means that in place of \erf{oben} we are interested
in the individual summands
\begin{equation} \langle \phi_{({\bar\lambda},\psi),({\lambdab^{\!+}_{\phantom i}},\psi^+)}(v\raisebox{.07em}{$\scriptstyle\otimes$}\tilde v;z{=}0)
\rangle = \x{{\bar\lambda},\psi}\,
{\tilde\Beta}_{({\bar\lambda},\psi)} (v\raisebox{.07em}{$\scriptstyle\otimes$}\tilde v) \,. \labl{unten}
In order to determine the coefficients $\x{{\bar\mu},\psi}$ in this relation,
we study the operator product expansion describing the excitation that a
bulk field causes on the boundary when it approaches the boundary; this
operator product reads
\begin{equation} \phi_{({\bar\lambda},\psi),({\lambdab^{\!+}_{\phantom i}},\psi^+)}(r{\rm e}^{{\rm i}\sigma})
= \sum_{\bar\mu} (1{-}r^2)^{-2\Delta_{\bar\lambda}+\Delta_{\bar\mu}}_{}\,
\Rc a{({\bar\lambda},\psi)}{\bar\mu}\,\Psi^{aa}_{\bar\mu}({\rm e}^{{\rm i}\sigma})
+ \mbox{descendants} \quad\; {\rm for}\;\ r\,{\to}\,1 \,. \labl{oben2}
Comparing this expansion with relation \erf{unten} we learn that
\begin{equation} \x{{\bar\mu},\psi} = \Rc a{({\bar\mu},\psi)}{\rp\vac}\, \langle\Psi^{aa}_{\rp\vac}\rangle
\,. \Labl xR
In words, up to a normalization given by the (constant) one-point correlator
of a boundary vacuum field $\Psi^{aa}_{\rp\vac}$, the coefficients $\x{{\bar\mu},\psi}$
are equal to the {\em reflection coefficients\/} $\Rc a{({\bar\mu},\psi)}{\rp\vac}$.
We pause to comment on the index structure of the boundary fields
$\Psi^{ab}_{\bar\mu}(x)$. The underlying three-point blocks for the operator product
\erf{oben2} are those of the orbifold theory, because boundary fields are
involved and the latter only need to preserve the symmetries in ${\bar\cala}$.
As a consequence, the boundary field carries a chiral label ${\bar\mu}$ of
the orbifold theory. In addition, there are two labels $a,b$ which account
for the fact that the insertion of a boundary field can change the boundary condition.
(And finally, in order to account for annulus coefficients that are bigger
than one -- which can appear for ${\bar\mu}\,{\ne}\,{\rp\vac}$ -- one must allow for an
additional degeneracy label, which we suppress.)
The presence of these boundary labels on the right hand side\ of \erf{oben2} tells us
that, in contrast to conformal field theory\ on surfaces that are closed and orientable,
on surfaces with boundaries the locality and factorization constraints for
the correlation functions do not, in general, possess a unique solution.
Rather, there are several consistent collections of reflection coefficients
$\Rc a{({\bar\lambda},\psi)}{\rp\vac}$, and as a consequence there are several solutions
\begin{equation} \langle\phi_{({\bar\lambda},\psi),({\lambdab^{\!+}_{\phantom i}},\psi^+)}\rangle
= \langle\phi_{({\bar\lambda},\psi),({\lambdab^{\!+}_{\phantom i}},\psi^+)}{\rangle}_a \end{equation}
which are indexed by the boundary conditions.
Note that up to this point it was not necessary to specify the values that the
boundary label $a$ can take.
To determine the possible boundary conditions, we analyze the factorization of
bulk-bulk-boundary correlators in much the same manner as
\cite{lewe3,sasT2,prss3} for boundary conditions that preserve all of ${\mathfrak A}$. This is
possible because, by the requirement that ${\bar\cala}$ is a consistent chiral
algebra, the orbifold chiral blocks obey the usual factorization rules.
Concretely, we consider two different factorization limits of the disk correlation function\
\begin{equation} \langle \phi_{({\bar\lambda}_1,\psi_1),({\bar\lambda}_1^+,\psi_1^+)}(z_1)\,
\phi_{({\bar\lambda}_2,\psi_2),({\bar\lambda}_2^+,\psi_2^+)}(z_2) {\rangle}_a \end{equation}
involving two bulk fields. On one hand we can use the operator product between
bulk fields (this is an operator product respecting the full ${\mathfrak A}$-symmetry,
although for fields that are only ${\bar\cala}$-primaries but may be
${\mathfrak A}$-descendants)
and afterwards the operator product \erf{oben2}, so as to express the correlator
in terms of bulk operator product coefficients and a single reflection
coefficient $\Rc a{({\bar\lambda}_3,\psi_3)}{\rp\vac}$. On the other hand, applying the
expansion \erf{oben2} twice expresses the correlator in terms of two
bulk-boundary operator products, i.e.\ two reflection coefficients. The latter
are to be understood as prefactors of a four-point block on the projective
line ${\mathbb P}^1$, and since boundary insertions are involved, these are
four-point blocks of the orbifold theory. The two different factorizations
correspond to such blocks in different channels, so that for their comparison
one must relate them through fusing matrices or, to be precise, through fusing
matrices of the ${\bar\cala}$-theory. Such matrices exist
because by assumption the orbifold chiral blocks come with a Knizhnik\hy Zamolodchikov connection.
Taking everything together we arrive at a relation of the form
\begin{equation} \Rc a{({\bar\lambda}_1,\psi_1)}{\rp\vac}\, \Rc a{({\bar\lambda}_2,\psi_2)}{\rp\vac}
= \sum_{{\bar\lambda}_3,\psi_3}
\tN{({\bar\lambda}_1,\psi_1)}{({\bar\lambda}_2,\psi_2)}{({\bar\lambda}_3,\psi_3)}\,
\Rc a{({\bar\lambda}_3,\psi_3)}{\rp\vac} \,, \labl{res1}
where the numbers $\Tilde{\rm N}$ are combinations of bulk operator product coefficients
and fusing matrices of the ${\mathfrak A}$- {\em and\/} of the ${\bar\cala}$-theory. Notice
that the fact that quantities of both the original
and the orbifold theory appear is in accordance with the index structure
of the vertex operators \Erf52, in which there appear individual fields
${\bar\lambda}$ of the orbifold theory rather than ${\cal G}$-orbits of such fields,
but also a character $\psi$ that keeps track of the information
as submodule of which ${\mathfrak A}$-module a given ${\bar\cala}$-module occurs.
None of the constituents of the numbers $\Tilde{\rm N}$ depends on the boundary label
$a$. The result \erf{res1} can therefore be interpreted as follows. The
conformally invariant boundary conditions that preserve ${\bar\cala}$ correspond to
one-dimen\-sional\ representations of some algebra, which we call the {\em classifying
algebra\/} and denote by \mbox{$\calc(\calap)$}. It is expected \cite{fuSc6}
that the algebra\ \mbox{$\calc(\calap)$}\ shares most properties of fusion algebras, i.e.\ it should
be a commutative associative semisimple algebra, so that in particular all
its irreducible representations are one-dimen\-sional. These properties imply
the existence of a diagonalizing matrix $\Tilde S$ through which the structure
constants of \mbox{$\calc(\calap)$}\ are expressible via an analogue of the Verlinde formula.
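Schematically, such a Verlinde-like formula for the structure constants would
take the form
\begin{equation} \tN{({\bar\lambda}_1,\psi_1)}{({\bar\lambda}_2,\psi_2)}{({\bar\lambda}_3,\psi_3)}
= \sum_{a} \Frac{\Tilde S_{({\bar\lambda}_1,\psi_1),a}^{}\, \Tilde S_{({\bar\lambda}_2,\psi_2),a}^{}\,
(\Tilde S^{-1})^{a,({\bar\lambda}_3,\psi_3)}_{}}{\Tilde S_{({\rp\vac},\psi_\Omega),a}^{}} \,, \end{equation}
where $a$ runs over the (yet to be determined) boundary labels and
$({\rp\vac},\psi_\Omega)$, with $\psi_\Omega$ the trivial character, denotes
the basis element associated to the vacuum. We display this formula here only
as the expected general pattern.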
It is worth stressing that the two labels of the diagonalizing matrix $\Tilde S$
are on a rather different footing; the row index labels the basis of the
classifying algebra \mbox{$\calc(\calap)$}\ which is given by the allowed boundary blocks, while
the column index labels the irreducible representations of \mbox{$\calc(\calap)$}. In the case
of boundary conditions that preserve the full bulk symmetry (and where
the pairing for the labels of the bulk fields is given by charge conjugation,
i.e.\ $\tilde\lambda\,{=}\,{\lambda^{\!+}_{\phantom i}}$), it has already been argued long ago
\cite{card9} that $\Tilde S$ is the modular matrix that implements the modular
transformation $\tau\,{\mapsto}\,{-}1/\tau$ on the characters. In this case
the classifying algebra\ is just the fusion rule algebra\ and the reflection coefficients are
the generalized quantum dimensions; in particular there is a natural
correspondence between the two types of labels.
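In that familiar case the reflection coefficients thus take, up to the
normalization by the one-point function of the boundary vacuum field and
in a simplified notation in which the boundary vacuum label is suppressed,
the well-known form
\begin{equation} R^{\,a}_{\lambda} = \Frac{S_{\lambda,a}}{S_{\Omega,a}} \,, \end{equation}
with both $\lambda$ and $a$ ranging over the primary fields of the
${\mathfrak A}$-theory \cite{card9}.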
In the general case, this natural correspondence does not persist. But it has
been seen that even in more general situations (see \cite{prss3,fuSc5}
for an example) nevertheless the two sets of labels are still related by
modular transformations. Moreover, it can be expected that the boundary
conditions are labelled by {\em orbits\/} of fields rather than individual
fields, as in \cite{fuSc5}. That this is indeed the case can be seen as follows.
As for the labels ${\bar\lambda}$ of the boundary blocks, only those occur
which appear in the decomposition of some ${\mathfrak A}$-module, which means
that they satisfy $Q_{\rm J}(\lambda)\eq0$ for every ${\rm J}\,{\in}\,{\cal G}$. In orbifold
terminology, we are only dealing with the untwisted sector of the orbifold
or, in other words, along the `space' direction of the
torus only the trivial twist by the identity occurs. This implies that after
a modular S-transformation, only the identity appears as a twist
in the `time' direction of the torus, which in turn tells us that we must not
perform the usual orbifold projection in the twisted sector.
Translating this back into simple current terminology, we arrive at the
statement that the boundary conditions must not be labelled by
individual primary fields of the ${\bar\cala}$-theory, but rather by
${\cal G}$-{\em orbits\/} of ${\bar\cala}$-primaries.
On the other hand, in the `time' direction we start with arbitrary
twists, since the labelling is by individual primary fields; it follows that
after the S-transformation arbitrary twists
in the `space' direction occur in the orbifold. Thus in the labelling of the
boundary conditions {\em all\/} ${\cal G}$-orbits appear, not just those with vanishing monodromy
charges, i.e.\ not just the ones in the untwisted sector.
Moreover, by comparison with the S-transformation of the ${\mathfrak A}$-characters
one is led to expect that these orbits are to be combined with the characters
of the relevant untwisted stabilizer; as we will see below, this provides us indeed with a
consistent ansatz for the classifying algebra.
\subsection{The matrix $\Tilde S$}
As advocated above, the boundary blocks are in one-to-one correspondence
with the elements of a basis of the classifying algebra\ \mbox{$\calc(\calap)$}, while the ${\bar\cala}$-preserving
boundary conditions are in one-to-one correspondence with the (isomorphism
classes of) one-dimen\-sional\ irreducible representations of \mbox{$\calc(\calap)$}. Thus
a basis of \mbox{$\calc(\calap)$}\ is labelled by pairs $({\bar\lambda},\psi_\lambda)$
consisting of a primary label ${\bar\lambda}$ of the ${\bar\cala}$-theory in the
untwisted sector (i.e.\ $Q_{\rm J}(\lambda)\eq0$ for all ${\rm J}\,{\in}\,{\cal G}$) and a
character $\psi_\lambda\,{\in}\,{\cal S}^*_\lambda$ of the stabilizer
of ${\bar\lambda}$, while the arguments at the end of the previous subsection
tell us that the one-dimen\-sional\ irreducible \mbox{$\calc(\calap)$}-representations are labelled by
${\cal G}$-orbits ${[\rhob,\psu_\rho]}$ of pairs consisting of an arbitrary primary label ${\bar\rho}$
of the ${\bar\cala}$-theory and a character ${\hat\psi}_\rho$ of the untwisted stabilizer\ of ${\bar\rho}$.
According to our general expectations the
classifying algebra\ \mbox{$\calc(\calap)$}\ should possess most properties of fusion algebras,
in particular there should exist a diagonalizing square matrix $\Tilde S$.
Note that according to the previous remarks the row and column labels of
this matrix are on a rather different footing, so that at this point it is
still far from obvious that the two sets of labels indeed have equal size.
Our strategy is now to start by making an educated ansatz for the matrix $\Tilde S$
and then develop the classifying algebra\ and its representation\ theory along lines analogous to
the way one may study fusion algebras by starting from the modular S-matrix $S$.
We stress that, unlike the considerations in the previous section, here we
are indeed making an {\em ansatz\/}, and it will be necessary to support this
ansatz by performing various consistency checks, the most basic one being that
$\Tilde S$ is manifestly a square matrix.
But once one accepts this ansatz, the arguments presented
in the previous subsection allow us to learn more about how the
fusing matrices in an integer spin simple current extension are related
to the fusing matrices of the original theory.
As a matter of fact, by combining
the considerations that relate symmetry breaking, orbifolds and integer spin
simple current extensions with the results about simple current extensions
obtained in \cite{fusS6}, it follows that, up to normalizations, there is
a natural candidate for the matrix $\Tilde S$. It reads
\begin{equation} \Tilde S_{({\bar\lambda},\psi_\lambda),{[\rhob,\psu_\rho]}}
:= \Frac{|{\cal G}|}{\sqrt{\s\lambda\u\lambda\s\rho\u\rho}}
\sum_{{\rm J}\in{\cal S}_\lambda\cap{\cal U}_\rho} \psi^{}_\lambda({\rm J})\,
{\hat\psi}_\rho({\rm J})^*\, S^{\rm J}_{{\bar\lambda},{\bar\rho}} \,. \Labl tS
Here the matrices $S^{\rm J}$ are those which appear \cite{fusS6} in the modular
S-matrix of the simple current extension; upon a canonical\,%
\futnote{The choice of canonical basis still leaves some residual
freedom in the normalization, which remains to be clarified.}
normalization of the one-point chiral block s with insertion ${\rm J}$ on the torus, the
matrix $S^\J$ also represents the modular S-transformation on these blocks
\cite{bant6}. (For the convenience of readers who are not familiar with the
pertinent results of \cite{fusS6}, we summarize them in appendix \ref{s.a}.
For a brief account, see also section 3 of \cite{fuSc10}.)
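An elementary special case of the ansatz \Erf tS is the one of trivial
orbifold group ${\cal G}\,{=}\,\{{\bf1}\}$: then all stabilizers are trivial,
every orbit consists of a single field, and \Erf tS reduces to
\begin{equation} \Tilde S_{({\bar\lambda},1),[{\bar\rho},1]} = S^{{\bf1}}_{{\bar\lambda},{\bar\rho}}
= S_{{\bar\lambda},{\bar\rho}} \,, \end{equation}
i.e.\ to the ordinary modular S-matrix, in agreement with the result of
\cite{card9} for boundary conditions that preserve all bulk symmetries.
(That $S^{\rm J}$ for the identity current is the ordinary S-matrix is
implicit in formula \erf S.)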
Note that at the chiral level, where one deals with conformal field theory\ on a complex curve,
there is no direct influence of boundaries \cite{fuSc6}. The chiral conformal field theory\
structures that are related to the matrices $S^{\rm J}$ which enter the discussion
here are thus logically independent of any boundary data; they have passed
independent tests \cite{fusS6,bant6} in the context of closed conformal field theory. In the
present situation, where the ${\bar\cala}$-theory can be regarded as an orbifold
of the ${\mathfrak A}$-theory, as compared to \cite{fusS6} there is actually even
further evidence for the existence of the matrices $S^{\rm J}$. Namely, under a
finiteness assumption on the codimension of a certain subspace of the vacuum
module, it has been proven in \cite{zhu3} (see also \cite{dolm6,miya6}) that
one can associate a modular S-transformation
matrix to the chiral blocks on the torus for
arbitrary descendants of the vacuum. In our case, we are precisely concerned
with one-point blocks for descendants of the vacuum of the ${\mathfrak A}$-theory (which
are {\em not\/} descendants of the vacuum in the ${\bar\cala}$-theory, though).
In connection with the reasoning of \cite{fusS6} one may say
that if an extension by integer spin simple currents is possible at all, then
such matrices $S^{\rm J}$ must necessarily exist in order to
comply with the general result of \cite{zhu3}.
As a first consistency check, we consider the special case where
$Q(\rho)\,{\equiv}\,0$. These boundary conditions correspond precisely to
orbits that furnish primary fields in the extended theory. On the
other hand, the boundary conditions that respect the full bulk symmetry
should also be recovered from our ansatz, since a fortiori they preserve
the subalgebra ${\bar\cala}$. According to \cite{card9}, these boundary
conditions correspond to primary fields of the ${\mathfrak A}$-theory.
Indeed, the following consideration shows that for $Q(\rho)\,{\equiv}\,0$ we
recover the modular S-matrix of the ${\mathfrak A}$-theory. The latter can
be expressed through the matrices $S^{\rm J}$ as in \erf S, i.e.
\begin{equation} S_{{[\lambdab,{\hat\psi}_\lambda]},{[\rhob,\psu_\rho]}}
= \Frac{|{\cal G}|}{\sqrt{\s\lambda\u\lambda\s\rho\u\rho}}
\sum_{{\rm J}\in{\cal U}_\lambda\cap{\cal U}_\rho} {\hat\psi}_\lambda({\rm J})\,
{\hat\psi}_\rho({\rm J})^*\, S^{\rm J}_{{\bar\lambda},{\bar\rho}} \,, \Labl5S
where both ${\bar\lambda}$ and ${\bar\rho}$ have monodromy charge zero. Because of the
latter property, we know that for every ${\rm J}\,{\in}\,{\cal S}_\lambda{\setminus}\,
{\cal U}_\lambda$ there exists at least one ${\rm K}\,{\in}\,{\cal S}_\lambda$ such that
\begin{equation} S^{\rm J}_{{\bar\lambda},{\bar\rho}} = S^{\rm J}_{{\rm K}{\bar\lambda},{\bar\rho}}
= F_\lambda({\rm K},{\rm J})\cdot 1\cdot S^{\rm J}_{{\bar\lambda},{\bar\rho}} \Labl ss
with $F_\lambda({\rm K},{\rm J})\nE1$, from which we conclude that
$S^{\rm J}_{{\bar\lambda},{\bar\rho}}\eq0$ for all ${\rm J}\,{\in}\,{\cal S}_\lambda{\setminus}\,
{\cal U}_\lambda$. It follows that for $Q_{\cal G}(\rho)\eq0$ the ${\rm J}$-summations
in the two expressions \Erf tS and \Erf5S actually extend over the
same range; moreover, we then have ${\hat\psi}_\lambda({\rm J})\,{=}\,\psi_\lambda({\rm J})$ for
all ${\rm J}$ that appear in the sum, so the two expressions indeed are equal, i.e.
\begin{equation} \Tilde S_{({\rm J}{\bar\lambda},\psi_\lambda),{[\rhob,\psu_\rho]}} = S_{{[\lambdab,{\hat\psi}_\lambda]},{[\rhob,\psu_\rho]}} \end{equation}
for all ${\rm J}\,{\in}\,{\cal G}$ and all $\psi_\lambda\,{\succ}\,{\hat\psi}_\lambda$.
\subsection{Properties of $\Tilde S$}
Let us now establish further properties of the matrix $\Tilde S$ that we defined in
\Erf tS. As a matter of fact, we first need to check that $\Tilde S$ is well-defined,
i.e.\ does not depend on the choice of representative of the ${\cal G}$-orbit
of the pair $({\bar\rho},{\hat\psi}_\rho)$. To do so, we need
the explicit form of the equivalence relation, which reads
$({\bar\rho},{\hat\psi}_\rho)\sim {\rm J}'({\bar\rho},{\hat\psi}_\rho)\,{=}\,({\rm J}'{\bar\rho},{}_{{\J}'}{\hat\psi}_\rho)$
(see formulae \Erf eq and \Erf JJ). We observe that
\begin{equation} {}_{{\J}'}{\hat\psi}_\rho({\rm J})^* S^{{\rm J}}_{{\bar\lambda},{\rm J}'{\bar\rho}}
= [F_\rho({\rm J}',{\rm J})^*\,{\hat\psi}_\rho({\rm J})]^* \cdot {\rm e}^{2\pi{\rm i} Q_{{\rm J}'}(\lambda)}_{}
\, F_\rho({\rm J}',{\rm J})^*\, S^{{\rm J}}_{{\bar\lambda},{\bar\rho}}
= {\hat\psi}_\rho({\rm J})^* S^{{\rm J}}_{{\bar\lambda},{\bar\rho}} \,, \Labl51
where we used the simple current property \Erf QF of $S^{\rm J}$ and the fact that
${\bar\lambda}$ is in the untwisted sector.
This tells us that the corresponding part of formula \Erf tS, and hence the
whole matrix $\Tilde S$, is indeed independent of the choice of representative.
We may define a transformation analogous to this equivalence relation
also when dealing with characters of full stabilizers, i.e.\ also for the row
index of $\Tilde S$, namely
\begin{equation} {\rm J}'\,({\bar\lambda},\psi_\lambda):= ({\rm J}'{\bar\lambda},{}_{{\J}'}\psi_\lambda) \Labl eQ
for all ${\rm J}'\,{\in}\,{\cal G}$, with
\begin{equation} {}_{{\J}'}\psi_\lambda({\rm J}):= F_\lambda({\rm J}',{\rm J})^*\,\psi_\lambda({\rm J})
\,. \Labl jJ
By the bi-homo\-mor\-phism\ property of $F$, ${}_{{\J}'}\psi_\lambda$ is again a character of
${\cal S}_\lambda$. It then follows that $\Tilde S$ satisfies the standard
simple current relation, too, i.e.\ we have
\begin{equation} \Tilde S_{{\rm J}'({\bar\lambda},\psi_\lambda),{[\rhob,\psu_\rho]}}
= {\rm e}^{2\pi{\rm i} Q_{{\rm J}'}(\rho)}_{} \cdot
\Tilde S_{({\bar\lambda},\psi_\lambda),{[\rhob,\psu_\rho]}} \Labl SQ
for every ${\rm J}'\,{\in}\,{\cal G}$. This holds because in each term in the ${\rm J}$-summation
on the right hand side\ of \Erf tS the factor of $F_\lambda({\rm J}',{\rm J})^*$ that comes from
the action of ${\rm J}'$ on $\psi_\lambda$ cancels against the factor
$F_\lambda({\rm J}',{\rm J})$ that accompanies the phase ${\rm e}^{2\pi{\rm i} Q_{{\rm J}'}(\rho)}$
in the simple current relation for $S^{\rm J}$.
Next we note that the matrix $\Tilde S$ is (in general) not symmetric; in fact it
does not even make sense to talk about symmetry, because the label sets for the
rows and columns are different. It does, however,
make sense to talk about invertibility and unitarity. A direct calculation
shows that $\Tilde S$ is invertible, the inverse being given by
\begin{equation} (\Tilde S^{-1})^{{[\rhob,\psu]},({\bar\lambda},\varphi)}_{}
= \Frac{\u\lambda}{|{\cal G}|}\, \Tilde S^*_{({\bar\lambda},\varphi),{[\rhob,\psu]}} \,. \Labl3U
That \Erf3U is a right-inverse is seen by
\begin{equation} \hsp{-.99} \begin{array}{ll}
\displaystyle \sum_{[\rhob]} \sum_{{\hat\psi}_\rho\in{\cal U}_\rho^*}
\Tilde S_{({\bar\lambda},\psi_\lambda),{[\rhob,\psu_\rho]}}\,
\mbox{\large(} \Tilde S_{({\bar\mu},\psi_\mu),{[\rhob,\psu_\rho]}}\mbox{\large)}_{}^* \!\!\!
&= \Frac{|{\cal G}|^2}{\sqrt{\s\lambda\u\lambda\s\mu\u\mu}}
\displaystyle\sum_{[\rhob]} \sum_{{\rm J}\in{\cal S}_\lambda\cap{\cal S}_\mu\cap{\cal U}_\rho}
\!\Frac1{\s\rho}\, \psi_\lambda({\rm J}) \psi_\mu({\rm J})^* \,
S^{{\rm J}}_{{\bar\lambda},{\bar\rho}}\, (S^{{\rm J}}_{{\bar\mu},{\bar\rho}})^*
\\{}\\[-.8em]
&= \Frac{|{\cal G}|}{\sqrt{\s\lambda\u\lambda\s\mu\u\mu}}
\displaystyle\sum_{{\rm J}\in{\cal S}_\lambda\cap{\cal S}_\mu} \psi_\lambda({\rm J}) \psi_\mu({\rm J})^*
\sum_{\bar\rho} S^{{\rm J}}_{{\bar\lambda},{\bar\rho}}\, (S^{{\rm J}}_{{\bar\mu},{\bar\rho}})^*
\\{}\\[-.8em]
&= \Frac{|{\cal G}|}{\s\lambda\u\lambda}\, \delta_{{\bar\lambda},{\bar\mu}}^{}
\displaystyle\sum_{{\rm J}\in{\cal S}_\lambda} \psi_\lambda({\rm J}) \psi_\mu({\rm J})^*
= \Frac{|{\cal G}|}{\u\lambda}\cdot \delta_{({\bar\lambda},\psi_\lambda),
({\bar\mu},\psi_\mu)}^{}
\,. \end{array} \Labl1U
Here in the first step we inserted the definition \Erf tS and performed the
${\hat\psi}_\rho$-summation,
while in the second step we replaced
$\sum_{[\rhob]}$ by $\sum_{\bar\rho}\,(\s\rho/|{\cal G}|)$, which is possible owing
to the fact that $Q_{\rm J}(\lambda)\eq0\,{=}\, Q_{\rm J}(\mu)$ and that ${\rm J}\,{\in}\,{\cal U}_\rho$,
and furthermore we dropped taking the intersection with ${\cal U}_\rho$ in the
${\rm J}$-summation. To see that this change in the summation range is allowed,\,%
\futnote{Compare also the remarks before eq.\ (C.2) in \cite{fusS6}.}
let first ${\rm J}\,{\not\in}\,{\cal S}_\rho$; then according to \Erf00 we simply have
$S^{\rm J}_{{\bar\lambda},{\bar\rho}}\,{=}\, 0\,{=}\, S^{\rm J}_{{\bar\mu},{\bar\rho}}$. Otherwise, i.e.\ when
${\rm J}\,{\in}\,{\cal S}_\rho{\setminus}\,{\cal U}_\rho$, there must exist a
${\rm J}'\,{\in}\,{\cal S}_\rho$ with $F_\rho({\rm J}',{\rm J})\ne1$, and we have
\begin{equation} S^{\rm J}_{{\bar\lambda},{\bar\rho}} = S^{\rm J}_{{\bar\lambda},{\rm J}'{\bar\rho}}
= F_\rho({\rm J}',{\rm J})^*\, {\rm e}^{2\pi{\rm i} Q_{{\rm J}'}(\lambda)}\, S^{\rm J}_{{\bar\lambda},{\bar\rho}}
= F_\rho({\rm J}',{\rm J})^*\, S^{\rm J}_{{\bar\lambda},{\bar\rho}} \,, \Labl FQ
where again we use that $Q(\lambda)\eq0$; thus in this case we have
$S^{\rm J}_{{\bar\lambda},{\bar\rho}}\eq0$ as well.
To check that the matrix \Erf3U is also a left-inverse of $\Tilde S$, we start by
calculating
\begin{equation} \begin{array}{ll}
\displaystyle \sumbo\lambda \sum_{\psi_\lambda\in{\cal S}_\lambda^*}
\Tilde S_{({\bar\lambda},\psi_\lambda),{[\rhob,\psu_\rho]}} \cdot \Frac{\u\lambda}{|{\cal G}|}\,
\Tilde S_{({\bar\lambda},\psi_\lambda),{[\sigmab,\psu_\sigma]}}^* \!\!\!
&= \Frac{|{\cal G}|}{\sqrt{\s\rho\u\rho\s\sigma\u\sigma}}
\displaystyle\sumbo\lambda \sum_{{\rm J}\in{\cal U}_\rho\cap{\cal U}_\sigma\cap{\cal S}_\lambda}\!\!
{\hat\psi}_\rho({\rm J})^*{\hat\psi}_\sigma({\rm J})\,
S^{{\rm J}}_{{\bar\lambda},{\bar\rho}}\, (S^{{\rm J}}_{{\bar\lambda},{\rp\sigma}})^*_{}
\\{}\\[-.8em]
&= \Frac{|{\cal G}|}{\sqrt{\s\rho\u\rho\s\sigma\u\sigma}}
\displaystyle\sum_{{\rm J}\in{\cal U}_\rho\cap{\cal U}_\sigma}
{\hat\psi}_\rho({\rm J})^*{\hat\psi}_\sigma({\rm J})\, \!\!\!
\sumbo\lambda S^{{\rm J}}_{{\bar\lambda},{\bar\rho}}\, (S^{{\rm J}}_{{\bar\lambda},{\rp\sigma}})^*_{}
\,. \end{array} \end{equation}
Here after first performing the $\psi_\lambda$-summation,
we dropped taking the intersection with ${\cal S}_\lambda$ in the ${\rm J}$-summation,
which is allowed for the same reason as above.\,%
\futnote{Note, however, that in the present case we would {\em not\/} be
allowed to drop the {\em untwisted\/} stabilizer ${\cal U}_\lambda$ if it were
present, because for currents in ${\cal S}_\lambda{\setminus}\,{\cal U}_\lambda$ the
above reasoning would not go through: in \Erf FQ one would now have a factor of
${\rm e}^{2\pi{\rm i} Q_{{\rm J}'}(\rho)}$, which unlike ${\rm e}^{2\pi{\rm i} Q_{{\rm J}'}(\lambda)}$
is not necessarily equal to one, since ${\bar\rho}$ can be in any twist sector.}
Next we extend the ${\bar\lambda}$-summation to all sectors by inserting a projector
and use the unitarity of $S^{\rm J}$ (see formula \Erf ip), so as to obtain
\begin{equation} \hsp{-.8} \begin{array}{ll}
\displaystyle \sumbo\lambda \sum_{\psi_\lambda\in{\cal S}_\lambda^*}
\Tilde S_{({\bar\lambda},\psi_\lambda),{[\rhob,\psu_\rho]}} \cdot \Frac{\u\lambda}{|{\cal G}|}\,
\Tilde S_{({\bar\lambda},\psi_\lambda),{[\sigmab,\psu_\sigma]}}^* \!\!
&= \Frac1{\s\rho\u\rho}\, \displaystyle\sum_{{\rm J}'\in{\cal G}} \delta^{}_{{\bar\rho},{\rm J}'{\rp\sigma}}
\sum_{{\rm J}\in{\cal U}_\rho} {\hat\psi}_\rho({\rm J})^*\, F_\sigma({\rm J}',{\rm J})^*\, {\hat\psi}_\sigma({\rm J})
\\{}\\[-1.1em]
& \hsp{-6.3}
= \Frac1{\s\rho}\, \displaystyle\sum_{{\rm J}'\in{\cal G}} \delta^{}_{{\bar\rho},{\rm J}'{\rp\sigma}}\,
\delta^{}_{{\hat\psi}_\rho,{}_{{\rm J}'}{\hat\psi}_\sigma}
= \Frac1{\s\rho}\, \displaystyle\sum_{{\rm J}'\in{\cal G}}
\delta^{}_{({\bar\rho},{\hat\psi}_\rho),{\rm J}'({\rp\sigma},{\hat\psi}_\sigma)}
= \delta^{}_{{[\rhob,\psu_\rho]},{[\sigmab,\psu_\sigma]}}
\,. \end{array} \end{equation}
(Here we also used the fact that ${\bar\rho}\,{=}\,{\rm J}'{\rp\sigma}$ already implies
${\cal U}_\rho\,{=}\, {\cal U}_\sigma$ and ${\cal S}_\rho\,{=}\,{\cal S}_\sigma$.)
The fact that $\Tilde S$ has a two-sided inverse means
in particular that $\Tilde S$ is a square matrix. This implies the sum rule
\begin{equation} \sumbo\lambda |{\cal S}_\lambda| = \sum_{[\rhob]} |{\cal U}_\rho| \,. \Labl sr
In words: the number of primary {\em fields\/} in the untwisted (i.e., charge
zero) sector, counted with their (full) stabilizer, is the same as the number
of {\em orbits\/} in all sectors, counted with their untwisted stabilizer.
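For orientation, consider the special case that ${\cal G}$ acts freely, so that
all stabilizers ${\cal S}_\lambda$ and ${\cal U}_\rho$ are trivial (a simplifying
assumption we make here purely for illustration). Then the sum rule \Erf sr
reduces to an equality of plain orbit counts:

```latex
\begin{equation}
  \#\,\{\, [{\bar\lambda}] \mid Q_{\rm J}(\lambda)=0 \ \ \forall\, {\rm J}\in{\cal G} \,\}
  \;=\; \#\,\{\, [{\bar\rho}] \,\}\,,
\end{equation}
```

i.e.\ the number of ${\cal G}$-orbits of charge-zero fields equals the total
number of ${\cal G}$-orbits in all twist sectors.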
It is also worth pointing out that $\Tilde S$ is (in general) not unitary.
Of course, we could redefine the matrix $\Tilde S$ so as to make it unitary; however,
this would spoil some other nice properties of $\Tilde S$
and hence we refrain from doing so.
\subsection{Conjugation}
Since the row and column labels of $\Tilde S$ are on different footings,
there are two distinct matrices which are candidates for conjugations,
namely
\begin{equation} C^B := \Tilde S\, \Tilde S^{\rm t} \labl tC
and
\begin{equation} C^\calb := \Tilde S^{\rm t}\,U\,\Tilde S \,; \labl cC
here the superscripts $B$ and ${\cal B}$ indicate that the entries of these two
matrices are labelled by boundary blocks and boundary states (i.e.\ boundary conditions),
respectively. The presence of the diagonal matrix $U$, defined as
\begin{equation} U_{({\bar\lambda},\psi_\lambda),({\bar\mu},\psi_\mu)}
:= \Frac{\u\lambda}{|{\cal G}|}\,\delta_{({\bar\lambda},\psi_{\lambda}),({\bar\mu},\psi_\mu)}
\,, \labl U
in \Erf cC accounts for the natural weight of the boundary blocks, cf.\
for instance formula \Erf3U.
Both matrices are manifestly symmetric. To establish further properties, we
write them in the form
\begin{equation} \begin{array}{ll}
C^B_{({\bar\lambda},\psi_{\lambda}),({\bar\mu},\psi_\mu)} \!\!\!
&= \Frac{|{\cal G}|}{\sqrt{\s\lambda\u\lambda\s\mu\u\mu}}
\displaystyle\sum_{\bar\rho} \sum_{{\rm J}\in{\cal S}_\lambda\cap{\cal S}_\mu}
\psi_\lambda({\rm J}) \psi_\mu({\rm J}^{-1}) \,
S^{\rm J}_{{\bar\lambda},{\bar\rho}}\, S^{{\rm J}^{-1}}_{{\bar\mu},{\bar\rho}}
\\{}\\[-.8em]
&= \Frac{|{\cal G}|}{\sqrt{\s\lambda\u\lambda\s\mu\u\mu}}
\displaystyle\sum_{{\rm J}\in{\cal S}_\lambda\cap{\cal S}_\mu}
\psi_\lambda({\rm J}) \psi_\mu({\rm J}^{-1})\, \delta_{{\bar\lambda},{\bar\mu}^+} \,
\eta^{\rm J}_{\bar\lambda}
= \Frac{|{\cal G}|}{\u\lambda} \,
\Cl\lambda_{\psi_\lambda,\psi_\mu}\, \delta_{{\bar\lambda},{\bar\mu}^+}
\,, \\{}\\[-.5em]
C^\calb_{{[\rhob,\psu_\rho]},{[\sigmab,\psu_\sigma]}} \!\!
&= \Frac{|{\cal G}|}{\sqrt{\s\rho\u\rho\s\sigma\u\sigma}}
\displaystyle\sumbo\lambda \sum_{{\rm J}\in{\cal U}_\rho\cap{\cal U}_\sigma}
{\hat\psi}_\rho({\rm J}^{-1})^*\,{\hat\psi}_\sigma({\rm J})^*\,
S^{{\rm J}^{-1}}_{{\bar\lambda},{\bar\rho}}\,S^{\rm J}_{{\bar\lambda},{\rp\sigma}}
\\{}\\[-.8em]
&= \Frac1{\sqrt{\s\rho\u\rho\s\sigma\u\sigma}}
\displaystyle\sum_{\bar\lambda} \sum_{{\rm K}\in{\cal G}} {\rm e}^{2\pi{\rm i} Q_{\rm K}(\lambda)}
\sum_{{\rm J}\in{\cal U}_\rho\cap{\cal U}_\sigma} {\hat\psi}_\rho({\rm J}){\hat\psi}_\sigma({\rm J}^{-1})\,
S^{\rm J}_{{\bar\rho},{\bar\lambda}}\,S^{\rm J}_{{\bar\lambda},{\rp\sigma}}
\\{}\\[-.8em]
&= \Frac1{\sqrt{\s\rho\u\rho\s\sigma\u\sigma}}
\displaystyle\sum_{{\rm J}\in{\cal U}_\rho\cap{\cal U}_\sigma}{\hat\psi}_\rho({\rm J}){\hat\psi}_\sigma({\rm J}^{-1})\,
\sum_{{\rm K}\in{\cal G}} F_\sigma({\rm K},{\rm J})\,
\eta^{\rm J}_{\bar\rho}\, \delta_{{\bar\rho},({\rm K}{\rp\sigma})^+_{\phantom I}}
\\{}\\[-.8em]
&= \Frac1{\s\rho} \displaystyle\sum_{{\rm K}\in{\cal G}} \Cr\rho_{{\hat\psi}_\rho,{}_{{\rm K}}{\hat\psi}_\sigma}\,
\delta_{{\bar\rho},({\rm K}{\rp\sigma})^+_{\phantom I}}
\,. \end{array} \end{equation}
Here we used the identities \Erf-J and \Erf eJ, and introduced
\begin{equation} \Cl\lambda_{\psi,\psi'}
:= \Frac1{\s\lambda} \sum_{{\rm J}\in{\cal S}_\lambda} \psi({\rm J})\, \eta^{\rm J}_{\bar\lambda}\,
\psi'({\rm J})^* \Labl Cl
as well as
\futnote{Compare formula (C.3) of \cite{fusS6}.}
\begin{equation} \Cr\rho_{{\hat\psi},{\hat\psi}'}
:= \Frac1{\u\rho} \sum_{{\rm J}\in{\cal U}_\rho} {\hat\psi}({\rm J})\, \eta^{\rm J}_{\bar\rho}\,
{\hat\psi}'({\rm J})^* \Labl Cr
for any two characters $\psi,\psi'\,{\in}\,{\cal S}_\lambda^*$, respectively\
${\hat\psi},{\hat\psi}'\,{\in}\,{\cal U}_\rho^*$.
To proceed, we need several properties of the matrices $\Cl\lambda$ and
$\Cr\rho$. First, with the help of the identity \Erf es we have
\begin{equation} \mbox{\large(} \Cl\lambda_{\psi,\psi'} \mbox{\large)}^*
= \Frac1{\s\lambda} \displaystyle\sum_{{\rm J}\in{\cal S}_\lambda}
\psi({\rm J})^*\,(\eta^{\rm J}_{\bar\lambda})^*\,\psi'({\rm J})
= \Frac1{\s\lambda} \displaystyle\sum_{{\rm J}\in{\cal S}_\lambda}
\psi({\rm J}^{-1})\,\eta^{{\rm J}^{-1}}_{\bar\lambda}\,\psi'({\rm J}^{-1})^*
= \Cl\lambda_{\psi,\psi'} \,, \Labl*C
i.e.\ $\Cl\lambda$ is real.
Second, combining this reality property with the identity \Erf ep we see that
\begin{equation} \Cl{\lambda^+}_{\psi,\psi'}
= \Frac1{\s\lambda} \displaystyle\sum_{{\rm J}\in{\cal S}_\lambda}
\psi({\rm J})\, \eta^{\rm J}_{{\bar\lambda}^+}\, \psi'({\rm J})^*
= \Frac1{\s\lambda} \displaystyle\sum_{{\rm J}\in{\cal S}_\lambda}
\psi'({\rm J})^*\,(\eta^{\rm J}_{\bar\lambda})^*\,\psi({\rm J})
= \mbox{\large(} \Cl\lambda_{\psi',\psi} \mbox{\large)}^* = \Cl\lambda_{\psi',\psi} \,, \end{equation}
i.e.
\begin{equation} \Cl{\lambda^+} = \mbox{\large(} \Cl\lambda \mbox{\large)}^{\rm t}_{} \,. \Labl Ct
Analogous computations yield
\begin{equation} \mbox{\large(} \Cr\rho \mbox{\large)}^* = \Cr\rho \qquad{\rm and}\qquad
\Cr{\rho^+} = \mbox{\large(} \Cr\rho \mbox{\large)}^{\rm t}_{} \,. \Labl CT
Finally, implementing the identity \Erf er, i.e.\ the fact that
$\eta^{\rm J}_{\bar\rho}$ is a character of ${\cal U}_\rho$, we have
\begin{equation} \sum_{{\hat\psi}'\in{\cal U}_\rho^*} \Cr\rho_{{\hat\psi},{\hat\psi}'}\, {\hat\psi}'({\rm J}')
= \Frac1{\u\rho} \displaystyle\sum_{{\rm J}\in{\cal U}_\rho}
{\hat\psi}({\rm J})\, \eta^{\rm J}_{\bar\rho} \sum_{{\hat\psi}'\in{\cal U}_\rho^*} {\hat\psi}'({\rm J})^*\,{\hat\psi}'({\rm J}')
= {\hat\psi}({\rm J}')\, \eta^{{\rm J}'}_{\bar\rho} = {\hat\psi}^+({\rm J}') \Labl8n
with the character ${\hat\psi}^+\,{\in}\,{\cal U}_\rho^*$ (not to be mixed up with the
complex conjugate character ${\hat\psi}^*\,{\in}\,{\cal U}_\rho^*$) as defined by \Erf9n, which
means that the map $\Cr\rho$ on the boundary conditions is a permutation. We can thus write
\begin{equation} \Cr\rho_{{\hat\psi},{\hat\psi}'} \equiv \delta^{}_{{\hat\psi}',{\hat\psi}^+}
= \delta^{}_{{\hat\psi}',\pi_{\bar\rho}({\hat\psi})} \end{equation}
with some permutation $\pi_{\bar\rho}$. Because of \Erf CT we also have
\begin{equation} \pi_{{\bar\rho}^+}^{} = (\pi_{\bar\rho})^{-1} \,. \Labl2n
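The two group-theoretic facts underlying \Erf8n{} -- character orthogonality,
and the statement that multiplication by the character $\eta^{\rm J}_{\bar\rho}$
permutes ${\cal U}_\rho^*$ -- can be checked numerically in a toy model. In the
following sketch, ${\cal U}_\rho$ is replaced by a cyclic group ${\dl Z}_n$ and
$\eta^{\rm J}$ by an arbitrary fixed character; both are illustrative choices of
ours, not data of any specific model:

```python
import cmath

# Characters of a cyclic group Z_n: psi_k(j) = exp(2*pi*i*k*j/n).
# Z_n stands in for the untwisted stabilizer U_rho (toy choice).
n = 5

def psi(k, j):
    return cmath.exp(2j * cmath.pi * k * j / n)

# Orthogonality, as used in the text:
#   sum_{psi' in U^*} psi'(j)^* psi'(j') = |U| * delta_{j,j'}
def pairing(j, jp):
    return sum(psi(k, j).conjugate() * psi(k, jp) for k in range(n))

assert abs(pairing(2, 2) - n) < 1e-9   # j = j' gives |U|
assert abs(pairing(2, 3)) < 1e-9       # j != j' gives 0

# Multiplying every character by one fixed character (standing in for
# eta^J, which by the text is itself a character of U_rho) permutes U^*:
m = 3  # psi_k * psi_m = psi_{(k+m) mod n}
images = sorted((k + m) % n for k in range(n))
print(images == list(range(n)))  # True: k -> k+m mod n is a permutation
```

The permutation property is exactly what makes $\Cr\rho$ act as a
relabelling $\pi_{\bar\rho}$ rather than a genuine mixing of characters.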
Having obtained these properties of $\Cr\rho$, we can finally
conclude that $C^\calb$ is a {\em conjugation\/}, i.e.\ it is symmetric and
each column and row contains just a single non-zero element:
\begin{equation} C^\calb_{{[\rhob,\psu_\rho]},{[\sigmab,\psu_\sigma]}} = \Frac1{\s\rho} \sum_{{\rm K}\in{\cal G}}
\, \delta^{}_{{\bar\rho},({\rm K}{\rp\sigma})^+_{\phantom I}}\,
\Cr\rho_{{\hat\psi}_\rho,{}_{{\rm K}}{\hat\psi}_\sigma} = \delta_{{[\rhob,\psu_\rho]},{[\sigmab,\psu_\sigma]}^+} \end{equation}
with ${[\sigmab,\psu_\sigma]}^+\,{=}\,[{\rp\sigma}^+,{\hat\psi}_\sigma^+]$
(and ${\hat\psi}_\sigma^+({\rm J})\,{\equiv}\,{\hat\psi}_\sigma({\rm J})\eta^{{\rm J}}_{\rp\sigma}$).
In particular $C^\calb$ is an involution, i.e.\ we have
\begin{equation} (C^\calb)_{\phantom I}^2 = \mbox{\small $1\!\!$}1 \,. \end{equation}
(The crucial property of $\pi_{\bar\rho}$ entering here is \Erf2n;
that relation is not tied to the order of $\pi_{\bar\rho}$, so that in particular
$\pi_{\bar\rho}$ need not have order two itself.)
For the map $C^B$ on boundary blocks,
the conclusions are somewhat different. We still have
\begin{equation} \sum_{\psi'\in{\cal S}_\lambda^*} \Cl\lambda_{\psi,\psi'}\, \psi'({\rm J}')
= \psi({\rm J}')\, \eta^{{\rm J}'}_{\bar\lambda} \end{equation}
similarly to \Erf8n; but since $\eta_{\bar\lambda}$ is only a character of
${\cal U}_\lambda$, but not necessarily of ${\cal S}_\lambda$, the expression on the
right hand side, seen as a function on ${\cal S}_\lambda$, is not a character any more.
Therefore $C^B$ in general no longer constitutes a permutation.
(Of course, when ${\cal U}_\lambda$ coincides with ${\cal S}_\lambda$, it still does.
In this case the arguments are completely parallel to those above, leading to
the conclusion that $UC^B$ is a conjugation as well.) As a consequence, the
matrix $C^B$ is not, in general, a (weighted) conjugation. Nevertheless we can
again conclude that $C^B$ is (weighted) involutive. Namely, we find
\begin{equation} \begin{array}{ll}
\displaystyle\sum_{\psi'\in{\cal S}_\lambda^*} \mbox{\Large(} \sum_{\psi''\in{\cal S}_\lambda^*}
\Cl\lambda_{\psi,\psi''}\Cl\lambda_{\psi',\psi''} \mbox{\Large)} \, \psi'({\rm J}) \!\!
&= \Frac1{\s\lambda} \displaystyle\sum_{{\rm J}'\in{\cal S}_\lambda} \eta^{{\rm J}'}_{\bar\lambda}
\eta^{{\rm J}^{-1}}_{\bar\lambda} \sum_{\psi''\in{\cal S}_\lambda^*}
\psi({\rm J}')\psi''({\rm J}')^*\psi''({\rm J})
\\{}\\[-.8em]
&= \eta^{\rm J}_{\bar\lambda}\eta^{{\rm J}^{-1}}_{\bar\lambda}\, \psi({\rm J})
= \psi({\rm J})
\,, \end{array} \end{equation}
where in the last step we used the identity \Erf et, and hence
\begin{equation} \begin{array}{ll}
((C^B)^2)_{({\bar\lambda},\psi_\lambda),({\bar\nu},\psi_\nu)} \!\!\!
&= \Frac{|{\cal G}|^2}{\u\lambda^2} \, \delta^{}_{{\bar\lambda},{\bar\nu}}\,
\displaystyle\sum_{\psi_\mu\in{\cal S}_\mu^*}
\Cl\lambda_{\psi_\lambda,\psi_\mu}\Cl{\lambda^+}_{\psi_\mu,\psi_\nu}
= \Frac{|{\cal G}|^2}{\u\lambda^2} \, \delta^{}_{{\bar\lambda},{\bar\nu}}\,
\delta^{}_{\psi_\lambda,\psi_\nu}
= \Frac{|{\cal G}|^2}{\u\lambda^2} \,
\delta^{}_{({\bar\lambda},\psi_\lambda),({\bar\nu},\psi_\nu)}
\,. \end{array} \end{equation}
We can also deduce that
the inverse of $C^B$ is given in terms of the inverse of $\Tilde S$ as
\begin{equation} (C^B)^{-1}_{}
= (\Tilde S^{-1})^{\rm t}\, \Tilde S^{-1} \,. \labl{icon}
Let us point out that the existence of a conjugation $C^\calb$ on the boundary
{\em conditions\/} does not come as a big surprise. Indeed, it precisely
implements what one heuristically expects as the result of changing the
orientation of the boundary. The latter manipulation is required e.g.\ when
one wants to glue surfaces along boundaries.
In contrast, for the boundary {\em blocks\/} such a manipulation would not
make any sense; accordingly it is not too surprising that in the most
general case a genuine conjugation on the boundary blocks does not exist.
\subsection{Structure constants}
According to our general expectations, the structure constants of the
classifying algebra are to be defined through a Verlinde-like formula
via the matrix $\Tilde S$. We first introduce a corresponding quantity
\begin{equation} \tNl{({\bar\lambda}_1,\psi_{\lambda_1})}
{({\bar\lambda}_2,\psi_{\lambda_2})} {({\bar\lambda}_3,\psi_{\lambda_3})}
:= \sum_{[\rhob,\psu_\rho]} \frac{
\Tilde S_{({\bar\lambda}_1,\psi_{\lambda_1}),{[\rhob,\psu_\rho]}}
\Tilde S_{({\bar\lambda}_2,\psi_{\lambda_2}),{[\rhob,\psu_\rho]}}
\Tilde S_{({\bar\lambda}_3,\psi_{\lambda_3}),{[\rhob,\psu_\rho]}}} {\Tilde S_{{\rp\vac},{[\rhob,\psu_\rho]}}} \Labl4m
with only lower indices
and then raise the third index with the inverse \erf{icon} of $C^B$, so as
to arrive at the expression
\begin{equation} \begin{array}{ll}
\tN{({\bar\lambda}_1,\psi_1)}{({\bar\lambda}_2,\psi_2)}{({\bar\lambda}_3,\psi_3)} \!\!
&= \displaystyle\sum_{[\rhob]}\sum_{{\hat\psi}_\rho\in{\cal U}^*_\rho}
\Frac{ \Tilde S_{({\bar\lambda}_1,\psi_1),{[\rhob,\psu_\rho]}}\,
\Tilde S_{({\bar\lambda}_2,\psi_2),{[\rhob,\psu_\rho]}}\,
(\Tilde S^{-1})^{{[\rhob,\psu_\rho]},({\bar\lambda}_3,\psi_3)} } {\Tilde S_{{\rp\vac},{[\rhob,\psu_\rho]}}}
\\{}\\[-.95em]
&= \Frac{\u{\lambda_3}}{|{\cal G}|}\,\displaystyle\sum_{[\rhob]}\sum_{{\hat\psi}_\rho\in{\cal U}^*_\rho}
\Frac{ \Tilde S_{({\bar\lambda}_1,\psi_1),{[\rhob,\psu_\rho]}}\,
\Tilde S_{({\bar\lambda}_2,\psi_2),{[\rhob,\psu_\rho]}}\,
\Tilde S_{({\bar\lambda}_3,\psi_3),{[\rhob,\psu_\rho]}}^* } {\Tilde S_{{\rp\vac},{[\rhob,\psu_\rho]}}}
\end{array} \Labl4n
for the structure constants. More explicitly, we find
\begin{equation} \hsp{-.8} \begin{array}{ll}
\tN{({\bar\lambda}_1,\psi_1)}{({\bar\lambda}_2,\psi_2)}{({\bar\lambda}_3,\psi_3)} \!\!\!
&= \Frac{|{\cal G}|\,\sqrt{\u3}}{\sqrt{\s1\u1\s2\u2\s3}}\,
\displaystyle\sum_{\scriptstyle{\rm J}_i\in{\cal S}_i\atop {\rm J}_3={\rm J}_1{\rm J}_2}\!
\psi_1({\rm J}_1)\psi_2({\rm J}_2)\psi_3({\rm J}_3)^* \,
\sum_{[\rhob]} \Frac1{\s\rho}\, S^{{\rm J}_1}_{{\bar\lambda}_1,{\bar\rho}}
S^{{\rm J}_2}_{{\bar\lambda}_2,{\bar\rho}} (S^{{\rm J}_3}_{{\bar\lambda}_3,{\bar\rho}})^*
\,/\, \bar S_{{\rp\vac},{\bar\rho}}
\\{}\\[-.8em]
&= \Frac{\sqrt{\u3}}{\sqrt{\s1\u1\s2\u2\s3}}\,
\displaystyle\sum_{\scriptstyle{\rm J}_i\in{\cal S}_i\atop {\rm J}_3={\rm J}_1{\rm J}_2}\!
\psi_1({\rm J}_1)^*\psi_2({\rm J}_2)^*\psi_3({\rm J}_3) \,
\sum_{{\bar\rho}} S^{{\rm J}_1}_{{\bar\rho},{\bar\lambda}_1}
S^{{\rm J}_2}_{{\bar\rho},{\bar\lambda}_2} (S^{{\rm J}_3}_{{\bar\rho},{\bar\lambda}_3})^*
\,/\, \bar S_{{\rp\vac},{\bar\rho}}
\,. \end{array} \Labl5n
\subsection{Semisimplicity and irreducible representations}
The following properties of the structure constants and of the classifying algebra\ \mbox{$\calc(\calap)$}\
now follow directly:
\nxt
The structure constants \Erf4n are manifestly
symmetric in the first two indices. Thus \mbox{$\calc(\calap)$}\ is {\em commutative\/}.
\nxt
We have
\begin{equation} \hsp{-1.36} \begin{array}{l} \left(
\Tilde\Phi_{({\bar\lambda}_1,\psi_1)} \star \Tilde\Phi_{({\bar\lambda}_2,\psi_2)}\right)\star
\Tilde\Phi_{({\bar\lambda}_3,\psi_3)} \\{}\\[-.8em] \hsp{.8}
= \displaystyle\sum_{[\rhob]} \sum_{{\hat\psi}_\rho\in{\cal U}^*_\rho}\!
\sumbo{\lambda_4}\sum_{\psi_4\in{\cal S}^*_\lambda}\!
\frac{ \Tilde S_{({\bar\lambda}_1,\psi_1),{[\rhob,\psu_\rho]}}
\Tilde S_{({\bar\lambda}_2,\psi_2),{[\rhob,\psu_\rho]}} \Tilde S_{({\bar\lambda}_3,\psi_3),{[\rhob,\psu_\rho]}}}
{\Tilde S_{{\rp\vac},{[\rhob,\psu_\rho]}}^2}\, (\Tilde S^{-1})^{{[\rhob,\psu_\rho]},({\bar\lambda}_4,\psi_4)}\,
\Tilde\Phi_{({\bar\lambda}_4,\psi_4)} \,. \end{array} \end{equation}
This is totally symmetric in the three labels $({\bar\lambda}_i,\psi_i)$ for
$i\eq1,2,3$. It follows that \mbox{$\calc(\calap)$}\ is {\em associative\/}.
\nxt
It is also immediately verified that \mbox{$\calc(\calap)$}\ is {\em unital\/}. The
unit element is ${\bar\lambda}\,{=}\,{\rp\vac}$.
\nxt
By construction, the matrix $\Tilde S$ simultaneously diagonalizes all
matrices $\Tilde{\rm N}_{({\bar\lambda}_1,\psi_{\lambda_1})}$. Explicitly,
\begin{equation} \begin{array}{l}
\displaystyle\sum_{{\bar\lambda}_2,\psi_2} \sum_{{\bar\lambda}_3,\psi_3}
(\Tilde S^{-1})^{{[\rhob,\psu_\rho]},({\bar\lambda}_2,\psi_2)}\,
\tN{({\bar\lambda}_1,\psi_1)}{({\bar\lambda}_2,\psi_2)}{({\bar\lambda}_3,\psi_3)}\,
\Tilde S_{({\bar\lambda}_3,\psi_3),{[\rhob',\psu_\rho']}}\,
\\{}\\[-.9em] \hsp{3.32}
= \displaystyle\sum_{{\bar\lambda}_2,\psi_2} \sum_{{\bar\lambda}_3,\psi_3}
\sum_{{[\sigmab,\phu]}} \sum_{{[\sigmab',\phu']}} \sum_{{\bar\lambda}_4,\psi_4}
(\Tilde S_{{\rp\vac},{[\sigmab,\phu]}})^{-1}_{} (\Tilde S^{-1})^{{[\rhob,\psu_\rho]},({\bar\lambda}_2,\psi_2)}\,
\Tilde S_{({\bar\lambda}_3,\psi_3),{[\rhob',\psu_\rho']}}\,
\\{}\\[-.69em] \hsp{6.7}
\Tilde S_{({\bar\lambda}_1,\psi_1),{[\sigmab,\phu]}} \Tilde S_{({\bar\lambda}_2,\psi_2),{[\sigmab,\phu]}}
\Tilde S_{({\bar\lambda}_4,\psi_4),{[\sigmab,\phu]}}
(\Tilde S^{-1})^{{[\sigmab',\phu']},({\bar\lambda}_3,\psi_3)}
(\Tilde S^{-1})^{{[\sigmab',\phu']},({\bar\lambda}_4,\psi_4)}
\\{}\\[-.5em] \hsp{3.32}
= \delta^{{[\rhob,\psu_\rho]}}_{\ {[\rhob',\psu_\rho']}}\,
\Tilde S_{({\bar\lambda}_1,\psi_1),{[\rhob,\psu_\rho]}}\,/\, \Tilde S_{{\rp\vac},{[\rhob,\psu_\rho]}}
\,. \end{array} \Labl-n
Thus the regular representation\ of \mbox{$\calc(\calap)$}\ is fully reducible.
\nxt
Together these properties imply in particular that the associative algebra\
\mbox{$\calc(\calap)$}\ is {\em semisimple\/}.
\nxt
The matrix $C^B$ can be expressed through the structure constants as
\begin{equation} C^B_{({\bar\lambda}_1,\psi_{\lambda_1}),({\bar\lambda}_2,\psi_{\lambda_2})}
= \tNl{({\bar\lambda}_1,\psi_{\lambda_1})}{({\bar\lambda}_2,\psi_{\lambda_2})}{\rp\vac}
= |{\cal G}|\, \tN{({\bar\lambda}_1,\psi_{\lambda_1})}{({\bar\lambda}_2,\psi_{\lambda_2})}
{\ \ \ \ \ \ \ {\rp\vac}} \,. \end{equation}
Note, however, that generically this is not a conjugation.
(Recall that while the matrix $C^\calb$ provides a conjugation on the boundary conditions,
$C^B$ is in general only an involution, but not a conjugation on the
boundary blocks; for a given pair $({\bar\lambda}_1,\psi_1)$ the matrix element
$C^B_{({\bar\lambda}_1,\psi_{\lambda_1}),({\bar\lambda}_2,\psi_{\lambda_2})}$
can be non-vanishing for several pairs $({\bar\lambda}_2,\psi_{\lambda_2})$.)
The calculation in \Erf-n implies that each equivalence class ${[\rhob,\psu_\rho]}$
furnishes a one-dimen\-sional\ irreducible representation\ $R_{[\rhob,\psu_\rho]}$ of \mbox{$\calc(\calap)$}. According to the result
\erf{res1} these
irreducible representations are, in turn, precisely the reflection coefficients. We thus have
\begin{equation} \Rc{[\rhob,\psu_\rho]}{({\bar\lambda},\varphi)}{\rp\vac}
= R^{}_{[\rhob,\psu_\rho]} (\Tilde\Phi_{({\bar\lambda},\varphi)})
= \frac{\Tilde S_{({\bar\lambda},\varphi),{[\rhob,\psu_\rho]}}}{\Tilde S_{{\rp\vac},{[\rhob,\psu_\rho]}}} \,. \labl R
Moreover, due to the sum rule \Erf sr and the result that the algebra is
semisimple we can conclude that in fact the reflection coefficients
\erf R even provide {\em all\/} inequivalent irreducible representations and that these
are all one-dimen\-sional.
\subsection{Relation with chiral blocks}
Our next aim is to write the structure constants of the classifying algebra
in terms of quantities related to chiral blocks. In connection with blocks,
the natural quantities are the structure constants with only lower indices.
We find that
\begin{equation} \begin{array}{ll}
\tNl{({\bar\lambda}_1,\psi_1)}{({\bar\lambda}_2,\psi_2)}{({\bar\lambda}_3,\psi_3)} \!\!
&= d_1 d_2 d_3\,|{\cal G}|
\displaystyle\sum_{({\rm J}_1,{\rm J}_2,{\rm J}_3)\in{\cal S}_1\times{\cal S}_2\times{\cal S}_3
\atop {\rm J}_1{\rm J}_2{\rm J}_3=1} \prod_{i=1}^3 \Frac{\psi_i({\rm J}_i)}{\s i} \sum_{\bar\rho}
\Frac{ S^{{\rm J}_1}_{{\bar\lambda}_1,{\bar\rho}}
S^{{\rm J}_2}_{{\bar\lambda}_2,{\bar\rho}} S^{{\rm J}_3}_{{\bar\lambda}_3,{\bar\rho}}} {\bar S_{{\rp\vac},{\bar\rho}}}
\\{}\\[-.8em]
&=: \Frac{d_1 d_2 d_3\,|{\cal G}|}{ |{\cal S}_1\,{\cdot}\,{\cal S}_2\,{\cdot}\,{\cal S}_3|} \cdot
\DpNl{({\bar\lambda}_1,\psi_1)}{({\bar\lambda}_2,\psi_2)}{({\bar\lambda}_3,\psi_3)} \,,
\end{array} \Labl3n
where $d_i\,{\equiv}\,d_{\lambda_i}$ etc., and where
in the second step we introduced {\em projected fusion coefficients\/} $\widehat{\rm N}$;
also, ${\cal S}_1\,{\cdot}\,{\cal S}_2\,{\cdot}\,{\cal S}_3$ is by definition the subgroup of ${\cal G}$
that is generated by the three subgroups ${\cal S}_i$.
To rewrite the coefficients $\widehat{\rm N}$ in a more convenient form, we
consider the group homomorphism $p{:}\ {\cal S}_1\,{\times}\,{\cal S}_2\,{\times}\,{\cal S}_3\to{\cal G}$
that is defined by taking the product in ${\cal G}$ of three elements,
\begin{equation} p: \quad {\cal S}_1\,{\times}\,{\cal S}_2\,{\times}\,{\cal S}_3\ni\,
(s_1,s_2,s_3) \,\mapsto\, p(s_1,s_2,s_3):=s_1s_2s_3 \,. \end{equation}
By the homomorphy theorem, the number of elements in the kernel of $p$ is
\begin{equation} |\ker p| = \s1\,\s2\,\s3 \,/\, |{\cal S}_1\,{\cdot}\,{\cal S}_2\,{\cdot}\,{\cal S}_3| \,.
\labl{kern}
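The counting \erf{kern} is elementary finite group theory (the homomorphy
theorem applied to $p$). As a sanity check, here is a small computation in a
toy case -- ${\cal G}\,{=}\,{\dl Z}_2{\times}{\dl Z}_2$ with an arbitrary choice
of subgroups ${\cal S}_i$, ours for illustration only:

```python
from itertools import product

# Toy example: G = Z_2 x Z_2 (written additively), p(s1,s2,s3) = s1+s2+s3.
def add(x, y):
    return ((x[0] + y[0]) % 2, (x[1] + y[1]) % 2)

e, a, b = (0, 0), (1, 0), (0, 1)
S1, S2, S3 = [e, a], [e, a], [e, b]   # illustrative stabilizer subgroups

triples = list(product(S1, S2, S3))
ker = [t for t in triples if add(add(t[0], t[1]), t[2]) == e]

# In an abelian group the product set S1.S2.S3 equals the subgroup
# generated by S1, S2, S3, i.e. the image of p:
S123 = {add(add(t[0], t[1]), t[2]) for t in triples}

# Homomorphy theorem: |ker p| = |S1|*|S2|*|S3| / |S1.S2.S3|
print(len(ker), len(S1) * len(S2) * len(S3) // len(S123))  # 2 2
```

In this example $\s1\s2\s3\eq8$ and $|{\cal S}_1{\cdot}{\cal S}_2{\cdot}{\cal S}_3|\eq4$,
so $|\ker p|\eq2$, in accordance with \erf{kern}.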
Thus we can replace the product over the $\s i$ by the product of $|\ker p|$
and the number of elements in the group generated by all ${\cal S}_i$, so that
\begin{equation}
\DpNl{({\bar\lambda}_1,\psi_1)}{({\bar\lambda}_2,\psi_2)}{({\bar\lambda}_3,\psi_3)}
= \Frac1{|\ker p|}\,\sum_{({\rm J}_1,{\rm J}_2,{\rm J}_3)
\in{\cal S}_1\times{\cal S}_2\times{\cal S}_3 \atop {\rm J}_1{\rm J}_2{\rm J}_3=1}\,
\prod_{i=1}^3 \psi_i({\rm J}_i)\, \sum_{{\bar\rho}} \Frac{ S^{{\rm J}_1}_{{\bar\lambda}_1,{\bar\rho}}
S^{{\rm J}_2}_{{\bar\lambda}_2,{\bar\rho}} S^{{\rm J}_3}_{{\bar\lambda}_3,{\bar\rho}}} {S_{{\rp\vac},{\bar\rho}}}\,.
\labl{prfus}
We remark that $|\ker p|$ is precisely the number of elements
of the group over which we have to perform a
Fourier transformation in order to attain a projection on the chiral blocks.
Accordingly $|\ker p|$ indeed needs to be absorbed into the definition of the
coefficients
$\DpNl{({\bar\lambda}_1,\psi_1)}{({\bar\lambda}_2,\psi_2)}{({\bar\lambda}_3,\psi_3)}$.
These relations suggest that the structure constants of the classifying
algebra should be related to an appropriate action of the simple current
group ${\cal G}$ on the space of chiral blocks. This feature is familiar from
the situation studied in \cite{fuSc5}. However, owing to the fact that
the action of ${\cal G}$ is only projective, here
its precise realization is more involved and remains to be uncovered.
\sect{Annulus coefficients}\label{s.6}
\subsection{The $\Gs'$-extension}\label{s.61}
In this section we compute the annulus amplitude ${\rm A}_{\rho_1\,\rho_2}$ for
two arbitrary boundary conditions
\begin{equation} \rho_i \equiv [{\bar\rho}_i,{\hat\psi}_i] \end{equation}
($i\eq1,2$) and study its properties.
One way to obtain the annulus amplitude is to evaluate it in the closed string
channel, where it can be regarded as factorizing into two disk one-point
functions and a sphere two-point function, so that
it corresponds to propagation between the boundary
states $\langle{\cal B}_{[{\bar\rho}_i,{\hat\psi}_i]}|$, according to
\begin{equation} {\rm A}_{{[\rhob_1,\psu_1]}\,{[\rhob_2,\psu_2]}}(t)
= \langle {\cal B}_{[\rhob_2,\psu_2]}|\, {\rm e}^{-(2\pi/t)(L_0\raisebox{.07em}{$\scriptscriptstyle\otimes$}{\bf1}+{\bf1}\raisebox{.07em}{$\scriptscriptstyle\otimes$} L_0-c/12)} \,|
{\cal B}_{[\rhob_1,\psu_1]} \rangle \,. \labl A
Here by a {\em boundary state\/}
\cite{card9,poca,clny3,dpfhls,ishi,grwa,fuSc6,reSC} one means a linear form
${\cal B}_{[\rhob,\psu]}$ on the space $\bigoplus_{\bar\mu}\bar{\cal H}_{\bar\mu}^{}\,{\otimes}\,\bar{\cal H}_{\bar\mu^+_{\phantom i}}$ of
all closed string states (which correspond to bulk fields) that is characterized
by the property that when applied to an element $v\,\ot\,\tilde v\,{\in}\,{\cal V}_{\hat\varphi}{\otimes}
\bar{\cal H}_{({\bar\lambda},{\hat\varphi})}\,{\otimes}\,{\cal V}_{{\hat\varphi}^+}{\otimes}\bar{\cal H}_{({\lambdab^{\!+}_{\phantom i}},{\hat\varphi}^+)}$
it yields the one-point correlator of the corresponding bulk field on the disk,
i.e.
\begin{equation} {\cal B}_{[\rhob,\psu]}(v\raisebox{.07em}{$\scriptstyle\otimes$}\tilde v) =
\langle\phi_{({\bar\lambda},\varphi),({\lambdab^{\!+}_{\phantom i}},\varphi^+)}(v\raisebox{.07em}{$\scriptstyle\otimes$}\tilde v;z{=}0)
{\rangle}_{[\rhob,\psu]} \,. \Labl88
It follows that ${\cal B}_{[\rhob,\psu]}$ can be written as the specific linear combination
\begin{equation} {\cal B}_{[\rhob,\psu]}
= \bigoplus_{\scriptstyle{\bar\lambda} \atop Q_{\cal G}(\lambda)=0}
\bigoplus_{\varphi\in{\cal S}_\lambda^*}
\Rc{[\rhob,\psu]}{({\bar\lambda},\varphi)}{\rp\vac}\, \langle\Psi^{{[\rhob,\psu]}\,{[\rhob,\psu]}}_{\rp\vac}\rangle\,
{\tilde\Beta}_{({\bar\lambda},\varphi)} \Labl89
of boundary blocks ${\tilde\Beta}_{({\bar\lambda},\varphi)}$.
To each boundary condition $\rho_i$ we associate the character
\begin{equation} {g}_i \equiv {g}_{\rho_i}^{(Q)}
\in {\cal G}^* = ({G^*})^*_{} \cong G \end{equation}
of ${\cal G}$ that maps every simple current to the value of the corresponding
monodromy charge \Erf QJ, i.e.
\begin{equation} {g}_i({\rm J}) := \exp(2\pi{\rm i} Q_{\rm J}(\rho_i)) \end{equation}
for all ${\rm J}\,{\in}\,{\cal G}$. As already mentioned in the introduction, this quantity
can be regarded as an element of the orbifold group $G$, and indeed it
coincides with the so-called automorphism type of the boundary condition\ (for details, see
\cite{fuSc12}). Inspection shows that in the amplitude \erf A one deals with linear
combinations of characters of that orbifold theory in which the cyclic subgroup
$\langle{g}_1^{-1}{g}_2\rangle$ of $G$ generated by
${g}_1^{-1}{g}_2\,{\in}\, G$ is broken. (These combinations are
known as {\em twining characters\/} \cite{fusS3,fusS6} of the ${\mathfrak A}$-theory.)
This orbifold theory can equivalently be described as an integer spin simple
current extension of the ${\bar\cala}$-theory by a subgroup of ${\cal G}$ which is
a proper subgroup when ${g}_1\,{\ne}\,{g}_2$.
In more precise terms the situation is described as follows. The exact sequence
\begin{equation} 0 \to \langle {g}_1^{-1}{g}_2\rangle \to G \to
G / \langle {g}_1^{-1}{g}_2\rangle \to 0 \end{equation}
of finite abelian groups implies the exact sequence
\begin{equation} 0 \to (G/ \langle {g}_1^{-1}{g}_2\rangle)^* \to {\cal G} \to
\langle {g}_1^{-1}{g}_2\rangle^* \to 0 \end{equation}
of their character groups. Therefore we can extend the ${\bar\cala}$-theory by
${(G/\langle{g}_1^{-1}{g}_2\rangle)}^*_{\phantom I}$. This is the subgroup
of those characters of $G$ which descend to characters of the quotient, and
these are precisely those which are the identity on $\langle{g}_1^{-1}
{g}_2\rangle$, i.e.\ those simple currents ${\rm J}$ which obey ${g}_1^{-1}
{g}_2({\rm J})\eq1$, which in turn is the same as $Q_{\rm J}(\rho_1)\,{=}\, Q_{\rm J}(\rho_2)$.
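The identification of $(G/\langle g_1^{-1}g_2\rangle)^*$ with the characters of
$G$ that are trivial on the subgroup can be made concrete in a toy case. The
following sketch takes $G\,{=}\,{\dl Z}_4$ and subgroup ${\dl Z}_2$ (our
illustrative choice) and counts the characters that descend:

```python
import cmath

# Characters of G = Z_4: chi_k(j) = exp(2*pi*i*k*j/4).
n = 4
H = [0, 2]  # toy subgroup <g_1^{-1} g_2> ~ Z_2 inside Z_4

def chi(k, j):
    return cmath.exp(2j * cmath.pi * k * j / n)

# Characters trivial on H are exactly those descending to (G/H)^*:
descending = [k for k in range(n)
              if all(abs(chi(k, h) - 1) < 1e-9 for h in H)]

print(descending)                      # [0, 2]
print(len(descending) == n // len(H))  # |(G/H)^*| = |G| / |H|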
Accordingly, one expects that the annulus amplitude can be expressed as a
linear combination of characters of the extension of the ${\bar\cala}$-theory by
the subgroup
\begin{equation} {\cal G}^\circ \equiv {\cal G}^\circ_{\rho_1\rho_2}
:=\{ {\rm J}\,{\in}\, {\cal G} \,|\, Q_{\rm J}(\rho_1)\,{=}\, Q_{\rm J}(\rho_2) \} \Labl Go
of ${\cal G}$. As we will see, this is indeed possible; it is, however,
not the most natural choice, for the following reason. Ultimately,
our goal is to express the annulus amplitude as a linear combination
\begin{equation} {\rm A}_{\rho_1\,\rho_2} = \sum_\sigma{\rm A}_{\rho_1\,\rho_2}^\sigma\,
{\cal X}^{({\cal K})}_\sigma \end{equation}
of irreducible characters ${\cal X}^{({\cal K})}_\sigma$ in some extension ${\cal K}$
of the ${\bar\cala}$-theory. Now the interpretation of the annulus amplitude
as an open string {\em partition function\/} imposes the requirement that, when
${\rm A}_{\rho_1\rho_2}(t)$ is expanded as a function of $q\,{=}\,\exp(2\pi{\rm i}({\rm i} t/2))$,
the coefficients in this expansion are non-negative integers.
While this does not necessarily imply that all the numbers
${\rm A}_{\rho_1\rho_2}^\sigma$ are themselves
integers, it has been observed in many situations \cite{fuSc5,fuSc9} that the
multiplicities ${\rm A}_{\rho_1\rho_2}^\sigma$ possess a natural interpretation as
the rank of (a subsheaf of) a sheaf of chiral blocks. (This interpretation also
enables one to establish the integrality property in full generality.)
In order to relate ${\rm A}_{\rho_1\,\rho_2}^\sigma$ to such a rank of a
chiral block, we need to work with an extended theory in which both $\rho_1$ and
$\rho_2$ are allowed fields, which is the case if their monodromy charges
both vanish. The ${\cal G}^\circ$-extension does not meet this condition in general;
rather, we need to consider the extension by the subgroup
\begin{equation} \Gs' \equiv \Gs'_{\rho_1\rho_2}
:=\{ {\rm J}\,{\in}\, {\cal G} \,|\, Q_{\rm J}(\rho_1)\eq0\,{=}\, Q_{\rm J}(\rho_2) \} \end{equation}
of ${\cal G}$, which is the largest subgroup of ${\cal G}^\circ$ that has the desired
property.
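The distinction between ${\cal G}^\circ$ and $\Gs'$ is easy to illustrate with
made-up monodromy charges; the values below are a hypothetical assignment of
ours, not taken from any specific model:

```python
from fractions import Fraction

# Hypothetical monodromy charges Q_J(rho_1), Q_J(rho_2) for G = Z_4.
half, zero = Fraction(1, 2), Fraction(0)
Q1 = {0: zero, 1: half, 2: zero, 3: half}
Q2 = {0: zero, 1: half, 2: zero, 3: half}

G = [0, 1, 2, 3]
G_circ = [J for J in G if Q1[J] == Q2[J]]            # G°: equal charges
G_prime = [J for J in G if Q1[J] == zero == Q2[J]]   # G°': both charges zero

print(G_circ)   # all of Z_4: the two charge assignments agree everywhere
print(G_prime)  # [0, 2]: here G°' is a proper subgroup of G°
```

Even when the two boundary conditions have identical charges, so that
${\cal G}^\circ\,{=}\,{\cal G}$, the subgroup $\Gs'$ retains only the currents
under which both charges actually vanish.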
Thus we define the {\em annulus coefficients\/} ${\rm A}_{\rho_1\,\rho_2}^\sigma$
as the multiplicities of characters in the extension of the ${\bar\cala}$-theory
by $\Gs'$; in more precise notation,
\begin{equation} {\rm A}_{{[\rhob_1,\psu_1]}\,{[\rhob_2,\psu_2]}}(t) = \sum_{[\sigmab]\oei} {\rm A}_{{[\rhob_1,\psu_1]}\,{[\rhob_2,\psu_2]}}^{[\sigmab,\psu_\sigma]\oei}
\, {\cal X}'_{[\sigmab,\psu_\sigma]\oei}(\Frac{{\rm i} t}2) \,, \end{equation}
where ${[\sigmab,\psu_\sigma]\oei}$ is the $\Gs'$-orbit of $({\rp\sigma},{\hat\psi}_\sigma)$. We will
demonstrate that
these quantities can be expressed as a sum of fusion rule coefficients in the
$\Gs'$-extension, and check that various consistency requirements are satisfied.
Our first task is to make sure that the characters of the $\Gs'$-extension
indeed appear in the expression of the annulus partition function in the
closed string channel. These characters read (compare formula \erf X)
\begin{equation} {\cal X}'_{[\lambdab,{\hat\psi}_\lambda]'} = \Frac1{\sqrt{\sp\lambda\up\lambda}}
\sum_{{\rm J}\in\Gs'} \bar{\raisebox{.15em}{$\chi$}}_{({\rm J}{\bar\lambda},{\hat\psi\oei}_\lambda)}
\equiv \Frac1{\sqrt{\sp\lambda\up\lambda}}\,
\mbox{\large(} \sum_{{\rm J}\in\Gs'} \bar{\raisebox{.15em}{$\chi$}}_{{\rm J}{\bar\lambda}} \mbox{\large)}^{}_{{\hat\psi\oei}_\lambda} \,. \Labl cp
Here $\sp\lambda\,{=}\,|\cals'_\lambda|$ and $\up\lambda\,{=}\,|\calu'_\lambda|$ are
the cardinalities of the full and untwisted stabilizer $\cals'_\lambda$ and of
\begin{equation} \calu'_\lambda = \{ {\rm J}\,{\in}\,\cals'_\lambda \,|\,
F_\lambda({\rm J},{\rm K})\eq1\;{\rm for\; all}\; {\rm K}\,{\in}\,\cals'_\lambda\} \,, \end{equation}
respectively, which are relevant in the $\Gs'$-extension, and ${\hat\psi\oei}_\lambda$ is a
character of $\calu'_\lambda$. Note that while one has
\begin{equation} \cals'_\lambda = {\cal S}_\lambda \cap\Gs' \,, \Labl90
there is no simple relation between $\calu'_\lambda$ and ${\cal U}_\lambda$. In
particular, $\calu'_\lambda$ differs, in general, from the intersection
$\caluhp\lambda$, though it always contains it as a subgroup:
\begin{equation} \caluhp\lambda \,\subseteq\, \calu'_\lambda \,; \end{equation}
in fact, already in simple examples it happens that
$\calu'_\lambda$ is larger than ${\cal U}_\lambda$.\,%
\futnote{An example occurs for the case of the $D_4$ level 2 WZW theory. In this
case for the fixed point with stabilizer ${\dl Z}_2\times{\dl Z}_2$ the untwisted
stabilizer is trivial, but when the second boundary condition\ is taken to be in a
twisted sector, then both $\cals'$ and $\calu'$ are equal to the
corresponding ${\dl Z}_2$ under which the twisted sector is fixed.}
\subsection{Expressions for the annulus coefficients}\label{s.62}
To proceed, we need more explicit expressions for the annulus coefficients.
To this end we insert the result \Erf ia for the regulated inner product
\Erf mf of the boundary blocks and the relation \erf{unten}
between one-point correlators and boundary blocks into formula \erf A. We
also substitute for the coefficients in the latter relation the explicit
expressions \Erf xR and \erf R as well as the normalization
\begin{equation} \langle\Psi^{{[\rhob,\psu_\rho]}\,{[\rhob,\psu_\rho]}}_{\rp\vac}\rangle = \Tilde S_{{\rp\vac},{[\rhob,\psu_\rho]}} \Labl xr
for the vacuum boundary fields. This yields
\begin{equation} \begin{array}{ll} {\rm A}_{{[\rhob_1,\psu_1]}\,{[\rhob_2,\psu_2]}}(t) \!\!
&= \displaystyle\sum_{\scriptstyle{\bar\lambda};\,\psi_\lambda\in{\cal S}_\lambda^* \atop\scriptstyle
{\bar\mu};\,\psi_\mu\in{\cal S}_\mu^*}
\mbox{\large(} \Rc{[\rhob_1,\psu_1]}{({\bar\lambda},\psi_\lambda)}{\rp\vac}\,
\langle\Psi^{{[\rhob_1,\psu_1]}\,{[\rhob_1,\psu_1]}}_{\rp\vac}\rangle {\mbox{\large)}}^* \,
\mbox{\large(} \Rc{[\rhob_2,\psu_2]}{({\bar\mu},\psi_\mu)}{\rp\vac}\,
\langle\Psi^{{[\rhob_2,\psu_2]}\,{[\rhob_2,\psu_2]}}_{\rp\vac}\rangle \mbox{\large)} \,
\\{}\\[-1.82em]& \hsp{10.9}
\langle {\tilde\Beta}_{({\bar\mu},\psi_\mu)}
|\, {\rm e}^{-(2\pi/t)(L_0\raisebox{.07em}{$\scriptscriptstyle\otimes$}{\bf1}+{\bf1}\raisebox{.07em}{$\scriptscriptstyle\otimes$} L_0-c/12)} \,|
{\tilde\Beta}_{({\bar\lambda},\psi_\lambda)} \rangle
\\{}\\[-.04em]
&= \displaystyle\sumbo\lambda\sum_{\psi_\lambda\in{\cal S}_\lambda^*}
\Tilde S_{({\bar\lambda},\psi_\lambda),{[\rhob_1,\psu_1]}}^* \Tilde S_{({\bar\lambda},\psi_\lambda),{[\rhob_2,\psu_2]}}\,
\Frac1{(|{\cal G}|/\u\lambda)\, \Sb\lambda\Omega \!\!\raisebox{.54em}{$\phantom.$}}\,
\bar{\raisebox{.15em}{$\chi$}}_{({\bar\lambda},\psi_\lambda)}(\Frac{2{\rm i}}t) \,. \end{array} \Labl1+
This result is in agreement with the requirement that in the closed channel
only those fields $({\bar\lambda},\psi_\lambda)$ are exchanged whose monodromy
charges with respect to the currents in $\Gs'$ vanish; indeed, the summation
even extends only over those fields
for which all monodromy charges of currents in the larger group ${\cal G}$ are zero.
Our next goal is to rewrite \Erf1+ entirely in terms of $\Gs'$-quantities;
first we arrive at a sum over $\Gs'$-orbits ${[\lambdab]'}$:\,%
\futnote{In the product of the two $\tilde S$-elements, one is supposed to
choose a representative of the orbit ${[\lambdab]'}$; it does not matter which one,
because the monodromy charges vanish.
If one worked with the larger group ${\cal G}^\circ$ instead of $\Gs'$,
one would again be able to show that only characters of the ${\cal G}^\circ$-extension
appear, as the monodromy charges of $\rho_1$ and $\rho_2$ are
equal and hence cancel due to the complex conjugation. However, the
two factors $\Tilde S$ separately would depend on the choice of representative
on the ${\cal G}^\circ$-orbits, and for having a well-defined expression one would
have to choose one and the same representative in both matrix elements.}
\begin{equation} \begin{array}{ll} {\rm A}_{{[\rhob_1,\psu_1]}\,{[\rhob_2,\psu_2]}}(t) \!\!
&= \displaystyle\sumBoP\lambda \sum_{{\rm J}\in\Gs'/\cals'_\lambda}
\sum_{\psi_\lambda\in{\cal S}_\lambda^*}
\Tilde S_{({\rm J}{\bar\lambda},\psi_\lambda),{[\rhob_1,\psu_1]}}^*
\Tilde S_{({\rm J}{\bar\lambda},\psi_\lambda),{[\rhob_2,\psu_2]}}\,
\Frac1{(|{\cal G}|/\u\lambda)\, \Sb\lambda\Omega \!\!\raisebox{.54em}{$\phantom.$}}\,
\bar{\raisebox{.15em}{$\chi$}}_{({\rm J}{\bar\lambda},\psi_\lambda)}(\Frac{2{\rm i}}t)
\\{}\\[-.8em]
&= \displaystyle\sumBoP\lambda \sum_{\psi_\lambda\in{\cal S}_\lambda^*}
\Tilde S_{({\bar\lambda},\psi_\lambda),{[\rhob_1,\psu_1]}}^*
\Tilde S_{({\bar\lambda},\psi_\lambda),{[\rhob_2,\psu_2]}}\,
\Frac1{(|{\cal G}|/\u\lambda)\, \Sb\lambda\Omega \!\!\raisebox{.54em}{$\phantom.$}}\,
\mbox{\Large(} \Frac1{\s\lambda'} \sum_{{\rm J}\in \Gs'}\bar{\raisebox{.15em}{$\chi$}}_{({\rm J}{\bar\lambda},\psi_\lambda)}
(\Frac{2{\rm i}}t) \mbox{\Large)} \,. \end{array} \Labl1;
In \Erf1; we are still dealing with the characters $\psi_\lambda\,{\in}\,
{\cal S}_\lambda^*$ and ${\hat\psi}_i\,{\in}\,{\cal U}_i^*$. To express the amplitude through the
correct quantities ${\hat\psi\oei}_\lambda\,{\in}\,\calu^{\prime*_{}}_\lambda$ and ${\hat\psi\oei}_i\,{\in}\,\calu^{\prime*_{}}_i$,
analogously to \Erf49 we write
\begin{equation} \psi_\lambda \succ \psi'_\lambda \end{equation}
when the restriction of the
${\cal G}$-character $\psi_\lambda$ to the subgroup $\Gs'$ of ${\cal G}$ is equal
to the $\Gs'$-character $\psi'_\lambda$, and similarly when we deal with
other embedded pairs of groups, e.g.\ stabilizers and untwisted stabilizers
(however, for the ${\hat\psi}_i$ we will have to be careful because
in general $\calu'_i$ is not a subgroup of ${\cal U}_i$). We then arrive at
\begin{equation} \begin{array}{ll} {\rm A}_{{[\rhob_1,\psu_1]}\,{[\rhob_2,\psu_2]}}(t) \!\!\!
&= \!\displaystyle\sumBoP\lambda \sum_{{\hat\psi\oei}_\lambda\in\calu^{\prime*_{}}_\lambda}
\mbox{\large(} \sumpsipsup\lambda
\Tilde S_{({\bar\lambda},\psi_\lambda),{[\rhob_1,\psu_1]}}^*
\Tilde S_{({\bar\lambda},\psi_\lambda),{[\rhob_2,\psu_2]}} \mbox{\large)}\,
\Frac1{(\dP\lambda^{}\,|{\cal G}|/\u\lambda)\, \Sb\lambda\Omega \!\!\raisebox{.54em}{$\phantom.$}}\,
{\cal X}'_{[\lambdab,{\hat\psi}_\lambda]'}(\Frac{2{\rm i}}t)
\,. \end{array} \end{equation}
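Recall that under the modular transformation $\tau\,{\mapsto}\,{-}1/\tau$ the
characters of the $\Gs'$-extension transform with the S-matrix $S\oei$ of that
extension,
\begin{equation} {\cal X}'_{[\lambdab,{\hat\psi}_\lambda]'}(\Frac{2{\rm i}}t)
= \sum_{[\sigmab]\oei}\sum_{{\hat\psi\oei}_\sigma\in\calu^{\prime*_{}}_\sigma}
S\oei_{{[\lambdab,{\hat\psi}_\lambda]'},{[\sigmab,\psu_\sigma]\oei}} \,
{\cal X}'_{[\sigmab,\psu_\sigma]\oei}(\Frac{{\rm i} t}2) \,. \end{equation}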
We are now in a position to perform a modular transformation that involves
the S-matrix $S\oei$ of the $\Gs'$-extension; we get
\begin{equation} \begin{array}{ll} {\rm A}_{{[\rhob_1,\psu_1]}\,{[\rhob_2,\psu_2]}}(t) \!\!\!
&= \displaystyle\sumBoP\lambda \sum_{{\hat\psi\oei}_\lambda\in\calu^{\prime*_{}}_\lambda}
\mbox{\Large(} \sumpsipsup\lambda
\Tilde S_{({\bar\lambda},\psi_\lambda),{[\rhob_1,\psu_1]}}^*
\Tilde S_{({\bar\lambda},\psi_\lambda),{[\rhob_2,\psu_2]}}^{} \mbox{\Large)}\,
\Frac1{(\dP\lambda^{}\,|{\cal G}|/\u\lambda)\, \Sb\lambda\Omega \!\!\raisebox{.54em}{$\phantom.$}}\,
\\{}\\[-1.54em]
& \hsp{12.9} \displaystyle\sum_{[\sigmab]\oei}\sum_{{\hat\psi\oei}_\sigma\in\calu^{\prime*_{}}_\sigma}
S\oei_{{[\lambdab,{\hat\psi}_\lambda]'},{[\sigmab,\psu_\sigma]\oei}} \,
{\cal X}'_{[\sigmab,\psu_\sigma]\oei}(\Frac{{\rm i} t}2) \,, \end{array} \Labl5r
from which we can finally read off the annulus coefficients as the
coefficients of the $\Gs'$-characters ${\cal X}'_{[\sigmab,\psu_\sigma]\oei}$.
Thus the annulus coefficients are given by
\begin{equation} \begin{array}{ll}
{\rm A}_{{[\rhob_1,\psu_1]}\,{[\rhob_2,\psu_2]}}^{[\sigmab,\psu_\sigma]\oei} \!\!
&= \displaystyle\sumBoP\lambda \Frac{|\Gs'|}{|{\cal G}|}\,\Frac{\u\lambda}{\sp\lambda}
\sum_{{\hat\psi\oei}_\lambda\in\calu^{\prime*_{}}_\lambda} \sumpsipsup\lambda
\Tilde S_{({\bar\lambda},\psi_\lambda),{[\rhob_1,\psu_1]}}^* \Tilde S_{({\bar\lambda},\psi_\lambda),{[\rhob_2,\psu_2]}}\,
S\oei_{{[\lambdab,{\hat\psi}_\lambda]'},{[\sigmab,\psu_\sigma]\oei}} \,/\, S\oei_{{[\lambdab,{\hat\psi}_\lambda]'},{\vac\oei}}
\,. \end{array} \Labl6r
As a check of the normalization of the annulus coefficients, let us specialize
to the case of boundary conditions that preserve the full bulk symmetry, in which case
$\Gs'\,{=}\,{\cal G}$ and the annulus coefficients coincide with the structure constants
of the fusion rule algebra. As seen after \Erf ss, in this case the matrix
elements of $\Tilde S$
coincide with those of $S\oei\,{=}\, S$ where one takes the orbit corresponding
to ${\bar\lambda}$. In addition we then have ${\hat\psi\oei}\,{=}\,{\hat\psi}$, so that the summation
over $\psi\,{\succ}\,{\hat\psi\oei}$ just amounts to a factor of $\s\lambda/\u\lambda$, and
we can use (compare \Erf23)
\begin{equation} S\oei_{{[\lambdab,{\hat\psi}_\lambda]'},{\vac\oei}}
= \Frac{|\Gs'|}{\sqrt{\s\lambda'\u\lambda'}}\, \bar S_{{\bar\lambda},{\rp\vac}}
\,. \end{equation}
Thus in the case $\Gs'\,{=}\,{\cal G}$ the result \Erf5r reduces to
\begin{equation} \begin{array}{ll} {\rm A}_{{[\rhob_1,\psu_1]}\,{[\rhob_2,\psu_2]}}^{[\sigmab,\psu_\sigma]} \!\!
&= \displaystyle\sumBo\lambda \sum_{{\hat\psi}_\lambda\in{\cal U}_\lambda^*}
S_{{[\lambdab,{\hat\psi}_\lambda]},{[\rhob_1,\psu_1]}}^*\, S_{{[\lambdab,{\hat\psi}_\lambda]},{[\rhob_2,\psu_2]}} \, S_{{[\lambdab,{\hat\psi}_\lambda]},{[\sigmab,\psu_\sigma]}}
\,/\, S_{{[\lambdab,{\hat\psi}_\lambda]},\Omega}
\,, \end{array} \end{equation}
from which by comparison with the Verlinde formula for the ${\mathfrak A}$-theory we
learn that the annulus coefficients indeed coincide with the structure constants
of the fusion algebra.
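Explicitly, the Verlinde formula reads
\begin{equation} {\rm N}_{a\,b}^{\;\;\,c} = \sum_{d} S_{d,a}^{}\, S_{d,b}^{}\,
S_{d,c}^*\,/\, S_{d,\Omega} \,, \end{equation}
where the summation runs over all primary fields $d\,{=}\,{[\lambdab,{\hat\psi}_\lambda]}$
of the ${\mathfrak A}$-theory, so that the result above amounts to
${\rm A}_{{[\rhob_1,\psu_1]}\,{[\rhob_2,\psu_2]}}^{[\sigmab,\psu_\sigma]}
= \Ne{[\rhob_2,\psu_2]}{[\sigmab,\psu_\sigma]}{\,{[\rhob_1,\psu_1]}}$.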
Let us mention one immediate consequence of the result \Erf6r. Relabelling the
summation variable $({\bar\lambda},\psi_\lambda)$ to ${\rm J}({\bar\lambda},\psi_\lambda)$
for an arbitrary current ${\rm J}\,{\in}\,{\cal G}$ and inserting the simple current
symmetries \Erf SQ and \Erf QQ of $\Tilde S$ and $S\oei$, respectively, one finds that
\begin{equation} {\rm A}_{{[\rhob_1,\psu_1]}\,{[\rhob_2,\psu_2]}}^{[\sigmab,\psu_\sigma]\oei}
= {\rm e}^{2\pi{\rm i}[Q_{\rm J}(\rho_2)-Q_{\rm J}(\rho_1)+Q_{\rm J}(\sigma)]}_{} \cdot
{\rm A}_{{[\rhob_1,\psu_1]}\,{[\rhob_2,\psu_2]}}^{[\sigmab,\psu_\sigma]\oei} \,. \Labl AQ
It follows that ${\rm A}_{{[\rhob_1,\psu_1]}\,{[\rhob_2,\psu_2]}}^{[\sigmab,\psu_\sigma]\oei}$ vanishes unless
$Q_{\rm J}(\sigma)\,{=}\, Q_{\rm J}(\rho_1)-Q_{\rm J}(\rho_2)$ for all ${\rm J}\,{\in}\,{\cal G}$. In short, the
annulus coefficients are graded by the monodromy charge.
\subsection{Relation with fusion coefficients of the $\Gs'$-extension}
To proceed we insert the formul\ae\ \Erf tS for $\Tilde S$ and (the analogue for
the $\Gs'$-extension of) \erf S for $S\oei$ into \Erf6r, leading to
\begin{equation} \begin{array}{ll}
{\rm A}_{{[\rhob_1,\psu_1]}\,{[\rhob_2,\psu_2]}}^{{[\sigmab,\psu_\sigma]\oei}} \!\!\!
&=\displaystyle\sumBoP\lambda \Frac{\u\lambda^{}}{|{\cal G}|\,\dP\lambda}\,
[\bar S_{{\bar\lambda},{\rp\vac}}]^{-1}_{}
\sum_{{\hat\psi\oei}_\lambda\in\calu^{\prime*_{}}_\lambda}\sumpsipsup\lambda
\Frac{|{\cal G}|^2}
{\s\lambda^{}\u\lambda^{}\,\sqrt{\s{\rho_1}\u{\rho_1}\s{\rho_2}\u{\rho_2}}} \,
\Frac{|\Gs'|}{\sqrt{\sp\lambda\up\lambda\sp\sigma\up\sigma}}
\\{}\\[-1.0em]
& \hsp{-3.3}
\displaystyle\sum_{\scriptstyle {\rm J}_1\in{\cal S}_\lambda\cap{\cal U}_{\rho_1} \atop
\scriptstyle {\rm J}_2\in{\cal S}_\lambda\cap{\cal U}_{\rho_2}}
\displaystyle\sum_{{\rm J}_3\in\calu'_\lambda\cap\calu'_\sigma}
\psi_\lambda({\rm J}_1)^*{\hat\psi}_1({\rm J}_1)\, (S^{{\rm J}_1}_{{\bar\lambda},{\rhob_1}})^*_{} \,
\psi_\lambda({\rm J}_2){\hat\psi}_2({\rm J}_2)^*\, S^{{\rm J}_2}_{{\bar\lambda},{\rhob_2}} \,
{\hat\psi\oei}_\lambda({\rm J}_3){\hat\psi\oei}_\sigma({\rm J}_3)^*\,S^{{\rm J}_3}_{{\bar\lambda},{\rp\sigma}}
\,. \end{array} \end{equation}
This somewhat unwieldy expression simplifies considerably when one performs
the $\psi_\lambda$-summation (obtained by combining the ${\hat\psi\oei}_\lambda$- and
$\psi_\lambda\,{\succ}\,{\hat\psi\oei}_\lambda$-summation) and implements the fact that
$S^{{\rm J}_i}_{{\bar\lambda},{\bar\rho}}$ is non-zero only if $Q_{{\rm J}_i}(\rho)\eq0$ (which
follows from the identities \Erf00 and \Erf0Q), i.e.\ only if ${\rm J}_i\,{\in}\,\Gs'$:
\begin{equation} \begin{array}{ll}
{\rm A}_{{[\rhob_1,\psu_1]}\,{[\rhob_2,\psu_2]}}^{{[\sigmab,\psu_\sigma]\oei}} \!\!\!
&= \Frac{|{\cal G}|\,|\Gs'|}{\sqrt{\s{\rho_1}\u{\rho_1}\s{\rho_2}\u{\rho_2}
\sp\sigma\up\sigma}}
\displaystyle\sumBoP\lambda \Frac1{\sp\lambda}
\\{}\\[-1.0em]
&\quad \displaystyle\sum_{\scriptstyle \J'_1\in\caluhp1,\,\J'_2\in\caluhp2,\,\J'_3\in
\calu'_\sigma \atop \J'_1=\J'_2\J'_3}
{\hat\psi}_1(\J'_1){\hat\psi}_2(\J'_2)^*{\hat\psi\oei}_\sigma(\J'_3)^*\,
(S^{\J'_1}_{{\bar\lambda},{\rhob_1}})^*_{}
S^{\J'_2}_{{\bar\lambda},{\rhob_2}}S^{\J'_3}_{{\bar\lambda},{\rp\sigma}} \,
[\bar S_{{\bar\lambda},{\rp\vac}}]^{-1}_{}
\,. \end{array} \end{equation}
Next we insert the analogue of \erf{lem} for the embedding
$\caluhp i\,{\subseteq}\,\calu'_i$ to arrive at an expression with
$\calu'_i$-characters:
\begin{equation} \begin{array}{ll}
{\rm A}_{{[\rhob_1,\psu_1]}\,{[\rhob_2,\psu_2]}}^{{[\sigmab,\psu_\sigma]\oei}} \!\!\!
&= \Frac{|{\cal G}|\,|\Gs'|}{\sqrt{\s{\rho_1}\u{\rho_1}\s{\rho_2}\u{\rho_2}
\sp\sigma\up\sigma}}
\displaystyle\sumBoP\lambda \Frac1{\sp\lambda}
\cdot \Frac{\caluhP1}{\up{\rho_1}}\!\! \sum_{\scriptstyle{\hat\psi\oei}_1\in\calu^{\prime*_{}}_1 \atop
{\hat\psi\oei}_1\succ\Check\psi_1}
\Frac{\caluhP2}{\up{\rho_2}}\!\! \sum_{\scriptstyle{\hat\psi\oei}_2\in\calu^{\prime*_{}}_2 \atop
{\hat\psi\oei}_2\succ\Check\psi_2}
\\{}\\[-.8em]
&\quad
\displaystyle\sum_{\scriptstyle \J'_1\in\calu'_1,\,\J'_2\in\calu'_2,\,\J'_3\in\calu'_\sigma
\atop \scriptstyle \J'_1=\J'_2\J'_3}
{\hat\psi\oei}_1(\J'_1){\hat\psi\oei}_2(\J'_2)^*{\hat\psi\oei}_\sigma(\J'_3)^*\,
(S^{\J'_1}_{{\bar\lambda},{\rhob_1}})^*_{}
S^{\J'_2}_{{\bar\lambda},{\rhob_2}}S^{\J'_3}_{{\bar\lambda},{\rp\sigma}} \,
[\bar S_{{\bar\lambda},{\rp\vac}}]^{-1}_{}
\,. \end{array} \Labl8s
Here $\Check\psi_i$ denotes the character
\begin{equation} \Check\psi_i\,{:=}\,{\hat\psi}_i|_{\caluhp i}^{} \end{equation}
of $\caluhp i$.
The ${\bar\lambda}$-summation in formula \Erf8s is still over all $\Gs'$-orbits that
are even ${\cal G}$-allowed. We now rewrite it such that we sum over all
$\Gs'$-orbits that are just $\Gs'$-allowed; we first convert the summation
to a sum over {\em all\/} orbits by inserting the projector \erf P, and then
restrict again to $\Gs'$-allowed orbits, which means that
the factor ${\rm e}^{2\pi{\rm i} Q_{\rm J}(\lambda)}$ in \erf P is equal to 1 for ${\rm J}\,{\in}\,\Gs'$
and constant on the cosets of ${\cal G}$ with respect to $\Gs'$. This amounts to using
the projector
\begin{equation} \Frac{|\Gs'|}{|{\cal G}|}\sum_{[\J]'\in{\cal G}/\Gs'} {\rm e}^{2\pi{\rm i} Q_{\rm J}(\lambda)}
\,. \end{equation}
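The projector property of this expression can be seen explicitly: on
$\Gs'$-allowed orbits the phase ${\rm e}^{2\pi{\rm i} Q_{\rm J}(\lambda)}$ depends only
on the class $[\J]'$ and, by additivity of the monodromy charges, furnishes a
character of the quotient group ${\cal G}/\Gs'$, so that
\begin{equation} \Frac{|\Gs'|}{|{\cal G}|}\sum_{[\J]'\in{\cal G}/\Gs'} {\rm e}^{2\pi{\rm i} Q_{\rm J}(\lambda)}
= \left\{ \begin{array}{ll} 1 & {\rm if}\; Q_{\rm J}(\lambda)\eq0 \;{\rm for\; all}\;
{\rm J}\,{\in}\,{\cal G} \,, \\[.3em] 0 & {\rm otherwise} \,. \end{array} \right. \end{equation}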
Afterwards we get rid of the phase factor ${\rm e}^{2\pi{\rm i} Q_{\rm J}(\lambda)}$ by
exploiting the simple current symmetry \Erf QF of the $S^{\rm J}$-matrices;
this yields
\begin{equation} \begin{array}{ll}
{\rm A}_{{[\rhob_1,\psu_1]}\,{[\rhob_2,\psu_2]}}^{{[\sigmab,\psu_\sigma]\oei}} \!\!\!
&= \Frac{\sqrt{\sp{\rho_1}\up{\rho_1}\sp{\rho_2}\up{\rho_2}}
\,\caluhP1\,\caluhP2}{\sqrt{\s{\rho_1}\u{\rho_1}\s{\rho_2}
\u{\rho_2}}\,\up{\rho_1}\up{\rho_2}}
\displaystyle\sum_{\scriptstyle{\hat\psi\oei}_1\in\calu^{\prime*_{}}_1 \atop{\hat\psi\oei}_1\succ\Check\psi_1}
\sum_{\scriptstyle{\hat\psi\oei}_2\in\calu^{\prime*_{}}_2 \atop{\hat\psi\oei}_2\succ\Check\psi_2} \sum_{{\rm J}\in{\cal G}/\Gs'}
\Ne{[\rhob_2,\psu_2]\oei}{[\J]'\star{[\sigmab,\psu_\sigma]\oei}}{\ \ {[\rhob_1,\psu_1]\oei}}
\,, \end{array} \Labl8r
where $[\J]'$ denotes the surviving simple current of the $\Gs'$-extension that
comes from the simple current ${\rm J}$ of the ${\bar\cala}$-theory, and where
\begin{equation} \begin{array}{ll}
\Ne{[\rhob_2,\psu_2]\oei}{[\sigmab,\psu_\sigma]\oei}{[\rhob_1,\psu_1]\oei} \!\!\!
&= \Frac{|\Gs'|^2}{\sqrt{\sp{\rho_1}\up{\rho_1}\sp{\rho_2}\up{\rho_2}
\sp\sigma\up\sigma}} \displaystyle\sumBop\lambda \Frac1{\sp\lambda}
\\{}\\[-1.0em]
&\quad \displaystyle\sum_{\scriptstyle \J'_1\in\calu'_1,\,\J'_2\in\calu'_2,\,\J'_\sigma\in
\calu'_\sigma \atop \J'_1=\J'_2\J'_\sigma}
{\hat\psi\oei}_1(\J'_1){\hat\psi\oei}_2(\J'_2)^*{\hat\psi\oei}_\sigma(\J'_\sigma)^*\,
(S^{\J'_1}_{{\bar\lambda},{\rhob_1}})^*_{}
S^{\J'_2}_{{\bar\lambda},{\rhob_2}}S^{\J'_\sigma}_{{\bar\lambda},{\rp\sigma}} \,
/\, \bar S_{{\bar\lambda},{\rp\vac}}
\,. \end{array} \Labl7r
The numbers \Erf7r are precisely the fusion coefficients of the $\Gs'$-extension,
as can be seen by
inserting (the analogue for $S\oei$ of) \erf S into the Verlinde formula.
We thus have succeeded in writing the annulus coefficients as a linear
combination of fusion coefficients of the $\Gs'$-extension. Still we would like
to manipulate our result further. To this end we observe that in \Erf8r we
are free to let $[\J]'$ act on whichever label of the $\Gs'$-fusion coefficients
is most convenient. In particular, for suitable $[\J]'$ the action will
then be trivial. To determine these currents, consider first the requirement
\begin{equation} [\J]'\star {[\sigmab,\psu_\sigma]\oei} \equiv [{\rm J}{\rp\sigma},{}_{\J}{\hat\psi\oei}_\sigma]'
\,\stackrel!=\, {[\sigmab,\psu_\sigma]\oei} \,. \end{equation}
This implies, first, that we need ${\rm J}{\rp\sigma}\,{=}\,\J'{\rp\sigma}$ for some $\J'\,{\in}\,\Gs'$,
which is solved by ${\rm J}\,{\in}\,{\cal S}_\sigma\,{\cdot}\,\Gs'$. In addition we then need
${}_{\J}{\hat\psi\oei}_\sigma\,{=}\,{}_{{\J}'}{\hat\psi\oei}_\sigma$, which is equivalent to
\begin{equation} F_\sigma({\rm J}(\J')^{-1},{\rm J}_3') = 1\quad{\rm for\; all}\;
{\rm J}_3'\,{\in}\,\calu'_\sigma \,; \Labl8u
because of $\calu'_\sigma\subseteq{\cal S}_\sigma$, for the latter equality it is
sufficient (though not necessary, in general)\,%
\futnote{It is of course also sufficient that ${\rm J}$ is in $\cals'_\sigma$, which
in general is not a subgroup of ${\cal U}_\sigma$. However, we have $\cals'_\sigma
\subseteq\Gs'$, and hence because of the explicit appearance of $\Gs'$
on the right hand side\ of \Erf Hd this is in fact not relevant.}
that
\begin{equation} {\rm J}\in{\cal U}_\sigma\,{\cdot}\,\Gs' \,. \end{equation}
Similar arguments apply to $\rho_1$ or
$\rho_2$, but now we can also take into account the summations over the
${\hat\psi\oei}_i$ which satisfy ${\hat\psi\oei}_i\succ\Check\psi_i$; accordingly, while the first
part of the argument is identical, leading to the requirement that
\begin{equation} {\rm J}\in{\cal S}_i\,{\cdot}\,\Gs' \,, \end{equation}
in the second part the equality between characters only needs to hold for the
restriction of the $\calu'_i$-characters to $\caluhp i$, so that the analogue
of \Erf8u gets relaxed to
\begin{equation} F_{\rho_i}({\rm J}(\J')^{-1},{\rm J}_i') = 1\quad{\rm for\; all}\;
{\rm J}_i'\,{\in}\,\caluhp i \,, \end{equation}
which in turn is satisfied for every ${\rm J}\,{\in}\,{\cal S}_i\,{\cdot}\,\Gs'$.
Thus we conclude that whenever ${\rm J}$ is in the group
\begin{equation} \Gs'' \equiv \Gs''_{\rho_1\rho_2\sigma} := {\cal S}_{\rho_1}\cdot
{\cal S}_{\rho_2}\cdot {\cal U}_\sigma \cdot \Gs'_{\rho_1\rho_2} \,, \Labl Hd
then $[\J]'$ acts trivially. It follows that we can rewrite \Erf8r as
\begin{equation} {\rm A}_{{[\rhob_1,\psu_1]}\,{[\rhob_2,\psu_2]}}^{{[\sigmab,\psu_\sigma]\oei}}
= N \sum_{\scriptstyle{\hat\psi\oei}_1\in\calu^{\prime*_{}}_1 \atop{\hat\psi\oei}_1\succ\Check\psi_1}
\sum_{\scriptstyle{\hat\psi\oei}_2\in\calu^{\prime*_{}}_2 \atop{\hat\psi\oei}_2\succ\Check\psi_2}
\sum_{{\rm J}\in{\cal G}/\Gs''} \Ne{{[\rhob_2,\psu_2]\oei}}{{\rm J}{[\sigmab,\psu_\sigma]\oei}}{{[\rhob_1,\psu_1]\oei}} \Labl9r
with
\begin{equation} N := \Frac{|\Gs''|}{|\Gs'|}\,
\Frac{\sqrt{\sp{\rho_1}\up{\rho_1}\sp{\rho_2}\up{\rho_2}}}
{\sqrt{\s{\rho_1}^{}\u{\rho_1}^{}\s{\rho_2}^{}\u{\rho_2}^{}}}\,
\Frac{\caluhP1\,\caluhP2}{\up{\rho_1}\up{\rho_2}} \,. \labl{dud}
Note that both the fusion coefficients and the prefactor $N$ are
manifestly non-negative, and hence the result \Erf9r shows that
the annulus coefficients are {\em non-negative\/}. For the interpretation of
the annulus amplitude as a partition function, they must even be non-negative
{\em integers\/}. Establishing this stronger property will require some more
work. As the fusion coefficients are manifestly integral, we only have to show
integrality for the prefactor $N$.
As a preparation we rewrite this number as a product
\begin{equation} N = N'' \cdot \dudu{\rho_1} \cdot \dudu{\rho_2} \end{equation}
of three factors
\begin{equation} N'' := \Frac{|\Gs''|\, \sp{\rho_1}\sp{\rho_2}}
{|\Gs'|\,\s{\rho_1}\s{\rho_2}} \labl{dudo}
and $\dudu{\rho_i}$, where
\begin{equation} \dudu\rho :=
\Frac{\sqrt{\s\rho}\;\caluhP\rho} {\sqrt{\u\rho^{}\,\sp\rho\up\rho}}
= \Frac{d_\rho}{\dP\rho}\, \Frac{\caluhP\rho}{\up\rho} \,. \labl{dudu}
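The second equality in \erf{dudu} amounts to the relation
$d_\rho/\dP\rho = \sqrt{\s\rho\up\rho}\,/\sqrt{\u\rho\sp\rho}$, which holds when
the quantum dimension factors are expressed through the orders of the stabilizer
groups as $d_\rho\,{=}\,\sqrt{\s\rho/\u\rho}$ and
$\dP\rho\,{=}\,\sqrt{\sp\rho/\up\rho}$; indeed, then
\begin{equation} \Frac{d_\rho}{\dP\rho}\,\Frac{\caluhP\rho}{\up\rho}
= \Frac{\sqrt{\s\rho\up\rho}}{\sqrt{\u\rho\sp\rho}}\,\Frac{\caluhP\rho}{\up\rho}
= \Frac{\sqrt{\s\rho}\;\caluhP\rho}{\sqrt{\u\rho^{}\,\sp\rho\up\rho}} \,. \end{equation}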
As we will see in the next subsection, actually each of the three factors
is already integral individually; furthermore, those integers possess a natural
representation\ theoretic interpretation.
\subsection{Integrality}\label{s.64}
We first show the integrality of $N''$ \erf{dudo}. Consider the map
$p{:}\ {\cal S}_{\rho_1}{\times}{\cal S}_{\rho_2}{\times}(\caluhp\sigma)\,{\to}\,\Gs''$
that is defined by
\begin{equation} p:\quad ({\rm J}_1,{\rm J}_2,{\rm J}_3) \,\mapsto\, {\rm J}:={\rm J}_1^{-1}{\rm J}_2{\rm J}_3 \,, \Labl1h
which of course we can also interpret as a map to the subgroup
\begin{equation} {\cal I}:=p({\cal S}_{\rho_1}\,{\times}\,{\cal S}_{\rho_2}\,{\times}\,\caluhp\sigma)
\subseteq\Gs'' \end{equation}
on which \Erf1h is a surjection. We would like to determine when the image ${\rm J}$
is already in $\Gs'\,{\subseteq}\,{\cal I}$. Let us look at the monodromy charges
for ${\rm J}$. Using the fact that the monodromy charge of a fixed point vanishes
(see \Erf0Q) and using the gradation property \Erf AQ of the annulus
coefficients,
we conclude that
\begin{equation} \begin{array}{ll}
Q_{{\rm J}_1}({\rho_1}) = 0\,,\quad & Q_{{\rm J}_1}({\rho_2})=-Q_{{\rm J}_1}(\sigma)\,, \\[.5em]
Q_{{\rm J}_2}({\rho_2}) = 0\,, \ & Q_{{\rm J}_2}({\rho_1}) = Q_{{\rm J}_2}(\sigma) \,, \\[.5em]
Q_{{\rm J}_3}(\sigma)= 0\,, \ & Q_{{\rm J}_3}({\rho_1}) = 0 = Q_{{\rm J}_3}({\rho_2}) \,.
\end{array} \end{equation}
Additivity of monodromy charges then implies
\begin{equation} Q_{\rm J}({\rho_1}) = Q_{{\rm J}_2}({\rho_1}) \,, \qquad
Q_{\rm J}({\rho_2}) = -Q_{{\rm J}_1}({\rho_2}) \,, \qquad
Q_{\rm J}(\sigma)= Q_{{\rm J}_1}({\rho_2}) + Q_{{\rm J}_2}({\rho_1}) \,, \end{equation}
which tells us that in order to have ${\rm J}\,{\in}\,\Gs'$, i.e.\
$Q_{\rm J}({\rho_1})\eq0\,{=}\, Q_{\rm J}({\rho_2})$, it is necessary and sufficient that
$Q_{{\rm J}_1}({\rho_2})\eq0\,{=}\, Q_{{\rm J}_2}({\rho_1})$, which in turn is equivalent to
${\rm J}_1,{\rm J}_2\,{\in}\,\Gs'$. We conclude that the kernel of the map
$({\rm J}_1,{\rm J}_2,{\rm J}_3) \,{\mapsto}\, [{\rm J}_1^{-1}{\rm J}_2{\rm J}_3] \,{\in}\,{\cal I}/\Gs'$
is the subgroup $\cals'_{\rho_1}{\times}\cals'_{\rho_2}{\times}(\caluhp\sigma)$
of ${\cal S}_{\rho_1}{\times}{\cal S}_{\rho_2}{\times}(\caluhp\sigma)$.
By the homomorphism theorem this in turn implies that
\begin{equation} |{\cal I}|\, \sp{\rho_1}\sp{\rho_2} = |\Gs'|\,\s{\rho_1}^{}\s{\rho_2}^{} \,.
\end{equation}
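In more detail: since $\Gs'\,{\subseteq}\,{\cal I}$, the map
$({\rm J}_1,{\rm J}_2,{\rm J}_3)\,{\mapsto}\,[{\rm J}_1^{-1}{\rm J}_2{\rm J}_3]$ is a surjection onto
${\cal I}/\Gs'$, and comparing the orders of its domain and its kernel gives
\begin{equation} \Frac{|{\cal I}|}{|\Gs'|}
= \Frac{\s{\rho_1}\s{\rho_2}\,\caluhP\sigma}{\sp{\rho_1}\sp{\rho_2}\,\caluhP\sigma}
= \Frac{\s{\rho_1}^{}\s{\rho_2}^{}}{\sp{\rho_1}\sp{\rho_2}} \,. \end{equation}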
Moreover, ${\cal I}$ is a subgroup (not just a subset) of $\Gs''$, so
$|\Gs''|/|{\cal I}|$ is integral, and hence also
\begin{equation} N'' \equiv \Frac{|\Gs''|\, \sp{\rho_1}\sp{\rho_2}}
{|\Gs'|\,\s{\rho_1}\s{\rho_2}} = [\Gs''\,{:}\,{\cal I}] \,\in\, {\dl Z}_{>0} \,.
\labl{dudo'}
This proves the integrality of the number $N''$; it also provides us with a
simple reason for the integrality property, namely that $N''$ is the index of
the subgroup ${\cal I}\,{=}\, p({\cal S}_{\rho_1}{\times}{\cal S}_{\rho_2}{\times}\,
\caluhp\sigma)$ in the subgroup $\Gs''$ of ${\cal G}$.
Note that for the case where all untwisted stabilizers are equal to the full
stabilizers (which immediately implies $\dudu\rho\eq1$), this already settles
the integrality problem. To establish integrality of $\dudu\rho$ as defined in
\erf{dudu} in the general case, we use information about the representation\ theory
of twisted group algebras (see appendix \ref{s.b} for an
introduction to twisted group algebras). Concretely, we just need to observe
that the twisted group algebra ${\mathbb C}_{\F'}\cals'_\rho$ is a semisimple
subalgebra of ${\mathbb C}_{\cal F}{\cal S}_\rho$, where ${\cal F}$ is the two-cocycle
(determined uniquely up to a coboundary) whose commutator cocycle is
$F_\rho|_{{\cal S}_\rho\times{\cal S}_\rho}$, while ${\F'}$ is the analogous two-cocycle
whose commutator cocycle is $F_\rho|_{\cals'_\rho\times\cals'_\rho}$.
The results \Erf2c and \Erf4c about the decomposition of
${\mathbb C}_{\cal F}{\cal S}_\rho$-representation s into irreducible
${\mathbb C}_{\F'}\cals'_\rho$-representation s then tell us that the
number $\dudu\rho$ has the natural interpretation as the multiplicity ${\beta}$
\Erf4c that occurs in those branching rules, and hence in particular that
\begin{equation} \dudu\rho = {\beta}_{}^{(\cals'_\rho\subseteq{\cal S}_\rho)}\in {\dl Z}_{\ge0} \,,
\labl{dudu'}
as announced.
\subsection{Further consistency checks}
We would like to stress once more that the natural annulus coefficients are
the quantities ${\rm A}_{{[\rhob_1,\psu_1]}\,{[\rhob_2,\psu_2]}}^{[\sigmab,\psu_\sigma]\oei}$ that were defined in subsection
\ref{s.61}, for which the upper and lower indices are, in general, of different
type. (In fact, they differ in a rather subtle way, as even the very meaning of
the upper index depends, via the definition of the group
$\Gs'\,{\equiv}\,\Gs'_{\rho_1\rho_2}$, on the value of the two lower indices.)
In particular, it is the integrality of these numbers that guarantees
that the coefficients in an expansion of $A_{{[\rhob_1,\psu_1]}\,{[\rhob_2,\psu_2]}}(t)$ in powers of
$q\,{=}\,{\rm e}^{-\pi t}$ are integral and therefore allows for the
interpretation of the annulus amplitude as a partition function.
On the other hand, for certain purposes it is also desirable to have at one's
disposal some closely related numbers $\A\raisebox{.48em}{$\scriptstyle\!\!\circ$}$ for which all three labels are on
an equal footing, which means that the upper index should be of the same
form, i.e.\ ${[\sigmab,\psu_\sigma]}$, as the labels for the boundary conditions. In this subsection we
show that numbers of the latter form can indeed be introduced, and that they
satisfy two interesting systems of relations, see \Erf MA and \Erf AA below.
(Further inspection shows that in many cases the $\A\raisebox{.48em}{$\scriptstyle\!\!\circ$}$-coefficients are
just multiples of the annulus coefficients, although the constants of
proportionality are generically non-integral.)
Let us start by inspecting the formula \Erf6r for the annulus coefficients. It
may be noticed that to derive that result, no other property of $\Gs'$ was used
than that it is a subgroup of ${\cal G}$. Accordingly, analogous expressions are
obtained when any other subgroup of ${\cal G}$ is used. In particular, let us
introduce the quantities $\Ao{[\rhob_1,\psu_1]}{[\rhob_2,\psu_2]}{[\sigmab,\psu_\sigma]^\oo_{}}$ as the coefficients of the
annulus amplitude in the expansion
\begin{equation} {\rm A}_{{[\rhob_1,\psu_1]}\,{[\rhob_2,\psu_2]}}(t) = \sum_{[\sigmab]^\oo_{}} \Ao{[\rhob_1,\psu_1]}{[\rhob_2,\psu_2]}{[\sigmab,\psu_\sigma]^\oo_{}}
\, {\cal X}^\circ_{[\sigmab,\psu_\sigma]^\oo_{}}(\Frac{{\rm i} t}2) \Labl5t
with respect to the characters ${\cal X}^\circ$ of the extension of the ${\bar\cala}$-theory by the simple
currents in the group ${\cal G}^\circ\,{\subseteq}\,{\cal G}$ that was defined in \Erf Go.
We then have
\begin{equation} \Ao{[\rhob_1,\psu_1]}{[\rhob_2,\psu_2]}{[\sigmab,\psu_\sigma]^\oo_{}}
= \sumBoo\lambda \Frac{|{\cal G}^\circ|}{|{\cal G}|}\,\Frac{\u\lambda}{\sO\lambda}
\sum_{{\hat\psi^\oo}_\lambda\in\calu^{\circ*}_\lambda} \sumpsipsuo\lambda
\Tilde S_{({\bar\lambda},\psi_\lambda),{[\rhob_1,\psu_1]}}^* \Tilde S_{({\bar\lambda},\psi_\lambda),{[\rhob_2,\psu_2]}}\,
S^\oo_{{[\lambdab,{\hat\psi}_\lambda]^\circ_{}},{[\sigmab,\psu_\sigma]^\oo_{}}} \,/\, S^\oo_{{[\lambdab,{\hat\psi}_\lambda]^\circ_{}},{\vac^\oo_{}}} \,, \Labl6s
where
\begin{equation} S^\oo_{{[\lambdab,{\hat\psi}_\lambda]^\circ_{}},{[\mub,{\hat\psi}_\mu]^\circ_{}}}
:= \Frac{|{\cal G}^\circ|}{\sqrt{\sO\lambda\uo\lambda\sO\mu\uo\mu}}
\sum_{{\rm J}\in\calu^\circ_\lambda\cap\calu^\circ_\mu} {\hat\psi^\oo}_\lambda({\rm J})\,
{\hat\psi^\oo}_\mu({\rm J})^*\, S^{\rm J}_{{\bar\lambda},{\bar\mu}} \labl So
is the modular S-matrix of the ${\cal G}^\circ$-extension.
While by construction the upper index of the numbers $\A\raisebox{.48em}{$\scriptstyle\!\!\circ$}$ is a priori again
of a type different from the two lower ones, we will now show that actually
their values only depend on full ${\cal G}$-orbits ${[\sigmab,\phu_\sigma]}$.
To this end we compare the expressions \Erf So for $S^\oo$ and \Erf tS for
$\Tilde S$ and take into account the specific way in which $S^\oo$ appears in \Erf6s.
Our aim is then to show that up to numerical factors, we are allowed to
replace $S^\oo$ by $\Tilde S$. To this end we observe that apart from the
different prefactors, we have to deal with the presence
of different group characters and with the different summation range for the
simple currents. As for the characters, we simply need to implement their
restriction properties. Concerning the simple current summation, the following
reasoning shows that in the expression \Erf6s only terms with ${\rm J}$ in the
intersection of the two groups $\calu^\circ_\lambda{\cap}\calu^\circ_\sigma$ and
${\cal S}_\lambda{\cap}{\cal U}_\sigma$ give non-vanishing contributions.
\nxt
For ${\rm J}\,{\in}\,{\cal S}_\lambda{\setminus}\,\calu^\circ_\lambda$, we distinguish between
two cases. First, when ${\rm J}\,{\in}\,{\cal S}_\lambda{\setminus}\,\cals^\circ_\lambda$, we
deduce from the definition \Erf Go of ${\cal G}^\circ$ that $Q_{\rm J}({\rho_1})\,{\ne}\, Q_{\rm J}({\rho_2})$,
so that by the gradation property of the numbers \Erf6s (which follows by a
consideration analogous to that for the annulus coefficients) we can assume that
$Q_{\rm J}(\sigma)\nE0$; but then ${\rp\sigma}$ cannot be a fixed point of ${\rm J}$, and
hence $S^{\rm J}_{{\bar\lambda},{\rp\sigma}}\eq0$, so that the corresponding contribution to
the annulus tensor vanishes.
Second, when ${\rm J}\,{\in}\,\cals^\circ_\lambda{\setminus}\,\calu^\circ_\lambda$, then there exists
a ${\rm K}\,{\in}\,\cals^\circ_\lambda$ such that $F_\lambda({\rm J},{\rm K})\nE1$, and hence from
\begin{equation} S^{\rm J}_{{\bar\lambda},{\rp\sigma}} = S^{\rm J}_{{\rm K}{\bar\lambda},{\rp\sigma}}
= {\rm e}^{2\pi{\rm i} Q_{\rm K}(\sigma)} F_\lambda({\rm J},{\rm K})\,S^{\rm J}_{{\bar\lambda},{\rp\sigma}}
= F_\lambda({\rm J},{\rm K})\,S^{\rm J}_{{\bar\lambda},{\rp\sigma}} \end{equation}
we can again conclude that $S^{\rm J}_{{\bar\lambda},{\rp\sigma}}$ vanishes.
Here in the last equality we have also used the fact that ${\rm K}\,{\in}\,{\cal G}^\circ$, so that
we can again invoke the grading property so as to set $Q_{\rm K}(\sigma)\eq0$.
\nxt
For ${\rm J}\,{\in}\,{\cal U}_\sigma{\setminus}\,\calu^\circ_\sigma$ the same reasoning as in
the first part of the previous case applies.
\nxt
For ${\rm J}\,{\in}\,\calu^\circ_\sigma{\setminus}\,{\cal U}_\sigma$, there exists
a ${\rm K}\,{\in}\,{\cal S}_\sigma{\setminus}\,\cals^\circ_\sigma$ with $F_\sigma({\rm J},{\rm K})\nE1$,
so that now
\begin{equation} S^{\rm J}_{{\bar\lambda},{\rp\sigma}} = S^{\rm J}_{{\bar\lambda},{\rm K}{\rp\sigma}}
= {\rm e}^{2\pi{\rm i} Q_{\rm K}(\lambda)} F_\sigma({\rm J},{\rm K})\,S^{\rm J}_{{\bar\lambda},{\rp\sigma}}
= F_\sigma({\rm J},{\rm K})\,S^{\rm J}_{{\bar\lambda},{\rp\sigma}} \end{equation}
tells us that $S^{\rm J}_{{\bar\lambda},{\rp\sigma}}$ must be zero.
Furthermore, we can combine the summations over ${\hat\psi^\oo}_\lambda$ and
$\psi_\lambda{\succ}{\hat\psi^\oo}_\lambda$ to a summation over all $\psi_\lambda$,
while the ${[\lambdab]^\circ_{}}$-summation can be converted to a sum over all ${\bar\lambda}$
times a factor of $\sO\lambda/|{\cal G}^\circ|$. It follows that
\begin{equation} \Ao{[\rhob_1,\psu_1]}{[\rhob_2,\psu_2]}{[\sigmab,\psu_\sigma]^\oo_{}}
= \ao\sigma \!
\sumbo\lambda\sum_{\psi_\lambda\in{\cal S}_\lambda^*} \Frac{\u\lambda}{|{\cal G}|}\,
\Frac{ \Tilde S_{({\bar\lambda},\psi_\lambda),{[\rhob_1,\psu_1]}}^*
\Tilde S^{}_{({\bar\lambda},\psi_\lambda),{[\rhob_2,\psu_2]}}\,
\Tilde S^{}_{({\bar\lambda},\psi_\lambda),{[\sigmab,\psu_\sigma]}} }
{\Tilde S_{({\bar\lambda},\psi_\lambda),{[\vacb]}}} \,, \Labl7t
with
\begin{equation} \ao\sigma \equiv \aor{\rho_1}{\rho_2}\sigma
:= \Frac{\sqrt{\s\sigma\u\sigma}}{\sqrt{\sO\sigma\uo\sigma}} \,. \Labl ao
Note that, as indicated by the notation $\aor{\rho_1}{\rho_2}\sigma$, this prefactor
not only depends on the upper label $\sigma$ of the coefficient \Erf7t, but
implicitly on the values of the two lower labels ${\rho_1}$ and ${\rho_2}$ as
well, namely through the relevant subgroup ${\cal G}^\circ\,{\equiv}\,{\cal G}^\circ_{\rho_1\rho_2}$
of ${\cal G}$. What is more interesting, however, is that
the prefactor is constant on the ${\cal G}$-orbit of ${\rp\sigma}$; this implies
that we can replace the upper label according to
\begin{equation} \Ao{[\rhob_1,\psu_1]}{[\rhob_2,\psu_2]}{[\sigmab,\psu_\sigma]^\oo_{}} \equiv \Ao{[\rhob_1,\psu_1]}{[\rhob_2,\psu_2]}{[\sigmab,\phu_\sigma]} \,, \end{equation}
where ${\hat\varphi}_\sigma$ is any ${\cal U}_\sigma$-character satisfying
\begin{equation} {\hat\varphi}_\sigma|^{}_{{\cal U}_\sigma\cap\calu^\circ_\sigma}
= {\hat\psi}_\sigma|^{}_{{\cal U}_\sigma\cap\calu^\circ_\sigma} \,. \end{equation}
Thus, as announced, we are dealing with quantities where all three labels are
on the same footing. By comparison with formula \Erf4n for the structure
constants of \mbox{$\calc(\calap)$}, the coefficients $\A\raisebox{.48em}{$\scriptstyle\!\!\circ$}$ are, up to the prefactor \Erf ao,
just the `opposite structure constants', i.e.\ those obtained when summing
over the other index of the non-symmetric $\Tilde S$-matrix.
To see how the numbers $\Ao{[\rhob_1,\psu_1]}{[\rhob_2,\psu_2]}{[\sigmab,\psu_\sigma]}$ are related to the
annulus coefficients ${\rm A}_{{[\rhob_1,\psu_1]}\,{[\rhob_2,\psu_2]}}^{[\sigmab,\psu_\sigma]\oei}$, we recall the definition
\erf X of extended characters. Consider first the situation where the relevant
untwisted stabilizer groups are related as $\calu'_\sigma\,{\subseteq}\,\calu^\circ
_\sigma$; then we can immediately conclude that
\begin{equation} \begin{array}{ll} {\cal X}^\circ_{[\sigmab,\psu_\sigma]^\oo_{}} \!\!
&= \Frac1{\sqrt{\sO\sigma\uo\sigma}}
\displaystyle\sum_{{\rm J}\in{\cal G}^\circ} \bar{\raisebox{.15em}{$\chi$}}_{({\rm J}{\rp\sigma},{\hat\psi}_\sigma)}
= \Frac{\sqrt{\sp\sigma\up\sigma}}{\sqrt{\sO\sigma\uo\sigma}}
\displaystyle\sum_{[\JK]'\in{\cal G}^\circ/\Gs'} {\cal X}'_{[\JK]'\star[{\rp\sigma},{\hat\psi}_\sigma|^{}_
{\calu'_\sigma}]'}
\,, \end{array} \end{equation}
where ${\hat\psi}_\sigma|^{}_{\calu'_\sigma}$ is the $\calu'_\sigma$-character that
is obtained by restricting the $\calu^\circ_\sigma$-character ${\hat\psi}_\sigma$ to the
subgroup $\calu'_\sigma$. This implies that the corresponding coefficients of
the annulus amplitude are linearly related as well,
\begin{equation} \Ao{[\rhob_1,\psu_1]}{[\rhob_2,\psu_2]}{[\sigmab,\phu_\sigma]}
= \Frac{\sqrt{\sO\sigma\uo\sigma}}{\sqrt{\sp\sigma\up\sigma}}
\cdot {\rm A}_{{[\rhob_1,\psu_1]}\,{[\rhob_2,\psu_2]}}^{[{\rp\sigma},{\hat\varphi}_\sigma|^{}_{\calu'_\sigma}]'}
\,. \end{equation}
In contrast, in the case where $\calu'_\sigma$ is not a subgroup of
$\calu^\circ_\sigma$ (which, in spite of $\cals'_\sigma\,{\subseteq}\,\cals^\circ_\sigma$,
can happen for the same reasons as in the case of $\calu'_\sigma$ versus
${\cal U}_\sigma$, see the remarks
after formula \Erf90), more complicated linear combinations
arise that mix those characters of the $\Gs'$- and of the ${\cal G}^\circ$-extensions for
which the group characters ${\hat\psi^\oo}_\sigma\,{\in}\,\calu^{\circ*}_\sigma$ and
${\hat\psi\oei}_\sigma\,{\in}\,\calu^{\prime*_{}}_\sigma$ have common restrictions to the intersection
$\calu^\circ_\sigma\,{\cap}\,\calu'_\sigma$. As the precise form of this relation
does not seem to play any particular role, we refrain from writing it out here.
Having arrived at sensible coefficients $\A\raisebox{.48em}{$\scriptstyle\!\!\circ$}$ with three labels of equal type,
we are now in a position to perform a few additional consistency checks.
We first compute the product of two of these quantities, regarding them as
matrices in their lower indices. By direct computation we find
\begin{equation} \begin{array}{l}
\displaystyle\sum_{[\rhob]}\sum_{{\hat\psi}_\rho\in{\cal U}_\rho^*}\!
\Ao{[\rhob_1,\psu_1]}{[\rhob,\psu_\rho]}{[\sigmab_1,\phu_1]}\, \aOr{\rho_1}\rho{\sigma_1}^{-1}
\aOr\rho{\rho_2}{\sigma_2}^{-1}\, \Ao{[\rhob,\psu_\rho]}{[\rhob_2,\psu_2]}{[\sigmab_2,\phu_2]}
\\{}\\[-.8em] \hsp{2.7}
= \displaystyle\sumbo\lambda\!\sum_{\psi_\lambda\in{\cal S}_\lambda^*}
\Frac{\u\lambda}{|{\cal G}|}\, [\Tilde S_{({\bar\lambda},\psi_\lambda),\Omega}]^{-2}_{}\,
\Tilde S^{}_{({\bar\lambda},\psi_\lambda),{[\sigmab_1,\phu_1]}}
\Tilde S^{}_{({\bar\lambda},\psi_\lambda),{[\sigmab_2,\phu_2]}}\,
\Tilde S^*_ {({\bar\lambda},\psi_\lambda),{[\rhob_1,\psu_1]}}
\Tilde S^{}_{({\bar\lambda},\psi_\lambda),{[\rhob_2,\psu_2]}}
\\{}\\[-.8em] \hsp{2.7}
= \displaystyle\sum_{[\sigmab_3,\phu_3]}
\sumbo\lambda\sum_{\psi_\lambda\in{\cal S}_\lambda^*} \Frac{\u\lambda}{|{\cal G}|}\,
\Tilde S^{}_{({\bar\lambda},\psi_\lambda),{[\sigmab_1,\phu_1]}}
\Tilde S^{}_{({\bar\lambda},\psi_\lambda),{[\sigmab_2,\phu_2]}}
\Tilde S^*_ {({\bar\lambda},\psi_\lambda),{[\sigmab_3,\phu_3]}}
[\Tilde S_{({\bar\lambda},\psi_\lambda),\Omega}]^{-1}_{}
\\{}\\[-1.0em] \hsp{6.26}
\displaystyle\sumbo\mu\sum_{\psi_\mu\in{\cal S}_\mu^*} \Frac{\u\mu}{|{\cal G}|}\,
\Tilde S^{}_{({\bar\mu},\psi_\mu),{[\sigmab_3,\phu_3]}} \Tilde S^*_ {({\bar\mu},\psi_\mu),{[\rhob_1,\psu_1]}}
\Tilde S^{}_{({\bar\mu},\psi_\mu),{[\rhob_2,\psu_2]}} [\Tilde S_{({\bar\mu},\psi_\mu),\Omega}]^{-1}_{}
\,. \end{array} \Labl56
Here we have introduced a weight factor $\aO{\sigma_1}^{-1}\aO{\sigma_2}^{-1}$
into the summation, which correctly accounts for the number of chiral
boundary labels (i.e.\ boundary blocks) that is subsumed in the upper index
$\sigma$; because of the dependence of $\ao\sigma$ on the lower labels,
this weight factor need not be constant. (Of course one could avoid the
presence of a weight factor by simply considering the quantities
$\aO\sigma^{-1}\Ao{[\rhob_1,\psu_1]}{[\rhob,\psu_\rho]}{[\sigmab,\psu_\sigma]}$ instead, but this appears to be
less natural.) {}From formula \Erf56 we can read off that
\begin{equation} \sum_{[\rhob,\psu_\rho]} \Ao{[\rhob_1,\psu_1]}{[\rhob,\psu_\rho]}{[\sigmab_1,\phu_1]}\, \aOr{\rho_1}\rho{\sigma_1}^{-1}
\aOr\rho{\rho_2}{\sigma_2}^{-1}\, \Ao{[\rhob,\psu_\rho]}{[\rhob_2,\psu_2]}{[\sigmab_2,\phu_2]}
= \sum_{[\sigmab_3,\phu_3]} \Mo{[\sigmab_1,\phu_1]}{[\sigmab_2,\phu_2]}{[\sigmab_3,\phu_3]}\, \Ao{[\rhob_1,\psu_1]}{[\rhob_2,\psu_2]}{[\sigmab_3,\phu_3]}
\Labl MA
with
\begin{equation} \Mo{[\sigmab_1,\phu_1]}{[\sigmab_2,\phu_2]}{[\sigmab_3,\phu_3]}
:= \frac{\aor{\rho_1}\rho{\sigma_1}\,\aor\rho{\rho_2}{\sigma_2}}
{\aor{\sigma_1}{\sigma_2}{\sigma_3}\,\aor{\rho_1}{\rho_2}{\sigma_3}}\,
\Ao{[\sigmab_1,\phu_1]^+_{\phantom i}}{[\sigmab_2,\phu_2]}{[\sigmab_3,\phu_3]^+_{\phantom i}} \,. \Labl Mo
Thus the coefficients $\A\raisebox{.48em}{$\scriptstyle\!\!\circ$}$ can be regarded as the basis elements of a
finite-dimensional algebra with structure constants \Erf Mo. The presence of such an
algebraic structure is often interpreted as a `completeness relation' for the
boundary conditions.
It is already apparent from the fact that the structure constants \Erf Mo
are essentially equal to suitable numbers $\A\raisebox{.48em}{$\scriptstyle\!\!\circ$}$ that in addition some kind
of `associativity relation' holds, where one sums over the upper index of
these objects. Indeed, by the same kind of calculation as above one checks
that
\begin{equation} \begin{array}{l} \displaystyle\sum_{[\sigmab]}\sum_{{\hat\psi}_\sigma\in{\cal U}_\sigma^*}
\Ao{[\rhob_1,\psu_1]}{[\rhob_2,\psu_2]}{[\sigmab,\psu_\sigma]}\, \aOr{\rho_1}{\rho_2}\sigma^{-1}
\aOr{\rho_3}{\rho_4}{\sigma^+}^{-1}\, \Ao{[\rhob_3,\psu_3]}{[\rhob_4,\psu_4]}{{[\sigmab,\psu_\sigma]}^+_{}}
\\{}\\[-.98em]\hsp{6.1}
= \displaystyle\sum_{[\sigmab]}\sum_{{\hat\psi}_\sigma\in{\cal U}_\sigma^*}
\Ao{[\rhob_1,\psu_1]}{{[\rhob_3,\psu_3]}^+_{}}{[\sigmab,\psu_\sigma]}\, \aOr{\rho_1}{\rho_3}\sigma^{-1}
\aOr{\rho_2}{\rho_4}{\sigma^+}^{-1}\, \Ao{{[\rhob_2,\psu_2]}^+_{}}{[\rhob_4,\psu_4]}{{[\sigmab,\psu_\sigma]}^+_{}}
\,. \end{array} \Labl AA
Relations of the form \Erf MA and \Erf AA are expected on the basis of
factorization arguments \cite{lewe3,sasT2,prss3}. But a rigorous derivation of
these identities from factorization, in particular for boundary conditions that do not
preserve the full bulk symmetry, still remains to be established.
Moreover, such relations are technically rather difficult to exploit
in non-trivial theories. In our opinion, they do not constitute
an optimal starting point for the classification of boundary conditions.
\newpage
\section{What is Research?}
Research is a logical and systematic search for new and useful
information on a particular topic. It is an investigation of
finding solutions to scientific and social problems through
objective and systematic analysis. It is a search for knowledge,
that is, a discovery of hidden truths. Here knowledge means
information about matters. The information might be collected from
different sources like experience, human beings, books, journals,
nature, etc. Research can lead to new contributions to
existing knowledge. Only through research is it possible to make
progress in a field. Research is done with the help of study,
experiment, observation, analysis, comparison and reasoning.
Research is in fact ubiquitous. For example, we know that
cigarette smoking is injurious to health; heroin is addictive;
cow dung is a useful source of biogas; malaria is caused by the
protozoan parasite Plasmodium; AIDS (Acquired Immunodeficiency
Syndrome) is caused by the virus HIV (Human Immunodeficiency
Virus). How did we come to know all this? We became aware of this
information only through research. More precisely, research seeks
predictions of events and explanations, relationships and theories
for them.
\subsection{\large{\bf{\emph{What are the Objectives of
Research?}}}}
The prime objectives of research are
\renewcommand{\labelenumi}{(\theenumi)}
\renewcommand{\theenumi}{\arabic{enumi}}
\begin{enumerate}
\item
to discover new facts
\item
to verify and test important facts
\item
to analyse an event or process or phenomenon to identify the cause
and effect relationship
\item
to develop new scientific tools, concepts and theories to solve
and understand scientific and nonscientific problems
\item
to find solutions to scientific, nonscientific and social problems
and
\item
to overcome or solve the problems occurring in our every day life.
\end{enumerate}
\subsection{\large{\bf{\emph{What Makes People do Research?}}}}
This is a fundamentally important question. {\emph{No person
would like to do research unless there are some motivating
factors}}. Some of the motivations are the following:
\renewcommand{\labelenumi}{(\theenumi)}
\renewcommand{\theenumi}{\arabic{enumi}}
\begin{enumerate}
\item
to get a research degree (Doctor of Philosophy (Ph.D.)) along with
its benefits like better employment, promotion, increment in
salary, etc.
\item
to get a research degree and then to get a teaching position in a
college or university or become a scientist in a research
institution
\item
to get a research position in countries like U.S.A., Canada,
Germany, England, Japan, Australia, etc. and settle there
\item
to solve the unsolved and challenging problems
\item
to get joy of doing some creative work
\item
to acquire respectability
\item
to get recognition
\item
curiosity to find out the unknown facts of an event
\item
curiosity to find new things
\item
to serve the society by solving social problems.
\end{enumerate}
\noindent Some students undertake research without any aim,
possibly because they cannot think of anything else to
do. Such students can also become good researchers by motivating
themselves toward a respectable goal.
\vskip 10pt
In the words of Prof. P. Balaram [Current Science, 87 (2004) 1319],
a Ph.D. degree is a passport to a research career. The Ph.D. period
often makes or breaks a research scholar's
scientific career.
\subsection{\large{\bf{\emph{Importance of Research}}}}
Research is important both in scientific and nonscientific fields.
In our life new problems, events, phenomena and processes occur
every day. Practically implementable solutions and suggestions
are required for tackling new problems that arise. Scientists
have to undertake research on them and find their causes,
solutions, explanations and applications. Precisely, research
assists us to understand nature and natural phenomena.
\vskip 10pt
Some important avenues for research are:
\renewcommand{\labelenumi}{(\theenumi)}
\renewcommand{\theenumi}{\arabic{enumi}}
\begin{enumerate}
\item
A research problem refers to a difficulty which a researcher or a
scientific community or an industry or a government organization
or a society experiences. It may be a theoretical or a practical
situation. It calls for a thorough understanding and possible
solution.
\item
Research on existing theories and concepts helps us identify
their range and applications.
\item
It is the fountain of knowledge and provides guidelines for
solving problems.
\item
Research provides basis for many government policies. For
example, research on the needs and desires of the people and on
the availability of revenues to meet the needs helps a government
to prepare a budget.
\item
It is important in industry and business for higher gain and
productivity and to improve the quality of products.
\item
Mathematical and logical research on business and industry
helps optimize the processes in them.
\item
It leads to the identification and characterization of new
materials, new living things, new stars, etc.
\item
Only through research can inventions be made; for example, new and
novel phenomena and processes such as superconductivity and
cloning have been discovered only through research.
\item
Social research helps find answers to social problems. It
explains social phenomena and seeks solutions to social problems.
\item
Research leads to a new style of life and makes it delightful and
glorious.
\end{enumerate}
Emphasizing the importance of research Louis Pasteur said ``I
beseech you to take interest in these sacred domains called
laboratories. Ask that there be more and that they be adorned for
these are the temples of the future, wealth and well-being. It is
here that humanity will learn to read progress and individual
harmony in the works of nature, while humanity's own works are all
too often those of barbarism, fanaticism and destruction." (Louis
Pasteur -- article by S.Mahanti, Dream 2047, p.29--34 (May 2003)).
\vskip 10pt
In order to know what it means to do research one may read
scientific autobiographies like Richard Feynman's ``Surely You're
Joking, Mr.\ Feynman!'', Jim Watson's ``The Double Helix'',
``Science as a Way of Life -- A Biography of C.N.R. Rao'' by Mohan
Sundararajan, etc.
\sectionmark{RESEARCH METHODS AND RESEARCH METHODOLOGY}
\section{RESEARCH METHODS AND RESEARCH METHODOLOGY}
\sectionmark{RESEARCH METHODS AND RESEARCH METHODOLOGY}
{\emph{Is there any difference between research methods and
research methodology?}}
\vskip 10pt
{\emph{\bf{Research methods}}} are the various procedures,
schemes, algorithms, etc. used in research. All the methods used
by a researcher during a research study are termed as
{\emph{research methods}}. They are essentially planned,
scientific and value-neutral. They include theoretical procedures,
experimental studies, numerical schemes, statistical approaches,
etc. Research methods help us collect samples, data and find a
solution to a problem. Particularly, scientific research methods
call for explanations based on collected facts, measurements and
observations and not on reasoning alone. They accept only those
explanations which can be verified by experiments.
\vskip 10pt
{\emph{\bf{Research methodology}}} is a systematic way to solve a
problem. It is a science of studying how research is to be
carried out. Essentially, {\emph{the procedures by which
researchers go about their work of describing, explaining and
predicting phenomena are called research methodology.}} It is
also defined as the study of methods by which knowledge is gained.
Its aim is to give the work plan of research.
\subsection{\large{\bf{\emph{Importance of Research Methodology \\
in Research Study}}}}
It is necessary for a researcher to design a methodology for the
problem chosen. One should note that even if the methods
considered in two problems are the same, the methodology may be
different. It is important for the researcher to know not only the
research methods necessary for the research undertaken but also
the methodology. For example, a researcher not only needs to know
how to calculate mean, variance and distribution function for a
set of data, how to find a solution of a physical system described
by a mathematical model, how to determine the roots of algebraic
equations and how to apply a particular method, but also needs to
know (i) which method is suitable for the chosen problem, (ii)
what the order of accuracy of the result of a method is, (iii)
what the efficiency of the method is, and so on. Consideration of
these aspects constitute a research methodology.
\vskip 10pt
To understand the difference between research methods and
methodology let us consider the problem of finding the roots of
the quadratic equation
\begin{equation}
ax^2 + bx + c = 0.
\label{eq1}
\end{equation}
The formulas often used for calculating the roots of
eq.(\ref{eq1}) are
\begin{eqnarray}
x_{+} & = & \frac{-b + \sqrt{b^2-4ac}}{2a} \; ,
\label{eq2} \\
x_{-} & = & \frac{-b - \sqrt{b^2-4ac}}{2a} \; \cdot
\label{eq3}
\end{eqnarray}
These formulas are, however, inaccurate when $ \vert b \vert
\approx \sqrt{b^2-4ac}$. The equivalent formulas are
\begin{eqnarray}
x_{+} & = &\frac{-2c}{b + \sqrt{b^2-4ac}} \;,
\label{eq4} \\
x_{-} & = &\frac{-2c}{b - \sqrt{b^2-4ac}} \;.
\label{eq5}
\end{eqnarray}
When $\vert b \vert \approx \sqrt{b^2-4ac}$ one must proceed with
caution to avoid loss of precision. If $b>0$, then $x_-$ should
be computed with the formula given by eq.(\ref{eq3}) and $x_+$
should be computed with the formula given by eq.(\ref{eq4}),
since eq.(\ref{eq2}) then suffers from cancellation. If
$b<0$ then $x_+$ should be evaluated using eq.(\ref{eq2}) and
$x_-$ should be evaluated using eq.(\ref{eq5}). Here these
formulas constitute the method of finding roots of the equation of
the form given by eq.(\ref{eq1}). If you use the formulas given
by eqs.(\ref{eq4}--\ref{eq5}) instead of the formulas given by
eqs.(\ref{eq2}--\ref{eq3}) (often used and familiar to us) to
compute the roots then you should clearly explain why the formulas
in eqs.(\ref{eq4}--\ref{eq5}) were chosen and why the other
formulas given by eqs.(\ref{eq2}--\ref{eq3}) were not considered.
This is what we mean by a research methodology. That is, research
methodology tells you which method or formula or algorithm has to
be used out of the various existing methods or formulas or
algorithms.
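The methodological point above, namely choosing among mathematically equivalent formulas on numerical grounds, can be illustrated with a short sketch of the standard stable recipe from numerical analysis. Python, the function name and the sample coefficients are illustrative choices, not part of the text:

```python
import math

def stable_quadratic_roots(a, b, c):
    # Roots of a*x^2 + b*x + c = 0, assuming real roots
    # (b^2 - 4ac >= 0) and that b and c are not both zero.
    disc = math.sqrt(b * b - 4.0 * a * c)
    if b >= 0:
        # Here -b - disc is safe, while -b + disc would suffer
        # catastrophic cancellation when |b| ~ sqrt(b^2 - 4ac).
        x_minus = (-b - disc) / (2.0 * a)   # standard formula
        x_plus = -2.0 * c / (b + disc)      # rationalized formula
    else:
        # Here -b + disc is safe, while -b - disc would cancel.
        x_plus = (-b + disc) / (2.0 * a)    # standard formula
        x_minus = -2.0 * c / (b - disc)     # rationalized formula
    return x_plus, x_minus

# For a = 1, b = 10**8, c = 1, naively evaluating (-b + disc)/(2a)
# loses most significant digits of the small root, while the
# stable variant above remains accurate.
```

Selecting one of these equivalent formulas by reasoning about precision, rather than merely knowing the quadratic formula itself, is exactly the kind of decision that belongs to research methodology.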
\vskip 10pt
More precisely, research methods help us get a solution to a
problem. On the other hand, research methodology is concerned
with the explanation of the following:
\renewcommand{\labelenumi}{(\theenumi)}
\renewcommand{\theenumi}{\arabic{enumi}}
\begin{enumerate}
\item
Why is a particular research study undertaken?
\item
How did one formulate a research problem?
\item
What types of data were collected?
\item
What particular method has been used?
\item
Why was a particular technique of analysis of data used?
\end{enumerate}
The study of research methods gives training to apply them to a
problem. The study of research methodology provides us the
necessary training in choosing methods, materials, scientific
tools and training in techniques relevant for the problem chosen.
\vskip 10pt
\hrule
\vskip 5pt
\noindent{\bf{Assignment:}}
\renewcommand{\labelenumi}{(\theenumi)}
\renewcommand{\theenumi}{\arabic{enumi}}
\begin{enumerate}
\item
List out at least $10$ methods which you have learned in your UG
and PG courses and write their purpose or application.
\item
Distinguish between research methods and research techniques.
\item
Distinguish between research methods and research methodology with
an example of your own choice.
\end{enumerate}
\vskip 5pt
\hrule
\section{TYPES OF RESEARCH}
Research is broadly classified into two main classes:
\renewcommand{\labelenumi}{\theenumi.}
\renewcommand{\theenumi}{\arabic{enumi}}
\begin{enumerate}
\item
Fundamental or basic research
\item
Applied research
\end{enumerate}
\subsection{\large{\bf{\emph{Basic Research}}}}
Basic research is an investigation on basic principles and
reasons for occurrence of a particular event or process or
phenomenon. It is also called {\emph{theoretical research}}. Study
or investigation of some natural phenomenon or of pure
science is termed as {\emph{basic research}}. Basic research
sometimes may not lead to immediate use or application. It is
not concerned with solving any practical problems of immediate
interest. But it is original or basic in character. It provides
a systematic and deep insight into a problem and facilitates
extraction of scientific and logical explanation and conclusion on
it. It helps build new frontiers of knowledge. The outcomes of
basic research form the basis for much applied research.
Researchers working on applied research have to make use of the
outcomes of basic research and explore their utility.
\vskip 10pt
Research on improving a theory or a method is also referred to as
fundamental research. For example, suppose a theory is applicable
to a system provided the system satisfies certain specific
conditions. Modifying the theory to apply it to a general
situation is a basic research.
\vskip 10pt
Attempts to find answers to the following questions actually form
basic research. Why are materials like that? What are they? How
does a crystal melt? Why is sound produced when water is heated?
Why do we find it difficult to walk on a seashore? Why do birds
arrange themselves in a `$>$' shape when flying in a group?
\vskip 10pt
Fundamental research leads to a new theory or a new property of
matter or even the existence of a new matter, the knowledge of
which has not been known or reported earlier. For example,
fundamental research on
\renewcommand{\labelenumi}{(\theenumi)}
\renewcommand{\theenumi}{\arabic{enumi}}
\begin{enumerate}
\item
astronomy may lead to identification of new planets or stars in
our galaxy,
\item
elementary particles results in identification of new particles,
\item
complex functions may lead to new patterns or new properties
associated with them,
\item
differential equations results in new types of solutions or new
properties of solutions not known so far.
\item
chemical reactions leads to development of new compounds, new
properties of chemicals, mechanisms of chemical reactions, etc.
\item
medicinal chemistry leads to an understanding of physiological
action of various chemicals and drugs.
\item
structure, contents and functioning of various parts of the human
body helps us identify the basis for certain diseases.
\end{enumerate}
\subsection{\large{\bf{\emph{Applied Research}}}}
In an {\emph{applied research}} one solves certain problems
employing well known and accepted theories and principles. Most of
the experimental research, case studies and inter-disciplinary
research are essentially applied research. Applied research is
helpful for basic research. Research, the outcome of which has
immediate application, is also termed as {\emph{applied research}}.
Such a research is of practical use to current activity. For
example, research on social problems has immediate use. Applied
research is concerned with actual life research such as research
on increasing efficiency of a machine, increasing gain factor of
production of a material, pollution control, preparing vaccination
for a disease, etc. Obviously, they have immediate potential
applications.
\begin{table}[b]
\caption{Differences between basic and applied research.}
\vskip 10pt
\begin{tabularx}{\linewidth}{%
>{\setlength{\hsize}{1.0\hsize}}X|
>{\setlength{\hsize}{1.0\hsize}}X }
\hline
& \\
{\emph{Basic research}} & {\emph{Applied research}} \\
& \\
\hline
& \\
Seeks generalization & Studies individual or specific cases
without the objective to generalize \\
& \\
Aims at basic processes & Aims at any variable which makes
the desired difference \\
& \\
Attempts to explain why things happen & Tries to say how
things can be changed \\
& \\
Tries to get all the facts & Tries to correct the facts which are
problematic \\
& \\
Reports in technical language of the topic & Reports in common
language \\
& \\
\hline
\end{tabularx}
\label{tab1}
\end{table}
\vskip 10pt
Some of the differences between basic and applied research are
summarized in table~\ref{tab1}. Thus, the central aim of applied research
is to find a solution for a practical problem which warrants
solution for immediate use, whereas basic research is directed
towards finding information that has a broad base of applications
and thus adds new information to the already existing scientific
knowledge.
\subsection{\large{\bf{\emph{Quantitative and Qualitative Methods}}}}
Both basic and applied research can be {\emph{quantitative}} or
{\emph{qualitative}} or even both. Quantitative research is based
on the measurement of quantity or amount. Here a process is
expressed or described in terms of one or more quantities.
Qualitative research is concerned with qualitative phenomenon
involving quality. It is non-numerical, descriptive, applies
reasoning and uses words. Its aim is to get the meaning, feeling
and describe the situation. We measure and weigh things in the
study of substance or structure. Can we measure or weigh
patterns? We cannot measure or weigh patterns. But to study
patterns we must map a configuration of relationships. That is,
structures involve quantities whereas patterns involve qualities.
If one wishes to investigate why certain data are random, then it
is qualitative research. If the aim is to study how random the
data are and what their mean, variance and distribution function
are, then it becomes quantitative. Explaining how digestion of food takes
place in our body is a qualitative description. It does not
involve any numbers or data and quantities.
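The quantitative side of this distinction can be made concrete with a small sketch. The data values below are hypothetical, and Python is used purely for illustration:

```python
import statistics

# Hypothetical set of repeated measurements of some quantity.
data = [4.2, 5.1, 3.9, 4.8, 5.0, 4.4]

mean = statistics.mean(data)          # central value of the data
variance = statistics.variance(data)  # sample variance: spread about the mean

print("mean =", mean, "variance =", variance)
```

Reporting such numbers, rather than giving a verbal description of the data, is what makes an analysis quantitative.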
\vskip 10pt
The detection of a particular compound is a qualitative analysis.
This can be done by carrying out physical or chemical tests.
Determination of the exact amount of a particular compound present in
a volume is essentially quantitative analysis. This can be done by
volumetric, gravimetric and calorimetric methods or instrumental
methods. Experimental and simulation studies are generally
quantitative research.
\subsection{\large{\bf{\emph{Other Types of Research}}}}
Other types of research include {\emph{action research}} (fact
finding to improve the quality of action in the social world),
{\emph{explanatory research}} (searching explanations for events
and phenomena, for example finding answer to the question why are
the things like what they are?), {\emph{exploratory research}}
(getting more information on a topic) and {\emph{comparative
research}} (obtaining similarities and differences between events,
methods, techniques, etc.). For discussion on these types of
research see refs.[\ref{kothari}--\ref{phil}].
\vskip 10pt
\hrule
\vskip 5pt
\noindent{\bf{Assignment:}}
\renewcommand{\labelenumi}{(\theenumi)}
\renewcommand{\theenumi}{\arabic{enumi}}
\begin{enumerate}
\addtocounter{enumi}{3}
\item
List out at least $10$ theoretical and applied methods which you
have learned in your UG, PG courses and write their features in
two or three sentences.
\item
Write at least $20$ questions in your subject the investigation of
which forms basic research. Then point out how many of them have
already been solved and how many were found in applications.
\item
Distinguish between theory and experiment.
\item
Write a note on importance of theory in basic and applied
researches.
\item
Bring out the importance of inter-disciplinary research.
\end{enumerate}
\vskip 5pt
\hrule
\section{VARIOUS STAGES OF A RESEARCH}
Whenever a scientific problem is to be solved there are several
important steps to follow. The problem must be stated clearly,
including any simplifying assumptions. Then develop a
mathematical statement of the problem. This process may involve
use of one or more mathematical procedures. Frequently, more
advanced text books or review articles will be needed to learn
about the techniques and procedures. Next, the results have to be
interpreted to arrive at a decision. This will require experience
and an understanding of the situation in which the problem is
embedded. A general set of sequential components of research is
the following:
\renewcommand{\labelenumi}{\theenumi.}
\renewcommand{\theenumi}{\arabic{enumi}}
\begin{enumerate}
\item
Selection of a research topic
\item
Definition of a research problem
\item
Literature survey and reference collection
\item
Assessment of current status of the topic chosen
\item
Formulation of hypotheses
\item
Research design
\item
Actual investigation
\item
Data analysis
\item
Interpretation of result
\item
Report
\end{enumerate}
In the following sections the above mentioned various stages of
research are discussed in detail.
\sectionmark{SELECTION OF A RESEARCH TOPIC AND PROBLEM}
\section{SELECTION OF A RESEARCH TOPIC AND PROBLEM}
\sectionmark{SELECTION OF A RESEARCH TOPIC AND PROBLEM}
The starting point of a research is the selection of a research
topic and problem. Identifying a suitable topic for work is one
of the most difficult parts of a research. Before choosing a
research topic and a problem the young researchers should keep the
following points in mind.
\renewcommand{\labelenumi}{\theenumi}
\renewcommand{\theenumi}{$\bullet$}
\begin{enumerate}
\item
Topic should be suitable for research.
\item
The researcher should have interest in it.
\item
Topic should not be chosen under compulsion from someone else.
\end{enumerate}
Topic and problem can be fixed in consultation with the research
supervisor. In our country, research supervisors often suggest a
topic and state a problem in broad terms. The researcher has to
narrow it and define it in operational form. One may ask: Is it
necessary that the topic of a Ph.D. should be different from the
M.Sc. project and M.Phil. dissertation? It is not
necessary. If a student is able to get a supervisor working in
his M.Sc. project or M.Phil. dissertation topic then it would save
about six months in the duration of his Ph.D. work.
\subsection{\large{\bf{\emph{Can a Researcher Choose a Topic \\
by himself?}}}}
\label{s1}
A youngster interested in starting a research career wishes to know
whether he/she has freedom to do research in the topic of his/her
own interest. The style of research in our country and various
other factors like the infrastructure facility available in a
research institute, time limit, our commitment to family and
social set up hardly allow a young researcher to choose a topic by
himself for his PG project, M.Phil. dissertation and Ph.D. thesis.
However, many research supervisors give complete freedom to choose
a problem in the topic suggested by him for a Ph.D. research work.
Because the normal time duration of M.Phil dissertation is about
6-8 months, it is better to work on the problem suggested by the
supervisor.
\vskip 10pt
If a student wishes to do research (for Ph.D. degree) with
fellowship then he cannot have freedom to choose a topic since he
has to work on a project the goal of which is already defined by
the project investigator. On the other hand, after choosing a
topic of his own interest he has to find a supervisor who is
working in that topic or interested in guiding him. In this case
one has severe limitations in our country in getting a fellowship
and for registering for a research degree. If a student is not
very much particular about the fellowship he has a chance to do
research in the topic of his own interest. A researcher in India
after two years of research experience with few (two or more)
publications can apply for a senior research fellowship (SRF) to
CSIR (Council for Scientific and Industrial Research) (for details
see its web site and other relevant web sites). He can prepare a project
under the direction of his Ph.D. supervisor which can lead to a
fellowship. For details see the book `How to get scholarships,
Fellows and Stipends' by K.D.Kalaskar (Sultan Chand and Sons, New
Delhi).
\vskip 10pt
Considering the above, a researcher should make up his mind so as
to work in a topic suggested by the supervisor. However, a
research problem may be chosen by a researcher himself. This has
several advantages. In this case
\renewcommand{\labelenumi}{\theenumi}
\renewcommand{\theenumi}{$\bullet$}
\begin{enumerate}
\item
the researcher can pursue his/her own interest to the farthest
limits,
\item
there is an opportunity to spend a long time on something that is
a continuous source of his pleasure and
\item
the results would prove better in terms of the growth of the
investigator and the quality of the work.
\end{enumerate}
If the researcher is not interested in the topic and problem
assigned to him but is working on it under the supervisor's
compulsion, then he will not be able to face and overcome the
obstacles which arise at every stage of research.
\subsection{\large{\bf{\emph{Identification of a Research Topic \\
and Problems}}}}
Some sources of identification of a research topic and problems
are the following:
\renewcommand{\labelenumi}{(\theenumi)}
\renewcommand{\theenumi}{\arabic{enumi}}
\begin{enumerate}
\item
Theory of one's own interest
\item
Daily problems
\item
Technological changes
\item
Recent trends
\item
Unexplored areas
\item
Discussion with experts and research supervisor
\end{enumerate}
Suppose one is interested in the theory of nonlinear differential
equations or quasicrystals or fullerenes. Then he can find a
research guide who is working in this field or interested in
working in it and then choose a problem for research.
\vskip 10pt
Our daily experiences and day-to-day affairs offer rich openings on
various aspects such as the daunting problems of AIDS, air
pollution, afforestation and deforestation, child labor, problems
of aged citizens, etc.
\vskip 10pt
Technology in various branches of science, business and marketing
changes rapidly. For example, in the early years, computers were
built in large sizes with vacuum tubes. Then evolution in
electronic technology replaced them with integrated circuits.
Recently, scientists have developed quantum dots. Now the
interest is in developing efficient, super-fast and miniaturized
computing machines made up of materials whose particle size is of
the order of a nano ($10^{-9}$) meter or even smaller. Similarly,
another fascinating topic, namely {\emph{thin films}}, has multiple
fields of applications. Recent research on fullerenes has resulted
in many practical applications.
\vskip 10pt
Choosing a topic of current interest or recent trends provides
bright and promising opportunities for young researchers to get
post-doctoral fellowship, position in leading institutions in our
nation and abroad.
\vskip10 pt
In each subject there are several topics which have not been
explored in detail even though they were considered by scientists a
long time ago. For example, string theory, quantum computing, nano
particles, quantum cloning and quantum cryptography and gene
immunology are fascinating topics and are in preliminary stages.
\vskip 10pt
Supervisors and experts have been working on one or a few fields
for a long time; they are specialists in the fields considered and
well versed in the development and current status of the field.
Therefore, a young researcher can make use of their expertise in
knowing the various possible problems in the topic, the solving of
which provides better opportunities in all aspects.
\vskip 10pt
Don't choose a topic simply because it is fascinating. In
choosing a topic one should take care of the possibility of data
collection, the quantity of gain, the breadth of the topic and so
on. The topic should not be too narrow. For example, the study of
the social status and sexual life of married couples of the same
sex (man-man marriage and woman-woman marriage) is interesting and
of social relevance. But the intricate problem here is that we do
not find a sufficient number of such couples to study. This is a
very narrow topic, and at the same time we will not get enough data
to analyze. On the other hand, the change in the social life of
aravanis in recent times is a valuable social problem and one can
collect enough data.
\vskip 10pt
Further, one has to study advanced-level text books and the latest
research articles to identify problems. Is it necessary to know
all the methods, techniques and concepts in a research topic before
identifying a problem for investigation? This is not necessary.
After learning some fundamental concepts, recent developments and
current trends of a topic, one can identify a problem for
research. Then he can learn the tools necessary to solve it.
\subsection{\large{\bf{\emph{Definition and Formulation of a
Problem}}}}
After identifying a problem, in order to solve it, it has to be
defined and formulated properly. For this purpose, one can
execute the following.
\renewcommand{\labelenumi}{\theenumi}
\renewcommand{\theenumi}{$\bullet$}
\begin{enumerate}
\item
State the problem in questionnaire form or in an equivalent form
\item
Specify the problem in detail and in precise terms
\item
List the assumptions made
\item
Remove the ambiguities, if any, in the statement of the problem
\item
Examine the feasibility of a particular solution
\end{enumerate}
\noindent{Defining the problem is more important than its
solution. It is a crucial part of the research study and should
not be defined in a hurry.}
\subsection{\large{\bf{\emph{How do you Assess Whether the Defined
Problem is a Good Problem?}}}}
A problem in its first definition may not be appealing. It may
require redefinition in order to make it a good problem. That is,
by suitably rewording or reformulating the chosen problem, it can
be made to meet the criteria of a good problem. This is also
important to solve the problem successfully. To this end a
researcher can ask a series of questions on the problem. Some are:
\renewcommand{\labelenumi}{(\theenumi)}
\renewcommand{\theenumi}{\arabic{enumi}}
\begin{enumerate}
\item
Is the problem really interesting to him and to the scientific
community?
\item
Is the problem significant to the present status of the topic?
\item
Is there sufficient supervision/guidance?
\item
Can the problem be solved in the required time frame?
\item
Are the necessary equipment, adequate library and computational
facilities, etc. available?
\end{enumerate}
If the answers to these questions are satisfactory, then the
researcher can initiate work on the chosen problem. In addition,
he can discuss the problem with current doctoral students and learn
the scope of the problem and other related aspects.
\subsection{\large{\bf{\emph{How are these Questions Important \\
and Relevant to a Researcher?}}}}
The researcher should be interested in the problem for the reasons
mentioned earlier at the end of Sec.(\ref{s1}). The problem
should also be interesting to the supervisor so that the
researcher can get the necessary guidance from him. Otherwise
sometimes the researcher may find it very difficult to convince
the supervisor on the importance and significance of the results
obtained. More importantly, the problem must be of interest to
scientific community and society. If not then the researcher will
find great difficulty to publish his findings in reputed journals
and convince the funding agency.
\vskip 10pt
Next, the status of the problem, particularly the importance of
finding its solution, should match the current status of the
field. If the problem investigated is not of much interest to
science and society, then the publications will be of little use to
him in his research career. Specifically, they cannot help him earn
a post-doctoral fellowship, respectability or a permanent job in an
institution.
\vskip 10pt
A researcher needs proper guidance and encouragement from the
supervisor regularly. This is important for keeping the research
on the right track, for overcoming the difficulties which arise at
various stages of research and also for moral support. A researcher
should avoid working under the guidance of a supervisor who has
serious health or family problems, has committed a large part of
his time to administrative work or is strongly involved in
nonacademic matters.
\vskip 10pt
Another important point is that before initiating research work on
a problem, a rough estimate of the costs and time required to
complete the work must be made. A problem suitable for a Ph.D.
degree should not be taken up for an M.Phil. degree. A problem
suitable for an M.Phil. degree is not appropriate for a Master's
degree. If the collection of data or resources or related
information takes many years, then the topic is obviously
inappropriate for a Ph.D. degree. Controversial subjects should not
be chosen. Problems that are too narrow or too vague should be
avoided.
\vskip 10pt
Finally, the researcher must make sure that the necessary
experimental setup and materials to perform the actual research
work are available in the department where the research work is to
be carried out. Without these, if the researcher initiates the work
and, after going through certain stages or spending one or two
years on the problem, wishes to complete the task, he may be forced
to buy the materials and instruments from his personal savings.
\section{LITERATURE SURVEY}
After defining a problem, the researcher has to make a literature
survey connected with the problem. {\emph{A literature survey is a
collection of research publications, books and other documents
related to the defined problem}}. It is very essential to know
whether the defined problem has already been solved, the status of
the problem, the techniques that are useful for investigating the
problem and other related details. One can survey
\renewcommand{\labelenumi}{(\theenumi)}
\renewcommand{\theenumi}{\arabic{enumi}}
\begin{enumerate}
\item
the journals which publish abstracts of papers published in
various journals,
\item
review articles related to the topic chosen,
\item
journals which publish research articles,
\item
advanced level books on the chosen topic,
\item
proceedings of conferences, workshops, etc.,
\item
reprint/preprint collections available with the supervisor and
nearby experts working on the topic chosen and
\item
Internet.
\end{enumerate}
\vskip 5pt
A free e-print service provider for physics, mathematics,
nonlinear science, computer science and biology is
\vskip 3pt
http://www.arXiv.org
\vskip 5pt
No research can be complete unless we make use of the knowledge
available in books, journals and the internet. A review of the
literature in the area of research is a preliminary step before
attempting to plan the study.
\vskip 5pt
Literature survey helps us
\renewcommand{\labelenumi}{(\theenumi)}
\renewcommand{\theenumi}{\arabic{enumi}}
\begin{enumerate}
\item
sharpen the problem, reformulate it or even define other closely
related problems,
\item
get proper understanding of the problem chosen,
\item
acquire proper theoretical and practical knowledge to investigate
the problem,
\item
show how the problem under study relates to the previous research
studies and
\item
know whether the proposed problem has already been solved.
\end{enumerate}
Through survey one can collect relevant information about the
problem. Clarity of ideas can be acquired through study of
literature.
\vskip 5pt
Apart from literature directly connected with the problem, the
literature connected with similar problems is also useful.
It helps formulate the problem in a clear-cut way. A review of
past work helps us know the outcome of those investigations where
similar problems were solved. It can help us design the methodology
for the present work. We can also explore the vital links with
the various trends and phases in the chosen topic and familiarize
ourselves with characteristic precepts, concepts and
interpretations. Further, it can help us formulate a satisfactory
structure of the research proposal.
\vskip 10pt
Because a Ph.D. thesis or an M.Phil. dissertation is a study in
depth aiming at a contribution to knowledge, a careful check should
be made to ensure that the proposed study has not previously been
performed and reported. The earlier studies which are relevant to
the problem chosen should be carefully studied. Ignorance of prior
studies may lead to a researcher duplicating a work already
carried out by another researcher. A good library will be of
great help to a researcher at this stage. One can visit nearby
research institutions and avail the library facility. Review the
latest research papers and Ph.D. theses to acquire recent trends.
\section{REFERENCE COLLECTION}
As soon as the survey of available sources begins, the preparation
and collection of references, preferably with annotations, should
be undertaken. An important source of reference collection is the
journal called Current Contents. It comes out once a week. It
is available in hard copy and also on floppy diskette. Almost all
the universities and institutions buy this document. It contains
the table of content of research journals and magazines in various
subjects. It provides title of articles, names of the authors,
date of publication, volume number, starting page number of the
articles and address of the author from whom one can get the
reprint of the article. If the title of the article indicates
that the paper is in the topic of one's interest then he can take
a copy of the article if the journal is available in the local
library. Otherwise, he can get it from a document delivery service
centre. For example, in India INFLIBNET provides this service
through six institutions. For details visit the following web
sites:
\vskip 3pt
http://web.inflibnet.ac.in/index.isp
\vskip 3pt
http://www.iisc.ernet.in/
\vskip 3pt
http://www.jnu.ac.in/
\vskip 3pt
\noindent One can obtain a research article by paying the charge
fixed by INFLIBNET, provided the particular journal is
available in it. Articles can also be purchased from the
publishers on payment. Alternatively, a reprint of the article can
be had from the author by sending a letter/card to the author. A
format of a reprint request card is shown below.
\vskip 3pt
\newpage
\centerline{-------------------------------------------------------------------------}
\vskip 1pt
\centerline{\bf{\emph{Front Side}}}
\vskip 5pt
\hskip 140pt Place\,:
\vskip 2pt
\hskip 140pt Date \,:
\vskip 3pt
\noindent Dear Dr./Prof.
\vskip 5pt
\noindent I would appreciate receiving a reprint of your
following article and other related preprints/reprints, if any.
\vskip 2pt
\noindent Title\,:
\vskip 2pt
\noindent Journal name\,:
\vskip 2pt
\noindent Volume number\,: \hskip 20pt Page(s)\,: \hskip 30pt
Year\,:
\vskip 5pt
\noindent With kind regards,
\vskip 2pt
\noindent Yours sincerely,
\vskip 5pt
\centerline{-------------------------------------------------------------------------}
\centerline{-------------------------------------------------------------------------}
\vskip 5pt
\centerline{\bf{\emph{Reverse Side}}}
\vskip 5pt
\noindent Sender's Address
\vskip 15pt
\hskip 100pt To
\vskip 40pt
\centerline{-------------------------------------------------------------------------}
\vskip 10pt
The references from Current Contents or from journals can be noted
on a separate card or sheet with the names of the authors and the
title of the paper/book, etc. For a research paper, its title, the
journal name, volume number, starting and ending pages and year of
publication should be noted. For a book, the publisher's name,
place of publication and year of publication must be written down.
Instead of cards, nowadays one can store the details of the
references in a computer and keep a copy on two or three floppy
diskettes. The references can be classified. For example, sources
dealing with theory, sources dealing with experimental techniques,
sources concerned with numerical methods, etc. can be grouped
separately. The copies of the research articles can also be
classified and bound. Cross references (that is, research articles
or books referred to or cited in a research report) should also be
collected and classified. These also provide useful information.
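\vskip 10pt
As an illustration, the bibliographic details listed above can also
be kept in the BibTeX format that accompanies LaTeX; the entries
below are hypothetical examples (all names and values are
placeholders) showing the fields to record for a research paper and
for a book.
\begin{verbatim}
% Hypothetical research-paper entry: title, journal name,
% volume number, starting and ending pages, year of publication.
@article{author2004example,
  author  = {A. Author and B. Coauthor},
  title   = {Title of the paper},
  journal = {Journal Name},
  volume  = {12},
  pages   = {345--367},
  year    = {2004}
}

% Hypothetical book entry: publisher's name, place of
% publication and year of publication.
@book{writer1998example,
  author    = {C. Writer},
  title     = {Title of the Book},
  publisher = {Publisher's Name},
  address   = {Place of Publication},
  year      = {1998}
}
\end{verbatim}
Entries stored in this way can be searched, grouped by topic and
cited in a report, which serves the same classification purpose as
cards.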
\section{ASSESSING THE CURRENT STATUS}
Generally, it is not difficult to know the current status of
research work on a specific topic. The current status of the
chosen topic can be identified by reading the relevant journals
and recent papers and through discussions in conferences, seminars
and workshops. One can also make inquiries at several important
places known for research on the proposed topic.
\vskip 10pt
A study of the current literature in the chosen topic reveals its
current status. More importantly, review articles not only point
out the basic aspects and features of the topic concerned but also
give a brief account of its present status. For this purpose, one
can survey the journals (for a topic in physics)
such as Physics Reports, Reviews of Modern Physics, Physical
Review Letters, Review section of American Journal of Physics,
Pramana, Current Science and Proceedings of recently conducted
seminars and conferences, etc.
\vskip 10pt
The rapid communication and letter sections of international
journals publish articles which are very important and fall in the
recent-trends category. There are several places on the internet
where papers just submitted to journals are posted. One can
download such articles free of cost. These articles indicate the
recent trends in a particular topic. Some relevant web sites are
listed below.
\vskip 10pt
http://arxiv.org/
\vskip 3pt
http://www.ams.org/global-preprints/
\vskip 3pt
http://front.math.ucdavis.edu/math.AG/
\vskip 3pt
http://www.ma.utexas.edu/m$\mathrm{p}_{-}$arc/
\vskip 3pt
http://www.clifford.org/anonftp/clf-alg/
\section{HYPOTHESIS}
Researchers do not carry out work without any aim or expectation.
Research is not simply doing something and presenting what is done.
Every research problem is undertaken aiming at certain outcomes.
That is, before starting actual work such as performing an
experiment or theoretical calculation or numerical analysis, we
expect certain outcomes from the study. The expectations form the
hypothesis. {\emph{Hypotheses are scientifically reasonable
predictions}}. They are often stated in terms of if-then sentences
in certain logical forms. A hypothesis should provide what we
expect to find in the chosen research problem. In other words,
the expected or proposed solutions based on available data and
tentative explanations constitute the hypothesis.
\vskip 10pt
Hypothesizing is done only after surveying the relevant literature
and learning the present status of the field of research. A
hypothesis can be formulated based on previous research and
observation. To formulate a hypothesis the researcher should
acquire enough knowledge of the topic of research and a reasonably
deep insight into the problem. In formulating a hypothesis,
construct operational definitions of the variables in the research
problem. A hypothesis arises from an intelligent guess or from
inspiration, and it is to be tested rigorously in the research work
through appropriate methodology. Testing of a hypothesis leads to
an explanation of the associated phenomenon or event.
\vskip 10pt
{\emph{What are the criteria of a good hypothesis?}} A
hypothesis should have conceptual clarity and a theoretical
orientation. Further, it should be testable. It should be stated
in a suitable way so that it can be tested by investigation. A
hypothesis made initially may turn out to be incorrect when the
data obtained are analyzed. In this case it has to be revised. It
is important to state the hypothesis of a research problem in the
research report. We note that if a hypothesis withstands the
experiments and provides the facts required to make it acceptable,
not only to the researchers performing the experiments but also to
others doing other experiments, then, when sufficiently reinforced
by continual verification, the hypothesis may become a
{\emph{theory}} [\ref{span}].
\section{MODE OF APPROACH}
Mode of approach means the manner in which the research is to be
carried out. {\emph{It should keep the researcher on the right
track and make him complete the planned work successfully}}. One
should sharpen one's thinking and focus attention on the more
important aspects of the study. Scientific thinking must be
formal, strict, empirical, specific and, moreover, goal
oriented. In order to make steady progress in research and to
assess the progress of the research work, a research design is very
helpful.
\subsection{\large{\bf{\emph{Research Design}}}}
For scientific research one has to prepare a research design. It
should indicate the various approaches to be used in solving the
research problem, the sources and information related to the
problem, the time frame and the cost budget. Essentially, the research
design creates the foundation of the entire research work. The
design will help perform the chosen task easily and in a
systematic way. Once the research design is completed the actual
work can be initiated. The first step in the actual work is to
learn the facts pertaining to the problem. Particularly,
theoretical methods, numerical techniques, experimental techniques
and other relevant data and tools necessary for the present study
have to be collected and learnt.
\vskip 10pt
It is not necessary that every theory, technique and information
in the topic of research is useful for a particular problem. A
researcher has to identify and select materials which are useful
to the present work. Further, the validity and utility of the
information gathered should be tested before using them.
Scientific research is based on certain mathematical, numerical
and experimental methods. These sources have to be properly
studied and judged before applying them to the problem of
interest.
\subsection{\large{\bf{\emph{What are the Possible Approaches \\
to be Followed by a Researcher?}}}}
A researcher can exercise the following practices regularly
throughout his research career. These will keep him on the right
track and tightly bind him to the research activity.
\renewcommand{\labelenumi}{(\theenumi)}
\renewcommand{\theenumi}{\arabic{enumi}}
\begin{enumerate}
\item
Discussion with the supervisor, experts and colleagues about the
research work, particularly, the problem and its origin,
objectives and difficulties faced in the execution of the problem.
\item
Reading of the latest research papers, relevant theories and
possible application to the present problem and to overcome the
difficulties faced.
\item
Review of the work reported on the similar problems.
\item
Theoretical calculations, setting-up of an experimental setup,
numerical calculations, computer programs, preparation of graphs,
tables and other relevant work related to the research should be
done by a new researcher by himself without assistance from
others.
\item
Have the practice of periodically writing up the work done, the
results obtained and the steps followed in a work. This is
important because sometimes we may think that a particular aspect
will be the centerpiece of the problem under investigation, but
once we make a write-up of it, this aspect or part of it may turn
out to be only of marginal importance. In fact, writing up the
progress of the work will help us better understand our work and
forms a solid basis for further progress. It also points out the
gaps in our work.
\item
Participation and presentation of research findings in national
and international meetings.
\end{enumerate}
\noindent These regular practices provide useful information such
as new ideas and can help the researcher
\renewcommand{\labelenumi}{(\theenumi)}
\renewcommand{\theenumi}{\arabic{enumi}}
\begin{enumerate}
\item
sharpen and focus his attention,
\item
confine himself to the formulation and
\item
interpret the solution obtained.
\end{enumerate}
\vskip 10pt
Each and every bit of the task related to the research work has to
be done by the researcher. A young researcher should not do the
entire work in collaboration with others. The researcher is
advised to perform all the work, from the identification of the
problem to the preparation of the report, by himself under the
guidance of the supervisor. In particular, collaborative work with
experts and senior researchers should be avoided. (However, he can
discuss his problems with them.) This is important in order to acquire
\renewcommand{\labelenumi}{(\theenumi)}
\renewcommand{\theenumi}{\arabic{enumi}}
\begin{enumerate}
\item
enough knowledge,
\item
confidence and
\item
training
\end{enumerate}
to carry out research independently after getting the Ph.D. degree.
Part of the dissertation should demonstrate the researcher's
originality. The dissertation should reflect the efforts of a
single researcher. Keeping this in mind, one should avoid
collaboration as far as possible at a young stage.
\vskip 10pt
Prof.Balaram wrote ``There are guides who have no interest in
their discipline and leave their wards to their own devices.
Surprisingly, it is these guides who produce some of the most
resilient scientists, self-taught men and women, who develop great
confidence in their abilities" [Current Science 87(2004)1319].
\vskip 10pt
A researcher should provide new information to the supervisor
rather than merely getting information from the supervisor. He
should learn and collect much information related to his work. He
should definitely avoid embarrassing the supervisor and senior
researchers by asking doubts too often. A good supervisor or a
senior researcher does not provide answers to your questions but
gives appropriate directions to clarify your doubts.
\vskip 10pt
During the course of research, one should focus the mind mainly on
the research work. Don't allow personal life to interfere with
research. Diversions to other activities should be avoided.
Further, after working for about, say, three years, when the time
has come to consolidate the work done so far, a researcher should
not start working on an entirely new topic. He can complete his
thesis work and then work on a new topic of his interest.
The Nobel Laureate Maria Goeppert Mayer said, ``If you love science, all you really want is to keep on working."
\vskip 10pt
A researcher must be clear in his thoughts. He should know what
he has to find out. In order to perform the work successfully the
researcher should acquire proper training in the techniques of
research. The training equips the researcher with the
requirements of the task. Further, he should be clear about his
task and possess intellectual insight. Only then will he be able to
find out the facts that would help him in his task. Make your
research a part of your everyday life. Think about your research
work in background mode; ideas will come even when you are
seeing a movie, traveling to a place, sight-seeing or shopping.
Ted Gottfried the author of biography of Fermi said, ``Scientific
research is like sports. To score, the focus of the scientist
must be narrow and intense to the exclusion of everything else
around him. The batter never takes his eye off the ball, the
hoopster shuts out everything but the court, the golfer always
follows through--and the scientist focuses his complete attention
on the task at hand and nothing else."
\vskip 10pt
A young researcher should also have persistence, tolerance and
self-control over unpleasant outcomes such as not getting an
expected result, not being recognized by the supervisor and the
rejection of a research article by a journal. ``{\emph{Don't get
dejected when your paper is rejected}}"--Prof.P.R.~Subramanian.
Sometimes one may complete a piece of work within a week which he
might have expected to finish in a month's time. On the other hand,
at times one may get stuck with a particular part of the work and
be unable to make substantial progress, say, in three months. Avoid
feeling remorseful in these circumstances and maintain a high
tolerance for poor results. Remember that failures and wasted work
are also part of the research career. Young researchers should
create good relationships with their seniors and colleagues.
\vskip 10pt
\subsection{\large{\bf{\emph{Getting Joy in Doing Research}}}}
To get a deep insight into the topic or the research problem, a
suggestion from Dr~K.P.N.~Murthy is that {\emph{one should enjoy
doing research and approach it as an entertainment and a mode of
getting happiness}}. In one's research career one should treat
doing research as a way of life and not just a job. In order to
achieve a goal in research one has to work hard. The harder one
works, the happier one feels. One need not try to conquer the world
of science; one has to come in order to work and to find one's way.
Initially one must work hard. Getting inside a research topic or a
research career is like pushing a door: it is hard to push the door
open, but once one understands it, it is very interesting and
joyful.
\vskip 10pt
Chandrasekhar pointed out that in the arts and literature the
quality of work improves with age and experience while in science
generally it does not. He felt that this is because of doing
science in isolation, a very narrow focus on immediate goals and
insufficiently broad interests and pursuits. In order to
continue research even at an old age one should develop the spirit
of experiencing the beauty of science. The spirit of experiencing
it is not restricted only to the great scientists. Chandrasekhar
said, ``This is no more than the joys of creativity are restricted
to a fortunate few. They are instead accessible to each one of us
provided we are attuned to the perspective of strangeness in the
proportion and conformity of the parts of one another and to the
whole. And there is satisfaction also be gained from harmoniously
organizing the domain of the science with order, pattern and
coherence."
\vskip 10pt
Professor G.Baskaran stressed that group discussion is indeed an
important component of doing research particularly in small and
isolated institutions. He said, ``One cannot explain the power
and usefulness of group discussions -- it has to be experienced.
When I was a student at the Indian Institute of Science (I.I.Sc.),
Bangalore, a few of us students of physics from I.I.Sc. and
National Aeronautic Laboratory were introduced to this joyous
experience by S.K.Rangarajan, formerly a Professor of chemistry,
in whose house we assembled virtually every evening to discuss
such grave issues as amorphous solids and renormalization group.
Each one of the discussants has made a mark" (Current Science
75 (1998) 1262).
\vskip 10pt
For a discussion on emotional factors see, for example, ref.[\ref{cs}].
\vskip 10pt
\subsection{\large{\bf{\emph{Crucial Stage of Ph.D}}}}
The crucial period for a research scholar doing a full-time Ph.D.
is the last year of the programme. During this period one should
concentrate on completing the final work for his thesis and on
writing the various chapters. Generally, a research fellowship is
for a fixed period of time and might end before the final year of
the Ph.D. programme. We have noticed many scholars convert the
full-time programme into a part-time one and join a job. If the job
is a permanent one then one can join it and continue the research.
But joining a temporary position may greatly affect his research
career. This would delay the submission of his Ph.D. thesis and he
may lose interest in research. There are examples of students
capable of getting a post-doctoral fellowship who failed even to
continue their research. Therefore, a research scholar should have
a clear plan of what he has to do in the next few years or so. Even
if the fellowship is not available at the finishing stage of the
Ph.D. thesis, we have friends and well wishers to give financial
support to some extent.
\section{ACTUAL INVESTIGATION}
One should aim at doing good research. {\emph{What is good
research?}} Which universities and research institutions in your
country do the best research? How do you distinguish the great
from a good, a black hole from an ordinary hole, a superconductor
from a normal conductor, supernova from mere stars, poles from
ordinary points, linear differential equations from nonlinear
ones?
\vskip 10pt
To distinguish one from another we can use various quantities.
Likewise, to identify the best from among the available, one can
use various quantities to measure their quality. For example, to
identify the best research, the quality of one's research
publications, the number of citations of his publications, projects
completed, books published, contributions made to science and
society, etc. can be considered.
\vskip 10pt
Research work
\renewcommand{\labelenumi}{(\theenumi)}
\renewcommand{\theenumi}{\arabic{enumi}}
\begin{enumerate}
\item
published in reputed international journals,
\item
cited by other researchers working in the same or similar topic
and
\item
which added new information to the existing knowledge on a topic
\end{enumerate}
are generally considered as {\emph{good}}.
\vskip 10pt
At the beginning of a research career a young researcher should
aim to produce good research; in particular, his research
findings should distinguish him from other researchers and keep
him among the top young researchers in the nation. In order to
encourage young researchers and motivate them to produce
high-quality research work, awards are given yearly by certain
academic and research bodies in each country. For example, in
India, the Indian President Award, the Indian National Science
Academy (INSA) Young Scientist Award and many other awards are
given every year. Some conference/seminar organizers also provide
best-paper awards to young scientists.
\vskip 10pt
\subsection{\large{\bf{\emph{What are the Points to be Kept
in Mind in Order to do a Good Research?}}}}
\vskip 10pt
Actual investigation should lead to {\emph{original contribution}}
and not involve objectionable duplication. Originality is the
basic credit point of any research. Therefore, actual
investigation must be directed towards obtaining {\emph{novel
results}}. A researcher should develop new ideas and obtain deep
insight into the problem in order to get the novel results which
are the characteristics of good research.
\vskip 10pt
Trivial analysis should not be performed. Recently introduced
theories, experimental techniques and numerical algorithms have
to be used instead of outdated methods. Before applying any
method, the researcher should familiarize himself with its
features. It is not worthwhile to continue in a particular
direction if the results are trivial and less informative. If
similar problems have already been solved, for instance about ten
years ago, then a researcher should not consider the problem
important but could treat it as a useful exercise.
\vskip 10pt
We do research by gathering information and ideas from important
research papers published by other researchers in the topic of
interest, and then continue in our own directions. The work of
some other researchers might have formed the basis of our
research. Similarly, our research outcomes should help other
researchers. That is, the work should invite others to read it
and, more importantly, to use and cite it in their own research.
Our work should lead to recognition and respect. It should bring
joy and benefit others as well as ourselves.
\vskip 10pt
As pointed out by Professor M.Lakshmanan, generally, {\emph{each
and every work of us may not produce novelty, but if we work
towards novelty then definitely in the course of research there
would come a fascinating and exciting breakthrough}}.
\vskip 10pt
The researcher must remember that ideally in the course of a
research study, there should be constant interaction between
initial hypothesis, observation and theoretical concepts. It is
exactly in this area of interaction between theoretical
orientation and observation that opportunities for originality and
creativity lie.
\vskip 10pt
Actual work finally leads to results and conclusions of the
research undertaken. For proper results it is necessary that the
various steps of the work are carried out scientifically and do
not have any flaw. Developed computer algorithms must be
tested for the problems for which results are already available.
The work should be free from mistakes. Important analysis must be
repeated in order to make sure that they are free from human
mistakes. Professor Devanathan suggests that {\emph{a researcher
should check, recheck, cross check, ... all the results before
submitting a research paper to a journal}}. Before beginning to
write up a part of the work done and the results obtained, check
and recheck the data and the results by repeating the
experiments, rerunning the programs and going through the
theoretical derivations and arguments.
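The advice above, to test developed algorithms on problems whose
results are already available, can be sketched as follows (the
trapezoidal integrator and the tolerance are illustrative choices
only, not part of the original text):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule for the integral of f over
    [a, b] using n subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for k in range(1, n):
        s += f(a + k * h)
    return h * s

# Test the algorithm on a problem whose answer is already known:
# the integral of sin(x) over [0, pi] is exactly 2.
approx = trapezoid(math.sin, 0.0, math.pi, 1000)
assert abs(approx - 2.0) < 1e-5, "integrator fails on a known benchmark"
```

Only after such checks against known results should the algorithm
be trusted on the actual research problem.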
\vskip 10pt
When analysing the data, appropriate statistical tools have to be
employed. The number of data used, units of the data, error bars
and other necessary details must be noted in the graphs. As many
statistical tools as possible should be used. Appropriate curve
fitting can be done. Necessary interpretations on the results of
statistical analysis have to be made.
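As an illustrative sketch of such an analysis (the straight-line
model, the synthetic data and the printed precision are
hypothetical, chosen purely for demonstration), a least-squares
fit that reports parameter estimates together with their standard
errors might look like this:

```python
# Ordinary least-squares fit of y = m*x + c, with standard errors
# of the fitted parameters (standard library only).
def linear_fit(xs, ys):
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    d = n * sxx - sx * sx
    m = (n * sxy - sx * sy) / d
    c = (sy * sxx - sx * sxy) / d
    # Residual variance and standard errors (n - 2 degrees of freedom).
    resid = sum((y - (m * x + c)) ** 2 for x, y in zip(xs, ys))
    s2 = resid / (n - 2)
    se_m = (n * s2 / d) ** 0.5
    se_c = (s2 * sxx / d) ** 0.5
    return m, c, se_m, se_c

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 2.1, 3.9, 6.2, 7.9]          # synthetic data, roughly y = 2x
m, c, se_m, se_c = linear_fit(xs, ys)
print(f"slope = {m:.3f} +/- {se_m:.3f}, intercept = {c:.3f} +/- {se_c:.3f}")
```

Reporting each fitted parameter together with its uncertainty, as
done here, is exactly the kind of necessary detail the text asks
for in graphs and tables.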
\vskip 10pt
In the case of development or modification of a theory and
proposal of a new method the assumptions made, basic idea, and
calculations should be clearly stated and analyzed. Various
special cases of the theory or method must be identified. The
validity, efficiency and applicability of it must be demonstrated
with examples. Merits and demerits have to be identified.
Comparison of the proposed method with the already existing and
widely used similar methods should be performed.
\vskip 10pt
In any experimental work, mere measurement of certain quantities
is not enough. The interpretation of the kind of data observed
and explanation for the particular pattern must be made. On the
basis of interpretation general principles underlying the process
can be formulated. One has to check whether the generalizations
are universal and true under different conditions.
\vskip 10pt
Some common errors made in research are [\ref{camden}]
\renewcommand{\labelenumi}{(\theenumi)}
\renewcommand{\theenumi}{\arabic{enumi}}
\begin{enumerate}
\item
Selective observation
\item
Inaccurate observation
\item
Over-generalization
\item
Made-up information
\item
Ex post facto hypothesizing
\item
Illogical reasoning
\item
Ego involvement in understanding
\item
Premature closure of inquiry
\item
Mystification
\end{enumerate}
For a very interesting discussion on the above aspects with
examples, refer to ref.~[\ref{camden}].
\section{RESULTS AND CONCLUSION}
The next step after performing the actual research work on the
chosen problem is the preparation of the results and conclusion
of the performed work. Predictions, results and conclusion are
the ultimate goals of the research performed.
\vskip 10pt
There are two indispensable rules of modern research: the freedom
of creative imagination, necessarily subjected to rigorous
experimentation. At the beginning of any experimental research on
a specific subject, imagination should give wings to thought. At
the time of concluding and interpreting the facts collected by
observation, the imagination should be dominated and prevailed
over by the concrete results of the experiments.
\vskip 10pt
Proper interpretations of the results must be made.
{\emph{Interpretation refers to the task of drawing inferences
from the actual research work}}. It also means drawing of
conclusion. The conclusion is based on the study performed. It
would bring out the relations and processes that underlie the
findings. The utility of the outcome of the research lies greatly
in proper interpretation, which is the hardest part of solving a
scientific problem. Interpretation of results is important
because it
\renewcommand{\labelenumi}{(\theenumi)}
\renewcommand{\theenumi}{\arabic{enumi}}
\begin{enumerate}
\item
links the present work to the previous,
\item
leads to identification of future problems,
\item
opens new avenues of intellectual adventure and stimulates the
quest for more knowledge,
\item
makes others understand the significance of the research findings
and
\item
often suggests a possible experimental verification.
\end{enumerate}
\vskip 10pt
The basic rule in preparing results and conclusion is to give all
the evidence relevant to the research problem and its solution. A
bare statement of the findings is not enough; their implications
must be pointed out. Discuss your answers to the following
questions with experts:
\renewcommand{\labelenumi}{(\theenumi)}
\renewcommand{\theenumi}{\arabic{enumi}}
\begin{enumerate}
\item
Is the supporting evidence sufficient? If not, what is to be done?
\item
How many pieces of evidence are required? Instead of producing
all, is it possible to restrict to one or two pieces of evidence?
If so, what are they? and
\item
Why are they sufficient?
\end{enumerate}
and so on. Such considerations can help us minimize the work and
the length of the report. Do not rely on bogus evidence, which
would increase the chances of errors. The investigator has to
give suggestions. These should be practical and based on logic,
reasoning and fact. The suggestions should be such that they can
actually be implemented.
\vskip 5pt
The researcher should not be in a hurry while preparing the
results and conclusion. After preparing them the researcher may
ask the following questions:
\renewcommand{\labelenumi}{(\theenumi)}
\renewcommand{\theenumi}{\arabic{enumi}}
\begin{enumerate}
\item
Are the quantitative and qualitative analyses performed
{\emph{adequate}} for the conclusion drawn?
\item
Are the results and conclusion {\emph{general}}?
\item
Are the results and conclusion {\emph{valid only for the
particular situation}} considered in the present work?
\item
Is the conclusion {\emph{too broad}} considering the analysis
performed?
\item
Is any evidence which {\emph{weakens}} the conclusion omitted?
\end{enumerate}
\noindent The results and conclusion prepared can be revised based
on the answers to the above questions.
\vskip 10pt
Each and every statement made in the results and conclusion
sections must be based on evidence obtained from theoretical or
experimental analysis. Baseless statements should never be made.
\vskip 10pt
\hrule
\vskip 5pt
\noindent{\bf{Assignment:}}
\renewcommand{\labelenumi}{(\theenumi)}
\renewcommand{\theenumi}{\arabic{enumi}}
\begin{enumerate}
\addtocounter{enumi}{8}
\item
For each of the following topics write at least two questions, the
answers to which must be available in the respective topics. For
example, for the topic, ``{\emph{introduction}}", a relevant
question is `why am I doing it?'.
(i) Introduction, (ii) Review of a research topic, (iii)
Methodology, (iv) Research design, (v) Results, (vi) Discussion
and (vii) Conclusion.
\end{enumerate}
\hrule
\vskip 5pt
\sectionmark{PRESENTING A SCIENTIFIC SEMINAR--ORAL REPORT}
\section{PRESENTING A SCIENTIFIC SEMINAR-ORAL REPORT}
\sectionmark{PRESENTING A SCIENTIFIC SEMINAR--ORAL REPORT}
\subsection{\large{\bf{\emph{What is an Oral Report? \\ What is the
Importance of an Oral Report?}}}}
\vskip 2pt
Presentation of one's research work in a scientific meeting is an
{\emph{oral report}}. Scientific meetings include conference,
seminar, symposium, workshop, departmental weekly seminar, etc.
\vskip 10pt
Researchers in certain research institutions not only discuss
their own work but also have discussions on very recently reported
work of other scientists.
\vskip 10pt
An oral report provides a bridge between the researcher and
audience and offers greater scope to the researcher for explaining
the actual work performed, its outcome and significance. It also
leads to a better understanding of the findings and their
implications. In an oral report, the researcher can present the
results and interpretations which are not clearly understood by
him and may request the experts in the audience to give their
opinions and suggestions. Oral reporting at a conference or a
seminar requires more elaborate preparation than the written
report.
\vskip 10pt
Nobel laureate Paul Dirac said, ``{\emph{A person first gets
a new idea and he wonders very much whether this idea will be
right or wrong. He is very anxious about it, and any feature in
the new idea which differs from the old established ideas is a
source of anxiety to him. Whereas some one else who hears about
this work and talks it up doesn't have this anxiety, an anxiety to
preserve the correctness of the basic idea at all costs, and
without having this anxiety he is not so disturbed by the
contradiction and is able to face up to it and see what it really
means.''}}
\subsection{\large{\bf{\emph{Points to be Remembered in Preparing
an Oral Report}}}}
Before starting the preparation of an oral report, an outline can
be drawn based on the time duration of the report and the nature
of the audience. A departmental seminar is usually of 45 minutes'
duration. In other meetings the time duration is fixed by the
organizer based on the number of days of the meeting, the number
of speakers and the status of a speaker.
\vskip 10pt
For a long report, that is, a 45--60 minute presentation, one
may have enough time to
\renewcommand{\labelenumi}{(\theenumi)}
\renewcommand{\theenumi}{\arabic{enumi}}
\begin{enumerate}
\item
introduce the topic,
\item
discuss the definition of the problem,
\item
describe the method and technique employed,
\item
give technical details, and
\item
present results and conclusion.
\end{enumerate}
Consequently, these aspects can be prepared in detail.
\vskip 10pt
For a 15--30 minute oral presentation one cannot find enough time
to discuss the complete details of the work. In this case less
informative material must be dropped. Methods and techniques used
can be presented very briefly without going into technical
details. Much time should be reserved for results, conclusion and
further directions.
\vskip 10pt
Prepare a write-up of the oral presentation. It is a good and
very helpful practice to write out the talk before presenting it
orally. Then evaluate the written material. Ask:
\renewcommand{\labelenumi}{(\theenumi)}
\renewcommand{\theenumi}{\arabic{enumi}}
\begin{enumerate}
\item
{\emph{Why should the audience listen to your presentation?}}
\item
{\emph{Does the presentation match the standard of the audience?}}
\end{enumerate}
Revise the presentation until you get convincing answers to the
above two questions.
\vskip 10pt
Oral presentation can be made effective and attractive by using
modern visual devices: PowerPoint presentations, slides and
transparency sheets. The title of the report, the author's name,
the plan of the presentation, its most important content and the
conclusion can be printed on the slides or sheets, preferably
point by point in bold and sufficiently large letters. Important
formulas, equations, tables, figures and photographs can be
prepared using transparency sheets or slides. Slides and
transparency sheets should not contain running matter. {\emph{The
researcher should not simply read out the content of the
sheets}}. That is, the descriptive portion of the report should
not be prepared on the sheets. An abstract or a short write-up of
the presentation may be circulated to the participants of the
meeting. Sophisticated software for preparing the text on
transparency sheets/slides is available on the internet and can
be freely downloaded. In order to make the presentation more
lively, the researcher could use multimedia. Nowadays, the use of
Microsoft {\emph{PowerPoint}} is common. It is an easy and
compact utility, especially for preparing classroom
presentations. The following are web sites from which one could
download such software free of cost:
\vskip 3pt
http://www.office.microsoft.com/downloads
\vskip 1pt
http://www.lb.com/download-free-power-point-presen\-tation.org
\vskip 10pt
One could also use audio to facilitate the presentation. While
presenting the topic, the researcher should follow good classroom
teaching methodology. For example, one should allow interaction,
not restrict attention to a particular section of the audience,
not forget to modulate the voice as and when required, and not
violate the time frame.
\vskip 10pt
One or two rehearsals of the report in the presence of
colleagues, the supervisor and collaborators may be carried out
in order to
\renewcommand{\labelenumi}{(\theenumi)}
\renewcommand{\theenumi}{\arabic{enumi}}
\begin{enumerate}
\item
complete the presentation within the allotted time,
\item
improve the quality of presentation and
\item
maintain the fluency of the presentation.
\end{enumerate}
During a long presentation, the speaker can stop the presentation
at various stages, seek comments and questions from the audience
and then proceed. This will make the presentation attractive,
interesting and also allow the audience to clarify their doubts so
that they can follow the work.
\section{ART OF WRITING A RESEARCH PAPER AND THESIS}
\subsection{\large{\bf{\emph{What is a Research Report?}}}}
{\emph{Research reporting}} is an oral or a written presentation
of important and useful aspects of the research work done.
Scientific writing, a thesis or a paper, is intended to present
clearly the purpose and outcome of a specific research
investigation. It is the last but a major part of the research
study. A report helps the researcher get feedback from other
researchers and experts working in the same field. It also
enables the success and originality of the researcher's work to
be evaluated.
{\emph{Without a report, a research study is incomplete and of no
use}}. A report essentially conveys the outcome of a research
work to interested persons. Brilliant work and most striking
findings are of little value if they are not effectively
communicated to the scientific world. As pointed out by Eli Maor,
{\emph{in academic matters the iron rule is publish or perish}}.
By delaying the publication of a result one may sometimes lose
one's claim to it.
\subsection{\large{\bf{\emph{What are a Research Paper or Article \\
and a Ph.D. Thesis or Dissertation?}}}}
A research paper is a report published in a journal or magazine or
conference proceedings, etc., whereas a Ph.D. dissertation is a
report of the entire work done by a researcher, submitted to a
university or
an institution for the award of the degree of doctor of
philosophy. A Ph.D. dissertation is a lengthy, original and
substantial document. It should contain original contributions.
Essentially, the {\emph{role of a Ph.D. dissertation is to
demonstrate the researcher's original thinking and
contribution to the topic of research}}. It should also clearly
point out the research competence of the researcher in his
research field. An M.Phil. dissertation is designed as practice
for the Ph.D. thesis. It will help the researcher learn and
understand the present status of the topic and make him capable of
working at the Ph.D. level. The work done for an M.Phil.
dissertation need not be publishable in journals.
\subsection{\large{\bf{\emph{Why Should a Researcher Report \\
his Findings?}}}}
{\emph{Every research investigation is carried out with certain
objectives}}. The outcome of a research work may add new
information to a theory or may have technological applications.
Sometimes the researcher may not be aware of the theoretical
developments or practical applications. His research results may
be useful for another research problem. Some other researchers may
be working or planning to work on the same or a similar type of
research work. Several researchers doing the same research work
is a waste of time unless the solution of the problem is needed
very urgently and is of great use. Repetition of a work should be
avoided by the research community as much as possible. Unless a
researcher reports his work to the world, the correctness,
validity and originality of the work is under a question mark. The
outcome of a research work will become known to the scientific
community only through publications. In view of these, it is
important to report a work in an appropriate journal or magazine
and in scientific meetings like conferences, seminars and
symposia. Identify possible outlets for publication of your
research findings after making considerable progress on a
research problem. Don't be content with a mere Ph.D. degree.
\subsection{\large{\bf{\emph{Characteristics of a Good Report}}}}
A good report results from slow, painstaking and accurate
inductive work. To attract a reader, the reading matter of a
report should be clear and interesting. It should not be obscure
and dull. The write-up should be logical, clear and concise. The
basic quality or characteristics of a good scientific report/paper
and thesis are the following:
\renewcommand{\labelenumi}{(\theenumi)}
\renewcommand{\theenumi}{\arabic{enumi}}
\begin{enumerate}
\item
good presentation
\item
good organization of various chapters/sections
\item
accuracy
\item
clarity
\item
free from contradictions and confusion.
\end{enumerate}
Further, a Ph.D. dissertation should be formal and should show a
high level of scholarship.
\section{OUTLINE OF A REPORT}
\noindent{\large{\bf{\emph{What are the considerations to be kept
in mind while preparing a report?}}}}
\renewcommand{\labelenumi}{(\theenumi)}
\renewcommand{\theenumi}{\arabic{enumi}}
\begin{enumerate}
\item
First, an outline of a report has to be prepared.
\item
A sketch of what information is to be conveyed must be made.
\item
Then, one can write down the various topics and subtopics to be
considered and what material is to be presented in them.
\item
The sentences which are to be expanded, reworded or verified for
their validity can be marked.
\end{enumerate}
The outline of the report helps us concentrate on
\renewcommand{\labelenumi}{(\theenumi)}
\renewcommand{\theenumi}{\roman{enumi}}
\begin{enumerate}
\item
what is to be presented,
\item
logical relationships between different parts of the report,
\item
smooth flow of the content and
\item
continuity in the presentation.
\end{enumerate}
The outline can be discussed with the guide, collaborators,
colleagues and experts in the local area. Based on their comments
the structure of the report can be modified.
\vskip 10pt
A three-stage preparation of a report is generally followed by
researchers. The stages are
\renewcommand{\labelenumi}{(\theenumi)}
\renewcommand{\theenumi}{\arabic{enumi}}
\begin{enumerate}
\item
First draft -- {\emph{Rough draft}}.
\item
Second draft -- {\emph{Rewriting and polishing of the rough
draft}}.
\item
Third draft -- {\emph{Writing the final draft}}.
\end{enumerate}
\subsection{\large{\bf{\emph{First Draft}}}}
In this stage a researcher can write
\renewcommand{\labelenumi}{(\theenumi)}
\renewcommand{\theenumi}{\arabic{enumi}}
\begin{enumerate}
\item
what has been done in the research study,
\item
procedure, method, theory and technique applied,
\item
technical difficulties faced and how they are overcome,
\item
broad findings and
\item
concluding remarks.
\end{enumerate}
Tables and charts can be typeset using a computer and kept
separately in order to avoid rewriting them. The conclusion
should be precise, clear and objective. Further directions may be
pointed out.
\vskip 10pt
Since a research paper is identified by its title, the title
should be brief, not more than about 10--15 words. A subject
index of a paper is primarily based on the words in the title.
Therefore, a few key words which help classify the paper can be
included appropriately in the title.
\vskip 10pt
How does a reader decide whether or not to read the content of a
paper? The abstract serves this purpose. By reading the abstract
a reader can decide whether the content of the paper is useful to
him. Therefore, the abstract should give positive information
about the content of the paper and a summary of the work reported
in it. Further, if the abstract gives the final results and main
conclusion of the paper, then a reader who has a general interest
in the subject can know the outcome of the paper without reading
the entire text.
\subsection{\large{\bf{\emph{Second Draft}}}}
This is the most important and difficult part of the writing.
Extreme care must be taken in writing this draft. Unclear points,
jargon and weaknesses of the report have to be identified and
revised.
Over-generalization of outcomes should be avoided. For example,
Hermitian operators have real eigenvalues. Generalizing this to
``the eigenvalues of all operators are real'', or concluding that
an operator must be Hermitian in order to have real eigenvalues,
is incorrect. Similarly, complex analytic functions satisfy the
Cauchy--Riemann conditions. It does not mean that every function
satisfying the Cauchy--Riemann conditions is analytic. How do you
avoid over-generalization? For some details see, for example,
ref.~[\ref{cs}].
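This particular pitfall can even be checked numerically. The
sketch below uses two hypothetical $2\times 2$ matrices chosen
purely for illustration: a Hermitian matrix, whose eigenvalues
are guaranteed real, and a non-Hermitian matrix whose eigenvalues
happen to be real as well, showing that real eigenvalues alone do
not imply Hermiticity:

```python
import cmath

def eigvals2(a, b, c, d):
    """Eigenvalues of the 2x2 matrix [[a, b], [c, d]] via the
    quadratic formula for its characteristic polynomial."""
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

# Hermitian matrix [[1, 1j], [-1j, 2]]: eigenvalues must be real.
herm = eigvals2(1, 1j, -1j, 2)
# Non-Hermitian matrix [[2, 1], [0, 3]]: its eigenvalues (2 and 3)
# are real too, so real eigenvalues do not imply Hermiticity.
nonherm = eigvals2(2, 1, 0, 3)
print([z.imag for z in herm], [z.real for z in nonherm])
```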
\vskip 10pt
Attention must be paid to the arguments made, logical flow of work
presented, the quality of supporting evidences and conclusion
drawn. Do these in each chapter. Don't do the entire second
stage at a single stretch. Give sufficient time between revisions
of two consecutive chapters. During the break time think over the
revision made in the previous chapter or section.
\vskip 10pt
More importantly, grammar must be checked. A careful spell check
must be made. Use simple words as far as possible. Indecisive
words such as perhaps, somewhat, rather, etc. should be avoided.
Usage of some particular words repeatedly, for example, `very',
`extraordinary', `invariably' should be avoided. Expressions such
as `it seems', `there may be', `since', `putting', etc. should be
replaced by appropriate equivalent words.
\vskip 10pt
Style, presentation and grammar can be improved by asking your
friends and colleagues to read the draft and give critical
comments and suggestions, and to correct the English grammar.
\vskip 10pt
In some universities the report is first read by an English
teacher, who corrects the grammar and gives suggestions. Only
after this can the researcher submit the thesis.
\vskip 10pt
Complicated and lengthy sentences have to be rewritten or broken
up. Similar sentences, or sentences conveying the same
information, must be eliminated. Check whether the words used
convey exactly the meaning intended.
\vskip 10pt
S.~Chandrasekhar said, ``{\emph{I always sought to present my findings in
as elegant, even literary, a form as possible. I select some
writers in order to learn. For example, I read Henry James or
Virginia Woolf, and I don't simply read the text as a novel; I
see how they construct sentences, how they construct paragraphs,
how one paragraph goes into another and so on.}}" (J.~Horgan, Current
Science, 67~(1994)~pp.500-01).
\vskip 10pt
Proper references to related work should be included. Trivial
matters and obvious conclusions should not be included; if there
are such sentences, they should be dropped.
\subsection{\large{\bf{\emph{Third Draft}}}}
This is the last stage. In this stage, one can concentrate on
{\emph{final touches and finishing}}. This should be in the
direction of making the report weighty, authoritative, attractive
and convincing. Similar words and format should be avoided in
successive sentences. Make sure that the script clearly shows the
originality of the author and importance of the outcome of the
study performed.
\vskip 10pt
In all the three stages of report preparation one should follow a
proper style of writing. Use clear and unadorned English
appropriate for the readers. One has to be aware of the audience
for whom the research report is intended. The report is not for
the supervisor. It is better to avoid the use of personal
pronouns. Use
of ``I" and ``the author" should be avoided. Some supervisors
like to use ``we". For an amusing note on the usage of ``I"
and ``we" see p.~106 of ``Why are things the way they are?" by
G.~Venkataraman (University Press, Hyderabad, 1992).
\vskip 10pt
Both active and passive voice should be used wherever necessary or
appropriate. However, when using them one should check whether
the meaning is strictly correct. For example, when writing ``The
experimental results agree with the theory" we must check whether
we are strengthening the experimental result or the theory. Care
must be taken in using present and past tenses. Use the past
tense to describe the data collection and the work done by others
and by you. For interpretations, assessments and discussions the
present tense is appropriate.
\vskip 10pt
Between the various stages it is advisable to give a gap of a few
days so that one can leisurely think over the manuscript and note
how to revise it. This will avoid unnecessary tension and a
half-hearted write-up.
\sectionmark{LAYOUT OF A PH.D. THESIS / M.PHIL. DISSERTATION}
\section{LAYOUT OF A RESEARCH REPORT /
PH.D. THESIS / M.PHIL. DISSERTATION}
\sectionmark{LAYOUT OF A PH.D. THESIS / M.PHIL. DISSERTATION}
The layout of a research report is the list of various parts of
the report/thesis. Generally, a research report should consist of
the following three components:
\renewcommand{\labelenumi}{(\theenumi)}
\renewcommand{\theenumi}{\arabic{enumi}}
\begin{enumerate}
\item
Preliminary pages
\item
Main text
\item
End matters
\end{enumerate}
\subsection{Preliminary Pages}
{\emph{Preliminary pages include title of the report,
acknowledgement, certificate page, list of publications and table
of contents}}. Acknowledgements are written to thank those who
have helped the researcher during the course of the
investigation. For a book it takes the form of a preface or
foreword.
Acknowledgement should be brief, simple, modest and given only to
substantial assistance provided by the guide, head of the
department, staff of the department, agencies which provided
financial support, collaborators and institutions where part of
the work has been carried out. Acknowledgements made for routine
participation by members of the researcher's family, librarian,
friends, clerical helpers and god are normally considered
superfluous. Acknowledgement should be made at the time of public
viva-voce also. There is a chance for a researcher to forget to
say acknowledgement at the end of the presentation. To avoid this
he may do it at the beginning of the presentation. An important
point is to consider the tone to adopt so that you sound genuine.
\vskip 10pt
{\emph{Every research report should have an abstract}}. It is a
necessary part of any scientific and nonscientific research
report. In a research article it appears next to the author's name
and affiliation. In the case of Ph.D. thesis, before its
submission an elaborated abstract of the thesis called
{\emph{synopsis}} has to be submitted to the institution where
registration for Ph.D. degree is made. Abstract and synopsis
convey the essence and brief details about the report. It should
contain a very short statement of the problem, the methodology
and procedures adopted in the work, and the results of the study
in a very condensed form. {\emph{The abstract can act as a tool to control
the flow of ideas in the thesis}}. It can help you link in a
logical way the reasons for the research and aims of the work. It
should contain answers to the questions: What was done in the
project? Why is it of interest? How was it done? What were the
outcomes of the work done? What is the significance of the
results? One should emphasize the original contribution in the
abstract. The abstract of a Ph.D. thesis will be about three or
four pages.
\vskip 10pt
{\emph{Table of contents gives title of the chapters, section
headings, title of appendices and their page numbers}}. In the
certificate page the researcher should declare that the work
reported has not been submitted earlier by him or by anyone else
for the award of any degree. It should also mention that the work
was done by the researcher and not copied from any other source.
\vskip 10pt
All the preliminary pages should be numbered with lower-case roman
numbers.
\subsection{Main Text}
The main text presents the details of the research work and the
results. This part of the thesis should provide the following
about the research work:
\renewcommand{\labelenumi}{(\theenumi)}
\renewcommand{\theenumi}{\arabic{enumi}}
\begin{enumerate}
\item
Introduction
\item
Actual research work performed and the findings
\item
Summary and conclusion.
\end{enumerate}
\subsubsection{\large{\bf{\emph{Introduction}}}}
The purpose of the introduction is to give a brief outline of the
field of research. In this part one can bring out clearly the
importance of the field and its current status. It should
contain an overview of the problem, its importance, and statements
about the hypothesis or specific questions to be explored. This
is followed by a preview of the scheme of the following chapters,
that is an outline of plan of the work. Here, aim of each of the
chapters and their contents can be briefly stated. Related and
relevant work done by others must be pointed out. Various
concepts and definitions of scientific and technical terms
necessary for understanding the research work undertaken are to be
defined and explained. Details of statistical tools or quantities
used in the study can be given in a separate chapter.
\vskip 10pt
Irrelevant and less informative materials need not be presented.
For example, regular and irregular behaviour of solution of a
system or differential equation can be characterized by
calculating the statistical tools such as Lyapunov exponents,
correlation function, correlation dimension, power spectrum,
periodicity of the solution and probability distribution. If the
power spectrum is not used in a research work then there is no
need to discuss in detail the systematic way of calculating it.
Similarly, suppose the effect of noise in a theoretical model
equation is studied by including, say, Gaussian random numbers in
the simulation. There are many methods available to generate
Gaussian random numbers. If the Box--Muller method is used then
it can be described. In this case describing other methods, for
example, rejection technique is redundant to the present thesis
report. The theory and experimental set up used should be clearly
described with proper references. Define the technical terms used
in the dissertation either by a reference to a previously
published definition or by a precise definition. Such a
definition should be given only once in the report.
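As a concrete illustration of the level of detail appropriate for such a method, the Box--Muller transform mentioned above can be described in a few lines. The following Python sketch (the function name, seed and sample size are our own choices, purely for illustration) maps pairs of uniform random numbers to Gaussian random numbers:

```python
import math
import random

def box_muller(mu=0.0, sigma=1.0):
    # Map two independent uniform(0,1] variates to one normally
    # distributed variate with mean mu and standard deviation sigma.
    u1 = 1.0 - random.random()   # shift to (0,1] so log() is safe
    u2 = random.random()
    z = math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)
    return mu + sigma * z

random.seed(12345)               # reproducible stream for illustration
samples = [box_muller() for _ in range(100000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
```

A description of this length, with a reference for the derivation of the transform, is all a thesis normally needs; alternative generators such as the rejection technique need not be discussed if they are not used.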
\vskip 10pt
The introductory chapter(s) should be prepared in such a way that
it should interest the reader in the subject matter of research.
It should not be aimless, confused and lacking in precision.
Introductory part may contain one or two chapters.
\vskip 10pt
To be precise, the introductory part should cover the following
aspects:
\renewcommand{\labelenumi}{(\theenumi)}
\renewcommand{\theenumi}{\arabic{enumi}}
\begin{enumerate}
\item
Features of the topic
\item
Present status of the field
\item
Some unsolved problems
\item
Statement of the problem undertaken
\item
Importance and justification of the present problem
\item
Preview of the scheme of the following chapters and their
interrelationship
\item
Definition of various scientific terms used, and
\item
Methodology used.
\end{enumerate}
\subsubsection{\large{\bf{\emph{Actual Research Work}}}}
This is the heart of the research report/thesis. The actual
research work undertaken, difficulties faced, technical details,
results, conclusion and future direction form the main part of
this portion. This part can be presented in a few chapters. Each
chapter should contain introduction, research work, results and
conclusion. Materials should be organized systematically and
presented under appropriate headings and subheadings. First,
write the chapters that describe your actual research work. After
this, prepare the conclusion and introduction parts. When writing
the actual work collect the terms and note down the matter which
are to be defined and described in the introduction.
\vskip 10pt
As Professor P.R.~Subramanian points out, {\emph{for preparing the
Ph.D. thesis report one should not simply copy word by word from
his research articles}}. Even if the content of the thesis is the
work reported in his research publications, the student should
reword the material without changing the meaning, give much more
details, explanations, suggestions and possibly a better
reorganization of the content.
\vskip 10pt
Wherever possible, the results should be presented in the form of
figures, illustrations and tables. They can make the report quite
attractive. Tables should be as precise as possible. All the
figures should clearly specify the variables of the axes, units
used and other necessary information. Figure caption should not
be a reproduction of sentences of the text. It must clearly state
what it is. Figures should be clearly explained in the text. Data
should be fitted to an appropriate mathematical expression.
Nowadays, sophisticated software packages are available for curve fitting.
After making a curve fit or plotting a set of data, proper
explanation for observed variation of the data should be given. A
set of data measurement without any analysis and discussion is of
no use.
\vskip 10pt
Extreme care must be taken in type setting mathematical
equations, variables and parameters involved in the study. Italic
or Greek letters or mathematical symbols can be used for variables
and parameters. For example, x or X should not be used as a
variable name. The correct usage is $x$ or $X$ (or typeset in
italics). All the equations should be centered and numbered.
Vectors should be clearly specified by an arrow over the name or
by bold face name. Equations should not be repeated.
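For instance, the conventions above might be realized as follows (a generic equation invented purely for illustration):

```latex
\begin{equation}
m \frac{d\mathbf{v}}{dt} = -\gamma \mathbf{v} + \mathbf{F}(t)
\label{eq:illustration}
\end{equation}
```

Here the variables $m$, $\gamma$ and $t$ are set in italics, the vectors $\mathbf{v}$ and $\mathbf{F}(t)$ in bold face, and the equation is centered and numbered automatically by the \verb|equation| environment, so it can be cited by number rather than repeated.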
\vskip 10pt
Jokes or puns should not find a place in the report. Use
``correct" or ``incorrect" to refer to the results of others.
Don't use the words ``bad", ``terrible" and ``stupid". Avoid use
of ``today", ``modern times", ``soon", ``seems", ``in terms of",
``based on", ``lots of", ``type of", ``something like", ``just
about", ``number of", ``probably", ``obviously", ``along with",
``you", ``I", ``hopefully" and ``may". There is no need to
mention the circumstances in which the results are obtained.
\vskip 10pt
\hrule
\vskip 5pt
\noindent{\bf{Assignment:}}
\renewcommand{\labelenumi}{(\theenumi)}
\renewcommand{\theenumi}{\arabic{enumi}}
\begin{enumerate}
\addtocounter{enumi}{9}
\item
Reword/rephrase the following and give the reason for the change:
\renewcommand{\theenumi}{\roman{enumi}}
\begin{enumerate}
\item
Dinesh and Geethan [1] reported that ...
\item
The following algorithm represents a major breakthrough ....
\item
Even though the above method is not earthshaking ....
\item
Geethan and I obtained ....
\item
There is a method to calculate ....
\item
The program will use the data after it stored them to a CD ...
\item
The method is started by calculating the value of $\delta$ ....
\end{enumerate}
\end{enumerate}
\vskip 5pt
\hrule
\subsubsection{\large{\bf{\emph{Conclusion}}}}
At the end of each of chapter, one can place a brief summary of
the outcome of the work presented in that chapter under the
heading conclusion. They should be clear and precise.
\vskip 10pt
The relevant questions which are still not answered and new
questions raised by the work of the present chapter have to be
mentioned. Whether the answers to the questions are obtained or
not, if obtained in which chapter(s) they are presented should be
specified. Mention possible future research. It is important to
make a connection between two consecutive chapters either at the
end of the first or at the beginning of the second.
\vskip 10pt
Chapters should not look like reports of isolated work. There
should be a link between consecutive chapters and the link should
be clearly brought out.
\subsection{End Matters}
The end part of the report generally consists of references,
appendices, computer programs (if they are not easy to develop)
and copies of research publications that came out from the
research work done.
\subsubsection{\large{\bf{\emph{Appendices}}}}
Appendices are supplementary contents which are not placed in the
main report in order to keep the continuity of the discussion;
however, they are relevant for understanding the particular part
of the report. An appendix may present
\renewcommand{\labelenumi}{(\theenumi)}
\renewcommand{\theenumi}{\arabic{enumi}}
\begin{enumerate}
\item
a brief summary of a theory or a numerical method used which can
be found elsewhere,
\item
a lengthy mathematical derivation or a large set of equations,
\item
technical details and
\item
a list of values of constants and parameters used in the work.
\end{enumerate}
Appendices can be placed at the end of report after references.
They should be numbered by capital alphabets.
\subsubsection{\large{\bf{\emph{References/Bibliography}}}}
References or bibliographies are sources consulted. Each
reference should contain name(s) of author(s), title of the paper,
journal name, volume number of the issue in which the article
appeared, starting page number, end page number and year of
publication. In the case of a book source its author(s), title,
publisher's name, place of publication, year of publication and
edition should be given. Some examples are given below.
\renewcommand{\labelenumi}{(\theenumi)}
\renewcommand{\theenumi}{\arabic{enumi}}
\begin{enumerate}
\item
Suppose the reference is the paper of K.~Murali, Sudeshna Sinha
and W.L.~Ditto with title ``Implementation of NOR gate by a
chaotic Chua's circuit" appeared in the journal called
`International Journal of Bifurcations and Chaos' in the year
2003, the volume number of corresponding issue is 13 and the
starting and ending page numbers of the article are 2669 and 2672
respectively. The above article can be specified as (without
mentioning the title of the article)
\vskip 3pt
K.~Murali, Sudeshna Sinha and W.L.~Ditto, Int. J Bifur. and Chaos
13 (2003) 2669--2672.
\item
For an article which appeared in a conference proceedings a
typical format is given below:
\vskip 3pt
R.~Harish and K.P.N.~Murthy, ``Intermittency and multifractality
in iterated function systems". In: Nonlinear Systems. Eds.
R.~Sahadevan and M.~Lakshmanan (Narosa, New Delhi, 2002)
pp.~361--371.
\vskip 5pt
In the above ``Intermittency...." is the title of the report of
R.~Harish and K.P.N.~Murthy. ``Nonlinear Systems" is the title of
the conference proceedings edited by R.~Sahadevan and
M.~Lakshmanan. The proceeding was published in the year 2002 by
Narosa Publishing House, New Delhi. In the proceedings the
article appears from the page 361 to page 371.
\item
A book can be noted down as, for example
\vskip 3pt
T.~Kapitaniak, ``Controlling Chaos" (Academic Press, San Diego,
1996).
\item
A Ph.D. thesis can be referred as shown below:
\vskip 3pt
S.~Parthasarathy, ``On the analytic structure and chaotic dynamics
of certain damped driven nonlinear oscillators". Ph.D. thesis.
(Bharathidasan University, 1993, Unpublished).
\item
For an unpublished manuscript downloaded from internet one can
note down the web site where it is available (see for example the
references 5 and 6 of the references section of this manuscript).
\end{enumerate}
References can be either in alphabetical order according to
author's name or the order in which they are referred in the
report. Make sure that each reference cited in the text is
correctly entered into the list of references. Repetition of
references in the list should be avoided.
\subsection{Typing the Report}
Typing should conform to the set of requirements of the
institution. The thesis should be double line spaced and not more
than 25 lines per page. It may be typed on both sides. Chapter
heading must be in large size with bold face. Each paragraph
should be right margin aligned. Important terms when used first
time can be in italic letters and bold face. First word of a
sentence should not be an abbreviation. Software such as
LaTeX or Word can be used for thesis, dissertation and report
preparation. One can download LaTeX free of cost
from the web sites:
\vskip 1pt
1) http://www.ctan.org
\vskip 1pt
2) http://www.miktex.org
\vskip 10pt
If a report is prepared keeping all the above precautions in mind,
there is every likelihood of it becoming useful for proper study.
Such report enables the reader to comprehend the data and to
determine for himself the validity of the conclusion.
\vskip 10pt
{\emph{Before or immediately after submitting hard copies of the
Ph.D. dissertation to a university, show it to your colleagues,
teachers, scientists of your department, your parents and
friends}}.
\section{ACKNOWLEDGEMENT}
We acknowledge valuable discussion with Professor M.~Sivasankaran
Nair, Dr~K.~Balasubramanian and Dr~E.~Subramanian. We are very
grateful to Professor P.R.~Subramanian and Dr~K.P.N.~Murthy for a
critical reading of the manuscript and their suggestions which
greatly improved the presentation of the manuscript. We are thankful to Prof.~V.~Devanathan, Dr~K.P.N.~Murthy and Dr~Sudeshna Sinha for their suggestions to young researchers.
\vskip 20pt
\hrule
\hrule
\vskip 10pt
\noindent{\bf{REFERENCES:}}
\renewcommand{\labelenumi}{\theenumi.}
\renewcommand{\theenumi}{\arabic{enumi}}
\begin{enumerate}
\item
C.~R.~Kothari, {\emph{Research Methodology: Methods and
Techniques}} (Wiley Eastern, New Delhi, 1985).
\label{kothari}
\item
P.~Saravanavel, {\emph{Research Methodology}} (Kitab Mahal,
Allahabad, 1987).
\label{sara}
\item
E.~M.~Phillips and D.~S.~Pugh, {\emph{How to get a Ph.D.?}}
(UBSPD, New Delhi, 1993).
\label{phil}
\item
R.~Spangenburg and D.~K.~Moser, {\emph{The History of Science in
the Eighteenth Century}} (University Press, Hyderabad, 1999)
\label{span}
\item
http://www.cs.indiana.edu/mit.research.how.to/ \\
section3.12.html
\label{cs}
\item
http://www.camden.rutgers.edu/camden/TEC/ \\ index.html
\label{camden}
\end{enumerate}
\vskip 10pt
\hrule
\hrule
\vskip 20pt
%
``It seems to me that scientific research should be regarded as a
painter regards his art, a poet his poems, and a composer his
music." -- Albert A. Michelson.
\vskip 10pt
``The average Ph.D. thesis is nothing but transference
of bones from one graveyard to another." -- Frank J. Dobie.
\vskip 10pt
When I got my B.S., I would be able to ``bullshit"... When I got
my M.S. I would have ``more shit", and that finally, upon
reaching my Ph.D., it would be ``piled higher and deeper." -- S.
Baker.
\vskip 10pt
``Works are of value only if they give rise to better ones.''
-- Alexander von Humboldt.
\vskip 50pt
{\noindent{\large{\emph{\bf{A Short interview with three eminent
scientists.}}}}}
\vskip 10pt
\noindent{\bf{1. Interview with Professor V.~Devanathan}}
\vskip 10pt
\noindent{\emph{What are the requirements for a successful
research career?}}
\vskip 5pt
\noindent{\emph{Prof.~V.~Devanathan}} : Motivation and innate
interest in the topic of his research pursuit are the requirements
for a successful research career. If a person takes the research
not by compulsion but by his own choice, then he will not feel it
as a burden but pursue it as a hobby. ``Science is at its best
when it is a part of a way of life" - this is the inscription that
is found on the foundation stone of Institute of Mathematical
Sciences, Chennai and truly describes the correct aptitude for a
successful research career.
\vskip 10pt
\noindent{\emph{Is it possible for an average student to come up
with novel results in a research problem? If so, what kind of
approach he should follow?}}
\vskip 5pt
\noindent{\emph{Prof.~V.~Devanathan}} : Usually, the assessment
of a student as good, average or bad is based on his performance
in the examinations. There are some who are good in examinations
with a good memory for reproduction but lack in deeper
understanding of the subject and originality in approach. There
are some who are not so good in examinations but show originality
in thinking and follow unconventional or novel approach to the
subject. There are a few who are good both in examinations and
research. So, an average student with an ability of average
performance in the examinations, need not feel different if he has
{\emph{originality in thinking}} and {\emph{self-confidence}}.
\vskip 10pt
\noindent{\emph{During a research career, a young researcher may
come across disappointing moments like not getting expected
results, rejection of a research article from a journal, etc. What
kind of mode of approach a researcher should have to face such
situations?}}
\vskip 5pt
\noindent{\emph{Prof.~V.~Devanathan}} : ``Success begets success
and failure begets failure." Success and failure are like two
sides of a coin and one is bound to face them alternatively in the
course of one's research career. Elation at the time of success
and depression at the time of failure are usually mitigated if one
works in collaboration with others. At the time of depression,
the co-workers come to the rescue and prop up the sagging spirit.
\vskip 10pt
\noindent{\emph{In our manuscript we have mentioned the
following:}}
\vskip 2pt
{\emph{Each and every bit of work has to be done by the
researcher. A young researcher should not do the entire work in
collaboration with others. The researcher is advised to perform
all the work starting from identification of the problem to report
preparation by himself under the guidance of supervisor.}}
\vskip 2pt
\noindent{\emph{Please give your views on this point.}}
\vskip 5pt
\noindent{\emph{Prof.~V.~Devanathan}} : At the initial stages, the
researcher gets the support of the research group in which he is
working and he acquires the knowledge of the group effortlessly.
The weekly informal seminars, if conducted within the group, will
increase the pace of learning and help to clarify and crystallize
the problems. This process of learning is made easier if the
young researcher works in collaboration with others. This is true
both for theoretical and experimental work. At present, the
experimental work is almost a team work and successful research
group is one in which the group leader allots the specified work
to individuals taking into account his ability and expertise.
\vskip 10pt
\noindent{\bf{2. Interview with Dr~K.P.N.~Murthy}}
\vskip 10pt
\noindent{\emph{The common belief is that research is laborious
and painful. Many times you have mentioned: ``Doing research is an
entertainment." Please, elaborate on this statement of yours.}}
\vskip 5pt
\noindent{\emph{Dr~K.P.N.~Murthy}} : Research does not only
constitute discovering or creating a new paradigm; it also consists
of obtaining a personalized understanding of a phenomenon. The
struggle that you go through to obtain an insight into a
phenomenon, or to get hold of a nuance, and the ecstasy that you
feel when you arrive at an understanding of a phenomenon or at a
new way of explaining that phenomenon, may be unmatched. This
ecstasy has nothing to do with whether your creative work has an
impact on science and society. Rather, it is the ecstasy that
Einstein felt when he created the special theory of relativity, or
Feynman when he created quantum electrodynamics, or Raman when he
found the so-called Raman lines. It is this that makes research an
enterprise of joy. It is this that makes research an entertainment.
\vskip 10pt
\noindent{\emph{Is it necessary for a beginner of research to
learn all the aspects of theoretical, experimental and numerical
techniques involved in a topic before he take-up an actual
research problem? }}
\vskip 5pt
\noindent{\emph{Dr~K.P.N.~Murthy}} : A certain basic knowledge
about physics and mathematics is a must for starting research.
That is it. Several things you learn while doing research. Ignorance
of even some of the basic elements is no hindrance for creativity.
What is required for doing good research is an enthusiasm, a
commitment and willingness to go back to basics and learn them right.
\vskip 10pt
\noindent{\emph{Before preparing the final write-up of your
research work, you have the practice of discussing the salient
features of your findings with a few other researchers. How are
you benefited from this?}}
\vskip 5pt
\noindent{\emph{Dr~K.P.N.~Murthy}} : After you have completed a
piece of work I find it is a good practice to discuss with your
colleagues the important findings that you have made. I have
always realized that I got a better understanding of what I have
done when I tried to explain to my colleagues about my work in a
convincing way. The very act of speaking of what you have done
removes the cobwebs in your understanding. I always make it a point to
give a seminar on my work to a larger audience before submitting
it to a journal for publication. I feel this is a very good and
helpful practice.
\vskip 10pt
\noindent{\emph{``Enjoy doing research and approach it as an
entertainment and a mode of getting happiness." This is your
suggestion to young researchers. Please, brief it for the benefit
of youngsters. In what way will this be helpful to a researcher?
}}
\vskip 5pt
\noindent{\emph{Dr~K.P.N.~Murthy}} : In any human enterprise it is
important that one likes what one does. The hard work that you
put into a problem does not tire you; rest assured that if you
approach a research problem with joy, you will get a good result.
Publication of that result and the acceptance that you get from your
colleagues become secondary. The satisfaction that you obtain by
doing a job well is a reward by itself. I would say that youngsters
should have this attitude towards whatever they do.
\vskip 10pt
\noindent{\bf{3. Interview with Dr~Sudeshna Sinha}}
\vskip 10pt
\noindent{\emph{Despite unavoidable tasks a woman of our country
has, you have become one of the leading scientists in theoretical
physics. What are your advice and suggestions to young
researchers particularly to young women researchers?}}
\vskip 5pt
\noindent{\emph{Dr~Sudeshna Sinha}} : It is indeed somewhat
harder for women to concentrate on career planning - especially
when their children are young. One will have to accept that
household tasks will always be there. The hardest thing is not
really the number of hours of work one can put in - but the
{\emph{quality of concentration}} one can achieve. Here
discipline comes in. Since women will probably manage to get
fewer hours of academic work done every day - they need to really
plan the academic work they hope to achieve every single day. So
it is most beneficial to discipline oneself into shutting off all
daily chores {\emph{from one's mind}} for some hours every day.
The point is to learn efficiency -- and to appreciate that one does
not have the benefit of unlimited time (as others will make
justifiable demands on your time -- like children).
Also women may find it hard to pursue academic work at certain
points in their life - but they must preserve the self-confidence
and will to return to academics after such times are over. They
must realize that in 3--4 decades of working life -- a few years is
not a big deal. They should not think that a break in career is
{\emph{irreversible}}.
\vskip 10pt
\noindent{\emph{Publishing in reputed journals (like Physical
Review Letters) is a dream or prestige for many physicists. What
are the secret of yours for regular publications in reputed
journals? What type of problems one has to take up for getting
published in top-level journals? }}
\vskip 5pt
\noindent{\emph{Dr~Sudeshna Sinha}} : With journals like Physical
Review Letters one must remember two things: First, always try
and make a case of the general interest of your results. The
commonest ground for rejection is {\emph{lack of broad
interest}}. This is very subjective of course, and being Indian
does not help. But still, at the outset, one should make an
attractive statement of the general scope of one's work (that is,
try to answer this hypothetical question: Why should someone not
doing research in this exact narrow sub-field be interested in
reading my paper). Second point is persistence. Take all
criticisms of the paper seriously (and don't reply needlessly
aggressively to the referees) and try to answer all the
criticisms. Then resubmit, and {\emph{don't give up till the last
round!}}
\vskip 10pt
\noindent{\emph{How could a beginner of research come up with
novel results?}}
\vskip 5pt
\noindent{\emph{Dr~Sudeshna Sinha}} : Well, I think coming up
with {\emph{novel}} results is not entirely in one's hand. There
is an element of good fortune here! If the guide of the young
researcher can identify a problem that is technically easy to
tackle -- but whose results can be of considerable potential
interest -- then there is a good chance for the young researcher to
get a novel result. But this is not in the hands of the young
researcher, and most often not in the hands of the guide either
(as it depends on the subject, timing etc.).
{\bf{In this matter I always tell my students: whether you get a
novel result tomorrow is a matter of luck, but in a career
spanning several decades, if you work steadily and think deeply
about the subject, it is almost assured that at some point or the
other, you will get a good idea which will lead to a novel
result!}}
\vskip 10pt
\noindent{\emph{To get a deep insight into the topic or problem of
research, what are the ways a young researcher can follow? }}
\vskip 5pt
\noindent{\emph{Dr~Sudeshna Sinha}} : One should not just
passively {\emph{read}} papers or books! One should try to work
it all out in some detail. While reading passively one feels one
has {\emph{understood}} -- but only when one is trying to solve
something does one gain any real understanding. In fact it is a
great idea to look at the title and abstract of a paper, and then
ask oneself how one would have attempted to work on such a problem
and only then look at what the authors have done.
\end{document}
\section{Introduction}
Understanding the physical process of thermalization within the framework of quantum mechanical principle has been a
long-standing problem. Thermodynamics and statistical mechanics are built with the hypothesis of equilibrium
\cite{Landau1969,Kubo1991}, that is, over a sufficiently long time, a macroscopic system which is very weakly coupled
with a thermal reservoir can always reach thermal equilibrium, and its equilibrium statistical distribution does not depend
on the initial state of the system. Over a century and a half, investigating the foundation of statistical mechanics and
thermodynamics has been focused on two basic questions \cite{Huang1987}: (i) how does macroscopic irreversibility
emerge from microscopic reversibility? and (ii) how does the system relax to thermal equilibrium with its environment
from an arbitrary initial state? Rigorously solving these problems from the dynamical evolution of quantum systems,
namely, uncovering the origin of disorder and fluctuations from the deterministic dynamical evolution, has
been a big challenge in physics \cite{Huang1987,Landau1969,Kubo1991,Feynman1963,Leggett1983a,Srednicki94,Zurek2003,Gemmer2004,Jarzynski2011,
Zhang2012,Kosloff2013,Nandkishore15,Xiong2015,MillenNJP2016,EspositoNJP17,Binder2018,Deffner2019,Xiong2020,Talkner2020}.
Obviously, the foundation of thermodynamics and statistical mechanics and the
answers to these questions rely on a deep understanding of the dynamics of systems interacting with their
environments, i.e., the nonequilibrium evolution of open quantum systems.
In 1980's, Caldeira and Leggett investigated the problem of thermalization from the study of the
quantum Brownian motion, a Brownian particle coupled to a thermal reservoir made of a continuous distribution of harmonic oscillators
\cite{Leggett1983a}. They used the Feynman-Vernon influence functional approach \cite{Feynman1963} to explore the
dynamics of quantum Brownian motion, and found the equilibrium thermal state approximately \cite{Leggett1983a}.
Later, Zurek extensively studied this nontrivial problem from the quantum-to-classical transition point of view and revealed
that thermalization is realized through decoherence dynamics as a consequence of entanglement between
the system and the reservoir \cite{Zurek2003}. Thermalization in these investigations is demonstrated for quantum
Brownian motion with initial Gaussian wave packets in the high-temperature limit \cite{Leggett1983a,Zurek2003}.
However, the thermalization with arbitrary initial state of the system at arbitrary initial temperatures of one or
multiple reservoirs for arbitrary system-reservoir coupling strengths has not been obtained.
On the other hand,
in the last two decades, experimental investigations on nano-scale quantum heat engines have
attracted tremendous attentions on the realization of thermalization and the formulation of quantum thermodynamics
\cite{Allahverdyan2000,Scully2003,Esposito2010a,Scully2011,Trotzky2012,Jezouin2013,KZhang2014,Bergenfeldt2014,Langen2015,
Pekola2015,David2015,Kaufman2016,Anders2016,Ronagel2016,Ochoa15}.
Besides searching new thermal phenomena arisen from quantum coherence and quantum entanglement,
an interesting question that naturally arises is what happens when microscopic systems couple strongly with reservoirs.
Since then, many efforts have been devoted to the problems of how thermodynamic laws emerge
from quantum dynamics and how these laws may be changed when the system-reservoir couplings become strong
\cite{Campisi2009,Subas2012,Esposito2015,Seifert2016,Carrega2016,Ochoa2016,Jarzynski2017,Marcantoni2017,Bruch2018,
Perarnau2018,Hsiang2018,Anders2018,Strasberg2019,Newman2020,Ali2020a,Rivas2020}.
In particular, how to properly define the thermodynamic work and heat in the quantum mechanical framework
becomes an important issue when the system and reservoirs are strongly coupled together
\cite{Esposito2010b,Binder2015,Esposito2015,Alipour2016,Strasberg2017,Perarnau2018}.
Due to the various assumptions and approximations one inevitably takes in addressing open quantum system dynamics,
no consensus has been reached in building quantum thermodynamics at strong coupling.
In the last decade, we have derived the exact master equation of open quantum systems
\cite{Tu2008,Jin2010,Lei2012,Yang2015,Yang2017,Zhang2018,Lai2018,Huang2020} by extending the Feynman-Vernon
influence functional theory into the coherent state representation \cite{Zhang1990}. The open quantum systems
we have studied are a large class of nano-scale open quantum systems that exchange matters, energies and
information with their reservoirs through the particle tunneling processes. We also solved the exact master
equation of these systems with arbitrary initial states at arbitrary initial reservoir temperatures. Thus, a rather
general picture of thermalization processes has been obtained \cite{Zhang2012,Xiong2015,Xiong2020}.
In this paper, we shall explore the thermodynamic laws and statistical mechanics principles from the dynamical evolution of
open quantum systems for both the weak and strong coupling strengths, based on the exact solution of the exact master
equation we obtained.
In fact, the difficulty for building the strong coupling quantum thermodynamics is twofold
\cite{Seifert2016,Carrega2016,Ochoa2016,Jarzynski2017,
Marcantoni2017,Bruch2018,Perarnau2018,Hsiang2018,Anders2018,Strasberg2019,Newman2020,Ali2020a,Rivas2020}:
(i) How to systematically determine the internal energy from the system Hamiltonian which
must be modified by the strong coupling with its reservoirs? (ii) How to correctly account the entropy
production when the system evolves from nonequilibrium state to the steady state?
We find that the nature of solving the above difficulty is the renormalization of both the system Hamiltonian
and the system density matrix during the nonequilibrium evolution through the system-reservoir couplings.
The system-reservoir couplings also result in the dissipation and fluctuation dynamics in open
quantum systems, which are indeed renormalization effects of the system-reservoir interactions.
The renormalization effects can be obtained nonperturbatively after exactly tracing over all reservoir states.
They are manifested in the dynamical evolution of the reduced density matrix
with dissipation and fluctuation, and accompanied by the renormalized system Hamiltonian.
We develop such a nonperturbative renormalization theory of quantum thermodynamics from weak to strong couplings
in this paper.
The rest of the paper is organized as follows. In Sec.~II, we begin with the simple open quantum system of a
nanophotonic system coupled with a thermal reservoir. The renormalized system Hamiltonian is
obtained in the derivation of the exact master equation for the reduced density matrix.
The exact solution of the reduced density matrix is also obtained analytically from the exact master equation.
Its steady state approaches a Gibbs state, so that quantum thermodynamics emerges naturally.
However, we find that the exact solution of the particle occupation in the system at strong coupling
does not agree with the Bose-Einstein distribution at the initial reservoir temperature.
This indicates that the corresponding equilibrium temperature must also be renormalized when
the reduced density matrix is influenced by the system-reservoir interaction through the dissipation and
fluctuation dynamics of the system. By introducing the renormalized temperature as the derivative of
the renormalized system energy with respect to the von Neumann entropy in terms of the reduced
density matrix, we overcome the inconsistency. Thus, the self-consistent renormalized quantum statistics
and renormalized quantum thermodynamics are formulated for both the weak and strong coupling strengths.
In Sec.~III, we extend this study to more general open quantum systems coupled to multiple reservoirs through particle
exchange (tunneling) processes described by generalized Fano-Anderson Hamiltonians.
These open systems are typical nano-scale systems that have been studied for various quantum
transport in mesoscopic physics. Here both systems and reservoirs
are made of many bosons or many fermions, not limited to the prototypical open system of a harmonic oscillator
coupled to an oscillator reservoir introduced originally by Feynman \cite{Feynman1963} and by Caldeira and Leggett
\cite{Leggett1983a,Leggett1983b}. From the exact master equation and its exact solution in the steady state
for this class of open quantum systems \cite{Tu2008,Jin2010,Lei2012}, we develop the renormalization theory
of quantum thermodynamics for both the weak and strong coupling strengths in general. We further take an
electronic junction system (a single electronic channel coupled to two reservoirs with
different initial temperatures and chemical potentials) as a specific nontrivial application. It is a nontrivial example
because other approaches proposed for strong coupling quantum thermodynamics in the last few years
keep the reservoir temperature unchanged \cite{Seifert2016,Carrega2016,Ochoa2016,Jarzynski2017,
Marcantoni2017,Bruch2018,Perarnau2018,Hsiang2018,Anders2018,Strasberg2019,Newman2020,Rivas2020}
so that these approaches become invalid for multiple reservoirs when the total system (the system plus all reservoirs)
reaches a final equilibrium state. We demonstrate the consistency of the Fermi-Dirac statistics with our renormalized
quantum thermodynamics in this nontrivial application.
In Sec.~IV, we discuss further the generalization of this nonperturbative renormalization theory for quantum
thermodynamics to more complicated interacting open quantum systems.
We take the non-relativistic quantum electrodynamics (QED) derived from the fundamental quantum field theory
as an example, and consider electrons as the open system and all photonic modes (electromagnetic field)
as the reservoir. The system-reservoir interaction is the fundamental electron-photon interaction.
We perform the nonperturbative renormalization by integrating out exactly the infinite number of
electromagnetic field degrees of freedom. We obtain the reduced density matrix in terms of only the system
degrees of freedom in the same way as we derived the exact master equation for the generalized
Fano-Anderson Hamiltonian in Sec.~III. The resulting renormalization theory is given
by the reduced density matrix for electrons and the nonperturbatively renormalized electron Hamiltonian,
which can in principle be systematically computed in terms of two-electron propagating Green functions.
Thus, we show that although our renormalized quantum thermodynamics theory is formulated from exactly
solvable open quantum systems, it applies to arbitrary open quantum systems even when an exact
analytical solution can hardly be found. In fact, a similar situation exists in equilibrium statistical mechanics:
one cannot solve exactly all equilibrium physical systems, in particular strongly correlated systems such as the Hubbard model and the
general quantum Heisenberg spin model \cite{Nagaosa2010}, even though the reservoir effect can be ignored there.
Therefore, approximations and numerical methods remain to be developed for the study of renormalized
nonequilibrium dynamics within the framework developed in this paper.
A conclusion is given in Sec.~V. In Appendices, we provide the necessary analytical derivations of
the solutions used in the paper.
\section{A simple example for strong coupling quantum thermodynamics}
For simplicity, we begin with a single-mode bosonic open system (such as a microwave cavity in quantum optics
or a vibrational phononic mode in solid-state and biological systems) coupled to a thermal reservoir
through the energy exchange interaction.
The total Hamiltonian of the system, the reservoir and the coupling between them is considered to be described by the
Fano Hamiltonian \cite{Fano1961}:
\begin{align}
H_{\rm tot}\!= & H_{_{\!S}}\!+\!H_{_{\!E}}\!+\!H_{{_{\!S}}{_{\!E}}} \notag \\
=& \hbar \omega_s a^\+a\!+\!{\sum}_k\hbar\omega_kb^\+_kb_k\!+\!{\sum}_k\hbar(V_ka^\+b_k\!+\!V_k^*b^\+_ka), \label{fH}
\end{align}
where $a^\+$ and $b^\+_k$ ($a$ and $b_k$) are the creation (annihilation) operators of the bosonic modes in the
system and in the reservoir with energy quanta $\hbar\omega_s$ and $\hbar\omega_k$, respectively. They obey the standard
bosonic commutation relations: $[a, a^\dag]=1$ and $[b_k,b^\dag_{k'}]=\delta_{kk'}$, etc. The parameter
$V_k$ is the coupling amplitude between the system and the reservoir and can be experimentally tuned
to strong coupling \cite{Putz14,Chiang21}.
In fact, all parameters in the Hamiltonian, including the couplings between the system and the reservoir can be
time-dependently controlled with the modern nano and quantum technologies. The universality of Fano resonance also
makes this simple system useful in nuclear, atomic, molecular and optical physics, as well as condensed matter systems \cite{Miroshnichenko2010}.
\subsection{The exact master equation of the system and its exact nonequilibrium solution}
To study the thermalization of open quantum systems, the reservoir can be initially set in a thermal state
\begin{align}
\rho_{_{\!E}}(t_0)\!=\!e^{-\beta_0 H_{_{\!E}}}/Z_{_{\!E}},
\end{align}
where $\beta_0\!=\!1/k_B T_0$, $T_0$ is the initial temperature of the reservoir, and
$Z_{_{\!E}}\!=\!{\rm Tr}_{_{\!E}}[e^{-\beta_0 H_{_{\!E}}}]$ is its
partition function. The system can initially be in an arbitrary state $\rho_{_{\!S}}(t_0)$ so that the initial total density matrix of the
system plus the reservoir is a direct product state \cite{Feynman1963,Leggett1983a},
\begin{align}
\rho_{\rm tot}(t_0)= \rho_{_{\!S}}(t_0)\otimes \frac{\!e^{-\beta_0 H_{_{\!E}}}}{Z_{_{\!E}}}. \label{itdm}
\end{align}
After the initial time $t_0$, both the system and the reservoir evolve into an entangled nonequilibrium state
$\rho_{\rm tot}(t)$ which obeys the Liouville-von Neumann equation in quantum mechanics \cite{Neumann55},
\begin{align}
\frac{d}{dt}\rho_{\rm tot}(t)=\frac{1}{i\hbar}[H_{\rm tot}, \rho_{\rm tot}(t)]. \label{voneq}
\end{align}
Because the system and the reservoir together form a closed system, the Liouville-von Neumann equation is the same as
the Schr\"{o}dinger equation of quantum mechanics for the evolution of pure quantum states. But the Liouville-von Neumann
equation is more general because it is also valid for mixed states.
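As a concrete illustration, the unitary evolution generated by Eq.~(\ref{voneq}) can be integrated numerically. The following sketch (the Hamiltonian, initial state, and step size are illustrative choices, not taken from the paper; $\hbar=1$) evolves a mixed two-level state and checks that the trace and, for a closed system, the purity are conserved:

```python
import numpy as np

# Sketch: Euler integration of the Liouville-von Neumann equation (Eq. (voneq))
# for a closed two-level system, d(rho)/dt = -(i/hbar)[H, rho], with hbar = 1.
# The Hamiltonian, initial state and step size are illustrative choices.

H = np.array([[0.5, 0.2], [0.2, -0.5]], dtype=complex)   # Hermitian
rho = np.array([[0.7, 0.1], [0.1, 0.3]], dtype=complex)  # mixed initial state

dt, n_steps = 1e-4, 20000
for _ in range(n_steps):
    rho = rho - 1j * dt * (H @ rho - rho @ H)
    rho = 0.5 * (rho + rho.conj().T)      # re-Hermitize against roundoff

# the trace is conserved exactly; the purity Tr[rho^2] stays at its initial
# value because the evolution of a closed system is unitary
print(np.trace(rho).real, np.trace(rho @ rho).real)
```

For an open system, by contrast, tracing out the reservoir produces the non-unitary terms of the master equation below, and the purity of the reduced state is no longer conserved.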
Quantum states of the system are completely determined by the reduced density matrix $\rho_{_{\!S}}(t)$. It is defined
by the partial trace over all the reservoir states:
\begin{align}
\rho_{_{\!S}}(t)\!=\!{\rm Tr}_{_{\!E}}[\rho_{\rm tot}(t)].
\end{align}
The equation of motion for $\rho_{_{\!S}}(t)$, which
is called the master equation, determines the quantum evolution of the system at later times $t>t_0$.
In the literature, one usually derives the master equation using various approximations, such as the
memory-less dynamical maps, the Born-Markovian approximation, and the secular approximation~\cite{Lindblad1976,GKS1976,Kolodynski2018,Breuer2008,Paavola2009,Rajesh2015}.
But these methods are invalid for strongly coupled open quantum systems with strong non-Markovian dynamics.
In the past decade, we have developed a very different approach
to rigorously derive the exact master equation for a large class of open quantum systems \cite{Tu2008,Jin2010,Lei2012,Yang2015,Yang2017,Lai2018,Zhang2018,Huang2020}.
Explicitly, we have derived the exact master equation for Eq.~(\ref{fH}) by exactly tracing over all the reservoir
states from the solution of the Liouville-von Neumann equation \cite{Wu2010,Xiong2010,Lei2011,Lei2012}.
The trace over all the reservoir states is a nonperturbative renormalization to the reduced density matrix of the system
and to the system Hamiltonian simultaneously. We complete this partial trace by integrating out exactly all the reservoir
degrees of freedom through the coherent state path integrals \cite{Lei2012,Zhang1990}.
The resulting exact master equation for the reduced density matrix accompanied with the renormalized system Hamiltonian is given by
\begin{subequations}
\label{emefh}
\begin{align}
\frac{d}{dt}\rho_{_{\!S}} & (t)\!= \frac{1}{i\hbar}\big[ H^r_{_{\!S}}(t),\rho_{_{\!S}}(t)\big] \notag\\
& \!+\!\gamma(t,t_0)\big\{2a\rho_{_{\!S}}(t)a^\+ \!-\! a^\+a\rho_{_{\!S}}(t) \! -\! \rho_{_{\!S}}(t)a^\+a\big\} \notag\\
& \!+\! \widetilde{\gamma}(t,t_0)\big\{a^\+ \rho_{_{\!S}}(t)a \!+\! a\rho_{_{\!S}}(t)a^\+\!-\! a^\+a\rho_{_{\!S}}(t) \!-\! \rho_{_{\!S}}(t)aa^\+\big\}. \label{eme1}
\end{align}
In this exact master equation, the first term describes a unitary evolution of the reduced density matrix with the renormalized
Hamiltonian
\begin{align}
H^r_{_{\!S}}(t)=\hbar \omega^r_s(t,t_0)a^\+a . \label{rsH}
\end{align}
\end{subequations}
This renormalized Hamiltonian contains all the energy corrections to the system arising from the system-reservoir interaction
through the nonequilibrium evolution. The second and
the third terms describe the non-unitary evolution of the reduced density matrix, which contain all non-Markovian dissipation and
fluctuation dynamics induced by the back-reactions between the system and the reservoir through the system-reservoir interaction
(see Eqs.~(\ref{fmrc}) and (\ref{uvte}) given later).
Physically, the second and the third terms in the above master equation also characterize the emergence of disorder and fluctuations
induced by the system-reservoir interaction. This is because if the system is initially in a pure quantum state, it contains zero disorder
at the beginning (its initial entropy is zero). The index $r$ denotes renormalized physical quantities hereafter.
The energy renormalization, the dissipation and fluctuation dynamics described in the exact master equation Eq.~(\ref{emefh})
are characterized by the non-Markovian renormalized frequency $\omega^r_s(t,t_0)$, the non-Markovian dissipation coefficient
$\gamma(t,t_0)$ and the non-Markovian fluctuation coefficient $\widetilde{\gamma}(t,t_0)$, respectively.
All these non-Markovian coefficients are nonperturbatively and exactly determined
by the following relations \cite{Wu2010,Xiong2010,Lei2011,Lei2012}
\begin{subequations}
\label{fmrc}
\begin{align}
& \omega^r_s(t,t_0) = - {\rm Im}[\dot{u}(t,t_0)/u(t,t_0)], \label{re} \\
& \gamma(t,t_0) = - {\rm Re}[\dot{u}(t,t_0)/u(t,t_0)], \label{ds} \\
& \tilde{\gamma}(t,t_0)= \dot{v}(t,t)-2v(t,t){\rm Re}[\dot{u}(t,t_0)/u(t,t_0)] .
\end{align}
\end{subequations}
Here $u(t,t_0)$ and $v(\tau,t)$ are the two non-equilibrium Green functions obeying the integro-differential Dyson equations,
\begin{subequations}
\label{uvte}
\begin{align}
& \frac{d}{dt}u(t,t_0) \!+\! i\omega_s u(t,t_0) \!+ \!\! \int^t_{t_0} \!\! d\tau g(t,\tau) u(\tau,t_0)=0 , \label{ut} \\
& v(\tau,t)= \! \int^\tau_{t_0} \!\! d\tau_1\! \int^t_{t_0} \!\! d\tau_2 u(\tau,\tau_1)\widetilde{g}(\tau_1 , \tau_2)u^*(t,\tau_2).
\end{align}
\end{subequations}
The non-Markovianity is manifested by the above time-convolution equations of motion for these non-equilibrium Green functions.
The integral kernels in the above convolution equations are given by
\begin{subequations}
\begin{align}
&g(t,\tau)=\!\! \int^\infty_0 \!\!\! d\omega J(\omega)e^{-i\omega(t-\tau)}, \label{gt} \\
&\widetilde{g}(\tau_1,\tau_2)=\!\! \int^\infty_0 \!\!\! d\omega J(\omega) \overline{n}(\omega,T_0)e^{-i\omega(\tau_1-\tau_2)},
\end{align}
\end{subequations}
which characterize the time correlations between the system and the reservoir through the system-reservoir interaction.
The frequency-dependent function
\begin{align}
J(\omega)\equiv \sum_k|V_k|^2\delta(\omega-\omega_k) \label{sd}
\end{align}
is called the spectral density,
which fully encapsulates the fundamental dissipation (relaxation) and fluctuation (noise or dephasing) effects induced by the
system-reservoir interaction. Finally, the initial temperature dependent function
\begin{align}
\overline{n}(\omega_k,T_0)=\!{\rm Tr}_{_{\!E}}[b^\dag_kb_k \rho_{_{\!E}}(t_0)]= 1/[e^{\hbar \omega_k/k_BT_0}-1]
\end{align}
is the initial particle distribution in the reservoir.
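The integro-differential equation (\ref{ut}) can be solved by direct time stepping once the memory kernel is known. The sketch below (all parameter values are illustrative; $\hbar=1$; the Ohmic spectral density $J(\omega)=\eta\omega e^{-\omega/\omega_c}$ used later in the paper, for which the kernel integral can be done analytically) propagates $u(t,t_0)$ and shows $|u|\to 0$ in the absence of a localized bound state:

```python
import numpy as np

# Sketch: time-stepping the non-Markovian Dyson equation (Eq. (ut)),
#   du/dt + i*ws*u + int_0^t dtau g(t - tau) u(tau) = 0,   u(0) = 1,
# for an Ohmic spectral density J(w) = eta*w*exp(-w/wc); hbar = 1.
# All parameter values are illustrative.

ws, wc = 1.0, 10.0
eta = 0.5 * (ws / wc)     # half the critical coupling: no localized bound state

def g(t):
    # memory kernel g(t) = int_0^inf dw J(w) e^{-iwt}, done analytically
    return eta * wc**2 / (1.0 + 1j * wc * t) ** 2

t_max, n = 40.0, 8000
dt = t_max / n
gk = g(dt * np.arange(n + 1))
u = np.empty(n + 1, dtype=complex)
u[0] = 1.0
for k in range(n):
    if k == 0:
        conv = 0.0
    else:
        prod = gk[k::-1] * u[: k + 1]      # g(t_k - tau) u(tau) on the grid
        conv = dt * (prod.sum() - 0.5 * (prod[0] + prod[-1]))  # trapezoid
    u[k + 1] = u[k] + dt * (-1j * ws * u[k] - conv)

print(abs(u[-1]))   # |u(t)| has relaxed from 1 toward 0
```

The decay of $|u(t,t_0)|$ realizes the dissipation coefficient $\gamma(t,t_0)$ of Eq.~(\ref{ds}), while the rotation of its phase gives the renormalized frequency of Eq.~(\ref{re}).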
An arbitrary initial state of the system can be expressed as
\begin{align}
\rho_{_{\!S}}(t_0)\!=\!\sum^\infty_{m,m'=0}\rho_{mm'}|m\rangle \langle m'|, \label{inis}
\end{align}
where $|m\rangle= \frac{1}{\sqrt{m!}}(a^\dag)^m|0\rangle$ is the bosonic Fock state. If
$\rho_{mm'}=c_m c^*_{m'}$, then $\rho_{_{\!S}}(t_0)$ is a pure state, otherwise it is a mixed state.
The exact solution of the exact master equation Eq.~(\ref{eme1}) has been found \cite{Xiong2015,Xiong2020},
\begin{align}
\rho^{\rm exact}_{_{\!S}}(t)=&\! \!\!\! \sum^\infty_{m,m'=0} \!\!\! \rho_{mm'} \!\!\!\!\!\!\! \sum_{k=0}^{\rm min\{m,m'\}}
\!\!\!\!\!\!\! d_k(t) A^\+_{mk}(t)\widetilde{\rho}[v(t,t)] A_{m'k}(t), \label{esrdm}
\end{align}
where
\begin{subequations}
\begin{align}
& \widetilde{\rho}[v(t,t)]=\sum_{n=0}^\infty\frac{[v(t,t)]^n}{[1+v(t,t)]^{n+1}}|n\rangle\langle n|, \\
& A^\+_{mk}(t)=\frac{\sqrt{m!}}{(m-k)!\sqrt{k!}}\Big[\frac{u(t,t_0)}{1+v(t,t)}a^\+\Big]^{m-k}, \\
& d_k(t)=\!\Big[1\!-\!\frac{|u(t,t_0)|^2}{1+v(t,t)}\Big]^k.
\end{align}
\end{subequations}
As a self-consistency check, we calculate the average particle number in the system from the above solution,
$\overline{n}(t)\equiv{\rm Tr}_{_{\!S}}[a^\+a\rho_{_{\!S}}(t)]$, and also directly from the Heisenberg equation of motion,
$\overline{n}(t)\equiv{\rm Tr}_{{_{\!S}}+{_{\!E}}}[a^\+(t)a(t)\rho_{\rm tot}(t_0)]$. Both calculations give the same result
\cite{Jin2010,Lei2012,Yang2015,Zhang2018}:
\begin{align}
\overline{n}(t)& ={\rm Tr}_{_{\!S}}[a^\+a\rho_{_{\!S}}(t)]={\rm Tr}_{{_{\!S}}+{_{\!E}}}[a^\+(t)a(t)\rho_{\rm tot}(t_0)] \notag \\
& = u^*(t,t_0) \overline{n}(t_0) u(t,t_0) + v(t,t), \label{pnexp}
\end{align}
where $u(t,t_0)$ and $v(t,t)$ are determined by Eq.~(\ref{uvte}).
Based on the above exact formalism, for a given spectral density $J(\omega)$, if no localized bound state exists \cite{Zhang2012,Expl},
the general solution of Eq.~(\ref{ut}) is
\begin{align}
u(t,t_0) = \int_0^{\infty}\!\!\!\! d\omega D(\omega)e^{-i\omega(t-t_0)} \xrightarrow{t\rightarrow\infty} 0, \label{ut0}
\end{align}
where $ D(\omega) \! = \! \frac{J(\omega)}{[\omega-\omega_s-\Delta(\omega)]^2 + \pi^2J^2(\omega)}$ shows the system spectrum broadening due to the coupling to the reservoir,
and $\Delta(\omega) = {\cal P}\big[\! \int d\omega' \frac{J(\omega')}{\omega-\omega'}\big]$ is the principal-value integral of the self-energy correction to the system,
$\Sigma(\omega)\!=\! \int d\omega' \frac{J(\omega')}{\omega-\omega'}\!=\!\Delta(\omega)-i\pi J(\omega)$. In fact, $\Delta(\omega)$
gives the system frequency (or energy) shift.
As a result, in the steady state limit, we have $A^\+_{mk}(t) \xrightarrow{t\rightarrow\infty} \delta_{mk}$ and $d_k(t)\xrightarrow{t\rightarrow\infty}1$. Then
the exact solution of the particle distribution and the reduced density matrix are reduced to
\begin{subequations}
\label{smss}
\begin{align}
\overline{n}_{\rm exact}\!(t\!\rightarrow\!\infty) &= \lim_{t\rightarrow\infty}v(t,t) = \!\! \int^\infty_0 \!\! \!\! d\omega D(\omega)\overline{n}(\omega,T_0), \label{sspd} \\
\rho^{\rm exact}_{_{\!S}}\!(t\!\rightarrow\!\infty) &= \lim_{t\rightarrow\infty}\sum_{n=0}^\infty \frac{[v(t,t)]^n} {[1+ v(t,t)]^{n+1}} | n \rangle \langle n | \\
&=\lim_{t\rightarrow\infty} \frac{\exp\big\{\ln\!\big[\!\frac{v(t,t)}{1+v(t,t)}\big]a^\+a\big\}}{1+v(t,t)} . \label{rss}
\end{align}
\end{subequations}
Equation (\ref{smss}) is the exact steady-state solution of the system coupled to a thermal reservoir for all
coupling strengths for the open system Eq.~(\ref{fH}). All the influences of the reservoir on the system through the
system-reservoir interaction have been taken into account in this solution.
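The steady-state quantities in Eqs.~(\ref{ut0}) and (\ref{sspd}) can be evaluated numerically for a given spectral density. The following sketch (illustrative parameters, $\hbar=k_B=1$; the principal-value integral in $\Delta(\omega)$ is handled by subtracting the singular part) checks the sum rule $\int d\omega\, D(\omega)=u(t_0,t_0)=1$, valid in the absence of a bound state, and evaluates the exact steady-state occupation:

```python
import numpy as np

# Sketch: steady-state spectrum D(w) of Eq. (ut0) and the exact occupation
# of Eq. (sspd) for an Ohmic J(w) = eta*w*exp(-w/wc); hbar = kB = 1 and all
# parameter values are illustrative.

ws, wc, T0 = 1.0, 10.0, 10.0
eta = 0.5 * (ws / wc)                  # below the critical coupling eta_c

w = np.linspace(1e-6, 120.0, 120001)   # frequency grid
h = w[1] - w[0]
J = eta * w * np.exp(-w / wc)

def pv_delta(w0):
    # principal value of int dw' J(w')/(w0 - w'): subtract the singular part,
    # whose integral is J(w0) * ln(w0/(Lmax - w0))
    J0 = eta * w0 * np.exp(-w0 / wc)
    den = w0 - w
    mask = np.abs(den) > 1e-9
    reg = np.zeros_like(w)
    reg[mask] = (J[mask] - J0) / den[mask]
    return h * reg.sum() + J0 * np.log(w0 / (w[-1] - w0))

coarse = w[::200]                      # Delta(w) is smooth: coarse grid + interp
Delta = np.interp(w, coarse, np.array([pv_delta(x) for x in coarse]))

D = J / ((w - ws - Delta) ** 2 + (np.pi * J) ** 2)
norm = h * D.sum()                     # sum rule: equals u(t0,t0) = 1
n_exact = h * (D / (np.exp(w / T0) - 1.0)).sum()
print(norm, n_exact)
```

The occupation obtained this way is the quantity plotted as the blue-dashed lines in Figs.~\ref{pdsc}(c) and \ref{pdsc2}(e).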
Remarkably, the above results show that the exact solution of the steady state is independent of the initial state
of the system and is determined by the particle distribution, as a consequence of thermalization \cite{Xiong2015,Xiong2020}.
Note that the above exact master equation formalism remains the same for initial states involving initial correlations
between the system and the reservoir, with the only modification being the correlation function $v(\tau,t)$, as we have
shown in Refs.~\cite{Tan2011,Huang2020,Yang2015}.
This exact master equation formalism has also been extended to open quantum systems including external driving fields
\cite{Lei2012,Chiang21}.
\subsection{Renormalization of quantum thermodynamics}
Now we can study quantum thermodynamics for all the coupling strengths from the above exact solution.
First, the master equation Eq.~(\ref{emefh}) shows that the Hamiltonian of the system must be renormalized
from $H_{_{\!S}}$ to $H^r_{_{\!S}}$ given by the energy (or frequency) shift from $\hbar\omega_s$ to $\hbar\omega^r_s(t)$ during
the nonequilibrium dynamical evolution. This is a nonperturbative renormalization effect of the system-reservoir coupling on the system.
The renormalized frequency $\omega^r_s(t)$ and its steady-state value $\omega^r_s=\omega^r_s(t\!\rightarrow\!\infty)$ can be exactly calculated from
Eqs.~(\ref{re}) and (\ref{ut}). Here we take the Ohmic spectral density $J(\omega)\!=\!\eta\omega\exp(-\omega/\omega_c)$
\cite{Leggett1987} in the practical calculation. The result is presented in Fig.~\ref{pdsc}(a) and (b). It shows that different
system-reservoir coupling strengths $\eta$ will cause different renormalized system energies, resulting in
different cavity frequency shifts, see Fig.~\ref{pdsc}(a). In Fig.~\ref{pdsc}(b), we plot the steady-state values of the renormalized cavity
frequency as a function of the system-reservoir coupling strength $\eta/\eta_c$, where $\eta_c=\omega_s/\omega_c$ is a
critical coupling strength for the Ohmic spectral density \cite{Zhang2012,Xiong2015}.
When $\eta >\eta_c$, the system-reservoir coupling generates a localized mode (localized bound state)
such that the cavity system cannot approach equilibrium with the reservoir, as we will discuss later \cite{Xiong2015,Xiong2020}.
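In the absence of a localized bound state, the steady-state renormalized frequency can be estimated from the peak of $D(\omega)$, i.e., the root of $\omega-\omega_s-\Delta(\omega)=0$. The following sketch (illustrative parameters, $\hbar=1$; this peak condition is an estimate, not the exact computation via Eq.~(\ref{re})) finds that root by bisection for the Ohmic spectral density:

```python
import numpy as np

# Sketch: estimate the steady-state renormalized frequency ws_r as the root
# of w - ws - Delta(w) = 0 (the peak of D(w)) for an Ohmic spectral density;
# hbar = 1 and all parameter values are illustrative.

ws, wc = 1.0, 10.0
eta = 0.5 * (ws / wc)                  # 0.5 eta_c: no localized bound state

w = np.linspace(1e-6, 120.0, 120001)
h = w[1] - w[0]
J = eta * w * np.exp(-w / wc)

def pv_delta(w0):
    # principal-value self-energy shift Delta(w0)
    J0 = eta * w0 * np.exp(-w0 / wc)
    den = w0 - w
    mask = np.abs(den) > 1e-9
    reg = np.zeros_like(w)
    reg[mask] = (J[mask] - J0) / den[mask]
    return h * reg.sum() + J0 * np.log(w0 / (w[-1] - w0))

def f(x):                              # pole condition
    return x - ws - pv_delta(x)

lo, hi = 0.05, ws                      # f(lo) < 0 < f(hi) at this coupling
for _ in range(50):                    # plain bisection
    mid = 0.5 * (lo + hi)
    if f(mid) < 0.0:
        lo = mid
    else:
        hi = mid
ws_r = 0.5 * (lo + hi)
print(ws_r)                            # shifted below the bare frequency ws
```

For the Ohmic case $\Delta(\omega)<0$ near $\omega_s$, so the root lies below the bare frequency, consistent with the downward shift shown in Fig.~\ref{pdsc}(b); as $\eta\to\eta_c$ the root approaches zero and the bound state forms.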
\begin{figure}[ht]
\centering
\includegraphics[width=5.50cm]{Fig1a.pdf}
\includegraphics[width=5.5cm]{Fig1bc.pdf}
\caption{(a) The renormalized system energy (cavity frequency shift) $\hbar\omega^r_s(t)$ for three different system-reservoir
coupling strengths, $\eta=0.01\eta_c, 0.5 \eta_c, 0.9 \eta_c$. It is calculated from Eqs.~(\ref{re}) and (\ref{ut}) for the Ohmic spectral density
$J(\omega)\!=\!\eta\omega\exp(-\omega/\omega_c)$, where the cutoff frequency $\omega_c=10\omega_s$ is taken, and
$\eta_c\!=\!\omega_s/\omega_c$ is a critical coupling for the Ohmic spectral density
\cite{Zhang2012,Xiong2015}.
(b) The steady-state renormalized frequency shift $\omega^r_s =\omega^r_s (t\!\rightarrow\!\infty)$ as a function
of the system-reservoir coupling strength $\eta/\eta_c$.
(c) The steady-state particle distribution $\overline{n}_{\rm exact}(t\!\rightarrow\!\infty)$ of Eq.~(\ref{sspd}) (the blue-dashed line),
the Bose-Einstein distribution without the energy (frequency) renormalization $\overline{n}(\omega_s,T_0)$ (the black-dot line)
and with the energy renormalization $\overline{n}(\omega^r_s,T_0)$ (the green-dashed-dot line), respectively.
The system is initially set in a pure Fock state $|n_0\rangle$ with $n_0=5$, and the initial reservoir temperature is
$k_BT_0=10\, \hbar \omega_s$.
\label{pdsc}}
\end{figure}
In Fig.~\ref{pdsc}(c), we plot the exact solution $\overline{n}_{\rm exact}(t\!\rightarrow\!\infty)$ of Eq.~(\ref{sspd})
as a function of the coupling strength $\eta/\eta_c$ (the blue-dashed line). We compare the result with the Bose-Einstein
distribution without the energy (frequency) renormalization, $\overline{n}(\omega_s,T_0)=1/[e^{\hbar \omega_s/k_BT_0}-1]$
(see the black-dot line), also compare to Bose-Einstein distribution with the energy renormalization,
$\overline{n}(\omega^r_s,T_0)=1/[e^{\hbar \omega^r_s/k_BT_0}-1]$ (see the green-dashed-dot line).
As one can see, the exact solution $\overline{n}_{\rm exact}(t\!\rightarrow\!\infty)$ deviates significantly from the Bose-Einstein distribution
without the energy renormalization, i.e.,~$\overline{n}(\omega_s,T_0)$, as $\eta$ increases. This deviation shows how the system-reservoir
coupling strength changes the intrinsic thermal property of the system. On the other hand, the Bose-Einstein distribution
with the renormalized energy, $\overline{n}(\omega^r_s,T_0)$, varies with $\eta$ in a way similar to the exact solution
$\overline{n}_{\rm exact}(t\!\rightarrow\!\infty)$. But there is still a quantitative disagreement between the exact solution $\overline{n}_{\rm exact}(t\!\rightarrow\!\infty)$
and the Bose-Einstein distribution $\overline{n}(\omega^r_s,T_0)$ with the renormalized cavity photon energy $\hbar\omega^r_s$.
To understand further the origin of the above difference, let us recall that the exact solution
$\rho^{\rm exact}_{_{\!S}}\!(t\!\rightarrow\!\infty)$ of Eq.~(\ref{rss}) is indeed a Gibbs-type state.
This indicates that the exact particle distribution $\overline{n}_{\rm exact}(t\!\rightarrow\!\infty)$ should obey a
Bose-Einstein distribution for all coupling strengths. To find such a distribution that agrees with the exact solution
Eq.~(\ref{sspd}), one possibility is to renormalize the temperature, because no other thermal quantity
can be modified in the Gibbs state for this photonic system.
In the literature, it is commonly believed that the reservoir is large enough that its temperature
should remain invariant \cite{Seifert2016,Jarzynski2017}.
However, the initial decoupled state Eq.~(\ref{itdm}) of the system plus the reservoir is not an equilibrium state
of the total system. After the initial time $t_0$,
both the system and reservoir evolve into a correlated (entangled) nonequilibrium state $\rho_{\rm tot}(t)$.
When the system and the reservoir reach the equilibrium state, there must be a fundamental way to show whether the
new equilibrium state is still characterized by the initial reservoir temperature.
As a self-consistent check, let us denote
the final steady-state equilibrium temperature as $T_f$. Then, according to the equilibrium statistical mechanics,
the steady state of the total system (the system plus the reservoir) should be
\begin{align}
\rho_{\rm tot}(t\rightarrow \infty) = \frac{1}{Z_{\rm tot}} e^{-\beta_f H_{\rm tot}} , \label{tsdm}
\end{align}
where $\beta_f=1/k_BT_f$, and $H_{\rm tot}$ is the total Hamiltonian of the system plus the reservoir, including the
coupling interaction between them, i.e.,~Eq.~(\ref{fH}).
Taking a trace over the reservoir states from the above steady state of the total density matrix, we have rigorously proven
\cite{Huang2020} that (also see the detailed derivation given in Appendix A)
\begin{align}
\rho_{_{\!S}}(t\rightarrow \infty) \! & =\! {\rm Tr}_{_{\!E}}\Big[\frac{e^{-\beta_f H_{\rm tot}}}{Z_{\rm tot}}\Big] \notag \\
&=\! \frac{\exp\big\{\ln\!\big[\!\frac{\overline{n}(t\!\rightarrow\!\infty)}{1+\overline{n}(t\!\rightarrow\!\infty)}\big]a^\+a\big\}}{1+\overline{n}(t\!\rightarrow\!\infty)}.
\label{ess}
\end{align}
This result is the same as the solution of Eq.~(\ref{smss}). The latter is the steady state of the exact time-dependent solution
Eq.~(\ref{esrdm}) solved from the exact master equation Eq.~(\ref{eme1}) for arbitrary coupling. This shows that
the equilibrium state Eq.~(\ref{tsdm}), which was originally proposed in statistical mechanics,
is indeed valid for both the weak and strong coupling between the system
and the reservoir. Furthermore, the exact particle distribution can be obtained from the dynamical evolution of exact master equation
or from the Heisenberg equation of motion directly, as shown by Eq.~(\ref{pnexp}). Thus, we have
$\overline{n}(t\!\rightarrow\!\infty)= {\rm Tr}_{{_{\!S}}+{_{\!E}}}[a^\dag a \rho_{\rm tot}(t\!\rightarrow\!\infty)]= {\rm Tr}_{_{\!S}}[a^\dag a \rho_{_{\!S}}(t\!\rightarrow\!\infty)]
=\overline{n}_{\rm exact}(t\!\rightarrow\!\infty)$. This gives a further self-consistent justification of the above conclusion.
The result presented in Fig.~\ref{pdsc}(c) shows that $\overline{n}(\omega^r_s,T_0) \neq \overline{n}_{\rm exact}(t\!\rightarrow\!\infty)$
except for the weak coupling. This indicates that in general, $T_f \neq T_0$, namely the final equilibrium temperature of the total
system cannot be the same as the initial equilibrium temperature of the reservoir when the total system reaches the new
equilibrium state, except for the very weak coupling strength.
Now the question is how to determine this steady-state equilibrium temperature $T_f$ when the system and the reservoir finally reach
the equilibrium state. According to the axiomatic description of thermodynamics \cite{Callen1985}, the equilibrium temperature
of a system is defined as the change of its internal energy with respect to the change of its thermal entropy.
This temperature definition in thermodynamics does not assume a weak coupling between the system and
the reservoir because no statistical mechanics is used in it. It is the fundamental definition of the temperature for any
two coupled thermodynamic systems when they reach equilibrium with each other, from which the zeroth law of thermodynamics is derived
\cite{Callen1985}.
Now, the average energy of the system at an arbitrary time, i.e., the nonequilibrium internal energy of the system, is given by the renormalized
Hamiltonian Eq.~(\ref{rsH}) with the exact solution of the reduced density matrix $\rho_{_{\!S}}(t)$ of Eq.~(\ref{esrdm}):
\begin{align}
U_{_{\!S}}(t) \equiv {\rm Tr}_{_{\!S}}[H^r_{_{\!S}}\!(t)\rho_{_{\!S}}(t)]. \label{intE}
\end{align}
Also, we define the von Neumann entropy with the exact reduced density matrix of Eq.~(\ref{esrdm}) as
the nonequilibrium thermodynamic entropy of the system \cite{Ali2020a,Neumann55,Callen1985}:
\begin{align}
S_{_{\!S}}(t)=\!- k_B{\rm Tr}_{_{\!S}}[\rho_{_{\!S}}(t)\ln\rho_{_{\!S}}(t)], \label{Entr}
\end{align}
where $k_B$ is the Boltzmann constant. Note that this entropy is defined with the exact reduced density matrix obtained
after exactly tracing over all the reservoir states. It also encapsulates all the renormalization
effects of the system-reservoir interaction on the system state distributions. Thus, we introduce a renormalized
nonequilibrium thermodynamic temperature \cite{Ali2020} which is defined as
\begin{align}
T^r\!(t) \equiv \frac{\partial U_{_{\!S}}(t)}{\partial S_{_{\!S}}(t)}\bigg|_{\omega^r_s} \!\! = {\rm Tr}_{_{\!S}}\bigg[H^r_{_{\!S}}\!(t) \frac{d\rho_{_{\!S}}(t)}{dS_{_{\!S}}(t)}\bigg] . \label{rdT}
\end{align}
This is a direct generalization of the concept of the equilibrium temperature to nonequilibrium states in open quantum systems.
When the system and its reservoir reach the equilibrium steady state, whether the system-reservoir coupling is strong or weak,
we can fundamentally obtain the final equilibrium temperature
$T_f \equiv T^r\!=\!T^r(t\!\rightarrow\!\infty)$ from the dynamical evolution of the open quantum system.
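For a thermal state of a single mode, the definition Eq.~(\ref{rdT}) reduces to the textbook relation $T=\partial U/\partial S$ along the Gibbs family, which can be verified by finite differences. The sketch below uses a truncated Fock space and illustrative values of the frequency and temperature ($\hbar=k_B=1$):

```python
import numpy as np

# Sketch: finite-difference check of the temperature definition T = dU/dS
# (Eq. (rdT)) for a single bosonic mode in a Gibbs state on a truncated
# Fock space.  hbar = kB = 1; frequency and temperature are illustrative.

w_r, n_max = 1.0, 400

def thermal(beta):
    # internal energy U and von Neumann entropy S of rho ~ exp(-beta*w_r*n)
    n = np.arange(n_max)
    p = np.exp(-beta * w_r * n)
    p /= p.sum()
    return (w_r * n * p).sum(), -(p * np.log(p)).sum()

T, dbeta = 2.0, 1e-5
U1, S1 = thermal(1.0 / T - dbeta)
U2, S2 = thermal(1.0 / T + dbeta)
T_num = (U1 - U2) / (S1 - S2)          # dU/dS along the thermal family
print(T, T_num)                        # the two agree
```

In the nonequilibrium case of Eq.~(\ref{rdT}), the same derivative is taken along the actual trajectory $\rho_{_{\!S}}(t)$ of the exact master equation rather than along an equilibrium family.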
\begin{figure}[ht]
\centering
\includegraphics[width=8.5cm]{Fig2.pdf}
\caption{
(a)-(c) The nonequilibrium dynamical evolution of the internal energy, the entropy and the corresponding renormalized
temperature, given respectively by Eqs.~(\ref{intE})-(\ref{rdT})
for different coupling strengths $\eta / \eta_c = 0.3, 0.5, 0.8$
(correspond to the blue-dot-dashed, green-dashed, red-dot lines, respectively).
(d) The steady-state values of the renormalized frequency $\omega^r_s$ and renormalized temperature $T^r$ as
a function of the coupling strength $\eta$.
(e) The steady-state particle distribution as a function of coupling strength $\eta$. It shows that the
exact solution $\overline{n}_{\rm exact}(t\!\rightarrow\!\infty)$ of Eq.~(\ref{sspd}) (the blue-dashed line) and
the Bose-Einstein distribution $\overline{n}(\omega^r_s,T^r)$ with the renormalized frequency
and the renormalized temperature (the red-dot line) agree perfectly with each other.
The green-dashed-dot line is $\overline{n}(\omega^r_s,T_0)$ without the renormalized temperature,
which cannot describe the exact solution solved from the exact master equation.
Other parameters are taken as the same as that in Fig.~\ref{pdsc}.
\label{pdsc2}}
\end{figure}
From the exact solution of the reduced density matrix $\rho^{\rm exact}_{_{\!S}}(t)$ of Eq.~(\ref{esrdm}), we calculate
the time-dependence of the internal energy $U_{_{\!S}}(t)$, the entropy $S_{_{\!S}}(t)$ and the dynamical renormalized
temperature $T^r(t)$ for different
coupling strengths. The corresponding results are presented in Fig.~\ref{pdsc2}(a)-(c), respectively. It shows
explicitly how the nonequilibrium internal energy, entropy and renormalized temperature evolve
differently for different system-reservoir coupling strengths. Their steady-state values also approach different points
for different coupling strengths. The different steady-state internal energies and entropies associated with different
system-reservoir coupling strengths result in different
steady-state temperatures. This indicates that the reservoir temperature cannot remain unchanged from its initial value.
This new feature has not been discovered or noticed in any previous investigation of strong-coupling quantum thermodynamics
\cite{Seifert2016,Carrega2016,Ochoa2016,Jarzynski2017,Marcantoni2017,Bruch2018,
Perarnau2018,Hsiang2018,Anders2018,Strasberg2019,Newman2020,Rivas2020}.
In Fig.~\ref{pdsc2}(d), we plot the steady-state
renormalized temperature, $T^r\!=\!T^r(t\!\rightarrow\!\infty)$, as a function of the coupling strength $\eta/\eta_c$.
Using this renormalized temperature, we further plot the Bose-Einstein distribution with both the renormalized energy and
the renormalized temperature: $\overline{n}(\omega^r_s,T^r)=1/[e^{\hbar \omega^r_s/k_BT^r}-1]$, see the red-dot line
in Fig.~\ref{pdsc2}(e). \textit{Remarkably, it perfectly reproduces the exact solution of Eq.~(\ref{sspd}),} i.e.,
\begin{align}
\overline{n}_{\rm exact}(t\!\rightarrow\!\infty)=\overline{n}(\omega^r_s,T^r)=\frac{1}{e^{\hbar \omega^r_s/k_BT^r}-1}.
\end{align}
In other words, in the steady state, the exact solution of the steady-state particle occupation
solved from the exact dynamics of the open quantum system obeys the standard Bose-Einstein distribution
only for the renormalized Hamiltonian Eq.~(\ref{rsH}) with the renormalized temperature Eq.~(\ref{rdT}).
This provides very strong proof that in strong-coupling quantum thermodynamics
both the system Hamiltonian and the temperature must be renormalized.
Furthermore, in terms of the renormalized Hamiltonian Eq.~(\ref{rsH}) and the renormalized temperature Eq.~(\ref{rdT}),
the steady state Eq.~(\ref{rss}) can be expressed as the standard Gibbs state,
\begin{align}
\rho^{\rm exact}_{_{\!S}}\!(t\!\rightarrow\!\infty)\! & = \!\! \sum_{n=0}^\infty \!\frac{[\overline{n}(\omega^r_s,T^r)]^n} {[1+ \overline{n}(\omega^r_s,T^r)]^{n+1}}
| n \rangle \langle n | \notag \\ & =\! \frac{1}{Z^r_{_{\!S}}} e^{-\beta^r\!H^r_s} , \label{rsspd}
\end{align}
where
$Z^r_{_{\!S}}\!=\!{\rm Tr}_{_{\!S}}[e^{-\beta^r H^r_{_{\!S}}}]$ is the renormalized partition function, and $\beta^r\!=\!1/k_BT^r$
is the inverse renormalized temperature in the steady state. This is a direct proof
of how statistical mechanics, as a consequence of disorder or randomness in nature,
emerges from the exact dynamical evolution of quantum mechanics.
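As a consistency check of Eq.~(\ref{rsspd}), the occupation-number weights $\overline{n}^n/(1+\overline{n})^{n+1}$ with $\overline{n}=1/(e^{\beta^r\hbar\omega^r_s}-1)$ must coincide term by term with the Boltzmann weights of the renormalized Gibbs state. A minimal numerical sketch (the value of $x=\hbar\omega^r_s/k_BT^r$ is an arbitrary illustration, not taken from the figures):

```python
import math

def thermal_weights_from_occupation(nbar, nmax):
    """p_n = nbar^n / (1 + nbar)^(n+1): the steady-state number distribution."""
    return [nbar**n / (1.0 + nbar)**(n + 1) for n in range(nmax)]

def gibbs_weights(x, nmax):
    """p_n = e^{-n x} / Z with x = hbar*omega_r/(kB*T_r) and Z = 1/(1 - e^{-x})."""
    Z = 1.0 / (1.0 - math.exp(-x))
    return [math.exp(-n * x) / Z for n in range(nmax)]

x = 0.7                            # illustrative hbar*omega_r/(kB*T_r)
nbar = 1.0 / (math.exp(x) - 1.0)   # Bose-Einstein occupation
p1 = thermal_weights_from_occupation(nbar, 20)
p2 = gibbs_weights(x, 20)
assert all(abs(a - b) < 1e-12 for a, b in zip(p1, p2))
```

The equality follows from $\overline{n}/(1+\overline{n})=e^{-x}$ and $1/(1+\overline{n})=1-e^{-x}$.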
Moreover, one can check that in the very weak
coupling regime $\eta\!\ll\!\eta_c$, $\Delta(\omega)\!\rightarrow\!0$ and $D(\omega)\!\rightarrow\!\delta(\omega-\omega_s)$
so that the steady state solution of Eq.~(\ref{sspd}) is directly reduced to $\overline{n}(\omega_s,T_0)$
\cite{Xiong2015,Xiong2020},
and
\begin{align}
\rho^{\rm exact}_{_{\!S}}\!(t\!\rightarrow\!\infty)\! \xrightarrow{\eta\ll \eta_c} & \sum_{n=0}^\infty\! \frac{[\overline{n}(\omega_s,T_0)]^n}
{[1\!+\!\overline{n}(\omega_s,T_0)]^{n+1}} | n \rangle \langle n | \notag \\
& =\! \frac{1}{Z_{_{\!S}}} e^{-\beta_0 H_{_{\!S}}} . \label{rssw}
\end{align}
This reproduces the expected solution of the statistical mechanics in the weak coupling regime.
Figures \ref{pdsc} and \ref{pdsc2} also show that $\hbar\omega^r_s\!\rightarrow\!\hbar\omega_s$ and $T^r\!\rightarrow\!T_0$
at very weak coupling. Thus, the equilibrium hypothesis of thermodynamics and statistical mechanics
is deduced rigorously from the dynamics of quantum systems. This solves the long-standing problem
of how thermodynamics and statistical mechanics emerge from quantum dynamical evolution \cite{Huang1987}.
On the other hand, $\eta_c= \omega_s/\omega_c$ is a critical coupling strength for the Ohmic spectral density:
when $\eta > \eta_c$, the system possesses a dissipationless localized bound state (localized mode) at frequency
$\omega_b=\omega_s + \Delta(\omega_b)$ with $J(\omega_b)=0$ \cite{Zhang2012}.
Once such a localized mode exists, the spectral function $D(\omega)$ of the system in Eq.~(\ref{ut0}) is modified as
\begin{align}
D(\omega) \! = Z(\omega_b) \delta(\omega\!-\!\omega_b) + \! \frac{J(\omega)}{[\omega\!-\!\omega_s\!-\!\Delta(\omega)]^2 \!+\! \pi^2J^2(\omega)},
\end{align}
where $Z (\omega_b) = [1-\partial \Sigma(\omega)/\partial \omega]^{-1}\big|_{\omega=\omega_b}$ is the localized bound state wavefunction.
Then, the asymptotic value of the Green function $u(t\!\rightarrow\!\infty,t_0)$ never vanishes. As a result,
the steady state of the reduced density matrix Eq.~(\ref{esrdm}) cannot be reduced to Eq.~(\ref{smss}). It always depends on
the initial state distribution $\rho_{mm'}$ of Eq.~(\ref{inis}). In other words, the system cannot be thermalized
with the reservoir \cite{Xiong2015,Xiong2020,Ali2020}.
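The bound-state condition $\omega_b=\omega_s+\Delta(\omega_b)$ with $J(\omega_b)=0$ can be located numerically by root finding. The sketch below assumes the Ohmic spectral density $J(\omega)=\eta\,\omega\,e^{-\omega/\omega_c}$ on $\omega\ge0$ (so that $J(\omega_b)=0$ requires $\omega_b<0$) and illustrative parameters in units $\hbar=\omega_s=1$; it confirms that a localized mode appears once $\eta>\eta_c=\omega_s/\omega_c$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Illustrative parameters in units hbar = omega_s = 1 (not taken from the figures):
omega_s, omega_c = 1.0, 5.0
eta_c = omega_s / omega_c            # critical coupling for the Ohmic bath

def Delta(omega, eta):
    """Reservoir-induced level shift Delta(w) = int_0^inf J(w')/(w - w') dw'
    with J(w) = eta * w * exp(-w/omega_c); for w < 0 the integrand is regular."""
    integrand = lambda wp: eta * wp * np.exp(-wp / omega_c) / (omega - wp)
    val, _ = quad(integrand, 0.0, 50.0 * omega_c, limit=200)
    return val

def bound_state_frequency(eta):
    """Solve w_b = omega_s + Delta(w_b) below the band edge (w_b < 0, J(w_b) = 0)."""
    f = lambda w: w - omega_s - Delta(w, eta)
    return brentq(f, -10.0 * omega_s, -1e-9)

wb = bound_state_frequency(1.2 * eta_c)   # eta > eta_c: localized mode exists
assert wb < 0.0
```

For $\eta<\eta_c$ the function $\omega-\omega_s-\Delta(\omega)$ has no zero below the band edge, consistent with the threshold stated in the text.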
\begin{widetext}
\begin{figure}[ht]
\includegraphics[width=15cm]{Fig3.pdf}
\caption{\label{ultro_s}
The nonequilibrium dynamical evolution of the internal energy $U_{_{\!S}}(t)$, the entropy $S_{_{\!S}}(t)$ and the renormalized
nonequilibrium temperature $T^r(t)$ for different system initial states $|n_0\rangle = |5\rangle, |10\rangle$ and $|15\rangle$ (corresponding
to the blue dashed-dot line, the green dashed line and the red solid line, respectively). The left, the middle and the right
panels correspond to (a) the weak coupling ($\eta \ll \eta_c$), (b) the strong coupling ($\eta < \eta_c$), and (c) the
ultra-strong coupling ($\eta > \eta_c$). The initial bath temperature $T_0=30 \hbar \omega_s$. Other parameters are taken
the same as those in Fig.~\ref{pdsc}.}
\end{figure}
\end{widetext}
In Fig.~\ref{ultro_s}, we plot the nonequilibrium dynamical evolution of the internal energy, the entropy production and the
renormalized temperature with different initial states for the very weak coupling ($\eta=0.01 \eta_c \ll \eta_c$), the strong
coupling ($\eta =0.5 \eta_c < \eta_c$) and the ultra-strong coupling ($\eta=1.2 \eta_c > \eta_c$) cases. The results show
that when $\eta > \eta_c$, different initial states of the system lead to different steady states. That is, the equilibrium hypothesis
of classical thermodynamics and statistical mechanics breaks down at ultra-strong coupling. Also note that after considering
the renormalization of the system Hamiltonian, the dynamics of the internal energy and the
renormalized temperature are significantly changed, in particular in the strong coupling regime, in comparison with our
previous study \cite{Ali2020} where no renormalization of the system Hamiltonian is taken into account. On the other hand,
regarding the system and the reservoir as a many-body system, the existence of the localized bound state in
the regime $\eta > \eta_c$ corresponds to a realization of the many-body localization \cite{Nandkishore15}.
When the coupling strength $\eta$ crosses the critical value $\eta_c$, the transition from thermalization
to many-body localization occurs \cite{Nandkishore15}. Our exact solution provides the foundation of this transition
between thermodynamics and many-body localization.
\subsection{Quantum work and quantum heat}
Quantum mechanics does not introduce the concepts of work and heat because it deals with closed systems.
For open quantum systems,
the exchanges of matter, energy and information between the system and the reservoir cause the energy of
the system to change during the nonequilibrium evolution. This results in the work and chemical work (associated with the chemical
potential) done on or by the
system, and the heat flowing into or out of the system. But usually these exchanges of matter, energy and information
are correlated and interfere with each other. This makes it difficult to clearly define the concepts of work, heat
and chemical work in quantum thermodynamics. For the photons and phonons described by Eq.~(\ref{fH}), no
matter is exchanged between the system and the reservoir, so no chemical work is involved (the chemical potential is zero here).
Thus, the energy change of the system involves only work and heat. After exactly integrating out the reservoir degrees
of freedom, the reduced density matrix Eq.~(\ref{esrdm}) and the associated renormalized system Hamiltonian
Eq.~(\ref{rsH}) can be used to properly define thermodynamic work and heat within the quantum mechanics framework.
The chemical work will be considered when we study fermion systems, as we will discuss in the next section.
As it is shown from Eq.~(\ref{intE}), the nonequilibrium change of the internal energy in time contains two parts.
One is the change (i.e.~the renormalization) of the system Hamiltonian $H^r_{_{\!S}}\!(t)$
(through the renormalization of the energy level $\hbar\omega^r_{_{\!S}}(t)$) which corresponds to the \textit{quantum work}
done on the system. Note that in quantum mechanics, the concept of volume in a physical system is mainly characterized
by energy levels through energy quantization. Thus, the change of volume is naturally replaced by the change of
energy levels, which results in a proper definition of work in quantum mechanics \cite{Zemansky1997}.
The other part is the change of the density state $\rho_{_{\!S}}(t)$ which corresponds to \textit{quantum heat}
associated with the entropy production. Consequently,
\begin{align}
\frac{dU_{_{\!S}}(t)}{dt} & =\!{\rm Tr}_{_{\!S}}\Big[\rho_{_{\!S}}(t)\frac{dH^r_{_{\!S}}\!(t)}{dt}\Big]+{\rm Tr}_{_{\!S}}\Big[H^r_{{_{\!S}}}\!(t)\frac{d\rho_{_{\!S}}(t)}{dt}\Big] \notag \\
&= \frac{dW_{_{\!S}}(t)}{dt}+\frac{dQ_{_{\!S}}(t)}{dt} .
\label{1stl}
\end{align}
This is the first law of nonequilibrium quantum thermodynamics.
Thus, the quantum work and quantum heat can be naturally determined by
\begin{subequations}
\begin{align}
\frac{dW_{_{\!S}}(t)}{dt} &=\!{\rm Tr}_{_{\!S}}\Big[\rho_{_{\!S}}(t)\frac{dH^r_{_{\!S}}(t)}{dt}\Big] = \overline{n}(t)\frac{d(\hbar\omega^r_s(t))}{dt} , \\
\frac{dQ_{_{\!S}}(t)}{dt} & ={\rm Tr}_{_{\!S}}\Big[H^r_{_{\!S}}(t)\frac{d\rho_{_{\!S}}(t)}{dt}\Big] =T^r\!(t) \frac{dS_{_{\!S}}(t)}{dt}, \label{heatt}
\end{align}
\end{subequations}
where $\overline{n}(t)= {\rm Tr}_{_{\!S}}[a^\+a\rho_{_{\!S}}(t)]$ is given by Eq.~(\ref{pnexp}).
The second equalities in the above equations explicitly use the renormalized system Hamiltonian given after
Eq.~(\ref{eme1}) and the definition of the renormalized temperature Eq.~(\ref{rdT}), respectively.
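Equation~(\ref{1stl}) is simply the product rule applied to $U_{_{\!S}}(t)=\hbar\omega^r_s(t)\,\overline{n}(t)$, which can be verified by finite differences on any smooth trajectories. The trajectories below are artificial illustrations (not the exact solution), in units $\hbar=1$:

```python
import numpy as np

t = np.linspace(0.0, 10.0, 20001)
dt = t[1] - t[0]

# Illustrative smooth trajectories (hypothetical, not the exact solution):
omega_r = 1.0 + 0.3 * np.exp(-t)           # renormalized frequency, hbar = 1
nbar = 2.0 * (1.0 - np.exp(-0.5 * t))      # mean occupation

U = omega_r * nbar                          # internal energy U = hbar*w_r*nbar
dU = np.gradient(U, dt)
dW = nbar * np.gradient(omega_r, dt)        # work rate:  nbar d(hbar w_r)/dt
dQ = omega_r * np.gradient(nbar, dt)        # heat rate:  hbar w_r d(nbar)/dt

# The first law dU = dW + dQ holds point by point (interior grid points):
assert np.max(np.abs((dU - dW - dQ)[1:-1])) < 1e-6
```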
In the literature, there are various definitions of work and heat for strong-coupling quantum thermodynamics,
but no consensus has been reached. The main concern is how to correctly include the system-reservoir coupling energy
into the internal energy of the system \cite{Esposito2010b,Binder2015,Esposito2015,Alipour2016,Strasberg2017,
Perarnau2018,Rivas2020}. The difficulty comes from the fact that most open quantum systems cannot be solved
exactly, so it is not clear how to properly separate the contributions of the system-reservoir coupling interaction into
the system and into the reservoir, respectively. However, this difficulty can be overcome in our exact master equation
formalism, because we obtain the renormalized system Hamiltonian accompanying the reduced density matrix after
exactly integrating out all the reservoir degrees of freedom, namely after exactly carrying out the partial trace over all the reservoir states.
Thus, the renormalized system Hamiltonian contains all possible
contributions of the system-reservoir coupling interaction to the system energy.
Explicitly, let us rewrite the renormalized system Hamiltonian Eq.~(\ref{rsH}) as
\begin{align}
H_{_{\!S}}^r(t)& =\hbar \omega^r_s(t,t_0)a^\+a = \hbar \omega_sa^\+a + \delta\omega_{_{\!S}}(t,t_0)a^\+a\notag \\
&=H_{_{\!S}} + \delta H_{_{\!S}}(t). \label{rsh}
\end{align}
From Eq.~(\ref{re}) and Eq.~(\ref{ut}), we have
\begin{align}
\omega^r_s(t,t_0) &= \omega_{_{\!S}} \!+\delta \omega_{_{\!S}}(t,t_0) \notag \\
& = \omega_{_{\!S}} \!+ \frac{1}{2}{\rm Im}\bigg[\! \int^t_{t_0} \!\! d\tau g(t,\tau) u(\tau,t_0)/ u(t,t_0)\bigg]. \label{rsf}
\end{align}
Here $H_{_{\!S}}= \hbar \omega_sa^\+a $ is the bare Hamiltonian of the system. The second term in Eq.~(\ref{rsf})
contains all order contributions of the system-reservoir coupling interaction to the system energy, as shown in Fig.~\ref{figrsf}. Figure
\ref{figrsf}(a) is a diagrammatic plot of the bare Hamiltonians of the system, the reservoir and the interaction between them, respectively.
Figure \ref{figrsf}(b) is the diagrammatic expansion (up to infinite order) of the retarded Green function of Eq.~(\ref{ut}), from which all order
renormalization effects on the system energy change (the system frequency shift) are reproduced. This diagrammatic expansion up to infinite
order illustrates the nonperturbative renormalized energy arising from the system-reservoir interaction in our exact master equation theory.
On the other hand, it is interesting to see that if we approximately replace the full solution of the Green function $u(\tau,t_0)$
with the free-particle (zeroth-order) Green function $u_0(\tau,t_0) = e^{-i\omega_{_{\!S}} (\tau-t_0)}$ (also
for $u(t,t_0)$) in Eq.~(\ref{rsf}), the result is just the second-order renormalized energy correction. By applying this same approximation
to the dissipation and fluctuation coefficients in Eq.~(\ref{fmrc}), it is straightforward to obtain the time-dependent decay rate and
noise in the Born-Markovian master equation, as we have shown in our previous work \cite{Xiong2010}. But once we have the exact master
equation with the exact solution, such an approximate master equation is no longer needed.
\begin{figure}[ht]
\centering
\includegraphics[width=8cm]{Fig4.pdf}
\caption{(a) The diagrammatic representation of the Hamiltonians of the system, the reservoir and their interaction.
(b) The diagrammatic Dyson expansion of Eq.~(\ref{ut}) in the energy domain, where
$\Sigma(\omega)\!=\! \int d\omega' \frac{J(\omega')}{\omega-\omega'}$ is the self-energy arising from the
coupling between the system and the reservoir, and $J(\omega)\equiv \sum_k|V_k|^2\delta(\omega-\omega_k)$. The renormalized
system energy $\hbar\omega^r_s$ and the dissipation coefficient $\gamma$ of Eqs.~(\ref{re})-(\ref{ds}) are
determined nonperturbatively from
Eq.~(\ref{ut}) with a Laplace transformation, which contains all order contributions up to infinite order from
the system-reservoir coupling Hamiltonian, as shown in this diagrammatic expansion.}
\label{figrsf}
\end{figure}
In Fig.~\ref{C} (a)-(b), we plot the nonequilibrium evolution of $dW_{_{\!S}}(t)/dt$ and $dQ_{_{\!S}}(t)/dt$ for different coupling
strengths. The negative values of $dW_{_{\!S}}(t)/dt$ show that quantum work is done by the system during the quantum mechanical time
evolution, and more work is done by the system for stronger system-reservoir coupling.
Meanwhile, $dQ_{_{\!S}}(t)/dt$ is first negative and then becomes positive in time, which shows that quantum heat flows into the
reservoir in the beginning and then flows back to the system at later times.
This corresponds to the system dissipating energy very quickly into the reservoir at the very beginning,
after which the thermal fluctuations arising from the reservoir make the heat flow back slowly into the system.
This heat flowing process can indeed be explained clearly from the exact master equation Eq.~(\ref{eme1})
combined with Eq.~(\ref{heatt}). It directly results in
\begin{align}
dQ_{_{\!S}}(t)/dt= \hbar \omega^r_s\big[-2\gamma(t,t_0)n(t) + \widetilde{\gamma}(t,t_0)\big],
\end{align}
where the first term is the contribution from dissipation and the second is the contribution of fluctuations in our
exact master equation. That is, the heat flow in open quantum systems is a combined effect of the dissipation
and fluctuation dynamics, which eventually brings the system and the reservoir to equilibrium. This is
also a renormalization effect.
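For constant (time-independent) coefficients, the corresponding occupation equation $d\overline{n}/dt=-2\gamma\,\overline{n}+\widetilde{\gamma}$ relaxes monotonically toward $\widetilde{\gamma}/2\gamma$, where the dissipative and fluctuation terms balance and the heat current vanishes; the sign change of $dQ_{_{\!S}}(t)/dt$ seen in Fig.~\ref{C} arises from the time dependence of $\gamma(t,t_0)$ and $\widetilde{\gamma}(t,t_0)$. A toy sketch with illustrative constant coefficients (in units of $\hbar\omega^r_s$):

```python
import numpy as np

gamma, gamma_tilde = 0.2, 0.6       # illustrative constant coefficients
n0 = 5.0                            # initial occupation, above steady state

t = np.linspace(0.0, 40.0, 4001)
n_ss = gamma_tilde / (2.0 * gamma)  # steady-state occupation where dQ/dt = 0

# Closed-form solution of  dn/dt = -2*gamma*n + gamma_tilde :
n = n_ss + (n0 - n_ss) * np.exp(-2.0 * gamma * t)

dQ_rate = -2.0 * gamma * n + gamma_tilde   # heat rate in units of hbar*omega_r

# Starting above n_ss, the system releases heat (dQ/dt < 0) and relaxes:
assert dQ_rate[0] < 0.0
assert abs(n[-1] - n_ss) < 1e-6
```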
Furthermore, the quantum Helmholtz free energy of the system is defined by a Legendre transformation from
the internal energy $U_{_{\!S}}(t)$ \cite{Ali2020a,Callen1985}:
\begin{align}
F_{_{\!S}}(t) & = U_{_{\!S}}(t) - T^r\!(t)S_{_{\!S}}(t) \notag \\ & \stackrel{t\rightarrow \infty}{\longrightarrow} -(1/\beta^r) \ln Z^r_{_{\!S}} . \label{rfe}
\end{align}
From the above solution, we have
\begin{align}
dF_{_{\!S}}(t) = dW_{_{\!S}}(t) - S_{_{\!S}}(t)dT^r\!(t),
\end{align}
which naturally leads to the consistency that the quantum thermodynamic
work done on the system can be identified with the change of the Helmholtz free energy of
the system in isothermal processes \cite{Callen1985}. Moreover, the specific heat calculated from the
internal energy and from the Gibbs state with the renormalized Hamiltonian and temperature are also
identical, as shown in Fig.~\ref{C}(c),
\begin{align}
C =&\frac{dQ_{_{\!S}}}{dT^r}= T^r \frac{dS_{_{\!S}}}{dT^r} = \frac{\partial U_{_{\!S}}}{\partial T^r}\bigg|_{\omega^r_s} , \label{rhc}
\end{align}
where the third law of thermodynamics is justified from the specific heat at arbitrary coupling strength: $C \sim (T^r)^3$
as $T^r\!\rightarrow\!0$. Thus, a consistent formalism of quantum thermodynamics from the weak coupling
to the strong coupling is obtained from the simple open quantum system of Eq.~(\ref{fH}).
\begin{figure}[ht]
\center
\includegraphics[width=5.5cm]{Fig5.pdf}
\caption{\label{C} (a)-(b) The nonequilibrium evolution of the quantum work and quantum heat rates,
$dW_{_{\!S}}(t)/dt$ and $dQ_{_{\!S}}(t)/dt$ (in units of $\hbar \omega_s^2$), for different coupling strengths.
(c) The steady-state specific heat as a function of the renormalized temperature, calculated from the derivative of the internal energy
with respect to the renormalized temperature,
Eq.~(\ref{rhc}) (red lines), and from the partition function of the Gibbs state Eq.~(\ref{rsspd}) for different
initial temperatures. The dashed-dot, dot and dashed lines correspond to the different coupling strengths $\eta/\eta_c=0.3,0.5,0.8$,
respectively. Other parameters are taken the same as those in Fig.~\ref{pdsc}.}
\end{figure}
\section{The more general formulation of quantum thermodynamics for all couplings}
\subsection{Multi-level open quantum systems coupled to multiple reservoirs}
The results from the exact solution of the single-mode bosonic open system in the last section show that different from the previous investigations \cite{Seifert2016,Carrega2016,Ochoa2016,
Jarzynski2017,Marcantoni2017,Bruch2018,Perarnau2018,Hsiang2018,Anders2018,Strasberg2019,Newman2020,Ali2020a,Rivas2020},
only by introducing the renormalized temperature and incorporating with the renormalized system Hamiltonian,
can we obtain the consistent quantum thermodynamics for all coupling strengths.
Now we extend this quantum thermodynamics formulation to the more general situation: a multi-level
system coupled to multiple reservoirs (including both bosonic and fermionic systems) through
particle exchange (tunneling) processes.
In a quasiparticle picture, the Hamiltonian of a microscopic system in the energy eigenbasis
can be written as $H_S= \sum_i \varepsilon_{i} a^\+_ia_i$.
As a specific example, consider the system to be an individual system and the reservoir to be
a many-body system. The system Hamiltonian can generally be expressed as
\begin{align}
H_{_{\!S}}= \frac{{\bf P}^2}{2m} + V({\bf Q}) = \sum_i \varepsilon_i |\psi_i\rangle \langle \psi_i |=
\sum_i \varepsilon_i a^\dag_i a_i . \label{sph}
\end{align}
In Eq.~(\ref{sph}), the second equality is the spectral decomposition of the system Hamiltonian:
$H_{_{\!S}} |\psi_i\rangle= \varepsilon_i |\psi_i\rangle$, and the last equality uses the second
quantization language: $ |\psi_i\rangle= a^\dag_i |0\rangle$ and $a_i |0\rangle=0$,
and $|0\rangle$ is the vacuum state.
The particle creation and annihilation operators $a^\dag_i$ and $a_i$ obey the
standard bosonic commutation and fermionic anticommutation relations: $[a_i, a^\dag_j]_\mp
= a_ia^\dag_j \mp a^\dag_j a_i=\delta_{ij}$ for bosonic and fermionic systems,
respectively.
Similarly, the Hamiltonian of a reservoir can also be written as
$H_{_{\!E}} = \sum_{ k} \epsilon_{ k} b^\dag_{ k} b_{ k}$,
where $\epsilon_{ k}$ is usually a continuous spectrum and
could have a band structure for a structured reservoir. For a many-body reservoir in which
the particle-particle interaction is not too strong, the single-quasiparticle picture works
\cite{Thouless1972}. Then the reservoir Hamiltonian can be expressed approximately as
\begin{align}
H_{_{\!E}} &\simeq \! \sum_j \! \Big[\frac{\boldsymbol p^2_j}{2m_j} \!+\! U({\boldsymbol q}_j)
\!+\! \overline{\sum_{j'}\!V({\boldsymbol q}_j, {\boldsymbol q}_{j'})} \, \Big] \notag \\
& = \sum_k \epsilon_k |\psi_k\rangle \langle \psi_k| = \sum_k \epsilon_k b^\dag_k b_k,
\end{align}
where $\overline{\sum_{j'} V({\boldsymbol q}_j, {\boldsymbol q}_{j'})}$ represents
the effective mean-field potential of many-body interactions, and $ \big[\frac{\boldsymbol p^2_j}{2m_j}
+ U({\boldsymbol q}_j) + \overline{\sum_{j'}V({\boldsymbol q}_j, {\boldsymbol q}_{j'})}\,\big]|\psi_k\rangle =
\epsilon_k |\psi_k\rangle$ gives the quasiparticle continuous spectrum of the reservoir.
The reservoir particle creation and annihilation operators $b^\dag_k$ and $b_k$ also obey the
standard bosonic commutation or fermionic anticommutation relations. In fact,
the system can also be either a simple system or such a many-body system.
To dynamically address statistical mechanics and thermodynamics from quantum mechanical principles, the fundamental
system-reservoir interactions are required to contain at least the basic physical processes of energy exchange, matter
exchange and information exchange between the system and the reservoirs.
The simplest realization for such a minimum requirement is the quantum tunneling Hamiltonian,
\begin{align}
H_{{_{\!S}}{_{\!E}}}= \sum_{ ik}\big(V_{ ik}a^\dag_i b_{ k}+ V^*_{ ik}b^\dag_{ k} a_i\big),
\end{align}
which is also the basic Hamiltonian in the study of quantum transport in mesoscopic physics as well as in nuclear, atomic
and condensed matter physics for various phenomena \cite{Hang1996,Miroshnichenko2010,Jin2010,Lei2012}.
The coupling strengths $V_{ ik}$ are proportional to the quasiparticle wavefunction overlaps between
the system and reservoirs and therefore are tunable through nanotechnology manipulations \cite{Hang1996,Miroshnichenko2010}
so that they can be weak or strong coupling. More discussions about fundamental system-reservoir interactions will be
given in the next section.
Thus, a basic Hamiltonian with the minimum requirement for solving the foundation of quantum thermodynamics and statistical mechanics
can be modeled as
\begin{align}
H_{\rm tot}(t) =&\, H_{_{\!S}}(t)+\sum_\alpha H^\alpha_{_{\!E}}(t)+\sum_\alpha H^\alpha_{{_{\!S}}{_{\!E}}}(t) \notag \\
=& \sum_i \varepsilon_{i}(t) a^\+_ia_i \!+ \!\! \sum_{\alpha k} \epsilon_{\alpha k}(t) b^\dag_{\alpha k} b_{\alpha k} \notag \\
&+ \! \sum_{\alpha ik}\!\big[V_{\alpha ik}(t)a^\dag_i b_{\alpha k}\!+\! V^*_{\alpha ik}(t)b^\dag_{\alpha k} a_i\big], \label{qth}
\end{align}
which describes the system of concern coupled to multiple reservoirs. This is a generalization of the Fano-Anderson Hamiltonian
we introduced earlier \cite{Zhang2012,Zhang2018}. The index $\alpha$ stands for the different reservoirs.
All parameters in the Hamiltonian can be controlled time-dependently with current nano and quantum technologies.
This is an exactly solvable Hamiltonian that involves explicit exchanges of energy, matter and information between the system
and the reservoirs. It allows us to rigorously solve quantum statistics and thermodynamics
from the dynamical evolution of quantum systems. Also note that the above open quantum systems are different from those
proposed by Feynman and Vernon \cite{Feynman1963} as well as by Caldeira and Leggett \cite{Leggett1983b}
in the previous investigations of dissipative quantum dynamics, in the sense that
their environment is made only of harmonic oscillators and the system-environment coupling is limited to
weak coupling.
We have derived the exact master equation of the open systems with Eq.~(\ref{qth}) for the reduced density
matrix of the system. The formal solution of the total density matrix of the Liouville-von Neumann equation (\ref{voneq})
can be expressed as:
\begin{align} \label{fsLvNe}
\rho_{_{\!S}}(t)\!=\!{\rm Tr}_{_{\!E}}\big[{\cal U}(t,t_0)\big(\rho_{_{\!S}}(t_0) {\prod}_\alpha \!\!\!\otimes \rho^\alpha_{_{\!E}}(t_0)\big){\cal U}^\dag(t,t_0)\big],
\end{align}
where ${\cal U}(t,t_0)={\cal T}_{\rightarrow}\exp\big\{\!-\!\frac{i}{\hbar}\!\int^t_{t_0}\!H_{\rm tot}(t')dt'\big\}$ is the time evolution
of the total system, and ${\cal T_\rightarrow}$ is the time-ordering operator. Here the system is initially in an arbitrary state $\rho_{_{\!S}}(t_0)$.
All reservoirs are initially in their own
equilibrium thermal states, $\rho^\alpha_{_{\!E}}(t_0)= e^{-\beta_{\alpha 0} (H^\alpha_{_{\!E}} - \mu_{\alpha 0}\hat{N}^\alpha)}/Z_\alpha$,
which can have different initial inverse temperatures $\beta_{\alpha 0}=1/k_BT_{\alpha 0}$ and different chemical potentials $\mu_{\alpha 0}$
for different reservoirs $\alpha$. Here $\hat{N}^\alpha$ is the total particle number operator of reservoir $\alpha$.
After tracing out all the environmental states through the coherent state path integrals \cite{Zhang1990}, the resulting exact master
equation of the system is indeed a generalization of Eq.~(\ref{eme1}) to multi-level open systems
\cite{Tu2008,Jin2010,Lei2012,Yang2015,Yang2017,Zhang2018,Huang2020},
\begin{align}
\frac{d}{dt}{\rho}_{_{\!S}}(t) =& \frac{1}{i\hbar} \big[ H^r_{_{\!S}}(t), \rho_{_{\!S}}(t)\big] \!+\!\! {\sum}_{ij} \! \big\{\gamma_{ij}\left( t,t_0\right)
\!\! \big[2a_{j}\rho_{_{\!S}}(t) a_{i}^{\+} \notag \\
&-\!a_{i}^{\+}a_{j}\rho_{_{\!S}}(t) \!-\!\rho_{_{\!S}}(t) a_{i}^{\+}a_{j}\big] \!+\! \widetilde{\gamma}_{ij}(t,t_0) \big [a_{i}
^{\+}\rho_{_{\!S}}(t) a_{j} \notag \\
&~~~{\pm} a_{j}\rho_{_{\!S}}(t) a_{i}^{\+}\mp a_{i}^{\+}a_{j}
\rho_{_{\!S}}(t) \!-\! \rho_{_{\!S}}(t) a_{j}a_{i}^{\+}\big]\big\},
\label{EME}
\end{align}
where the upper and lower signs of $\pm$ correspond respectively to the bosonic and fermionic systems.
In the above exact master equation, all the renormalization effects arising from the system-reservoir interactions have been taken
into account when all the environmental degrees of freedom are integrated out nonperturbatively and exactly in finding the
reduced density matrix. These renormalization effects are manifested by the renormalized system Hamiltonian,
\begin{align}
H^r_{_{\!S}}(t) ={\sum}_{ij} \varepsilon^r_{s,ij}(t,t_0)a_i^\+a_j \label{fa_rH}
\end{align}
and the dissipation and fluctuation coefficients $\gamma_{ij}(t,t_0)$ and $\widetilde{\gamma}_{ij}(t,t_0)$ in Eq.~(\ref{EME}).
These time-dependent coefficients are determined nonperturbatively and exactly by the following relations,
\begin{subequations}
\label{tddfc}
\begin{align}
& \varepsilon^r_{ij}(t,t_0) = \!- \hbar {\rm Im}\big[\dot{\bm u}(t,t_0)
\bm u^{-1}(t,t_0) \big]_{ij}, \label{fa_re} \\
& \gamma_{ij}(t,t_0) = \!-{\rm Re}\big[\dot{\bm u}(t,t_0)
\bm u^{-1}(t,t_0) \big]_{ij}, \\
& \widetilde{\gamma}_{ij}(t,t_0) =\dot{\bm v}_{ij}(t,t)\!-\!\big[\dot{\bm u}(t,t_0) \bm u^{-1}(t,t_0)
\bm v(t,t)\!+\!\text{h.c.} \big]_{ij} ,
\end{align}
\end{subequations}
where ${\bm u}(t,t_0)$ and ${\bm v}(t,t)$ are $N\times N$ nonequilibrium Green function matrices and $N$ is the total number
of energy levels in the system.
The nonequilibrium retarded Green function is defined by $u_{ij}(t,t_0) \equiv \langle [a_i(t), a^\+_j(t_0)]_\pm \rangle$, and the matrix
${\bm u}(t,t_0)$ obeys the equation of motion \cite{Zhang2012,Tu2008,Jin2010,Lei2012},
\begin{subequations}
\label{uvgf}
\begin{align}
\frac{d}{dt}\bm u(t,t_{0}) -& \frac{1}{i\hbar} {\bm \varepsilon}(t) \bm u(t, t_{0}) +\!\!\int_{t_{0}}^{t}\!\! \!\! dt' \bm g(t,t')
\bm u(t',t_{0}) =0. \label{ute}
\end{align}
The nonequilibrium correlation Green function ${\bm v}(t,t)$ obeys the nonequilibrium fluctuation-dissipation relation \cite{Zhang2012},
\begin{align}
{\bm v}(\tau,t) \!=\!\!\! \int^\tau_{t_0}\!\!dt_1\!\int^t_{t_0}\!\!dt_2\bm u(\tau,t_1)\,\widetilde{\bm g}(t_1, t_2)\,\bm u^\+(t,t_2). \label{vt}
\end{align}
\end{subequations}
The integral memory kernels $\bm g(t,t')$ and $\widetilde{\bm g}(t_1, t_2)$ are
the system-reservoir time correlations and are given by
\begin{subequations}
\label{giniti}
\begin{align}
& g_{ij}(t,t') = \! \sum_{\alpha k} \frac{1}{\hbar^2}V_{\alpha i k}(t')V^*_{\alpha jk}(t)
\exp\!\bigg\{\!\!-\!\frac{i}{\hbar}\! \!\int_t^{t'} \!\!\!\! d\tau\epsilon_{\alpha k} (\tau)\bigg\} , \label{ik} \\
& \widetilde{g}_{ij}(t_1,t_2) = \!\sum_{\alpha k}\frac{1}{\hbar^2}V_{\alpha ik}(t_2)V^*_{\alpha jk} (t_1)
\big\langle b^\dag_{\alpha k}(t_0) b_{\alpha k}(t_0) \big\rangle_E \notag \\
&~~~~~~~~~~~~~~~~~~~~\times
\!\exp\!\bigg\{\!\!-\!\frac{i}{\hbar}\! \!\int_{t_2}^{t_1} \!\!\!\!d\tau \epsilon_{\alpha k} (\tau)\bigg\} .
\end{align}
\end{subequations}
Here the initial reservoir correlation function,
\begin{align}
\big\langle b^\dag_{\alpha k}(t_0) b_{\alpha k}(t_0) \big\rangle_E & = f ( \epsilon_{\alpha k},T_{\alpha 0},\mu_{\alpha 0}) \notag \\
& =\frac{1}{[e^{(\epsilon_{\alpha k} -\mu_{\alpha 0})/k_{B}T_{\alpha 0}} {\mp} 1 ]},
\end{align}
determines the initial particle distribution of the bosons or fermions in the initial thermal
reservoir $\alpha$ with the chemical potential $\mu_{\alpha 0}$ and the temperature
$T_{\alpha 0}$ at initial time $t_0$.
In the case where the energy spectra of the reservoirs and the system-reservoir
couplings are time-independent, the memory kernels simply reduce to
$g_{ij}(t,t')\!=\!\!\!\int\!\!d\epsilon J_{ij}(\epsilon)e^{-i\epsilon(t-t')}$,
$\widetilde{g}_{ij}(t_1,t_2)\! =\!\!\!\int\!\!d\epsilon J_{ij}(\epsilon)f(\epsilon,T_\alpha,\mu_\alpha)e^{-i\epsilon(t_1-t_2)}$,
where
\begin{align}
J_{ij}(\epsilon) =\frac{1}{\hbar^2} {\sum}_{\alpha k}V_{\alpha ik}V^*_{\alpha jk}\delta(\epsilon-\epsilon_{\alpha k})
={\sum}_\alpha J_{\alpha,ij}(\epsilon),
\end{align}
and $J_{\alpha,ij}(\epsilon)$ is the spectral density matrix of reservoir $\alpha$.
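For a single level coupled to one reservoir with a Lorentzian spectral density (an illustrative choice, not the Ohmic form used earlier, made here because its memory kernel is analytic), the integro-differential equation for the retarded Green function can be time-stepped directly, e.g.\ with a trapezoidal memory integral and a Heun predictor-corrector step. All parameters below are arbitrary illustrations, in units $\hbar=1$:

```python
import numpy as np

# Single level (hbar = 1) coupled to a Lorentzian reservoir:
#   J(e) = (Gamma/2pi) * d^2 / ((e - e0)^2 + d^2),
# whose memory kernel is analytic: g(tau) = (Gamma*d/2) * exp(-(1j*e0 + d)*tau).
eps_s, e0, Gamma, d = 1.0, 1.0, 0.5, 2.0       # illustrative parameters

def kernel(tau):
    return 0.5 * Gamma * d * np.exp(-(1j * e0 + d) * tau)

T, N = 20.0, 2000
dt = T / N
t = np.linspace(0.0, T, N + 1)
u = np.zeros(N + 1, dtype=complex)
u[0] = 1.0                                     # u(t0, t0) = 1

def rhs(i):
    """du/dt = -1j*eps_s*u(t) - int_{t0}^{t} g(t - t') u(t') dt' (trapezoid)."""
    if i == 0:
        return -1j * eps_s * u[0]
    w = kernel(t[i] - t[:i + 1]) * u[:i + 1]
    mem = dt * (w.sum() - 0.5 * (w[0] + w[-1]))
    return -1j * eps_s * u[i] - mem

for i in range(N):                             # Heun predictor-corrector
    k1 = rhs(i)
    u[i + 1] = u[i] + dt * k1                  # predictor
    k2 = rhs(i + 1)
    u[i + 1] = u[i] + 0.5 * dt * (k1 + k2)     # corrector

assert abs(u[-1]) < 0.1                        # dissipative decay of |u(t)|
```

Near resonance the decay rate approaches $\pi J(\varepsilon_s)=\Gamma/2$, so $|u(t)|$ falls roughly as $e^{-\Gamma t/2}$ for this broad reservoir.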
\subsection{The theory of quantum thermodynamics from the weak to the strong couplings}
Again, if there exist no many-body localized bound states, the exact solution of Eq.~(\ref{EME}) has recently been obtained
\cite{Xiong2020} and its exact steady state is (see the detailed derivation in Appendix B)
\begin{align}
\rho^{\rm exact}_{_{\!S}}(t\!\rightarrow\!\infty)=\frac{\exp\Big\{\sum_{ij}\Big(\!\ln\frac{\overline{\bm n}}{I\pm\overline{\bm n}}\Big)_{ij}
a^\dag_ia_j\Big\}}{[\det(I\pm\overline{\bm n})]^{\pm 1}}
\label{gss}
\end{align}
which is a generalized Gibbs-type state. Here
$\overline{n}_{ij}=\lim_{t\rightarrow \infty} n_{ij}(t)$ is the one-particle density matrix defined
as \cite{Jin2010,Yang1962}
\begin{align}
n_{ij}(t) \equiv {\rm Tr}_{_{\!S}}[a^\+_ia_j \rho_{_{\!S}}(t)] = \rho^{(1)}_{ij}(t). \label{opdm}
\end{align}
The solution Eq.~(\ref{gss}) remains the same for initial system-reservoir correlated states with a modification of
$\widetilde{\bm g}(t_1,t_2)$ in Eq.~(\ref{vt}) to include the initial correlations between the system and
reservoirs \cite{Yang2015,Huang2020}.
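Equation~(\ref{gss}) can be sanity-checked in the smallest nontrivial case, a single fermionic mode, where the Fock space is two-dimensional and the generalized Gibbs state must reduce to ${\rm diag}(1-\overline{n},\overline{n})$. A minimal sketch with an arbitrary illustrative occupation:

```python
import numpy as np

nbar = 0.3                      # illustrative steady-state occupation (fermion)

# Fock-space number operator a^dag a for one fermionic mode: diag(0, 1)
num = np.diag([0.0, 1.0])

# Lower (fermionic) sign of Eq. (gss) for one mode:
#   rho = exp{ ln(nbar/(1 - nbar)) a^dag a } / [det(1 - nbar)]^{-1}
log_coeff = np.log(nbar / (1.0 - nbar))
rho = np.diag(np.exp(log_coeff * np.diag(num))) * (1.0 - nbar)

expected = np.diag([1.0 - nbar, nbar])
assert np.allclose(rho, expected)
assert np.isclose(np.trace(rho), 1.0)
```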
Thus, the nonequilibrium internal energy, entropy and particle number can be defined by
\begin{subequations}
\label{snd}
\begin{align}
&U_{_{\!S}}(t)\!\equiv\!{\rm Tr}_{_{\!S}}[H^r_{_{\!S}}\!(t)\rho_{_{\!S}}(t)] = \sum_{ij}\varepsilon^r_{ij}(t)n_{ij}(t), \\
& S_{_{\!S}}(t)\equiv\!-\!k_B{\rm Tr}_{_{\!S}}[\rho_{_{\!S}}(t)\ln\rho_{_{\!S}}(t)], \\
& N_{_{\!S}}(t) \equiv\!{\rm Tr}_{_{\!S}}\big[{\sum}_i a^\dag_i a_i \rho_{_{\!S}}(t)\big] \!=\! \sum_in_{ii}(t).
\end{align}
\end{subequations}
They are related to each other and may form the fundamental equation of quantum thermodynamics
\cite{Callen1985,Ali2020a}: $U_{_{\!S}}(t) = U_{_{\!S}}(\varepsilon^r_s(t),S_{_{\!S}}(t), N_{_{\!S}}(t))$. Here the energy levels
play a role similar to that of the volume \cite{Zemansky1997}. Thus,
\begin{align}
dU_{_{\!S}}(t)\!=\!dW_{_{\!S}}(t)+T^r(t)dS_{_{\!S}}(t)+\mu^r(t)dN_{_{\!S}}(t),
\end{align}
as the first law of nonequilibrium quantum thermodynamics.
Explicitly, the quantum work $dW_{_{\!S}}(t)$ done on the system arises from the changes of the energy levels,
\begin{align}
\frac{dW_{_{\!S}}(t)}{dt}=\!{\rm Tr}_{_{\!S}}\Big[\rho_{_{\!S}}(t)\frac{dH^r_{_{\!S}}(t)}{dt}\Big] \!=\! {\sum}_{ij}n_{ij}(t)\frac{d\varepsilon^r_{s,ij}(t)}{dt} .
\end{align}
The quantum heat $dQ_{_{\!S}}(t)$ (also including the chemical work $dW^c_{_{\!S}}(t)$)
comes from the changes of particle distributions and transitions (the one-particle density matrix, see Eq.~(\ref{opdm})),
\begin{align}
dQ_{_{\!S}}(t)+dW^c_{_{\!S}}(t)
&\!=\! {\sum}_{ij}\!\varepsilon^r_{s,ij}(t) dn_{ij}(t)\notag \\
&=\!T^r\!(t) dS_{_{\!S}}(t)\! +\! \mu^r\!(t)dN_{_{\!S}}(t).
\end{align}
It shows that $dn_{ij}(t)$ characterizes both the state information exchanges (entropy production)
and the matter exchanges (chemical processes for massive particles) between the system and
the reservoirs. For photon or phonon systems, the particle number is the number of
energy quanta $\hbar\omega$ so that $\mu^r(t)\!=\!0$.
From the above formulation, we can define the renormalized temperature and renormalized chemical potential by
\begin{align}
&T^r\!(t)= \! \frac{\partial U_{_{\!S}}(t)}{\partial S_{_{\!S}}(t)}\bigg|_{\varepsilon^r_s(t), N_{_{\!S}}(t)}, ~
\mu^r(t)= \! \frac{\partial U_{_{\!S}}(t)}{\partial N_{_{\!S}}(t)}\bigg|_{\varepsilon^r_s(t), S_{_{\!S}}(t)} . \label{rdTg}
\end{align}
As a result, Eq.~(\ref{gss}) can also be written as the standard Gibbs state,
\begin{align}
\rho^{\rm exact}_{_{\!S}}(t\!\rightarrow\!\infty)=\frac{1}{Z^r} \exp\big\{\!-\!\beta^r(H^r_{_{\!S}}\!-\!\mu^r \hat{N})\big\} , \label{ggss}
\end{align}
which is given in terms of the renormalized Hamiltonian $H^r_{_{\!S}}(t)$, the renormalized temperature $T^r(t)$ and the renormalized
chemical potential $\mu^r(t)$ at steady state, and $\hat{N}=\sum_ia^\+_ia_i$ is the particle
number operator of the system.
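To see explicitly that the definitions (\ref{rdTg}) are consistent with the Gibbs state (\ref{ggss}), one may use the standard grand-canonical identities (a textbook check we add here for completeness). Taking the logarithm of Eq.~(\ref{ggss}), the von Neumann entropy takes the form
\begin{align}
S_{_{\!S}}=\frac{1}{T^r}\big(U_{_{\!S}}-\mu^r N_{_{\!S}}-\Phi\big),
\qquad \Phi\equiv -k_BT^r\ln Z^r ,
\end{align}
and since $d\Phi=-S_{_{\!S}}\,dT^r-N_{_{\!S}}\,d\mu^r$ at fixed renormalized energy levels, it follows that
$dU_{_{\!S}}=T^rdS_{_{\!S}}+\mu^r dN_{_{\!S}}$, which reproduces exactly the partial derivatives in Eq.~(\ref{rdTg}).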
Because the exact solution of the steady state is a Gibbs state, thermodynamic
laws are all preserved at steady state. This completes our nonperturbative renormalization theory of quantum thermodynamics
for all the coupling strengths.
\subsection{An application to a nanoelectronic system with two reservoirs}
As a practical and nontrivial application,
we consider a nanoelectronic system, the single-electron transistor made of a quantum dot coupled
to a source and a drain, where the two leads are treated as two reservoirs \cite{Hang1996,Tu2008,Jin2010},
see Fig.~\ref{st}(a). The total Hamiltonian is
\begin{align}
H_{\rm tot}\!=& \sum_{\sigma}
\varepsilon_\sigma a^\+_{\sigma} a_\sigma\!+\!\sum_{\alpha, k, \sigma}\epsilon_{\alpha k}
c^\+_{\alpha k \sigma}c_{\alpha k \sigma} \notag \\
& +\!\sum_{\alpha, k, \sigma}(V_{\alpha k}a^\+_{\sigma}
c_{\alpha k \sigma}\!+\!V^*_{\alpha k}c^\+_{\alpha k \sigma} a_{\sigma}). \label{setH}
\end{align}
The index $\sigma=\uparrow,\downarrow$ labels electron spin, $\alpha=L, R$ labels the left and right leads.
The two leads are initially set up in thermal states with different initial temperatures $T_{L,R}$ and chemical potentials
$\mu_{L,R}$. This is a prototype with a nontrivial feature: the two reservoirs initially
have different temperatures and different chemical potentials, so that when the system reaches the steady state,
there exists only one final temperature and one final chemical potential. That is, one has to introduce
the renormalized temperature $T^r$ and the renormalized chemical potential $\mu^r$ to characterize this final
equilibrium state when the system and the two reservoirs reach
equilibrium. In contrast, other approaches proposed for strong-coupling quantum thermodynamics in the
last few years \cite{Seifert2016,Carrega2016,Ochoa2016,Jarzynski2017,
Marcantoni2017,Bruch2018,Perarnau2018,Hsiang2018,Anders2018,Strasberg2019,Newman2020,Rivas2020}
keep the reservoir temperatures unchanged and are therefore invalid even for such a simple
but nontrivial open quantum system.
To explicitly solve the renormalized thermodynamics of the above system, let $|0\rangle, |\uparrow\rangle,|\downarrow\rangle,|d\rangle$
(the empty state, the spin-up and spin-down states,
and the doubly occupied state, respectively) be the basis of the four-dimensional Hilbert space of this quantum dot system.
Then the reduced density matrix has the form,
\begin{align}
\rho(t) = \matx{\rho_{00}(t) & 0 & 0 & 0 \\ 0 & \rho_{\uparrow\uparrow}(t) & \rho_{\uparrow\downarrow}(t) & 0 \\ 0 &
\rho_{\downarrow\uparrow}(t) & \rho_{\downarrow\downarrow}(t) & 0 \\ 0 &0 & 0 & \rho_{dd}(t) } .
\end{align}
If the dot is initially empty, the $4\times4$ reduced density matrix has been solved exactly from the exact master equation \cite{Tu2011,Yang2018}:
\begin{align}
&\rho_{00}(t)\!=\!\det[I\!-\!\bm v(t)], ~~ \rho_{dd}(t)\!=\!\det[\bm {v}(t)], \notag \\
& \rho_{\uparrow\uparrow}(t)\!=\!v_{\uparrow\uparrow}(t)\!-\!\rho_{dd}(t) , ~~
\rho_{\downarrow\downarrow}(t)\!=\!v_{\downarrow\downarrow}(t)\!-\!\rho_{dd}(t), \notag \\
&\rho_{\uparrow\downarrow}(t)\!=\!v_{\uparrow\downarrow}(t)\!=\!\rho^*_{\downarrow\uparrow}(t).
\end{align}
Here the $2\times 2$ matrix Green function $\bm{v}(t)\equiv \bm{v}(t,t)$ is determined
by the Green function $\bm{u}(t,t')$.
We take the reservoir spectra to be of Lorentzian form, so that the spectral densities $J_\alpha(\epsilon)$ can be expressed as \cite{Meir1993,Tu2008}:
\begin{align}
J_{\alpha,ij}(\epsilon)=\frac{\Gamma_\alpha d^2}{\epsilon^2+d^2}\delta_{ij}~~ (i,j=\uparrow,\downarrow),
\end{align}
where $\Gamma_\alpha$ is the tunneling rate (the coupling strength) between the quantum dot and the lead $\alpha$.
For simplicity, we also ignore the spin-flip tunneling. The exact solution of the reduced density matrix is rather simple,
\begin{align}
\rho(t) = \det[1-\bm{v}(t)]\exp \Big\{ \bm{a}^\dag \ln\frac{\bm{v}(t)}{1-\bm{v}(t)} \bm{a}\Big\}.
\end{align}
Here $\bm{a}^\dag=(a^\dag_\uparrow, a^\dag_\downarrow)$, and
\begin{subequations}
\label{setvt}
\begin{align}
&v_{ii}(t) = \!\!\int^t_{t_0}\!\!\!dt_1\!\!\int^t_{t_0}\!\!\!\!dt_2\,u_{ii}(t,t_1)\,\widetilde{g}(t_1, t_2)\,u^*_{ii}(t,t_2), \\
&v_{\uparrow \downarrow}=0 , \\
& \frac{d}{dt}u_{ii}(t,t_0) + i\varepsilon_{i} u_{ii}(t,t_0) \notag \\
&~~~~~~ + \!\!\int_{t_0}^t\!\!\! dt' \!\! \int \!\! \frac{d\epsilon}{2\pi}\,
\frac{\Gamma d^2\, e^{-i\epsilon(t-t')}}{\epsilon^2+d^2}\,u_{ii}(t',t_0) = 0 ,
\end{align}
\end{subequations}
for $i=\uparrow, \downarrow$, and $\Gamma=\Gamma_{L}+\Gamma_{R}$.
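For this Lorentzian spectral density, the memory kernel of the above Green function equation can be evaluated in closed form (a standard contour integration, added here as an illustrative check):
\begin{align}
g(t-t')=\!\int\!\frac{d\epsilon}{2\pi}\,\frac{\Gamma d^2}{\epsilon^2+d^2}\,e^{-i\epsilon(t-t')}
=\frac{\Gamma d}{2}\,e^{-d|t-t'|} .
\end{align}
In the wideband limit $d\rightarrow\infty$, $g(t-t')\rightarrow\Gamma\,\delta(t-t')$ and one recovers the familiar Markovian solution
$u_{ii}(t,t_0)=e^{-(i\varepsilon_i+\Gamma/2)(t-t_0)}$, while a finite bandwidth $d$ retains the non-Markovian memory effects.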
As a result, the nonequilibrium internal energy, the entropy and the total average particle number can be found analytically,
\begin{subequations}
\begin{align}
U_{_{\!S}}(t)= & \varepsilon^r_\uparrow(t) v_{\uparrow\uparrow}(t) + \varepsilon^r_\downarrow(t) v_{\downarrow\downarrow}(t) , \\
S_{_{\!S}}(t)= & -k_B\big[v_{\uparrow\uparrow}(t)\ln v_{\uparrow\uparrow}(t)+v_{\downarrow\downarrow}(t)\ln v_{\downarrow\downarrow}(t) \notag \\
& +(1-v_{\uparrow\uparrow}(t))\ln (1-v_{\uparrow\uparrow}(t)) \notag \\
& +(1-v_{\downarrow\downarrow}(t))\ln (1-v_{\downarrow\downarrow}(t))\big], \\
N_{_{\!S}}(t) = & v_{\uparrow\uparrow}(t) + v_{\downarrow\downarrow}(t) . \label{setN}
\end{align}
\end{subequations}
From the above solution, the corresponding renormalized energy, renormalized temperature and renormalized chemical potential
can be calculated straightforwardly.
\begin{figure}[ht]
\centering
\includegraphics[width=5.5cm]{Fig6a.pdf}
\includegraphics[width=8cm]{Fig6b-g.pdf}
\includegraphics[width=8cm]{Fig6hi.pdf}
\caption{(a) A schematic plot of the single-electron transistor device. (b)-(g) The nonequilibrium evolution of the energy levels
$\varepsilon^r_{\uparrow,\downarrow}(t)$, the particle occupation in each level $\overline{n}_{\uparrow,\downarrow}(t)$,
the internal energy $U_{_{\!S}}(t)$, the entropy $S_{_{\!S}}(t)$, the renormalized temperature $T^r(t)$ and chemical potential $\mu^r(t)$
at the coupling strengths $\Gamma=0.2\varepsilon_\uparrow$ and $0.8\varepsilon_\uparrow$, respectively. (h) The steady-state
value of the renormalized energy levels $\varepsilon^r_{\uparrow,\downarrow}$, the renormalized temperature $T^r$ and the renormalized chemical
potential $\mu^r$ as a function of the coupling strength, and (i) the comparison of the renormalized Fermi-Dirac distribution
$f(\varepsilon^r_{\uparrow,\downarrow},T^r,\mu^r)$ with
the exact solution of the $\overline{n}_{\uparrow,\downarrow}(t\!\rightarrow\!\infty)$ as a function of the coupling strength $\Gamma$.
Other parameters: $\varepsilon_\downarrow = 3 \varepsilon_\uparrow$, $k_BT_{L,R}=(3,0.1)\varepsilon_\uparrow$,
$\mu_{L,R}=(5,2)\varepsilon_\uparrow$, and $d=10\varepsilon_\uparrow$.
}
\label{st}
\end{figure}
In Figs.~\ref{st}(b)-(g), we show the nonequilibrium evolution of the renormalized energy levels $\varepsilon^r_{\uparrow,\downarrow}(t)$,
the particle occupations in each level $\overline{n}_{\uparrow,\downarrow}(t)$, the internal energy $U_{_{\!S}}(t)$, the entropy $S_{_{\!S}}(t)$,
the renormalized temperature $T^r(t)$ and the renormalized chemical potential $\mu^r(t)$,
with $\Gamma_L =\Gamma_R=\Gamma/2$, for different total coupling strengths $\Gamma$. It shows that in such a nano-scale device,
all physical quantities quickly approach the steady state. Then, in Fig.~\ref{st}(h), we plot the steady-state values of the
renormalized energy levels $\varepsilon^r_{\uparrow,\downarrow}$, the renormalized temperature $T^r$ and the renormalized chemical
potential $\mu^r$ as functions of the coupling strength. These renormalized thermodynamic quantities
change with the coupling strength.
Finally, in Fig.~\ref{st}(i), we present the corresponding renormalized Fermi-Dirac distribution (the Fermi-Dirac distribution
with the renormalized energy, the renormalized temperature and the renormalized chemical potential):
$f(\varepsilon^r_{\uparrow,\downarrow},T^r,\mu^r)=1/[e^{(\varepsilon^r_{\uparrow,\downarrow}-\mu^r)/k_{B}T^r}+1]$.
We compare the renormalized Fermi-Dirac distributions with the exact solution of the occupation numbers
$\overline{n}_{\uparrow,\downarrow}(t\!\rightarrow\!\infty)$
obtained from the exact master equation. The results show that they agree completely with each other.
This proves the consistency of the renormalized strong-coupling quantum
thermodynamics for fermionic systems.
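The agreement can also be verified by inverting the distribution directly (a simple check we add here; it also provides one way to extract $T^r$ and $\mu^r$ from the steady-state occupations). Writing $L_\sigma\equiv\ln\big[(1-\overline{n}_\sigma)/\overline{n}_\sigma\big]$ with $\overline{n}_\sigma\equiv\overline{n}_\sigma(t\!\rightarrow\!\infty)$, the conditions $\overline{n}_\sigma=f(\varepsilon^r_\sigma,T^r,\mu^r)$ for $\sigma=\uparrow,\downarrow$ give
\begin{align}
k_BT^r=\frac{\varepsilon^r_\downarrow-\varepsilon^r_\uparrow}{L_\downarrow-L_\uparrow},
\qquad
\mu^r=\varepsilon^r_\uparrow-k_BT^r L_\uparrow ,
\end{align}
so that the two steady-state occupations uniquely determine the renormalized temperature and the renormalized chemical potential.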
To gain a clearer physical picture of the renormalized temperature and the renormalized chemical potential when the system
is coupled to two reservoirs, we take various setups of the initial temperatures and initial chemical potentials of the two reservoirs
in Fig.~\ref{dist}. From these results, we can see how the renormalized temperature and the renormalized chemical potential change
for the different setups, even in the weak-coupling regime. To understand these results, we first compare the exact solution with its
weak-coupling limit. Since we take the same spectral density for the two reservoirs, we find that in the very weak-coupling limit (WCL),
\begin{align}
N_{_{\!S}}(t\rightarrow \infty) & = n_{\uparrow}(\mu^r,T^r) + n_{\downarrow}(\mu^r,T^r) \notag \\
& \stackrel{\rm WCL}{\rightarrow} \frac{1}{2}\big[n_{\uparrow}(\mu_L,T_L) + n_{\uparrow}(\mu_R,T_R) \big] \notag \\
&~~~~~~~ +\frac{1}{2}\big[n_{\downarrow}(\mu_L,T_L) + n_{\downarrow}(\mu_R,T_R)\big] . \label{setNwcl}
\end{align}
The first equality is the exact solution from Eq.~(\ref{setN}) and the second equality is obtained with
the help of Eq.~(\ref{setvt}) in the very weak-coupling limit, where $\mu_L,T_L$ and $\mu_R,T_R$ are the initial chemical
potentials and temperatures of the left and right reservoirs, respectively.
Figure \ref{dist}(a) shows the results for two reservoirs that have the same initial
temperature and the same initial chemical potential. Because the two reservoirs are set to have the same spectral density,
they are equivalent to a single reservoir in this case. Thus, the renormalized temperature and the renormalized
chemical potential approach the initial temperature and the initial
chemical potential in the very weak-coupling limit, as shown in Fig.~\ref{dist}(a) and as expected.
Figure \ref{dist}(b) shows the results for two reservoirs sharing the same initial temperature but having different initial chemical potentials.
Naively, one may think that the renormalized temperature in the very weak-coupling limit should equal the common initial temperature of
the two reservoirs and the renormalized chemical potential should be $\mu^r=(\mu_L +\mu_R)/2$. From Fig.~\ref{dist}(b), we see that
the renormalized chemical potential in the very weak-coupling limit is indeed $\mu^r=(\mu_L +\mu_R)/2$, as expected from the
conservation law. However, the renormalized temperature is a bit larger than the initial temperature. This result can actually
be understood from Eq.~(\ref{setNwcl}). Because $\mu^r=(\mu_L+\mu_R)/2 \neq \mu_L\neq \mu_R$, Eq.~(\ref{setNwcl}) shows
that $T_L\neq T^r\neq T_R$ in the very weak-coupling limit, even though $T_L=T_R$.
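How much $T^r$ exceeds the common initial temperature $T\equiv T_L=T_R$ can be estimated by a slope-matching argument (an illustrative estimate we add here, not taken from the exact solution). Near $\varepsilon=\mu^r=(\mu_L+\mu_R)/2$, the average of the two Fermi functions in Eq.~(\ref{setNwcl}) has the slope
\begin{align}
-\frac{1}{4k_BT}\,{\rm sech}^2\Big(\frac{\mu_L-\mu_R}{4k_BT}\Big) ,
\end{align}
and matching it to the slope $-1/(4k_BT^r)$ of a single Fermi function at temperature $T^r$ yields
\begin{align}
k_BT^r\approx k_BT\,\cosh^2\Big(\frac{\mu_L-\mu_R}{4k_BT}\Big)\geq k_BT ,
\end{align}
in qualitative agreement with the weak-coupling behavior shown in Fig.~\ref{dist}(b).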
Figure \ref{dist}(c) further shows the case $\mu_L=\mu_R$ and $T_L \neq T_R$. We have $\mu^r=\mu_L=\mu_R$, and from
Eq.~(\ref{setNwcl}) we find that $T^r \neq (T_L+T_R)/2$ in the very weak-coupling limit, as shown in Fig.~\ref{dist}(c).
Figure \ref{dist}(d) shows the high-temperature limit, in which the chemical potentials play little role. Thus, we have
$T^r\simeq (T_L+T_R)/2$ in the very weak-coupling limit, even if $\mu_L \neq \mu_R$, as shown in
Fig.~\ref{dist}(d). These results demonstrate that only at
very high temperature does the renormalized temperature reduce to $T^r=(T_L+T_R)/2$. In other words, in the
quantum regime, the renormalized temperature we introduced is necessary even in the very weak-coupling limit for multiple reservoirs.
This further justifies the consistency of our renormalized theory of quantum thermodynamics at any coupling.
\begin{figure}[ht]
\centering
\includegraphics[width=8.5cm]{Fig7.pdf}
\caption{The steady-state renormalized temperature and renormalized chemical potential of the single-electron transistor as a function
of the system-reservoir coupling strength $\Gamma$ changing from weak to strong for different setups of the initial temperatures and initial
chemical potentials of the two leads (reservoirs): (a) Two reservoirs have the same initial temperature and chemical potential; (b) The initial
temperatures of the two reservoirs are the same but their initial chemical potentials are different; (c) The initial chemical potentials
of the two reservoirs are the same but their initial temperatures are different; and (d) two reservoirs in the high-temperature limit.
}
\label{dist}
\end{figure}
\section{Extension to more general system-reservoir couplings}
The renormalized quantum thermodynamics for arbitrary coupling strength presented in Sec.~III,
given by Eqs.~(\ref{snd})-(\ref{ggss}), is formulated from the exact master equation (\ref{EME}) based on the
system-reservoir coupling of Eq.~(\ref{qth}). However, this formulation can be directly extended to general open quantum
systems with system-reservoir couplings not limited to the form of Eq.~(\ref{qth}). This is because
the renormalized Hamiltonian $H^r_{_{\!S}}(t)$ is determined by the nonequilibrium Green function $\bm u(t,t_0)$
of Eq.~(\ref{ute}), which applies to an arbitrary system interacting with an arbitrary environment. In Eq.~(\ref{ute}), $\bm g(t,t')$
is the self-energy correlation, which can be easily generalized to any interacting system using the nonequilibrium Green
function technique of many-body systems \cite{Kadanoff1962,Keldysh1965}.
Meanwhile, the renormalized temperature $T^r(t)$ and the renormalized chemical potential
$\mu^r(t)$ of Eq.~(\ref{rdTg}) are determined by the changes of the internal energy of the system with respect to the changes of
the von Neumann entropy and of the average particle number of the system, respectively. These nonequilibrium thermodynamic quantities are
well defined by Eq.~(\ref{snd}).
They rely neither on the exact master equation (\ref{EME}) nor on the system-environment coupling in
the Hamiltonian of Eq.~(\ref{qth}). They are all determined by the reduced density matrix, which can be solved from
the Liouville-von Neumann equation (\ref{voneq}), whose formal solution can be expressed as
\begin{align}
\rho_{_{\!S}}(t) = {\rm Tr}_{_{\!E}}[{\cal U}(t,t_0) \rho_{\rm tot} (t_0){\cal U}^\+(t,t_0)] . \label{fs_rho2}
\end{align}
Here, ${\cal U}(t,t_0)$ is the quantum evolution operator, the same as the one given after
Eq.~(\ref{fsLvNe}), but now the total Hamiltonian can be extended to an arbitrary system interacting with an arbitrary reservoir.
In most cases, taking the trace over the environmental states
is the most difficult problem in open quantum systems. Practically, one can use the
perturbation expansion method to calculate the trace over the environmental states order by order approximately
\cite{Breuer2008}, or use the coherent-state path integrals to nonperturbatively trace over all the
environmental states, as we did \cite{Tu2008,Jin2010,Lei2012,Yang2015,Zhang2018,Huang2020,Zhang1990,Zhang2022}.
Here we focus on the nonperturbative procedure. All the renormalization effects of the system-reservoir interactions
on the system can be obtained from this procedure.
To be specific, let us consider a general fermionic system coupled to a general bosonic reservoir. Notice that Eq.~(\ref{qth})
describes the exchanges of energy and particles between the system and the reservoir only when both the system and the
reservoir are made of the same type of quasiparticles, either bosons or fermions. When the system is fermionic
and the reservoir is bosonic, the system-reservoir coupling Hamiltonian generally has the following interaction form \cite{Zhang2012,Zhang2018}
\begin{align}
H_{{_{\!S}}{_{\!E}}}=\sum_{ ijk}\big[V_{ij}(k)c^\dag_i c_jb^\dag_k + V^*_{ij}(k) c^\dag_j c_i b_k \big]. \label{epi}
\end{align}
This system-reservoir interaction describes the energy exchange between the system and the reservoir through the transition
of a fermion (e.g.~an electron) between two states by emitting a boson (a quantum
of energy, such as a photon or a phonon) into the reservoir or absorbing a boson from the reservoir. The creation and
annihilation operators $c^\dag_i, c_i$~($b^\dag_k, b_k$) obey the standard fermionic anticommutation (bosonic commutation)
relations. In fact, Eq.~(\ref{epi}) is the general form of the non-relativistic electron-photon interaction that can be derived from the fundamental
field theory of quantum electrodynamics (QED).
Explicitly, the QED Lagrangian determines the fundamental electron-photon interaction as follows \cite{Peskin1995},
\begin{align}
{\cal L}_{\rm QED} = \overline{\psi}(i\gamma^\mu \partial_\mu - m)\psi - \frac{1}{4}F^{\mu\nu}F_{\mu\nu}
- e\overline{\psi}\gamma^\mu A_\mu \psi, \label{lqed}
\end{align}
where $\psi(x)$ is the fermionic field for electrons, $A_\mu(x)$ is the covariant 4-vector of the electromagnetic (EM) field, $\gamma^\mu$
is the Dirac matrix, and $F_{\mu\nu}(x)=\partial_\mu A_\nu(x) - \partial_\nu A_\mu(x)$ is the EM field strength tensor.
The first two terms of Eq.~(\ref{lqed}) lead to the free electron and photon Hamiltonians. The last term gives the fundamental
electron-photon interaction in the non-relativistic limit (by ignoring positrons and choosing a proper gauge).
Thus, the non-relativistic QED Hamiltonian is given by \cite{Coulomb_gauge}
\begin{align}
\!\!\!H_{\rm QED} \!= & H_{\rm electron} \!+\! H_{\rm photon} \!+\! H_{\rm e-p} \notag \\
= &\!\sum_{\bm p}\! \varepsilon_{\bm p} c^\dag_{\bm p} c_{\bm p}
\!+\!\!\!\! \sum_{\bm p, \bm p', \bm q} \!\!\! U(\bm q) c^\dag_{\bm p+\bm q}c^\dag_{\bm p'\!-\!\bm q} c_{\bm p'} c_{\bm p}
\!+ \!\! \sum_{\bm k} \! \hbar \omega_{\bm k} b^\dag_{\bm k} b_{\bm k} \notag \\
& +\! \hbar\!\sum_{\bm p \bm k} \!\big[V({\bm k}) c^\dag_{\bm p} c_{\bm p-\bm k} b_{\bm k} \!+\! V^*({\bm k})
c^\dag_{\bm p- \bm k} c_{\bm p } b^\dag_{\bm k} \big], \label{hqed}
\end{align}
where $c^\dag_{\bm p }, c_{\bm p}$ and $b^\dag_{\bm k},b_{\bm k}$ are creation and annihilation operators of electrons and
photons with momentum $\bm{p}$ and $\bm{k}$. The summations over $\bm{p}$ and $\bm{k}$ should be replaced by the
continuous integrals $\int\!\!\frac{d^3\bm p}{(2\pi)^3}$ and $\int\!\!\frac{d^3\bm k}{(2\pi)^3}$. Also, without loss of generality,
we have omitted the indices of electron spin and photon polarization. The first term in the second equality in Eq.~(\ref{hqed}) is the free electron
Hamiltonian. The second term is the electron-electron Coulomb interaction arising from the choice of the Coulomb gauge. The
third term is the EM field Hamiltonian, and the last two terms are the electron-photon interaction. Note that the electron-phonon
interaction in solid-state physics has the same form.
Equation (\ref{hqed}) can describe most of the non-relativistic physics in current physics research, unless one is also interested in
phenomena at the smaller nuclear scale arising from the weak and strong interactions, or at the larger scale of the universe governed by gravity.
In the following, we shall find all the nonperturbative renormalization effects on electrons from the electron-photon interaction
by using the coherent-state path integrals \cite{Zhang1990} to nonperturbatively trace over all the environmental states.
To do so, we may express the exact reduced density matrix $\rho_{_{\!S}} (t) $ of Eq.~(\ref{fs_rho2}) as
$\rho_{_{\!S}} (\bm{\xi}^\dag_f, \bm{\xi}'_f, t ) = \langle \bm{\xi}_f |\rho_{_{\!S}}(t) | \bm{\xi}'_f \rangle$, which
is generally given by
\begin{align}
\rho_{_{\!S}} (\bm{\xi}^\dag_f, \bm{\xi}'_f, t )
= \!\! \int \!\! d\mu & ( \bm{\xi}_0) d\mu (\bm{\xi}'_0 ) \rho_{_{\!S}}( \bm{\xi}_0,\bm{\xi}'_0,t_0) \notag \\
& ~~~\times \! \mathcal{J}_{\rm QED}\!( \bm{\xi}^\dag_f,\bm{\xi}'_f, t; \bm{\xi}_0, \bm{\xi}'^\dag_0,t_0) . \label{crhos_t}
\end{align}
Here we have utilized the unnormalized fermion coherent states $|\bm{\xi}\rangle \equiv \exp(\sum_{\bm p} \xi_{\bm p}
c^{\dag}_{\bm p})|0\rangle$. The integral measure $d\mu ( \bm{\xi}) = \prod_{\bm p} d\xi^*_{\bm p} d\xi_{\bm p}
e^{- |\xi_{\bm p}|^2}$ is defined as the Haar measure in the Grassmannian space. The vector $ \bm{\xi} \equiv (\xi_{\bm{p}_1},
\xi_{\bm{p}_2}, \ldots)$ is a one-column matrix and each $\xi_{\bm p_i}$ is a Grassmannian variable.
The propagating function $\mathcal{J}\!(\bm{\xi}^\dag_f, \bm{\xi}'_f, t; \bm{\xi}_0, {\bm \xi'}_0^\dag, t_0)$
in Eq.~(\ref{crhos_t}), which describes the nonequilibrium evolution of the states of the electron system from the initial state
$\rho_{_{\!S}}(t_0)$ to the state at any later time $\rho_{_{\!S}}(t)$, can be obtained analytically after completing exactly the path
integrals over all the photon modes.
The result is %
\begin{align}
\mathcal{J}(\bm{\xi}^\dag_{t}, \bm {\xi}'_{t}, t; \, & \bm \xi_0,{\bm \xi'_0}^{\dag}, t_0)= \int\mathcal{D}[ \bm \xi; \bm \xi']\exp\Big\{\frac{i}{\hbar}
\big(S_{e}[\bm \xi^\dag,\bm \xi] \notag \\
&-S^*_{e}[\bm \xi'^\dag, \bm \xi'] +S^{\rm qed}_{\rm IF} [\bm \xi^\dag,\bm \xi; \bm \xi'^\dag, \bm \xi'] \big)\Big\} ,
\label{ppg}
\end{align}
where $\mathcal{D}[ \bm \xi; \bm \xi']= \prod_{\bm p,\,t_0<\tau<t} d\xi^*_{\bm p}(\tau)d\xi_{\bm p}(\tau)$, and
$S_{e}[\bm \xi^\dag,\bm \xi]$ is the bare electron action corresponding to the free-electron Hamiltonian plus the electron-electron
Coulomb interaction in QED. In the fermion coherent state representation, it is given by
\begin{align}
S_{e}[\bm \xi^\dag,\bm \xi] = & -{i\hbar \over 2}\big[\bm \xi^\dag_t \bm{\xi}(t) + \bm \xi^\dag(t_0) \bm{\xi}_{t_0} \big] \notag \\
&+\!\! \int_{t_0}^{t}\!\! d\tau \Big\{ {i\hbar \over 2}
\big[\dot{\bm \xi}^\dag(\tau) \bm{\xi}(\tau) -\bm \xi^\dag(\tau) \dot{\bm{\xi}}(\tau)\big] \notag \\
& ~~~~~~~~~~~~~~~~~- {\cal H}(\bm \xi^\dag(\tau), \bm{\xi}(\tau)) \Big\}, \label{qed_saction}
\end{align}
where ${\cal H}(\bm \xi^\dag(\tau), \bm{\xi}(\tau))= \sum_{\bm p} \varepsilon_{\bm p} \xi^*_{\bm p}(\tau) \xi_{\bm p}(\tau)
+ \sum_{\bm p, \bm p', \bm q} U(\bm q) \xi^*_{\bm p+\bm q}(\tau) \xi^*_{\bm p'\!-\!\bm q}(\tau) \xi_{\bm p'}(\tau) \xi_{\bm p}(\tau)$.
The additional action $S^{\rm qed}_{\rm IF} [\bm \xi^\dag,\bm \xi; \bm \xi'^\dag, \bm \xi']$
in Eq.~(\ref{ppg}) is an electron action correction arising from the electron-photon interaction after exactly integrating out all the photon
modes \cite{Zhang2022}:
\begin{widetext}
\begin{align}
S^{\rm qed}_{\rm IF} [\bm \xi^\dag,\bm \xi; \bm \xi'^\dag, \bm \xi']=
\!\! \int_{t_0}^{t}\!\!\! & d\tau \Bigg\{ i\hbar \sum_{\bm{p}\bm{p}'\bm{k}} \bigg[ \int_{t_0}^{\tau} \!\!\! d\tau' \!
\Big\{ \sigma^+_{\bm p,\bm k}(\tau) G_{\bm k} (\tau,\tau') \sigma^-_{\bm p',\bm k}(\tau')
+ \sigma'^-_{\bm p',\bm k}(\tau) G^*_{\bm k} (\tau,\tau') \sigma'^+_{\bm p,\bm k}(\tau') \!\Big\} \notag \\
&- \!\!\! \int_{t_0}^{t} \!\!\! d\tau' \Big\{ \sigma'^+_{\bm p,\bm k}(\tau) G_{\bm k} (\tau,\tau') \sigma^-_{\bm p',\bm k}(\tau')
\!-\! \big[\sigma^+_{\bm p,\bm k}(\tau) \!-\! \sigma'^+_{\bm p,\bm k}(\tau)\big] \widetilde{G}_{\bm k}(\tau,\tau')
\big[\sigma^-_{\bm p',\bm k}(\tau') \!-\! \sigma'^-_{\bm p',\bm k}(\tau')\big] \!\Big\} \bigg] \! \Bigg\} . \label{qed_ation}
\end{align}
\end{widetext}
This is a generalization of the Feynman-Vernon influence functional \cite{Feynman1963} to electron-photon interacting systems,
so we may also call the action of Eq.~(\ref{qed_ation}) the influence functional action.
For simplicity, here we have introduced the composite-particle variables $\sigma^+_{\bm p,\bm k}(\tau) \equiv
\xi^*_{\bm p}(\tau) \xi_{\bm p-\bm k}(\tau)$
and $\sigma^-_{\bm p,\bm k}(\tau) \equiv \xi^*_{\bm p-\bm k}(\tau) \xi_{\bm p}(\tau)=(\sigma^+_{\bm p,\bm k}(\tau))^\dag$,
which correspond to the spin-like variables of the exciton operators $a^\dag_{\bm p }a_{\bm p- \bm k}$ and
$a^\dag_{\bm p - \bm k}a_{\bm p}$, respectively. The non-local time correlations in Eq.~(\ref{qed_ation}) are given by
\begin{subequations}
\label{ggbeta}
\begin{align}
\label{gvt}
G_{\bm k}(\tau,\tau') &=|V(\bm{k})|^2 e^{-i \omega_{\bm k}(\tau-\tau')},
\\
\widetilde{G}_{\bm k}(\tau,\tau') &=|V(\bm{k})|^2 \overline{n}(\omega_{\bm k}, T_0) e^{-i \omega_{\bm k}(\tau-\tau')} ,
\label{gbetavt}
\end{align}
\end{subequations}
which depict the time-correlations between electrons and photons. The four terms in
Eq.~(\ref{qed_ation}) come from the contributions of the electron-photon interactions to the electron forward propagation, the electron backward
propagation, and to the electrons mixed from the forward with backward propagations at the end point time $t$ and at the initial time $t_0$,
respectively, through the path integrals over all the photon modes.
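For a quick numerical illustration of the two kernels in Eqs.~(\ref{gvt}) and (\ref{gbetavt}): they differ only by the thermal occupation factor $\overline{n}(\omega_{\bm k}, T_0)$ of the initial photonic reservoir. The following minimal sketch (with arbitrary illustrative values for $|V(\bm k)|^2$, $\omega_{\bm k}$, and $T_0$, in units $\hbar = k_B = 1$; not part of the derivation) evaluates them for a single mode:

```python
import numpy as np

def n_bar(omega, T0):
    # Bose-Einstein occupation of a photon mode (units hbar = k_B = 1)
    return 1.0 / (np.exp(omega / T0) - 1.0)

def G(tau, tau_p, V2, omega):
    # G_k(tau, tau') = |V(k)|^2 exp[-i omega_k (tau - tau')]
    return V2 * np.exp(-1j * omega * (tau - tau_p))

def G_tilde(tau, tau_p, V2, omega, T0):
    # G~_k(tau, tau') = |V(k)|^2 nbar(omega_k, T0) exp[-i omega_k (tau - tau')]
    return V2 * n_bar(omega, T0) * np.exp(-1j * omega * (tau - tau_p))
```

Both kernels oscillate at the mode frequency; only $\widetilde{G}_{\bm k}$ carries the initial thermal information of the reservoir.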
The above results show that the propagating function Eq.~(\ref{ppg}) of the reduced density
matrix for electrons in non-relativistic QED and the corresponding influence functional action Eq.~(\ref{qed_ation})
have the same form as that for our generalized Fano-Anderson Hamiltonian Eq.~(\ref{qth}), as shown by
Eqs.~(\ref{ppg1}) and (\ref{fa_ation}) in Appendix B. The main difference is that the bare system action Eq.~(\ref{saction})
and the influence functional action Eq.~(\ref{fa_ation}) for the generalized Fano-Anderson Hamiltonian are quadratic
with respect to the integrated variables in the path integrals of the propagating function for the reduced density matrix.
They can be solved exactly and the resulting propagating
function is given by Eq.~(\ref{rdmcr1}) in Appendix B. Here the bare electron action Eq.~(\ref{qed_saction}) and the influence functional
action Eq.~(\ref{qed_ation}) for QED Hamiltonian are highly nonlinear so that the path integrals of the propagating function
Eq.~(\ref{ppg}) cannot be carried out exactly. However, the similarity between Eq.~(\ref{qed_ation}) and Eq.~(\ref{fa_ation}) allows us to
find the nonperturbative renormalized electron Hamiltonian in non-relativistic QED.
Note that the influence functional action Eq.~(\ref{qed_ation}) is a complex function. Its real part contains all the corrections to
the electron Hamiltonian in non-relativistic QED, which results in the renormalization of both the single electron energy and the electron-electron
interaction. The imaginary part contains two decoherence sources. One contributes to the energy dissipation into the environment
induced by the electron-photon interaction. The other contributes to the fluctuations arising from the initial states of the thermal
photonic reservoirs through the electron-photon interaction.
The influence functional action Eq.~(\ref{fa_ation}) shares the same property. Furthermore, it is not difficult to show that the last two
terms in both Eqs.~(\ref{qed_ation}) and (\ref{fa_ation}) are pure imaginary so that they only contribute to the dissipation and
fluctuation dynamics of the electron system. The first two terms in both Eqs.~(\ref{qed_ation}) and (\ref{fa_ation}) can combine with the
forward and backward bare system actions in Eq.~(\ref{qed_saction}) and (\ref{saction}), respectively, from which we can
systematically find the general nonperturbative renormalized Hamiltonian of the system.
Explicitly, let us first examine the generalized Fano-Anderson systems Eq.~(\ref{qth}) in Sec.~III. The renormalized
system Hamiltonian can also be determined by the bare system Hamiltonian function in Eq.~(\ref{saction}) plus the
first term in the influence functional action Eq.~(\ref{fa_ation}), i.e.,
\begin{align}
{\cal H}^r & [ \bm \alpha^\dag, \bm{\alpha}]={\cal H}[\bm \alpha^\dag, \bm{\alpha}] + \delta {\cal H}[\bm \alpha^\dag, \bm{\alpha}] \notag \\
&= \! \sum_i \!\varepsilon_i(\tau) \alpha^*_i(\tau) \alpha_i(\tau) \!-\! i\hbar \! \sum_{ij} \!\! \int_{t_0}^{\tau} \!\!\!\! d\tau' \!
\alpha^*_i(\tau) g_{ij}(\tau,\tau') \alpha_j(\tau'). \label{fa_rsH1}
\end{align}
Note that the evolution of $\alpha_j(\tau)$ along the forward path is determined by \cite{Jin2010,Lei2012,Yang2015,Zhang2018}
\begin{align}
\alpha_j(\tau)= \bm u_{jj'}(\tau,t_0)\alpha_j(t_0) + f_j(\tau)
\end{align}
where $\bm u_{jj'}(\tau,t_0)$ is the propagating Green function, which obeys the integro-differential Dyson equation Eq.~(\ref{ute}), and
$f_j(\tau)$ is the noise source associated with the correlation Green function $\bm v(\tau,t)$ of Eq.~(\ref{vt}), which makes no contribution
to the system Hamiltonian renormalization \cite{Yang2015,Zhang2018}. Thus, we keep only the part of the evolution of
$\alpha_j(\tau')$ that contributes to the system Hamiltonian renormalization, i.e.
\begin{align}
\alpha_j(\tau') & \propto \bm u_{jj'}(\tau',t_0)\alpha_j(t_0) \notag \\
& \propto [\bm u(\tau',t_0)\bm u^{-1}(\tau,t_0)]_{jj'}\alpha_j(\tau).
\end{align}
The second line in the above expression also shows how the memory effect is taken into account.
Using this result, Eq.~(\ref{fa_rsH1}) can be rewritten by
\begin{align}
{\cal H}^r[ & \bm \alpha^\dag, \bm{\alpha}]= \! \sum_{ij} \alpha^*_i(\tau)\varepsilon^r_{ij} (\tau,t_0) \alpha_{j}(\tau)
\end{align}
and
\begin{align}
\varepsilon^r_{ij} (\tau,t_0)& = \varepsilon_i(\tau) \delta_{ij} \!+\! \hbar {\rm Im} \!\! \int_{t_0}^{\tau} \!\!\!\!\! d\tau'
[\bm g(\tau,\tau')\bm u(\tau',t_0) \bm u^{-1}(\tau,t_0)]_{ij} \notag \\
&= -\hbar {\rm Im}\big[\dot{\bm u}(\tau,t_0)\bm u^{-1}(\tau,t_0)\big]_{ij}
\end{align}
This is the same solution for the renormalized system Hamiltonian given by Eqs.~(\ref{fa_rH}) and (\ref{fa_re}) that we obtained
after completely solving the propagating function Eq.~(\ref{rdmcr}) and deriving the exact master equation Eq.~(\ref{EME}).
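As a minimal numerical check of this relation (a hypothetical single-mode example, not from the paper: we take $\hbar = 1$ and a damped propagating function $u(t,t_0) = e^{(-i\varepsilon - \gamma)(t - t_0)}$), the formula $\varepsilon^r = -\hbar\,{\rm Im}\big[\dot{u}\,u^{-1}\big]$ recovers the level energy $\varepsilon$:

```python
import numpy as np

def renormalized_energy(eps, gamma, t=1.0, dt=1e-6):
    # Propagating function of a single damped mode, u(t, t0=0).
    u = lambda s: np.exp((-1j * eps - gamma) * s)
    # Finite-difference time derivative of u.
    u_dot = (u(t + dt) - u(t - dt)) / (2.0 * dt)
    # eps_r = -hbar * Im[ u_dot u^{-1} ]  with hbar = 1.
    return -np.imag(u_dot / u(t))
```

For this simple exponential form, $\dot{u}u^{-1} = -i\varepsilon - \gamma$, so the imaginary part isolates the energy while the damping $\gamma$ drops out.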
Thus, we can find the renormalized electron Hamiltonian in non-relativistic QED from the electron influence functional action
Eq.~(\ref{qed_ation}) in the same way. The result is
\begin{align}
{\cal H}^r( & \bm \xi^\dag(\tau), \bm{\xi}(\tau)) = \sum_{\bm p} \varepsilon_{\bm p} \xi^*_{\bm p}(\tau) \xi_{\bm p}(\tau) \notag \\
& +\!\! \sum_{\bm p, \bm p', \bm q} \!\! U(\bm q) \xi^*_{\bm p+\bm q}(\tau) \xi^*_{\bm p'\!-\!\bm q}(\tau) \xi_{\bm p'}(\tau)
\xi_{\bm p}(\tau) \notag \\
& \!- \! i\hbar \! \sum_{\bm{p}\bm{p}'\bm{k}} \! \int_{t_0}^{\tau} \!\!\! d\tau' \!
\xi^*_{\bm p}(\tau) \xi_{\bm p-\bm k}(\tau) G_{\bm k} (\tau,\tau') \xi^*_{\bm p'-\bm k}(\tau') \xi_{\bm p'}(\tau'). \label{reifor}
\end{align}
In Eq.~(\ref{reifor}), the first two terms are the bare electron Hamiltonian in Eq.~(\ref{qed_saction}), and the last term comes from the
first term in the electron influence functional action Eq.~(\ref{qed_ation}), as the renormalization effect arising from the electron-photon interaction
after integrating out all the photonic modes. Moreover, we can similarly introduce the two-electron propagating Green function
$\bm W_{\bm p, \bm p', \bm k}(t, t_0) \equiv \langle [c^\dag_{\bm p-\bm k}(t) c_{\bm p}(t),c^\dag_{\bm p'}(t_0) c_{\bm p'-\bm k}(t_0) ]\rangle $.
Consequently, we have
\begin{align}
\xi^*_{\bm p-\bm k}(t) \xi_{\bm p}(t) \propto \sum_{\bm p''} \bm W_{\bm p, \bm p', \bm k} (t, t_0)
\xi^*_{\bm p'-\bm k}(t_0) \xi_{\bm p'}(t_0).
\end{align}
Then the renormalized electron Hamiltonian can be obtained as
\begin{subequations}
\label{qed_ren}
\begin{align}
H^r_{\rm electron} (t,t_0) & = \sum_{\bm p} \varepsilon^r_{\bm p}(t,t_0) c^\dag_{\bm p} c_{\bm p} \notag \\
& +\!\! \sum_{\bm p, \bm p', \bm q} \!\! U^r_{\bm p'} (\bm q, t, t_0) c^\dag_{\bm p+\bm q} c^\dag_{\bm p'\!-\!\bm q} c_{\bm p'} c_{\bm p} ,
\end{align}
where
\begin{align}
&\varepsilon^r_{\bm p}(t,t_0)=\varepsilon_{\bm p} + \sum_{\bm p'}\delta U_{\bm p'}(\bm q,t,t_0) , \label{csee} \\
&U^r_{\bm p'}(\bm q, t, t_0)= U(\bm q)+\delta U_{\bm p'}(\bm q,t,t_0),
\end{align}
and
\begin{align}
\delta U_{\bm p'}(\bm q, t, t_0) = \hbar {\rm Im} \! \sum_{\bm q' \bm q''} \! \int_{t_0}^{t} \!\!\! d\tau
& G_{\bm q} (t,\tau) \bm W_{\bm q', \bm q'', \bm q}(\tau,t_0) \notag \\
& \times \bm W^{-1}_{\bm q'', \bm p', \bm q}(t,t_0) .
\end{align}
\end{subequations}
The correction to the single electron energy in Eq.~(\ref{csee}) comes from the operator normal ordering of the renormalized electron-electron
interaction in the last term of Eq.~(\ref{reifor}). By calculating the two-electron propagating Green function $\bm W(t, t_0)$
from the total non-relativistic QED Hamiltonian Eq.~(\ref{hqed}), the renormalized electron Hamiltonian and the electron reduced density
matrix can be obtained. From the renormalized electron Hamiltonian Eq.~(\ref{qed_ren}) and the reduced density matrix given by
Eq.~(\ref{crhos_t})-(\ref{qed_ation}), the renormalized quantum thermodynamics, Eq.~(\ref{snd}) to Eq.~(\ref{ggss}) formulated in
Sec.~III, can be directly extended to complicated interacting open quantum systems. Of course, in practice, the two-electron propagating Green function
$\bm W(t, t_0)$ is very difficult to calculate, unlike the systems of Eq.~(\ref{qth}) discussed in Sec.~III, where
the general solution of the single-particle Green function $\bm u(t,t_0)$ has been solved analytically in our previous work \cite{Zhang2012}.
Nevertheless, Eqs.~(\ref{crhos_t}) to (\ref{qed_ation}) provide a full nonequilibrium electron-electron interaction theory rigorously
derived from the non-relativistic QED theory after exactly integrating out all the photonic modes. It can describe various
nonequilibrium physical phenomena in many-body electronic systems based on the non-relativistic QED theory, where all the
renormalization effects arising from the electron-photon interaction have been taken into account in the reduced density matrix.
In practice, the reduced density matrix of Eq.~(\ref{crhos_t}) with Eqs.~(\ref{ppg}) to (\ref{qed_ation}) is still hard to solve exactly,
because the contributions from all the photon modes have been included exactly, which goes far beyond the perturbation
expansions usually used in many-body systems \cite{Thouless1972} and in quantum field theory \cite{Peskin1995}.
In particular, when the Coulomb interaction dominates the electron-electron interaction, the system becomes a strongly correlated
electronic system. Then further nonperturbative approximations and numerical methods have to be introduced to properly find the
renormalized Hamiltonian and the reduced density matrix of the open system for strong-coupling quantum thermodynamics.
In fact, the same problem also exists in equilibrium physics: one cannot solve all equilibrium physical problems exactly
even though the Gibbs state is well defined under the equilibrium hypothesis of statistical mechanics. Typical examples
are strongly correlated electron systems, such as the Hubbard model or the quantum Heisenberg spin model, which are
approximations of the above nonequilibrium electron-electron interaction QED theory but which one is still
unable to solve exactly \cite{Nagaosa2010}.
Therefore, how to practically solve nonequilibrium quantum thermodynamics for arbitrary system-environment interactions
remains a challenging problem. The closed-time-path Green function technique with the loop expansion for quantum
transport phenomena, developed by one of us long ago \cite{Zhang1992}, could be a possible method for nonperturbatively solving
the nonequilibrium quantum thermodynamics of strongly interacting many-body systems, and we leave this
problem for further investigation.
\section{Conclusion and perspective}
In conclusion, we formulate the renormalization theory of quantum thermodynamics and quantum statistical mechanics
based on the exact dynamic evolution of quantum mechanics for both weak and strong coupling strengths.
For a class of generally solvable open quantum systems described by the generalized Fano-Anderson Hamiltonians, we show that the
exact steady state of open quantum systems coupled to reservoirs through the particle exchange processes is a generalized
Gibbs state. The renormalized system Hamiltonian and the reduced density matrix are obtained nonperturbatively by
exactly tracing over all the reservoir states through the coherent-state path integrals \cite{Zhang1990}.
Using the renormalized system Hamiltonian and introducing the renormalized temperature, the exact steady state of the reduced
density matrix can be expressed as the standard Gibbs state. The corresponding steady-state particle distributions obey
the Bose-Einstein and the Fermi-Dirac distributions for bosonic and fermionic systems, respectively. In the very weak coupling
limit, the renormalized system Hamiltonian and the renormalized temperature are reduced to the original bare Hamiltonian of the system and
the initial temperature of the reservoir if it couples to a single reservoir. Thus, classical thermodynamics and statistical mechanics
emerge naturally from the dynamics of open quantum systems. Thermodynamic laws and the principles of statistical mechanics are thereby deduced
from the dynamical evolution of quantum mechanics. If open quantum systems contain dissipationless localized bound states,
thermalization cannot be reached. This provides a solution to the long-standing problem of deriving
thermodynamics and statistical mechanics from quantum mechanics, which one has been trying to solve for a century.
On the other hand, the renormalization theory presented in this work is nonperturbative.
The traditional renormalization theories in quantum field theory and in many-body physics are built on perturbation
expansions with respect to the interaction Hamiltonian.
As we have systematically shown, the system Hamiltonian renormalization and the reduced density matrix are finally
expressed in terms of the nonequilibrium Green functions. We have nonperturbatively derived the equation of motion for these
nonequilibrium Green functions and obtained the general nonperturbative solution. We can easily reproduce the traditional
perturbation renormalization theory by expanding order by order our solution with respect to the system-reservoir interaction.
Furthermore, this nonperturbative renormalization theory also corresponds to a one-step renormalization within the
Wilson renormalization group framework. The renormalization group is built through subsequent integrations of physical degrees
of freedom from higher energy scales to lower energy scales. For open quantum systems, instead of integrating out the higher
energy degrees of freedom, the dynamics is fully determined by nonperturbatively integrating out all the reservoir degrees of freedom at once
but including all energy levels from the low energy scale to the high energy scale of the reservoirs. Therefore, the underlying physical picture of
our renormalization rests on a different physical basis. If the open quantum system interacts hierarchically with many
reservoirs, then hierarchically tracing over all the reservoir states would lead to a new renormalization group theory for open
quantum systems that counts all the influences of the hierarchical reservoirs on the system.
As a consequence of the renormalized Hamiltonian and renormalized temperature, we find that the system can
become colder or hotter as the coupling increases. For fermion systems, as the coupling
increases, the renormalized energy levels can increase or decrease, depending on whether the
dot energy levels are greater or less than the center energy of the Lorentz-type spectral
density, but the renormalized temperature always increases (the system becomes hotter). For boson systems
with the Ohmic-type spectral density, both the renormalized frequency and the renormalized
temperature always decrease (the system becomes colder) as the coupling increases, while for a Lorentz-type
spectral density, the renormalized frequency and
temperature will simultaneously decrease or increase, which is quite different from fermion systems.
This reveals a very flexible controllability of energy and heat transfer
between systems and reservoirs, and suggests potential applications in building quantum
heat engines in strong-coupling quantum thermodynamics.
\acknowledgments
This work is supported by the Ministry of Science and Technology of Taiwan, Republic of China, under
Contract No. MOST-108-2112-M-006-009-MY3.
WMZ proposed the idea and formulated the theory, WMH performed the numerical calculations,
WMH and WMZ discussed and analyzed the results, WMZ interpreted the physics and wrote the manuscript.
\section{Introduction}
Recently, problems of out-of-distribution detection \cite{yang2021generalized} and anomaly detection \cite{1541882} have attracted interest in studies of deep neural networks (DNNs). It is a practical issue that DNNs show unexpected behaviors when test samples come from classes unseen in training. DNN classifiers typically show high confidence when predicting such samples as belonging to a known class.
This behavior can be attributed to the discriminative softmax classifier in DNNs \cite{10.1145/3394486.3403189}. It has been proposed that deep data description models \cite{pmlr-v80-ruff18a}, which describe the normal data with a hypersphere in the embedded space, and multi-class data description (MCDD) models, which describe known classes as Gaussian components in the embedded space, can achieve better detection performance for outlier and OOD detection tasks.
Both data description models define their losses such that each data point is projected onto the proximity of a class center in the embedded space. Subsequently, they can identify a test sample that lies outside the hypersphere, or that has a low probability over all known classes, as an outlier or an OOD sample.
In the above settings, the anomalies and the out-of-distribution samples are not available in training, making any attempt to directly learn the separation between them and the known class distributions infeasible. A practical approach, instead, is to a) enclose each known class in a compact region and b) separate the classes from each other as much as possible, so as to expand the space in between, which yields low probabilities over the known classes.
Previous studies, namely the data description models, employed max-margin and MAP losses, which place emphasis on a), the compactness of each class individually.
However, it is our intuition that consideration of the inter-class separation can have a substantial impact, as it introduces supervising information regarding numerous combinations of heterogeneous class pairs.
In this paper, we present an information-theoretic loss based on the information bottleneck principle\cite{DBLP:journals/corr/physics-0004057}, from
which we can derive the relation between a) the intra-class similarity and b) the inter-class discrepancies.
In our empirical study, we set up an out-of-distribution detection task using the MNIST dataset. The proposed model yields high detection performance, and a graphical analysis shows that it contributes to the separation of the normal classes.
This paper is organized as follows. \secref{sec:Related_Work} describes the related studies on deep data description, deep anomaly detection, and few-shot learning.
\secref{sec:XDD} describes the proposed model and
\secref{sec:experiments} presents the setup and the results of the empirical study. We state our conclusion in \secref{sec:conclusion}.
\section{Related Work} \label{sec:Related_Work}
\subsection{Deep Data Description}\label{subsec:DFDD}
The support vector data description \cite{Tax:2004:SVD:960091.960109} aims to find a spherical decision boundary that encloses the normal data, in order to detect whether a new sample comes from the same distribution or is an outlier.
Deep-SVDD \cite{pmlr-v80-ruff18a} has employed the embedding functions of deep neural networks (DNN) to capture the structure of normal data.
Deep Multi-class Data Description (MCDD) \cite{10.1145/3394486.3403189} was introduced as an extension of the Deep SVDD for out-of-distribution (OOD) detection.
A DNN is trained such that the embedding function $f$ maps the labeled data onto the close proximity of the center of the corresponding class $k$, to find Gaussian components which describe the training data in the embedded space ${\mathcal Z}$.
Describing the $k^\text{th}$ component as a multinormal distribution
\begin{equation}
P(x|y=k) = {\mathcal N}\left(f(x;W)|\mu_k,\sigma_k^2I\right)
\label{eq:P(z|y=k)}
\end{equation}
The Deep MCDD loss ${\mathcal L}_\text{MCDD}$ is defined as a MAP loss of the generative classifier as
\begin{eqnarray}
{\mathcal L}_\text{MCDD}&=&-\frac{1}{N}\sum_{i=1}^N\log
\frac{P(y=k)P(x|y=k)}{\sum_{k'}P(y=k')P(x|y=k')}
\nonumber\\
&=&-\frac{1}{N}\sum_{i=1}^N\log\frac{\exp\left(-D_{y_i}(x_i)+b_{y_i}\right)}%
{\sum_{k=1}^K\exp\left(-D_k(x_i)+b_k\right)}
\label{eq:L_MCDD}
\end{eqnarray}
where $D_k(x)$ is the distance from the class centers given \eqref{eq:P(z|y=k)}.
\begin{equation}
D_k(x)
\approx \frac{\|f(x;W)-\mu_k\|^2}{2\sigma^2_k} + \log\sigma_k^d
\label{eq:D_k(x)=}
\end{equation}
From equations \ref{eq:L_MCDD} and \ref{eq:D_k(x)=},
the Deep MCDD training can be considered a minimization of the intra-class deviation in the embedded space.
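The distance \eqref{eq:D_k(x)=} and the loss \eqref{eq:L_MCDD} can be sketched as follows (a minimal NumPy illustration with made-up embeddings, centers, per-class scales, and biases; not the authors' implementation):

```python
import numpy as np

def logsumexp(a):
    # Numerically stable log-sum-exp along the class axis.
    m = a.max(axis=1, keepdims=True)
    return (m + np.log(np.exp(a - m).sum(axis=1, keepdims=True))).squeeze(1)

def mcdd_loss(Z, y, mu, sigma, b):
    # Z: (N, d) embedded samples, y: (N,) labels,
    # mu: (K, d) class centers, sigma: (K,) scales, b: (K,) class biases.
    d = Z.shape[1]
    # D_k(x) = ||f(x;W) - mu_k||^2 / (2 sigma_k^2) + log sigma_k^d
    sq = ((Z[:, None, :] - mu[None, :, :]) ** 2).sum(-1)      # (N, K)
    D = sq / (2.0 * sigma[None, :] ** 2) + d * np.log(sigma)[None, :]
    logits = -D + b[None, :]
    # Negative log posterior (MAP loss), averaged over the samples.
    log_post = logits[np.arange(len(y)), y] - logsumexp(logits)
    return -log_post.mean()
```

The loss is small when every embedded sample sits near its own class center and far from the other centers, which is exactly the intra-class compactness discussed above.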
\subsection{Deep Anomaly Detection}
One important method of unsupervised anomaly detection
is to train autoencoders with normal data and employ the reconstruction error as the anomaly score \cite{1541882}. This approach has been naturally extended to deep and convolutional autoencoders \cite{goodfellow2016learning}.
Recently, anomaly detection using generative adversarial networks (GANs) has emerged as a popular approach for deep unsupervised anomaly detection, following the influential work of AnoGAN \cite{DBLP:conf/ipmi/SchleglSWSL17}.
Generally, GAN-based anomaly detection methods exploit the generator network,
which learns the manifold of the normal data distribution through its mapping function,
to compute the reconstruction error of the test data based on the learned manifold.
For example, the test data is reconstructed using SGD in AnoGAN
and using a BiGAN architecture in EGBAD \cite{zenati2019efficient}.
The above studies have also reported promising results using the embedded space of the discriminator network, in which distances are used as the anomaly scores.
In this work, we compute the anomaly scores using only the distances in the embedded space of the discriminator network.
\section{Information Bottleneck Loss}\label{sec:XDD}
\subsection{Derivation}
The information bottleneck \cite{DBLP:journals/corr/physics-0004057,DBLP:conf/iclr/AlemiFD017}
is a principle for extracting relevant information in the input variable $X$ about the output variable $Y$. The mutual information $I(\cdot;\cdot)$ quantifies the statistical dependence between the two variables.
We attempt to learn a compressed representation of $X$, denoted as $Z$, by discarding irrelevant information that does not contribute to the prediction of $Y$.
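For intuition, the mutual information of two discrete variables, $I(X;Y)=\sum_{x,y}p(x,y)\log\frac{p(x,y)}{p(x)p(y)}$, can be computed directly from a joint distribution (a toy sketch for illustration only, not part of the derivation):

```python
import numpy as np

def mutual_information(p_xy):
    # I(X;Y) = sum_{x,y} p(x,y) log[ p(x,y) / (p(x) p(y)) ], in nats.
    p_x = p_xy.sum(axis=1, keepdims=True)    # marginal over y
    p_y = p_xy.sum(axis=0, keepdims=True)    # marginal over x
    mask = p_xy > 0
    return (p_xy[mask] * np.log(p_xy[mask] / (p_x @ p_y)[mask])).sum()
```

Independent variables give $I=0$, while perfectly correlated binary variables give $I=\log 2$, the two extremes between which the bottleneck variable $Z$ is traded off.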
In \cite{7133169}, it was proposed that the Information Bottleneck principle may be used for layer-wise analyses of DNNs in terms of the efficiency of compression. We, however, focus on utilizing the rate-distortion function below for training a DNN.
In this paper, therefore, we consider ${Z}$ to be a function of $X$, such that $f:{\mathcal X}\to{\mathcal Z}\subset{\mathbb R}^d$, and $f$ the embedding layers of a trained DNN model.
${\mathcal X}, {\mathcal Y}, {\mathcal Z}$ denote the spaces of the data, the labels, and the embedded features, from which the variables $X, Y, Z$ take their values, respectively.
Finding an optimal $Z$ leads to a minimization problem for a Lagrangian
\begin{equation}
{\mathcal L} = I(X;Z) - \beta I(Z;Y)
\label{eq:L=I(X;Z)}
\end{equation}
This problem is referred to as a rate-distortion problem, as $I(X;Z)$ is the amount of information maintained in $Z$ and a measure of the compression rate, while $I(Y;Z)$ is the amount of relevant information about $Y$ and thus a measure of the distortion.
The Lagrangian multiplier $\beta$ represents the trade-off between the two terms.
The mutual information can be rewritten as the Kullback-Leibler divergence between the marginal and the conditional probability distributions
\begin{equation}
I(X;Z) = \mathbb{E}_{x,z}\left[ D_\text{KL}\left(p(z|x)\|p(z)\right)\right]
\label{eq:I(X;Z)=D_KL}
\end{equation}
We model the empirical distribution of $p(z)$ by the average of the Dirac delta functions,
\begin{equation}
p(z) = \frac{1}{N} \sum_{i=1}^N \delta\left(z-f(x_i)\right)
\label{eq:p(z)}
\end{equation}
and the conditional distribution $p(z|x_i)$ as an isotropic normal distribution around the observation in the embedded space.
\begin{eqnarray}
p(z|x_i) &=& \mathcal{N} ( z | f(x_i), \sigma^2 I)
\nonumber \\
&=& \frac{1}{\left(2\pi\sigma^2\right)^{d/2}} \exp\left(-\frac{\|z-f(x_i)\|^2}{2\sigma^2}\right)
\label{eq:p(z|x_i)}
\end{eqnarray}
where $\sigma$ is the deviation caused by the randomness introduced in DNN training, e.g., batch normalization and dropout layers.
After substituting \eqref{eq:p(z)} and \eqref{eq:p(z|x_i)} into \eqref{eq:I(X;Z)=D_KL}, the derivation is as follows.
\begin{eqnarray}
&&I(X;Z) = E_{x,z} \left[ \log\frac{p(z|x)}{p(z)} \right] \nonumber \\
&=&\int\int \frac{1}{N}\sum_{i=1}^N\delta(z-f(x_i))\log \left[
\frac{1}{\left(2\pi\sigma^2\right)^{d/2}} \exp-\frac{\|z-f(x_i)\|^2}{2\sigma^2}
\right] dz dx\nonumber\\
&&-\int\int\frac{1}{N} \sum_{i=1}^N\delta(z-f(x_i))\log \left[
\frac{1}{N} \sum_{i=1}^N \delta\left(z-z_i\right)
\right] dz dx
\nonumber\\
&=&
\frac{1}{N^2}\sum_{j=1}^N\sum_{i=1}^N\log \left[
\frac{1}{\left(2\pi\sigma^2\right)^{d/2}} \exp\left(-\frac{\|z_j-f(x_i)\|^2}{2\sigma^2}\right)
\right]
+ \text{const.}
\nonumber\\
&=& \frac{1}{N^2} \sum_{j=1}^N\sum_{i=1}^N\left(-\frac{\|z_j-f(x_i)\|^2}{2\sigma^2}-\log\sigma^d\right) +\text{const.}
\label{eq:I(X;Z)=E_x,z}
\end{eqnarray}
\eqref{eq:I(X;Z)=E_x,z} can be interpreted as the (negative) sum of squared distances between all pairs of embedded samples.
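A minimal sketch of this pairwise-distance surrogate (up to additive constants, with an assumed $\sigma$; illustrative only):

```python
import numpy as np

def ixz_surrogate(Z, sigma=1.0):
    # Pairwise-distance surrogate for I(X;Z) (up to additive constants):
    # -(1/N^2) * sum_{j,i} ||z_j - z_i||^2 / (2 sigma^2)
    sq = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return -(sq / (2.0 * sigma ** 2)).mean()
```

Minimizing this term therefore pushes the embedded samples apart from each other.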
Meanwhile, we model the class conditional probability over $z$ as an isotropic normal distribution around the class center.
\begin{eqnarray}
p(z|y) &=& \mathcal{N}(z|\mu_y, \sigma_y^2 I)
\nonumber\\
&=& \frac{1}{(2\pi\sigma_y^2)^{d/2}}\exp\left(- \frac{\|z-\mu_y\|^2}{2\sigma_y^2} \right)
\end{eqnarray}
where $\sigma_y$ denotes the deviation over the class $y$.
The mutual information $I(Y;Z)$ is then rewritten as follows.
\begin{eqnarray}
I(Y;Z) &= & E_{y,z}\left[\log\frac{p(z|y)}{p(z)}\right]
\nonumber\\
&=& \int\int\frac{1}{N}\sum_{i=1}^N \delta\left(z-z_i\right) \log p(z|y) dy dz
\nonumber\\
&&- \int\int\frac{1}{N}\sum_{i=1}^N \delta\left(z-z_i\right) \log p(z) dy dz
\nonumber\\
&=& \frac{1}{N}\sum_{i=1}^N \log \frac{p(z_i|y_i)}{p(z_i)}
\nonumber\\
&=& \frac{1}{N} \sum_{i=1}^N\log \frac{\exp \left(- \frac{\|z_i-\mu_{y_i}\|^2}{2\sigma_{y_i}^2} -\log\sigma_{y_i}^d \right)
}{\sum_{y=1}^K \frac{n_y}{N} \exp \left(- \frac{\|z_i-\mu_{y}\|^2}{2\sigma_{y}^2} -\log\sigma_{y}^d \right)
}+ \text{const.}
\label{eq:I(Y;Z)=E_y,z}
\end{eqnarray}
\eqref{eq:I(Y;Z)=E_y,z} is equivalent to the MAP loss function \eqref{eq:L_MCDD} except for the class bias.
To increase $I(Y;Z)$, the intra-class deviation, i.e., the distances between intra-class sample pairs in the embedded space are reduced. Meanwhile, reduction of $I(X;Z)$ is achieved by increasing the distances between inter-class sample pairs. To minimize ${\mathcal L}$, therefore, the intra-class similarity and the inter-class distances must simultaneously be increased.
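This trade-off can be sketched numerically, using pairwise-distance surrogates for the two mutual-information terms (a minimal NumPy illustration with assumed $\beta$ and $\sigma$; not the training implementation):

```python
import numpy as np

def ib_loss(Z, y, beta=2.0, sigma=1.0):
    # L = I(X;Z) - beta * I(Y;Z), with pairwise-distance surrogates
    # (up to additive constants):
    #   I(X;Z) ~ -(mean over all sample pairs)         ||z_j - z_i||^2 / (2 sigma^2)
    #   I(Y;Z) ~ -(mean over intra-class sample pairs) ||z_j - z_i||^2 / (2 sigma^2)
    sq = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1) / (2.0 * sigma ** 2)
    same = y[:, None] == y[None, :]
    i_xz = -sq.mean()          # compression term: penalizes spread over all pairs
    i_yz = -sq[same].mean()    # relevance term: rewards intra-class compactness
    return i_xz - beta * i_yz
```

Embeddings with compact, well-separated classes yield a lower loss than embeddings in which the classes overlap, reflecting the simultaneous demands on intra-class similarity and inter-class distance.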
\section{Empirical Results}\label{sec:experiments}
\subsection{Setup}
We evaluate XDD in an out-of-distribution detection task with two-phase training: in the first step, an adversarial pre-training, a generator and a discriminator are trained with unlabeled in-distribution samples; in the second step, the embedding layer of the discriminator is re-trained with the IB loss using a limited number of labeled samples per class.
The pretraining input is a set of unlabeled data ${\mathcal X}_\text{S} = \{x_i\}_{i=1}^M$.
The re-training input is a set of $n$ labeled examples from each class, ${\mathcal X}_\text{T} = \{\left(x_j, y_j\right)\}_{j=1}^{n\times N}$.
The label $y_j$ takes a value from ${\mathcal Y}=\left\{1,\ldots,N\right\}$.
The discriminator $D$ consists of an embedding function and a discriminating function to classify between a real image and a generated image.
The embedding function $f:{\mathcal X}\to{\mathcal Z}\subset{\mathbb R}^d$
defines a mapping from the input space ${\mathcal X}$ to the deep feature space ${\mathcal Z}$ with parameters ${\mathcal W}$.
After re-training, the detection performance over the out-of-distribution class is measured over the test set. The anomaly score of each test sample is computed by kernel density estimation (KDE) in the deep feature space over the labeled examples.
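The KDE-based anomaly score can be sketched as follows (a minimal Gaussian-KDE illustration with an assumed bandwidth $h$; not the exact implementation used in the experiments):

```python
import numpy as np

def kde_anomaly_score(z, Z_ref, h=1.0):
    # Gaussian kernel density estimate over the labeled reference
    # embeddings Z_ref (M, d); the score is the negative log density at z.
    d = Z_ref.shape[1]
    sq = ((Z_ref - z) ** 2).sum(-1)                            # (M,)
    log_kernel = -sq / (2.0 * h ** 2) - 0.5 * d * np.log(2.0 * np.pi * h ** 2)
    m = log_kernel.max()
    log_density = m + np.log(np.exp(log_kernel - m).mean())    # stable log-mean-exp
    return -log_density
```

A test embedding far from every labeled example receives a high score and is flagged as out-of-distribution.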
We present an experimental result using the MNIST dataset\cite{726791}.
The architectures of the generator and the discriminator networks are shown in \tabref{tab:architecture_GAN}
\begin{table}[htb]
\caption{Generator and Discriminator Architecture}\label{tab:architecture_GAN}
\centerline{
\begin{tabular}{ll}
\hline
Generator & Discriminator\\
\hline
Linear(100,7*7*512) & Conv2d(1,8,3)\\
BatchNorm1d(7*7*512)& BatchNorm2d(8) \\
ReLU() & LeakyReLU()\\
ConvTranspose2d(512,256,3,2,1,1)&Conv2d(8,16,3)\\
BatchNorm2d(256)&BatchNorm2d(16)\\
LeakyReLU()&LeakyReLU()\\
ConvTranspose2d(256,128,3,1,1)&Conv2d(16,32,3)\\
BatchNorm2d(128)&BatchNorm2d(32)\\
LeakyReLU()&LeakyReLU()\\
ConvTranspose2d(128,64,3,1,1)&Conv2d(32,64,3)\\
BatchNorm2d(64)&BatchNorm2d(64)\\
LeakyReLU()&LeakyReLU()\\
ConvTranspose2d(64,1,3,2,1,1)&Linear(64*7*7,1)\\
Tanh()&Sigmoid()\\
\hline
\end{tabular}
}
\end{table}
\subsection{Datasets}
We adopt the standard OOD detection tasks from previous studies \cite{10.1145/3394486.3403189} using MNIST dataset.
For the pre-training input, we exclude one digit as the OOD class from the original set. For the re-training input, $n=10$ labeled samples were randomly chosen from each of nine digits.
The detection performances were measured on the test set with entire ten digits using the area under the precision-recall curves (AUPRCs) over the target class.
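For reference, the AUPRC over the target class can be approximated by the average precision over the ranked anomaly scores (a minimal sketch; the experiments may use a library implementation):

```python
import numpy as np

def auprc(scores, labels):
    # Average precision: mean of precision@k over the ranks k of the true
    # positives, with samples sorted by decreasing anomaly score.
    order = np.argsort(-np.asarray(scores, dtype=float))
    labels = np.asarray(labels)[order]
    hits = np.cumsum(labels)
    ranks = np.arange(1, len(labels) + 1)
    precision_at_k = hits / ranks
    return precision_at_k[labels == 1].mean()
```

A perfect ranking, in which every anomalous sample scores above every normal one, yields an AUPRC of 1.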
\subsection{Results}
\figref{fig:MNIST_barchart} summarizes the AUPRC of the XDD models on ten target anomaly class detection tasks.
The $y$-axis indicates the mean AUPRC over ten repetitions. The number on the $x$-axis indicates the digit designated as the anomalous class.
The blue bars indicate the means after pre-training and the orange bars indicate the means after XDD training. \figref{fig:MNIST_barchart} shows that XDD training substantially improves the OOD detection performance over the initial training in all tasks.
\begin{figure}
\centerline{
\includegraphics[width=.7\textwidth]{fig_MNIST_barchart}
}
\caption{Average AUPRC}\label{fig:MNIST_barchart}
\end{figure}
Figures \ref{fig:MNIST_before} and \ref{fig:MNIST_after} illustrate
the low-dimensional projection of the test samples using $t$-SNE \cite{icml2014c2_kusner14}
after pre-training and after the XDD training.
The samples are represented by triangles of colors corresponding to respective classes while the labeled examples are represented by black $\circ$'s and $\times$'s.
This result was obtained when the anomalous target class is eight (yellow).
From \figref{fig:MNIST_before}, we can see that the separation among the normal classes in the embedded space is not distinct after pre-training. \figref{fig:MNIST_after} shows that the separation between the normal classes is increased by XDD training, which in turn reduces the overlap with the target anomalous class.
\begin{figure}[htb]
\begin{minipage}{.5\linewidth}
\includegraphics[width=0.99\textwidth]{fig_MNIST_EMB_BEFORE.png}
\caption{Projection after pre-training}\label{fig:MNIST_before}
\end{minipage}
\begin{minipage}{.5\linewidth}
\includegraphics[width=0.99\textwidth]{fig_MNIST_EMB_AFTER.png}
\caption{Projection after XDD training}
\label{fig:MNIST_after}
\end{minipage}
\end{figure}
\section{Conclusion}\label{sec:conclusion}
In this paper, we presented an information-theoretic loss function for deep neural network training, which takes into account an intra-class similarity term and an inter-class discrepancy term. The relation between the two terms was derived from the Information Bottleneck principle.
The empirical results showed that the proposed model yields high OOD detection performance, and the graphical analysis indicates that the class separations are emphasized by the training.
In future work, we plan to extend this work to a larger collection of anomaly detection and out-of-distribution detection tasks.
\newif\ifBibliography\Bibliographyfalse
\ifBibliography
\bibliographystyle{splncs04}
\fi
\section{Introduction}
Let $\pi\colon \mathfrak X\to D$
be a deformation of a compact K\"ahler manifold $X_0$ of dimension $n\geq 2$
over a disk $D$ in the complex plane.
Let $C_0$ be a compact reduced curve (when $n = 2$) or a compact smooth complex manifold
of dimension $n-1$ (when $n>2$).
Let $\varphi_0\colon C_0\to X_0$ be a map which is an immersion,
that is, for any $p\in C_0$, there is an open neighborhood $p\in V_p\subset C_0$
such that $\varphi_0|_{V_p}$ is an embedding.
Then the image of $\varphi_0$ determines an integral cohomology class $[\varphi_0(C_0)]$ of type $(1, 1)$,
that is, a Hodge class which is the Poincar\'e dual of the cycle $\varphi_0(C_0)$.
Note that the class $[\varphi_0(C_0)]$ naturally determines an integral cohomology class of each fiber of
$\pi$.
Therefore, it makes sense to ask whether this class remains Hodge in these fibers or not.
Clearly, the condition that the class $[\varphi_0(C_0)]$ remains Hodge is necessary
for the existence of deformations of the map $\varphi_0$ to other fibers.
If we assume that the image is a local complete intersection, we can talk about the
\emph{semiregularity} of the map $\varphi_0$, see Section \ref{sec:2}.
When $\varphi_0$ is the inclusion, Bloch \cite{B} proved that if $\varphi_0$ is semiregular
and the class $[\varphi_0(C_0)]$ remains Hodge, then there is a deformation of $\varphi_0$
(Bloch proved it for local complete intersections of any codimension).
In other words, a local complete intersection subvariety which is semiregular
satisfies the \emph{variational Hodge conjecture}.
More precisely, the variational Hodge conjecture asks for the existence of
a family of cycles of the class $[\varphi_0(C_0)]$ which need not restrict to $\varphi_0(C_0)$
on the central fiber.
Therefore, Bloch's theorem in fact shows that the semiregularity gives a result stronger than
the variational Hodge conjecture.
However, although Bloch's theorem guarantees the existence of a relative deformation of a cycle on the
central fiber $X_0$, it gives little control of the geometry of the deformed cycle.
Our purpose is to show that the semiregularity in fact suffices to control the geometry
of the deformed cycles when the cycle is of codimension one.
\begin{thm}\label{thm:1}
Assume that the map $\varphi_0$ is semiregular.
If the class $[\varphi_0(C_0)]$ remains Hodge, then the map $\varphi_0$ deforms
to other fibers.
\end{thm}
For example, if the image $\varphi_0(C_0)$ has normal crossing singularity,
then there is a natural map $\widetilde\varphi_0\colon \widetilde C_0\to X_0$,
where $\widetilde C_0$ is the normalization of $C_0$ (when $n>2$, $C_0 = \widetilde C_0$).
Then if $\widetilde\varphi_0$ is semiregular, Theorem \ref{thm:1} implies that
it deforms to a general fiber and the singularity of
the image remains the same (e.g., it gives a relative equigeneric deformation when $n = 2$).
On the other hand, if the image $\varphi_0(C_0)$ has normal crossing singularity,
the semiregularity turns out to be related to some classical notions that appeared in different contexts.
Namely, we will prove the following (see Corollary \ref{cor:3}).
\begin{thm}\label{thm:2}
Assume that the subvariety $\varphi_0(C_0)$ is semiregular in the classical sense.
That is, the inclusion of $\varphi_0(C_0)$ into $X_0$ is semiregular in the sense of Definition \ref{def:semiregular}.
Then if the map
$H^{0}(\varphi_0(C_0), \mathcal N_{\iota}) \to H^{0}(\varphi_0(C_0), \mathcal S)$
is surjective, the map $\varphi_0$ is semiregular.
In particular, if the class $[\varphi_0(C_0)]$ remains Hodge on the fibers of $\mathfrak X$,
the map $\varphi_0$ can be deformed to general fibers of $\mathfrak X$.
\end{thm}
Here $\mathcal S$ is the \emph{infinitesimal normal sheaf} of the variety $\varphi_0(C_0)$,
see Section \ref{sec:6} for the definition.
A variety with normal crossing singularity is called \emph{d-semistable}
if the infinitesimal normal sheaf is trivial, see \cite{F}.
The notion of d-semistability is known to be related to the existence of log-smooth deformations (see \cite{KF, KK}).
By the above theorem, it turns out that it is also related to deformations of pairs, see Corollary \ref{cor:4}.\\
In the case where $n = 2$, Theorem 31 in \cite{N6} combined with Theorem \ref{thm:1} above
implies the following.
Let $\varphi_0\colon C_0\to X_0$ be an immersion where $\varphi_0(C_0)$ is a reduced nodal curve.
Let $p\colon C_0\to \varphi_0(C_0)$ be the natural map (which is a partial normalization of $\varphi_0(C_0)$)
and $P = \{p_i\}$ be the set of nodes of $\varphi_0(C_0)$ whose inverse image by $p$ consists of two points.
\begin{thm}\label{thm:CB}
Assume that $\varphi_0(C_0)$ is semiregular in the classical sense
and the class $[\varphi_0(C_0)]$ remains Hodge on the fibers of $\mathfrak X$.
Then the map $\varphi_0$ deforms to general fibers of $\mathfrak X$
if for each $p_i\in P$, there is a first order deformation of
$\varphi_0(C_0)$ which smoothes $p_i$, but does not smooth the other nodes of $P$. \qed
\end{thm}
This is related (in an opposite sense) to the classical \emph{Cayley-Bacharach condition}, see \cite{BHPV},
which requires that
if a first order deformation does not smooth the nodes $P\setminus \{p_i\}$,
then it does not smooth $p_i$, either.
Using this, we can also deduce a geometric criterion for the existence of deformations of pairs,
see Corollary \ref{cor:geomCB}.
Finally, based on a similar idea, we will prove that any projective variety can be swept by
nodal curves with very large number of nodes.
Namely, we will prove the following (see Corollary \ref{cor:20}).
\begin{thm}
Let $Y$ be a projective variety of dimension $n\geq 2$.
Then for any positive number $\varepsilon$,
there is an $(n-1)$-dimensional family $\mathcal C\to B$ of irreducible nodal curves
whose fibers satisfy $\delta > g^{2-\varepsilon}$, and a map
$p\colon \mathcal C\to Y$ which dominates $Y$.
Here $\delta$ is the number of nodes of a fiber of $\mathcal C$ and
$g$ is the geometric genus of it.
\end{thm}
In general, it would be difficult to improve the exponent $2-\varepsilon$ further.
For example, if we can prove the existence of a nodal curve of large degree which satisfies the estimate
$\delta>g^{2+\varepsilon}$ when $Y$ is a Fano manifold, such a curve would have enough deformations
to carry out Mori's famous bend-and-break procedure \cite{Mo}.
\section{Semiregularity for local embeddings}\label{sec:2}
Let $n$ and $p$ be positive integers with $p<n$.
Let $M$ be a complex variety (not necessarily smooth or reduced) of dimension $n-p$
and $X$ a compact K\"ahler manifold of dimension $n$.
Let $\varphi\colon M\to X$ be a map which is an immersion,
that is, for any $q\in M$, there is an open neighborhood $q\in U_q\subset M$
such that $\varphi|_{U_q}$ is an embedding.
We assume that the image is a local complete intersection.
Then, the normal sheaf $\mathcal N_{\varphi}$ is locally free of rank $p$.
Define the locally free sheaves $\mathcal K_{\varphi}$ and $\omega_M$ on $M$ by
\[
\mathcal K_{\varphi} = \wedge^p \mathcal N_{\varphi}^{\vee}
\]
and
\[
\omega_M = \mathcal K_{\varphi}^{\vee}\otimes \varphi^*\mathcal K_X,
\]
where $\mathcal K_X$ is the canonical sheaf of $X$.
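As an illustration of these definitions, we note the case $p = 1$, which is the case used in most of this paper.
\begin{rem}
When $p = 1$, we have $\mathcal K_{\varphi} = \mathcal N_{\varphi}^{\vee}$, so that
\[
\omega_M = \mathcal N_{\varphi}\otimes \varphi^*\mathcal K_X.
\]
When $\varphi$ is an embedding, this is the sheaf given by the adjunction formula,
namely the dualizing sheaf of the divisor $\varphi(M)\subset X$.
\end{rem}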
When $\varphi$ is an inclusion, the natural inclusion
\[
\varepsilon\colon \mathcal N^{\vee}_{\varphi}\to \varphi^*\Omega_X^1
\]
gives rise to an element
\[
\begin{array}{ll}
\wedge^{p-1}\varepsilon \in Hom_{\mathcal O_M}\left(
\wedge^{p-1}\mathcal N_{\varphi}^{\vee}, \varphi^*\Omega_X^{p-1} \right)
&= \Gamma\left(M, (\varphi^*\Omega_X^{n-p+1})^{\vee}\otimes \varphi^*\mathcal K_X
\otimes \mathcal K_{\varphi}^{\vee}\otimes\mathcal N_{\varphi}^{\vee} \right)\\
&= Hom_{\mathcal O_X}(\Omega_X^{n-p+1}, \omega_M\otimes\mathcal N_{\varphi}^{\vee}).
\end{array}
\]
This induces a map on cohomology
\[
\wedge^{p-1}\varepsilon\colon H^{n-p-1}(X, \Omega_X^{n-p+1})\to H^{n-p-1}(M, \omega_M\otimes \mathcal N_{\varphi}^{\vee}).
\]
When $\varphi$ is not an inclusion, then
$\Gamma\left(M, (\varphi^*\Omega_X^{n-p+1})^{\vee}\otimes \varphi^*\mathcal K_X
\otimes \mathcal K_{\varphi}^{\vee}\otimes\mathcal N_{\varphi}^{\vee} \right)$
is not necessarily isomorphic to
$Hom_{\mathcal O_X}(\Omega_X^{n-p+1}, \omega_M\otimes\mathcal N_{\varphi}^{\vee})$,
but the map
$\wedge^{p-1}\varepsilon\colon H^{n-p-1}(X, \Omega_X^{n-p+1})\to
H^{n-p-1}(M, \omega_M\otimes \mathcal N_{\varphi}^{\vee})$
is still defined.
\begin{defn}\label{def:semiregular}
We call $\varphi$ \emph{semiregular} if the natural map $\wedge^{p-1}\varepsilon$ is surjective.
\end{defn}
In this paper, we are interested in the case where $p = 1$ and $M$ is reduced when $n=2$,
and $M$ is smooth when $n>2$.
In this case, we have $\omega_M\otimes \mathcal N_{\varphi}^{\vee}\cong \varphi^*\mathcal K_X$ and the map
$\wedge^{p-1}\varepsilon$ will be
\[
H^{n-2}(X, \mathcal K_X)\to H^{n-2}(M, \varphi^*\mathcal K_X).
\]
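As an illustration, we record what this condition means in the surface case.
\begin{rem}
When $n = 2$, the map above is the restriction map
\[
H^{0}(X, \mathcal K_X)\to H^{0}(M, \varphi^*\mathcal K_X),
\]
and its surjectivity is the classical semiregularity condition for curves on surfaces.
In particular, if $\varphi^*\mathcal K_X$ has negative degree on every irreducible component of $M$,
then $H^{0}(M, \varphi^*\mathcal K_X) = 0$ and $\varphi$ is automatically semiregular.
\end{rem}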
\section{Local calculation}
Let $\pi\colon \mathfrak X\to D$
be a deformation of a compact K\"ahler manifold $X_0$ of dimension $n\geq 2$.
Here $D$ is a disk on the complex plane centered at the origin.
Let
\[
\{(U_i, (x_{i, 1}, \dots, x_{i, n}))\}
\]
be a coordinate system of $X_0$.
Taking $D$ small enough,
the sets
\[
(\mathfrak U_{i} = U_i\times D, (x_{i, 1}, \dots, x_{i, n}, t))
\]
give
a coordinate system of $\mathfrak X$.
Precisely, we fix an isomorphism between
$\mathfrak U_i$ and a suitable open subset of $\mathfrak X$ which is compatible with
$\pi$ and the inclusion $U_i\to X_0$.
Here $t$ is a coordinate on $D$ pulled back to $\mathfrak U_i$.
The functions $x_{i, l}$ are also pulled back to $\mathfrak U_i$ from $U_i$
by the natural projection.
Take coordinate neighborhoods $\mathfrak U_i, \mathfrak U_j$ and $\mathfrak U_k$.
On the intersections of these open subsets, the coordinate functions on one of them can be written
in terms of another.
We write this as follows.
Namely, on $\mathfrak U_i\cap \mathfrak U_j$, $x_{i, l}$ can be written as $x_{i, l}({\bf x}_{j}, t)$,
here we write
\[
{\bf x}_j = (x_{j, 1}, \dots, x_{j, n}).
\]
Similarly, on $\mathfrak U_j\cap \mathfrak U_k$, we have $x_{j, l} = x_{j, l}({\bf x}_{k}, t)$.
Then, on $\mathfrak U_i\cap \mathfrak U_k$, we have
\[
x_{i, l} = x_{i, l}({\bf x}_{k}, t) = x_{i, l}({\bf x}_{j}({\bf x}_k, t), t).
\]
Let $X_t = \pi^{-1}(t)$ be the fiber of the family $\pi$ over $t\in D$.
Assume that there is a map
\[
\varphi_0\colon C_0\to X_0
\]
from a variety $C_0$ of dimension $n-1$ to $X_0$, which is an immersion.
We can take an open covering $\{V_{i}\}$ of $C_0$ such that
the restriction of $\varphi_0$ to $V_i$ is an embedding,
the image $\varphi_0(V_i)$ is contained in $U_i$ and
is defined by an equation $f_{i, 0} = 0$ for some holomorphic function $f_{i, 0}$.
Let $\mathop{\mathrm{Spec}}\nolimits\Bbb C[t]/t^{m+1}$ be the $m$-th order infinitesimal neighborhood of the origin of $D$.
Note that
\[
\{U_{i, m} = \mathfrak U_i\times_D \mathop{\mathrm{Spec}}\nolimits\Bbb C[t]/t^{m+1}\}
\]
gives a covering by coordinate neighborhoods
of $X_m = \mathfrak X\times_D{\mathop{\mathrm{Spec}}\nolimits\Bbb C[t]/t^{m+1}}$.
We write by $x_{i, l, m}$ the restriction of $x_{i, l}$ to $U_{i, m}$.
Let us write
\[
{\bf x}_{i, m} = \{x_{i, 1, m}, \dots, x_{i, n, m}\}.
\]
Assume we have constructed an $m$-th order deformation $\varphi_m\colon C_m\to X_m$
of $\varphi_0$.
Here $m$ is a positive integer and
$C_m$ is an $m$-th order deformation of $C_0$.
Let $V_{i, m}$ be the ringed space obtained by restricting $C_m$ to $V_i$.
Let $\{f_{i, m}({\bf x}_{i, m}, t)\}$
be the set of local defining functions of $\varphi_m(V_{i, m})$ in $U_{i, m}$.
We will often write $f_{i, m}({\bf x}_{i, m}, t)$ as $f_{i, m}({\bf x}_{i, m})$ for notational simplicity.
In particular, on the intersection $U_{i, m}\cap U_{j, m}$, there is an invertible function $g_{ij, m}$
which satisfies
\[
f_{i, m}({\bf x}_{i, m}({\bf x}_{j, m}, t), t) = g_{ij, m}({\bf x}_{j, m}, t)f_{j, m}({\bf x}_{j, m}, t) \;\; \text{(mod $t^{m+1}$)}.
\]
Define a holomorphic function $\nu_{ij, m}$ on $V_i\cap V_j$ by
\[
t^{m+1}\nu_{ij, m}({\bf x}_{j, m+1}) = t^{m+1}\nu_{ij, m}({\bf x}_{j, 0}) = f_{i, m}({\bf x}_{i, m+1}({\bf x}_{j, m+1}))
- g_{ij, m}({\bf x}_{j, m+1})f_{j, m}({\bf x}_{j, m+1}),
\]
which is an equality over $\Bbb C[t]/t^{m+2}$.
\begin{prop}
The set of local sections $\{\nu_{ij,m}\}$ gives
a \v{C}ech 1-cocycle with values in $\mathcal N_{\varphi_0}$ on $C_0$.
Here $\mathcal N_{\varphi_0}$ is the normal sheaf of the map $\varphi_0$ on $C_0$.
\end{prop}
\begin{rem}
The cohomology class of the
cocycle $\{\nu_{ij, m}\}$ represents the obstruction to deforming the map $\varphi_m$ one step further.
\end{rem}
\proof
Note that $\mathcal N_{\varphi_0}$ is an invertible sheaf and the functions $g_{ij, 0}$ give its
transition functions of it.
Therefore, we need to check the identities
\[
\nu_{ik, m}({\bf x}_{k, m+1}) = \nu_{ij, m}({\bf x}_{j, m+1}({\bf x}_{k, m+1}))
+ g_{ij, 0}({\bf x}_{j, 0}({\bf x}_{k, 0}))\nu_{jk, m}({\bf x}_{k, m+1})
\]
and
\[
\nu_{ij,m} = -g_{ij, 0}\nu_{ji, m}
\]
on $C_0$.
Note that
\[
{\bf x}_{i, m+1}({\bf x}_{k, m+1}) \equiv {\bf x}_{i, m+1}({\bf x}_{j, m+1}({\bf x}_{k, m+1})) \;\; \text{mod} \;t^{m+2}.
\]
Then,
\[
\hspace{-.5in}\begin{array}{ll}
\displaystyle t^{m+1}\nu_{ik, m}({\bf x}_{k, m+1}) &
\displaystyle = f_{i, m}({\bf x}_{i, m+1}({\bf x}_{k, m+1})) - g_{ik, m}({\bf x}_{k, m+1})f_{k, m}({\bf x}_{k, m+1})\\
& \displaystyle = f_{i, m}({\bf x}_{i, m+1}({\bf x}_{j, m+1}({\bf x}_{k, m+1})))
- g_{ij, m}({\bf x}_{j, m+1}({\bf x}_{k, m+1}))f_{j, m}({\bf x}_{j, m+1}({\bf x}_{k, m+1}))\\
& \hspace{.4in} + g_{ij, m}({\bf x}_{j, m+1}({\bf x}_{k, m+1}))f_{j, m}({\bf x}_{j, m+1}({\bf x}_{k, m+1}))
- g_{ik, m}({\bf x}_{k, m+1})f_{k, m}({\bf x}_{k, m+1})\\
& \displaystyle = t^{m+1}\nu_{ij, m}({\bf x}_{j, m+1}({\bf x}_{k, m+1}))
+ g_{ij, m}({\bf x}_{j, m+1}({\bf x}_{k, m+1}))(f_{j, m}({\bf x}_{j, m+1}({\bf x}_{k, m+1}))\\
& \hspace{.4in} - g_{jk, m}({\bf x}_{k, m+1})f_{k, m}({\bf x}_{k, m+1}))
\displaystyle +g_{ij, m}({\bf x}_{j, m+1}({\bf x}_{k, m+1}))g_{jk, m}({\bf x}_{k, m+1})f_{k, m}({\bf x}_{k, m+1})\\
& \hspace{.4in} - g_{ik, m}({\bf x}_{k, m+1})f_{k, m}({\bf x}_{k, m+1})\\
& \displaystyle = t^{m+1}\nu_{ij, m}({\bf x}_{j, m+1}({\bf x}_{k, m+1}))
+ t^{m+1}g_{ij, m}({\bf x}_{j, m+1}({\bf x}_{k, m+1}))\nu_{jk, m}({\bf x}_{k, m+1})\\
& \hspace{.4in} \displaystyle + (g_{ij, m}({\bf x}_{j, m+1}({\bf x}_{k, m+1}))g_{jk, m}({\bf x}_{k, m+1})
- g_{ik, m}({\bf x}_{k, m+1}))f_{k, m}({\bf x}_{k, m+1}).
\end{array}
\]
Since
\[
(g_{ij, m}({\bf x}_{j, m+1}({\bf x}_{k, m+1}))g_{jk, m}({\bf x}_{k, m+1})
- g_{ik, m}({\bf x}_{k, m+1}))f_{k, m}({\bf x}_{k, m+1}) \equiv 0 \;\; \text{mod}\; t^{m+1},
\]
we have
\[
g_{ij, m}({\bf x}_{j, m+1}({\bf x}_{k, m+1}))g_{jk, m}({\bf x}_{k, m+1}) \equiv g_{ik, m}({\bf x}_{k, m+1})\;\; \text{mod} \;t^{m+1}.
\]
Therefore, we have
\[
\begin{array}{l}
(g_{ij, m}({\bf x}_{j, m+1}({\bf x}_{k, m+1}))g_{jk, m}({\bf x}_{k, m+1})
- g_{ik, m}({\bf x}_{k, m+1}))f_{k, m}({\bf x}_{k, m+1}) \\
\hspace{.5in} \equiv (g_{ij, m}({\bf x}_{j, m+1}({\bf x}_{k, m+1}))g_{jk, m}({\bf x}_{k, m+1})
- g_{ik, m}({\bf x}_{k, m+1}))f_{k, 0}({\bf x}_{k, m+1}) \;\; \text{mod}\; t^{m+2}.
\end{array}
\]
Since $f_{k, 0}({\bf x}_k) = 0$ on $C_0$, we have the first identity.
The second identity follows from this by taking $k = i$.\qed
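As an illustration of the proposition, we record the first order case; the computation below is a straightforward sketch in the notation above.
\begin{rem}
For $m = 0$, the definition reads, over $\Bbb C[t]/t^{2}$,
\[
t\nu_{ij, 0}({\bf x}_{j, 0}) = f_{i, 0}({\bf x}_{i, 1}({\bf x}_{j, 1}))
- g_{ij, 0}({\bf x}_{j, 1})f_{j, 0}({\bf x}_{j, 1}).
\]
Since $f_{i, 0}$ does not depend on $t$, expanding ${\bf x}_{i, 1}({\bf x}_{j, 1})$ to first order in $t$
and using $f_{i, 0}({\bf x}_{i}({\bf x}_{j}, 0)) = g_{ij, 0}({\bf x}_{j})f_{j, 0}({\bf x}_{j})$
gives, modulo functions vanishing on $C_0$,
\[
\nu_{ij, 0} = \sum_{l=1}^n \frac{\partial x_{i, l}({\bf x}_j, t)}{\partial t}\Big|_{t=0}\,
\partial_{x_{i, l}}f_{i, 0}({\bf x}_i),
\]
so the obstruction to a first order deformation of $\varphi_0$ is the image of the
Kodaira-Spencer class of the family in $H^1(C_0, \mathcal N_{\varphi_0})$.
\end{rem}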
\begin{comment}
Now we define two \v{C}ech cocycles $\{\theta_{ij, k}\}$on $X_0$ and $\{\psi_{ij, k}\}$ on
$X_k$.
Recall that a choice of the identification
\[
U_{i, k} \cong U_i\times \mathop{\mathrm{Spec}}\nolimits\Bbb C[t]/t^{k+1},
\]
determines a coordinate system
${\bf x}_{i, k} = (x_{1, i, k}, \dots, x_{n, i, k})$ which restricts to $(x_{1, i}, \dots, x_{n, i})$ over $\Bbb C[t]/t$.
Take three coordinate neighborhoods
$U_i, U_j$ nd $U_l$.
We write
\[
x_{a, l, k}({\bf x}_{j, k}) =\sum_{p=0}^k t^pf_{a, p}({\bf x}_{j, k}),
\]
\[
x_{a, j, k}({\bf x}_{i, k}) =\sum_{p=0}^k t^pg_{a, p}({\bf x}_{i, k}),
\]
and
\[
x_{a, l, k}({\bf x}_{i, k}) =\sum_{p=0}^k t^ph_{a, p}({\bf x}_{i, k}).
\]
Note that $x_{a, l, k}({\bf x}_{i, k}) = x_{a, l, k}({\bf x}_{j, k}({\bf x}_{i, k}))$.
This allows us to represent $h_{a, p}$ in terms of $f_{a, p}$ and $g_{a, p}$.
We denote
\[
x_{a, \geq 1, j, k}({\bf x}_{i, k}) := \sum_{p=1}^k t^pg_{a, p}({\bf x}_{i, k}).
\]
\begin{defn}
We define $\{\theta_{ij, k+1}\}$ on $X_0$ and $\{\psi_{ij, k}\}$ on $X_k$ as follows.
\[
t^{k+1}\theta_{ij, k+1}(x) = \sum_l x_{l, k+1, j, k+1}({\bf x}_{i, k+1}(x))\frac{\partial}{\partial x_{l, j, 0}},
\]
\[
t\psi_{ij, k}(x) = \sum_l x_{l, \geq 1, j, k}({\bf x}_{i, k}(x))\frac{\partial}{\partial x_{l, j, k}}.
\]
\end{defn}
\begin{lem}
$\psi_{il, k}(x) - \psi_{ij, k}(x) - \psi_{jl, k}(x) = ??$
\end{lem}
\proof
We have
\[
t\psi_{il, k}(x) =
\]
\begin{prop}
$\{\theta_{ij, k+1}\}$ and $\{\psi_{ij, k}\}$
are \v{C}ech 1-cocycles.
\end{prop}
\proof
First we consider $\{\theta_{ij, k+1}\}$.
We need to check $\theta_{im, k+1} = \theta_{ij, k+1} + \theta_{jm, k+1}$ and $\theta_{ij, k+1} = -\theta_{ji, k+1}$.
We have
\[
\begin{array}{ll}
\displaystyle {\bf x}_{m, k+1}({\bf x}_{i, k+1}(x)) & = {\bf x}_{m, k+1}({\bf x}_{j, k+1}({\bf x}_{i, k+1}(x)))\\
\displaystyle & = {\bf x}_{m, k+1}({\bf x}_{\leq k, j, k+1}({\bf x}_{i, k+1}(x))
+ {\bf x}_{k+1, j, k+1}({\bf x}_{i, k+1}(x)))\\
\displaystyle & = {\bf x}_{\leq k, m, k+1}({\bf x}_{\leq k, j, k+1}({\bf x}_{i, k+1}(x)))
+ {\bf x}_{k+1, m, k+1}({\bf x}_{j, 0}({\bf x}_{i, 0}(x)))\\
& \hspace{1.5in} + \sum_lx_{l, k+1, j, k+1}({\bf x}_{i, k+1}(x))
\frac{\partial {{\bf x}_{m, 0}}}{\partial x_{l, j, 0}}({\bf x}_{j, 0}(x)).
\end{array}
\]
Here we used the equalities like
\[
{\bf x}_{k+1, m, k+1}({\bf x}_{\leq k, j, k+1}({\bf x}_{i, k+1}(x)))
= {\bf x}_{k+1, m, k+1}({\bf x}_{j, 0}({\bf x}_{i, 0}(x)))
\]
etc., which hold over $\Bbb C[t]/t^{k+2}$.
Let $({\bf x}_{\leq k, m, k+1}({\bf x}_{\leq k, j, k+1}({\bf x}_{i, k+1}(x))))_{k+1}$ be the sum of terms of
${\bf x}_{\leq k, m, k+1}({\bf x}_{\leq k, j, k+1}({\bf x}_{i, k+1}(x)))$ which are of order $k+1$ with respect to $t$.
Then we have
\[
\begin{array}{ll}
{\bf x}_{k+1, m, k+1}({\bf x}_{i, k+1}(x)) &=
({\bf x}_{\leq k, m, k+1}({\bf x}_{\leq k, j, k+1}({\bf x}_{i, k+1}(x))))_{k+1}
+ {\bf x}_{k+1, m, k+1}({\bf x}_{j, 0}({\bf x}_{i, 0}(x)))\\
& \hspace{1.5in} + \sum_lx_{l, k+1, j, k+1}({\bf x}_{i, k+1}(x))
\frac{\partial {{\bf x}_{m, 0}}}{\partial x_{l, j, 0}}({\bf x}_{j, 0}(x)).
\end{array}
\]
From this, the equality
\[
t^{k+1}\theta_{im, k+1}(x) = t^{k+1}\theta_{jm, k+1}(x) + t^{k+1}\theta_{ij, k+1}(x)
+ \sum_l (x_{l, \leq k, m, k+1}({\bf x}_{\leq k, j, k+1}({\bf x}_{i, k+1}(x))))_{k+1}\frac{\partial}{\partial x_{l, m, 0}}
\]
follows.
Next, consider $\psi_{ij, k}$.
We have
\[
\begin{array}{ll}
\displaystyle {\bf x}_{m, k+1}({\bf x}_{i, k+1}(x)) & = {\bf x}_{m, k+1}({\bf x}_{j, k+1}({\bf x}_{i, k+1}(x)))\\
\displaystyle & = {\bf x}_{m, k+1}({\bf x}_{0, j, k+1}({\bf x}_{i, k+1}(x)) + {\bf x}_{\geq 1, j, k+1}({\bf x}_{i, k+1}(x)))\\
\displaystyle & = {\bf x}_{m, k+1}({\bf x}_{0, j, k+1}({\bf x}_{i, k+1}(x))) \\
&\hspace{.2in}+ \sum_lx_{l, \geq 1, j, k+1}({\bf x}_{i, k+1}(x))
\frac{\partial {{\bf x}_{m, k+1}}}{\partial x_{l, j, k+1}}({\bf x}_{j, k+1}({\bf x}_{i, k+1}(x)))
\end{array}
\]
Thus,
\[
\begin{array}{ll}
\displaystyle {\bf x}_{\geq 1, m, k+1}({\bf x}_{i, k+1}(x))
& = {\bf x}_{\geq 1, m, k+1}({\bf x}_{0, j, k+1}({\bf x}_{i, k+1}(x)))\\
\displaystyle & \hspace{.2in}
+ \sum_lx_{l, \geq 1, j, k+1}({\bf x}_{i, k+1}(x))
\frac{\partial {{\bf x}_{m, k+1}}}{\partial x_{l, j, k+1}}({\bf x}_{j, k+1}({\bf x}_{i, k+1}(x)))
\end{array}
\]
Using this, we have
\[
\begin{array}{ll}
\psi_{im, k+1}(x) & = \sum_l x_{l, \geq 1, m, k+1}({\bf x}_{i, k+1}(x))\frac{\partial}{\partial x_{l, m, k}}\\
& = \sum_l(x_{l, \geq 1, m, k+1}({\bf x}_{0, j, k+1}({\bf x}_{i, k+1}(x)))
+ \sum_p x_{p, \geq 1, j, k+1}({\bf x}_{i, k+1}(x))
\frac{\partial {{x}_{l, m, k+1}}}{\partial x_{p, j, k+1}}({\bf x}_{j, k+1}({\bf x}_{i, k+1}(x))))
\frac{\partial}{\partial x_{l, m, k+1}}\\
& = \sum_l(x_{l, \geq 1, m, k+1}({\bf x}_{0, j, k+1}({\bf x}_{i, k+1}(x))))\frac{\partial}{\partial x_{l, m, k+1}}
+ \sum_p x_{p, \geq 1, j, k+1}({\bf x}_{i, k+1}(x))\frac{\partial}{\partial x_{p, j, k+1}}\\
& = \sum_l(x_{l, \geq 1, m, k+1}({\bf x}_{0, j, k+1}({\bf x}_{i, k+1}(x))))\frac{\partial}{\partial x_{l, m, k+1}}
+ \psi_{ij, k+1}(x)\\
\end{array}
\]
On the other hand,
\[
\begin{array}{l}
x_{l, \geq 1, m, k+1}({\bf x}_{j, k+1}({\bf x}_{i, k+1}(x))) - x_{l, \geq 1, m, k+1}({\bf x}_{0, j, k+1}({\bf x}_{i, k+1}(x)))\\
= x_{l, \geq 1, m, k+1}({\bf x}_{0, j, k+1}({\bf x}_{i, k+1}(x)) + {\bf x}_{\geq 1, j, k+1}({\bf x}_{i, k+1}(x)))
- x_{l, \geq 1, m, k+1}({\bf x}_{0, j, k+1}({\bf x}_{i, k+1}(x)))\\
= \sum_px_{p, \geq 1, j, k+1}({\bf x}_{i, k+1}(x))
\frac{\partial x_{l, \geq 1, m, k+1}}{\partial x_{p, j, k+1}}({\bf x}_{j, k+1}({\bf x}_{i, k+1}(x))).
\end{array}
\]
Therefore, we have
\[
\psi_{im, k+1} = \psi_{jm, k+1}(x) + \psi_{ij, k+1}(x)
- \sum_px_{p, \geq 1, j, k+1}({\bf x}_{i, k+1}(x))
\frac{\partial x_{l, \geq 1, m, k+1}}{\partial x_{p, j, k+1}}({\bf x}_{j, k+1}({\bf x}_{i, k+1}(x)))
\frac{\partial}{\partial x_{l, m, k+1}}.
\]
\end{comment}
\section{Explicit description of the Kodaira-Spencer class}
Let $\pi\colon \mathfrak X\to D$
be a deformation of a smooth manifold $X_0$ as before.
We have the exact sequence
\[
0 \to \pi^*\Omega^1_{D}\to \Omega^1_{\mathfrak X}\to \Omega^1_{\mathfrak X/D}\to 0
\]
The Kodaira-Spencer class is, by definition, the corresponding class in
$\mu\in\mathop{\mathrm{Ext}}\nolimits^1(\Omega^1_{\mathfrak X/D}, \pi^*\Omega^1_{D})$.
\begin{lem}
The class $\mu$ is represented by the \v{C}ech 1-cocycle
$\mu_{ij} = \sum_{l=1}^n \frac{\partial x_{i, l}({\bf x}_j, t)}{\partial t}\partial_{x_{i, l}}dt$.
\end{lem}
\proof
See \cite{Griffiths2}, Section II.1. \qed\\
From now on, we drop $dt$ from these expressions since it plays no role below.
Restricting this to the thickened fiber over $\mathop{\mathrm{Spec}}\nolimits\Bbb C[t]/t^{m+2}$, we obtain the Kodaira-Spencer class for the
deformation $X_{m+1}:= \mathfrak X\times_D\mathop{\mathrm{Spec}}\nolimits\Bbb C[t]/t^{m+2}$.
We denote this class by $\mu_m$.
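For orientation, we note the first order case, which is classical.
\begin{rem}
At $t = 0$, the cocycle above restricts to
\[
\mu_{ij}|_{t=0} = \sum_{l=1}^n \frac{\partial x_{i, l}({\bf x}_j, t)}{\partial t}\Big|_{t=0}\partial_{x_{i, l}},
\]
the usual Kodaira-Spencer cocycle of the first order deformation $X_1$ of $X_0$.
\end{rem}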
Assume that we have constructed an $m$-th order deformation $\varphi_m\colon C_m\to X_m$.
Let $\mathcal N_{m/D}$ be the relative normal sheaf of $\varphi_m$
and
\[
p_m: \varphi_m^*\mathcal T_{X_{m}/D}\to \mathcal N_{m/D}
\]
be the natural projection,
where $\mathcal T_{X_{m}/D}$ is the relative tangent sheaf of $X_m$.
Pulling $\mu_m$ back to $C_m$ and taking the image by $p_m$, we obtain a class
$\overline{\mu}_m\in H^1(C_m, \mathcal N_{m/D})$.
As before, let $\{f_{i, m}({\bf x}_{i, m}, t)\}$
be the set of local defining functions of $\varphi_m(V_{i, m})$ on $U_{i, m}$.
\begin{lem}
The class $\overline\mu_m$ is represented by the pull back to $C_m$ of
\[
\eta_{ij, m} = \sum_{l=1}^n \frac{\partial x_{i, l}({\bf x}_j, t)}{\partial t}\partial_{x_{i, l}}f_{i, m}({\bf x}_i, t).
\]
\end{lem}
\proof
We check the cocycle condition.
Namely, we have
\[
\begin{array}{l}
\eta_{ik, m}-\eta_{ij, m}-g_{ij, m}\eta_{jk, m} \\
= \sum_{l=1}^n \frac{\partial x_{i, l}({\bf x}_k, t)}{\partial t}\partial_{x_{i, l}}f_{i, m}({\bf x}_i, t)
- \sum_{l=1}^n \frac{\partial x_{i, l}({\bf x}_j, t)}{\partial t}\partial_{x_{i, l}}f_{i, m}({\bf x}_i, t)
-g_{ij, m}\sum_{l=1}^n \frac{\partial x_{j, l}({\bf x}_k, t)}{\partial t}\partial_{x_{j, l}}f_{j, m}({\bf x}_j, t)\\
= \sum_{l=1}^n \frac{\partial x_{i, l}({\bf x}_k, t)}{\partial t}\partial_{x_{i, l}}f_{i, m}({\bf x}_i, t)\\
\hspace{.5in}- \sum_{l=1}^n \frac{\partial x_{i, l}({\bf x}_j, t)}{\partial t}\partial_{x_{i, l}}f_{i, m}({\bf x}_j, t)
-g_{ij, m}\sum_{l=1}^n \frac{\partial x_{j, l}({\bf x}_k, t)}{\partial t}\partial_{x_{j, l}}(g_{ij, m}^{-1}f_{i, m}({\bf x}_i({\bf x}_j, t), t))\\
= (\mu_{ik}-\mu_{ij}-\mu_{jk})f_{i, m}
-g_{ij, m}f_{i, m}({\bf x}_i({\bf x}_j, t), t)\sum_{l=1}^n \frac{\partial x_{j, l}({\bf x}_k, t)}{\partial t}\partial_{x_{j, l}}(g_{ij, m}^{-1}).
\end{array}
\]
Since $f_{i, m}({\bf x}_i({\bf x}_j, t), t)$ is zero on the image of $\varphi_m$,
we see that $\eta_{ik, m} = \eta_{ij, m} + g_{ij, m}\eta_{jk, m}$ on $C_m$.
Also, note that $g_{ij, m}$ is the transition function of the normal sheaf $\mathcal N_{m/D}$.
Then it is clear that $\eta_{ij, m}$ represents the class $\overline{\mu}_m$.\qed\\
Recall that an analytic cycle of codimension $r$ in a K\"ahler manifold
determines a cohomology class of type $(r, r)$, which is the Poincar\'e dual of the
homology class of the cycle.
Let $\zeta_{C_0}\in H^{1}(X_0, \Omega_{X_0/D}^{1})$ be the class
corresponding to the image of $\varphi_0$.
Note that since the family $\mathfrak X$ is trivial as a $C^{\infty}$ family, the class $\zeta_{C_0}$
determines a cohomology class in $H^2(\mathfrak X, \Bbb C)$.
We denote it by $\widetilde\zeta_{C_0}$.
Then we have the following.
\begin{lem}
When $\varphi_0$ is semiregular,
the class $\widetilde\zeta_{C_0}$ remains Hodge in $X_{m+1}$ if and only if the class $\overline\mu_m$ is zero.
\end{lem}
\proof
Since we are assuming we have constructed $\varphi_m\colon C_m\to X_m$,
the class $\widetilde\zeta_{C_0}$ is Hodge on $X_m$.
That is, $\widetilde\zeta_{C_0}|_{X_m}\in H^1(X_m, \Omega^1_{X_m/D})$.
In \cite[Proposition 4.2]{B}, Bloch shows that
$\widetilde\zeta_{C_0}$ remains Hodge on $X_{m+1}$ if and only if
the cup product $\widetilde\zeta_{C_0}|_{X_m}\cup \mu_m\in H^2(X_m, \mathcal O_{X_m})$ is zero.
This is the same as the claim that the cup product
$\widetilde\zeta_{C_0}|_{X_m}\cup \mu_m\cup \alpha$ is zero for any $\alpha\in H^{2n-2}(X_m, \Bbb C)$.
On the other hand, we have the following.
\begin{claim}
The cup product
$\widetilde\zeta_{C_0}|_{X_m}\cup \mu_m\cup \alpha$ is zero for any $\alpha\in H^{2n-2}(X_m, \Bbb C)$
if and only if
the cup product $\overline\mu_m\cup \varphi_m^*\alpha$ is zero on $C_m$.
\end{claim}
\noindent
{\it Proof of the claim.}
By the definition of $\widetilde\zeta_{C_0}|_{X_m}$,
the class $\widetilde\zeta_{C_0}|_{X_m}\cup \mu_m\cup \alpha$ is zero if and only if
the class $\varphi_m^*\mu_m\cup \varphi_m^*\alpha$ is zero.
Note that the cohomology group $H^{2n-2}(X_m, \Bbb C)$ decomposes as
\[
H^{2n-2}(X_m, \Bbb C) \cong H^n(X_m, \Omega_{X_m/D}^{n-2})
\oplus H^{n-1}(X_m, \Omega_{X_m/D}^{n-1})\oplus H^{n-2}(X_m, \mathcal K_{X_m/D}).
\]
For dimensional reasons, the cup product between $\varphi_m^*\mu_m$
and the pull back of the classes in $H^n(X_m, \Omega_{X_m/D}^{n-2})\oplus H^{n-1}(X_m, \Omega_{X_m/D}^{n-1})$
is zero.
Therefore, we can assume that $\alpha$ belongs to $H^{n-2}(X_m, \mathcal K_{X_m/D})$, and so
the class $\varphi_m^*\alpha$ belongs to $H^{n-2}(C_m, \varphi^*_m\mathcal K_{X_m/D})$.
On the other hand, $\varphi_m^*\mu_m$ belongs to
$H^1(C_m, \varphi_m^*\mathcal T_{X_m/D})$ and we have the natural map
\[
H^1(C_m, \varphi_m^*\mathcal T_{X_m/D})\to H^1(C_m, \mathcal N_{m/D}).
\]
Here $\overline{\mu}_m$ is the image of $\varphi_m^*\mu_m$ by this map.
Recall that the dual of
$H^1(C_m, \mathcal N_{m/D})$ is given by $H^{n-2}(C_m, \varphi_m^*\mathcal K_{X_m/D})$.
Therefore, it follows that the cup product $\varphi_m^*\mu_m\cup \varphi_m^*\alpha$
reduces to $\overline\mu_m\cup \varphi_m^*\alpha$.
This proves the claim.\qed\\
It immediately follows that if $\overline{\mu}_m$ is zero, then
$\widetilde\zeta_{C_0}$ remains Hodge in $X_{m+1}$.
For the converse, assume that $\widetilde\zeta_{C_0}$ remains Hodge in $X_{m+1}$.
There is a natural map
\[
\iota: H^{2n-2}(X_m, \Bbb C) \to H^1(C_m, \mathcal N_{m/D})^{\vee}
\]
as in the proof of the claim.
Namely,
for a class $\alpha$ of
$H^{2n-2}(X_m, \Bbb C)=
H^{n}(X_m, \Omega_{X_m/D}^{n-2})\oplus H^{n-1}(X_m, \Omega_{X_m/D}^{n-1})
\oplus H^{n-2}(X_m, \mathcal K_{X_m/D})$
and $\beta\in H^1(C_m, \mathcal N_{m/D})$,
let
$\iota(\alpha)(\beta)$ be the cup product $\beta\cup \varphi_m^*\alpha$
composed with the trace map $H^{n-1}(C_m, \omega_{C_m})\to \Bbb C$.
The restriction of this map to $X_0$ is a surjection by the semiregularity of $\varphi_0$.
Since surjectivity is an open condition, $\iota$ is also a surjection.
This shows that the vanishing of $\overline\mu_m\cup \varphi_m^*\alpha$
for all $\alpha\in H^{2n-2}(X_m, \Bbb C)$ is equivalent to the vanishing of $\overline\mu_m$.\qed\\
Thus, in this case we can write $\overline{\mu}_m$ as the coboundary of a
\v{C}ech 0-cochain with values in $\mathcal N_{m/D}$ on $C_m$.
We choose one such representative $\{\delta_i\}$ where
$\delta_i\in\Gamma(V_{i, m}, \mathcal N_{m/D})$.
Also note that by the exact sequence
\[
0\to \mathcal O_{U_{i, m}}\to \mathcal O_{U_{i, m}}(\varphi_m(V_{i, m}))\to \mathcal N_{m/D}|_{V_{i, m}}\to 0,
\]
there is a section $\widetilde\delta_i$ of $\mathcal O_{U_{i, m}}(\varphi_m(V_{i, m}))$
which maps to $\delta_i$.
Explicitly, putting
$\widetilde\eta_{ij, m} =
\widetilde\delta_i({\bf x}_i({\bf x}_j, t), t) - g_{ij, m}({\bf x}_j, t)\widetilde\delta_j({\bf x}_j, t)$,
the cochain $\widetilde\eta_{ij, m}$ coincides with $\eta_{ij, m}$ when restricted to $V_{i, m}\cap V_{j, m}$.
\section{Proof of the theorem}
Recall that the obstruction to deforming $\varphi_m$ is given by a cocycle $\nu_{ij, m}$ on $C_0$
defined by
$t^{m+1}\nu_{ij, m}({\bf x}_{j}) = f_{i, m}({\bf x}_{i}({\bf x}_{j}, t), t)
- g_{ij, m}({\bf x}_{j}, t)f_{j, m}({\bf x}_{j}, t)$.
Differentiating this with respect to $t$, we have
\[
\begin{array}{ll}
(m+1)t^m\nu_{ij, m}({\bf x}_{j}) & = \frac{\partial f_{i, m}({\bf x}_i, t)}{\partial t}
+ \sum_{l=1}^n\frac{\partial x_{i, l}({\bf x}_j, t)}{\partial t}\frac{\partial f_{i, m}({\bf x}_i, t)}{\partial {x}_{i, l}}
- g_{ij, m}({\bf x}_{j}, t)\frac{\partial f_{j, m}({\bf x}_j, t)}{\partial t}
-\frac{\partial g_{ij, m}({\bf x}_j, t)}{\partial t}f_{j, m}({\bf x}_j, t)\\
&= \frac{\partial f_{i, m}({\bf x}_i, t)}{\partial t}
- g_{ij, m}({\bf x}_{j}, t)\frac{\partial f_{j, m}({\bf x}_j, t)}{\partial t}
+ \eta_{ij, m}
-\frac{\partial g_{ij, m}({\bf x}_j, t)}{\partial t}f_{j, m}({\bf x}_j, t).
\end{array}
\]
Since $f_{j, m}$ is zero on $C_m$, we can ignore the last term.
For the same reason, we can replace $\eta_{ij, m}$ by $\widetilde\eta_{ij, m}$.
Dividing this by $f_{i, m}({\bf x}_i, t)$, we have
\[
\begin{array}{ll}
(m+1)t^m\frac{\nu_{ij, m}({\bf x}_j)}{f_{i, m}({\bf x}_i, t)}
& = \frac{1}{f_{i, m}({\bf x}_i, t)}\frac{\partial f_{i, m}({\bf x}_i, t)}{\partial t}
- \frac{g_{ij, m}({\bf x}_{j}, t)}{f_{i, m}({\bf x}_i, t)}\frac{\partial f_{j, m}({\bf x}_j, t)}{\partial t}
+ \frac{\eta_{ij, m}}{f_{i, m}({\bf x}_i, t)}\\
& = \frac{1}{f_{i, m}({\bf x}_i, t)}\frac{\partial f_{i, m}({\bf x}_i, t)}{\partial t}
- \frac{g_{ij, m}({\bf x}_{j}, t)}{f_{i, m}({\bf x}_i, t)}\frac{\partial f_{j, m}({\bf x}_j, t)}{\partial t}
+\frac{\widetilde\delta_{i}}{f_{i, m}({\bf x}_i, t)}-\frac{g_{ij, m}\widetilde\delta_{j}}{f_{i, m}({\bf x}_i, t)}\\
& = \frac{1}{f_{i, m}({\bf x}_i, t)}(\frac{\partial f_{i, m}({\bf x}_i, t)}{\partial t} +\widetilde\delta_i)
- \frac{1}{f_{j, m}({\bf x}_j, t)}(\frac{\partial f_{j, m}({\bf x}_j, t)}{\partial t} +\widetilde\delta_j)
\end{array}
\]
modulo functions holomorphic on $C_m$.
Note that this is an equation over $\Bbb C[t]/t^{m+1}$, and so we have
$\frac{g_{ij, m}({\bf x}_{j}, t)f_{j, m}({\bf x}_j, t)}{f_{i, m}({\bf x}_i, t)} = 1$.
Let $[\frac{1}{f_{i, m}({\bf x}_i, t)}(\frac{\partial f_{i, m}({\bf x}_i, t)}{\partial t} +\widetilde\delta_i)]_m$
be the coefficient of $t^m$ in
$\frac{1}{f_{i, m}({\bf x}_i, t)}(\frac{\partial f_{i, m}({\bf x}_i, t)}{\partial t} +\widetilde\delta_i)$.
Note that the above equation still holds when we replace
$\frac{1}{f_{i, m}({\bf x}_i, t)}(\frac{\partial f_{i, m}({\bf x}_i, t)}{\partial t} +\widetilde\delta_i)$
and $\frac{1}{f_{j, m}({\bf x}_j, t)}(\frac{\partial f_{j, m}({\bf x}_j, t)}{\partial t} +\widetilde\delta_j)$
by $[\frac{1}{f_{i, m}({\bf x}_i, t)}(\frac{\partial f_{i, m}({\bf x}_i, t)}{\partial t} +\widetilde\delta_i)]_m$
and $[\frac{1}{f_{j, m}({\bf x}_j, t)}(\frac{\partial f_{j, m}({\bf x}_j, t)}{\partial t} +\widetilde\delta_j)]_m$,
respectively.
Also, we can regard $[\frac{1}{f_{i, m}({\bf x}_i, t)}(\frac{\partial f_{i, m}({\bf x}_i, t)}{\partial t} +\widetilde\delta_i)]_m$
as a function on $U_i$ by forgetting $t^m$.
Now we proceed as in \cite{N6} to produce a family of
local $C^{\infty}$ differential forms on $C_0$ which represent the obstruction class $\{\nu_{ij, m}\}$.
Namely, introduce any Hermitian metric on $X_0$.
For each $V_{i}$, the vectors of length $r$ in the normal bundle of the image $\varphi_0(V_{i})$
give a circle bundle over it.
Here $r$ is a small positive real number; when $C_0$ is a singular curve, we
construct the bundle only away from a small neighborhood of the singular points.
These local circle bundles glue and give a global circle bundle on $C_0$ (away from
singular points).
Note that the class $[\nu_{ij, m}]$ is zero if and only if its pairing with
any class in $H^{n-2}(C_0, \varphi_0^*\mathcal K_{X_0})$ is zero.
By semiregularity, any class in $H^{n-2}(C_0, \varphi_0^*\mathcal K_{X_0})$
is the restriction of an element of $H^{n-2}(X_0, \mathcal K_{X_0})$.
Let $\Theta$ be any closed $C^{\infty}$ $(2n-2)$-form on $X_0$.
In particular, $\Theta$ represents a class in
\[
H^{2n-2}(X_0) = H^{n-2}(X_0, \mathcal K_{X_0})\oplus
H^{n-1}(X_0, \Omega^{n-1}_{X_0})\oplus H^n(X_0, \Omega^{n-2}_{X_0}).
\]
Here $\Omega^i_{X_0}$ is the sheaf of holomorphic $i$-forms on $X_0$.
Integrating the restriction of the singular $(2n-2)$-form
$[\frac{1}{f_{i, m}({\bf x}_i, t)}(\frac{\partial f_{i, m}({\bf x}_i, t)}{\partial t} +\widetilde\delta_i)]_m\Theta$ to the circle bundle
along the fibers, we obtain a set of local closed $(2n-3)$-forms $\gamma_i$ on $V_{i}$.
As the radius $r$ goes to zero, the \v{C}ech 1-cocycle obtained as the differences of
$\{\gamma_i\}$ converges to the obstruction class $[\nu_{ij, m}]$ paired with the pullback of $\Theta$
by $\varphi_0$.
The argument in \cite{N6} proves the following:
\begin{enumerate}
\item The class obtained from $\{\gamma_i\}$ does not depend on the radius $r$.
\item If $C_0$ is nonsingular, then since $\{\gamma_i\}$ is defined on an open covering
of $C_0$, the class determined by $\{\gamma_i\}$ is zero by definition for any $\Theta$.
This implies that $[\nu_{ij, m}]\in H^1(C_0, \mathcal N_{\varphi_0})$ is zero, too.
\item If $C_0$ has singular points, $\{\gamma_i\}$ is defined only away from the singular points,
so one cannot immediately conclude that the class $[\nu_{ij, m}]$ is zero.
However, we can identify the class determined by $\{\gamma_i\}$ by local calculation at singular points,
and Stokes' theorem shows each of these local contributions is zero, which implies
the class $[\nu_{ij, m}]\in H^1(C_0, \mathcal N_{\varphi_0})$ is again zero.
\end{enumerate}
This finishes the proof of the theorem.\qed
\begin{comment}
When $n>2$, we are assuming $C_m$ is smooth.
Regarding the relevant bundles as real bundles and taking the complexification, we obtain a sequence of
$C^{\infty}$ bundles
\[
0\to \mathcal N_{0/D}\otimes\Bbb C \to \mathcal N_{m/D}\otimes\Bbb C
\to \mathcal N_{m-1/D}\otimes\Bbb C\to 0.
\]
Taking the associated cohomology sequence, we have
\[
\cdots \to H^1(C_0, \mathcal N_{0/D}\otimes\Bbb C)
\to H^1(C_m, \mathcal N_{m/D}\otimes\Bbb C)
\to H^1(C_{m-1}, \mathcal N_{m-1/D}\otimes\Bbb C)\to\cdots.
\]
The obstruction class $[\nu_{ij, m}]$ belongs to
$H^1(C_0, \mathcal N_{0/D})\subset H^1(C_0, \mathcal N_{0/D}\otimes\Bbb C)$.
The above argument shows that the image of it in
$H^1(C_m, \mathcal N_{m/D})\subset H^1(C_m, \mathcal N_{m/D}\otimes\Bbb C)$
is zero.
It suffices to show that the composition
\[
H^1(C_0, \mathcal N_{0/D})\to H^1(C_m, \mathcal N_{m/D})\to H^1(C_m, \mathcal N_{m/D}\otimes\Bbb C)
\]
is an injection.
To see this, consider the dual map
\[
H^{2n-3}(C_m, \mathcal N_{m/D}^{\vee}\otimes\Bbb C)
\to H^{n-2}(C_0, \varphi_0^*\mathcal K_{X_0}).
\]
It suffices to show that this map is surjective.
Fixing suitable diffeomorphisms between $X_0$ and the other fibers of $\mathfrak X$,
any class
in $H^{n-2}(X_0, \mathcal K_{X_0})$ can be extended to
a family of closed relative $(2n-2)$-forms on these fibers,
that is, a de Rham representative of $H^{2n-2}(\mathfrak X/D)$.
This shows that the natural composition
\[
H^{2n-2}(\mathfrak X/D)\to H^{2n-3}(C_m, \mathcal N_{m/D}^{\vee}\otimes\Bbb C)
\to H^{n-2}(C_0, \varphi_0^*\mathcal K_{X_0}),
\]
is surjective, proving the claim.\qed\\
\end{comment}
\section{Criterion for semiregularity}\label{sec:6}
In this section, we give necessary conditions for a map $\varphi_0\colon C_0\to X_0$
to be semiregular.
It turns out that some classical notions which appeared in different contexts, such as the Cayley-Bacharach condition
and d-semistability, are related to relative deformations of maps.
\subsection{The case $n>2$}
First we consider the case $n>2$.
Let $\pi\colon \mathfrak X\to D$ be a family of $n$-dimensional K\"ahler manifolds.
Let $\varphi_0\colon C_0\to X_0$ be a map from a compact smooth complex manifold of dimension
$n-1$ which is an immersion.
We also assume that the image $\varphi_0(C_0)$ has normal crossing singularity.
Consider the exact sequence on $\varphi_0(C_0)$ given by
\[
0\to \iota^*\mathcal K_{X_0}\to p_*\varphi_0^*\mathcal K_{X_0}\to \mathcal Q\to 0,
\]
where $\iota\colon \varphi_0(C_0)\to X_0$ is the inclusion, and $p\colon C_0\to \varphi_0(C_0)$
is the normalization.
The sheaf $\mathcal Q$ is defined by this sequence.
It is supported on the singular locus $sing(\varphi_0(C_0))$ of $\varphi_0(C_0)$.
We have an associated exact sequence of cohomology groups
\[
\begin{array}{ll}
\cdots\to H^{n-2}(\varphi_0(C_0), \iota^*\mathcal K_{X_0})& \to H^{n-2}(\varphi_0(C_0), p_*\varphi_0^*\mathcal K_{X_0})
\to H^{n-2}(\varphi_0(C_0), \mathcal Q)\\
& \to
H^{n-1}(\varphi_0(C_0), \iota^*\mathcal K_{X_0})\to H^{n-1}(\varphi_0(C_0), p_*\varphi_0^*\mathcal K_{X_0})
\to H^{n-1}(\varphi_0(C_0), \mathcal Q).
\end{array}
\]
For dimension reasons, we have $H^{n-1}(\varphi_0(C_0), \mathcal Q) = 0$.
Also, note that
\[
H^{i}(\varphi_0(C_0), p_*\varphi_0^*\mathcal K_{X_0})\cong H^{i}(C_0, \varphi_0^*\mathcal K_{X_0})
\]
for $i = n-2, n-1$, by the Leray spectral sequence.
Therefore,
if $\varphi_0(C_0)$ is semiregular in the classical sense, that is,
the natural map $H^{n-2}(X_0, \mathcal K_{X_0})\to H^{n-2}(\varphi_0(C_0), \iota^*\mathcal K_{X_0})$
is surjective, then the map $\varphi_0$ is semiregular if and only if the map
$H^{n-2}(\varphi_0(C_0), \iota^*\mathcal K_{X_0}) \to H^{n-2}(\varphi_0(C_0), p_*\varphi_0^*\mathcal K_{X_0})$
is surjective.
\begin{cor}\label{cor:1}
Assume that $\varphi_0(C_0)$ is semiregular in the classical sense
and the class $[\varphi_0(C_0)]$ remains Hodge on the fibers of $\mathfrak X$.
Then if the map
$H^{n-2}(\varphi_0(C_0), \iota^*\mathcal K_{X_0}) \to H^{n-2}(\varphi_0(C_0), p_*\varphi_0^*\mathcal K_{X_0})$
is surjective, $\varphi_0$ can be deformed to general fibers of $\mathfrak X$.\qed
\end{cor}
On the other hand, consider the exact sequence
\[
0\to p_*\mathcal N_{\varphi_0}\to \mathcal N_{\iota}\to \mathcal S\to 0,
\]
of sheaves on $\varphi_0(C_0)$, where $\mathcal S$ is defined by this sequence.
The associated exact sequence of cohomology groups is
\[
\begin{array}{l}
0\to H^0(\varphi_0(C_0), p_*\mathcal N_{\varphi_0})\to H^0(\varphi_0(C_0), \mathcal N_{\iota})
\to H^0(\varphi_0(C_0), \mathcal S)\\
\hspace{.4in} \to H^1(\varphi_0(C_0), p_*\mathcal N_{\varphi_0})\to H^1(\varphi_0(C_0), \mathcal N_{\iota})\to \cdots
\end{array}
\]
We have
$H^i(\varphi_0(C_0), p_*\mathcal N_{\varphi_0})\cong H^i(C_0, \mathcal N_{\varphi_0})$
again by the Leray spectral sequence.
Note that the group $H^i(C_0, \mathcal N_{\varphi_0})$
is isomorphic to the dual of $H^{n-1-i}(C_0, \varphi_0^*\mathcal K_{X_0})$, $i = 0, 1$.
Similarly, the group $H^i(\varphi_0(C_0), \mathcal N_{\iota})$ is isomorphic to the dual of
$H^{n-1-i}(\varphi_0(C_0), \iota^*\mathcal K_{X_0})$, $i = 0, 1$.
\begin{comment}
(Since $\varphi_0(C_0)$ is reduced).
\end{comment}
Comparing the dual of the previous cohomology exact sequence with the latter,
we obtain $H^{n-2}(\varphi_0(C_0), \mathcal Q)^{\vee}\cong H^0(\varphi_0(C_0), \mathcal S)$.
In particular, we can restate Corollary \ref{cor:1} as follows.
\begin{cor}\label{cor:3}
Assume that $\varphi_0(C_0)$ is semiregular in the classical sense
and the class $[\varphi_0(C_0)]$ remains Hodge on the fibers of $\mathfrak X$.
Then if the map
$H^{0}(\varphi_0(C_0), \mathcal N_{\iota}) \to H^{0}(\varphi_0(C_0), \mathcal S)$
is surjective, $\varphi_0$ can be deformed to general fibers of $\mathfrak X$.\qed
\end{cor}
The sheaf $\mathcal S$ is the \emph{infinitesimal normal sheaf} of the singular locus of $\varphi_0(C_0)$,
as we will see below.
Recall that we assume that the image $\varphi_0(C_0)$ has normal crossing singularity.
Then, for any point $p\in \varphi_0(C_0)$, we can take a coordinate system
$(x_1, \dots, x_n)$ on a neighborhood $U$ of
$p$ in $X_0$
so that $U\cap \varphi_0(C_0)$ is given by $x_1\cdots x_k = 0$, $1\leq k\leq n$.
Let $\mathcal I_j$ be the ideal of $\mathcal O_U$ generated by $x_j$
and let $\mathcal I$ be the ideal defining $\varphi_0(C_0)\cap U$ in $U$.
Then
\[
\mathcal I_1/\mathcal I_1\mathcal I\otimes \cdots \otimes \mathcal I_k/\mathcal I_k\mathcal I
\]
gives an invertible sheaf on the singular locus of $\varphi_0(C_0)\cap U$.
Globalizing this construction, we obtain an invertible sheaf on the singular locus of $\varphi_0(C_0)$.
Then the dual invertible sheaf of this is called the infinitesimal normal sheaf of
the singular locus of $\varphi_0(C_0)$, see \cite{F}.
\begin{lem}
The sheaf $\mathcal S$ is isomorphic to the infinitesimal normal sheaf.
\end{lem}
\proof
Note that the sheaf
$\mathcal I_1/\mathcal I_1\mathcal I\otimes \cdots \otimes \mathcal I_k/\mathcal I_k\mathcal I$
is generated by the element $x_1\otimes \cdots \otimes x_k$.
The sheaf $p_*\mathcal N_{\varphi_0}$ is given by
$\oplus_{i=1}^kHom(\mathcal I_i/\mathcal I_i^2, \mathcal O_U)$ on $U$.
The sheaf $\mathcal N_{\iota}$ is given by $Hom(\mathcal I/\mathcal I^2, \mathcal O_U)$.
The sheaf $\mathcal N_{\iota}$ is an invertible sheaf and generated by the morphism which
maps $x_1\cdots x_k$ to $1\in\mathcal O_U$.
In particular, multiplication by any $x_1\cdots \check{x}_i\cdots x_k$ maps the generator into the image of
$p_*\mathcal N_{\varphi_0}\to \mathcal N_{\iota}$,
namely, to the image of the generator of $Hom(\mathcal I_i/\mathcal I_i^2, \mathcal O_U)$.
Also note that the ideal of the singular locus of $\varphi_0(C_0)$ is generated by
$x_1\cdots \check{x}_i\cdots x_k$, $i = 1, \dots, k$.
From these, it is easy to see that the cokernel of the map
$p_*\mathcal N_{\varphi_0}\to \mathcal N_{\iota}$ is isomorphic to the dual of
$\mathcal I_1/\mathcal I_1\mathcal I\otimes \cdots \otimes \mathcal I_k/\mathcal I_k\mathcal I$.\qed\\
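To make the lemma concrete, consider the simplest case $k = n = 2$, a node of a curve in a surface; this is exactly the situation appearing in the next subsection, and the computation below merely restates the proof in local coordinates.
In the local model $U$ with coordinates $(x_1, x_2)$ and $\varphi_0(C_0)\cap U = \{x_1x_2 = 0\}$, so that $\mathcal I = (x_1x_2)$, we have
\[
p_*\mathcal N_{\varphi_0}|_U \cong Hom(\mathcal I_1/\mathcal I_1^2, \mathcal O_U)\oplus
Hom(\mathcal I_2/\mathcal I_2^2, \mathcal O_U),\qquad
\mathcal N_{\iota}|_U \cong Hom(\mathcal I/\mathcal I^2, \mathcal O_U).
\]
The generator of $\mathcal N_{\iota}$ sends $x_1x_2$ to $1$; multiplied by $x_2$ (resp.\ $x_1$), it lands in the image of the first (resp.\ second) summand, so the cokernel is
\[
\mathcal S|_U \cong (\mathcal I_1/\mathcal I_1\mathcal I\otimes \mathcal I_2/\mathcal I_2\mathcal I)^{\vee},
\]
an invertible sheaf on the singular locus $\{x_1 = x_2 = 0\}$, that is, a skyscraper sheaf at the node.
This is the description of $\mathcal S$ used below in the case $n = 2$.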
Recall that the infinitesimal normal sheaf is related to deformations of $\varphi_0(C_0)$
which smooth the singular locus, see \cite{F}.
In particular, $\varphi_0(C_0)$ is called \emph{d-semistable} if the infinitesimal normal sheaf is trivial,
and a d-semistable variety carries a log structure which is log smooth over the standard log point, so that
one can study its deformations via log smooth deformation theory \cite{KF, KK, KN}.
By Corollary \ref{cor:3}, the infinitesimal normal sheaf plays a crucial role in the deformation theory
even if $\varphi_0(C_0)$ is not d-semistable.
On the other hand, the notion of d-semistability gives a sufficient condition for the existence of
deformations in this situation, too, as follows.
\begin{cor}\label{cor:4}
Assume that the image $\varphi_0(C_0)$ is very ample and $H^1(X_0, \mathcal O_{X_0}(\varphi_0(C_0))) = 0$.
Assume also that $\varphi_0(C_0)$ is d-semistable and the singular locus of $\varphi_0(C_0)$ is connected.
Then the map $\varphi_0$ is semiregular.
\end{cor}
\proof
First, the subvariety $\varphi_0(C_0)$ is semiregular in the classical sense.
Namely, consider the cohomology exact sequence
\[
\cdots \to H^1(X_0, \mathcal O_{X_0}(\varphi_0(C_0)))\to H^1(\varphi_0(C_0), \mathcal N_{\iota})
\to H^2(X_0, \mathcal O_{X_0})\to \cdots,
\]
here $\iota\colon \varphi_0(C_0)\to X_0$ is the inclusion.
When $H^1(X_0, \mathcal O_{X_0}(\varphi_0(C_0))) = 0$,
the map $H^1(\varphi_0(C_0), \mathcal N_{\iota})\to H^2(X_0, \mathcal O_{X_0})$ is injective.
Since this map is the dual of the semiregularity map $H^{n-2}(X_0, \mathcal K_{X_0})\to
H^{n-2}(\varphi_0(C_0), \iota^*\mathcal K_{X_0})$,
it follows that $\varphi_0(C_0)$ is semiregular.
To prove that $\varphi_0$ is semiregular, it suffices to show the map
$H^{0}(\varphi_0(C_0), \mathcal N_{\iota}) \to H^{0}(\varphi_0(C_0), \mathcal S)$
is surjective.
When $\varphi_0(C_0)$ is d-semistable, the sheaf $\mathcal S$ is the trivial line bundle on the
singular locus of $\varphi_0(C_0)$.
Since we assume that the singular locus is connected, it suffices to show that the map
$H^{0}(\varphi_0(C_0), \mathcal N_{\iota}) \to H^{0}(\varphi_0(C_0), \mathcal S)$ is not the zero map.
This in turn is equivalent to the claim that the injection
$H^0(\varphi_0(C_0), p_*\mathcal N_{\varphi_0})\to H^0(\varphi_0(C_0), \mathcal N_{\iota})$
is not an isomorphism.
Since $\varphi_0(C_0)$ is very ample, there is a section $s$ of $\mathcal O_{X_0}(\varphi_0(C_0))$
which does not vanish identically on the singular locus of $\varphi_0(C_0)$.
Then if $\sigma$ is a section of $\mathcal O_{X_0}(\varphi_0(C_0))$ defining $\varphi_0(C_0)$,
the sections $\sigma + \tau s$, where $\tau\in\Bbb C$ is a parameter, deform
$\varphi_0(C_0)$, and the non-vanishing of $s$ on the singular locus of $\varphi_0(C_0)$ implies that
this smoothes a part of the singular locus of $\varphi_0(C_0)$.
Since the sections of $H^0(\varphi_0(C_0), p_*\mathcal N_{\varphi_0})$ give first order deformations
which do not smooth the singular locus, it follows that the map
$H^0(\varphi_0(C_0), p_*\mathcal N_{\varphi_0})\to H^0(\varphi_0(C_0), \mathcal N_{\iota})$
is not an isomorphism.
This proves the claim.\qed
\subsection{The case $n = 2$}
Now let us consider the case $n = 2$.
Although we can work in a more general situation,
we assume $\varphi_0(C_0)$ is a reduced nodal curve for simplicity.
However, $C_0$ need not be smooth.
Let $p\colon C_0\to \varphi_0(C_0)$ be the natural map, which is a partial normalization.
In this case, we can deduce a very explicit criterion for semiregularity.
Again, we have the exact sequence
\[
\begin{array}{l}
0\to H^0(\varphi_0(C_0), p_*\mathcal N_{\varphi_0})\to H^0(\varphi_0(C_0), \mathcal N_{\iota})
\to H^0(\varphi_0(C_0), \mathcal S)\\
\hspace{.4in} \to H^1(\varphi_0(C_0), p_*\mathcal N_{\varphi_0})\to H^1(\varphi_0(C_0), \mathcal N_{\iota})\to \cdots,
\end{array}
\]
and if $\varphi_0(C_0)$ is semiregular in the classical sense,
then $\varphi_0$ is semiregular if and only if the map
$H^0(\varphi_0(C_0), \mathcal N_{\iota})
\to H^0(\varphi_0(C_0), \mathcal S)$ is surjective.
Let $P = \{p_i\}$ be the set of nodes of $\varphi_0(C_0)$ whose inverse image by $p$ consists of two points.
Then the sheaf $\mathcal S$ is isomorphic to $\oplus_{i}\Bbb C_{p_i}$, where
$\Bbb C_{p_i}$ is the
skyscraper sheaf at $p_i$.
By an argument similar to the one in the previous subsection, we proved the following
in \cite{N6}.
\begin{thm}\label{thm:CB}
Assume that $\varphi_0(C_0)$ is semiregular in the classical sense.
Then the map $\varphi_0$ is semiregular if and only if for each $p_i\in P$, there is a first order deformation of
$\varphi_0(C_0)$ which smoothes $p_i$, but does not smooth the other nodes of $P$. \qed
\end{thm}
For applications, it will be convenient to write this in a geometric form.
Consider the exact sequence
\[
0\to \mathcal O_{X_0}\to \mathcal O_{X_0}(\varphi_0(C_0))\to \mathcal N_{\iota}\to 0
\]
of sheaves on $X_0$ and the associated cohomology sequence
\[
0 \to H^0(X_0, \mathcal O_{X_0})\to H^0(X_0, \mathcal O_{X_0}(\varphi_0(C_0)))\to
H^0(\varphi_0(C_0), \mathcal N_{\iota})
\to H^1(X_0, \mathcal O_{X_0})\to \cdots.
\]
Let $V$ be the image of the map $H^0(\varphi_0(C_0), \mathcal N_{\iota})
\to H^1(X_0, \mathcal O_{X_0})$.
Since we are working in the analytic category, we have the exact sequence
\[
0\to \Bbb Z\to \mathcal O_{X_0}\to \mathcal O_{X_0}^{\ast}\to 0
\]
of sheaves on $X_0$.
Let $\bar V$ be the image of $V$ in $Pic^0(X_0) = H^1(X_0, \mathcal O_{X_0}^{\ast})$.
In \cite{N6}, we proved the following.
\begin{cor}\label{cor:geomCB}
In the situation of Theorem \ref{thm:CB}, the map $\varphi_0$ is unobstructed
if for each $p_i\in P$, there is an effective divisor $D$, with $\mathcal O_{X_0}(\varphi_0(C_0)-D)\in \bar V$,
which avoids $p_i$ but passes through all points in $P\setminus\{p_i\}$.
\end{cor}
A particularly nice case is
when the map $H^0(\varphi_0(C_0), \mathcal N_{\iota})\to H^1(X_0, \mathcal O_{X_0})$ is surjective.
This is the case when $\varphi_0(C_0)$ is sufficiently ample.
Then, if for each $p_i\in P$ there is an effective divisor $D$, algebraically equivalent to $\varphi_0(C_0)$,
which avoids $p_i$ but passes through all points in $P\setminus\{p_i\}$, the map $\varphi_0$ is semiregular.
This is, in a sense, the opposite of the classical \emph{Cayley-Bacharach property},
see for example \cite{BHPV}.
Combined with Theorem \ref{thm:1}, we have the following.
\begin{cor}\label{cor:5}
Assume that $\varphi_0(C_0)$ is reduced, nodal and semiregular in the classical sense
and the class $[\varphi_0(C_0)]$ remains Hodge on the fibers of $\mathfrak X$.
Then the map $\varphi_0$ deforms to general fibers of $\mathfrak X$
if the condition in Theorem \ref{thm:CB} or Corollary \ref{cor:geomCB} is satisfied. \qed
\end{cor}
In the case of $n = 2$, the original exact sequence
\[
\begin{array}{ll}
\cdots\to H^{0}(\varphi_0(C_0), \iota^*\mathcal K_{X_0})& \to H^{0}(\varphi_0(C_0), p_*\varphi_0^*\mathcal K_{X_0})
\to H^{0}(\varphi_0(C_0), \mathcal Q)\\
& \to
H^{1}(\varphi_0(C_0), \iota^*\mathcal K_{X_0})\to H^{1}(\varphi_0(C_0), p_*\varphi_0^*\mathcal K_{X_0})
\to H^{1}(\varphi_0(C_0), \mathcal Q)
\end{array}
\]
before taking the dual is sometimes also useful.
In this case, if $\varphi_0(C_0)$ is semiregular in the classical sense, then $\varphi_0$ is semiregular
if and only if the map
\[
H^{0}(\varphi_0(C_0), \iota^*\mathcal K_{X_0}) \to H^{0}(\varphi_0(C_0), p_*\varphi_0^*\mathcal K_{X_0})
\cong H^0(C_0, \varphi_0^*\mathcal K_{X_0})
\]
is surjective.
For example,
when the canonical sheaf $\mathcal K_{X_0}$ is trivial, it is clear that this map
is surjective, and also that $\varphi_0(C_0)$ is semiregular in the classical sense.
In fact, in this case it is not necessary to assume that the image $\varphi_0(C_0)$ is nodal or reduced,
and any immersion $\varphi_0$ from a reduced curve $C_0$ is semiregular.
It is known that when $X_0$ is a K3 surface and the image $\varphi_0(C_0)$
is reduced, then the map $\varphi_0$
deforms to general fibers if the class $[\varphi_0(C_0)]$ remains Hodge.
This claim is proved using the twistor family associated with the hyperk\"ahler structure of
K3 surfaces, see for example \cite{CGL}.
Corollary \ref{cor:5} gives a generalization of this fact to general surfaces.\\
\begin{comment}
A typical example to which this result applies is the case when the image $\varphi_0(C_0)$ is sufficiently ample.
Namely, assume that $H^1(X, \mathcal O_X(\varphi_0(C_0))) = 0$.
Then the the image $\varphi_0(C_0)$ is semiregular in the classical sense.
In fact, consider the cohomology exact sequence
\[
\cdots \to H^1(X, \mathcal O_X(\varphi_0(C_0)))\to H^1(\varphi_0(C_0), \mathcal N_{\iota})
\to H^2(X, \mathcal O_X)\to \cdots,
\]
here $\iota\colon \varphi_0(C_0)\to X$ is the inclusion.
When $H^1(X, \mathcal O_X(\varphi_0(C_0))) = 0$,
the map $H^1(\varphi_0(C_0), \mathcal N_{\iota})\to H^2(X, \mathcal O_X)$ is injective.
Since this map is the dual of the semiregularity map $H^{0}(X, \mathcal K_X)\to
H^{0}(\varphi_0(C_0), \mathcal \iota^*K_X)$,
it follows that $\varphi_0(C_0)$ is semiregular.
This argument is valid for any $n$.
Geometrically, the corollary claims that if the linear system $\mathcal O_X(\varphi_0(C_0))$ contains,
for every $i$, a divisor $C_i$ such that $p_i\notin C_i$ but $P\setminus \{p_i\}\subset C_i$,
then the map $\varphi_0$ can be deformed to other fibers.
Also, this property is related to the \emph{Cayley-Bacharach property} of invertible sheaves \cite{}.
Let $X$ be a surface and $P$ be a 0-dimensional subscheme consisting of simple points.
Let $\mathcal L$ be an invertible sheaf on $X$.
Then $P$ satisfies the Cayley-Bacharach property with respect to $\mathcal L\otimes \mathcal K_X$
if for every point $p\in P$, we have
\[
H^0(X, \mathcal L\otimes \mathcal K_X\otimes \mathcal I_Y)
= H^0(X, \mathcal L\otimes \mathcal K_X\otimes \mathcal I_{Y\setminus\{p\}}).
\]
The condition in the corollary can be seen as the opposite of the Cayley-Bacharach property
of $P$ with respect to $\mathcal O_X(\varphi_0(C_0))$.
\end{comment}
In \cite{N6}, we also proved the following.
\begin{thm}\label{thm:2}
Let $X$ be a smooth complex projective surface with an effective canonical class.
Let $L$ be a very ample class.
Then, there is a positive number $A$ which depends on $L$ such that
for any positive integer $m$, the numerical class of $mL$ contains
an embedded irreducible nodal curve $C$ whose geometric genus is less than $Am$.\qed
\end{thm}
Such a curve has a very large number of nodes, roughly $\frac{L^2}{2}m^2$.
In particular, for any positive number $\varepsilon$, we can assume
$\delta(C) > g(C)^{2-\varepsilon}$ for large $m$, where $\delta(C)$ is the number of nodes of $C$
and $g(C)$ is the geometric genus of $C$.
Moreover, the proof in \cite{N6} shows that
we can take $C$ to be semiregular when it is considered as a map $\varphi\colon \tilde C\to X$
from the normalization $\tilde C$, and $\varphi$
has non-trivial deformations which give
equisingular deformations of $C$ in $X$.
Now let $Y$ be any smooth projective variety of dimension not less than two.
Let $X$ be a smooth surface in $Y$ which is a complete intersection of sufficiently high degree.
Then by the above theorem there is an embedded irreducible nodal curve $C$
whose geometric genus is less than $Am$ and which deforms equisingularly in $X$.
Moreover, by Theorem \ref{thm:1}, if we deform $X$ to a smooth surface $X'$ inside $Y$, then the curve $C$
also deforms to $X'$ equisingularly.
Since a dense open subset of $Y$ is swept out by such surfaces $X'$,
we have the following, which claims that any projective variety can be dominated by
an equisingular family of nodal curves with a large number of nodes (and small geometric genus).
\begin{cor}\label{cor:20}
Let $Y$ be a projective variety of dimension $n\geq 2$.
Then for any positive number $\varepsilon$,
there is an $(n-1)$-dimensional family $\mathcal C\to B$ of irreducible nodal curves
whose fibers satisfy $\delta > g^{2-\varepsilon}$, and a map
$p\colon \mathcal C\to Y$ which dominates $Y$.
Here $\delta$ is the number of nodes of a fiber of $\mathcal C$ and
$g$ is the geometric genus of it.
\end{cor}
\proof
This follows from the above argument applied to a desingularization of $Y$.\qed\\
\section*{Acknowledgement}
\noindent
The author was supported by JSPS KAKENHI Grant Number 18K03313.
\section{Full results}
\label{app:results}
Results start on the next page.
\begin{sidewaysfigure}[h!t]
\includegraphics[width=\textwidth]{figures/qualitative_results/full/qual_results_0.pdf}
\end{sidewaysfigure}
\begin{sidewaysfigure}
\includegraphics[width=\textwidth]{figures/qualitative_results/full/qual_results_1.pdf}
\end{sidewaysfigure}
\begin{sidewaysfigure}
\includegraphics[width=\textwidth]{figures/qualitative_results/full/qual_results_2.pdf}
\end{sidewaysfigure}
\begin{sidewaysfigure}
\includegraphics[width=\textwidth]{figures/qualitative_results/full/qual_results_3.pdf}
\end{sidewaysfigure}
\begin{sidewaysfigure}
\includegraphics[width=\textwidth]{figures/qualitative_results/full/qual_results_4.pdf}
\end{sidewaysfigure}
\begin{sidewaysfigure}
\includegraphics[width=\textwidth]{figures/qualitative_results/full/qual_results_5.pdf}
\end{sidewaysfigure}
\begin{sidewaysfigure}
\includegraphics[width=\textwidth]{figures/qualitative_results/full/qual_results_6.pdf}
\end{sidewaysfigure}
\begin{sidewaysfigure}
\includegraphics[width=\textwidth]{figures/qualitative_results/full/qual_results_7.pdf}
\end{sidewaysfigure}
\begin{sidewaysfigure}
\includegraphics[width=\textwidth]{figures/qualitative_results/full/qual_results_8.pdf}
\end{sidewaysfigure}
\begin{sidewaysfigure}
\includegraphics[width=\textwidth]{figures/qualitative_results/full/qual_results_9.pdf}
\end{sidewaysfigure}
\begin{sidewaysfigure}
\includegraphics[width=\textwidth]{figures/qualitative_results/full/qual_results_10.pdf}
\end{sidewaysfigure}
\begin{sidewaysfigure}
\includegraphics[width=\textwidth]{figures/qualitative_results/full/qual_results_11.pdf}
\end{sidewaysfigure}
\begin{sidewaysfigure}
\includegraphics[width=\textwidth]{figures/qualitative_results/full/qual_results_12.pdf}
\end{sidewaysfigure}
\begin{sidewaysfigure}
\includegraphics[width=\textwidth]{figures/qualitative_results/full/qual_results_13.pdf}
\end{sidewaysfigure}
\begin{sidewaysfigure}
\includegraphics[width=\textwidth]{figures/qualitative_results/full/qual_results_14.pdf}
\end{sidewaysfigure}
\begin{sidewaysfigure}
\includegraphics[width=\textwidth]{figures/qualitative_results/full/qual_results_15.pdf}
\end{sidewaysfigure}
\begin{sidewaysfigure}
\includegraphics[width=\textwidth]{figures/qualitative_results/full/qual_results_16.pdf}
\end{sidewaysfigure}
\begin{sidewaysfigure}
\includegraphics[width=\textwidth]{figures/qualitative_results/full/qual_results_17.pdf}
\end{sidewaysfigure}
\begin{sidewaysfigure}
\includegraphics[width=\textwidth]{figures/qualitative_results/full/qual_results_18.pdf}
\end{sidewaysfigure}
\begin{sidewaysfigure}
\includegraphics[width=\textwidth]{figures/qualitative_results/full/qual_results_19.pdf}
\end{sidewaysfigure}
\begin{sidewaysfigure}
\includegraphics[width=\textwidth]{figures/qualitative_results/full/qual_results_20.pdf}
\end{sidewaysfigure}
\begin{sidewaysfigure}
\includegraphics[width=\textwidth]{figures/qualitative_results/full/qual_results_21.pdf}
\end{sidewaysfigure}
\begin{sidewaysfigure}
\includegraphics[width=\textwidth]{figures/qualitative_results/full/qual_results_22.pdf}
\end{sidewaysfigure}
\section{Conclusion}
We proposed \mbox{\textsc{SeeThrough}}, a method for automatically finding partially occluded chairs in a photograph of a structured scene.
Our key insight is the incorporation of higher-level
scene statistics that allow more accurate reasoning in scenes containing medium to high levels of occlusion.
We demonstrate considerable quantitative and qualitative performance improvements across multiple measures.
Our method suffers from limitations that suggest a number of future research directions. First, we plan to extend the evaluation to a more expansive class of objects beyond chairs. Second, we think exploring templates that can express a broader understanding of the multi-object spatial relationships is a promising future direction.
\section{Introduction}
\begin{quote} \small
Partial occlusions pose a major challenge to the successful recognition of visual objects because they reduce the evidence available to the brain. $\ldots$ As a result, recognition must rely not only on information about the physical object but also on information about the occlusion, scene context and perceptual experience~\cite{brainWorks17}.
\end{quote}
For many scene understanding tasks such as creating a room mockup for VR or automatically estimating how many people a room can accommodate, it is sufficient to estimate positions, orientations, and rough proportions of the objects rather than exact point-wise surface geometry. Given a {\em single} 2D photograph, the goal of this paper is to select and place instances of 3D models, particularly the partially {\em occluded} ones, to recover the photographed {\em scene arrangement} under the estimated camera.
\begin{figure}[b!]
\vspace*{-0.1in}
\includegraphics[width=\linewidth]{figures/new_figures/teaser.pdf}
\caption[Context example]{We present \textsc{SeeThrough}, a method to detect objects (specifically chairs) from single images under medium to heavy occlusion by reasoning with 3D scene-level context information. Our method significantly improves detection rate over state-of-the-art alternatives. }
\label{fig:teaser}
\end{figure}
With easy access to large volumes of image and 3D model repositories, and the availability of powerful supervised learning methods, researchers have investigated multiple subproblems relevant to the above goal, such as object recognition~\cite{He:2016:CVPR}, localization~\cite{Ren:2015:NIPS}, pose prediction~\cite{wu2016single}, or developed a complete system \textsc{Im2Cad}~\cite{izadinia2017im2cad} that selects and positions 3D CAD models that are similar to the input imaged scenes. While these approaches work reliably in rooms with relatively low occlusion, under moderate to heavy occlusion the methods quickly deteriorate. A common source of failure is that under significant occlusion, state-of-the-art semantic segmentation or region detection methods begin to break down, and hence any system relying on them also fails (see Figure~\ref{fig:teaser}).
Unlike images with limited occlusion where direct image-space information is sufficient, occluded scenes require a different treatment.
One possibility is to train an end-to-end network to go from single images to parameterized scene mockups. However, a major bottleneck is obtaining suitable training data. On the one hand, in our experiments the networks trained with synthetic 3D scene data do not easily translate to real-world data. On the other hand, obtaining real-world training data is difficult to scale as it requires complex annotations in 3D from single images. Instead, we propose a novel approach that heavily relies on 3D contextual statistics that can be automatically extracted from synthetic scene arrangement data.
Our key insight is that typical indoor scenes exhibit significant regularity in terms of co-occurrence of objects,
which can be exploited as explicit priors to make predictions about
object identity, placement and orientation, even under significant inter- or intra-object occlusions. For example,
a human observer can easily spot heavily occluded chairs due to the presence of other visible nearby chairs and a table (see Figure~\ref{fig:teaser}), as we have a good mental model of typical chair-table arrangements.
We introduce \mbox{\textsc{SeeThrough}}\ that generates 2D keypoints from input images using a neural network, lifts the keypoints to candidate 3D object proposals, and then solves a selection problem to pick objects scored according to object co-occurrence statistics extracted from a scene database. We iterate the process by allowing already selected objects to reinforce selection of weakly witnessed occluded ones.
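As a rough illustration of the iterative selection loop just described: a proposal is kept when its own image evidence plus a context bonus from already selected objects passes a threshold. The function names, additive scoring form, and threshold below are our own assumptions for the sketch, not the paper's actual implementation.

```python
# Hypothetical sketch of the iterative context-driven selection loop.
# unary[p]: image evidence for proposal p (e.g., keypoint support).
# pairwise[(p, q)]: co-occurrence bonus for selecting p given selected q.

def select_objects(proposals, unary, pairwise, rounds=3, threshold=0.5):
    selected = []
    for _ in range(rounds):
        changed = False
        for p in proposals:
            if p in selected:
                continue
            # Already selected objects reinforce weakly witnessed
            # (occluded) proposals through co-occurrence statistics.
            context = sum(pairwise.get((p, q), 0.0) for q in selected)
            if unary[p] + context >= threshold:
                selected.append(p)
                changed = True
        if not changed:  # fixed point reached
            break
    return selected
```

With this scoring, a heavily occluded chair with weak direct evidence can still be accepted once a neighboring chair or table has been selected, mirroring the reinforcement behavior described above.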
We tested our approach quantitatively on a new scene mockup dataset including partially occluded objects and show significant improvement of recognition over baseline methods on multiple quantitative measures. Although our current
implementation is focused on the \emph{chair} class, the method itself is not
inherently limited to this, and could be extended to other classes with
appropriately annotated data.
{\em (Full code, training data, and scene statistics will be available for research use. Supplementary material is available at \url{http://geometry.cs.ucl.ac.uk/mhueting/proj/seethrough/seethrough_supplementary.tar.gz}) }
\section{Method}
\begin{figure}[h!tb]
\includegraphics[width=\linewidth]{figures/pipeline/pipeline}
\caption[Pipeline]{The full pipeline of our method.}
\label{fig:ch4:pipeline}
\end{figure}
Our pipeline (Figure~\ref{fig:ch4:pipeline}) takes as input a photograph $\bb{x}$ and a database of 3D chair models
$\bb{M}$, and outputs a mocked up 3D scene $\bb{S}$, such that the reprojection
of $\bb{S}$ with the estimated camera $C$ results in an image that approximates the photograph $\bb{x}$ (see Figure~\ref{fig:ch4:intended_outcome}).
\begin{figure}[t]
\includegraphics[width=\linewidth]{figures/expected_output/expected_output}
\caption[Expected output]{Intended working of our method: we take a single image of a structured indoor scene as input, and output a 3D scene with the constituent chairs recovered in the right location and pose, as well as the camera parameters that reproject this scene as close as possible to the original input.}
\label{fig:ch4:intended_outcome}
\end{figure}
As preprocessing, we estimate the scene camera $C$ using a vanishing-direction method~\cite{Hedau:2009:ICCV} to obtain the focal length and camera orientation (Section~\ref{sec:ch4:camera_estimation}). Our method consists of three stages: (i)~the image is
passed through a keypoint estimation network that outputs a set of
\emph{keypoint probability maps}, representing at each pixel the probability of
the presence of a certain semantically meaningful keypoint
(Section~\ref{sec:ch4:keypoint_maps});
(ii)~the keypoint maps
are combined with the estimated camera $C$ to generate candidate object
placements (Section~\ref{sec:ch4:candidate_generation}); and
(iii)~a
selection is made among these candidates by optimizing an objective function
which combines object-to-keypoint-map matching with pairwise placement agreement
according to a pre-trained object co-occurrence model
(Section~\ref{sec:ch4:optimization}). The second and third stages are then repeated, this time using the previously found objects as a strong prior during candidate generation (Section~\ref{sec:ch4:iteration}), until convergence.
\subsection{Camera estimation}
\label{sec:ch4:camera_estimation}
To convert sets of 2D keypoints to possible 3D locations we need the intrinsic
and extrinsic parameters of the camera with which photo $\bb{x}$ was taken.
Specifically, for a good reconstruction, we need the orientation of the camera
with respect to the ground plane in the form of rotation matrix $C_R$, the
focal length $C_f$, and a measure of the scale of the room $C_s$. However,
estimating the scale of the room without prior information is not possible:
even if we know the 2D location of a chair, the chair might still be 1 meter or 100 meters tall. There is no way of deciding this
without some prior knowledge about chairs and their dimensions. We thus fix our
scale parameter, only estimate $C_f$ and $C_R$, and replace $C_s$ with
individual scale parameters for each object in the later optimization. Most
methods for camera parameter estimation indeed focus on $C_f$ and $C_R$, and to
do so rely on automatically estimating vanishing points (see
Figure~\ref{fig:ch4:camera_estimation}). We employ the method from Hedau et
al.~\cite{Hedau:2009:ICCV}. In summary, their method uses structured learning
from Tsochantaridis et al.~\cite{Tsochantaridis:2005:JMLR} to rank multiple
room layout candidates, which are generated from estimated vanishing points. We
refer to the paper from Hedau et al. for more information.
To complete our camera parameters, we pick meters as unit in our world
coordinate system (the same coordinate system used by our model set), and set
the camera's location $C_t$ as being at eye height (1.8\,m) above the world origin. This
altogether yields our camera $C$.
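As an illustration, the camera assembled from the estimated focal length, rotation, and fixed eye-height translation can be sketched as a standard pinhole model. The image-centre principal point and the $y$-up axis convention are assumptions of this sketch, not prescribed above:

```python
import numpy as np

def make_camera(f, R, image_size=512, eye_height=1.8):
    """Assemble intrinsics K and extrinsics (R, t) for the estimated camera.

    f: focal length in pixels (from the vanishing-direction estimate).
    R: 3x3 camera rotation w.r.t. the ground plane.
    The camera sits at eye height above the world origin (C_t); the
    principal point is assumed to lie at the image centre.
    """
    K = np.array([[f, 0.0, image_size / 2],
                  [0.0, f, image_size / 2],
                  [0.0, 0.0, 1.0]])
    C_t = np.array([0.0, eye_height, 0.0])   # metres; y is up (assumption)
    t = -R @ C_t                             # world -> camera translation
    return K, R, t

def project(X, K, R, t):
    """Pinhole projection of a 3D world point X to 2D pixel coordinates."""
    x_cam = R @ X + t
    x_img = K @ x_cam
    return x_img[:2] / x_img[2]
```

With an identity rotation, a point at eye height straight ahead of the camera projects to the image centre.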
\begin{figure}
\includegraphics[width=\linewidth]{figures/camera_estimation/camera_estimation}
\caption[Vanishing point detection]{By estimating vanishing points in the image, the camera rotation matrix and focal length can be detected. Detecting scale a priori is not possible.}
\label{fig:ch4:camera_estimation}
\end{figure}
\subsection{Keypoint maps}
\label{sec:ch4:keypoint_maps}
Our goal is to find location and pose of as many chairs in the scene as possible.
We aim to do this by finding all instances of a predefined set of
semantically meaningful keypoints in the image, and then use the estimated camera together with a
3D chair template consisting of those same keypoints to reconstruct
the 3D location and pose of the chairs.
We start by defining a set of general keypoint types for the chair object class.
Each keypoint type represents one or more keypoints that should be present in
each (reasonable) chair instance. We selected 8 keypoint types, each of which
is uniquely identifiable on every reasonable chair. These keypoint types
are shown in Figure~\ref{fig:ch4:keypoint_types}.
\begin{figure}[h!tb]
\includegraphics[width=\linewidth]{figures/keypoint_types/keypoint_types}
\caption[Keypoint types]{Selected keypoint types.}
\label{fig:ch4:keypoint_types}
\end{figure}
\subsubsection{Keypoint location map}
A keypoint location map is a 2D map whose domain is the input image $\bb{x}$, and
represents belief about the presence of a specific keypoint type at a specific
pixel of $\bb{x}$. It is represented as a $r \times r$ single-channel
matrix, with values between 0 and 1. With perfect information, the matrix would be 0 everywhere except at locations where a keypoint of the
corresponding type is present, where it would be 1. However, as we will employ
an L2 loss function, such step-function keypoint maps would result in an extremely
discontinuous error landscape, destabilizing the training process. Instead, we
represent each keypoint using a Gaussian lobe centered around its true location,
resulting in a much smoother loss function (see Figure~\ref{fig:ch4:keypoint_lobes}).
\begin{figure}
\includegraphics[width=\linewidth]{figures/gaussian_lobes/gaussian_lobes}
\caption[Gaussian lobes]{To facilitate the training process the keypoints are represented as Gaussian lobes around their location.}
\label{fig:ch4:keypoint_lobes}
\end{figure}
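The ground-truth map construction above can be sketched as follows; the lobe width `sigma` is an assumed value for illustration, as the text does not specify it:

```python
import numpy as np

def keypoint_map(keypoints, r=64, sigma=1.5):
    """Render a ground-truth keypoint location map of size r x r.

    keypoints: list of (x, y) locations in map coordinates. Each becomes a
    Gaussian lobe peaking at 1 at its true location; sigma is an assumed
    lobe width. Overlapping lobes are combined with a max so values
    stay in [0, 1].
    """
    ys, xs = np.mgrid[0:r, 0:r]
    m = np.zeros((r, r))
    for (x, y) in keypoints:
        lobe = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
        m = np.maximum(m, lobe)
    return m
```

The resulting map decays smoothly away from each keypoint, which keeps the L2 training loss well behaved.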
\subsubsection{Keypoint estimation network}
To extract keypoint location maps for each keypoint type from an input image,
we employ a deep learning architecture. This network takes our image $\bb{x}$ as
input and outputs a set of keypoint location maps $m_1, \ldots, m_{N_k}$, where
$N_k = 8$ is the total number of predefined semantic keypoint types.
The network architecture was selected through experimentation. We tried two architectures:
\begin{itemize}
\item The convolutional pose machines (CPM) \cite{Wei:2016:CVPR} architecture,
whose task of human pose estimation through keypoint localization closely resembles our own, and
\item ResNet-50 \cite{He:2016:CVPR}, a general purpose network with high
performance on a number of image understanding tasks, such as object detection and semantic segmentation.
\end{itemize}
In both cases, we trained the network using an L2 loss function on the difference between
the output and ground truth keypoint location maps.
Perhaps surprisingly, ResNet-50 achieved the lowest test error (see
Table~\ref{tab:ch4:net_performance}). Although the task CPM was designed for
(keypoint detection) more closely resembles ours, this does not outweigh the
advantage of ResNet-50 being pretrained on ImageNet, whose data distribution
is closer to that of our images.
\begin{table}
\centering
\begin{tabular}{|c|c|}
\hline
architecture & MSE \\ \hline
ResNet-50~\cite{He:2016:CVPR} & $3.24 \times 10^{-5}$ \\ \hline
CPM~\cite{Wei:2016:CVPR} & $1.02 \times 10^{-4}$ \\ \hline
\end{tabular}
\caption[Architecture performance]{Performance of the two tried architectures on our task. ResNet-50's advantage of being pretrained on ImageNet gives it the edge over CPM.}
\label{tab:ch4:net_performance}
\end{table}
We employed the TensorFlow implementation of ResNet-50. By using an input image
size of $512 \times 512$ and a bottleneck stride of 8 we get a final keypoint
map size of $r = 64$. The full architecture can be seen in Table~\ref{tab:ch4:network_architecture}.
The training data we used is discussed in Section~\ref{sec:ch4:training_data}.
\begin{table}[h!tb]
\centering
\resizebox{\linewidth}{!}{
\bgroup
\def1.5{1.5}
\begin{tabular}{|c|c|c|}
\hline
layer name & output size & node type \\ \hline
input & $512 \times 512$ & \\ \hline
conv\_1 & $256 \times 256$ & $7\times 7$, stride 2 \\ \hline
max\_pool & $128 \times 128$ & Max pooling, stride 2 \\ \hline
block\_1 & $64 \times 64$ & Bottleneck units with shortcuts, $\begin{bmatrix} 1 \times 1, 64 \\ 3 \times 3, 64 \\ 1 \times 1, 256 \end{bmatrix} \times 3$, last $3\times 3$ stride 2 \MatTableStrut \\ \hline
block\_2 & $64 \times 64$ & Bottleneck units with shortcuts, $\begin{bmatrix} 1 \times 1, 128 \\ 3 \times 3, 128 \\ 1 \times 1, 512 \end{bmatrix} \times 4$, all stride 1 \MatTableStrut \\ \hline
block\_3 & $64 \times 64$ & Bottleneck units with shortcuts, $\begin{bmatrix} 1 \times 1, 256 \\ 3 \times 3, 256 \\ 1 \times 1, 1024 \end{bmatrix} \times 6$, all stride 1 \MatTableStrut \\ \hline
block\_4 & $64 \times 64$ & Bottleneck units with shortcuts, $\begin{bmatrix} 1 \times 1, 512 \\ 3 \times 3, 512 \\ 1 \times 1, 2048 \end{bmatrix} \times 3$, all stride 1 \MatTableStrut \\ \hline
\end{tabular}
\egroup
}
\caption[Network architecture]{ResNet-50 based architecture used for keypoint estimation.}
\label{tab:ch4:network_architecture}
\end{table}
\subsection{Candidate generation}
\label{sec:ch4:candidate_generation}
Now that the camera parameters and keypoint locations have been estimated, we
move on to the candidate generation stage. In this part, predefined object templates
are fit to different subsets of the estimated keypoint locations, and scored
by their agreement with the entire keypoint map. First, we will describe how we get specific
keypoint locations from the estimated keypoint maps. Then, we will discuss how we construct
the object templates from our set of 3D models. Finally, we describe the actual
candidate generation process.
\subsubsection{Keypoint locations from keypoint location maps}
The keypoint estimation network's output consists of $N_k$ single channel
keypoint location maps $m_1, \ldots, m_{N_k}$. For our candidate generation process, these maps need
to be converted to concrete keypoint locations. We cannot simply take all
locations with a value above a certain threshold, as the maps spread the
probability of a found keypoint across multiple pixels (Figure~\ref{fig:ch4:keypoint_lobes}). One way of dealing
with this is to find all local maxima in each map. The issue with this is that
large regions of very low probability still have many local maxima. To discount
these, we first pass each map $m_i$ through a thresholding operation with
threshold $\tau_m$, discarding all pixels below that value. Then, we find all
8-neighbourhood local maxima in each map $m_i$, and store them as our candidate keypoint
locations. We denote the found keypoint locations of type $k$ as $\bb{Q}_k$, and the
full set $\bb{Q} = \{\bb{Q}_1, \ldots, \bb{Q}_{N_k}\}$. See
Figure~\ref{fig:ch4:keypoint_map_to_keypoints}.
\begin{figure}[h!tb]
\includegraphics[width=\linewidth]{figures/keypoint_map_to_keypoints/keypoint_map_to_keypoints}
\caption[Keypoint map postprocessing]{Keypoint candidate locations are found by thresholding the output of the neural network and then finding local maxima.}
\label{fig:ch4:keypoint_map_to_keypoints}
\end{figure}
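A direct sketch of this extraction step, using the threshold $\tau_m = 0.25$ from Table~\ref{tab:ch4:hyperparameters}; the strict-maximum handling of plateaus is our choice for the illustration:

```python
import numpy as np

def keypoint_locations(m, tau_m=0.25):
    """Extract candidate keypoint locations from one keypoint location map.

    Pixels below tau_m are discarded; the remaining strict 8-neighbourhood
    local maxima are returned as (x, y) tuples.
    """
    r, c = m.shape
    locs = []
    for y in range(r):
        for x in range(c):
            v = m[y, x]
            if v < tau_m:
                continue
            neigh = m[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
            # strict maximum: v tops the neighbourhood and occurs only once
            if v >= neigh.max() and (neigh == v).sum() == 1:
                locs.append((x, y))
    return locs
```

Thresholding first means flat, low-probability regions contribute no spurious maxima.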
\subsubsection{Object templates}
From the keypoint candidates $\bb{Q}$, we want to find actual chair
candidates. As all chairs are slightly different in shape, and fitting each
chair model in our dataset individually is prohibitively expensive, we make use
of a chair template model.
Specifically, we create this chair template model by fitting a Principal
Component Analysis (PCA) basis to the 3D coordinates of all 8 keypoints of all
chair models in our database $\bb{M}$. By analysing the cumulative
percentage of variance of each resulting PCA dimension, we conclude that the
top 3 PCA dimensions are responsible for $>85\%$ of variance in the shape of
all chairs. These top 3 PCA dimensions represent our chair template model $T$, and
the deviation from the mean $\bb{p} \in \mathbb{R}^3$ represents a variable for our optimization.
See Figure~\ref{fig:ch4:pca_dimensions}.
\begin{figure}[h!tb]
\includegraphics[width=\linewidth]{figures/pca_dimensions/pca_dimensions}
\caption[Template PCA]{Visualization of the top 3 PCA dimensions of our chair template, with respect to the mean chair. They approximately correspond to respectively chair width, back height and chair depth.}
\label{fig:ch4:pca_dimensions}
\end{figure}
We define $T(\bb{p})$ as the reprojection of PCA parameters $\bb{p}$ to 3D world space, i.e.
the instantiated coordinates of one particular instance of the chair template.
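The PCA template construction can be sketched with an SVD; we assume the database models are pre-aligned so their keypoints are directly comparable:

```python
import numpy as np

def fit_template(keypoints_per_model):
    """Fit the chair template: PCA over stacked 3D keypoint coordinates.

    keypoints_per_model: array of shape (num_models, 8, 3), the 8 annotated
    keypoints of each database chair. Returns the mean shape and the top-3
    principal directions, so that a template instance is
    T(p) = mean + basis.T @ p, reshaped to (8, 3).
    """
    X = keypoints_per_model.reshape(len(keypoints_per_model), -1)  # (N, 24)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:3]                  # top-3 PCA dimensions, shape (3, 24)
    return mean, basis

def T(p, mean, basis):
    """Instantiate the template for PCA parameters p (deviation from mean)."""
    return (mean + basis.T @ p).reshape(8, 3)
```

Setting $\bb{p} = \bb{0}$ recovers the mean chair shape; the three parameters then roughly control width, back height, and depth, as visualized in Figure~\ref{fig:ch4:pca_dimensions}.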
\vspace{-1mm}
\subsubsection{Candidate keypoint sets}
Finally, we will fit the generated chair template $T$ to the found keypoint
locations $\bb{Q}$. Unfortunately, we do not have any correspondences between the
keypoint locations of different types -- for example, we do not know which
``top-left'' keypoint belongs with which ``front-right-leg'' keypoint. As
such, we generate the exhaustive set of candidates by fitting a candidate chair
placement to each minimum set of 2D keypoint locations that results in a
well-defined fitting problem. A single keypoint correspondence is not enough,
as any candidate placement can then be rotated around its up-axis
indiscriminately. As we know the camera and thus the ground plane, and work
under the assumption that the chair models can change only scale and azimuth
(i.e. are placed flat on the ground), two keypoint
correspondences suffice. Although this leaves some ambiguity due to overlap
between the scale dimension and the template parameters, due to regularization
on both of these parameter sets the resulting problem is well-defined. We thus
create our set of 2D keypoint candidate pairs as
\[ \bb{K} = \bigcup_{\bb{Q}_i \in \bb{Q}}\bigcup_{\bb{Q}_j \in \bb{Q}\setminus{}\bb{Q}_i} \bb{Q}_i \times \bb{Q}_j, \]
where $\times$ represents the Cartesian product.
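The enumeration of $\bb{K}$ is a double loop over distinct keypoint types; a minimal sketch, with $\bb{Q}$ represented as a dictionary from type to locations:

```python
from itertools import product

def candidate_pairs(Q):
    """Enumerate ordered 2D keypoint pairs across distinct keypoint types.

    Q: dict mapping keypoint type -> list of (x, y) locations.
    Mirrors K = union over i != j of the Cartesian products Q_i x Q_j.
    """
    K = []
    for i in Q:
        for j in Q:
            if i == j:
                continue  # pairs must come from two different types
            for ku, kv in product(Q[i], Q[j]):
                K.append(((i, ku), (j, kv)))
    return K
```

The set is over-complete by design; implausible pairs are discarded later by the fitting threshold $\tau_u$ and the selection stage.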
\subsubsection{Template fitting}
\label{sssec:ch4:template_fitting}
\begin{figure}[h!t]
\def\linewidth{\linewidth}
\import{figures/notation/}{notation.pdf_tex}
\caption{Parameters estimated during the candidate fitting process.}
\label{fig:ch4:fitting_parameters}
\end{figure}
We generate one candidate chair placement for each $\bb{K}_i \in \bb{K}$
by finding the parameters whose reprojection of the template's keypoints agrees with $\bb{K}_i$,
as well as with the full keypoint location maps $\bb{m}$.
These parameters consist of:
\begin{itemize}
\item a 2D translation across the ground plane $\bb{t}$,
\item 1D azimuth $\theta$,
\item 1D scale $s$,
\item 3D chair template parameters $\bb{p}$.
\end{itemize}
See Figure~\ref{fig:ch4:fitting_parameters} for clarification. This
optimization is split into two stages. In the first stage, we will optimize
specifically for the reprojection of the 3D keypoints corresponding to $k_u,
k_v \in \bb{K}_i$. In the second stage, we will incorporate our knowledge of
the other keypoint location maps in $\bb{m}$ and further finetune the
parameters to match with them as closely as possible as well. We now describe
each stage in turn.
\paragraph{First stage -- optimization w.r.t. 2 keypoints}
In the first stage, we find the optimal parameters such that the reprojection of the chair template's keypoints line up with $\bb{K}_i$.
We define the reprojection $z_i$ of each keypoint $k_i, i \in \{u, v\}$ as
\[ z_i = P(R(s[T(\bb{p})]_i, \theta) + \bb{t}, C), \]
where $R$ represents rotation, and $P$ represents camera projection.
The objective function is then simply the summed mean squared error of these reprojections w.r.t. the data:
\[ L = \sum_{i \in \{u, v\}} \|z_i - k_i\|^2. \]
We initialize the parameters as $\bb{t} = \bb{0}, \theta = 0, s = 1,
\bb{p} = \bb{0}$. Furthermore, we add an L2 regularization term to both
the norm of the template parameters $\bb{p}$ as well as the scale $s$.
This non-linear least squares optimization problem is then solved using Ceres~\cite{Ceres}.
\paragraph{Second stage -- optimization w.r.t. all keypoints}
Now that the parameters have been optimized w.r.t.\ our keypoint pair $\bb{K_i}$, we
finetune the parameters by also taking into account the other keypoint location maps in $\bb{m}$.
Note that we now go back to using the keypoint location maps themselves instead
of the extracted local maxima -- we do not optimize for exact location anymore, and
allow the final reprojection to deviate from the maxima in each individual keypoint location map.
Instead, we maximize the \emph{total probability} over all keypoint location maps. Our objective function becomes:
\[ L = \sum_{i \in \{1, \ldots, N_k\}} (1 - m_i(z_i))^2, \]
where $m_i(z_i)$ represents the value of keypoint location map $m_i$ at reprojected keypoint $z_i$.
The same L2 regularizations as in the first stage apply, and we again solve our problem using Ceres~\cite{Ceres}.
If the final loss of the second stage is lower than a threshold $\tau_u$ we add the final parameters
as a candidate placement to our candidate placement set $\bb{O}$. This candidate placement set
is then passed on to the candidate selection stage.
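To make the first-stage fit concrete, here is a heavily simplified 2D analogue: the camera projection is omitted, the template has only two keypoints, and we optimize translation, azimuth, and scale with an L2 regularizer on scale, using SciPy's non-linear least squares in place of Ceres. The template coordinates and regularization weight are illustrative values:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy 2D template with two keypoints (assumption for this sketch).
TEMPLATE = np.array([[0.0, 0.0], [1.0, 0.0]])

def residuals(params, targets, reg=0.1):
    """Keypoint reprojection residuals plus an L2 regularizer on scale."""
    tx, ty, theta, s = params
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    z = s * (TEMPLATE @ R.T) + np.array([tx, ty])   # analogue of z_i
    return np.append((z - targets).ravel(), reg * (s - 1.0))

def fit(targets):
    """Solve for (t_x, t_y, theta, s) by non-linear least squares."""
    return least_squares(residuals, x0=[0.0, 0.0, 0.0, 1.0],
                         args=(targets,)).x
```

The full pipeline additionally carries the three template parameters $\bb{p}$ and projects through the estimated camera, but the least-squares structure is the same.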
\subsection{Candidate selection}
\label{sec:ch4:optimization}
In the final stage of our pipeline, we incorporate the key insight of this
method, as discussed in the introduction, which states that we need to use higher level
scene statistics to maximize our mockup performance. Specifically, we take the
candidate placements $\bb{O}$ from the previous stage and employ a
combination of the keypoint location maps and a model of object co-occurrence
statistics to select the final subset of chairs that constitutes our scene
mockup.
\subsubsection{Scene statistics}
\label{ssec:ch4:scene_statistics}
To model these higher level scene statistics, we employ a pairwise object
co-occurrence model. It models the probability of two chairs occurring at a given
relative orientation and translation from each other. To create this model, we
fit a Gaussian Mixture Model over the relative orientation $\delta_\theta$ and translation $\bb{\delta}_t$ of
pairs of chairs in the synthetic scene dataset \textsc{PBRS} (see
Section~\ref{sec:ch4:training_data}). We only take into account chairs that
are within a distance $\delta_r = 1.5m$ from each other, reasoning that chairs
that are farther apart are more likely to belong to entirely different groups
of chairs, making it imprudent to base our reconstruction on their
relationship. See Figure~\ref{fig:ch4:relative_transform} for clarification.
The GMM was fitted using Expectation-Maximization. As the models in \textsc{PBRS} tend to be
aligned exactly, we regularize the resulting mixture model by adding a small bias (0.01) to the diagonals
of the fitted covariance matrices. The number of mixture components $N_m$ was determined by experimentation and set to $5$.
A visualization of some of the resulting mixture components can be found in Figure~\ref{fig:ch4:mixture_components}.
\begin{figure}
\def\linewidth{\linewidth}
\import{figures/relative_transform/}{relative_transform.pdf_tex}
\caption[GMM visualization]{We extract relative transformations of pairs of chairs from the PBRS dataset and fit a GMM to these datapoints.}
\label{fig:ch4:relative_transform}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth]{figures/mixture_components/mixture_components}
\caption[Mixture components]{A visualization of two of the mixture components resulting from fitting the GMM to the relative transformations of pairs of chairs in the PBRS dataset. The means and standard deviational ellipses are plotted in green.}
\label{fig:ch4:mixture_components}
\end{figure}
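The pair extraction feeding the GMM can be sketched as follows: for every ordered chair pair within $\delta_r = 1.5$\,m, the relative translation is expressed in the frame of the first chair so the statistics are pose-invariant. The ground-plane pose representation `(x, y, theta)` is our assumption; the resulting rows could then be fitted with any EM-based GMM implementation (e.g.\ scikit-learn's \texttt{GaussianMixture} with \texttt{reg\_covar=0.01} for the diagonal bias):

```python
import numpy as np

def relative_transforms(chairs, delta_r=1.5):
    """Extract (delta_t, delta_theta) rows for all ordered chair pairs.

    chairs: list of (x, y, theta) ground-plane poses. Pairs farther apart
    than delta_r metres are skipped, as they likely belong to different
    groups of chairs.
    """
    out = []
    for i, (xi, yi, ti) in enumerate(chairs):
        for j, (xj, yj, tj) in enumerate(chairs):
            if i == j:
                continue
            d = np.array([xj - xi, yj - yi])
            if np.linalg.norm(d) > delta_r:
                continue
            # express the offset in the frame of chair i
            c, s = np.cos(-ti), np.sin(-ti)
            local = np.array([c * d[0] - s * d[1], s * d[0] + c * d[1]])
            dtheta = (tj - ti + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi)
            out.append((local[0], local[1], dtheta))
    return np.array(out)
```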
\subsubsection{Graph optimization}
\label{ssec:ch4:graph_optimization}
We now need to prune our over-complete set of candidate placements using the
trained object co-occurrence model. We represent this task as a graph labeling
problem. Each candidate placement represents a node in the graph, and takes on
a binary label representing whether or not that candidate placement is present
in the final mockup. Unary costs for each label stem from the keypoint location
maps, and pairwise costs stem from the scene statistics GMM. See
Figure~\ref{fig:ch4:graph_example}.
\begin{figure}
\includegraphics[width=\linewidth]{figures/graph_example/graph_example}
\caption[Candidate selection visualization]{We model our candidate selection problem as a graph labeling problem, where the unary costs are based on the keypoint location maps, and the pairwise costs on the scene statistics GMM.}
\label{fig:ch4:graph_example}
\end{figure}
\paragraph{Unary cost} To compute the unary score of a candidate placement $o_i \in \bb{O}$,
we \emph{generate} the keypoint location map $\bb{n}$ of $o_i$ (in the same
way we would do for creating a ground truth keypoint map) and compare it with
the keypoint location map $\bb{m}$ of the input image $\bb{x}$. As we do not expect a
single placement to explain the entire keypoint location map, we set up the score
multiplicatively, with its value depending only on the agreement
at the keypoints that placement $o_i$ actually exhibits:
\[ u_i = \frac{\|\bb{n} \odot \bb{m}\|_F}{\|\bb{n} \odot \bb{n}\|_F}, \]
where $\|\cdot\|_F$ represents the Frobenius norm, and $\odot$ represents the Hadamard product.
The normalization factor ensures that a candidate that perfectly matches the
keypoint location map of our input image $\bb{x}$ gets a score of 1. Finally, for a specific
candidate $o_i \in \bb{O}$, interpreting $u_i$ as a probability we get unary costs based on the log odds of $u_i$:
\begin{align}
U_i(0) &= 0 \\
U_i(1) &= -\log\left(\frac{u_i^\alpha}{1 - u_i^\alpha}\right)
\end{align}
where $\alpha$ is a scaling parameter to set the sensitivity of optimization to
the values in the keypoint maps. Our choice of log odds means that a (scaled) score
above $0.5$ produces a unary cost that \emph{decreases} the total
cost when the candidate is selected, and otherwise \emph{increases} it.
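A minimal numerical sketch of the unary score and cost, taking $\alpha$ from Table~\ref{tab:ch4:hyperparameters}; we assume the per-type maps are stacked into one array and that scores stay strictly below 1 so the log odds are finite:

```python
import numpy as np

def unary_cost(n, m, alpha=0.61):
    """Unary cost U_i(1) for a candidate with generated keypoint maps n,
    compared against the image's keypoint maps m (stacked over types).

    The score u_i is normalized so a candidate perfectly matching the
    image's maps would score 1; assumed u_i < 1 here so log odds exist.
    """
    u = np.linalg.norm(n * m) / np.linalg.norm(n * n)  # Frobenius over the stack
    ua = u ** alpha
    return -np.log(ua / (1.0 - ua))                    # negative log odds
```

Scores whose scaled value exceeds 0.5 yield a negative cost, i.e.\ selecting the candidate lowers the objective.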
\paragraph{Pairwise cost} The pairwise cost is based entirely on the fitted GMM. We extract
the relative translation $\bb{\delta}_t$ and orientation $\delta_\theta$, and evaluate
the trained GMM to get our raw pairwise score:
\[ p_{ij} = GMM(o_i, o_j). \]
The final pairwise score is then again based on the log odds corresponding to $p_{ij}$. It only applies when two objects co-occur:
\begin{align}
P_{ij}(0, 0) &= P_{ij}(1, 0) = P_{ij}(0, 1) = 0 \\
P_{ij}(1, 1) &= -\log\left(\frac{p_{ij}^\beta}{1 - p_{ij}^\beta}\right)
\end{align}
with $\beta$ a scaling parameter similar to $\alpha$.
Finally, we add an infinite pairwise cost to all candidate placement pairs that
intersect. These intersections are precomputed based on triangle-triangle
intersections.
We solve the final problem setup using OpenGM~\cite{OpenGM} by converting it to
a linear program and feeding it to CPLEX~\cite{CPLEX}.
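For illustration only, the binary labeling objective can be evaluated exhaustively on small candidate sets; the actual system solves it as a linear program via OpenGM and CPLEX as stated above:

```python
import numpy as np
from itertools import product

def select_candidates(U1, P11):
    """Brute-force solver for the binary candidate labeling (illustration).

    U1: per-candidate selection cost U_i(1) (U_i(0) = 0).
    P11: symmetric matrix of pairwise costs, applied only when both
         candidates are selected; np.inf marks intersecting pairs.
    Returns the minimum-cost labeling and its cost.
    """
    n = len(U1)
    best, best_cost = None, np.inf
    for labels in product([0, 1], repeat=n):
        cost = sum(U1[i] for i in range(n) if labels[i])
        cost += sum(P11[i][j] for i in range(n) for j in range(i + 1, n)
                    if labels[i] and labels[j])
        if cost < best_cost:
            best, best_cost = labels, cost
    return best, best_cost
```

The infinite cost on intersecting pairs guarantees that at most one of any two overlapping placements survives.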
\subsection{Iterative optimization}
\label{sec:ch4:iteration}
After the optimization from Section~\ref{sec:ch4:optimization} is complete, we
could stop and pass on the candidate placements with label 1 to the model
selection stage (Section~\ref{sec:ch4:model_selection}). However, now that
some objects have been definitely placed, we can use this information to
improve our candidate generation step, and by extension our candidate selection
step. In other words, we iterate the process of candidate generation and
selection, using the newly selected candidates in each iteration as a strong prior for the
candidate generation process of the next iteration.
\subsubsection{Added pairwise cost in generation step}
To take into account the already selected placements during the candidate
generation phase, we keep our original non-linear least squares optimization,
but to the loss function of each stage of the two stage process (see
Section~\ref{sssec:ch4:template_fitting}) we add a term that represents the
GMM. Incorporating all mixture components in this
term is hard, as it is challenging to define a well-behaved objective function to minimize that
represents them. As noted by Olson et al.~\cite{Olson:2013:IJRR}, the
structure of the negative log-likelihood (NLL) of a GMM does not lend itself to
non-linear least squares optimization. Instead, they propose to approximate the
NLL of the full GMM by considering it as a Max-Mixture, reducing the NLL to the
weighted distance to the closest mixture mean (see Figure~\ref{fig:ch4:max_mixture} and \cite{Olson:2013:IJRR} for
details). In fact, in our case it makes sense to only optimize with respect to
the closest mean, and not all means: a chair should either be encouraged to be
next to another chair, or opposite, but never both. This replaces the original GMM likelihood function
\[ p_\mathrm{GMM}(\bb{\delta}) = \sum_k w_k N(\bb{\mu}_k, \bb{\Sigma}_k) \]
with the Max-Mixture likelihood function
\[ p_\mathrm{Max}(\bb{\delta}) = \max_k w_k N(\bb{\mu}_k, \bb{\Sigma}_k), \]
where $\bb{\delta} = \begin{bmatrix} \bb{\delta}_t \\ \delta_\theta
\end{bmatrix}$ is the relative translation and orientation of the new candidate
w.r.t. the already placed object, and $w_k$ is the weight of the $k$th mixture in
the model.
Taking the negative log likelihood gives
\[ -\log(p_\mathrm{Max}(\bb{\delta})) = \min_k \frac{1}{2} (\bb{\delta} - \bb{\mu}_k)^T \bb{\Sigma}_k^{-1}(\bb{\delta} - \bb{\mu}_k) - \log(w_k\eta_k), \]
where $N(\bb{\mu}, \bb{\Sigma})$ represents the normal distribution, and $\eta_k$ is the Gaussian normalization factor for the $k$th mixture. At
optimization time, during each step we find the mixture component $k^*$ that
minimizes this function, and then optimize w.r.t. the negative log likelihood
of the Gaussian of that component alone, resulting in the following term to be added to the objective function:
\[ \frac{1}{2} (\bb{\delta} - \bb{\mu}_{k^*})^T \bb{\Sigma}_{k^*}^{-1}(\bb{\delta} - \bb{\mu}_{k^*}). \]
By decoupling the component selection from the optimization step, we recover
the desirable least-squares structure of a single Gaussian's negative log likelihood.
This term is added once for each already placed object.
\begin{figure}
\includegraphics[width=\linewidth]{figures/max_mixture/max_mixture}
\caption[Max mixture model]{We approximate the GMM using a Max-Mixture Model from Olson et al., 2013~\cite{Olson:2013:IJRR}. Due to the simplified negative log likelihood of this model we can then use it in our non-linear least squares optimization.}
\label{fig:ch4:max_mixture}
\end{figure}
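The Max-Mixture component selection and its quadratic term can be sketched directly from the formulas above:

```python
import numpy as np

def max_mixture_nll(delta, weights, means, covs):
    """Max-Mixture NLL (Olson et al.): pick the component minimizing the
    full negative log likelihood, and return its index together with the
    quadratic term that is added to the least-squares objective.
    """
    best_k, best_nll, best_quad = None, np.inf, None
    for k, (w, mu, S) in enumerate(zip(weights, means, covs)):
        d = delta - mu
        quad = 0.5 * d @ np.linalg.inv(S) @ d
        # Gaussian normalization factor eta_k
        eta = 1.0 / np.sqrt((2 * np.pi) ** len(delta) * np.linalg.det(S))
        nll = quad - np.log(w * eta)
        if nll < best_nll:
            best_k, best_nll, best_quad = k, nll, quad
    return best_k, best_quad
```

At each optimization step, the selected component $k^*$ is held fixed and only its quadratic term enters the residuals.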
\subsubsection{Added unary cost in selection step}
As the already selected placements are not part of the optimization during later iterations,
the influence of the GMM on a new candidate placement w.r.t. already selected placements becomes
a unary cost. So, for each candidate placement in the second iteration, we add a term to $U_i(1)$
w.r.t. each of the already selected placements:
\[ -\log\left(\frac{GMM(o_i, o^*_j)^\beta}{1 - GMM(o_i, o^*_j)^\beta}\right). \]
With these modifications, the candidate generation step and candidate selection
step are iterated until convergence, i.e. until no new objects are added to the
scene.
\subsection{Model selection}
\label{sec:ch4:model_selection}
The set of selected placements still consists only of template parameters, not actual chair models.
As a final step, we find the chair $g^*$ in our database $\bb{M}$ that best fits the
template. To do so, we reproject the 3D keypoint coordinates of each chair in
the database to the PCA coordinate space, and find the chair whose PCA
coordinates are closest to the PCA coordinates of our template:
\[ g^* = \arg\min_{g \in \bb{M}} \|[\mathrm{PCA}(g)]_0^3 - \bb{p}\|^2, \]
where $\bb{p}$ are the PCA coordinates of the candidate's template.
The resulting chair models together with their transform constitute our final scene mockup.
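This nearest-neighbor lookup in PCA space can be sketched as follows; the database, mean, and PCA basis below are toy placeholders rather than our actual fitted basis.

```python
import numpy as np

def select_model(p, db_keypoints, mean, basis):
    """Return the database model whose leading PCA coordinates are closest
    to the template parameters p."""
    best, best_dist = None, np.inf
    for name, kps in db_keypoints.items():
        coords = basis @ (kps.ravel() - mean)  # project keypoints into PCA space
        dist = np.sum((coords[:len(p)] - p) ** 2)
        if dist < best_dist:
            best, best_dist = name, dist
    return best

# Toy database: two "chairs", each with two 3D keypoints (flattened).
mean = np.zeros(6)
basis = np.eye(3, 6)                           # placeholder 3-component basis
db = {"chair_a": np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 0.0]]),
      "chair_b": np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 0.0]])}
print(select_model(np.array([0.9, 0.1, 0.0]), db, mean, basis))  # chair_a
```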
\subsection{Hyperparameters}
Our optimization pipeline depends on a number of hyperparameters. We optimized
these using HyperOpt~\cite{HyperOpt}, which employs a Tree of Parzen Estimators~\cite{Bergstra:2013:ICML}.
As our objective function we used the PercCorrectFull measure (see Section~\ref{sec:ch4:performance_measures}).
As ground truth data we used 10 scenes we annotated specifically for this purpose,
in the same way as the data used for evaluation (see Section~\ref{sec:ch4:ground_truth_annotation}).
See Table~\ref{tab:ch4:hyperparameters} for a list of the resulting hyperparameter values.
\begin{table}
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{|c|c|c|}
\hline
Name & Description & Value \\ \hline
$\alpha$ & Sensitivity of keypoint maps & 0.61 \\ \hline
$\beta$ & Sensitivity to object co-occurrence model & 0.14 \\ \hline
$\tau_m$ & Lower threshold of keypoint location map & 0.25 \\ \hline
$\tau_u$ & Maximum cost for selecting candidate & 0.21 \\ \hline
\end{tabular}
}
\caption[Hyperparameters]{Hyperparameters of the optimization, found by HyperOpt~\cite{HyperOpt}}
\label{tab:ch4:hyperparameters}
\end{table}
\subsection{Data}
\label{sec:ch4:training_data}
\subsubsection{Image data}
For purposes of qualitative evaluation, we scraped the
interior design website~\cite{Houzz} for the top 1000 results of the search
query ``dining room''. We denote this dataset \textsc{Houzz}. These images are
high quality and represent difficult but fair scenarios on which we expect our
method to perform well. Some examples of these images can be seen in
Figure~\ref{fig:ch4:houzz}.
\begin{figure}
\includegraphics[width=\linewidth]{figures/houzz_example/houzz_example}
\caption[Houzz samples]{Example images from our scraped \textsc{Houzz} dataset.}
\label{fig:ch4:houzz}
\end{figure}
\subsubsection{Network training data}
Traditionally, training a deep neural network requires a large amount of
training data. To our knowledge, there is no known large dataset of
photographs accurately annotated with object keypoints. As such, we resort
to creating our own training data. Ideally, the training data should be
from the same distribution as our intended testing data, i.e.\ photographs
of indoor scenes. However, creating a large-scale dataset of this type
is extremely time-consuming and expensive. On the other hand, synthetic
data in the form of realistic 3D indoor scenes along with physically-based
renders is already available in high numbers \cite{Zhang:2017:CVPR}.
Despite the high quality of the renders, however, there is still a significant
discrepancy between the feature distribution of the renders and that of real
photographs. As such, we augment the synthetic dataset with a subset of real
photographs from \textsc{Houzz} annotated through Amazon Mechanical Turk. We
now discuss each data type in turn.
\paragraph{Synthetic data}
The dataset of Zhang et al.~\cite{Zhang:2017:CVPR} provides 45K realistic indoor
scenes, and 400K physically-based renders of these scenes (see Figure~\ref{fig:ch4:pbrs}). We denote this
dataset as \textsc{PBRS}. These scenes consist of a fixed set of 2500 different
models across 60 classes. Among these models there are $\pm250$ chairs. We took a
subset of 100 of these chairs and annotated them with our previously selected
keypoint types. We then took all renders that contain at least 1 of the
annotated chairs and reprojected the keypoint locations into these renders,
yielding one image/keypoint map pair as training data per render. This resulted
in a set of $\pm8000$ image/keypoint map pairs in total.
\begin{figure}
\includegraphics[width=\linewidth]{figures/pbrs/pbrs}
\caption[PBRS dataset]{For the training setup of our network with synthetic data, we use renders from the PBRS dataset~\cite{Zhang:2017:CVPR}, which provides $\pm$45K houses with $\pm$400K high quality renders. Figure from \cite{Zhang:2017:CVPR}.}
\label{fig:ch4:pbrs}
\end{figure}
\paragraph{Real data}
Unfortunately, the synthetic data alone does not result in good performance on
real data. Two distinct reasons can be identified. First, even though the renders
in \textsc{PBRS} are of high quality, their feature distribution is both
distinct from that of real photographs and less diverse. Second, at the time
of writing, the set of renders and the set of scenes available for
\textsc{PBRS} had some discrepancies between them, resulting in a small but
significant set of renders that do not agree with the automatically generated
keypoint maps.
To address both of these issues, we annotated a subset of 500 images from the
\textsc{Houzz} dataset through Amazon Mechanical Turk. We asked 3 workers per
image to annotate all keypoints in the image through a drag-and-drop interface
(see Figure~\ref{fig:ch4:amt}), and averaged the resulting 3 keypoint maps per
image. This resulted in a training set of 500 hand-annotated photographs, which
was then used to train our keypoint estimation network.
\begin{figure}
\includegraphics[width=\linewidth]{figures/amt/amt}
\caption[MTurk interface]{The Amazon MTurk interface we used to annotate 500 photographs with keypoints.}
\label{fig:ch4:amt}
\end{figure}
\paragraph{Final training set} We experimented with 3 different training setups.
In the first setup, we trained the network only with synthetic data. In the second setup,
we only trained the network with real data. Finally, in the third setup, we
first trained the network until convergence with the synthetic data,
and then finetuned the network using the smaller set of real data.
Surprisingly, the best performance on the test set resulted from setup 2, i.e.
training only with the real data. Apparently, the shortcomings of the synthetic
data mentioned above were of higher importance than expected. One likely
explanation is that training the network on the synthetic data first
steers the network weights away from those obtained through ImageNet
pretraining, which already encode a broad general understanding of real
photographs. The numbers show that this initial information is more valuable than the
extent of the synthetic data as well as its structural similarity to our test
data.
\subsubsection{Model data} The models annotated for the purpose of generating
synthetic network training data also immediately function as our model set
$\bb{M}$.
\section{Method}
We now describe the three main steps of the \mbox{\textsc{SeeThrough}}~system in detail, starting with keypoint detection, followed by our approach for candidate object detection, and ending with our scene inference.
\subsection{Keypoint Detection}
\label{subsec:ch4:keypoint_maps}
At this stage our goal is to detect very subtle cues for potential object placements in the form of keypoints. A {\em keypoint} is a salient 3D point that appears across all objects of the same class (e.g., tip of a chair leg). We expect that a small number of (projected) keypoints will still be visible even under severe occlusions, and be useful in creating reasonable hypotheses for potential object placements. We represent this signal in two flavors: first, a \emph{keypoint map}, a per-pixel function that indicates how likely a particular keypoint is to occur at that pixel (each keypoint has a separate map $m_i$), and second, \emph{keypoint locations}, which define the 2D coordinates for each keypoint.
Both sets of information are used at different stages of our algorithm. We collected our own training data and trained a convolutional neural network to detect a continuous keypoint probability function, which we further use to extract candidate keypoint locations.
We picked $N_k$ keypoints ($N_k=8$ in our tests; see supplemental material) and fine-tuned a variant of the ResNet-50 neural network~\cite{He:2016:CVPR} to predict these keypoint maps in $N_k$ output channels (see supplemental material for architecture details). We also tested the CPM architecture~\cite{Wei:2016:CVPR}, but it yielded slightly inferior performance. While the latter focuses on keypoint detection, it was pre-trained on human poses rather than general images, which is why we believe CPM did not generalize as well to our particular task (see supplemental material).
The above network predicts continuous keypoint maps $\mathbb{M} := \{m_1, ..., m_{N_k}\}$, and to extract the final keypoint locations (2D positions in the image) we used local maxima above a threshold $\tau_m$ (Figure~\ref{fig:ch4:keypoint_map_to_keypoints}). We denote the set of these keypoint locations by $\mathbb{Q} := \{\bb{Q}_1, \ldots, \bb{Q}_{N_k}\}$.
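The extraction of keypoint locations from a map can be sketched as thresholded non-maximal suppression over the 8-neighborhood; the heatmap below is a synthetic example, not a network output.

```python
import numpy as np

def extract_keypoints(kp_map, tau=0.25):
    """Return (row, col) locations of local maxima of kp_map above tau."""
    pad = np.pad(kp_map, 1, constant_values=-np.inf)
    h, w = kp_map.shape
    # Stack the 8 shifted neighbour views of the map.
    neighbours = np.stack([pad[dr:dr + h, dc:dc + w]
                           for dr in range(3) for dc in range(3)
                           if (dr, dc) != (1, 1)])
    is_max = (kp_map >= neighbours.max(axis=0)) & (kp_map > tau)
    return [(int(r), int(c)) for r, c in zip(*np.nonzero(is_max))]

heat = np.zeros((5, 5))
heat[1, 1], heat[3, 4] = 0.9, 0.6   # two synthetic peaks
heat[1, 2] = 0.4                    # shoulder of the first peak, suppressed
print(extract_keypoints(heat))      # [(1, 1), (3, 4)]
```

Note that the shoulder pixel is above the threshold $\tau_m$ but is rejected because a stronger neighbouring response exists.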
\begin{figure}[h!]
\includegraphics[width=\linewidth]{figures/keypoint_map_to_keypoints/keypoint_map_to_keypoints}
\caption{We trained a neural network on real images to detect {\em keypoint maps}, which are then converted to 2D {\em keypoint locations} via thresholding and non-maximal suppression. }
\label{fig:ch4:keypoint_map_to_keypoints}
\end{figure}
\subsection{Candidate Object Detection}
\label{subsec:ch4:candidate_generation}
The goal of this step is to propose multiple candidate objects based on the detected keypoints. While we do not know how to group points, we observe that
a very small number of keypoints (as few as two) belonging to the same object, provide enough constraints to infer the scale and the orientation of a proxy 3D object. Hence, we can generate multiple candidates even with a sparse signal under moderate to high levels of occlusions. Using these generated candidates, we can recast the global inference problem as a discrete graph optimization problem, where we only need to solve for indicator variables, selecting a subset of candidates. Thus, we want higher recall at the expense of lower precision in this step.
Furthermore, in order to incorporate a slightly bigger context than a single keypoint, we select subsets of points that can compose an object.
At training time we learn a deformable template from a database of 3D models, and at test time we optimize the fitting of these templates to various subsets of keypoints.
\mypara{Object template}
Given a database of consistently aligned 3D models $\bb{M}$ with manually labeled keypoints, we use Principal Component Analysis (PCA) to
project 3D coordinates of keypoints to a lower-dimensional space (we take eigenvectors $\bb{\lambda}_1, ..., \bb{\lambda}_k$ that explain $>85\%$ of the variance). Our template is parameterized by a linear combination of these eigenvectors with weights $\bb{p}=[p_1, ..., p_k]$ (representing the offset from the mean $\bb{\lambda}_0$). The final object template is defined by a weighted linear combination of the eigenvectors: $T(\bb{p}) := \bb{\lambda}_0 + \sum_i p_i \bb{\lambda}_i$.
We formulate an optimization problem where we solve for object parameters (i.e., $\bb{p}$) while making sure that the object aligns with the detected keypoints. To relate our 3D deformable model to 2D images, we need a camera estimate. We use a variant of Hedau et al.~\cite{Hedau:2009:ICCV} to estimate a rotation matrix $C_R$ with respect to the ground plane, the focal length $C_f$, and define the camera's location $C_t$ to be at eye height (1.8m) above the world origin, giving camera parameters $C : =[C_R, C_f, C_t]$.
For each object we solve for a 2D translation across the ground plane $\bb{t}$, azimuth $\theta$, scale $s$, and 3D chair template parameters $\bb{p}$.
Hence, the reprojection $z_i$ of the $i$-th keypoint to image space is:
\begin{align}
z_i := \Pi_C\left(R_\text{up}(\theta) ~ s ~ k_i(\bb{p}) + \bb{t}\right),
\end{align}
where $k_i(\bb{p})=[T(\bb{p})]_i$ is a keypoint on the deformed template, $R_\text{up}$ is a rotation around the up vector, and $\Pi_C$ is a projection to the camera space.
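The template deformation and reprojection can be sketched as follows. For simplicity, this toy camera sits at the origin looking down $+z$ with a full 3D translation $\bb{t}$ and an arbitrary focal length, whereas the actual pipeline uses the estimated camera $C$ and a ground-plane translation; the template shape is likewise invented.

```python
import numpy as np

def template(p, lam0, lams):
    """T(p) = lam0 + sum_i p_i * lam_i, an (N_k x 3) array of 3D keypoints."""
    return lam0 + np.tensordot(p, lams, axes=1)

def reproject(kp, theta, s, t, f=500.0):
    """z_i = Pi_C(R_up(theta) * s * k_i + t) under a toy pinhole camera."""
    c, si = np.cos(theta), np.sin(theta)
    R_up = np.array([[c, 0.0, si], [0.0, 1.0, 0.0], [-si, 0.0, c]])  # about up (y)
    X = R_up @ (s * kp) + t
    return f * X[:2] / X[2]                                # perspective divide

lam0 = np.array([[0.2, 0.0, 0.0], [-0.2, 0.0, 0.0]])       # mean shape
lams = np.array([[[0.1, 0.0, 0.0], [-0.1, 0.0, 0.0]]])     # one PCA mode
kps = template(np.array([0.5]), lam0, lams)                # widened template
z = reproject(kps[0], theta=0.0, s=1.0, t=np.array([0.0, 0.0, 4.0]))
# z == [31.25, 0.0]: the keypoint at x=0.25, depth 4, with f=500.
```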
As described next, we fit our template object in two stages: first, we propose a candidate based on a pair of points, and then, we refine these candidate parameters with respect to all keypoint maps.
\mypara{(i)~Initial proposals} To propose initial object candidates we sample all pairs of detected keypoints. We use a pair because it gives the smallest set to sample that provides enough constraints to extract an initial guess for object translation, scale, and orientation. For each pair, we initialize as $\bb{t} = \bb{0}, \theta = 0, s = 1, \bb{p} = \bb{0}$, and optimize:
\begin{align}
L_\text{init} = \sum_{i \in \{u, v\}} \|z_i - \bb{q}_i \|^2 + \underbrace{
\alpha_1 \|s-1\|^2 + \alpha_2 \|\bb{p}\|^2
}_{\text{regularizer } (L_\text{reg})},
\label{eqn:energy}
\end{align}
where $\bb{q}_u$ and $\bb{q}_v$ are the detected 2D locations of the keypoint pair, and $\alpha_1$ and $\alpha_2$ are respectively the weights balancing the scale and deformable template regularizers ($\alpha_1 =1$ and $\alpha_2 = 1$ in our tests).
\mypara{(ii)~Parameter refinement} For each of the initial proposals extracted above, we refine the fitting. Specifically, instead of considering point-locations, we define our objective with respect to soft keypoint maps $m_j$, maximizing the probability of template corners to align with keypoints predicted by the neural network, i.e.,
\begin{align}
L = \sum_{i \in \{1, \ldots, N_k\}} \|1 - m_i(z_i)\|^2 + L_\text{reg},
\end{align}
with $L_\text{reg}$ as defined in Equation~\ref{eqn:energy}.
If $L < \tau_u$, we add the final parameters as a candidate placement to our candidate placement set $\bb{O}$.
\mypara{Selecting a 3D mesh}
For the results presented in this paper we show 3D meshes rather than object templates.
Particularly, we pick the closest 3D model from our database by projecting its keypoints
into the object PCA space, finding the nearest neighbor of the deformed template, and finally deforming it using the optimized parameters $\bb{p}$.
\subsection{Scene Inference}
\label{subsec:sceneInference}
We do not expect all individual objects selected as candidates to be in the scene, since they might overlap or form an inconsistent arrangement.
First, we capture scene statistics obtained from a large scene dataset with a probabilistic model, and then use the model to formulate an alternating discrete and continuous optimization.
\mypara{Learning scene model}
We model higher level scene statistics via a graphical model where each object is a node and edges between pairs of nodes capture object-to-object co-occurrence relationships. We used a Gaussian Mixture Model (GMM) with $N_m$ (set to $5$ in our tests) mixture components to model relative orientation $\delta_\theta$ and translation $\bb{\delta}_t$ of pairs of chairs from a very large synthetic scene dataset~\cite{Zhang:2017:CVPR}. We only take into account chairs that
are within a distance $\delta_r = 1.5m$ from each other, reasoning that far-away objects have weaker relationships.
We use the Expectation-Maximization algorithm to fit the GMM and add a small bias (0.01) to the diagonal
of the fitted covariance matrices since objects in the database are axis-aligned.
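A minimal EM fit over synthetic pairwise offsets, including the covariance bias, might look as follows; the cluster centers and sample counts are invented, and the real model is fitted to offsets harvested from the scene dataset.

```python
import numpy as np

def fit_gmm(X, K, iters=50, bias=0.01):
    """Minimal EM for a GMM over pairwise offsets (dx, dy, dtheta).
    After fitting, a small bias is added to the covariance diagonals,
    mirroring the regularization described in the text."""
    n, d = X.shape
    mu = X[np.linspace(0, n - 1, K).astype(int)].copy()   # spread-out init
    cov = np.stack([np.eye(d)] * K)
    w = np.full(K, 1.0 / K)
    for _ in range(iters):
        # E-step: responsibilities (the shared (2*pi)^(d/2) factor cancels).
        r = np.stack([
            w[k] / np.sqrt(np.linalg.det(cov[k])) * np.exp(
                -0.5 * np.sum((X - mu[k]) @ np.linalg.inv(cov[k])
                              * (X - mu[k]), axis=1))
            for k in range(K)], axis=1)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and covariances.
        Nk = r.sum(axis=0)
        w = Nk / n
        mu = (r.T @ X) / Nk[:, None]
        for k in range(K):
            D = X - mu[k]
            cov[k] = (r[:, k, None] * D).T @ D / Nk[k] + 1e-6 * np.eye(d)
    return w, mu, cov + bias * np.eye(d)

rng = np.random.default_rng(1)
side = rng.normal([0.6, 0.0, 0.0], 0.05, size=(100, 3))   # side-by-side pairs
face = rng.normal([0.0, 1.0, 3.14], 0.05, size=(100, 3))  # facing pairs
w, mu, cov = fit_gmm(np.vstack([side, face]), K=2)
```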
\mypara{Graph optimization}
We formulate a graph labeling problem to decide which of the candidate objects should be included in the scene mockup, denoted by indicator variable $\gamma_i \in \{0,1\}$, where $\gamma_i=1$ iff object $O_i$ is included. We {\em minimize} the following objective function:
\begin{align}
L_\text{graph} := \sum_i \gamma_i U_i + \sum_{i,j} \gamma_i \gamma_j P_{i,j},
\end{align}
where $U_i$ is a unary penalty for an included object, and $P_{i,j}$ is pairwise penalty for a pair of included objects.
We define the unary energy by projecting the object's keypoints to the image and convolving the resulting keypoint map with a Gaussian, following the same procedure we used to create the ground-truth keypoint maps. This yields a location map $\bb{n}$. We then set:
\begin{align}
U_i := -\text{logit} \left(\frac{\|\bb{n} \odot \bb{m}_i\|_F}{ \|\bb{n} \odot \bb{n}\|_F}\right),
\label{eq:Ui}
\end{align}
where $\|\cdot\|_F$ represents the Frobenius norm, $\odot$ represents the Hadamard product, and $\text{logit}(x) = \log\left(x/(1-x)\right)$.
Note that since we do not expect a
single placement to explain the entire keypoint location map, we set up the score
as a multiplicative one, so that the value depends only on the agreement
of the actual keypoints the placement exhibits.
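Numerically, the unary score behaves as follows on tiny synthetic maps with a single projected keypoint; we also clamp the ratio away from 0 and 1 to keep the logit finite, a numerical detail omitted above.

```python
import numpy as np

def unary_cost(n, m, eps=1e-6):
    """U = -logit(||n * m||_F / ||n * n||_F) for placement map n and
    predicted keypoint map m (elementwise products, Frobenius norms)."""
    score = np.linalg.norm(n * m) / np.linalg.norm(n * n)
    score = np.clip(score, eps, 1.0 - eps)        # keep the logit finite
    return -np.log(score / (1.0 - score))

n = np.zeros((4, 4)); n[1, 1] = 1.0               # candidate projects one keypoint
m_good = np.zeros((4, 4)); m_good[1, 1] = 0.9     # network agrees
m_bad = np.zeros((4, 4)); m_bad[3, 3] = 0.9       # network fired elsewhere
# Agreement gives a negative (rewarding) cost, disagreement a large penalty.
```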
We define the pairwise energy using the GMM model learned from the scene dataset:
\begin{align}
P_{i,j} := -\text{logit}\left(GMM(\delta_\theta^{i,j}, \bb{\delta}_t^{i,j} )\right),
\end{align}
where $\delta_\theta^{i,j}, \bb{\delta}_t^{i,j} $ are the relative orientations and translation of the objects $o_i, o_j$.
We solve for the indicator variables $\{\gamma_i\}$ using OpenGM~\cite{OpenGM} by converting the above formulation into
a linear program and feeding it to CPLEX~\cite{CPLEX} to find the final set of selected objects.
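For intuition, on a handful of candidates the labeling objective can also be minimized by exhaustive enumeration; the LP formulation simply scales this to larger candidate sets. All costs below are invented.

```python
import itertools

def best_labeling(U, P):
    """Minimize L = sum_i g_i U_i + sum_{i<j} g_i g_j P_ij over g in {0,1}^n."""
    n = len(U)
    best_g, best_L = None, float("inf")
    for g in itertools.product((0, 1), repeat=n):
        L = sum(g[i] * U[i] for i in range(n))
        L += sum(g[i] * g[j] * P[i][j]
                 for i in range(n) for j in range(i + 1, n))
        if L < best_L:
            best_g, best_L = g, L
    return best_g, best_L

# Candidates 0 and 1 overlap heavily (large pairwise penalty); 2 is compatible.
U = [-2.0, -1.5, -1.0]
P = [[0.0, 10.0, 0.1],
     [10.0, 0.0, 0.1],
     [0.1, 0.1, 0.0]]
print(best_labeling(U, P))   # ((1, 0, 1), -2.9)
```

The mutually exclusive pair is resolved in favor of the candidate with the stronger (more negative) unary reward.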
\vspace{.1in}
\mypara{Refined object fitting}
After selecting the set of objects, the scene mockup is ready. However, we found that our scene priors can also improve the initial object fitting results. To achieve this, we add a term from our GMM model to the regularization term ($L_\text{reg}$) in object fitting. We go through all candidate objects and re-optimize their parameters, keeping the selected objects fixed.
As noted by Olson et al.~\cite{Olson:2013:IJRR}, the structure of the negative log-likelihood (NLL) of a GMM does not lend itself to
non-linear least squares optimization. Instead, we approximate the NLL of the full GMM by considering it as a Max-Mixture, reducing the NLL to the weighted distance from the closest mixture mean. We define the Max-Mixture likelihood function
\[
p_\mathrm{Max}(\bb{\delta}) = \max_k w_k N(\bb{\delta} \mid \bb{\mu}_k, \bb{\Sigma}_k),
\]
where $\bb{\delta} = \begin{bmatrix} \bb{\delta}_t \\ \delta_\theta
\end{bmatrix}$ is the relative translation and orientation of the new candidate
w.r.t.\ the already placed object, and $w_k$ is the weight of the $k$th mixture in
the model.
We use the sum of negative log-likelihoods of these terms for all selected objects that are within a distance of $\delta_r$ to the refined candidate:
{
\small
\[
-\log(p_\mathrm{Max}(\bb{\delta})) = \min_k \frac{1}{2} (\bb{\delta} - \bb{\mu}_k)^T \bb{\Sigma}_k^{-1}(\bb{\delta} - \bb{\mu}_k) - \log(w_k\eta_k),\]
}
where $N(\bb{\mu}, \bb{\Sigma})$ represents the normal distribution, and $\eta_k$ is the Gaussian normalization factor for the $k$th mixture. At
optimization time, during each step we find the mixture component $k^*$ that
minimizes this function, and then optimize w.r.t.\ the negative log likelihood
of the Gaussian of that component alone, resulting in the following term to be added to the objective function $L_\text{reg}$ (Equation~\ref{eqn:energy}):
\begin{align}
\frac{1}{2} (\bb{\delta} - \bb{\mu}_{k^*})^T \bb{\Sigma}_{k^*}^{-1}(\bb{\delta} - \bb{\mu}_{k^*}).
\end{align}
\mypara{Refined selection}
Refined candidates and objects selected for the mockup can help in placing additional objects that have subtler cues. Hence, we iterate between the refined fitting and refined selection processes. In the refined selection, we assume that previously selected objects cannot be removed, and add a unary term to favor placing new candidates. So, for each candidate placement in the second iteration, we add a term to $U_i$ (Eq.~\ref{eq:Ui}):
{\small
\begin{align}
-\sum_k \text{logit} (GMM(o_i, o^*_k)^\beta),
\end{align}
}
where $\{o^*_k\}$ are the objects selected at previous iterations.
\section{Overview}
\label{sec:overview}
In a scene with many chairs, we observe that the environment is not important for recognizing an unoccluded chair -- the shape of the object is clearly visible and
immediately recognizable. Under occlusion, however, recognition requires additional 3D contextual information. State-of-the-art methods based on FRCNN~\cite{Ren:2015:NIPS} correctly detect chairs that are visible, but miss partially occluded ones (see inset figure in Section~\ref{sec:related}). In such cases, recognition becomes easier with more contextual and co-occurrence information (see Figure~\ref{fig:context_example}).
\begin{figure}[h!]
\includegraphics[width=\linewidth]{figures/context_example/context_example}
\caption[Decreasing context]{As humans, our understanding of scenes is heavily predicated on
the context~\protect\cite{brainWorks17}. From left to right, less global information makes detection of the chair harder.}
\label{fig:context_example}
\end{figure}
Motivated by the above insight, we design \textsc{SeeThrough} to run in three key steps:
(i)~an image-space keypoint detection trained on AMT-annotated real photographs (Section~\ref{subsec:ch4:keypoint_maps});
(ii)~a candidate generation step that takes the estimated camera to lift detected 2D keypoints to 3D (deformable) model candidates (Section~\ref{subsec:ch4:candidate_generation}); and
(iii)~an iterative scene mockup stage where we solve a selection problem to extract a scene arrangement that proposes a plausible object layout using a common object co-occurrence prior (Section~\ref{subsec:sceneInference}).
\if0
Our goal is to construct a method that
converts a 2D photograph to a 3D scene. The most classical
way of doing so would be to train some machine learning method on some feature
representation of many examples of 2D photograph / 3D scene pairs and use the
resulting classifier as our mockup black box. Such an approach can be easily
constructed from a combination of existing methods. It turns out, however, that
such methods fail badly when confronted with all but the simplest of scenes.
In fact, in our evaluation (Section~\ref{sec:ch4:evaluation}) we compare our
method with two alternate methods that follow this approach. Foreshadowing some of
their results in the left side of Figure~\ref{fig:ch4:baseline_foreshadowing}
shows that chairs that are obviously visible get placed correctly, but any
instances that are a little harder to see fail to be selected.
\begin{figure}[h!]
\includegraphics[width=\linewidth]{figures/baseline_foreshadowing/baseline_foreshadowing}
\caption[Baseline sample]{Methods based only on the image quickly fail in the presence of less than ideally visible chairs. Our method deals with this situation much better.}
\label{fig:ch4:baseline_foreshadowing}
\end{figure}
To understand this failure, and more importantly how to circumvent it, it is
useful to consider how we as humans are capable of understanding these kind of
scenes. Looking at Figure~\ref{fig:ch4:context_example}, we see a selection of chairs, some
heavily occluded and some clearly visible, in different conditions: (i)~we see the full scene, (ii)~only the local context, or (iii)~only the pixels that belong to the chair
itself. Observe that the
environment is not important for the recognition of the unoccluded chair -- the shape of the object is clearly visible and
we immediately recognize the chair. However, under heavily occlusion, the
task of recognizing the chair becomes easier as more context gets added. For
the last column, we might hypothesize that the image regions belong to a chair,
but we have no way of confirming this for certain -- unless the context is
restored.
\begin{figure}
\includegraphics[width=\linewidth]{figures/context_example/context_example}
\caption[Decreasing context]{As humans, our understanding of scenes is heavily predicated on
the context. From left to right, less global information is available,
making the classification of the marked object as ``chair'' harder}
\label{fig:ch4:context_example}
\end{figure}
We observe that the addition of context provides extra information in classifying and
posing the objects in a scene. Importantly,
the extra information obtained from the entire image is only useful
given prior knowledge we have built up over previous experiences. In this
particular example, the added context helps only because we know that chairs
often occur together with other chairs and tables. Given this prior knowledge
and the global context of the object, our recognition efficacy is enhanced.
This insight is what we capture in our approach to the scene mockup problem: to
maximize performance on the mockup task, we need to consider both local
information and the context the objects are placed in. Furthermore, to
understand this context we need to teach the system what usual scenes look like.
We express these notions in our method as follows: we extract \emph{local}
information from the input image using a keypoint detection network
(Section~\ref{sec:ch4:keypoint_maps}), then \emph{model} the prior knowledge
about how scenes are usually arranged (Section~\ref{ssec:ch4:scene_statistics}),
finally combining this model with the keypoints to find chair instances from a
\emph{global} perspective (Section~\ref{ssec:ch4:graph_optimization}). The
added high level information pushes the performance past that of the
alternative approach of using only the input data itself (see
Figure~\ref{fig:ch4:baseline_foreshadowing}, right). In the next section, we
will go through each of these steps in detail.
\fi
\section{Related Work}
\label{sec:related}
\mypara{Scene mockups} 3D scene inference from 2D indoor images has recently received significant research focus due to the ubiquity of the new generation capture methods that enable partial 3D and/or depth capture. A significant amount of progress has been made following the early work of Hoiem et al.~\cite{hoiem2005automatic}, first with approximating only room shape~\cite{dasgupta2016delay,mallya2015learning,lim2014fpm,hedau2009recovering}, then inferring cuboid-like structures as surrogate furniture~\cite{del2012bayesian,choi2015indoor,zhang2014panocontext,xiao2012localizing,schwing2013box}. However, for detailed geometry prediction, the image input is generally supplemented with additional per-pixel depth or point clouds~\cite{kmyg_acquireIndoor_sigga12}. Mattausch et al.~\cite{Mattausch:2014:CGF} used 3D point cloud input to identify repeated objects by clustering similar patches. Li et al.~\cite{Li:2015:CGF} utilized an RGB-D sensor to scan an environment in real time, and used the depth input to detect 3D objects queried from a database. While these works take 3D data as input, our method relies only on a single RGB image.
Recently, Izadinia et al.~\cite{Izadinia:2016:Arxiv} in their impressive \textsc{Im2CAD} system demonstrated
scene reconstruction with CAD models from a single image using image based
object detection (using FRCNN) and pose estimation approaches. Although their objective is similar to ours, the performance is bounded by the individual vision algorithms utilized in their pipeline. For example, if the segmentation misses an
\begin{wrapfigure}{r}{0.43\columnwidth}
\vspace{-10pt}
\hspace{-5pt}
\includegraphics[width=0.43\columnwidth]{./figures/frcnn/frcnn_results.pdf}
\vspace{-20pt}
\end{wrapfigure}
object because of significant occlusion (inset shows top FRCNN~\cite{Ren:2015:NIPS} detections with scores), there is no mechanism to recover it in the reconstruction (see Section~\ref{sec:ch4:evaluation} for a comparison). On the contrary, our novel pairwise-based search incorporates high-level relationships typical of indoor scenes to recover from such failures.
\mypara{3D$\rightarrow$2D alignment} Another way to create scene mockups is by directly fitting 3D models to the image. Pose estimation work~\cite{wu2016single,tulsiani2015viewpoints,huang2015single,lim2014fpm,kholgade20143d,Aubry:2014:CVPR} has also demonstrated that, given object images, reliable 3D orientation can be predicted, which in turn might help with scene mockups. Lim et al.~\cite{Lim:2013:ICCV} used local image statistics along with image-space features to align a given furniture model to an image. Aubry et al.~\cite{Aubry:2014:CVPR} utilized a discriminative visual element processing step for each shape in a 3D model database, which is then used to localize and align models to given 2D photographs of indoor scenes. Like most existing methods, their approach breaks down under moderate to high occlusion. Our method performs better, as other nearby objects can provide higher-order information to compensate for what is lost (see Section~\ref{sec:ch4:evaluation}).
\mypara{Priors for scene reconstruction} Scene arrangement priors have been successfully demonstrated in 3D reconstruction from unstructured 3D input, as well as scene synthesis~\cite{Fisher:2012:SIGGASIA}. Shao et al.~\cite{Shao:2014:SIGGRAPH} demonstrated that scenes with significant occlusion can be reconstructed from depth images by reasoning about the physical plausibility of object placements. Monszpart et al.~\cite{Monszpart:2015:SIGGRAPH} use the insight that
planar patches in indoor scenes are often oriented in a sparse set of
directions to regularize the process of 3D reconstruction. Fisher et al.~\cite{Fisher:2015:SIGGRAPH}, in turn, leveraged human activity priors together with object relationships as a foundation for 3D scene synthesis. In contrast to the complex and high-order joint relationships used in these works, our object-centric templates are compact and primarily encode the repetition of similar shapes (such as two side-by-side chairs) across pose and location. This compact and simple template representation ensures that our search stays tractable at run-time.
\section{Results and Discussion}
\label{sec:ch4:evaluation}
\subsection{Training and test data}
We curated three datasets to evaluate our method. (Datasets to be made available for research use.)
\mypara{(a) 2D keypoints on indoor images} We downloaded 5000 images from the \textsc{Houzz} website using keywords like living room, kitchen, dining room, meeting room, etc.
We utilized the Amazon Mechanical Turk platform to obtain keypoints on the images requiring at least 3 workers to agree per image. For each image, we asked the turkers to mark the keypoints of the chairs (maximum of 8 keypoints per chair). Please refer to the supplemental material for details about the web-based annotation interface.
We convolved these keypoints with a Gaussian filter, giving the CNN smooth target maps to learn, and averaged the results across workers.
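The construction of these ground-truth maps can be sketched as follows; the image size and $\sigma$ are arbitrary here, and merging overlapping keypoints within one worker's map with a max is one of several reasonable choices.

```python
import numpy as np

def keypoint_map(points, shape, sigma=2.0):
    """Rasterize annotated (row, col) keypoints into a smooth target map by
    placing an unnormalized Gaussian at each annotation."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    out = np.zeros(shape)
    for r, c in points:
        g = np.exp(-((ys - r) ** 2 + (xs - c) ** 2) / (2.0 * sigma ** 2))
        out = np.maximum(out, g)          # merge overlapping keypoints
    return out

# Three (hypothetical) workers annotated roughly the same chair-leg tip.
workers = [[(10, 12)], [(11, 12)], [(10, 13)]]
maps = [keypoint_map(pts, (32, 32)) for pts in workers]
gt = np.mean(maps, axis=0)                # averaged training target
```

Averaging the per-worker maps down-weights locations on which the annotators disagree.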
\begin{figure}[h!]
\centering
\includegraphics[width=0.99\linewidth]{figures/performance_comparison/nr-of-chairs.pdf}
\caption{Number of chairs and their estimated visibility distribution in the sampled images of our annotated \textsc{Houzz} dataset.}
\label{fig:datasetVisibility}
\end{figure}
\begin{figure*}[t!]
\centering
\includegraphics[width=\linewidth]{figures/new_figures/result_plate.pdf}
\caption{Qualitative comparison of the baseline methods: \textsc{SeeingChairs}~(orange) and \textsc{FasterRCNN3D}~(blue) against \mbox{\textsc{SeeThrough}}~(green). Annotated groundtruth poses~(gray) are provided for reference in the top view. Note that our approach both detects more chairs and aligns them more accurately than the others. }
\label{fig:ch4:qualitative_results}
\vspace{-0.5cm}
\end{figure*}
\mypara{(b) Scene mockup groundtruth}
\label{sec:ch4:ground_truth_annotation}
In order to quantitatively measure the performance of \mbox{\textsc{SeeThrough}}\ and compare with alternate methods, we require a set of ground truth annotated scenes, i.e., images for
which all the 3D objects (chairs in our case) have been placed manually. We are not aware of a similar dataset with mockups for 3D objects including the (partially) occluded ones. Hence, we set up another annotation tool in which an object can be placed by clicking and dragging, as well as by annotating a
number of keypoints of the object, and optimizing for its location and scale.
Moreover, objects can be copied and translated along their local coordinate
axes, allowing for quick and precise annotation (see
supplemental for details). We used the automatically estimated camera
parameters for the automatic refinement, while discarding any image with grossly erroneous camera estimates. We used the tool to annotate 300 scenes (see Figure~\ref{fig:datasetVisibility}), which were
randomly selected from our \textsc{Houzz} dataset.
\mypara{(c) 3D models and scenes} For our database models, we used the chair models from the ShapeNet~\cite{shapenet2015} database and for scene statistics, we used 45K houses from the PBRS dataset~\cite{Zhang:2017:CVPR}. While the latter comes with 400K physically-based renderings, we tried using these synthetic images to pretrain networks for predicting keypoint maps, but found that fine-tuning a variant of ResNet-50 with weights trained on ImageNet produced more accurate results (see Section~\ref{sec:ch4:discussion} for more details).
\subsection{Performance Measures and Parameters}
\vspace*{0.1in}
\mypara{Hyperparameters}
Our optimization pipeline depends on a number of parameters that we optimized using HyperOpt~\cite{HyperOpt}, which employs a Tree of Parzen Estimators~\cite{Bergstra:2013:ICML}.
We used the \mbox{\textsc{LocAng}}\ measure as our objective measure.
As ground truth data, we used 10 scenes fully annotated specifically for this purpose,
in the same way as the data used for evaluation (see above).
See supplemental material for the list of resulting hyperparameter values.
\begin{figure*}[t!]
\centering
\includegraphics[width=\textwidth]{figures/new_figures/combined_measures_aron.pdf}
\caption{Quantitative performance of \mbox{\textsc{SeeThrough}}\ against the two baseline methods assembled from state-of-the-art components. We outperform the baselines significantly across all the measures. Please refer to supplemental for the tabulated values. }
\label{fig:quantMeasures_all}
\end{figure*}
\mypara{Quantitative measures}
We use \emph{source} and \emph{target}
to denote the two scenes between which a measure is computed. We
specifically do not use `result scene' and `ground truth scene' as
the ground truth acts as a target to compute precision, and acts as source
to compute recall.
We denote
the objects in the source and target scene as $o_S \in \bb{S}$, $o_T \in \bb{T}$,
respectively. We use $J_3(o_S, o_T)$ and $J_2(o_S, o_T)$ to represent the Jaccard index
or \emph{intersection-over-union} (IoU) of the bounding boxes of $o_S$ and
$o_T$ in 3D world space and 2D screen space, respectively. Finally, given an
object $o_S$, we define its `$J_i^*$ correspondence' with $\bb{T}$ as the object of $\bb{T}$ with maximum IoU with $o_S$: $ J_i^*(o_S, \bb{T}) := \arg\max_{o_T \in \bb{T}}
J_i(o_S, o_T). $
Intuitively, this returns, for a given object, the {\em best matching} object from
the other scene in terms of overlap. Next, we briefly describe our selected measures (see supplemental for details).
{\noindent (a)~\mbox{\textsc{IoU3D}}:} This measures average IoU for 3D bounding boxes around objects.
Specifically, given a source scene and a target scene, we average MaxIoU across all objects in the source scene
(measuring IoU overlap with the corresponding object in the target).
{\noindent (b)~\mbox{\textsc{IoU2D}}:} Similar to \mbox{\textsc{IoU3D}}, this measure averages IoU for 2D bounding boxes around projected objects.
{\noindent (c)~\mbox{\textsc{Loc}}:} This measures the fraction of correct locations of objects in the source scene
with respect to the target. We consider every object in the source scene that has a $J_3^*$ correspondence over a threshold
$\tau_{J}$ to have a correct location.
{\noindent (d)~\mbox{\textsc{LocAng}}:} Similar to \mbox{\textsc{Loc}}, this measure additionally requires the angle difference to be under a threshold $\tau_{\theta}$.
{\noindent (e)~\mbox{\textsc{AngDiff}}:} This measures the average angle difference for the objects that have a
correct location.
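For concreteness, the IoU-based measures can be sketched as follows for axis-aligned boxes. This is a minimal pure-Python illustration: the box encoding and the default threshold are our own choices here, and the actual $\tau_J$, $\tau_\theta$ values are listed in the supplemental.

```python
def iou3d(a, b):
    # a, b: axis-aligned boxes (xmin, ymin, zmin, xmax, ymax, zmax)
    inter = 1.0
    for k in range(3):
        lo, hi = max(a[k], b[k]), min(a[k + 3], b[k + 3])
        if hi <= lo:
            return 0.0
        inter *= hi - lo
    vol = lambda c: (c[3] - c[0]) * (c[4] - c[1]) * (c[5] - c[2])
    return inter / (vol(a) + vol(b) - inter)

def best_match(o_s, targets):
    # the J3* correspondence: the target object with maximum IoU overlap
    return max(targets, key=lambda o_t: iou3d(o_s, o_t))

def iou_3d_measure(source, target):
    # measure (a): average MaxIoU over all objects in the source scene
    return sum(iou3d(o, best_match(o, target)) for o in source) / len(source)

def loc(source, target, tau_j=0.25):
    # measure (c): fraction of source objects whose J3* IoU exceeds tau_j
    hits = [o for o in source if iou3d(o, best_match(o, target)) > tau_j]
    return len(hits) / len(source)
```

Running these functions with the ground truth as source and the result as target yields recall-type numbers, and vice versa for precision-type numbers, as discussed above.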
\subsection{Baselines: State-of-the-art Alternatives}
\label{sec:ch4:baselines}
We are not aware of prior research focusing on producing scene mockups in the presence of {\em significant occlusion}. Hence, we created two baselines by combining relevant state-of-the-art methods. We convert the output
of each baseline (in both cases 3D pose but 2D image space locations of chairs) to
our comparable 3D scene mockup format.
\mypara{(a) \mbox{\textsc{SeeingChairs}}} Aubry et al.~\cite{Aubry:2014:CVPR} proposed a method to find chairs by
matching so-called `discriminative visual elements' (DVE) from a set of
rendered views of 1000+ chair models with any input image. These DVEs are
linear classifiers over HOG features \cite{Dalal:2005:CVPR} learned from the
rendered views in a discriminative fashion.
At training time, they are learned at multiple scales while keeping only the most discriminative ones for matching. At test time,
a patch-wise matching process finds the best-matching image and rendered patch pairs,
and then finds sets of pairs that come from the same rendered view (see \cite{Aubry:2014:CVPR} for details).
The above method outputs scored image space bounding boxes together with a specific
chair model and pose. For our 3D
performance measures, however, we need the
output in the form of a 3D scene. Hence, we convert each set of bounding
box, pose, and chair model to a 3D scene. Using our estimated camera, we optimize the location (in the
xz-plane) of the 3D model without changing its pose, such that the 2D bounding
box of the projected model matches as closely as possible with the detected
bounding box using a least-squares formulation (solved using Ceres~\cite{Ceres}).
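The conversion step can be illustrated as follows. This is a brute-force pure-Python stand-in for the continuous Ceres least-squares solve, with a simple pinhole camera and made-up candidate offset grids; it is a sketch of the idea, not the actual optimizer.

```python
def project_bbox2d(corners, f):
    # pinhole camera at the origin looking down +z: (u, v) = f * (x, y) / z
    us = [f * x / z for x, y, z in corners]
    vs = [f * y / z for x, y, z in corners]
    return (min(us), min(vs), max(us), max(vs))

def fit_xz(model_corners, detected_box, f, xs, zs):
    # least-squares fit of the floor-plane offset (dx, dz) so that the
    # projected 2D bounding box matches the detected one
    def cost(dx, dz):
        moved = [(x + dx, y, z + dz) for x, y, z in model_corners]
        box = project_bbox2d(moved, f)
        return sum((a - b) ** 2 for a, b in zip(box, detected_box))
    return min(((dx, dz) for dx in xs for dz in zs), key=lambda p: cost(*p))
```

Given a detected 2D box and the chair model returned by the detector, the recovered offset places the model in the scene mockup without changing its estimated pose.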
\mypara{(b) \mbox{\textsc{FasterRCNN3D}}} As the second baseline, we combine a convolutional neural network (CNN) trained for image-space
object detection and another CNN trained for 3D object interpretation. Specifically, we use FasterRCNN~\cite{Ren:2015:NIPS} to extract bounding boxes of chairs from the input
image and then feed these regions of interest to 3D-INN~\cite{wu2016single}, which produces a
templated chair model consisting of a set of predefined 3D keypoints as well as
a pose estimate.
Since our set of keypoints is a subset of the keypoints
produced by 3D-INN, we use our 3D candidate generation part of \mbox{\textsc{SeeThrough}}\
to convert the extracted keypoints to a 3D chair for the resultant scene mockup.
\subsection{Evaluation and Discussion}
\label{sec:ch4:discussion}
We ran \mbox{\textsc{SeeThrough}}\ and the two baseline methods on the full ground truth
annotated scene set (Section~\ref{sec:ch4:ground_truth_annotation}). A sampling
of results can be seen in Figure~\ref{fig:ch4:qualitative_results}. (Further visualization
for 100 scenes in our groundtruth set can be found in the supplementary material.)
The baseline methods perform well when there is no occlusion in the scene.
Specifically, chairs that are clearly visible are reconstructed reliably as the direct visual
information is sufficient to make an accurate
inference about the objects' pose and identity. However, when chairs are partly
occluded, the methods break down quickly. In contrast, \mbox{\textsc{SeeThrough}}, by incorporating a co-occurrence object model, is more
often able to recover from these situations.
This difference in performance is also reflected in the quantitative results (see Figure~\ref{fig:quantMeasures_all}). Our method outperforms the baselines on all
counts. Additionally, in Figure~\ref{fig:ch4:performance_changes}, we show how the
\mbox{\textsc{LocAng}}\ measure changes under varying thresholds of angle ($\tau_\theta$) and IoU ($\tau_J$).
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{figures/new_figures/threshold_plot.pdf}
\caption{Performance variation according to \mbox{\textsc{LocAng}}\ F1 measure for \mbox{\textsc{SeeThrough}}\ and the two baseline methods under varying angle and IoU thresholds. We perform significantly better across both the threshold ranges. }
\label{fig:ch4:performance_changes}
\end{figure}
\mypara{Performance under increasing occlusion} In order to specifically test performance under varying occlusion, we sorted the groundtruth annotated \textsc{Houzz} dataset into categories based on the extent of the visible chairs.
We approximate visibility as follows: we compute how many chairs lie along view rays connecting the estimated camera location with points on a discrete grid on the image plane. We used the objects' bounding boxes for this visibility computation. Higher values denote more occlusion (as there are more chairs along the view rays). Figure~\ref{fig:teaser} shows that while all the three methods perform comparably under low occlusion, only \mbox{\textsc{SeeThrough}}\ continues to have a high success rate under medium to heavy occlusion.
\mypara{Effect of multiple iterations} In Section~\ref{sec:ch4:ablation}, we demonstrate the positive utility of multiple iterations to \mbox{\textsc{SeeThrough}}.
One of our key observations is that high-confidence objects (e.g., unoccluded objects) are easier to detect, and hence can provide valuable contextual information reinforcing the weaker signals (e.g., partially occluded objects). This behavior results in higher detection rates over the iterations, and a similar mechanism is believed to be at work in human perception~\cite{brainWorks12,brainWorks17}.
\mypara{Utility of synthetic data}
We found that training on synthetic datasets~\cite{Zhang:2017:CVPR} for predicting image-space keypoint maps led to unsatisfactory results.
For this experiment, we took all renderings from 400K images that contain at least one of the annotated chairs and reprojected the keypoint locations from corresponding 3D models into these renders, yielding one image/keypoint map pair as training data per render, resulting in a total of 8000 image/keypoint map pairs.
We experimented with three different training setups:
(i)~network trained with only synthetic data; (ii)~network first trained with synthetic data,
and then refined using real data, and (iii)~network trained with only real data.
The best performance on the test set resulted from setup~(iii), i.e., training with only real data.
One likely explanation is that training the network with the synthetic data first
steers away the network weights from those that were the result of the ImageNet
pretraining, which already encompass a high general understanding of real
photographs.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/performance_comparison/bars-ablation.pdf}
\caption{Ablation study evaluating the importance of the different stages of \mbox{\textsc{SeeThrough}}.}
\label{fig:ablation}
\end{figure}
\subsection{Ablation Study}
\label{sec:ch4:ablation}
We evaluated the importance of the individual steps of \mbox{\textsc{SeeThrough}}\ to the final performance (see Figure~\ref{fig:ablation} and supplemental). Specifically,
we ran our pipeline on the full test set under two weakening conditions:
(a)~we disable all pairwise
costs and run the remaining pipeline based solely on the keypoint location maps; and
(b)~we disable iterations by running
the second and third stage only once, thus removing the possibility of the candidate generation stage benefiting from previously placed objects.
\mypara{Discussion} Although \mbox{\textsc{IoU2D}}\ recall increases when disabling scene statistics (option \#a), the precision goes down significantly. This is expected,
since the pairwise costs by themselves do not propose new objects -- they only make output mockups more precise by pruning objects
that do not agree with others.
In contrast, using only a single iteration (option \#b) increases precision, but recall takes a significant hit. This is not surprising,
as in the later iterations the keypoint location maps have decreased influence relative to the pairwise costs. As a result, while objects with weaker
keypoint response are more easily found, false positives also become more likely. Overall, both the \mbox{\textsc{IoU2D}}\ and \mbox{\textsc{LocAng}}\ F1 measures are highest
for the full \mbox{\textsc{SeeThrough}}.
\section{Supplementary Materials}
\subsection{Amazon MTurk Annotation Interface and Keypoint Estimation} Figure~\ref{fig:ch4:amt} shows the keypoint annotation interface, Figure~\ref{fig:ch4:keypoint_types} the selected keypoint types, and Table~\ref{tab:ch4:network_architecture} the corresponding network architecture.
\begin{figure}[h!]
\includegraphics[width=\linewidth]{figures/amt/amt}
\caption{The Amazon MTurk interface we used to annotate 500 photographs with keypoints. Annotations were accepted if each image had consensus among 3 or more users. }
\label{fig:ch4:amt}
\end{figure}
\begin{table}[h!tb]
\centering
\caption{Network architecture used for keypoint estimation.}
\label{tab:ch4:network_architecture}
\resizebox{\linewidth}{!}{
\bgroup
\def1.5{1.5}
\begin{tabular}{|c|c|c|}
\hline
layer name & output size & node type \\ \hline
input & $512 \times 512$ & \\ \hline
conv\_1 & $256 \times 256$ & $7\times 7$, stride 2 \\ \hline
max\_pool & $128 \times 128$ & Max pooling, stride 2 \\ \hline
block\_1 & $64 \times 64$ & Bottleneck units with shortcuts, $\begin{bmatrix} 1 \times 1, 64 \\ 3 \times 3, 64 \\ 1 \times 1, 256 \end{bmatrix} \times 3$, last $3\times 3$ stride 2 \MatTableStrut \\ \hline
block\_2 & $64 \times 64$ & Bottleneck units with shortcuts, $\begin{bmatrix} 1 \times 1, 128 \\ 3 \times 3, 128 \\ 1 \times 1, 512 \end{bmatrix} \times 4$, all stride 1 \MatTableStrut \\ \hline
block\_3 & $64 \times 64$ & Bottleneck units with shortcuts, $\begin{bmatrix} 1 \times 1, 256 \\ 3 \times 3, 256 \\ 1 \times 1, 1024 \end{bmatrix} \times 6$, all stride 1 \MatTableStrut \\ \hline
block\_4 & $64 \times 64$ & Bottleneck units with shortcuts, $\begin{bmatrix} 1 \times 1, 512 \\ 3 \times 3, 512 \\ 1 \times 1, 2048 \end{bmatrix} \times 3$, all stride 1 \MatTableStrut \\ \hline
\end{tabular}
\egroup
}
\end{table}
\subsection{Error Measures} In the following, we describe the error measures used to compare \mbox{\textsc{SeeThrough}}\ with the baseline alternatives.
\begin{description}\itemsep0pt
\item[Average Max IoU:] This measure takes a source scene and a target
scene, and records the accuracy with which the volumes of the objects
in the source scene agree with the objects in the target scene.
Specifically, for each object in the source scene, we record the IoU of
the object with its MaxIoU correspondence. This measure is averaged
over all objects in the source scene to produce the final measure.
\[ \mathrm{\mbox{\textsc{IoU3D}}}(\bb{S}, \bb{T}) = \frac{1}{|\bb{S}|} \sum_{o_S \in \bb{S}} J_3(o_S, J_3^*(o_S, \bb{T})) \]
We measure in both directions, i.e. with the ground truth as source and
result as target, as well as vice versa. The former can be thought of
as a form of ``recall'' and the latter as a form of ``precision''. This
measure is angle-agnostic and captures the location similarity of objects in the source
scene w.r.t. those in the target scene.
\item[Average Max 2D IoU:] This measures the average maximum IoU of the
bounding boxes of each projected object in the source scene with the
bounding boxes of the projected objects in the target scene.
\[ \mathrm{\mbox{\textsc{IoU2D}}}(\bb{S}, \bb{T}) = \frac{1}{|\bb{S}|} \sum_{o_S \in \bb{S}} J_2(o_S, J_2^*(o_S, \bb{T})) \]
\item[Percentage correct location:] This measure takes a source scene and a
target scene, and records the percentage of objects in the source scene
that have a $J_3^*$ correspondence over a certain threshold
$\tau_{J}$. To define it, we first set
\begin{multline*}
\mathrm{CorrectLoc}(\bb{S}, \bb{T}) = \\ \{ o_S \in \bb{S} \mid J_3(o_S, J_3^*(o_S, \bb{T})) > \tau_{J} \}.
\end{multline*}
Then,
\[ \mathrm{\mbox{\textsc{Loc}}}(\bb{S}, \bb{T}) = \frac{|\mathrm{CorrectLoc}(\bb{S}, \bb{T})|}{|\bb{S}|}. \]
We again measure in both directions, yielding recall (ground truth is source, result is target)
and precision (vice versa) measures.
\item[Percentage correct:] As the previous measure, but with the added constraint that the
angle difference is under a threshold $\tau_{\theta}$. So,
\begin{multline*}
\mathrm{CorrectFull}(\bb{S}, \bb{T}) = \\ \{ o_S \in \mathrm{CorrectLoc}(\bb{S}, \bb{T}) \mid \angle(o_S, J_3^*(o_S, \bb{T})) < \tau_{\theta} \}.
\end{multline*}
Then,
\[ \mathrm{\mbox{\textsc{LocAng}}}(\bb{S}, \bb{T}) = \frac{|\mathrm{CorrectFull}(\bb{S}, \bb{T})|}{|\bb{S}|}. \]
\item[Angle difference:] This measures the average angle difference for the objects that have
a correct location. This measure is symmetrical.
\begin{multline*}
\mathrm{\mbox{\textsc{AngDiff}}}(\bb{S}, \bb{T}) = \\ \frac{\sum_{o_S \in \mathrm{CorrectLoc}(\bb{S}, \bb{T})} \angle(o_S, J_3^*(o_S, \bb{T}))}{|\mathrm{CorrectLoc}(\bb{S}, \bb{T})|}
\end{multline*}
\end{description}
\begin{figure}[h!]
\includegraphics[width=\linewidth]{figures/keypoint_types/keypoint_types}
\caption[Keypoint types]{Selected keypoint types.}
\label{fig:ch4:keypoint_types}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth]{figures/groundtruth_annotation/groundtruth_annotation}
\caption[Annotation tool]{We created a ground truth annotation tool for quickly creating ground truth scene mockup examples.}
\label{fig:ch4:gt_annotation}
\end{figure}
\subsection{Groundtruth Annotation Tool} We created a groundtruth annotation tool (see Figure~\ref{fig:ch4:gt_annotation}) for generating 3D mockup groundtruth to compare \mbox{\textsc{SeeThrough}}\ with the baseline alternatives.
\begin{table}
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{|c|c|c|}
\hline
Name & Description & Value \\ \hline
$\alpha$ & Sensitivity of keypoint maps & 0.61 \\ \hline
$\beta$ & Sensitivity to object co-occurrence model & 0.14 \\ \hline
$\tau_m$ & Lower threshold of keypoint location map & 0.25 \\ \hline
$\tau_u$ & Maximum cost for selecting candidate & 0.21 \\ \hline
\end{tabular}
}
\caption[Hyper parameters]{Hyper parameters of optimization, found by HyperOpt~\cite{HyperOpt}.}
\label{tab:ch4:hyperparameters}
\end{table}
\subsection{Qualitative Results} Please refer to supplementary files `topViewImages.pdf' and images in the folder `overlaidChairImages' (chairs shown in red and gray) for qualitative results.
As shown in Figure~\ref{fig:ch4:baseline_example}, existing methods work well in regions where the objects are fully visible. But, since they rely on directly visible cues, the methods start failing under moderate to heavy occlusion (see Tables~\ref{tab:ch4:ablation} and \ref{tab:ch4:performance}).
\begin{figure}[h!]
\centering
\def\linewidth{\linewidth}
\import{figures/baseline_example/}{baseline_example.pdf_tex}
\caption[Baseline output]{Example of raw output of the baseline methods.}
\label{fig:ch4:baseline_example}
\end{figure}
\subsection{Hyperparameters}
Table~\ref{tab:ch4:hyperparameters} lists the parameters used in our experiments.
\begin{table*}[h!]
\caption[Ablation study]{Ablation study showing the importance of using scene statistics and multiple iterations for best performance.}
\resizebox{\linewidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
& AvgMaxIOU (precision) & AvgMaxIOU (recall) & AvgMaxIOU (F1) & PercCorrectFull (precision) & PercCorrectFull (recall) & PercCorrectFull (F1) \\ \hline
Full pipeline & 0.386 & 0.250 & \textbf{0.293} & 0.285 & \textbf{0.161} & \textbf{0.198} \\ \hline
No scene stats & 0.296 & \textbf{0.265} & 0.267 & 0.174 & 0.151 & 0.154 \\ \hline
Single iteration & \textbf{0.421} & 0.190 & 0.251 & \textbf{0.346} & 0.123 & 0.175 \\ \hline
\end{tabular}
}
\label{tab:ch4:ablation}
\end{table*}
\begin{table*}[h!]
\caption[Quantitative performance]{Quantitative performance of \mbox{\textsc{SeeThrough}}\ versus the two baseline methods. We outperform the baselines significantly across all measures.}
\resizebox{\linewidth}{!}{
\begin{tabular}{|c|c|c|c|c|}
\hline
& \textsc{IoU3D} (precision) & \textsc{IoU3D} (recall) & \textsc{IoU3D} (F1) & \\ \hline
3D-INN~\cite{wu2016single} + FasterRCNN~\cite{Ren:2015:NIPS} & 0.316 & 0.150 & 0.198 & \\ \hline
SeeingChairs~\cite{Aubry:2014:CVPR} & 0.195 & 0.128 & 0.149 & \\ \hline
Ours & \textbf{0.386} & \textbf{0.250} & \textbf{0.293} & \\ \hline
& \textsc{Loc} (precision) & \textsc{Loc} (recall) & \textsc{Loc} (F1) & \\ \hline
3D-INN~\cite{wu2016single} + FasterRCNN~\cite{Ren:2015:NIPS} & 0.263 & 0.124 & 0.165 & \\ \hline
SeeingChairs~\cite{Aubry:2014:CVPR} & 0.071 & 0.043 & 0.052 & \\ \hline
Ours & \textbf{0.298} & \textbf{0.167} & \textbf{0.207} & \\ \hline
& \textsc{LocAng} (precision) & \textsc{LocAng} (recall) & \textsc{LocAng} (F1) & \\ \hline
3D-INN~\cite{wu2016single} + FasterRCNN~\cite{Ren:2015:NIPS} & 0.04 & 0.015 & 0.021 & \\ \hline
SeeingChairs~\cite{Aubry:2014:CVPR} & 0.013 & 0.007 & 0.009 & \\ \hline
Ours & \textbf{0.285} & \textbf{0.161} & \textbf{0.198} & \\ \hline
& \textsc{IoU2D} (precision) & \textsc{IoU2D} (recall) & \textsc{IoU2D} (F1) & \textsc{AngDiff} (in degrees) \\ \hline
3D-INN~\cite{wu2016single} + FasterRCNN~\cite{Ren:2015:NIPS} & 0.526 & 0.336 & 0.401 & 55.8 \\ \hline
SeeingChairs~\cite{Aubry:2014:CVPR} & 0.372 & 0.325 & 0.341 & 11.4 \\ \hline
Ours & \textbf{0.628} & \textbf{0.470} & \textbf{0.525} & \textbf{7.3} \\ \hline
\end{tabular}
}
\label{tab:ch4:performance}
\end{table*}
\section{Introduction}
Global warming and limited energy resources have increased interest in energy management and, in particular, in heat losses. Indeed, heat wasted in energy production processes and thermal machines could in principle be put to better use in many applications if it could be guided or transported in a similar way as electricity. However, while heat pipes have proved to be good candidates for thermal guiding, few devices currently exist that can switch or amplify heat as is done for electricity.
In electricity, the development of diodes \cite{lashkaryov_investigations_1941} and transistors \cite{bardeen_transistor_1998} has led to its control at the scale of the electron and to the emergence of electronics. One can therefore wonder whether heat could be managed in the same way, if thermal equivalents of these two devices existed. In the last decade, several works have focused on the development of thermal rectifiers, i.e., devices through which the thermal flux differs in magnitude when the temperatures at their ends are inverted. Thus, phononic \cite{terraneo_controlling_2002,li_thermal_2004,li_interface_2005,chang_solid-state_2006,hu_asymmetric_2006,yang_thermal_2007,hu_thermal_2009,pereira_sufficient_2011,zhang_thermal_2011,roberts_review_2011,garcia-garcia_thermal_2014} and electronic \cite{roberts_review_2011,segal_single_2008} thermal diodes or rectifiers have been developed, which later led to proposals of thermal transistors \cite{wang_thermal_2007,chung_lo_thermal_2008}. These concepts have since been extended to the case of thermal radiation, both in the near field \cite{otey_thermal_2010-1,basu_near-field_2011,ben-abdallah_phase-change_2013} and in the far field \cite{van_zwol_emissivity_2012,ito_experimental_2014,nefzaoui_simple_2014,nefzaoui_radiative_2014,joulain_radiative_2015}. The most interesting results have been obtained with phase-change thermochromic materials \cite{huang_thermal_2013}, such as VO$_2$ \cite{morin_oxides_1959,rini_photoinduced_2005}. Recently, thermal transistors have been designed using similar properties \cite{ben-abdallah_near-field_2014,joulain_modulation_2015}.
In recent years, individual quantum systems, such as atoms \cite{brune_quantum_1996,maunz_cavity_2004} or artificial ones such as quantum dots \cite{claudon_-chip_2009,dousse_ultrabright_2010}, have been proposed to develop photon rectifiers \cite{yu_complete_2009,mascarenhas_quantum_2014,mascarenhas_quantum_2015}, transistors \cite{hwang_single-molecule_2009,astafiev_ultimate_2010} or even electrically controlled phonon transistors \cite{jiang_phonon_2015}. Moreover, as quantum systems are always coupled to their environment, the question of how heat is transferred through a set of interacting quantum systems has naturally arisen \cite{manzano_quantum_2012,bermudez_controlling_2013,pumulo_non-equilibrium_2011} and has led to several works on thermal rectification \cite{scheibner_quantum_2008,pereira_symmetry_2009,werlang_optimal_2014,chen_thermal_2015}.
The goal of this article is to use elementary quantum objects, namely two-level systems (TLS) coupled to thermal baths, to develop thermal diodes and thermal transistors. To do so, we use the standard quantum thermodynamics formalism proposed by Lindblad, based on solving a master equation for the density matrix. We show, following the work of Werlang et al. \cite{werlang_optimal_2014}, that 2 TLS can easily make a thermal diode and that 3 TLS can make a thermal transistor. The three TLS connected to thermal reservoirs play the role of the three terminals of a bipolar electronic transistor: it is shown that a thermal current imposed at the base can drive the currents at the two other terminals of the system.
\section{Theory}
In the following, we consider TLS that are each connected to a thermal bath and can be coupled to one another. Two configurations are studied in this article: 2 coupled TLS make a thermal diode, whereas 3 coupled TLS make a thermal transistor.
\subsection{Thermal diode}
The system under consideration consists of two coupled TLS, each of them connected to a thermal bath, as depicted in Fig.~\ref{systemdiode}.
\begin{figure}
\begin{center}
\includegraphics[width=7cm]{Diode_scheme.eps}
\caption{Quantum thermal diode made of two coupled TLS, each connected to a thermal bath.}
\label{systemdiode}
\end{center}
\end{figure}
The two TLS are labeled with the letters $L$ (left) and $R$ (right), as are the temperatures of the thermal baths connected to them. We use the strong-coupling formalism developed by Werlang {\it et al.} \cite{werlang_optimal_2014}. Each TLS is characterized by an angular frequency $\omega_L$ or $\omega_R$, and the coupling between the two TLS by the typical angular frequency $\omega_{LR}$. The Hamiltonian of the system is (in $\hbar=1$ units)
\begin{equation}
\label{ }
H_S=\frac{\omega_L}{2}\sigma_z^L+\frac{\omega_R}{2}\sigma_z^R+\frac{\omega_{LR}}{2}\sigma_z^L\sigma_z^{R},
\end{equation}
where $\sigma_z^P$ ($P=L,R$) is the Pauli matrix $z$, whose eigenstates for the system $P$ are the states $\uparrow$ and $\downarrow$. The eigenstates of $H_S$ are given by the tensor product of the individual TLS states, so that we have 4 eigenstates labeled as $\left|1\right>=\left|\uparrow\uparrow\right>$, $\left|2\right>=\left|\uparrow\downarrow\right>$, $\left|3\right>=\left|\downarrow\uparrow\right>$, $\left|4\right>=\left|\downarrow\downarrow\right>$. The coupling between each TLS and its thermal bath $P$, constituted of harmonic oscillators \cite{caldeira_quantum_1983}, is based on the spin-boson model along the $x$ component
$H_{\rm TLS-bath}^P=\sigma_x^P\sum_k g_k(a_k^P + a_k^{P\dag})$.
The two reservoirs $P$ have their Hamiltonians equal to
$H_{\rm bath}^P=\sum_k\omega_ka_k^{P\dag}a_k^P$.
This modeling implies that baths can only flip one spin at a time. There are therefore 4 authorized transitions. Transitions $1\leftrightarrow3$ and $2\leftrightarrow4$ are induced by the thermal bath $L$, whereas transitions $1\leftrightarrow2$ and $3\leftrightarrow4$ are induced by the thermal bath $R$. Transitions $1\leftrightarrow4$ and $2\leftrightarrow3$ are forbidden.
The system state is described by a density matrix $\rho$, which obeys a master equation. In the Born-Markov approximation, it reads
\begin{equation}
\label{master}
\frac{d\rho}{dt}=-i[H_s,\rho]+{\cal{L}}_L[\rho]+{\cal{L}}_R[\rho].
\end{equation}
As in \cite{werlang_optimal_2014,breuer_theory_2002}, the Lindbladians ${\cal{L}}_P[\rho]$ are written for an Ohmic bath according to classical textbooks \cite{leggett_dynamics_1987,breuer_theory_2002}, so that we take the expression
\begin{eqnarray}
{\cal{L}}_P[\rho]&=&\sum_{\omega>0}{\cal{I}}(\omega)(1+n_\omega^P)\nonumber\\
&\times&\left[A_P(\omega)\rho A^+_P(\omega)-\frac{1}{2}\left\{\rho,A^+_P(\omega)A_P(\omega)\right\}\right]\\
&+&{\cal{I}}(\omega)n_\omega^P\left[A^+_P(\omega)\rho A_P(\omega)-\frac{1}{2}\left\{\rho,A_P(\omega)A^+_P(\omega)\right\}\right]\nonumber
\end{eqnarray}
of \cite{werlang_optimal_2014}, where
\begin{equation}
\label{ }
n_\omega^P=\frac{1}{e^{\hbar\omega/k_bT_P}-1},
\end{equation}
and
\begin{equation}
\label{ }
A_P(\omega)=\sum_{\omega=\epsilon_j-\epsilon_i}\left|i\right>\left<i\right|\sigma_x^P\left|j\right>\left<j\right|.
\end{equation}
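The selection rules encoded in $A_P(\omega)$ can be checked explicitly. The following minimal pure-Python sketch builds $\sigma_x^L=\sigma_x\otimes I$ and $\sigma_x^R=I\otimes\sigma_x$ in the basis $\{\left|1\right>,\dots,\left|4\right>\}$ defined above:

```python
def kron(A, B):
    # Kronecker product of two square matrices given as nested lists
    m = len(B)
    n = len(A) * m
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n)]
            for i in range(n)]

sx = [[0, 1], [1, 0]]          # Pauli x
id2 = [[1, 0], [0, 1]]
sx_L = kron(sx, id2)           # bath L couples through this: flips the left spin
sx_R = kron(id2, sx)           # bath R couples through this: flips the right spin

def allowed(S):
    # pairs (i, j), 1-indexed, with a nonzero matrix element <i|sigma_x^P|j>
    return {(i + 1, j + 1) for i in range(4) for j in range(i + 1, 4)
            if S[i][j] != 0}
```

With this state ordering, the nonzero matrix elements reproduce the four authorized transitions ($1\leftrightarrow3$, $2\leftrightarrow4$ for bath $L$ and $1\leftrightarrow2$, $3\leftrightarrow4$ for bath $R$) and forbid $1\leftrightarrow4$ and $2\leftrightarrow3$.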
We now consider a steady state situation. We define
\begin{equation}
\label{currentdef}
{\rm Tr}(H_S {\cal{L}}_P[\rho])=J_P,
\end{equation}
the heat current injected by the bath $P$ into the system. Averaging the master equation, we find $J_L+J_R=0$, in accordance with the energy conservation.
The master equation reduces to a system of four equations on the diagonal elements $\rho_{ii}$. We introduce the net decay rate from state $\left|i\right>$ to state $\left|j\right>$, due to the coupling with bath $P$, with the help of the Bose-Einstein distribution $n_\omega^P=(e^{\omega/T_P}-1)^{-1}$ (in $k_b=1$ units):
$\Gamma_{ij}^P=\omega_{ij}\left[\left(1+n_\omega^P\right)\rho_{ii}-n_\omega^P\rho_{jj}\right]=-\Gamma_{ji}^P$,
the master equation yields
\begin{eqnarray}
\dot{\rho}_{11} & = & 0 = \Gamma_{31}^L + \Gamma_{21}^R, \nonumber \\
\dot{\rho}_{22} & = & 0 = \Gamma_{42}^L + \Gamma_{12}^R,\nonumber \\
\dot{\rho}_{33} & = & 0 = \Gamma_{13}^L + \Gamma_{43}^R, \label{rhodiode}\\
\dot{\rho}_{44} & = & 0 = \Gamma_{24}^L + \Gamma_{34}^R, \nonumber
\end{eqnarray}
from which it can be deduced that
\begin{equation}
\label{ }
\Gamma_{31}^L=\Gamma_{24}^L=\Gamma_{12}^R=\Gamma_{43}^R=\Gamma.
\end{equation}
The definition (\ref{currentdef}) of the thermal currents then gives their final expression
\begin{equation}
\label{ }
J_L=-J_R=2\omega_{LR}\Gamma
\end{equation}
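These steady-state relations can be checked numerically. Below is a minimal pure-Python sketch (our own illustrative implementation, not the one used for the figures): it assumes standard detailed-balance rates with the Ohmic weight ${\cal{I}}(\omega)=\omega$, solves the four population equations together with ${\rm Tr}[\rho]=1$, and evaluates the bath currents for the parameters $\omega_L=1$, $\omega_R=0$, $\omega_{LR}=0.1$ of the example considered below.

```python
import math

def bose(w, T):
    return 1.0 / math.expm1(w / T)

def gauss_solve(A, b):
    # small dense linear solve with partial pivoting
    n = len(b)
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            A[r] = [A[r][k] - f * A[c][k] for k in range(n)]
            b[r] -= f * b[c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

def diode(TL, TR, wL=1.0, wR=0.0, wLR=0.1):
    # energies of |1>=uu, |2>=ud, |3>=du, |4>=dd
    E = [(wL + wR + wLR) / 2, (wL - wR - wLR) / 2,
         (-wL + wR - wLR) / 2, (-wL - wR + wLR) / 2]
    pairs = {'L': [(0, 2), (1, 3)], 'R': [(0, 1), (2, 3)]}  # spin flips per bath
    M = [[0.0] * 4 for _ in range(4)]
    rates = []
    for bath, T in (('L', TL), ('R', TR)):
        for i, j in pairs[bath]:
            up, lo = (i, j) if E[i] > E[j] else (j, i)
            w = E[up] - E[lo]
            n = bose(w, T)
            down, upw = w * (1 + n), w * n      # Ohmic weight I(w) = w
            M[lo][up] += down; M[up][up] -= down
            M[up][lo] += upw;  M[lo][lo] -= upw
            rates.append((bath, up, lo, w, down, upw))
    A = [row[:] for row in M]
    A[3] = [1.0] * 4                            # replace one equation by Tr(rho)=1
    rho = gauss_solve(A, [0.0, 0.0, 0.0, 1.0])
    J = {'L': 0.0, 'R': 0.0}
    for bath, up, lo, w, down, upw in rates:
        J[bath] += w * (upw * rho[lo] - down * rho[up])  # net energy absorbed
    return rho, J['L'], J['R']

rho, JL, JR = diode(TL=1.0, TR=0.1)     # forward bias: L hot
_, JL_rev, _ = diode(TL=0.1, TR=1.0)    # reverse bias: R hot
```

Energy conservation gives $J_L+J_R=0$ in the steady state, and reversing the temperatures leaves only a residual current, which is the rectification behavior discussed below.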
As an example, let us consider a system where $\omega_L=1$, $\omega_R=0$ and $\omega_{LR}=0.1$. The energy levels and the authorized transitions are depicted in Fig.~\ref{level_diode}.
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{Diode_levels.eps}
\caption{Energy levels and transitions in the case $\omega_L=1$, $\omega_R=0$ and $\omega_{LR}=0.1$. The arrow directions show the net balance of the authorized transitions between levels. Left: $T_L>T_R$. Right: $T_L<T_R$. }
\label{level_diode}
\end{center}
\end{figure}
When $T_L>T_R$, the left reservoir populates level $\left|1\right>$ from level $\left|3\right>$ through transition $1\leftrightarrow 3$. Level $\left|1\right>$ de-excites to level $\left|2\right>$ by transferring energy to reservoir $R$. Level $\left|2\right>$ de-excites to level $\left|4\right>$ by transferring energy to reservoir $L$, and finally level $\left|4\right>$ de-excites to level $\left|3\right>$ by transferring energy to reservoir $R$. If $T_L<T_R$, the energy transfers are reversed. Now imagine that $T_L$ is of the order of the transition energies, whereas $T_R$ is much lower. Then, energy easily flows from reservoir $L$ to reservoir $R$ according to the process described above. On the contrary, if $T_R$ is much lower than the transition energies and $T_L<T_R$, then the energy transfer is poor, since excitation by reservoir $R$ through transitions $4\leftrightarrow3$ and $2\leftrightarrow1$ is low. Hence, if we study the flux $J_L(T_L,T_R)$ with $T_R$ fixed at a value lower than the transition energies (for example $T_R=0.1$, Fig.~\ref{Flux1001}), we see that the flux is close to 0 when $T_L<T_R$.
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{Diode_1_0_01.eps}
\caption{$J_L(T_L,T_R)$ in the case $\omega_L=1$, $\omega_R=0$ and $\omega_{LR}=0.1$ with $T_R=0.1$}
\label{Flux1001}
\end{center}
\end{figure}
When $T_L$ is increased to values larger than $T_R$, the current increases until it saturates at high temperatures. The calculation of $\Gamma$, which gives the current, can be achieved by solving the system of equations on the populations (\ref{rhodiode}). Note that these equations are not all independent, since the fourth equation is the sum of the three others. One therefore has to use the fact that the trace of the density matrix is equal to 1 (Tr[$\rho$]=1). The exact expression of $\Gamma$ can be found in \cite{werlang_optimal_2014}. In the case studied here, this expression can be simplified and the current reads
\begin{equation}
J_L\approx \frac{\omega_L\omega_{LR}}{2}\frac{e^{-\omega_{LR}/T_L}}{\cosh(\omega_L/T_L)}
\end{equation}
where the transition from low current values, at low $T_L$, to high current values, at higher $T_L$, can be seen.
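As a quick numerical check (not part of the original derivation), the simplified expression above can be evaluated directly; the minimal sketch below assumes the example parameters $\omega_L=1$, $\omega_{LR}=0.1$ and units where $\hbar=k_B=1$:

```python
import math

def J_L_approx(T_L, w_L=1.0, w_LR=0.1):
    """Simplified diode current (w_L*w_LR/2) * exp(-w_LR/T_L) / cosh(w_L/T_L),
    valid when T_R is fixed well below the transition energies."""
    return 0.5 * w_L * w_LR * math.exp(-w_LR / T_L) / math.cosh(w_L / T_L)

# The current is essentially switched off at low T_L and saturates at high T_L,
# approaching the limiting value w_L*w_LR/2 = 0.05 as T_L grows.
J_low = J_L_approx(0.05)   # blocked regime
J_high = J_L_approx(10.0)  # near-saturated regime
```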
Let us note that the system proposed here constitutes a passive thermal switch at low temperature. As long as $T_L$ is larger than $T_R$, the current in the structure is large and the thermal contact between the reservoirs $L$ and $R$ is good. However, when the temperature $T_L$ drops below $T_R$, the thermal current is drastically reduced, so that it can be considered switched off. This system could therefore be used to isolate objects from a cold environment while keeping them thermally linked to a hot environment. In the case of an environment whose temperature oscillates between high and low values, this simple quantum system acts as a passive heater and a thermal rectifier, i.e., the heat flow through it depends on the direction of the imposed temperature difference.
There is actually another way to quantify the rectification of a system: the ratio between the absolute value of the sum of the fluxes through the system when the temperatures are reversed and the maximum of these two fluxes,
\begin{equation}
\label{Rectification}
R(T_L,T_R)=\frac{|J_L(T_L,T_R)+J_L(T_R,T_L)|}{Max(|J_L(T_L,T_R)|,|J_L(T_R,T_L)|)}
\end{equation}
The rectification ratio $R(T_L,T_R)$ variations with $T_L$ for different $T_R$ are represented in Fig. \ref{rectif}.
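The ratio of eq. (\ref{Rectification}) is simple to compute once the forward and reversed fluxes are known; the sketch below uses illustrative flux values (not those of the figure):

```python
def rectification(J_fwd, J_rev):
    """R = |J_fwd + J_rev| / max(|J_fwd|, |J_rev|), where J_fwd = J_L(T_L,T_R)
    and J_rev = J_L(T_R,T_L) is the current with the temperatures reversed."""
    return abs(J_fwd + J_rev) / max(abs(J_fwd), abs(J_rev))

# A symmetric conductor transports heat equally well both ways: R = 0.
R_symmetric = rectification(1.0, -1.0)
# A good diode: sizable forward flux, negligible reverse flux: R -> 1.
R_diode = rectification(1e-3, -1e-8)
```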
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{rectif_comp.eps}
\caption{Rectification ratio $R(T_L,T_R)$ variations with $T_L$ for different values of $T_R$. (a) $T_R=0.1$. (b) $T_R=1$. (c) $T_R=10$. (d) $T_R=100$. }
\label{rectif}
\end{center}
\end{figure}
When $T_R$ is small enough ($T_R<1$), rectification is strong except for values of $T_L$ very close to $T_R$. When $T_R$ is larger, rectification is smaller, even for $T_L$ values that differ greatly from $T_R$. We note in particular that rectification is low at high $T_R$. In this latter case, there is no rectification, because heat transfer can occur with both reservoirs through the energy transitions presented above. However, when $T_R$ is fixed and $T_L$ goes to 0, $J_L(T_R,T_L)$ tends to 0 and the rectification rises to 1. This kind of device can thus be seen as a thermal diode, since the heat current through the system is nonzero for one sign of the temperature difference and essentially zero for the opposite one.
This type of system paves the way to more complicated ones. For example, it is well known that bipolar electronic transistors can be made up of NPN or PNP junctions, whereas a single PN junction constitutes a diode. One can therefore wonder whether it is also possible to conceive a transistor from the elementary quantum system that constitutes the thermal diode we have just studied. This is the subject of the next section.
\subsection{Thermal transistor}
The system studied in this part consists of three TLS coupled to each other, each of them being connected to a thermal bath (Fig. \ref{system}). This system is therefore similar to the previous one, with one additional TLS and reservoir.
\begin{figure}
\begin{center}
\includegraphics[width=7cm]{system.eps}
\caption{Quantum system made up of 3 TLS coupled to each other, each connected to a thermal bath.}
\label{system}
\end{center}
\end{figure}
The three TLS are now labeled with the letters $L$ (left), $M$ (medium), and $R$ (right), as are the temperatures of the thermal baths involved. As in the previous part, we use the strong-coupling formalism developed by Werlang {\it et al.} \cite{werlang_optimal_2014}.
As before, each TLS can be in the up state $\uparrow$ or in the down one $\downarrow$. The Hamiltonian of the system is (in $\hbar=1$ units)
\begin{equation}
H_S=\sum_{P=L,M,R}\frac{\omega_P}{2}\sigma_z^P+\sum_{P,Q=L,M,R \ P\neq Q}\frac{\omega_{PQ}}{2}\sigma_z^P\sigma_z^{Q}
\end{equation}
$\omega_P$ denotes the energy difference between the two spin states, whereas $\omega_{PQ}$ stands for the interaction between the spin $P$ and the spin $Q$. Following the preceding part on the quantum thermal diode, we have eight eigenstates labeled as $\left|1\right>=\left|\uparrow\uparrow\uparrow\right>$, $\left|2\right>=\left|\uparrow\uparrow\downarrow\right>$, $\left|3\right>=\left|\uparrow\downarrow\uparrow\right>$, $\left|4\right>=\left|\uparrow\downarrow\downarrow\right>$, $\left|5\right>=\left|\downarrow\uparrow\uparrow\right>$, $\left|6\right>=\left|\downarrow\uparrow\downarrow\right>$, $\left|7\right>=\left|\downarrow\downarrow\uparrow\right>$ and $\left|8\right>=\left|\downarrow\downarrow\downarrow\right>$. There are now 12 authorized transitions. The left bath ($L$) induces the transitions $1\leftrightarrow5$, $2\leftrightarrow 6$, $3\leftrightarrow7$, and $4\leftrightarrow8$, the middle one ($M$) drives the transitions $1\leftrightarrow3$, $2\leftrightarrow 4$, $5\leftrightarrow7$, and $6\leftrightarrow8$. The right bath ($R$) triggers the transitions $1\leftrightarrow2$, $3\leftrightarrow 4$, $5\leftrightarrow6$, and $7\leftrightarrow8$. All other transitions flipping more than one spin are forbidden.
The master equation fulfilling the density matrix, in the Born-Markov approximation, reads
\begin{equation}
\label{master}
\frac{d\rho}{dt}=-i[H_s,\rho]+{\cal{L}}_L[\rho]+{\cal{L}}_M[\rho]+{\cal{L}}_R[\rho].
\end{equation}
We now turn to the steady-state situation. Averaging the master equation, we find $J_L+J_M+J_R=0$, in accordance with energy conservation.
The master equation reduces to a system of eight equations on the diagonal elements $\rho_{ii}$. Introducing $\Gamma_{ij}^P$, the net decay rate from state $\left|i\right>$ to state $\left|j\right>$ due to the coupling with bath $P$,
the master equation becomes
\begin{eqnarray}
\dot{\rho}_{11} & = & 0 = \Gamma_{51}^L + \Gamma_{31}^M + \Gamma_{21}^R, \nonumber \\
\dot{\rho}_{22} & = & 0 = \Gamma_{62}^L + \Gamma_{42}^M + \Gamma_{12}^R,\nonumber \\
\dot{\rho}_{33} & = & 0 = \Gamma_{73}^L + \Gamma_{13}^M + \Gamma_{43}^R, \nonumber \\
\dot{\rho}_{44} & = & 0 = \Gamma_{84}^L + \Gamma_{24}^M + \Gamma_{34}^R, \nonumber \\
\dot{\rho}_{55} & = & 0 = \Gamma_{15}^L + \Gamma_{75}^M + \Gamma_{65}^R, \\
\dot{\rho}_{66} & = & 0 = \Gamma_{26}^L + \Gamma_{86}^M + \Gamma_{56}^R, \nonumber \\
\dot{\rho}_{77} & = & 0 = \Gamma_{37}^L + \Gamma_{57}^M + \Gamma_{87}^R, \nonumber \\
\dot{\rho}_{88} & = & 0 = \Gamma_{48}^L + \Gamma_{68}^M + \Gamma_{78}^R. \nonumber
\end{eqnarray}
These eight equations are not independent since their sum is 0. In order to solve the system for the $\rho_{ii}$, one adds the condition $\mathrm{Tr}[\rho]=1$; solving the resulting system provides all state occupation probabilities as well as the currents $J_P$.
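This resolution procedure (drop one redundant rate equation, enforce $\mathrm{Tr}[\rho]=1$, solve the linear system) can be sketched generically; the $3\times3$ rate matrix below is made up for illustration and is not the actual set of $\Gamma^P_{ij}$:

```python
import numpy as np

def steady_state(M):
    """Solve M @ p = 0 subject to sum(p) = 1, for a rate matrix M whose
    columns sum to zero (probability conservation makes one row redundant)."""
    A = M.copy().astype(float)
    A[-1, :] = 1.0            # replace the redundant equation by Tr[rho] = 1
    b = np.zeros(M.shape[0])
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Toy 3-state example: off-diagonal entries are transition rates,
# diagonal entries make each column sum to zero.
M = np.array([[-1.0,  0.5,  0.2],
              [ 0.7, -0.9,  0.3],
              [ 0.3,  0.4, -0.5]])
p = steady_state(M)
```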
We are now going to show that such a system can act as a thermal transistor, analogous to an electronic one. Let us recall that in an electronic bipolar transistor, such as a PNP or an NPN transistor, a current imposed at the base can modulate, switch or amplify the collector and emitter currents. Switching, modulation, and amplification are therefore the characteristics that must be present in order to have a transistor. We are going to show here that it is possible to control $J_L$ or $J_R$ by slightly changing $J_M$. The situation is the following: the left and right TLS are both connected to thermal baths, whose respective temperatures $T_L$ and $T_R$ are fixed. The third bath at temperature $T_M$ controls the fluxes $J_L$ and $J_R$ with the help of the current $J_M$ injected into the system. The dynamical amplification factor $\alpha$, defined as
\begin{equation}
\alpha_{L,R}=\frac{\partial J_{L,R}}{\partial J_M},
\end{equation}
measures the transistor's ability to amplify a small heat-flux variation at the base ($M$).
If a small change in $J_M$ induces a large one in $J_L$ or $J_R$, i.e. $|\alpha_{L,R}|>1$, then the thermal transistor effect exists.
The system presented here involves many parameters: the frequencies $\omega_P$ and $\omega_{PQ}$, and the temperatures $T_L$ and $T_R$. The last temperature, $T_M$, taken here between $T_L$ and $T_R$, controls the transistor properties and is related to the current $J_M$. The number of parameters can be reduced by choosing a situation that does not change the physics of the system but allows a good understanding of the physical phenomena involved. We therefore restrict our analysis to a case for which $\omega_{LM}=\omega_{MR}=\Delta$, whereas $\omega_{RL}$ and the three TLS energies are equal to 0. As shown below, this configuration provides a good transistor effect that is easy to handle with simple calculations. The transistor effect disappears when the three couplings are equal (symmetric configuration), but it still occurs and can even be optimized if the three TLS energies are nonzero \cite{Joulain16}. The operating temperature $T_L$ is taken so that $e^{-\Delta/T_L}\ll1$ ($T_L/\Delta\lesssim 0.25$), whereas $e^{-\Delta/T_R}\ll e^{-\Delta/T_L}$ ($T_R/\Delta\lesssim 0.0625$).
Under these conditions, the system states are degenerate in pairs. There are now only 4 states and 3 energy levels (see Fig. \ref{Energ_Levels}). The states $\left|1\right>$ and $\left|8\right>$ become state $\left|I\right>$, $\left|2\right>$ and $\left|7\right>$ state $\left|II\right>$, $\left|3\right>$ and $\left|6\right>$ state $\left|III\right>$, and $\left|4\right>$ and $\left|5\right>$ state $\left|IV\right>$. One can define the new density matrix elements $\rho_I=\rho_{11}+\rho_{88}$, $\rho_{II}=\rho_{22}+\rho_{77}$, $\rho_{III}=\rho_{33}+\rho_{66}$, and $\rho_{IV}=\rho_{44}+\rho_{55}$. Using the net decay rates between the states, the three currents read
\begin{eqnarray}
J_L & = & -\Delta\left[\Gamma^L_{I-IV}+\Gamma^L_{II-III}\right] \nonumber\\
J_M & = & -2\Delta\Gamma^M_{I-III}\\
J_R & = & -\Delta\left[\Gamma^R_{I-II}+\Gamma^R_{IV-III}\right] \nonumber
\end{eqnarray}
\begin{figure}
\begin{center}
\includegraphics[width=7cm]{Energy_Level.eps}
\caption{Energy levels for $\omega_L=\omega_M=\omega_R=0$, $\omega_{RL}=0$, and $\omega_{LM}=\omega_{MR}=\Delta$. There are four states ($\left|I\right>$, $\left|II\right>$, $\left|III\right>$, and $\left|IV\right>$) but three energy levels, since $E_{II}=E_{IV}=0$. The arrows indicate the net decay rates between the states due to bath $L$ (red), bath $M$ (green), and bath $R$ (blue) for $T_L=0.1\Delta$, $T_R=0.01\Delta$, and $T_M=0.05\Delta$.}
\label{Energ_Levels}
\end{center}
\end{figure}
Transitions between the different states are illustrated in Fig. \ref{Energ_Levels} for $T_L/\Delta=0.1$, $T_R/\Delta=0.01$, and $T_M/\Delta=0.05$. The arrow directions show the transition directions, whereas their widths are related to the decay times. We see that energy exchanges are mainly dominated by the $III-II$ and $IV-III$ transitions. One therefore expects $J_R$ and $J_L$ to be larger than $J_M$. This is illustrated in Fig. \ref{three_currents}, where $J_L$, $J_M$, and $J_R$ are represented versus $T_M$, for $T_L/\Delta=0.1$ and $T_R/\Delta=0.01$.
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{threecurrents_100000.eps}
\includegraphics[width=8cm]{gatecurrent_10000000.eps}
\caption{Upper: thermal currents $J_L$, $J_M$, and $J_R$ versus $T_M$ for $\omega_L=\omega_M=\omega_R=0$, $\omega_{RL}=0$, $\omega_{LM}=\omega_{MR}=\Delta$, $T_L=0.1\Delta$, and $T_R=0.01\Delta$. Lower: thermal current $J_M$ versus $T_M$.}
\label{three_currents}
\end{center}
\end{figure}
$J_L$ and $J_R$ increase linearly with $T_M$ at low temperature, and behave sublinearly as $T_M$ approaches $T_L$. Note that over the whole range, $J_M$ remains lower than $J_L$ and $J_R$, as expected. Thus, $T_M$ will be controlled by slightly changing the current $J_M$: a tiny change of $J_M$ can modify $J_L$ and $J_R$ in a much larger proportion. Moreover, $J_L$ and $J_R$ are switched off when $J_M$ approaches 0 at small temperatures $T_M$: the three-TLS system exhibits the transistor switching property. One also remarks that the $J_M$ slope is smaller than those of $J_L$ and $J_R$ over a large part of the temperature range. Given the definition of the amplification factor $\alpha$, the slopes of the thermal currents are essential to figure out amplification.
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{gains_Bad.eps}
\caption{Amplification factors $\alpha_L$ (red) and $\alpha_R$ (dashed blue) versus $T_M$ for $\omega_L=\omega_M=\omega_R=0$, $\omega_{RL}=0$, $\omega_{LM}=\omega_{MR}=\Delta$, $T_L=0.1\Delta$ and $T_R=0.01\Delta$.}
\label{alpha}
\end{center}
\end{figure}
In Fig. \ref{alpha}, we plot the two amplification coefficients $\alpha_L$ and $\alpha_R$ versus temperature $T_M$. We see that at low $T_M$, $|\alpha|$ remains much larger than 1 (around $2.2\times10^4$). One also notes that $\alpha$ diverges at the temperature for which $J_M$ has a minimum. This occurs for $T_M\simeq 0.07444\Delta$. In these conditions, an infinitely small change in $J_M$ produces a finite change in $J_L$ and $J_R$. As $T_M$ approaches $T_L$, the amplification factor drastically decreases to values below 1, i.e., a regime where one cannot speak anymore of a transistor effect. Note also that, in between, there exists a temperature for which $J_M=0$. This is the temperature at which bath $M$ is at thermal equilibrium with the system, since it does not inject any thermal current into it. At this temperature ($T_M\simeq 0.08581\Delta$), $J_L=-J_R=3.325\times10^{-6}$. Amplification still occurs, since $\alpha_L=831$ and $\alpha_R=-832$.
All these observations can be explained by examining carefully the population and current expressions. In the present case, working to first order in $e^{-\Delta/T_L}$ and $e^{-\Delta/T_M}$, one can roughly estimate the populations by
\begin{eqnarray}
\rho_I & \simeq & \frac{e^{-2\Delta/T_M}}{2}+\frac{T_M}{4\Delta+8T_M}e^{-2\Delta/T_L} \label{rho1},\\
\rho_{II} & \simeq & \frac{\Delta+T_M}{\Delta+2T_M}e^{-\Delta/T_L}\label{rho2},\\
\rho_{III} & \simeq & 1-e^{-\Delta/T_L}\label{rho3},\\
\rho_{IV} & \simeq & \frac{T_M}{\Delta+2T_M}e^{-\Delta/T_L}\label{rho4}.
\end{eqnarray}
$\rho_{III}$ remains very close to 1 and $\rho_{II}$ close to $10^{-2}$. $\rho_{I}$ and $\rho_{IV}$ are much lower, but change by 1 to 2 orders of magnitude with temperature.
We now explicitly present the three thermal current expressions and their dependence on temperature, which is the core of our study.
\begin{eqnarray}
J_L & \simeq & -J_R \simeq \frac{\Delta^2 T_M e^{-\Delta/T_L}}{\Delta+2T_M}, \\
J_M & \simeq &\Delta^2\left[-\frac{T_M}{\Delta+2T_M}e^{-2\Delta/T_L}+2 e^{-2\Delta/T_M}\right].
\end{eqnarray}
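A quick numerical check of these approximate expressions (with the figure parameters $T_L=0.1\Delta$, and the hypothetical values $\Delta=1$, $T_M=0.05$, in units where $k_B=1$) confirms both the hierarchy $|J_M|\ll|J_L|$ and the low-temperature amplification $|\alpha|\approx e^{\Delta/T_L}$ quoted in the text:

```python
import math

D, T_L = 1.0, 0.1  # energies and temperatures in units of Delta

def J_L(T_M):
    """Approximate left current, eq. above."""
    return D**2 * T_M * math.exp(-D / T_L) / (D + 2 * T_M)

def J_M(T_M):
    """Approximate base current, eq. above."""
    return D**2 * (-T_M / (D + 2 * T_M) * math.exp(-2 * D / T_L)
                   + 2 * math.exp(-2 * D / T_M))

def alpha_L(T_M, h=1e-6):
    """Finite-difference estimate of alpha_L = dJ_L/dJ_M at fixed T_L."""
    return (J_L(T_M + h) - J_L(T_M - h)) / (J_M(T_M + h) - J_M(T_M - h))

jl, jm = J_L(0.05), J_M(0.05)  # |J_M| is orders of magnitude below |J_L|
a = alpha_L(0.04)              # |a| ~ exp(D/T_L) ~ 2.2e4 at low T_M
```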
These formulas are in accordance with the linear dependence of the thermal currents at small values of $T_M$. Note also that $J_L$ and $J_R$ seem to be driven by $\rho_{IV}$, the population of the state at the intermediate energy ($E_{IV}=0$), as can be seen by comparing their full expressions with (\ref{rho4}). Examining the authorized transitions, one expects $J_M$ to be driven by the population of the most energetic state, i.e., $\rho_I$. The main difference between $\rho_{IV}$ and $\rho_I$ is their temperature dependence, which is linear in one case and exponential ($e^{-2\Delta/T}$) in the other. The result is that even when $T_M$ is close to $T_L$, $\rho_I$ remains low. Therefore, $J_M$ keeps low values over the whole temperature range due to the low values of $\rho_I$.
If we look more carefully at $J_M$, one notices that it is the sum of two terms. The first one is roughly linear in $T_M$; it is similar to the one that appears in $\rho_{IV}$. $J_M$ depends on the population of state $IV$, which also influences the population of state $I$ through the transition $IV-I$. The increase of $\rho_{IV}$ with $T_M$ facilitates the $IV-I$ transition and raises $\rho_I$, which increases the decay of state $I$ through the $I-III$ transition. This term is negative and decreases as $T_M$ increases. This can be seen as a negative differential resistance, since a decrease of $J_M$ (cooling of $M$) corresponds to an increase of the temperature $T_M$. In this temperature range, one can easily show that the amplification factor is $|\alpha_L|\approx|\alpha_R|\approx e^{\Delta/T_L}$ ($e^{10}\simeq 22026$).
The second term in $J_M$ is the classical Boltzmann factor $e^{-2\Delta/T_M}$, which makes the population of state $I$ increase with $T_M$. $J_M$ is a tradeoff between these two terms. At low temperature, the linear term is predominant. As $T_M$ increases, the term $e^{-2\Delta/T_M}$ takes over. As a consequence, there is a point where the increase of $\rho_I$ reverses the $I-IV$ transition, so that the $I-III$ transition competes with both the $I-IV$ and $I-II$ transitions; $I-III$ is then reversed. With these two terms competing, there is a temperature for which $J_M$ reaches a minimum and a second temperature where $J_M=0$, as already described.
One can wonder what conditions give the best transistor in the regime studied here. Two criteria quantify a good transistor: one is the amplification factor, and the other is the intensity of the heat currents at the emitter and the collector ($J_L$ and $J_R$). Note that the amplification factor scales as $e^{\Delta/T_L}$, whereas the currents scale as $e^{-\Delta/T_L}$. Let us also recall that we have assumed up to now that $e^{-\Delta/T_L}\ll 1$. Therefore, the best choice for a transistor with a sufficiently large collector or emitter current is to take the lowest $\Delta/T_L$ compatible with the condition $e^{-\Delta/T_L}\ll 1$ (here $\Delta/T_L\approx 5$).
One can summarize the conditions needed for the system to exhibit a thermal transistor effect. Two baths (here $L$ and $R$) induce transitions between two highly separated states via an intermediate energy level, whereas the third one ($M$) only drives a transition between the two extremes. This first makes $J_M$ much smaller than $J_L$ and $J_R$, and second, it sets up a competition between a direct decay of the highest level to the ground level and a decay via the intermediate one. This competition makes the dependence of $J_M$ on $T_M$ slow enough to obtain a high amplification.
\section{Conclusion}
We have shown that coupled TLS linked to thermal reservoirs can form systems exhibiting thermal rectification. In the case of 2 TLS, a thermal diode can be made in which one of the ends is set at a temperature of the order of the system transition energy. When the other end of the diode is set at a lower temperature, the system is blocked, whereas it is open when the temperature is higher. This kind of device can isolate a system from cold sources. In the case of a 3 TLS system, we have shown that it is possible to make a thermal transistor. We found a temperature regime where a thermal current variation imposed at the base generates an amplified variation at the emitter and the collector. This regime is typically such that the temperature corresponds to an energy one order of magnitude smaller than the coupling energy between the TLS. With this kind of thermal transistor, one can expect to modulate or amplify thermal fluxes in nanostructures made up of elementary quantum objects.
\begin{acknowledgments}
This work pertains to the French Government Program ``Investissement d'avenir'' (LABEX INTERACTIFS, ANR-11-LABX-0017-01).
\end{acknowledgments}
\section{Introduction}
Galaxy redshift surveys have been established as a pillar of observational cosmology over the past several decades. The large-scale structures, traced by galaxies, reveal the imprint of the baryon acoustic oscillation (BAO), a feature that can be used to measure the expansion history of the Universe. The redshift space distortions (RSD) caused by peculiar velocity of galaxies enable measurements of the growth of large-scale structure and tests of General Relativity.
Luminous red galaxies (LRGs) are an important type of galaxy in large-area redshift surveys, and are selected because of two advantages: 1) they are bright galaxies with the prominent $4000\,\text{\AA}$ break in their spectra, thus allowing for relatively easy target selection and redshift measurements; and 2) they are highly biased tracers of the large-scale structure, thus yielding higher S/N per object for the BAO measurement compared to typical galaxies. In addition to the cosmological constraints from BAO and RSD, there will be significant gains in constraining power when the LRG sample is combined with other observations, e.g., using the LRGs (and their massive dark matter halos) as gravitational lenses of background galaxies and the Cosmic Microwave Background (e.g., \citealt{mandelbaum_cosmological_2013,singh_cosmological_2020,white_cosmological_2022}).
The Dark Energy Spectroscopic Instrument (DESI; \citealt{desi_collaboration_desi_2016, desicollaboration_desi_2016, desi_instrument_overview}) is undertaking the largest galaxy redshift survey to date, and LRGs will be the primary galaxy targets that DESI will observe in the redshift range of $0.4<z<{\sim}\,1.0$. Compared to LRG samples from previous surveys, such as the SDSS LRG survey \citep{eisenstein_spectroscopic_2001,eisenstein_detection_2005}, BOSS \citep{reid_sdssiii_2016,alam_clustering_2017} and eBOSS \citep{prakash_sdssiv_2016,ebosscollaboration_completed_2021}, the DESI LRG sample has a significantly higher target density and extends to higher redshifts (see Figure \ref{fig:main_dndz}). This is made possible by DESI's higher fiber multiplexing, larger telescope aperture and better spectroscopic performance, and the availability of deeper (and highly uniform) imaging data necessary for target selection.
In this paper, we describe the selection of the DESI LRG targets, and assess the selection uniformity and spectroscopic performance. Significant efforts were made to minimize the impact of imaging systematics (i.e., modulation of the target density caused by image quality variations on the sky). These include improvements in the image reduction pipeline, which are motivated by the need for uniform target selections and are discussed in \citet{schlegel_dr9}, as well as careful choices in target selection. We describe these choices for the LRG sample in this paper.
The structure of the paper is as follows. We describe the imaging data, selection cuts, stellar mass completeness, and veto masks in \S\ref{sec:target_selection}. We assess potential imaging systematics in \S\ref{sec:ts_systematics}. In \S\ref{sec:spectro_assessment}, we evaluate the spectroscopic redshift efficiency and model its dependence on source brightness and exposure time. We conclude in \S\ref{sec:summary}. In the appendices, we describe the selections and redshift performance of the extended LRG samples observed before the start of the Main Survey, specifically in Survey Validation (``SV1'') and the 1\% Survey (``SV3''); these observations were done separately and are not included in the Main LRG sample (which we simply refer to as DESI LRGs).
This paper is part of a series of papers presenting the DESI target selections and their characterization. \citet{dawson_sv_overview} present an overview of the DESI spectroscopic observations and the tracers used by those papers, and \citet{myers_target_pipeline} present how those target selections are implemented in DESI. The Milky Way Survey sample is presented in \citet{cooper_mws}, the Bright Galaxy Survey sample is presented in \citet{hahn_desi_BGS_selection}, the emission line galaxy sample is presented in \citet{raichoor_desi_elg_selection}, and the quasar sample is presented in \citet{chaussidon_desi_qso_selection}. \citet{lan_desi_galaxy_vi} and \citet{alexander_desi_qso_vi} present the building of spectroscopic truth tables based on visual inspection for the galaxies (BGS, LRG, ELG) and the QSO targets, respectively.
The LRG target catalogs are publicly available\footnote{\url{https://data.desi.lbl.gov/public/ets/target/catalogs/dr9/1.1.1/targets/main/resolve/dark/}} (see \citealt{myers_target_pipeline} for details).
\begin{figure*}
\centering
\includegraphics[width=1.4\columnwidth]{figures/main_dndz_1.pdf}
\caption{The redshift distribution of the DESI LRG sample and comparing it with LRG samples from earlier surveys. The y-axis is the number of objects in each redshift bin (of width $\Delta z=0.05$) per deg$^2$. The survey area and the total number of LRGs that have or will be observed in each survey are listed in the legend. The dashed curve corresponds to the redshift distribution of a hypothetical sample with constant comoving density of $5\times10^{-4}\ h^3\mathrm{Mpc}^{-3}$, which is the approximate DESI LRG density in the redshift range of $0.4<z<0.8$; the area under the curve is proportional to the enclosed comoving volume.}
\label{fig:main_dndz}
\end{figure*}
\section{Target selection}
\label{sec:target_selection}
\subsection{Imaging data}
The LRG targets are selected using the DESI Legacy Imaging Surveys Data Release 9 (\citealt{schlegel_dr9,dey_overview_2019}, hereafter LS DR9), specifically $g$, $r$, $z$ bands in the optical ($4000$--$10000\text{\AA}$, without $i$ band at $7800\text{\AA}$) and forced photometry of {\it WISE} \citep{wright_widefield_2010} $W1$ band in the infrared ($3.4\,\micron$). The imaging footprint is shown in Figure \ref{fig:density_map}. Table \ref{tab:lrg_summary} lists the areas and other summary information.
\begin{table}
\centering
\caption{Useful information about DESI and the LRG targets.}
\label{tab:lrg_summary}
\begin{tabular}{ll}
\hline
\hline
Area in imaging footprint & 19700 deg$^2$ \\
Area in DESI & 14800 deg$^2$ \\
Fraction of area in target mask & 1.0\% \\
Fraction of area in LRG veto mask & 8.5\% \\
Target density & 605 deg$^{-2}$ \\
Spectroscopically confirmed star fraction & 0.5\% \\
Spectroscopically confirmed quasar fraction & 1.6\% \\
Fraction rejected by redshift quality cut & 1.1\% \\
Fraction of catastrophic redshift failures & 0.2\% \\
\hline
\end{tabular}
\tablecomments{The areas include all regions with optical ($grz$) coverage without any masking. The area in DESI is approximate. The LRG veto mask includes the target mask. The target density is the average density over the DESI footprint. The target density and all the spectroscopy/redshift-related values are calculated after the LRG veto mask is applied. The catastrophic redshift failure rate is after applying the redshift quality cut.}
\end{table}
The optical $grz$ imaging consists of two regions separated at DEC$={\sim}\,30\degr$, with each region observed by different telescopes (with similar $grz$ filter sets). The southern part of the imaging footprint (hereafter ``the South'') is observed by DECam on the 4m Blanco Telescope at the Cerro Tololo Inter-American Observatory (CTIO). Most of the observations were carried out by the DECam Legacy Survey (DECaLS, \citealt{dey_overview_2019}), and data from other observations, including the Dark Energy Survey (DES, \citealt{thedarkenergysurveycollaboration_dark_2005}), are also used. The northern part of the imaging footprint (hereafter ``the North''), which is inaccessible from CTIO, is observed by two telescopes at Kitt Peak National Observatory in two surveys: the Beijing–Arizona Sky Survey (BASS, \citealt{zou_project_2017}) observed in $g$ and $r$ bands using the 90Prime Camera on the 2.3m Bok Telescope, and the Mayall $z$-band Legacy Survey (MzLS) observed in $z$ band using the Mosaic-3 Camera on the 4m Mayall Telescope. The Mayall Telescope has since been repurposed for DESI.
For the same photometric band, small differences in the filter sets, detectors and observing conditions between the North and the South cause their photometry to differ slightly, and we find it necessary to implement slightly different color cuts for each region to achieve uniform selection across the footprint. The specific color cuts are described in the next subsection.
\begin{figure*}
\centering
\includegraphics[width=2.05\columnwidth]{figures/density_lrg_256_lrgmask_1}
\caption{The footprint of the DESI Legacy Imaging Surveys DR9, with the colors representing the surface density (in deg$^{-2}$) of the LRG targets (after applying the LRG veto masks). DESI will only observe regions with DEC$>-20\degr$, and the DESI footprint also avoids regions close to the edge of the imaging footprint. See the actual DESI footprint in \citet{schlafly_survey_ops}. This density map is computed with a healpix resolution of NSIDE=256, and we only plot pixels that are $>20\%$ occupied by the imaging survey footprint. The curve that separates the two regions is the Galactic plane.}
\label{fig:density_map}
\end{figure*}
\subsection{Selection cuts}
The LRG targets are selected using optical photometry in $grz$ bands and near-infrared photometry in {\it WISE} $W1$. The LRG selection cuts for the South, shown in Figure \ref{fig:main_lrg_selection}, are
\begin{subequations}
\label{eq:lrg_selection_south}
\begin{align}
& z_\mathrm{fiber} < 21.60 \label{eq:mag-limit}\\
& z-W1 > 0.8\times(r-z) - 0.6 \label{eq:non-stellar}\\
& (g-W1>2.9) \ \mathrm{OR}\ (r-W1>1.8) \label{eq:low-z}\\
\begin{split}
& ((r-W1 > 1.8\times(W1-17.14)) \ \mathrm{AND} \\
& (r-W1 > W1-16.33)) \ \mathrm{OR}\ (r-W1>3.3) \label{eq:sliding-cut}
\end{split}
\end{align}
\end{subequations}
\begin{figure*}
\centering
\includegraphics[width=1.55\columnwidth]{figures/main_lrg_selection_desi_z_crop_1.png}
\caption{Selection cuts for the LRG targets in the South footprint.
The points are color-coded by their redshifts from DESI. The upper left panel shows the stellar rejection cut, with gray points representing stars (which are plotted to show the stellar locus and are not LRG targets). The upper right panel shows the cut that removes lower-redshift and bluer galaxies. The lower left panel shows the sliding color-magnitude cut that serves as the luminosity cut and also shapes the redshift distribution; the ``knee'' at $W1={\sim}\,19$ introduces more galaxies at higher redshift. The lower right panel shows the magnitude limit in $z$-band fiber magnitude that ensures enough S/N for DESI observations.}
\label{fig:main_lrg_selection}
\end{figure*}
where $g$, $r$, $z$, and $W1$ are magnitudes and $z_\mathrm{fiber}$ is the magnitude corresponding to the expected $z$-band flux within a DESI fiber\footnote{It uses the FIBERFLUX\_Z value in the imaging data; see \url{https://www.legacysurvey.org/dr9/files/\#sweep-catalogs-region-sweep}}. Throughout this paper, all the magnitudes are in the AB system and are corrected for Galactic extinction, unless otherwise specified.
Eqn. \ref{eq:non-stellar} utilizes the $1.6$\,\micron\ (rest-frame) ``bump'' \citep{John88,Sawicki02} to efficiently remove stars from the sample (as shown in the upper left panel of Figure \ref{fig:main_lrg_selection}), similar to the stellar-rejection cut in \citet{prakash_luminous_2015}, resulting in a low stellar contamination rate of ${\sim}\,0.5\%$ (see \S\ref{sec:spectro_classification}). Eqn. \ref{eq:low-z} removes galaxies at lower redshifts while retaining high completeness of massive galaxies at $z>0.4$.
The sliding color-magnitude cuts in eqn. \ref{eq:sliding-cut} function as luminosity cuts: as shown in the lower left panel of Figure \ref{fig:main_lrg_selection}, the $r-W1$ color is a good proxy for redshift, and the $W1$ magnitude limit that shifts with $r-W1$ effectively selects the most luminous (in the observed $W1$ band) galaxies at any redshift. For objects with $r-W1>{\sim}\,3.3$, which are the faintest LRGs in our sample and are near the faint limit (eqn. \ref{eq:mag-limit}), the $r-W1$ vs $W1$ sliding cut is dropped in order to boost the number density of the highest-redshift LRGs. The cuts are tuned so that the comoving number density of the selected sample is close to constant at $0.4<z<0.8$. Finally, eqn. \ref{eq:mag-limit} sets the faint limit for the sample to ensure a high redshift success rate for DESI spectroscopic observation (see \S\ref{sec:z_quality_cut}-\ref{sec:depth_and_mag_dependence}). We choose the fiber magnitude over the total magnitude as the faint limit because the former is much more strongly correlated with the spectroscopic S/N.
The photometry in the North is slightly different from the South, and the selection cuts are tuned to match the number density and the redshift distribution in the South. The cuts for the North are
\begin{subequations}
\label{eq:lrg_selection_north}
\begin{align}
& z_\mathrm{fiber} < 21.61 \label{eq:mag-limit-north}\\
& z-W1 > 0.8\times(r-z) - 0.6 \\
& (g-W1>2.97) \ \mathrm{OR}\ (r-W1>1.8) \label{eq:low-z-north}\\
\begin{split}
& ((r-W1 > 1.83\times(W1-17.13)) \ \mathrm{AND} \\
& (r-W1 > W1-16.31))\ \mathrm{OR}\ (r-W1>3.4)
\end{split}
\end{align}
\end{subequations}
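For concreteness, the North cuts above can be combined into a single boolean test. The following Python sketch is illustrative only (the function name and argument conventions are ours); all magnitudes are extinction-corrected AB magnitudes as in the text:

```python
def is_lrg_north(z_fiber, g, r, z, W1):
    """Illustrative scalar implementation of the North LRG selection cuts."""
    mag_limit = z_fiber < 21.61                          # faint limit
    non_stellar = (z - W1) > 0.8 * (r - z) - 0.6         # stellar rejection
    low_z = (g - W1 > 2.97) or (r - W1 > 1.8)            # remove low-z galaxies
    sliding = ((r - W1 > 1.83 * (W1 - 17.13)) and
               (r - W1 > W1 - 16.31)) or (r - W1 > 3.4)  # sliding luminosity cut
    return mag_limit and non_stellar and low_z and sliding
```

In the target selection pipeline these cuts are of course applied as vectorized operations over the full catalog; the scalar version above only illustrates the logic.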
In addition to the aforementioned cuts, we apply the following quality cuts (e.g. to remove objects with bad photometry) in the target selection pipeline \citep{myers_target_pipeline}. We require that each object be observed at least once in all of the three optical bands. We also require that the inverse-variance values for $r$, $z$ and $W1$ fluxes be positive; this rejects problematic imaging data. A small number of stars are not removed by the aforementioned cuts due to saturation in the imaging. We remove them by requiring $z_\mathrm{fibertot}>17.5$, and if Gaia \citep{gaiacollaboration_gaia_2018} photometry is available, we also require $G_\mathrm{Gaia}>18$.
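The quality cuts in this paragraph amount to a simple per-object mask. A vectorized sketch follows (the argument names loosely follow Legacy Surveys catalog columns; the NaN convention for objects without Gaia photometry is our assumption):

```python
import numpy as np

def lrg_quality_mask(nobs_g, nobs_r, nobs_z,
                     ivar_r, ivar_z, ivar_w1,
                     z_fibertot, gaia_g):
    """Boolean mask implementing the quality cuts: at least one observation
    in each optical band, positive inverse variances in r/z/W1, and the
    cuts that remove saturated stars."""
    ok = (nobs_g >= 1) & (nobs_r >= 1) & (nobs_z >= 1)
    ok &= (ivar_r > 0) & (ivar_z > 0) & (ivar_w1 > 0)
    ok &= z_fibertot > 17.5
    # Gaia bright cut, applied only where Gaia photometry exists (NaN otherwise)
    ok &= np.isnan(gaia_g) | (gaia_g > 18)
    return ok
```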
The DESI target selection pipeline also removes objects close to bright stars (in Gaia and Tycho-2), star clusters and large galaxies, which are flagged by the LS DR9 MASKBITS\footnote{\url{https://www.legacysurvey.org/dr9/bitmasks/\#maskbits}} 1, 12, and 13, respectively. These masks are very minimal and only remove regions with the worst contamination, and additional masking is needed for the clustering analysis. We describe these additional masks in \S\ref{sec:masks}.
The selection cuts for the Survey Validation and the 1\% Survey samples are described in Appendices \S\ref{sec:sv1} and \S\ref{sec:sv3}, respectively.
\subsection{Stellar Mass Completeness}
\label{sec:stellar_mass_completeness}
In order to accurately model galaxy clustering (e.g. \citealt{Zu2015clustering,zhou_clustering_2021}) and study galaxy-galaxy lensing (e.g. \citealt{Alam2017lensing,Jullo2019lensing}), halo occupation distribution (e.g. \citealt{RodriguezTorres2016HOD}) and evolution of the most massive galaxies (e.g. \citealt{Bundy2017MassiveGalaxies}), it is desirable to have a large spectroscopic sample of strongly clustered galaxies with well defined stellar populations. The target selection cuts for the DESI LRG sample have therefore been optimized to select the most massive galaxies with a high degree of completeness. We define completeness as
the ratio of selected LRGs to the total number of galaxies brighter than the LRG magnitude limit
(defined by eqns.~\ref{eq:mag-limit}~\&~\ref{eq:mag-limit-north}). In this section, we refer to objects that satisfy our stellar rejection cut (defined by eqn.~\ref{eq:non-stellar}) as ``galaxies''.
The cuts defined by eqns.~\ref{eq:low-z} \& \ref{eq:low-z-north} reject objects with low redshifts while retaining the most massive galaxies at redshifts above 0.4. The design of these cuts was guided by estimates of stellar masses of galaxies obtained using a random forest algorithm \citep{Breiman2001RandomForest} trained on DESI Legacy Imaging Survey photometry and stellar masses of galaxies from \citet{Bundy2015Stripe82Catalog}. A detailed description of the method used to obtain the stellar masses can be found in appendix~\ref{app:rf_masses}.
Figure~\ref{fig:mass_completeness} shows the stellar mass completeness of the DESI LRG sample both as a function of stellar mass and redshift. As spectroscopic redshifts are only available for some of the selected LRGs and not the magnitude-limited sample, we use the photometric redshifts in LS DR9\footnote{\url{https://www.legacysurvey.org/dr9/files/\#photometric-redshift-sweeps}} \citep{zhou_clustering_2021} for this analysis. The selection cuts result in a sample which is highly complete for the most massive galaxies (i.e. $\log_{10}(\mathrm{M}_{\star}[\mathrm{M}_{\odot}])>11.5$) in the redshift range of 0.4 to 1.0. The completeness decreases significantly for redshifts lower than 0.4 but the decrease is less severe for redshifts above 1.0. This high mass completeness is one of the defining characteristics of the DESI LRG sample and will aid a multitude of scientific studies.
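The completeness statistic used in this section can be computed directly from a mass catalog; a minimal sketch (array names are ours), assuming a magnitude-limited ``galaxy'' sample with estimated stellar masses and an LRG-selection flag:

```python
import numpy as np

def completeness_above(log_mstar, is_lrg, log_mstar_min):
    """Fraction of galaxies in a magnitude-limited sample above a
    stellar-mass threshold that are selected as LRGs."""
    above = log_mstar > log_mstar_min
    return is_lrg[above].sum() / above.sum()
```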
\begin{figure*}
\centering
\includegraphics[width=1.95\columnwidth]{figures/mass_completeness.pdf}
\caption{Stellar mass completeness of the DESI LRG sample as a function of stellar mass and photometric redshift. The dashed curve shows the fraction of galaxies above a given stellar mass that have been selected as a DESI LRG compared to a magnitude-limited sample. The blue histogram shows the distribution of stellar masses of a magnitude-limited sample of galaxies (having the same magnitude limit as the DESI LRG sample) whereas the black histogram denotes the subset of galaxies that have been selected as DESI LRGs. The stellar masses were obtained using a random forest based algorithm (described in appendix~\ref{app:rf_masses}) and the photometric redshifts are from \citet{zhou_clustering_2021}. As spectroscopic redshifts are not available for the magnitude-limited sample, we use photometric redshifts for this demonstration. The figure uses objects from both the North and the South where valid photometry in $g$, $r$, $z$, $\mathrm{W}_{1}$ and $\mathrm{W}_{2}$ is available. The selected sample is highly complete for the most massive galaxies (i.e. $\log_{10}(\mathrm{M}_{\star}[\mathrm{M}_{\odot}])>11.5$) in the redshift range of 0.4 to 1.0. The completeness decreases significantly for redshifts lower than 0.4 but the decrease is less steep for redshifts above 1.0.}
\label{fig:mass_completeness}
\end{figure*}
\subsection{Veto masks for clustering analysis}
\label{sec:masks}
Here we describe the additional veto masks specifically optimized for the LRG targets to create a clean sample for the DESI clustering analysis. They are mostly masks of stars, although they also include masking for large galaxies and star clusters, etc. The veto masks are comprised of four separate sets of masks:
\begin{enumerate}
\item unWISE \citep{meisner_unwise_2019} pixel-level bitmask: we use all but bit 5 (``AllWISE-like circular halo'') of the collapsed mask bits as listed in Table A4 of \citet{meisner_unwise_2019}. We exclude bit 5 because these circular masks are not optimal (either too large or too small, depending on the magnitude of the star) for the LRG targets, and they are replaced by
\item WISE circular geometric masks: these masks replace the ``AllWISE-like circular halo'' masks in unWISE. The radius vs W1 magnitude relation is optimized for the LRGs, so that the excess or deficit of LRG targets at the edge of the mask is less than ${\sim}\,10\%$.
\item Gaia/Tycho-2 circular masks: the radius vs magnitude relation is obtained similarly to the WISE masks (with the same 10\% criterion). We use stars in Gaia Early Data Release 3 (EDR3, \citealt{gaia_edr3}) supplemented at the bright end (where Gaia photometry is unreliable) by Tycho-2 and 2MASS photometry.
\item Custom masks: these are masks for the large galaxies, star clusters and planetary nebulae that were not masked by the LS DR9 MASKBITS, and regions with other imaging artifacts (identified from visual inspection of regions with high LRG densities). The total area of the custom masks is much smaller than that of the other masks.
\end{enumerate}
We describe masks 2--4 in more detail in Appendix \ref{sec:appdx_masks}. The combined veto masks remove 8.5\% of the DESI footprint. Within the masked (contaminated) area, the LRG target density is 1100 deg$^{-2}$ and the stellar contamination rate (based on spectroscopic classification) is ${\sim}\,10\%$, compared to the 605 deg$^{-2}$ density and ${\sim}\,0.5\%$ stellar contamination in the unmasked ``clean'' area. The stellar contamination rate is much higher in the masked region because the photometry (especially in WISE) used in the stellar rejection cut (eqn. \ref{eq:non-stellar}) is contaminated by the nearby bright stars.
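As a sketch of how the circular star masks are applied to the targets (the actual radius-versus-magnitude relations, tuned to the ${\sim}\,10\%$ edge criterion, are given in the appendix and are not reproduced here), assuming small separations and a flat-sky approximation:

```python
import numpy as np

def in_circular_mask(ra, dec, star_ra, star_dec, star_radius_deg):
    """Flag targets that fall inside any star's circular veto mask.
    Uses a flat-sky small-angle approximation; all inputs in degrees.
    The per-star radii would come from the radius vs magnitude relation."""
    masked = np.zeros(len(ra), dtype=bool)
    for sra, sdec, rad in zip(star_ra, star_dec, star_radius_deg):
        dra = (ra - sra) * np.cos(np.radians(sdec))  # correct RA for declination
        ddec = dec - sdec
        masked |= dra**2 + ddec**2 < rad**2
    return masked
```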
While the full LRG spectroscopic and target catalogs include objects flagged by the LRG veto masks, we recommend that those objects be removed for analyses that require a clean sample with uniform selection. In the rest of the paper, we only use the ``clean'' LRG sample (with the LRG veto masks applied) instead of the full target/spectroscopic sample.
Finally, we note that the LRG veto masks presented here are not definitive, and they may see further improvements, e.g., for the DESI Year 1 science analyses.
\section{Target selection systematics}
\label{sec:ts_systematics}
\subsection{Imaging and foreground systematics}
The variation of imaging properties (such as depth and seeing) over the footprint and the presence of astrophysical foregrounds (such as Galactic dust) can imprint on the density of galaxy targets. Here we examine the impact of these systematics on the LRG target density. The systematics properties that we examine here include depth (galaxy depth in $grz$ and PSF depth in $W1$), seeing (in $grz$), Gaia stellar density, and Galactic extinction $E(B-V)$ (based on \citealt{schlegel_maps_1998} with correction from \citealt{schlafly_measuring_2011}).
While the photometry used in target selection has been corrected for Galactic extinction, we include $E(B-V)$ here because imperfections in the correction, e.g., due to errors in the dust map, can still bias the photometry and affect the target density.
We use the {\tt STARDENS} values from \citealt{myers_target_pipeline} for Gaia stellar density, and we use values from the imaging catalog\footnote{\url{https://www.legacysurvey.org/dr9/files/\#sweep-catalogs-region-sweep}} for all other systematics properties (GALDEPTH, PSFDEPTH, PSFSIZE and EBV).
Figure \ref{fig:systematics_trends} shows the dependence of LRG target density on the imaging and foreground systematics. The LRG sample is much brighter than the imaging detection limit (with the faintest LRG targets being at least ${\sim}\,2$ magnitudes brighter than the median $z$-band 5$\sigma$ detection limit), and the stellar-rejection cut and the LRG veto masks efficiently remove the contamination caused by stars. Therefore the LRG sample is relatively robust against these imaging/foreground systematics. The density deviations caused by these systematics are almost all within $\pm5\%$.
\begin{figure*}
\centering
\includegraphics[width=2.05\columnwidth]{figures/systematics_lrg_512_ebv_corr.png}
\caption{Density of LRG targets in bins of imaging/foreground systematics values in the three imaging regions. (DECaLS and DES are both observed with DECam and have the same selection cuts and linear regression coefficients, but DES is significantly deeper and we plot it as a separate region to illustrate the difference.) The error bars represent ``the error of the mean'' assuming Gaussian distribution. The histograms show the distribution of each systematics property for each imaging region. ``Galaxy depth'' is based on the ``GALDEPTH'' value in the LS DR9 catalog and it is the $5\sigma$ detection magnitude of an ELG-like galaxy (and it assumes zero Galactic extinction); to account for Galactic extinction, we add an $E(B-V)$ term to obtain the ``true'' depth of extragalactic sources. ``PSF size'' is the PSF FWHM and measures the seeing. The trends are computed using a healpix density map with nside of 512 by averaging over the pixels in bins of imaging/foreground properties.}
\label{fig:systematics_trends}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=2.05\columnwidth]{figures/systematics_lrg_512_ebv_corr_lw.png}
\caption{As Figure \ref{fig:systematics_trends} but with linear regression weights applied. Stellar density is not included in the parameters for the linear regression.}
\label{fig:systematics_trends_corrected}
\end{figure*}
The systematics trends in Figure \ref{fig:systematics_trends} can be almost completely removed via linear regression of the systematics properties:
\begin{equation}
N_\mathrm{predict, k} = c_0 + \sum_{i=1}^{8} \sum_{l=1}^{N_\mathrm{rand, k}} c_i S_{i,l} ,
\end{equation}
where $N_\mathrm{predict, k}$ is the ``predicted'' number of LRG targets in the $k$-th healpix pixel, $S_{i, l}$ is the value of the $i$-th systematics property of the $l$-th random, $N_\mathrm{rand, k}$ is the number of randoms in the $k$-th pixel, and $c_i$ are the coefficients. We use the randoms in LS DR9. The ``corrected'' systematics trends are shown in Figure \ref{fig:systematics_trends_corrected}. For the linear regression, we include all systematics properties except stellar density, because 1) the stellar contamination rate is already very low and 2) the stellar density maps are inherently noisy and the stars in the Gaia catalog (on which the stellar density map is based) are much brighter than the stellar contamination in the LRG targets. Indeed we find little correlation with stellar densities after applying the systematics weights. The coefficients for the North and the South are different, but DECaLS and DES are treated as one sample in the linear regression and have the same coefficients. The LRG density in the DES region is ${\sim}\,3\%$ lower than the average density due to its deeper photometry, and the linear regression weights accurately predict this density difference.
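A minimal sketch of the linear-regression correction, using pixel-averaged systematics values as an approximation to the per-random sum in the equation above (the weight convention, mean density divided by predicted density, is a common choice and an assumption on our part):

```python
import numpy as np

def systematics_weights(n_target, sys_maps):
    """n_target: (npix,) LRG counts per healpix pixel (normalized by randoms);
    sys_maps: (nsys, npix) pixel-averaged systematics values.
    Fits n_target = c0 + sum_i c_i * S_i by least squares and returns
    per-pixel weights that flatten the fitted trends."""
    A = np.vstack([np.ones_like(n_target), sys_maps]).T  # design matrix
    coeffs, *_ = np.linalg.lstsq(A, n_target, rcond=None)
    n_pred = A @ coeffs
    return n_target.mean() / n_pred
```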
There is an unexpected dependence on $E(B-V)$ that remains after correcting for depth and seeing dependence: the LRG density at very low $E(B-V)$ is more than $5\%$ lower than average. We also see similar trends at very low $E(B-V)$ in the other DESI tracers (which have very different selections and redshifts), so it is unlikely to be a statistical fluke. While we are not certain what causes this drop in target density at very low $E(B-V)$, we speculate that it is caused by systematics in the Galactic extinction map. Specifically, the $E(B-V)$ map from \citet{schlegel_maps_1998} is based on dust emission in the far infrared, which may have been contaminated by FIR emissions from background galaxies. The remaining trends (at higher $E(B-V)$) might be due to SED-dependent effects: ideally we would calculate the extinction coefficients for each galaxy based on its spectral energy distribution (SED), but in practice we use a single stellar spectrum for calculating the extinction coefficients, and this could lead to systematic errors in the dereddened fluxes. We will examine the Galactic dust-related systematics in future investigations.
Note that while the aforementioned linear weights work well for the projected density of the full LRG sample, for subsets of the sample (e.g., in redshift bins) the coefficients and weights should be recomputed for each subset, because different subsamples are affected by the selection cuts differently (e.g., brighter subsamples might be more sensitive to the sliding cut in $r-W1$ vs $W1$ while fainter subsamples might be more sensitive to the $z_\mathrm{fiber}$ faint limit) and have different sensitivities to the different systematics.
\subsection{Zero point sensitivities}
\label{sec:zp_sensitivity}
Another source of imaging systematics is the zero point (ZP) uncertainty: the uncertainty in the photometric ZP of each exposure of the imaging data. While depth and seeing can be accurately quantified and can in principle be modeled, the ZP uncertainty is a systematic uncertainty that is mostly due to imperfect modeling of the observing conditions. Thus, the imprint of ZP uncertainties on the sky is difficult, if not impossible, to correct for. We design the LRG target selection to be as insensitive to the ZP uncertainties as possible.
One way to quantify the effects of ZP uncertainties on the LRG targets is to estimate the level of fluctuation in target density caused by the ZP uncertainties. The estimated ZP uncertainties in $g,r,z,W1$ are 3,3,6,1 mmag, respectively \citep{schlegel_dr9}. For the LRG selection, a net change of +10 mmag in g, r, z, W1 causes a change of +0.11\%, +1.40\%, -1.23\%, -2.89\% in target density, respectively. If we assume that the ZP errors in the four bands are all uncorrelated with each other, we can treat the combined effect on the target density as sums of independent Gaussian random variables, and the resulting RMS of the target density is 0.9\%. During Survey Validation, we considered an alternative LRG selection that implements the luminosity cut using $z$ band fiber magnitude and $r-z$ color (see Appendix \ref{sec:sv1}), and this selection has a much larger RMS of ${\sim}\,4\%$ due to the large ZP uncertainty in $z$ band. (The $z$-band photometric calibration has large uncertainties mainly because the effective $z$-band filter transmission can vary significantly due to telluric water vapor absorption.) The insensitivity of the WISE-based luminosity cut to ZP errors is the main reason that we chose to adopt it.
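The quoted 0.9\% RMS follows from adding the four bands' contributions in quadrature; using the numbers in the text:

```python
import numpy as np

# Density sensitivity of the LRG selection to a +10 mmag ZP shift in each
# band, and the estimated per-band ZP uncertainties, both quoted in the text.
dn_per_10mmag = np.array([0.11, 1.40, -1.23, -2.89])  # percent, for g, r, z, W1
zp_err_mmag = np.array([3.0, 3.0, 6.0, 1.0])

# Treat the four bands as independent Gaussian perturbations: each contributes
# (zp_err / 10 mmag) * sensitivity, and the variances add in quadrature.
contrib = dn_per_10mmag * zp_err_mmag / 10.0
rms_percent = np.sqrt(np.sum(contrib**2))
# rms_percent evaluates to approximately 0.9, matching the quoted value
```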
\section{Spectroscopic assessment}
\label{sec:spectro_assessment}
\subsection{Spectroscopic data}
\label{sec:spectro_data}
We use spectroscopic data from SV1, SV3 and the first 2 months of the Main Survey. We only select LRG+QSO tiles in SV1 and dark tiles in SV3 and the Main Survey. The sky coverage of the spectroscopic LRGs is shown in Figure \ref{fig:observed_lrgs}. Figure \ref{fig:example_spectra} shows example spectra of LRGs observed during the Main Survey, along with image cutouts (from the Legacy Surveys Viewer\footnote{\url{https://www.legacysurvey.org/viewer}}).
SV1 has several flavors of coadds, and we use the single-exposure coadds, the nominal (1$\times$) depth coadds, and the cumulative (deep) coadds. LRG targets are assigned the target bit $2^0$ in all three programs. A significant number of brighter LRGs are also observed in the BGS program under very different observing conditions, and we do not include them here. We remove objects affected by instrument issues by requiring that the COADD\_FIBERSTATUS value in the catalogs is equal to 0. We apply the veto mask (\S \ref{sec:masks}) to create a clean sample.
The redshift fitting is done using Redrock \citep{redrock}. It uses 1-D spectra produced by the DESI spectroscopic pipeline \citep{guy_spectro_pipeline} as input. For each spectrum, it computes the best-fit redshift and $\Delta \chi^2$, which is the difference in $\chi^2$ between the best-fit model and second best-fit model and is an indication of the reliability of the best-fit redshift.
We use a sample of ``true'' redshifts for measuring the catastrophic failure rates. While the redshifts from visual inspection \citep{lan_desi_galaxy_vi} could be used as true redshifts, they are only available for a few thousand SV LRGs and a few hundred Main Survey LRGs. Therefore we use the much larger sample of redshifts from deep coadded SV1 spectra as the true redshifts. We require a minimum effective exposure time ($t_\mathrm{eff}$, see its definition in \citealt{guy_spectro_pipeline}) of 3000s, which is 3 times the DESI nominal depth of $t_\mathrm{eff}$=1000s. We validate these deep redshifts by comparing them with the visual inspection redshifts, and we find that the redshifts disagree for less than 0.5\% of the Main Survey LRGs.
For assessing the spectroscopic performance at nominal depth, we require $t_\mathrm{eff}>$800s, and for the SV1 coadds (which have a wider range of $t_\mathrm{eff}$ than the Main Survey) we also require $t_\mathrm{eff}<$1200s. To assess whether a redshift (obtained at nominal depth) is correct, we compare it with the redshift of the same object from the deep coadded spectra. If a nominal-depth redshift differs from the deep redshift by more than 1000 km/s, that redshift is considered a ``catastrophic redshift failure''. (A small number of deep redshifts have ZWARN$\neq$0 or $z_\mathrm{redrock}>1.5$ and are likely unreliable. We treat the corresponding nominal-depth redshifts as catastrophic failures regardless of their redshift value.)
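The failure criterion can be sketched as follows; the $(1+z)$ factor in converting a redshift difference to a velocity is the standard convention and an assumption on our part, as the text only quotes the 1000 km/s threshold:

```python
C_KM_S = 299792.458  # speed of light in km/s

def is_catastrophic(z_nominal, z_deep, zwarn_deep=0, max_dv_km_s=1000.0):
    """A nominal-depth redshift is a catastrophic failure if it differs from
    the deep redshift by more than 1000 km/s, or if the deep redshift itself
    is flagged as unreliable (ZWARN != 0 or z > 1.5)."""
    if zwarn_deep != 0 or z_deep > 1.5:
        return True  # deep redshift unreliable; count as failure regardless
    dv = C_KM_S * abs(z_nominal - z_deep) / (1.0 + z_deep)
    return dv > max_dv_km_s
```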
\begin{figure*}
\centering
\includegraphics[width=1.95\columnwidth]{figures/observed_lrgs_radec.png}
\caption{LRGs observed by DESI during Survey Validation and the first two months of Main Survey. We use this data for evaluating the LRG redshift performance.}
\label{fig:observed_lrgs}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=2.0\columnwidth]{figures/example_spectra}
\caption{Example spectra and image cutouts of DESI LRGs that were observed in the Main Survey to nominal spectroscopic depth. The observed and model spectra are convolved with a Gaussian kernel with $\sigma=2.4\text{\AA}$ to reduce the noise. The three spectra from the B/R/Z spectrographs are coadded into a single spectrum in the figure. The target ID, $g/r/z/W1$ magnitudes and $z_\mathrm{fiber}$ magnitude, best-fit redshift, best-fit spectral type, ZWARNING flag, and $\Delta \chi^2$ values are listed for each object. Major absorption and emission lines, which are taken from the visual inspection tool \emph{Prospect} (\url{https://github.com/desihub/prospect}), are shown in green dashed lines. The image cutouts are 34\arcsec$\times$34\arcsec composites in $g/r/z$ (top) and $W1/W2$ (bottom).}
\label{fig:example_spectra}
\end{figure*}
Figure \ref{fig:main_dndz} shows the redshift distribution of LRGs observed in the Main Survey. The sample has a roughly constant comoving density of $5\times10^{-4}\ h^3\mathrm{Mpc}^{-3}$ in $0.4<z<0.8$ and a high-z tail that extends beyond $z=1.0$. Figure \ref{fig:main_dndz} excludes a small ($1\%$) fraction of the observed LRG targets that are rejected by the quality cut (see \S\ref{sec:z_quality_cut}); these objects are the faintest LRG targets and are mostly at the high-redshift end of the sample.
\subsection{Spectroscopic classification}
\label{sec:spectro_classification}
Only 0.5\% of the LRG targets are spectroscopically classified by Redrock as stars and 1.7\% as QSOs. The rest are classified as galaxies. Almost all of the stellar contaminants are cool stars such as M dwarfs. About 0.6\% of the LRG targets are also QSO targets,
and among them 61\% are spectroscopically classified as QSOs and 2\% as stars. If we exclude QSO targets, the stellar fraction is 0.5\% and the QSO fraction is 1.2\%. A large fraction of the area observed in the first 2 months is at relatively low Galactic latitude, and the full DESI footprint will include a larger fraction of high-latitude area where the stellar density is lower. Therefore we expect the 0.5\% stellar contamination rate to be an upper bound for the full DESI sample.
Note that in the first two months of Main Survey observations the observed LRG targets include a larger fraction of objects that are also QSO targets than in the full survey, because the fiber-assignment rate is lower at the beginning of the survey and the QSO targets have a higher fiber-assignment priority than the LRGs (see \citealt{raichoor_fiberassign}). We correct for this by downweighting the QSO targets so that the fractions described above are estimates for the final Main Survey sample.
\subsection{Redshift failure rate and quality cut}
\label{sec:z_quality_cut}
The catastrophic redshift failure rate of LRGs observed at nominal depth is $0.7\%$ (110/15379) from comparison with the deep redshifts. We also assess the catastrophic failure rate using repeat observations, which exist in the overlap between SV3 and the Main Survey. We find that $1.2\%$ (84/7233) of the repeats have different redshifts, thus the per-object catastrophic failure rate is $0.6\%$ if we assume that the redshift efficiency is the same in SV3 and Main Survey. (The slightly lower catastrophic failure rate from repeats could be due to the fact that SV3 observations are about $20\%$ deeper than the Main Survey.)
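The step from the 1.2\% pair disagreement rate to the ${\sim}\,0.6\%$ per-object rate assumes the two observations of each object fail independently, so that a pair disagrees with probability $2p(1-p)$:

```python
import math

# Observed disagreement rate among repeat observations (from the text).
pair_rate = 84 / 7233

# Solve 2p(1 - p) = pair_rate for the per-object failure probability p,
# assuming the two observations of each object fail independently.
p = (1.0 - math.sqrt(1.0 - 2.0 * pair_rate)) / 2.0
# p comes out at roughly 0.6%, as quoted in the text
```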
To reject incorrect redshifts, we apply the following redshift quality cut (shown in Figure \ref{fig:dchi2_vs_z}): we require $\Delta \chi^2>15$, $z_\mathrm{redrock}<1.5$, and the ZWARNING flag ZWARN==0. The $z_\mathrm{redrock}<1.5$ cut removes the pile-up of catastrophic failures at $z{\sim}\,1.6$. (While it is not entirely clear what causes this pile-up, based on the fact that these objects have mostly noisy and featureless spectra, we speculate that in order to best match the observed featureless spectra, Redrock finds the best fit at $z>{\sim}\,1.5$ so that the $4000\text{\AA}$ break feature of the PCA templates is redshifted outside the DESI spectral coverage.) We note that the redshift quality cut is preliminary, and it may change, e.g., as the spectroscopic pipeline and Redrock evolve.
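The redshift quality cut is a simple conjunction of the three conditions; as a sketch:

```python
def passes_quality_cut(dchi2, z_redrock, zwarn):
    """Redshift quality cut: Delta chi^2 > 15, best-fit z < 1.5, ZWARN == 0."""
    return (dchi2 > 15) and (z_redrock < 1.5) and (zwarn == 0)
```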
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{figures/main_dchi2_vs_z.pdf}
\caption{$\Delta \chi^2$ vs redshift (best-fit from Redrock) at nominal depth. The black lines are the redshift quality cut. We distinguish between correct redshifts and catastrophic redshift failures by using redshifts from the deep coadds as truth.}
\label{fig:dchi2_vs_z}
\end{figure}
The redshift quality cut removes $1.1\%$ of the LRGs, $43\%$ (82/191) of which are catastrophic failures. The catastrophic failure rate in the accepted (confident) redshifts is $0.2\%$ (28/15188). From visual inspection of the spectra and images, we find that a significant fraction of the catastrophic failures that pass the redshift quality cut are blends.
Hereafter, we refer to the fraction of objects that meet the quality cut as the ``redshift success rate'' and the fraction that fail the cut as the ``failure/rejection rate'', and we refer to the incorrect redshifts (as determined by comparing with deep or repeat spectra) as ``\textit{catastrophic} failures''.
In the above assessments we have excluded QSO targets and objects spectroscopically classified as QSOs, because they have much higher redshift failure rates and they are a very different population than the ``normal'' LRGs. The objects that are targeted or classified as QSOs have approximately 10 times higher catastrophic failure rates than the rest of the LRG targets, and they are also about 10 times more likely to fail the redshift quality cut. It should be noted that the distinction between a ``galaxy'' and ``QSO'' is not always sharply defined (e.g., Redrock may classify some ``galaxies'' with AGN features as ``QSOs'' but not others), and more careful consideration may be needed for the selection of the DESI clustering sample.
\subsection{Depth and magnitude dependence}
\label{sec:depth_and_mag_dependence}
The spectroscopic redshifts of the LRGs are primarily based on absorption lines and the $4000\text{\AA}$ break, and sufficient spectroscopic S/N is critical for confident redshift estimation. The S/N mainly depends on two factors: the source brightness and the spectroscopic depth ($t_\mathrm{eff}$). Here we investigate how the two factors affect the LRG redshift failure rate. (The redshift failure rate also depends on other factors such as the strength of the absorption and emission lines and the prominence of the $4000\,\text{\AA}$ break, but they do not vary significantly within the LRG sample and are thus less important factors for the redshift determination.)
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{figures/main_dchi2_vs_zfiber.pdf}
\caption{Similar to Figure \ref{fig:dchi2_vs_z} but with x-axis replaced by $z$-band fiber magnitude.}
\label{fig:dchi2_vs_zfiber}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{figures/main_failure_rate_vs_zfiber.pdf}
\caption{Catastrophic failure rates and rejection rates as a function of $z$-band fiber magnitude for the nominal exposures. (Error bars are not shown for fractions equal to zero.) The gray crosses are the predicted rejection rates based on $z_\mathrm{fiber}$ and $t_\mathrm{eff}$ (see \S\ref{sec:depth_and_mag_dependence}). The histogram shows the $z_\mathrm{fiber}$ distribution of the LRG sample. We restrict to coadds with 800s $< t_\mathrm{eff} <$ 1200s.}
\label{fig:failure_rate_vs_zfiber}
\end{figure}
The $z$-band fiber magnitude ($z_\mathrm{fiber}$) is strongly correlated with $\Delta \chi^2$ and with catastrophic redshift failures, as shown in Figure \ref{fig:dchi2_vs_zfiber}, and the correlation is much stronger than for the fiber magnitudes in the $g$ and $r$ bands. Therefore we use $z_\mathrm{fiber}$ as the parameter for source brightness. In Figure \ref{fig:failure_rate_vs_zfiber} we show the catastrophic redshift failure rate and rejection rate as a function of $z_\mathrm{fiber}$ at nominal depth. The error bars show the uncertainty for a binomial distribution: $\sigma_p=\sqrt{N p (1-p)}/N$ where $N$ is the total number of objects and $p$ is the failure/rejection rate. At the $z_\mathrm{fiber}$ limit, the LRGs have the highest catastrophic failure rate of ${\sim}\,2\%$ and rejection rate of ${\sim}\,4\%$. The catastrophic failure rate after applying the redshift quality cut is much less than $1\%$ at the $z_\mathrm{fiber}$ limit.
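The binomial uncertainty used for the error bars is straightforward to compute; for example, applied to the overall nominal-depth catastrophic failure counts quoted above:

```python
import math

def rate_with_error(n_fail, n_total):
    """Failure/rejection rate with its binomial 'error of the mean':
    sigma_p = sqrt(N p (1 - p)) / N."""
    p = n_fail / n_total
    sigma_p = math.sqrt(n_total * p * (1.0 - p)) / n_total
    return p, sigma_p
```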
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{figures/main_failure_rate_vs_efftime}
\caption{Similar to Figure \ref{fig:failure_rate_vs_zfiber}, but with the effective exposure time $t_\mathrm{eff}$ in the x-axis (and with the restriction on $t_\mathrm{eff}$ removed). The histogram shows the $t_\mathrm{eff}$ distribution of LRGs in the Main Survey.}
\label{fig:failure_rate_vs_efftime}
\end{figure}
Figure \ref{fig:failure_rate_vs_efftime} shows the catastrophic redshift failure rate and rejection rate as a function of $t_\mathrm{eff}$. The rejection rate flattens out above $t_\mathrm{eff}{\sim}\,1000$s. While the catastrophic failure rate and rejection rate increase significantly at $t_\mathrm{eff}<$800s, the catastrophic failure rate of the accepted redshifts remains well below $1\%$.
For clustering measurements, it is important to correct for redshift incompleteness caused by targeting and observational factors. Here we use the following function of the effective exposure time and $z$-band fiber flux to predict the LRG redshift failure/rejection probability:
\begin{equation}
P_\mathrm{fail}(t_\mathrm{eff}, f_{z,\mathrm{fiber}}) = \exp(a_0 S + a_1) + a_2/(f_{z,\mathrm{fiber}}/1\,\mathrm{nanomaggy}) ,
\label{eqn:failure_rate}
\end{equation}
where $S \equiv (f_{z,\mathrm{fiber}}/1\,\mathrm{nanomaggy})\sqrt{t_\mathrm{eff}/1\,\mathrm{sec}}$ is (approximately) proportional to the spectroscopic S/N when the sky is much brighter than the source (which is true for the fainter LRGs), $f_{z,\mathrm{fiber}}$ is the $z$-band fiber flux, and $a_0$, $a_1$, $a_2$ are constant coefficients.
The exponential term is motivated by the observation that the redshift failure rate decreases exponentially with $S$. At brighter magnitudes and in deeper exposures (where the per-object failure rate becomes less than ${\sim}\,1\%$) the exponential term approaches zero faster than the observed failure rate, so we add the second term $a_2/f_{z,\mathrm{fiber}}$ to account for the higher observed redshift failure rate. From visual inspection we find that many of the redshift failures at brighter magnitudes are blends or have problematic spectra due to instrument issues.
The best-fit coefficients are found by minimizing
$\sum_i (P_{\mathrm{fail}, i} - Q_i)^2$,
where $P_{\mathrm{fail},i}$ is the predicted failure probability of the $i$-th object, and $Q_i=0$ if the object passes the quality cut and $Q_i=1$ if it fails the cut. For the fitting we use Main LRGs observed in SV1 and the first 2 months of the Main Survey, and SV3 LRGs (which are ${\sim}\,$0.1 magnitude fainter than the Main LRGs). To best match the redshift failure rates of the Main Survey, we only use LRGs with $500s<t_\mathrm{eff}<2000s$; this prevents the large number of redshift failures at very low $t_\mathrm{eff}$ (mainly from SV1) from dominating the fit. The best-fit coefficients are $a_0=-0.0911$, $a_1=3.34$, $a_2=0.0228$.
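As a sketch of this fitting procedure (not the actual DESI pipeline code: the catalog below is mock data, and all function and variable names are ours), the minimization of $\sum_i (P_{\mathrm{fail},i}-Q_i)^2$ can be written as:

```python
import numpy as np
from scipy.optimize import minimize

def p_fail(params, t_eff, f_fiber):
    # eqn (failure_rate): exp(a0*S + a1) + a2/f_fiber,
    # with S = f_fiber * sqrt(t_eff) (flux in nanomaggies, time in seconds)
    a0, a1, a2 = params
    S = f_fiber * np.sqrt(t_eff)
    return np.exp(a0 * S + a1) + a2 / f_fiber

def loss(params, t_eff, f_fiber, Q):
    # Q_i = 1 if object i fails the quality cut, 0 if it passes
    return np.sum((p_fail(params, t_eff, f_fiber) - Q) ** 2)

# mock catalog standing in for the SV1/SV3 + early Main Survey LRGs
rng = np.random.default_rng(0)
n = 50000
t_eff = rng.uniform(500.0, 2000.0, n)        # seconds
f_fiber = 10 ** rng.uniform(0.3, 0.8, n)     # ~2-6 nanomaggies
truth = (-0.0911, 3.34, 0.0228)              # best-fit values from the text
Q = (rng.random(n) < np.clip(p_fail(truth, t_eff, f_fiber), 0.0, 1.0)).astype(float)

res = minimize(loss, x0=(-0.1, 3.0, 0.02), args=(t_eff, f_fiber, Q),
               method="Nelder-Mead")
a0, a1, a2 = res.x   # should land near the quoted coefficients
```

With the quoted coefficients, an LRG at the faint limit ($z_\mathrm{fiber}=21.6$, i.e. $f_{z,\mathrm{fiber}}\approx2.3$ nanomaggy) at the nominal depth $t_\mathrm{eff}=1000$s is predicted to fail a few per cent of the time.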
The gray crosses in Figures \ref{fig:failure_rate_vs_zfiber} and \ref{fig:failure_rate_vs_efftime} are the predicted failure rates, and they match the observed failure rates (green error bars) very well. Figure \ref{fig:failure_rate_in_2d} shows the observed and predicted dependence of the LRG redshift failure rate on both $z_\mathrm{fiber}$ and $t_\mathrm{eff}$. The residuals are negligible in the range of $z_\mathrm{fiber}$ and $t_\mathrm{eff}$ of the LRGs that are observed in the Main Survey.
\begin{figure*}
\centering
\includegraphics[width=2.\columnwidth]{figures/failure_rate_in_2d}
\caption{\textit{Left}: Redshift failure rate in bins of spectroscopic depth ($t_\mathrm{eff}$) and fiber magnitude ($z_\mathrm{fiber}$). The horizontal dashed line marks the nominal depth of 1000s. The vertical line marks the magnitude limit ($z_\mathrm{fiber}<21.6$) of the Main LRGs. At $z_\mathrm{fiber}<21.6$ the failure rate is computed for the combined sample of Main and SV3 LRGs, and at $z_\mathrm{fiber}\geq21.6$ SV1 LRGs are added. \textit{Middle}: the redshift failure rate from the model prediction using eqn. \ref{eqn:failure_rate}. \textit{Right}: the residual, i.e., the measured failure rate minus the predicted failure rate. The model fitting is done using Main and SV3 LRGs with $t_\mathrm{eff}>500$s, which is why the prediction is less accurate at lower $t_\mathrm{eff}$ and in the faintest magnitude bin.}
\label{fig:failure_rate_in_2d}
\end{figure*}
\subsection{Fiber-to-fiber variation in failure rate}
For the DESI clustering analysis, it is important that variations in the spectroscopic efficiency of each fiber do not imprint on the measured galaxy densities. For example, \citet{ross_clustering_2012} found that the redshift efficiency of the BOSS CMASS sample varies with the fiber location on the focal plane. To quantify the per-fiber efficiency, we compute the average LRG redshift failure rate for each fiber using observations during the Main Survey. The per-fiber LRG failure rate is shown in Figure \ref{fig:per_fiber_failure_rate} (left panel), and we do not see clear patterns in the fiber efficiency given the statistical uncertainties. To more rigorously assess whether the observed failure rates are consistent with uniform fiber efficiency, we perform Monte Carlo simulations with the per-object failure probability given by the model described in \S\ref{sec:depth_and_mag_dependence}. We find that the distribution of the measured per-fiber failure rates is consistent with the simulated distributions for all except a few fibers, as shown in Figure \ref{fig:per_fiber_failure_rate} (right panel), indicating very uniform spectroscopic efficiencies for almost all the fibers.
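The Monte Carlo comparison can be sketched as follows (a toy version that uses a uniform ${\sim}\,1\%$ per-object failure probability in place of the per-object model probabilities; all names are ours):

```python
import numpy as np

rng = np.random.default_rng(42)
n_fibers, n_obs, n_sims = 5000, 70, 100   # ~70 LRG observations per fiber

# per-object failure probability; the real test evaluates eqn (failure_rate)
# at each object's z_fiber and t_eff
p_obj = np.full((n_fibers, n_obs), 0.01)

# one realization plays the role of the measured per-fiber failure rates
observed_rate = (rng.random((n_fibers, n_obs)) < p_obj).mean(axis=1)

# re-simulate under the hypothesis of uniform fiber efficiency
sim_rates = np.array([(rng.random((n_fibers, n_obs)) < p_obj).mean(axis=1)
                      for _ in range(n_sims)])

# if the histogram of measured per-fiber rates lies within the scatter of
# the simulated histograms, the fiber efficiencies are consistent with uniform
edges = np.arange(0.0, 6.5 / n_obs, 1.0 / n_obs)
obs_hist = np.histogram(observed_rate, bins=edges)[0]
sim_hist = np.array([np.histogram(r, bins=edges)[0] for r in sim_rates])
lo, hi = np.percentile(sim_hist, [2.5, 97.5], axis=0)
```

With only ${\sim}\,$70 observations and a ${\sim}\,1\%$ failure rate per fiber, most fibers have 0 or 1 failure, so the comparison is made at the level of histograms rather than individual fibers.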
\begin{figure*}
\gridline{\fig{figures/fiber_to_fiber.pdf}{1.05\columnwidth}{}
\fig{figures/per_fiber_failure_rate.pdf}{0.95\columnwidth}{}
}
\caption{\textit{Left}: Per-fiber LRG redshift failure rate. Each point represents a fiber on the DESI focal plane with the colors indicating its average LRG redshift failure rate during the first 2 months of the Main Survey. Only fibers with $>$40 LRG observations are plotted, and the median number of LRG observations by each fiber is 70. Since the average failure rate is ${\sim}\,1\%$, most fibers have either 0 or 1 redshift failure, and much of the variation in this figure is simply noise. \textit{Right}: The distribution of the per-fiber LRG redshift failure rates and the simulated distributions from 100 Monte Carlo simulations. In the simulations, the redshift failure probability of each object is determined by its $z_\mathrm{fiber}$ and $t_\mathrm{eff}$ via eqn. \ref{eqn:failure_rate}. The fact that the measured distribution matches the simulations (for all except a handful of fibers) suggests that the spectroscopic efficiency is very uniform across the fibers.
\label{fig:per_fiber_failure_rate}}
\end{figure*}
\section{Summary}
\label{sec:summary}
To achieve the required accuracy on cosmological measurements for DESI, it is critical that the sample selection and spectroscopic observations have minimal and well-understood systematics. With this in mind, the DESI LRG sample is designed to be robust against variations in imaging properties and zero point uncertainties, and to achieve high redshift success rate and low stellar contamination rate. The high stellar mass completeness of the sample also ensures high large-scale bias and should facilitate modeling of the galaxy-halo connection. In addition to the already robust target selection, we developed veto masks specifically optimized for the LRG targets to produce a clean sample suitable for clustering analysis. We also created a simple model that can accurately predict (and thus correct for) the per-object redshift failure rate based on the object's brightness and spectroscopic depth.
\section*{acknowledgments}
This research is supported by the Director, Office of Science, Office of High Energy Physics of the U.S. Department of Energy under Contract No. DE–AC02–05CH11231, and by the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility under the same contract; additional support for DESI is provided by the U.S. National Science Foundation, Division of Astronomical Sciences under Contract No. AST-0950945 to the NSF's National Optical-Infrared Astronomy Research Laboratory; the Science and Technologies Facilities Council of the United Kingdom; the Gordon and Betty Moore Foundation; the Heising-Simons Foundation; the French Alternative Energies and Atomic Energy Commission (CEA); the National Council of Science and Technology of Mexico (CONACYT); the Ministry of Science and Innovation of Spain (MICINN), and by the DESI Member Institutions: \url{https://www.desi.lbl.gov/collaborating-institutions}.
The DESI Legacy Imaging Surveys consist of three individual and complementary projects: the Dark Energy Camera Legacy Survey (DECaLS), the Beijing-Arizona Sky Survey (BASS), and the Mayall z-band Legacy Survey (MzLS). DECaLS, BASS and MzLS together include data obtained, respectively, at the Blanco telescope, Cerro Tololo Inter-American Observatory, NSF's NOIRLab; the Bok telescope, Steward Observatory, University of Arizona; and the Mayall telescope, Kitt Peak National Observatory, NOIRLab. NOIRLab is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. Pipeline processing and analyses of the data were supported by NOIRLab and the Lawrence Berkeley National Laboratory. Legacy Surveys also uses data products from the Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE), a project of the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. Legacy Surveys was supported by: the Director, Office of Science, Office of High Energy Physics of the U.S. Department of Energy; the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility; the U.S. National Science Foundation, Division of Astronomical Sciences; the National Astronomical Observatories of China, the Chinese Academy of Sciences and the Chinese National Natural Science Foundation. LBNL is managed by the Regents of the University of California under contract to the U.S. Department of Energy. The complete acknowledgments can be found at \url{https://www.legacysurvey.org/}.
The authors are honored to be permitted to conduct scientific research on Iolkam Du'ag (Kitt Peak), a mountain with particular significance to the Tohono O'odham Nation.
BASS is a key project of the Telescope Access Program (TAP), which has been funded by the National Astronomical Observatories of China, the Chinese Academy of Sciences (the Strategic Priority Research Program ``The Emergence of Cosmological Structures'' Grant \#XDB09000000), and the Special Fund for Astronomy from the Ministry of Finance. The BASS is also supported by the External Cooperation Program of Chinese Academy of Sciences (Grant \#114A11KYSB20160057), and Chinese National Natural Science Foundation (Grant \#12120101003, \#11433005).
ADM was supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics, under Award Number DE-SC0019022.
\software{Astropy \citep{astropy:2013, astropy:2018}, HEALPix/healpy \citep{healpix, healpy}, Matplotlib \citep{matplotlib}, Numpy \citep{numpy}, scikit-learn \citep{scikit-learn}, Scipy \citep{scipy}}
\section{\label{sec:level1}Introduction}
Splitting a secret message in such a way that a single person is not able
to reconstruct it is a common task in information processing and especially in high-security applications.
Suppose, e.g., that the launch sequence of a nuclear missile is
protected by a secret code, and it should be ensured that no
single lunatic is able to activate it, but only at least two lunatics together. A
solution for this problem and its generalization, including several
variations is provided by classical cryptography \cite{brucesch} and
is called secret sharing. It consists of a way of splitting the
message using mathematical algorithms and the distribution of the
resulting pieces to two or more legitimate users by classical
communication. However, all currently used means of classical
communication are susceptible to eavesdropping attacks. As the usage of
quantum resources can lead to unconditionally secure communication
(e.g. \cite{gisin, ekert}), a protocol introducing quantum
cryptography to secret sharing was proposed \cite{HARALD, qsecsh,
got, karl}. In this protocol a shared GHZ-state allows the
information splitting and the eavesdropper protection
simultaneously. But, due to lack of efficient multi-photon sources
an experimental demonstration of secret sharing is still missing.
So far, only the feasibility in principle of an experimental
realization using pseudo-GHZ states has been shown \cite{tittel}.
Here we propose a protocol for $(N+1)$ parties in which only
sequential single qubit communication between them is used and
show its equivalence to the GHZ-protocol. As our protocol requires
only single qubits, it is realizable with
current state-of-the-art technology and, above all, is much more
scalable with respect to the number of participating parties.
These gains enabled the experimental demonstration of our protocol
for six parties. To our knowledge this is the first experimental
implementation of a full protocol for secret sharing and by far
the highest ever reported number of participants in any quantum
information processing task.
Let us first shortly describe the entanglement based protocol using
a GHZ state for secret sharing. Consider $(N+1)$ persons, each
having a particle from the maximally entangled $(N+1)$ particle
GHZ-state
\begin{equation}
\ket{GHZ}=\frac{1}{\sqrt{2}}\left(\ket{\underbrace{00\dots0}_{N+1}}+\ket{\underbrace{11\dots1}_{N+1}}\right).
\end{equation}
One of the parties, whom we call the distributor, wants to
distribute a secret message among the remaining $N$ persons
(recipients) in such a way that all of them
have to cooperate in order to reconstruct the distributed message.
To achieve this task each participant performs a projection
measurement of his particle onto the eigenstates
$\ket{k_j,\phi_j}=1/\sqrt{2}(\ket{0}+k_j \exp(i\phi_j)\ket{1})$
($j=1,2,\dots,N+1$) of the operator
\begin{equation}
\widehat{\sigma}_j(\phi_j)=\sum_{k_j} k_j\ketbra{k_j,\phi_j},
\end{equation} where $k_j=\pm1$ denotes the local result in mode
$j$ for a preselected parameter $\phi_j$. The partners randomly
and independently choose $\phi_j=0$ or $\pi/2$. The
correlation function for an $(N+1)$-particle GHZ state is defined
as the expectation value of the product of the $(N+1)$ local results
and is therefore given by
\begin{equation}\label{eqn:ghzcorr}
E(\phi_j)=\langle\prod_j^{N+1}
\widehat{\sigma}_j(\phi_j)\rangle\;=\;\cos\left(\sum_j^{N+1}\phi_j\right).
\end{equation}
After the measurement each recipient publicly announces her/his
choice of $\phi_j$, but keeps the result $k_j$ secret. By doing so
the distributor can decide when this procedure leads to perfectly
(anti-)correlated results, i.e. when $|\cos(\sum_j^{N+1}\phi_j)|=1$,
which happens in half of the runs. In these instances each of the
recipients is able to infer the distributor's measurement result
$k_d$ if and only if he/she knows the measurement results $k_r$
($r=1,2,\dots,N$) of all the other recipients. Consequently the
cooperation of all the recipients is required and any subset of
the parties has no information on the secret. For a security proof
of this scheme against eavesdropping attacks see
\cite{qsecsh,scarani}.
An equivalent $(N+1)$-party scheme (see Fig. \ref{fig:setup}) for
the same task, in which only the sequential communication of a single
qubit is used, runs as follows.
\par
The distributor randomly prepares a qubit in one of the four
states $\ket{\pm x}, \ket{\pm y}$ of two mutually unbiased bases x
and y with
\begin{align}
\ket{\pm x}&=\frac{1}{\sqrt{2}}(\ket{0}\pm \ket{1})\\
\ket{\pm y}&=\frac{1}{\sqrt{2}}(\ket{0}\pm i \ket{1}).
\end{align}
Note that all these states are of the form
\begin{equation}
\ket{\chi}_i=\frac{1}{\sqrt{2}}\left(\ket{0}+e^{i\varphi_d}\ket{1}\right),
\end{equation}
where $\varphi_d$ is chosen to have one out of the four values
$\{0, \pi , \pi/2, 3\pi /2 \}$.
\begin{figure}
\includegraphics[scale=0.43,clip]{Schmid_fig_1.eps}
\caption{\label{fig:setup}Scheme for $(N+1)$ party single qubit
secret sharing. The distributor prepares a qubit in an initial
state and acts on it with the phase operator
$\widehat{\sigma}(\varphi_d)$. Afterwards the qubit is
sequentially communicated from one recipient to another each
acting on it with $\widehat{\sigma}(\varphi_j)$ as well. The last
recipient performs finally a measurement of the qubit leading to
the result $\pm 1$. In half of the cases the phases add up such
that the preparation and the measurement are perfectly
(anti-)correlated.}
\end{figure}
During the protocol the qubit is then sequentially communicated
from recipient to recipient each acting on it with the unitary
phase operator
\begin{equation}
\widehat{\sigma}_j(\varphi_j)=\begin{cases} \ket{0}\rightarrow \ket{0}&\\
\ket{1}\rightarrow e^{i\varphi_j}\ket{1},
\end{cases}
\end{equation}
where $\varphi_j \in \{0, \pi , \pi/2, 3\pi /2 \}$ as well.
Therefore having passed all parties the qubit will end up in the
state
\begin{equation}
\ket{\chi}_f=\frac{1}{\sqrt{2}}\left(\ket{0}+e^{i(\varphi_d+\sum_j
\varphi_j)}\ket{1}\right).
\end{equation}
After this communication stage each participant divides his action
for every run into two classes: a class X corresponding to the
choice of $\varphi_j\in \{0, \pi\}$ and a class Y corresponding to
$\varphi_j\in \{\pi/2, 3\pi /2\}$. Following this classification
they inform the distributor about the class-affiliation of their
action for each run. Note that they keep the particular value of
$\varphi_j$ secret. This corresponds to the announcement of $\phi_j$
while keeping $k_r$ secret in the GHZ-scheme. The order in which the
recipients $R_j$ announce the class-affiliation is randomly
determined by the distributor. The last recipient $R_N$ finally
measures the received qubit in the x basis. Therefore for her/him it
suffices to choose only between $\varphi_N=0$ or $\varphi_N=\pi/2$
and keep the outcome $k_N$ of the measurement secret \cite{remark1}.
The probability that $R_N$ detects the state $\ket{+x}$ is given by
\begin{equation}
p_+(\varphi_d,\varphi_{1},\dots,\varphi_N)=\frac{1}{2}(1+\cos
(\varphi_d+\sum_j^N\varphi_{j})),
\end{equation}
whereas the probability to detect the state $\ket{-x}$ is
\begin{equation}
p_{-}(\varphi_d,\varphi_{1},\dots,\varphi_N)=\frac{1}{2}(1-\cos
(\varphi_d+\sum_j^N\varphi_{j})).
\end{equation}
So the expectation value of the measurement result is
\begin{multline}\label{eqn:singlecorr}
A(\varphi_d,\varphi_{1},\dots,\varphi_N)=p_+(\varphi_d,\varphi_{1},\dots,\varphi_N)\\
-p_-(\varphi_d,\varphi_{1},\dots,\varphi_N)=\cos(\varphi_d+\sum_j^N
\varphi_j).
\end{multline}
From the broadcast class-affiliations of all introduced phase
shifts $\varphi_j$ the distributor is able to decide which runs
lead to perfect (anti-)correlations, i.e. when
$|\cos(\varphi_d+\sum_j^N\varphi_j)|=1$, which happens in half of
the runs. We call this a valid run of the protocol. In these cases
each of the recipients is able to infer the distributor's choice
of $\varphi_d$ if and only if he/she knows the choice of
$\varphi_j$ of the other recipients. Consequently the
collaboration of all recipients is necessary.
By associating the particular value of $\varphi_d$ with ``0'' and
``1'', say e.g. $\varphi_d \in \{0, \pi/2\} \,\widehat{=}\, 0$ and $\varphi_d
\in \{\pi, 3\pi/2\} \,\widehat{=}\, 1$, the parties are able to secretly
share a common bit string (key). This is possible because the
required correlations, based on local manipulation of relative
phases, can equivalently be established by communicating a single
qubit instead of employing many entangled qubits of a GHZ-type
state (compare equations (\ref{eqn:ghzcorr}) and
(\ref{eqn:singlecorr})).
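The equivalence can be illustrated with a simple classical simulation of the single-qubit protocol (a numerical sketch, not an experiment; the phase sets, the choice of the last recipient, and the outcome probabilities follow the text, while the variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_runs = 5, 20000                          # five recipients, as in the
phases = np.array([0.0, np.pi, np.pi / 2, 3 * np.pi / 2])  # (5+1)-party case

phi_d = rng.choice(phases, n_runs)            # distributor
phi_r = rng.choice(phases, (n_runs, N - 1))   # recipients R_1..R_{N-1}
phi_N = rng.choice([0.0, np.pi / 2], n_runs)  # R_N chooses only 0 or pi/2

corr = np.cos(phi_d + phi_r.sum(axis=1) + phi_N)
valid = np.isclose(np.abs(corr), 1.0)         # perfect (anti-)correlations

# R_N's measurement outcome, sampled with p_+ = (1 + cos(...)) / 2
k_N = np.where(rng.random(n_runs) < 0.5 * (1 + corr), 1, -1)

# in a valid run the outcome is fully determined by the sum of the phases,
# so knowing all other phi_j fixes phi_d; knowing fewer reveals nothing
print(valid.mean())                           # close to 1/2
```

In the valid runs, $k_N$ coincides with the sign of the correlation function, mirroring the GHZ case.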
In order to ensure the security of the protocol against
eavesdropping or cheating \cite{remark2}, the distributor arbitrarily
selects a certain number (which might depend on the required degree
of security) of particular valid runs. For this subset the
correlations are publicly compared, again in a random order of the
recipients. The public comparison will reveal any eavesdropping or
cheating strategy, as can easily be seen from the following
intercept/resend eavesdropping attack.
Imagine for instance the first recipient $R_1$ tries to infer the
secret without the help or the authorization of the remaining
participants by measuring the qubit sent by the distributor
\emph{before} acting on it with $\widehat{\sigma}_1(\varphi_1)$
and afterwards sending it ahead to the second recipient $R_2$. For
convenience, let us assume $R_1$ chooses for this measurement one
of the two protocol bases x or y. As the distributor applies
randomly one of four different phase shifts, the probability that
the state $\ket{\chi}_i$ is an eigenstate of the measurement
chosen by $R_1$ is 1/2. In the other half of the cases the
measurement result of $R_1$ will be completely random as it holds
that $\absl{\bracket{\pm y}{\pm x}}=\absl{\bracket{\pm x}{\pm
y}}=1/2$. This means that recipient $R_1$ gets no information
about the distributor's choice of $\varphi_d$. Furthermore this
cheating will cause an overall error of 25 \% in the correlations.
This is because, if $R_1$ has chosen the wrong basis, the final
state of the qubit after all $(N+1)$ introduced phase shifts will
be of the form
\begin{equation}
\ket{\chi}_{f\prime}=\frac{1}{\sqrt{2}}\left(\ket{0}+e^{i\sum_{j=1}^N
\varphi_j} \ket{1}\right)
\end{equation}
instead of $\ket{\chi}_{f}$.
The state $\ket{\chi}_{f\prime}$ will, when measured by the last
recipient $R_N$, give with probability 1/2 a result which is
incompatible with the expected correlations. An eavesdropper
applying such a strategy is faced with the same situation. The
usage of the bases x and y for an intercept/resend attack is
already the optimal one concerning the information gain on the
valid runs. One might only consider using the intermediate (or
so-called Breidbart) basis $\ket{\pm
b}=\frac{1}{\sqrt{2+\sqrt{2}}}(\ket{\pm x} + \ket{\pm
y})=\frac{1}{\sqrt{2}}(\ket{0}\pm e^{i \pi/4}\ket{1})$, which gives
the eavesdropper maximum information on all exchanged bits
\cite{hutek}. But even then the error rate necessarily rises to
25 \%. The security of the presented protocol against a general
eavesdropping attack follows from the proven security (see for
detail \cite{gisin}) of the well known BB84 protocol \cite{bb84}.
Each communication step between two successive parties can be
regarded as a BB84 protocol using the bases x and y. Any set of
dishonest parties in our scheme can be viewed as an eavesdropper
in BB84 protocol.
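The 25 \% error rate of the intercept/resend attack can be checked with a short simulation (a deliberately simplified model in which a wrong-basis measurement shifts the resent phase by $\pm\pi/2$ relative to the distributor's; all names are ours):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200000
phases = np.array([0.0, np.pi, np.pi / 2, 3 * np.pi / 2])

phi_d = rng.choice(phases, n)                  # distributor's phase
eve_x = rng.random(n) < 0.5                    # attacker measures in x or y

# does the basis match the distributor's class (x <-> {0, pi},
# y <-> {pi/2, 3pi/2})?
d_in_x = np.isin(phi_d, [0.0, np.pi])
match = d_in_x == eve_x

# on a match the attacker resends the correct state; on a mismatch the
# outcome is random and the resent phase is off by +-pi/2
resent = np.where(match, phi_d,
                  np.where(eve_x, 0.0, np.pi / 2) + np.pi * rng.integers(0, 2, n))

# probability that the result of a valid run is flipped by the attack
error = 0.5 * (1 - np.cos(resent - phi_d))
print(error.mean())                            # close to 0.25
```

Matched runs contribute no error, mismatched runs (half of them) contribute an error of 1/2, so the average settles at 1/4.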
The presented protocol was experimentally implemented for six
(5+1) parties, thus clearly showing the practicality and
user-friendliness of the scheme.
We encoded the qubit of the protocol in a single photon where the
basis states $\ket{0}$ and $\ket{1}$ are represented by the
polarization states of the photon $\ket{H}$ and $\ket{V}$
respectively, corresponding to horizontal (H) and vertical (V)
linear polarization. The single photons were provided by a
heralded single photon source. The setup is shown in
Fig.~\ref{fig:exp_setup}.
\begin{figure}
\includegraphics[scale=0.37,clip]{Schmid_fig_2.eps}
\caption{\label{fig:exp_setup}Setup for single qubit secret sharing.
Pairs of orthogonally polarized photons are generated via a type
II SPDC process in a BBO crystal. The detection of one photon from
the pair by D$_{\mathrm{T}}$ heralds the existence of the other
one used for the performance of the protocol. The initial
polarization state is prepared by the distributor by a polarizer
in front of the trigger detector and a half- and quarter wave
plate (HWP$_1$, QWP). Each of the recipients ($R_1 \dots R_5$)
introduces one out of four phase shifts according to a number from
a pseudo random number generator (RNG) by the rotation of YVO$_4$
crystals (C$_1 \dots$C$_5$). The last party analyzes additionally
the resulting polarization state of the photon with a half-wave
plate (HWP$_2$) and a polarizing beam splitter. }
\end{figure}
A pair of photons is created via a spontaneous parametric down
conversion (SPDC) process. As the photons of a pair are strongly
correlated in time the detection of one photon in D$_{\mathrm{T}}$
heralds the existence of the other one which is used for the
protocol. Thus from a coincidence detection between
D$_{\mathrm{T}}$ and D$_+$/D$_-$ within a chosen time window of 4
ns we assume the communication of a single photon only. For this
coincidence time window and singlecount rates of about 70000
$\mathrm{s}^{-1}$ in D$_+$/D$_-$ accidental coincidences were
negligible. The SPDC process was run by pumping a 2 mm long
$\beta$-barium borate (BBO) crystal with a blue single mode laser
diode (402.5 nm) at an optical output power of 10 mW. Type-II
phase matching was used in the degenerate case, leading to pairs of
orthogonally polarized photons at a wavelength of $\lambda=805$ nm
($\Delta \lambda \approx 6$ nm).
In order to prepare the initial polarization state a polarizer
transmitting vertically polarized photons was put in front of the
trigger detector D$_{\mathrm{T}}$ ensuring that only horizontally
polarized photons can lead to a coincidence detection. The
distributor was equipped with a motorized half-wave plate
(HWP$_1$) followed by a quarter-wave plate (QWP) at an angle of
$\dg{45}$. By rotation of HWP$_1$ to the angles $\dg{0},\dg{45}$
and $\dg{22.5},\dg{-22.5}$ he could transform the horizontally
polarized photons coming from the source to $\ket{\pm y}$ and
$\ket{\pm x}$. This corresponds to applying the phase-shifts
$\varphi_d\in\{\pi/2,3\pi/2\}$ and $\varphi_d\in\{0,\pi\}$
respectively. As the phase-shifts of the recipients had to be
applied independently from the incoming polarization state the
usage of standard wave plates was not possible. Therefore the
unitary phase operator was implemented using birefringent uniaxial
200 $\mu$m thick Yttrium Vanadate (YVO$_4$) crystals (C$_i$). The
crystals were cut such that their optic axis lies parallel to the
surface and aligned that H and V polarization states correspond to
their normal modes. Therefore by rotating the crystals along the
optic axis for a certain angle a specific relative phase shift was
applied independent from the incoming polarization state. An
additional YVO$_4$ crystal (C$_{comp}$, 1000 $\mu$m thick) was
used to compensate for dispersion effects. The last party
performed the projection measurement using a half-wave plate
(HWP$_2$) at an angle of $\dg{22.5}$ followed by a polarizing
beam splitter (PBS). The photons were detected at D$_+$/D$_-$ and
D$_{\mathrm{T}}$ by passively quenched silicon avalanche photo
diodes (Si-APD) with an efficiency of about 35 \%.
\begin{table}[t]
\begin{ruledtabular}
\begin{tabular}{l|c c c c c}
& $z_{total}$ & $z_{one}$ & $z_{raw}$ & $z_{val}$ & QBER [\%] \\
\hline
$\ket{\pm x}$ & 27501 & 9814 & 883 & 452 & $25.22 \pm 2.04$\\
$\ket{\pm y}$ & 24993 & 9188 & 784 & 409 & $30.32 \pm 2.27$\\
$\ket{\pm b}$ & 38174 & 13706 & 1137 & 588 & $30.27 \pm 1.89$\\
\end{tabular}
\end{ruledtabular}
\caption{\label{tab:res}Results of the simulation of an
intercept/resend eavesdropping strategy in the protocol and
intermediate bases. The attack was done by inserting a polarizer
between the distributor and the first recipient. In each case the
quantum bit error rate (QBER) rises to more than 25 \%, thereby
revealing the eavesdropper.}
\end{table}
The protocol was repeated $z_{total}=25000$ times. One run
consisted of rotating the crystals and opening the detectors for a
collection time window $\tau=200\,\mu$s, which together took about 1
s. Each crystal was thereby driven by a motor to one of four
different positions given by a pseudo random number. This means
the application of one of the four phase shifts at random by each
party. Due to Poissonian photon-counting statistics, in only
$z_{one}=9125$ of the $z_{total}$ runs exactly one photon was
detected at D$_{\mathrm{T}}$ within $\tau$. In these runs a coincidence
detection happened $z_{raw}=2107$ times which provided us with the
raw key. From this we extracted $z_{val}=982$ valid runs where
$|\cos(\varphi_d+\sum_j^N\varphi_j)|=1$ (506 times
$\cos(\varphi_d+\sum_j^N\varphi_j)=1$ and 476 times
$\cos(\varphi_d+\sum_j^N\varphi_j)=-1$) with a quantum bit error rate
(QBER) of $2.34 \pm 0.48$ \%.
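The quoted uncertainty is consistent with simple counting statistics on the valid runs (a back-of-the-envelope check, assuming the errors are binomially distributed; the error count is inferred by rounding):

```python
import numpy as np

z_val = 982                        # number of valid runs
n_err = round(0.0234 * z_val)      # ~23 erroneous valid runs

p_hat = n_err / z_val              # QBER estimate
sigma = np.sqrt(p_hat * (1 - p_hat) / z_val)   # binomial standard error
print(f"QBER = {100 * p_hat:.2f} +/- {100 * sigma:.2f} %")  # 2.34 +/- 0.48 %
```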
In order to show that the QBER increases significantly by an
eavesdropping attack we simulated an intercept/resend strategy by
inserting a polarizer between the distributor and the first
recipient. The attack was done in the protocol bases $\ket{\pm x},
\ket{\pm y}$ as well as in the intermediate basis $\ket{\pm b}$.
For the latter two the polarizer was additionally sandwiched by
two quarter-wave plates. The angular settings (1st QWP, polarizer,
2nd QWP) were $\{\dg{45},\dg{0},\dg{-45}\}$ and
$\{\dg{-45},\dg{22.5},\dg{45}\}$. For every choice of the basis
the QBER went up to at least 25 \% (or even higher due to other
experimental imperfections). The results are summarized in
Table~\ref{tab:res}.
In summary, we introduced a new scheme for solving the multi-party
communication task of secret sharing. Unlike other schemes
employing multi-particle entangled states, our protocol uses only
the sequential communication of a single qubit. As single-qubit
operations using linear optical elements and the analysis of
photon polarization states can be readily accomplished with
present-day technology, we were able to present the first
experimental demonstration of the protocol for six parties. This
is, to our knowledge, the highest number of actively performing
parties in a quantum protocol implemented to date, and the
first experimental implementation of a full quantum secret
sharing protocol. We also simulated an
intercept/resend eavesdropping attack and thereby showed the
resistance of the protocol against this kind of strategy through a
significantly increased error rate. In principle we see no
experimental barrier to extending the protocol to a
significantly higher number of participants. The achieved key
exchange rate could be easily increased by using fast
electro-optical phase modulators. Also the use of weak coherent
pulses of light containing much less than one photon on average,
instead of a heralded single photon source, is possible and might
further reduce the experimental effort. However, this would be at
the expense of the concept of communicating strictly one qubit and
can be also disadvantageous for the practical performance of the
protocol \cite{sanders, lut}. While we have realized our secret
sharing protocol using photons and polarization encoding,
alternative schemes, as proposed or realized in BB84-type
protocols, can be adopted as well. One might think of other forms
of information encoding, such as higher-dimensional (multilevel)
systems or continuous variables. Finally, we stress that by showing that our approach is
equivalent to the use of a many qubit GHZ state we opened the door
to the possible application of this method in other generic multi
party communication tasks.
M.\.{Z}. is supported by an FNP Profesorial Subsidy, and MNiI Grant
1 P03B 04927. The work is a part of MNiI/DAAD collaboration program
and was furthermore supported by German DFG and BMBF, the Bavarian
high-tech initiative, Swedish Research Council (VR), and the
European Commission through the IST FET QIPC RamboQ.
\section{Introduction}
Most of the planets found in {\it Kepler} multi-planetary systems lie outside mean-motion
resonances (MMRs). This seems incompatible with a formation process strongly affected by planet
migration and may either indicate that planets formed in-situ (e.g. \cite{2012ApJ...751..158H,
2013ApJ...770...24P,2014ApJ...780...53C}) or that planetary migration occurred in a turbulent
environment, where resonance capture is not guaranteed (e.g.
\cite{2012MNRAS.427L..21R,2013ApJ...778....7B}). However, the {\it Kepler} population also shows
the existence of a statistically significant number of planetary pairs close to resonances,
where the orbital period ratio is usually larger than the nominal value. It is generally
believed that these systems are near-resonant and located outside the libration domain.
The origin of these near-resonant systems is a dilemma. Planetary migration in a turbulent disk
\cite{2013MNRAS.434.3018P} can lead to near-resonant configurations, although it is not clear
whether this mechanism can reproduce the observed near-resonant distribution (e.g.
\cite{2013MNRAS.435.2256Q}). Moreover, this mechanism only seems to work for sub-Jovian bodies
that are nevertheless larger than the expected sizes of most of the {\it Kepler} planets.
In \cite{Ramos2017} we presented an analytical model that allows us to reproduce the general
trend of the resonance offset, provided the disk is significantly flared and has a small
scale height. This model strongly depends on the planetary masses, values which are not usually
known for the {\it Kepler} systems; thus we performed a Monte Carlo statistical analysis
with no direct application to a given planetary system. Even so, the method has proved promising
and capable of reproducing the offset distribution around both the 2/1 and 3/2 MMRs, as well as
predicting an increase in this value for planets closer to the central star.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{21.pdf}
\includegraphics[width=0.45\textwidth]{mapa_din.pdf}
\caption{{\bf Left:} Distribution of orbital period ratios, in the vicinity of the 2/1
resonance, as a function of the orbital period of the inner planet. Red circles identify planets
detected by transits or TTV, while those discovered by other methods are depicted with open
circles. The green circle indicates the location of Kepler-25 while the black one corresponds to
K2-24. Data was obtained from {\tt exoplanet.eu}. {\bf Right:} Dynamical map of $max(\Delta e)$
for two-planet systems with $m_1 = 0.05 m_{\rm Jup}$ and $m_2 = 0.10 m_{\rm Jup}$ orbiting a
central star of mass $m_0 = 1 M_{\odot}$, in the vicinity of the 2/1 MMR. The orbit of the outer
planet was initially circular with $a_2=1$ AU, and all the angular variables were chosen equal
to zero. The black continuous line marks the location of the zero-amplitude ACR-type librational
solutions, estimated from the simple analytical model (eq. (\ref{eq15})). White dots are the
result of three N-body simulations of resonance trapping.}
\label{fig1}
\end{figure}
Of all the {\it Kepler} systems near the 2/1 MMR, just two have fairly credible mass
estimates: Kepler-25 and K2-24, indicated as green and black circles in the
left hand frame of Figure \ref{fig1}. Red circles correspond to systems detected by transits or
Transit Time Variations (TTV). Bodies discovered by any other method are identified by open
circles. This distribution shows increasing $\Delta_{2/1}$ for planets closer to the star,
suggesting a smooth trend which, if confirmed, would indicate that the distributions
found in different populations belong to the same functional form, differing only in the
distance to the star. It also shows little correlation with either the detection method or the
stellar/planetary masses. Here, we apply the model of \cite{Ramos2017} to these two systems and
attempt to constrain the properties of the protoplanetary disk that are consistent with their
observed location and deviation from the exact resonance.
The right frame of Figure \ref{fig1} shows a dynamical map for the 2/1 commensurability. We
integrated a series of two-planet systems with initial conditions in a grid defined in the
$(P_2/P_1,e_1)$ plane and specifically chose $m_2/m_1 > 1$ to guarantee symmetric fixed points
for the resonant angles (e.g. \cite{2006MNRAS.365.1160B, 2008MNRAS.391..215M}). The color code
corresponds to the maximum value of $|e_1(t) - e_1(t=0)|$ (denoted as $max(\Delta e)$) attained
during a $10^3$-year integration. Darker (lighter) tones are associated with small (large)
variations in the eccentricity of the inner planet. Although this indicator does not measure
chaotic motion, it is an important tool to probe the structure of resonances and identify the
locus of stationary solutions (so-called ACR solutions, see \cite{2003ApJ...593.1124B,
2006MNRAS.365.1160B}). It also helps to identify the separatrix delimiting the librational from
the circulation domains (e.g. \cite{2015CeMDA.123..453R}). The black line shows the approximate
location of the family of zero-amplitude ACR solutions characterized by the simultaneous
libration of both resonant angles.
\section{The Ramos et al. Model}
In the following we will assume two planets of masses $m_1$ and $m_2$ orbiting a star of mass
$m_0$, with orbital periods $P_1 < P_2$, in the vicinity of a first-order $(p+1)/p$ MMR. We
define the
{\it resonance offset} $\Delta_{(p+1)/p}$ as
\begin{equation}
\Delta_{(p+1)/p} = \frac{P_2}{P_1} - \frac{(p+1)}{p},
\end{equation}
whose value indicates the distance from the exact resonance.
Different values of $\Delta_{(p+1)/p}$ are attained in different parts of the disk. In order to
study this, we use a resonant Hamiltonian neglecting secular perturbations to estimate the
resonance offset as a function of $e_1$, as well as a relation between the eccentricities of both
planets:
\begin{equation}
\Delta_{(p+1)/p} = C_1(\alpha) \; \frac{m_2}{m_0} \frac{1}{e_1}
\hspace*{0.5cm} ; \hspace*{0.5cm} e_2 = C_2(\alpha) \; \frac{m_1}{m_2} e_1,
\label{eq3}
\end{equation}
(see \cite{1988AJ.....96..400F,2004ApJ...611..517L}), where the coefficients $C_i$ depend
solely on $\alpha = a_1/a_2$ ($C_1 \simeq 1.5$ and $C_2 \simeq 0.29$). For a given resonance,
very low $e_i$ are necessary to obtain a significant deviation from the exact resonance.
However, since $e_i$ does not attain zero for the ACR solution, the singularity at $e_1=0$ is
never reached.
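These relations are easily sketched in code. The coefficients $C_1 \simeq 1.5$ and $C_2 \simeq 0.29$ are the values quoted above, while the masses in the example are illustrative choices:

```python
# Eq. (3) in code form for a first-order MMR; C1 = 1.5 and C2 = 0.29 are the
# coefficient values quoted in the text, and the masses below are illustrative.

def resonance_offset(e1, m2_over_m0, C1=1.5):
    """Offset Delta = C1 * (m2/m0) / e1 (first relation of Eq. 3)."""
    return C1 * m2_over_m0 / e1

def outer_eccentricity(e1, m1_over_m2, C2=0.29):
    """Forced outer eccentricity e2 = C2 * (m1/m2) * e1 (second relation)."""
    return C2 * m1_over_m2 * e1

m2_over_m0 = 0.10 * 9.546e-4   # a 0.10 m_Jup planet around a 1 M_sun star
for e1 in (0.1, 0.01, 0.001):
    print(f"e1 = {e1:5.3f}  ->  offset = {resonance_offset(e1, m2_over_m0):.4f}")
```

As stated above, only very low $e_1$ yields a significant deviation from the exact resonance.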
If the disk-driven planetary migration is sufficiently slow and smooth, we expect the orbital evolution to follow the pericentric branch into the librational domain and exhibit low-amplitude oscillations of the resonant angles. In such an ideal scenario, the final eccentricities and resonant offset $\Delta_{(p+1)/p}$ will depend on the relative strength between the eccentricity damping and orbital migration timescales (\cite{2002ApJ...567..596L,2006MNRAS.365.1160B})
$\tau_{e_i}$ and $\tau_{a_i}$, respectively. Thus, the final outcome of a resonance trapping will depend on the ratios ${\cal K}_i = \tau_{a_i}/\tau_{e_i}$, which we denote as the {\it K-factors}.
We performed three N-body simulations including an ad-hoc external acceleration (e.g. \cite{2008A&A...482..677C}), set the values of $\tau_{a_i}$ at certain prefixed amounts, varied ${\cal K}_i$ and analyzed their effects on the resonance offset (white dots in the right frame of Figure \ref{fig1}). We chose planetary masses $m_1 = 0.05 m_{\rm Jup}$ and $m_2 = 0.10 m_{\rm Jup}$, and $1M_\odot$ for the central star. The outer planet was initially at $a_2=1$ AU, in a circular orbit and with all angles equal to zero. The initial conditions were integrated for $10^5$ yrs. All values agree with the ACR loci given by expression (\ref{eq3}) deduced from the analytical resonance model. Although these simulations indicate that it is possible to obtain large values for the offset, they only appear attainable for K-factors of the order of $10^4$, much higher than predicted by linear models of Type-I disk-planet migration (e.g. \cite{2000MNRAS.315..823P, 2004ApJ...602..388T,2008A&A...482..677C}). In \cite{Ramos2017}, we found that it is possible to overcome this problem by assuming a significant flare for the disk (of
the order of $f \simeq 0.25$) in addition to a relatively small value for the disk aspect ratio ($H_0 \simeq 0.03$). This combination, in addition to moderately low values for the mass ratios of the planets, generates large deviations from exact resonance even with K-factors of the order of $10^2$, well within the classical limits.
We assume a laminar disk with surface density $\Sigma(r) = \Sigma_0 r^{-\alpha}$ and aspect-ratio $H_r(r) = H_0 r^{f}$, where $r$ is the distance to the central star in astronomical units. We will consider $H_0$, $\alpha$ and $f$ as unknown parameters that will be estimated in accordance with the observed dynamical characteristics of the planetary systems.
Following \cite{2002ApJ...565.1257T} and \cite{2004ApJ...602..388T}, orbital migration and
eccentricity damping timescales are approximated as
\begin{equation}
\tau_{a_i} = Q_a \frac{t_{{\rm wave}_i}}{H_{r_i}^2}
\hspace*{0.5cm} ; \hspace*{0.5cm}
\tau_{e_i} = Q_e \frac{t_{{\rm wave}_i}}{0.780}
\hspace*{0.5cm} ; \hspace*{0.5cm}
t_{{\rm wave}_i} = \frac{m_0}{m_i} \frac{m_0}{\Sigma(a_i) a_i^2} \frac{H_{r_i}^4}{\Omega(a_i)}.
\label{eq8}
\end{equation}
In these expressions, $H_{r_i}$ is the disk aspect-ratio in the position of each planet and $\Omega(a_i)$ their orbital frequency. $Q_e$ is a constant introduced by \cite{2006A&A...450..833C} in order to reproduce the eccentricity damping rates from hydro-dynamical simulations, while $Q_a = Q_a(\alpha)$ is a function of the surface density profile. Finally, $t_{{\rm wave}_i}$ is the typical timescale of planetary migration.
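The timescales of equation (\ref{eq8}) can be sketched numerically. The disk parameters $\Sigma_0$, $\alpha$, $H_0$, $f$ and the constants $Q_a$, $Q_e$ below are illustrative placeholders rather than values adopted in this work; units are $M_\odot$, AU and yr, so that $\Omega(a) = 2\pi a^{-3/2}$:

```python
import math

# Eq. (8) in code form, in units of M_sun, AU and yr (G M_sun = 4 pi^2, so
# Omega = 2 pi a^{-3/2}).  Sigma0, alpha, H0, f, Q_a and Q_e below are
# illustrative placeholders, not values adopted in this work.

def timescales(m_planet, a, m0=1.0, Sigma0=1.7e-4, alpha=1.0,
               H0=0.05, f=0.25, Q_a=1.0, Q_e=0.1):
    Sigma = Sigma0 * a**(-alpha)           # surface density [M_sun / AU^2]
    H_r = H0 * a**f                        # aspect ratio at the planet
    Omega = 2.0 * math.pi * a**(-1.5)      # orbital frequency [1/yr]
    t_wave = (m0 / m_planet) * (m0 / (Sigma * a**2)) * H_r**4 / Omega
    tau_a = Q_a * t_wave / H_r**2          # migration timescale
    tau_e = Q_e * t_wave / 0.780           # eccentricity damping timescale
    return tau_a, tau_e, t_wave

tau_a, tau_e, t_wave = timescales(m_planet=0.10 * 9.546e-4, a=1.0)
print(f"t_wave = {t_wave:.3e} yr,  K = tau_a/tau_e = {tau_a / tau_e:.0f}")
```

Note that these prescriptions give $\tau_{a_i}/\tau_{e_i} = 0.78\,(Q_a/Q_e)\,H_{r_i}^{-2}$, so a small aspect ratio directly inflates the K-factor.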
Recently, the classical Type-I migration models were revised by \cite{2014AJ....147...32G}, who
considered the contribution of eccentricity damping to changes in the semimajor axis associated
with the (partial) conservation of angular momentum. They found that the effective characteristic
timescale for the orbital evolution should actually be given by $\tau_{{a_{eff}}_i} =
\left(\tau_{a_i}^{-1} + 2 \beta e_i^2 \tau_{e_i}^{-1}\right)^{-1}$, where $\tau_{a_i}$ and
$\tau_{e_i}$ maintain the same form as equations (\ref{eq8}) and $\beta$ is a factor that
quantifies the fraction of the orbital angular momentum preserved during the migration. This
modified migration timescale changes the K-factor, leading to a new ``effective'' form. This
revised migration model, together with the analytical resonant Hamiltonian led in
\cite{Ramos2017} to a relation between the disk properties and resonance offset in the form:
\begin{equation}
\Delta_{(p+1)/p}^2 =
\frac{2}{D} \left( C_1 \frac{m_2}{m_0} \right)^2
\frac{ \left[(1-D(1+\beta)){\cal K}_2 \left( C_2 \frac{m_1}{m_2}\right)^2 +
(B+D\beta){\cal K}_1 \left( \frac{\tau_{a_2}}{\tau_{a_1}} \right) \right]}
{ 1 - \left( \frac{\tau_{a_2}}{\tau_{a_1}} \right)} ,
\label{eq15}
\end{equation}
where the parameters $B$ and $D$ depend both on planetary mass ratio and the resonance under
consideration
\begin{equation}
D = \frac{1}{(p+1)} \left( 1 + \frac{a_1}{a_2} \frac{m_2}{m_1} \right)^{-1}
\;\; ; \;\;
B = \frac{m_1n_2a_2}{m_2n_1a_1}+D.
\label{eq5}
\end{equation}
The importance of expression (\ref{eq15}) lies in the fact that it shows that large values of
the offset may be obtained if the denominator is sufficiently small, independently of the
K-factors. Since the ratio $\tau_{a_2}/\tau_{a_1}$ depends on the planetary mass ratio (as well
as on parameters $f$ and $\alpha$), it is possible to obtain values of $\Delta_{(p+1)/p}$
consistent with the observed planetary systems, as long as the parameters lie within certain
values.
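This behaviour can be checked by evaluating equation (\ref{eq15}) directly. In the sketch below the planetary masses match the dynamical map of Figure \ref{fig1}, while $\beta$, the K-factors and the timescale ratio $\tau_{a_2}/\tau_{a_1}$ are illustrative choices:

```python
# Direct evaluation of the offset relation for the 2/1 MMR (p = 1).  The
# masses match the dynamical map of Figure 1; beta, the K-factors and the
# timescale ratio tau_a2/tau_a1 are illustrative choices.

def offset_squared(m1, m2, m0, K1, K2, tau_ratio, beta=0.3,
                   p=1, C1=1.5, C2=0.29):
    alpha = (p / (p + 1.0))**(2.0 / 3.0)              # a1/a2 at exact resonance
    n_ratio = p / (p + 1.0)                           # n2/n1
    D = (1.0 / (p + 1.0)) / (1.0 + alpha * m2 / m1)   # Eq. (5)
    B = (m1 / m2) * n_ratio / alpha + D               # Eq. (5)
    num = ((1.0 - D * (1.0 + beta)) * K2 * (C2 * m1 / m2)**2
           + (B + D * beta) * K1 * tau_ratio)
    return (2.0 / D) * (C1 * m2 / m0)**2 * num / (1.0 - tau_ratio)

# Masses in m_Jup; m0 = 1 M_sun ~ 1047.6 m_Jup.
for tau_ratio in (0.5, 0.9, 0.99):
    d2 = offset_squared(m1=0.05, m2=0.10, m0=1047.6,
                        K1=100.0, K2=100.0, tau_ratio=tau_ratio)
    print(f"tau_a2/tau_a1 = {tau_ratio:4.2f}  ->  offset = {d2**0.5:.4f}")
```

Pushing $\tau_{a_2}/\tau_{a_1}$ toward unity inflates the offset at fixed K-factors, which is precisely the effect exploited above.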
\section{Application to Kepler-25 and K2-24 systems}
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth,clip=true]{mapas.pdf}
\caption{Results of Monte Carlo simulations for Kepler-25 (upper frames) and K2-24 (bottom
panels) for different density slope values: $\alpha=0.5$ on the left and $\alpha=1.5$ on the
right. Red colors indicate more probable pairs $(H_0,f)$ for an observed $\Delta_{2/1}$ than
blue ones.}
\label{mapas-keplers}
\end{figure}
\begin{table}[h!]
\centering
\caption{Mass measurements and orbital periods for Kepler-25 b,c and K2-24 b,c.}
\begin{tabular}{l r r r r r }
\hline\hline
System & $m_1$ [$m_\oplus$] & $m_2$ [$m_\oplus$] & $P_1$ [d] & $P_2/P_1$ & $m_0
[M_\odot]$\\
\hline
Kepler-25 & $9.6 \pm 4.2$ & $24.6 \pm 5.7$ & $6.24$ & $2.0390$ & $1.22 \pm 0.06$ \\
K2-24 & $21.0 \pm 5.4$ & $27.0 \pm 7.0$ & $20.89$ & $2.0284$ & $1.12 \pm 0.05$ \\
\hline
\end{tabular}
\label{tab1}
\end{table}
Of 165 planetary pairs lying in the vicinity of the 2/1 resonance and with $P_1 \leq 100$ days, in only 18 cases have the masses of both bodies been measured or estimated with some accuracy. Of these, only 10 have orbital period ratios in the interval $P_2/P_1 \in [2.0, 2.10]$, and may thus be cataloged as members of the (near)-resonant region. This number declines further once we note that 7 planetary pairs have at least one member with $m_i > 50 m_\oplus$, more than sufficient to open a gap in the disk and to have migrated following a Type-II scenario. Since our model is based on analytical prescriptions for laminar Type-I migration, these systems are beyond the scope of our work.
Of the three remaining candidates, HD 219134 has recently been questioned. The first reference to this system appears in \cite{2015A&A...584A..72M}, who analyzed a total of 98 nightly averaged RV observations obtained with HARPS-N and found evidence of 4 low-mass planets. The outer member of the (near)-resonant pair was not detected. Later, \cite{2015ApJ...814...12V} analyzed a total of 276 RV Doppler measurements and identified a total of 6 planets in this system. \cite{2016arXiv160205200J} found a substantial periodicity in the RV data due to stellar rotation, with a period of 22.8 days, a value practically equal to $P_1/2$. Although the authors do not believe there is sufficient evidence to rule out the existence of the inner planet, the amplitude generated by the planet in the RV signal may be affected by stellar rotation and thus the mass deduced for $m_1$ could in fact be substantially lower.
This leaves us with two systems, Kepler-25 and K2-24. Table \ref{tab1} gives the masses and orbital periods of both systems, together with the stellar masses and the respective standard deviations. Both systems have nominal mass ratios larger than unity (i.e. $m_2/m_1 > 1$) and are thus candidates for resonant trapping in the 2/1 commensurability. We therefore proceeded to analyze whether the observed value of the resonance offset $\Delta _{2/1}$ could be achieved using our model with the assumption of a laminar flared disk.
Since the value of $\Delta_{2/1}$ is known, we can invert expression (\ref{eq15}) to obtain explicitly the value of $H_0$ as a function of the masses and the disk flare:
\begin{equation}
H_0^2 = \frac{2}{D \, \Delta_{\rm obs}^2} \left( C_1 \frac{m_2}{m_0} \right)^2
\frac{ \left[(1-D(1+\beta)){\cal K}_2^* \left( C_2 \frac{m_1}{m_2}\right)^2 +
(B+D\beta){\cal K}_1^* \left( \frac{\tau_{a_2}}{\tau_{a_1}} \right) \right]}
{ 1 - \left( \frac{\tau_{a_2}}{\tau_{a_1}} \right)} ,
\label{eq16}
\end{equation}
where $\Delta_{\rm obs}$ is the observed value of the offset and ${\cal K}_i^* = 0.78 (Q_a/Q_e) a_i^{-2f}$ are $H_0$-normalized expressions for the K-factors of each planet. If the planetary masses are known even with moderate accuracy, it is then possible to estimate relations between $f$ and $H_0$ leading to the observed values of the offset. Since the uncertainties in these values may be significant, we chose a statistical Monte Carlo approach, incorporating the errors in the masses into the calculation.
For each system we ran a series of 1000 sets of $(m_1,m_2)$ drawn from normal distributions with means and variances as given in Table \ref{tab1}. From the values of each run, we then determined the distribution of values of $(H_0,f)$ according to (\ref{eq16}), for fixed values of $\alpha$. Results are shown in color scale in Figure \ref{mapas-keplers}, where the top (bottom) panels correspond to Kepler-25 (K2-24), respectively. Left plots were drawn assuming $\alpha=0.5$ while the right graphs show the results obtained considering $\alpha=1.5$. As can be noted, the outcomes appear only weakly dependent on the surface density profile, so we plot only these two extreme cases; the result for $\alpha = 1$ simply interpolates between them. The color code corresponds to the probability of a pair $(H_0,f)$ being a solution of equation \eqref{eq16}: blue colors mean low probability, while red is associated with a higher frequency of outcomes.
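The sampling loop can be sketched as follows. Note that the function \texttt{h0\_from\_masses} below is only a hypothetical stand-in proportionality used to exercise the procedure, not the full inversion of equation (\ref{eq16}):

```python
import random
import statistics

# Sketch of the Monte Carlo step: draw 1000 mass pairs from the normal
# distributions of Table 1 for Kepler-25 and accumulate a derived quantity.
# h0_from_masses is a HYPOTHETICAL stand-in (a bare proportionality), not the
# full inversion of the offset relation; the mass conversions are approximate.

random.seed(42)

def h0_from_masses(m1, m2, m0=1.22 * 333000.0, delta_obs=0.0390, C1=1.5):
    # Placeholder inversion: H0 ~ C1 * (m2/m0) / Delta_obs (illustrative only).
    return C1 * (m2 / m0) / delta_obs

samples = []
for _ in range(1000):
    m1 = random.gauss(9.6, 4.2)    # inner mass [m_earth], Table 1
    m2 = random.gauss(24.6, 5.7)   # outer mass [m_earth], Table 1
    if m1 <= 0 or m2 <= 0:
        continue                   # discard unphysical draws
    samples.append(h0_from_masses(m1, m2))

print(len(samples), statistics.mean(samples))
```

The accumulated samples are then binned in the $(H_0,f)$ plane to produce the color maps of Figure \ref{mapas-keplers}.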
For Kepler-25, most of the positive results occur along a broad curve with low values of $H_0$ and large values of the disk flare $f$. This is consistent with the Monte Carlo simulations presented in \cite{Ramos2017} for the overall near-resonant population and systems with under-determined masses. Notice that the large uncertainty in $m_1$ does not significantly affect the result, and the locus of values of $(H_0,f)$ consistent with the observed offset remains fairly restricted.
Results for K2-24 appear less well defined, which implies that a wide range of disk parameters would lead to the observed resonance offset. In part this is due to the lower, and more easily achieved, value of $\Delta_{2/1}$, but also to the particular planetary masses. On one hand, the individual masses are larger than for Kepler-25, which in itself leads to a wider resonance domain. On the other hand, the mass ratio $m_2/m_1$ is almost unity which, as seen from equation (\ref{eq15}), also implies a larger offset even for low flare values and/or large $H_0$.
\section{Conclusions}
We present an application of the \cite{Ramos2017} model for the resonance offset to two planetary systems (Kepler-25 and K2-24) close to the 2/1 MMR with significant observed offsets and fairly well-established planetary masses. We find that the disk parameters necessary to explain the deviation from exact resonance under laminar Type-I migration are similar to those predicted by \cite{Ramos2017}, namely a low disk scale height and a significant flare index. This agreement indicates that the proposed mechanism could indeed have played a dominant role in determining the observed distribution of (near)-resonant exoplanets.
\ack{This work has been supported by research grants from ANCyT, CONICET and Secyt-UNC. We are
grateful to IATE and CCAD (UNC) for extensive use of computational facilities.}
\bibliographystyle{iopart-num}
\section{\label{sec:introduction}Introduction}
Einstein's theory of general relativity \cite{einstein1916} has enjoyed overwhelming success since its inception, predicting phenomena that have been observed in our solar system and beyond, including the perihelion precession of Mercury, relativistic effects in the Hulse-Taylor binary pulsar B1913+16, and the existence of black holes. Despite its successes, a number of outstanding issues persist, both in the special and general theories. These include\footnote{This list is by no means exhaustive.}: the paradoxes associated with relativistic rotating frames of reference \cite{relrotbook}; the existence of closed timelike curves \cite{lobocrawford} in numerous exact solutions of the Einstein field equations; the inability to unify the electromagnetic and gravitational fields \cite{einstein1945,pais1982}; the absence of a consistent quantum theory of gravity \cite{woodard2009}; the need to postulate dark matter in order to resolve the observed acceleration discrepancies in astrophysical systems \cite{turner,silk,brainerdetal}; the dark energy problem in the $\Lambda\mathrm{CDM}$ model \cite{copelandetal}; and, the apparent anomalous acceleration observed in radio Doppler and ranging data from the Pioneer missions \cite{andersonetal1988}.
Consequently, many theories have been proposed that attempt to reconcile relativity theory with these outstanding problems. The earliest examples of such theories include the unified field theories of Einstein \cite{einstein1945,pais1982} and others \cite{kaluza1921,weyl1922,klein1926,eddington1954,schrodinger1950,pauli1958}. More recent examples include new formulations of special relativity in rotating reference frames \cite{relrotbook}. In gravitational physics, contemporary examples include modifications of general relativity that may eliminate the need for dark matter \cite{finzi,tohline,sanders1984,sanders1986,goldman,kuhnandkruglyak,milgrom1983a,milgrom1983b,milgrom1983c,bekenstein2004} and dark energy (see Reference \cite{copelandetal} for a review of the various theories that incorporate general forms of dark energy such as quintessence, K-essence, tachyon, phantom, and dilatonic models). To date, none of these theories have superseded Einstein's original theory \cite{einstein1916}, in either the special or general formulations.
In the following we propose a new formulation of the gravitational field equations based on general relativity that addresses a number of these aforementioned issues. It is based on the preservation of the local properties of time under arbitrary four-dimensional coordinate transformations. We motivate the theory by observing that the well-known paradoxes associated with time in relativistic rotating frames \cite{relrotbook} and certain exact solutions of Einstein's equations \cite{lobocrawford,nahin} are resolved by demanding that the terms associated with physical space and time measurements remain separately invariant under the full set of coordinate transformations, and not just the restricted set of the traditional theory. As a result, we are forced to introduce four new degrees of freedom into the space-time structure, which are identified with the electromagnetic field and whose contribution to the space-time interval sums identically to zero. The resulting field equations resemble the traditional Einstein-Maxwell theory, however, a coupling between the gravitational and electromagnetic fields emerges. We show that this theory provides a rich framework for addressing a number of outstanding issues in contemporary gravitational physics.
\section{\label{sec:paradoxes}Paradoxes Associated with Time in Relativity}
In this section we briefly discuss two closely-related paradoxes associated with time in Einstein's theory of relativity: the time discontinuity paradox in relativistic rotating frames \cite{relrotbook} and the existence of Closed Timelike Curves (CTCs) \cite{lobocrawford,nahin} in certain exact rotating solutions of Einstein's equations.
\subsection{Time Discontinuity Paradox}
The time discontinuity (or time lag) arises when one tries to establish standard simultaneity along a closed curve in a rotating coordinate system. Upon traversing a complete circuit in such a frame of reference, an observer discovers that a clock situated at the curve's origin is not synchronized with itself. This is often treated in the context of special relativity alone. According to the traditional viewpoint (see, for example, Refs. \cite{dieks, weber,anandan,bergiaguidone,cranoretal,rizzitartaglia}) special relativity is valid in rotating frames of reference and the time discontinuity is only an apparent problem. This traditional approach maintains that multiple clock readings at a given event, depending on the chosen synchronization procedure, are indeed acceptable. Furthermore, it is argued that the time gap is no more problematic than the discontinuity in time at the International Date Line or the coordinate discontinuity in angle at $2\pi$. On the other hand, many authors have questioned the validity of special relativity in rotating frames of reference and have attempted to modify Einstein's postulates for rotational motion. For example, Klauber \cite{klauber2} and, independently, Selleri \cite{selleri2} contend that the synchronization procedure cannot be chosen freely for the rotating frame and propose a unique (non-Einstein) synchronization along the circumference.
Consider a Minkowski space-time with cylindrical coordinates $\{T,R,\Phi,Z\}$. The line element is given by:
\begin{equation}
\label{flatlineelement}
ds^2=c^2dT^2-dR^2-R^2d\Phi^{2}-dZ^2.
\end{equation}
The coordinate transformation from the laboratory frame $\{T,R,\Phi,Z\}$ to the rotating frame $\{t,r,\phi,z\}$ is given by:
\begin{equation}
\label{coordtransformation}
T=t, \quad R=r, \quad \Phi=\phi+\omega t, \quad Z=z,
\end{equation}
where $\omega$ is the angular velocity of the rotating system as observed from the laboratory frame. Substituting (\ref{coordtransformation}) into (\ref{flatlineelement}) gives:
\begin{equation}
\label{rotlineelement}
ds^2=\gamma^{-2}c^{2}dt^{2} - 2c\beta rd\phi dt-dr^{2}-r^{2}d\phi^{2} -dz^{2},
\end{equation}
where $\beta=\omega r /c<1$ and $\gamma = (1-\beta^2)^{-1/2}$. Note that the condition $\beta<1$ is arbitrarily imposed on the coordinate transformation.
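That the substitution (\ref{coordtransformation}) indeed carries the flat line element (\ref{flatlineelement}) into (\ref{rotlineelement}) can be verified numerically, for instance:

```python
import random

# Numeric check: substituting T = t, R = r, Phi = phi + omega*t, Z = z into
# the flat line element reproduces the rotating-frame line element, at
# randomly chosen events and displacements (c = 1, with omega*r < 1).

random.seed(0)
for _ in range(100):
    r = random.uniform(0.1, 1.0)
    omega = random.uniform(0.0, 0.9)
    dt, dr, dphi, dz = (random.uniform(-1.0, 1.0) for _ in range(4))
    beta = omega * r
    dPhi = dphi + omega * dt          # differential of the transformation
    ds2_flat = dt**2 - dr**2 - r**2 * dPhi**2 - dz**2
    ds2_rot = ((1.0 - beta**2) * dt**2 - 2.0 * beta * r * dphi * dt
               - dr**2 - r**2 * dphi**2 - dz**2)
    assert abs(ds2_flat - ds2_rot) < 1e-12
print("ok")
```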
A self-consistency problem emerges when one attempts to define simultaneity globally in the rotating frame of reference. Consider two clocks, A and B, separated by the infinitesimal distance $rd\phi$ along the circumference in the rotating frame. In order to define standard simultaneity between the two (infinitesimally near) clocks, the time on clock B must be adjusted by the amount \cite{landaulifshitz}:
\begin{equation}
\label{localtimelag}
c\Delta t = -\beta\gamma^{2}rd\phi.
\end{equation}
The well-known expression for the time discontinuity is obtained by integrating around the entire circumference of the circle:
\begin{equation}
\label{timegap}
\Delta t = -\frac{2\pi\beta\gamma^{2}r}{c}.
\end{equation}
Thus, if one sends a light ray from clock A around the entire circumference of the circle, establishing standard simultaneity along the way, one discovers that the clock at A is not synchronized with itself. While nearby clocks on an open curve can be synchronized by adjusting the readings of the various clocks according to Eq. (\ref{localtimelag}), this procedure cannot be extended globally since $\Delta t$ in Eq. (\ref{localtimelag}) is not a total differential in $r$ and $\phi$. That is to say, the synchronization procedure is path dependent in the rotating frame of reference.
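Equation (\ref{timegap}) is easily evaluated. Restoring SI units, a circuit around the Earth's equator yields the familiar Sagnac-scale lag of roughly $2\times 10^{-7}$ s:

```python
import math

# The time discontinuity Delta_t = -2 pi beta gamma^2 r / c, evaluated for a
# synchronization circuit around the Earth's equator (illustrative case).

C = 299_792_458.0  # speed of light [m/s]

def time_gap(omega, r):
    """Synchronization gap after one full circuit at radius r [seconds]."""
    beta = omega * r / C
    gamma2 = 1.0 / (1.0 - beta**2)
    return -2.0 * math.pi * beta * gamma2 * r / C

omega_earth = 2.0 * math.pi / 86164.0   # sidereal rotation rate [rad/s]
r_eq = 6.378e6                          # equatorial radius [m]
print(time_gap(omega_earth, r_eq))      # about -2.07e-7 s
```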
\subsection{Closed Timelike Curves (CTCs)}
Closed timelike curves are also the subject of much debate in Einstein's theory of relativity. A CTC is a future-directed timelike curve in the space-time manifold that runs smoothly back into itself. As is well known \cite{nahin}, the existence of CTCs suggests that time travel is compatible with general relativity, since an observer may evolve in time within the future light cone and return to an event that coincides with an earlier departure from that event. A number of exact solutions of the Einstein field equations exhibit nontrivial CTCs, including a rapidly rotating infinite cylinder \cite{vanstockum, tipler}, the G\"{o}del universe \cite{godel}, a Kerr black hole \cite{carter}, and spinning cosmic strings \cite{deseretal,gott}. While the G\"{o}del universe, the cosmic strings and the van Stockum cylinder all possess properties that may be deemed unphysical, the low angular momentum Kerr black hole is believed to possess physical relevance: it is the unique final state of gravitational collapse \cite{wald}. Therefore, CTCs cannot be dismissed simply as mathematical curiosities. Furthermore, the proliferation of new solutions that exhibit CTCs \cite{mallett,bonnor2,bonnor3,bonnorward,bicakpravda} suggests that their appearance in general relativity poses a critical problem to the foundations of physics \cite{bonnor}.
Hawking \cite{Hawking} has suggested that quantum effects prevent the emergence of CTCs. In particular, he showed that divergences in the energy momentum tensor caused by vacuum polarization effects create singularities prior to the appearance of CTCs. Based on these results Hawking proposed the chronology protection conjecture: the laws of physics do not allow the appearance of CTCs. Kim and Thorne \cite{kimthorne} have suggested otherwise, namely, that the divergences in the energy momentum tensor may be cut off by quantum gravitational effects. Without a well-defined theory of quantum gravity this matter is still open to debate \cite{lobocrawford}.
\section{\label{sec:resolution}Resolution of the Paradoxes}
Neither the time discontinuity nor CTCs have been experimentally observed. According to the traditional viewpoint the time discontinuity paradox is only an apparent problem, whereas the existence of CTCs is more fundamental. Deviating from this viewpoint, we assert that both of these paradoxes are real and are evidence of a fundamental crisis in relativity theory. Therefore, the inability to experimentally realize both the time discontinuity and CTCs forces us to modify general relativity in a manner that is consistent with our physical experience of time. In this section we show that the modification we seek follows from a single postulate, namely, the preservation of the local properties of time. In addition, we show that four new degrees of freedom are required to satisfy this postulate.
The space-time interval of conventional general relativity is given by\footnote{In this paper, Greek indices will run from $0\ldots 3$, lower-case Latin indices will run from $1\ldots 3$, and $c=1$, unless otherwise stated.}:
\begin{equation}
\label{eq:metric1}
ds^2 = g_{\mu\nu}dx^{\mu}dx^{\nu},
\end{equation}
where $g_{\mu\nu}$ is symmetric and $ds^2$ is invariant under the full set of general coordinate transformations:
\begin{eqnarray}
\label{eq:general_transformations}
x^{\prime 0} &=& f\left(x^{\alpha}\right) \nonumber \\
x^{\prime i} &=& x^{\prime i}\left(x^{\alpha}\right).
\end{eqnarray}
As is well-known \cite{landaulifshitz,moller1955}, the space-time interval may be decomposed into two separate terms representing the contributions of physical time and space measurements, respectively:
\begin{equation}
\label{eq:metric2}
ds^2 = d\sigma^2 - dl^2,
\end{equation}
where
\begin{eqnarray}
\label{eq:physical_time_space}
d\sigma^2 &\equiv& g_\mu g_\nu dx^\mu dx^\nu \nonumber \\
-dl^2 &\equiv&\left(g_{\mu\nu} - g_\mu g_\nu\right)dx^\mu dx^\nu
\end{eqnarray}
and we have introduced the notation:
\begin{equation}
g_\mu \equiv \frac{g_{0\mu}}{\sqrt{g_{00}}}.
\end{equation}
Note that $-dl^2$ is often written as a sum over the spatial coordinates only:
\begin{equation}
-dl^2 \equiv\left(g_{ij} - g_i g_j\right)dx^i dx^j.
\end{equation}
However, the definition in Equation (\ref{eq:physical_time_space}) is equivalent since $g_{00}-g_0g_0 = g_{0i}-g_0g_i=0$. Moreover, writing the spatial distance over all four coordinates as in Equation (\ref{eq:physical_time_space}) makes the relationship (\ref{eq:metric2}) explicit:
\begin{equation}
\label{eq:physical_space}
-dl^2 = ds^2 - d\sigma^2.
\end{equation}
It is important to emphasize that Equation (\ref{eq:metric2}) follows from Equation (\ref{eq:metric1}) by adding and subtracting the quantity $g_i g_jdx^i dx^j$ (adding zero):
\begin{eqnarray}
\label{eq:addzero}
ds^2 &\rightarrow& \left[g_{00}\left(dx^{0}\right)^2 + 2g_{0i}dx^0dx^i+ g_i g_jdx^i dx^j\right] +\left[g_{ij}dx^i dx^j - g_i g_jdx^i dx^j \right] \nonumber \\
&=&ds^2 + 0.
\end{eqnarray}
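A direct numeric check of this decomposition, using the rotating-frame metric (\ref{rotlineelement}) with $c=1$ and an arbitrary coordinate displacement:

```python
import random

# Numeric check of the split ds^2 = dsigma^2 - dl^2 for the rotating-frame
# metric (coordinates t, r, phi, z; c = 1).  The sample values of r, beta
# and the displacement dx are arbitrary.

def metric(r, beta):
    g = [[0.0] * 4 for _ in range(4)]
    g[0][0] = 1.0 - beta**2          # g_tt
    g[0][2] = g[2][0] = -beta * r    # g_{t phi}
    g[1][1] = -1.0                   # g_rr
    g[2][2] = -r * r                 # g_{phi phi}
    g[3][3] = -1.0                   # g_zz
    return g

random.seed(1)
g = metric(r=2.0, beta=0.4)
g_mu = [g[0][mu] / g[0][0]**0.5 for mu in range(4)]   # g_mu = g_{0 mu}/sqrt(g_00)

dx = [random.uniform(-1.0, 1.0) for _ in range(4)]
ds2 = sum(g[m][n] * dx[m] * dx[n] for m in range(4) for n in range(4))
dsig2 = sum(g_mu[m] * g_mu[n] * dx[m] * dx[n] for m in range(4) for n in range(4))
dl2 = sum((g_mu[m] * g_mu[n] - g[m][n]) * dx[m] * dx[n]
          for m in range(4) for n in range(4))
print(abs(ds2 - (dsig2 - dl2)))   # zero to rounding: the pieces sum back to ds^2
```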
The terms $d\sigma^2$ and $-dl^2$ are not separately invariant under the full set of general coordinate transformations (\ref{eq:general_transformations}), but are each invariant under a `gauge transformation of the gravitational potentials' \cite{moller1955}, the restricted group of transformations for which the functions defining the transformations of the spatial coordinates do not depend on time:
\begin{eqnarray}
\label{eq:restricted_transformations}
x^{\prime 0} &=& f\left(x^{\alpha}\right) \nonumber \\
x^{\prime i} &=& x^{\prime i}\left(x^{j}\right).
\end{eqnarray}
This is the reason the time discontinuity paradox emerges. The term that is responsible for the time discontinuity (\ref{localtimelag}) is a result of the time dependence of the angular coordinate transformation in Equation (\ref{coordtransformation}). In other words, the local properties of time are not preserved under the transformation (\ref{coordtransformation}) from the laboratory frame to the rotating frame. Similarly, for `permanent' gravitational fields, the local properties of time are not fixed for all matter distributions, and therefore CTCs can emerge in valid solutions of Einstein's equations even though they are not observed in standard systems of reference. There is no principle or mechanism in gravitational theory proper that preserves the local properties of time, given a standard system of reference.
\textit{Therefore, we advance the postulate that the local properties of time are externally constrained in general relativity and must be preserved in the formulation of the theory.} According to this postulate, the local properties of time must remain invariant under the general set of coordinate transformations (\ref{eq:general_transformations}) and not just the restricted set (\ref{eq:restricted_transformations}) of traditional general relativity. To be sure, this postulate does not preclude a physical time with local properties different from those observed in our standard system of reference (for example, it does not preclude path-dependent synchronization). Rather, the postulate forbids a general coordinate transformation (\ref{eq:general_transformations}) from modifying the local properties of time, whatever they may be prior to the transformation. This will be our primary point of departure from traditional relativity, which, as we have seen above, possesses a non-invariant $d\sigma$. In the remainder of the paper, we reformulate general relativity to satisfy this postulate and examine its consequences.
The term $d\sigma = g_\mu dx^\mu$ is not a scalar invariant because $g_\mu$ does not transform as a four-vector. However, it can be made an invariant if we make the following substitution in Equation (\ref{eq:physical_time_space}):
\begin{equation}
\label{eq:psi_definition}
g_\mu \rightarrow \psi_\mu \equiv g_\mu + \phi_\mu,
\end{equation}
where $\phi_\mu$ are four new degrees of freedom whose transformation properties are defined so that $\psi_\mu$ transforms as a four-vector. This substitution yields:
\begin{eqnarray}
\label{eq:physical_time_space2}
d\sigma^2 &=& \psi_\mu \psi_\nu dx^\mu dx^\nu \nonumber \\
-dl^2 &=&\left(g_{\mu\nu} - \psi_\mu \psi_\nu\right)dx^\mu dx^\nu.
\end{eqnarray}
Since $ds^2 = d\sigma^2 - dl^2$, the new terms that result from the introduction of $\phi_\mu$ automatically subtract out so that the net effect on the space-time interval results in the addition of zero (cf.\ Equation (\ref{eq:addzero})):
\begin{eqnarray}
\label{eq:new_line_element}
ds^2 &=& g_{\mu\nu} dx^\mu dx^\nu \nonumber \\
&=&g_\mu g_\nu dx^{\mu}dx^{\nu} +\left(g_{\mu\nu} - g_\mu g_\nu\right)dx^{\mu} dx^{\nu}\nonumber \\
&=& \left[\left(g_\mu + \phi_\mu \right)\left(g_\nu + \phi_\nu \right)dx^{\mu}dx^{\nu}\right]
+\left[ g_{\mu\nu} - \left(g_\mu + \phi_\mu \right)\left(g_\nu + \phi_\nu \right) \right]dx^{\mu} dx^{\nu} \nonumber \\
&=& g_{\mu\nu} dx^\mu dx^\nu + 0.
\end{eqnarray}
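The cancellation in Equation (\ref{eq:new_line_element}) is purely algebraic and holds for any $\phi_\mu$. As an informal numerical sanity check (all values below are arbitrary and chosen only for illustration, not a physical metric), one can verify that the $\psi_\mu$-decomposition reproduces $ds^2$ exactly:

```python
import numpy as np

rng = np.random.default_rng(0)

# All quantities below are arbitrary illustrative values.
A = rng.normal(size=(4, 4))
g = (A + A.T) / 2            # arbitrary symmetric "metric" g_{mu nu}
g_mu = rng.normal(size=4)    # plays the role of g_mu
phi = rng.normal(size=4)     # plays the role of the gauge field phi_mu
dx = rng.normal(size=4)      # coordinate displacement dx^mu

psi = g_mu + phi             # psi_mu = g_mu + phi_mu

# ds^2 computed directly from the metric
ds2_direct = dx @ g @ dx

# ds^2 computed from the decomposition d(sigma)^2 + (g_{mu nu} - psi_mu psi_nu) dx dx
dsigma2 = (psi @ dx) ** 2
rest = dx @ (g - np.outer(psi, psi)) @ dx   # this term equals -dl^2

ds2_split = dsigma2 + rest
assert abs(ds2_direct - ds2_split) < 1e-12
print("phi_mu contributions cancel exactly in ds^2")
```

Because the $\phi_\mu$-dependent terms enter $d\sigma^2$ and $-dl^2$ with opposite signs, the agreement is exact for any choice of inputs, not merely approximate.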
The field $\phi_\mu$ can be viewed as a `gauge'\footnote{We use the term `gauge' loosely since the introduction of the field does not involve a derivative.} field whose function is to preserve the local properties of time in a relativistic system of reference. The space-time interpretation of the field $\phi_\mu$ follows by examining the definition of simultaneity in general relativity \cite{landaulifshitz}. Following Reference \cite{landaulifshitz}, we consider the propagation of a signal from point $B$ in space (with coordinates $x^i + dx^i$) to a point $A$ infinitely near (with coordinates $x^i$) and then returning to $B$ over the same path. The time of arrival at $A$ is $x^0$ and the times of departure and arrival at $B$ are $x^0+dx_0^{(1)}$ and $x^0+dx_0^{(2)}$, respectively, where $dx_0^{(1)}$ and $dx_0^{(2)}$ are the two roots for $dx^0$ of the equation $ds^2=0$:
\begin{equation}
\label{eq:dx0_1_2}
dx_0^{(1),(2)} = -\frac{1}{g_{00}}\left[g_{0i}dx^i \mp \sqrt{\left(g_{0i}g_{0j}-g_{ij}g_{00} \right)dx^idx^j} \right].
\end{equation}
The quantities $dx_0^{(1),(2)}$ correspond to the coordinate time intervals for propagation in the two directions between $A$ and $B$. Landau and Lifshitz \cite{landaulifshitz} point out that the time at $B$ which is simultaneous with the time of arrival of the light signal at $A$, $x_0$, is shifted in coordinate time by the amount:
\begin{equation}
\label{eq:old_simultaneity}
d x^0 = \frac{1}{2}\left(dx_0^{(2)} + dx_0^{(1)}\right).
\end{equation}
Converting this to a proper time interval and substituting Equation (\ref{eq:dx0_1_2}), we obtain:
\begin{equation}
\label{eq:proper_time_difference}
\Delta \tau = \sqrt{g_{00}}dx^0=g_0dx^0 = -g_i dx^i.
\end{equation}
Therefore, in traditional general relativity local simultaneity is defined by the vanishing of the quantity $g_\mu dx^\mu $:
\begin{equation}
g_\mu dx^\mu = 0.
\end{equation}
Note that this definition of local simultaneity is not generally covariant. However, a generally covariant definition of local simultaneity can be obtained by redefining the simultaneity condition so that the proper time difference in simultaneous events at $A$ and $B$ is a function of the field $\phi_\mu$ according to:
\begin{equation}
\label{eq:new_simultaneity}
\sqrt{g_{00}}dx^0 = \frac{\sqrt{g_{00}}}{2}\left(dx_0^{(2)} + dx_0^{(1)}\right) - \phi_\mu dx^\mu,
\end{equation}
where $\phi_\mu$ transform as defined above. As a result, local simultaneity is defined by the covariant relationship:
\begin{equation}
g_\mu dx^\mu = -\phi_\mu dx^\mu.
\end{equation}
This equation is equivalent to that obtained by setting $d\sigma=0$ in Equation (\ref{eq:physical_time_space2}). Hence, the field $\phi_\mu$ enables local simultaneity to be defined covariantly throughout space-time. Traditional general relativity defines local simultaneity via Equation (\ref{eq:old_simultaneity}), and therefore does not preserve the local properties of time under general coordinate transformations. We see that by replacing Equation (\ref{eq:old_simultaneity}) with Equation (\ref{eq:new_simultaneity}) we can satisfy the postulate of the preservation of the local properties of time.
We can rewrite Equation (\ref{eq:new_simultaneity}) in order to reveal explicitly the space-time roles of the quantities $\phi_0$ and $\phi_i$, respectively:
\begin{equation}
\label{eq:phi_mu_roles}
dx^0 = \frac{1}{2}\left[\frac{dx_0^{(2)}}{\left(1 + \frac{\phi_0}{g_0} \right)} + \frac{dx_0^{(1)}}{\left(1 + \frac{\phi_0}{g_0} \right)}\right] - \frac{\phi_i}{g_0\left(1 + \frac{\phi_0}{g_0} \right)} dx^i.
\end{equation}
Defining the scaled time intervals for light propagation in the two directions, $\tilde{dx_0}^{(A)}$, where $A=1,2$:
\begin{equation}
\label{scaled_times}
\tilde{dx_0}^{(A)} = \frac{dx_0^{(A)}}{\left(1 + \frac{\phi_0}{g_0} \right)},
\end{equation}
we can rewrite Equation (\ref{eq:phi_mu_roles}) as:
\begin{equation}
\label{eq:phi_mu_roles2}
dx^0 = \frac{1}{2}\left(\tilde{dx_0}^{(2)} + \tilde{dx_0}^{(1)}\right) - \frac{\phi_i}{g_0\left(1 + \frac{\phi_0}{g_0} \right)} dx^i.
\end{equation}
Therefore, the quantity $\phi_0$ scales the coordinate time interval for light to travel in each direction between $A$ and $B$. In other words, $\phi_0$ modifies the local speed of light $\tilde{c}$:
\begin{equation}
\label{local_speed_light}
\tilde{c} = c\left(1 + \frac{\phi_0}{g_0} \right).
\end{equation}
On the other hand, the quantities $\phi_i$ (with the appropriate $\phi_0$-dependent scaling) shift the definition of simultaneity in an analogous manner as the $g_i$ and define (along with the $g_i$) the difference in propagation speeds for light travel in each direction between $A$ and $B$. Loosely stated, $\phi_0$ and $g_0$ define the two-way speed of light and $\phi_i$ and $g_i$ define the one-way speed of light. This analysis reveals the space-time interpretation of the $\phi_\mu$ field.
Note that the field $\phi_\mu$ does not transform as a four-vector, but the field $\psi_\mu \equiv g_\mu + \phi_\mu$ transforms as a four-vector so that the linear term $d\sigma$ is an invariant:
\begin{equation}
\label{eq:linear_element}
d\sigma = \psi_\mu dx^{\mu} = \left( g_\mu + \phi_\mu \right) dx^{\mu}.
\end{equation}
Also note that the integrability of $d\sigma$ depends on the vanishing of the field:
\begin{equation}
\label{eq:skew_symmetric_tensor}
f_{\mu\nu} \equiv \psi_{\mu,\nu} - \psi_{\nu,\mu},
\end{equation}
where a comma denotes partial differentiation. We see at once both a similarity and a difference between Equation (\ref{eq:skew_symmetric_tensor}) and the standard electromagnetic field tensor. The integrability of the linear quantity (\ref{eq:linear_element}) is defined by a skew-symmetric tensor (\ref{eq:skew_symmetric_tensor}) that resembles the electromagnetic field tensor; however, the four-vector $\psi_\mu$ is not independent of the gravitational potentials. Rather, the fundamental variable that is independent of the gravitational potentials is $\phi_\mu$, which exhibits vector transformation properties only under the restricted transformation (\ref{eq:restricted_transformations}).
\subsection{Example: Transformation to a Constant-Velocity Frame}
It is instructive to pause before developing the theory further in order to consider a simple example of the formalism developed above. Therefore, we examine the transformation from a one-dimensional stationary frame to a frame moving with a constant relative velocity $v$. In the rest frame the line element is:
\begin{equation}
ds^2 = dt^2 - dx^2,
\end{equation}
and the metric quantities that define physical time are:
\begin{eqnarray}
g_0 &=& 1 \nonumber \\
g_1 &=& 0 \nonumber \\
\phi_0 &=& 0 \nonumber \\
\phi_1 &=& 0
\end{eqnarray}
so that $d\sigma = \psi_\mu dx^\mu = dt$.
Let us now consider the transformation to a frame moving with relative velocity $v$:
\begin{eqnarray}
\label{galilean_trans}
x &=& x^{\prime} - v t^{\prime} \nonumber \\
t &=& t^{\prime}.
\end{eqnarray}
The line-element now becomes:
\begin{equation}
ds^2 = \left(1-v^2\right)dt^{\prime 2} + 2vdx^{\prime}dt^{\prime} - dx^{\prime 2},
\end{equation}
and the metric quantities that define physical time in the primed frame are:
\begin{eqnarray}
g_0^{\prime} &=& \sqrt{1-v^2} \nonumber \\
g_1^{\prime} &=& \frac{v}{\sqrt{1-v^2}} \nonumber \\
\phi_0^{\prime} &=& 1-\sqrt{1-v^2} \nonumber \\
\phi_1^{\prime} &=& -\frac{v}{\sqrt{1-v^2}}
\end{eqnarray}
so that $d\sigma = \psi^{\prime}_\mu dx^{\prime \mu} = dt^{\prime}$. Note that $g_0+\phi_0 = g_0^{\prime} + \phi_0^{\prime} = 1$ and $g_1 + \phi_1 = g_1^{\prime} + \phi_1^{\prime} =0$. The quantities $\phi_0^{\prime}$ and $\phi_1^{\prime}$ emerge in the primed frame to preserve $d\sigma$ under the transformation (\ref{galilean_trans}).
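The relations $g_0+\phi_0 = g_0^{\prime} + \phi_0^{\prime} = 1$ and $g_1 + \phi_1 = g_1^{\prime} + \phi_1^{\prime} = 0$, as well as the invariance of $ds^2$ under the transformation (\ref{galilean_trans}), can be confirmed numerically. The sketch below uses an arbitrary illustrative velocity $v=0.6$ (in units of $c$) and arbitrary displacements:

```python
import math

v = 0.6   # hypothetical relative velocity in units of c

# Primed-frame metric from ds^2 = (1 - v^2) dt'^2 + 2 v dx' dt' - dx'^2
g00, g01, g11 = 1 - v**2, v, -1.0

g0p = math.sqrt(g00)              # g'_0 = sqrt(g'_00)
g1p = g01 / math.sqrt(g00)        # g'_1 = g'_01 / sqrt(g'_00)
phi0p = 1 - math.sqrt(1 - v**2)   # phi'_0
phi1p = -v / math.sqrt(1 - v**2)  # phi'_1

# The gauge field restores psi'_mu = (1, 0), so d(sigma) = dt' is preserved
assert abs((g0p + phi0p) - 1) < 1e-12
assert abs(g1p + phi1p) < 1e-12

# ds^2 is invariant under x = x' - v t', t = t' for an arbitrary displacement
dtp, dxp = 1.7, -0.4              # hypothetical coordinate displacements
dt, dx = dtp, dxp - v * dtp
ds2_rest = dt**2 - dx**2
ds2_primed = g00 * dtp**2 + 2 * g01 * dxp * dtp + g11 * dxp**2
assert abs(ds2_rest - ds2_primed) < 1e-12
print("psi'_mu = (1, 0) and ds^2 is invariant")
```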
Let us now examine simultaneity in the primed frame. According to conventional relativity simultaneity is defined by the condition $g^{\prime}_\mu dx^{\prime \mu} =0$ so that:
\begin{equation}
\label{standard_relativity_sim}
dt^{\prime} = -\frac{v}{1-v^2}dx^{\prime},
\end{equation}
which results from the fact that the speed of light according to $dt^{\prime}$ is different in each direction between $A$ and $B$:
\begin{equation}
\frac{dx^{\prime}}{dt^{\prime}} = v\pm c.
\end{equation}
In traditional relativity simultaneity is defined to compensate for the difference in the one-way speeds of light. On the other hand, according to the formalism developed above, simultaneity is defined by the relationship $g^{\prime}_{\mu} dx^{\prime \mu} = -\phi^{\prime}_\mu dx^{\prime \mu}$:
\begin{equation}
\label{new_relativity}
dt^{\prime} = 0.
\end{equation}
At first glance, it appears that the proposed theory conflicts with special relativity. However, no contradiction exists because simultaneity in the new theory is defined with a local speed of light, $\tilde{c}$, different from $c$:
\begin{equation}
\label{new_light_speed}
\tilde{c} = \frac{c}{\sqrt{1-v^2}},
\end{equation}
that is the same for propagation in each direction between $A$ and $B$. In traditional relativity, the difference in light speed in each direction is responsible for a nonzero temporal difference between the simultaneous events, namely $dt^{\prime} \neq 0$ (cf.\ Equation (\ref{standard_relativity_sim})). In the reformulation of relativity proposed above, the quantities $\phi_i$ emerge so that the speed of light is the same in each direction between points $A$ and $B$. This allows simultaneity to be defined by the vanishing of the coordinate time differential, $dt^{\prime}$, in the moving frame.
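The claim $dt^{\prime}=0$ can likewise be checked numerically: starting from the two coordinate-time roots of $ds^2=0$ in the primed frame, scaling each by $\left(1+\phi_0^{\prime}/g_0^{\prime}\right)^{-1}$ and applying the $\phi_1^{\prime}$ shift of Equation (\ref{eq:phi_mu_roles2}) yields exactly zero, while the unscaled average reproduces Equation (\ref{standard_relativity_sim}). The values of $v$ and $dx^{\prime}$ below are arbitrary illustrative choices:

```python
import math

v, dxp = 0.6, 1.0    # hypothetical velocity (units of c) and separation dx' > 0

# Primed-frame metric components from the transformed line element
g00, g01, g11 = 1 - v**2, v, -1.0
g0p = math.sqrt(g00)           # g'_0
phi0p = 1 - g0p                # phi'_0
phi1p = -v / g0p               # phi'_1

# Coordinate-time roots of ds^2 = 0 for the two propagation directions
root = math.sqrt(g01**2 - g11 * g00)          # = 1 here
dx0_1 = -(g01 * dxp - root * dxp) / g00
dx0_2 = -(g01 * dxp + root * dxp) / g00

# Traditional simultaneity: dt' = (1/2)(dx0_2 + dx0_1) = -v dx' / (1 - v^2)
dt_traditional = 0.5 * (dx0_2 + dx0_1)
assert abs(dt_traditional - (-v * dxp / (1 - v**2))) < 1e-12

# New simultaneity: scale each one-way interval and apply the phi'_1 shift
scale = 1 / (1 + phi0p / g0p)                 # = sqrt(1 - v^2)
shift = phi1p / (g0p * (1 + phi0p / g0p))     # = phi'_1 / (g'_0 + phi'_0)
dt_new = 0.5 * scale * (dx0_2 + dx0_1) - shift * dxp
assert abs(dt_new) < 1e-12
print("dt' = 0 for simultaneous events in the moving frame")
```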
\section{\label{sec:field_equations}Field Equations}
The space-time interval of the proposed reformulation of general relativity is the same as that of the traditional theory (\ref{eq:metric1}) because the contribution of the new terms that result from the introduction of the field $\phi_\mu$ sums identically to zero in the space-time interval. Therefore, the derivation of the new field equations will be similar to the derivation of the traditional gravitational field equations. However, we will have to modify the derivation in order to satisfy the postulate of the preservation of the local properties of time.
The action integral of traditional general relativity is given by:
\begin{equation}
I = \int_D{\left( R-2\kappa L_F \right)\sqrt{-g}\,d^4x},
\end{equation}
where $R$ is the Ricci scalar curvature, $L_F$ describes all fields except the gravitational field, and $\kappa = 8\pi G$. The field equations follow from the variational principle, $\delta I = 0$, where the $g_{\mu\nu}$ are varied independently subject only to the requirement that their variations $\delta g_{\mu\nu}$ as well as the variations of their first derivatives $\delta g_{\mu\nu,\lambda}$ vanish on the boundary of integration. This yields the well-known Einstein field equations:
\begin{equation}
G_{\mu\nu} = \kappa T_{\mu\nu},
\end{equation}
where $G_{\mu\nu}$ is the divergenceless Einstein tensor and $T_{\mu\nu}=-\frac{2}{\sqrt{-g}}\frac{\delta \left(L_F\sqrt{-g}\right)}{\delta g^{\mu\nu}}$ is the energy-momentum tensor of all the other fields.
As stated above, we must modify the derivation in order to preserve the local properties of time under the variations of $g_{\mu\nu}$. This can be accomplished with the method of Lagrange multipliers. First, we note that the integrability of $d\sigma$ is determined by the antisymmetric tensor $f_{\mu\nu} \equiv \psi_{\mu,\nu} - \psi_{\nu,\mu}$. Consequently, the quantity $f_{\mu\nu}f^{\mu\nu}$ is an invariant characterization of the local properties of time, which must be preserved under the variations of $g_{\mu\nu}$. Therefore, we introduce the constraint:
\begin{equation}
\label{eq:constraint}
f_{\mu\nu}f^{\mu\nu} = f_0,
\end{equation}
into the action of the gravitational field:
\begin{equation}
I_G = \int_D{\left( R-2\kappa L_F \right)\sqrt{-g}\,d^4x} + \lambda_0\int_D{\left(f_{\mu\nu}f^{\mu\nu} - f_0\right)\sqrt{-g}\,d^4x},
\end{equation}
where $f_0$ is externally defined and $\lambda_0$ is a dimensionless Lagrange multiplier field. We vary the action with respect to the variables $g_{\mu\nu}$ and $\phi_\mu$\footnote{Note that we can alternatively take $\psi_\mu$ as an independent quantity for the variation since the $g_{\mu\nu}$ are to be held fixed during this variation.}. Variation of the action with respect to the Lagrange multiplier gives the constraint (\ref{eq:constraint}). Variation of the action with respect to the fields $g_{\mu\nu}$ and $\phi_\mu$ gives:
\begin{eqnarray}
\label{new_field_equations}
G_{\mu\nu}+\Lambda g_{\mu\nu} + \kappa \Omega_{\mu\nu} &=& \kappa T_{\mu\nu} \nonumber \\
f^{\mu\nu}_{\;\; ;\nu} &=& j^\mu,
\end{eqnarray}
where $j^{\mu}=\frac{2\kappa}{\lambda_0\sqrt{-g}}\frac{\delta \left(L_F\sqrt{-g}\right)}{\delta \phi_{\mu}}$, a semicolon denotes covariant derivative, $\Lambda \equiv \frac{\lambda_0 f_0}{2}$, and
\begin{equation}
\Omega_{\mu\nu} \equiv \frac{2\lambda_0}{\kappa} \left(T_{\mu\nu}^{(\psi)} + t_{\mu\nu} \right),
\end{equation}
with
\begin{equation}
\label{psi_tensor}
T_{\mu\nu}^{(\psi)} \equiv \left(f_{\mu\alpha}f_{\nu}^{\;\alpha} - \frac{1}{4}f_{\alpha\beta}f^{\alpha\beta}g_{\mu\nu} \right)
\end{equation}
and
\begin{eqnarray}
t^{00} &\equiv& \left[g_{00}^{-1/2}f^{0\nu} - g_{0i}g_{00}^{-3/2}f^{i\nu} \right]_{,\nu} -\left[-\frac{1}{2}g_{00}^{-3/2}g_{00,\nu}f^{0\nu} + \frac{3}{2}g_{0i}g_{00}^{-5/2}g_{00,\nu}f^{i\nu} - g_{00}^{-3/2}g_{0i,\nu}f^{i\nu}\right] \nonumber \\
&=& g^{-1/2}_{00}f^{0\nu}_{\;\;,\nu} - g_{0i}g^{-3/2}_{00}f^{i\nu}_{\;\;,\nu} \nonumber \\
t^{0i} &\equiv& \left[2g_{00}^{-1/2}f^{i\nu} \right]_{,\nu} + g_{00}^{-3/2}g_{00,\nu}f^{i\nu} = 2g_{00}^{-1/2}f^{i\nu}_{\;\;,\nu}\nonumber \\
t^{ij} &\equiv& 0.
\end{eqnarray}
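The second equalities in the definitions of $t^{00}$ and $t^{0i}$ follow from the product rule: the bracketed terms remove exactly the derivatives of the metric coefficients. This cancellation can be verified symbolically; the sketch below checks both identities in an illustrative $1+1$-dimensional setting (spatial index $i=1$ only) with arbitrary smooth fields:

```python
import sympy as sp

t, x = sp.symbols("t x")
coords = (t, x)
half = sp.Rational(1, 2)

# Arbitrary smooth fields in an illustrative 1+1-dimensional setting
g00 = sp.Function("g00")(t, x)
g01 = sp.Function("g01")(t, x)
F = sp.Function("F")(t, x)

# Antisymmetric f^{mu nu}: only f^{01} = -f^{10} = F is nonzero
f = [[0, F], [-F, 0]]

def d(expr, nu):
    """Partial derivative with respect to coordinate nu."""
    return sp.diff(expr, coords[nu])

# t^{00}: full definition minus the bracketed correction terms
t00 = sum(d(g00**-half * f[0][nu] - g01 * g00**(-3 * half) * f[1][nu], nu)
          for nu in range(2))
t00 -= sum(-half * g00**(-3 * half) * d(g00, nu) * f[0][nu]
           + 3 * half * g01 * g00**(-5 * half) * d(g00, nu) * f[1][nu]
           - g00**(-3 * half) * d(g01, nu) * f[1][nu]
           for nu in range(2))
t00_simplified = (g00**-half * sum(d(f[0][nu], nu) for nu in range(2))
                  - g01 * g00**(-3 * half) * sum(d(f[1][nu], nu) for nu in range(2)))

# t^{0i}: full definition and claimed simplification
t01 = (sum(d(2 * g00**-half * f[1][nu], nu) for nu in range(2))
       + sum(g00**(-3 * half) * d(g00, nu) * f[1][nu] for nu in range(2)))
t01_simplified = 2 * g00**-half * sum(d(f[1][nu], nu) for nu in range(2))

assert sp.simplify(t00 - t00_simplified) == 0
assert sp.simplify(t01 - t01_simplified) == 0
print("bracketed terms cancel by the product rule")
```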
Note that $j^{\mu}$ and $f_0$ cannot be prescribed arbitrarily. The source term $j^{\mu}$ must be defined so that the solution of $f^{\mu\nu}_{\;\; ;\nu} = j^\mu$ satisfies the constraint $f_{\mu\nu}f^{\mu\nu} = f_0$.
We see that a `cosmological constant' term emerges naturally in this theory and is proportional to the quantity $f_0$ in the constraint. By substituting the constraint (\ref{eq:constraint}) in (\ref{new_field_equations}), the `cosmological constant' term cancels out, producing the following field equations:
\begin{equation}
\label{new_field_equations2}
G_{\mu\nu}+ 2\lambda_0 \left(f_{\mu\alpha}f_{\nu}^{\;\alpha} + t_{\mu\nu} \right)= \kappa T_{\mu\nu}.
\end{equation}
By taking the covariant divergence of the above equations we obtain an equation for the Lagrange multiplier field:
\begin{equation}
\left[2\lambda_0 \left(f^{\mu}_{\;\;\alpha}f^{\nu\alpha} + t^{\mu\nu}\right)\right]_{;\nu}=\kappa T^{\mu\nu}_{\;\; ;\nu}.
\end{equation}
Also, we see that $f^{\mu\nu}$ satisfies Maxwell's equations and the tensor $T^{(\psi)}_{\mu\nu}$ in Equation (\ref{psi_tensor}) resembles the standard electromagnetic stress-energy tensor:
\begin{equation}
\label{em_tensor}
T_{\mu\nu}^{\textrm{(EM)}} \equiv \frac{1}{4\pi}\left(F_{\mu\alpha}F_{\nu}^{\;\alpha} - \frac{1}{4}F_{\alpha\beta}F^{\alpha\beta}g_{\mu\nu} \right),
\end{equation}
where $F_{\mu\nu} = A_{\mu,\nu}-A_{\nu,\mu}$ and $A_\mu$ are the electromagnetic potentials of the traditional Einstein-Maxwell theory. Therefore, we identify the field $\psi_\mu$ with the classical electromagnetic field such that:
\begin{equation}
\label{eq:EM_field_identification}
\psi_\mu = \alpha A_\mu,
\end{equation}
where $\alpha = \sqrt{\frac{\kappa}{8\pi\lambda_0}}$. Note that $\psi_\mu$ (and hence $A_\mu$) is composed of both $g_{\mu}$ and $\phi_\mu$. Therefore, the generally covariant electromagnetic field is composed of terms from both the standard gravitational tensor $g_{\mu\nu}$ as well as the new degrees of freedom $\phi_\mu$. Note also that $\phi_\mu = \alpha A_\mu$ when $g_\mu = 0$.
\section{\label{sec:equations_motion}Equations of Motion}
In traditional general relativity, a point-like particle moving between two points $A$ and $B$ in Riemannian space traverses a geodesic. The equations of motion follow from the variational principle:
\begin{equation}
\label{variational_principle}
\delta\int_A^B{m\,ds} = \delta\int_A^B{L_0\,d\tau}=0,
\end{equation}
where $m$ is the mass of the particle and $L_0=m\sqrt{g_{\mu\nu}\frac{dx^\mu}{d\tau}\frac{dx^\nu}{d\tau}}$ is the free-particle Lagrangian. The variation is made arbitrarily, subject only to the constraint of fixed endpoints, producing:
\begin{equation}
\label{geodesicequations1}
\frac{d^2 x^\mu}{d\tau^2} + \Gamma^{\mu}_{\alpha\beta}\frac{dx^\alpha}{d\tau}\frac{dx^\beta}{d\tau}
=0.
\end{equation}
Since $ds$ is varied arbitrarily and is only subject to the endpoint constraints, the quantity $d\sigma = \psi_\mu dx^\mu$ is not preserved under the variations. Consequently, the standard variational principle (\ref{variational_principle}) for geodesics does not satisfy the postulate of the preservation of the local properties of time. Therefore, we must reformulate the variational principle in order to preserve physical time under the variations of the space-time path. To this end, we introduce the local constraint along the parameterized space-time path:
\begin{equation}
\label{sigma_constraint}
d\sigma = d\sigma_0,
\end{equation}
where $d\sigma_0$ defines physical time intervals at every point on the space-time path. Note that $d\sigma$ need not be integrable. Using the method of Lagrange multipliers, we write the new Lagrangian as:
\begin{equation}
\label{lagrangian2}
L=L_0 + \lambda_1 \left( \psi_\mu\frac{dx^\mu}{d\tau} - \frac{d\sigma_0}{d\tau}\right),
\end{equation}
where $\lambda_1$ is the Lagrange multiplier that also absorbs the constant of proportionality so that each term has the same units. Note that $\lambda_1$ must be a constant so that the new action is invariant. Variation of the Lagrange multiplier yields the constraint (\ref{sigma_constraint}), whereas the variations of the parameterized space-time path produce a new term in the equations of motion that acts as a Lorentz force:
\begin{equation}
\label{geodesicequations2}
\frac{d^2 x^\mu}{d\tau^2} + \Gamma^{\mu}_{\alpha\beta}\frac{dx^\alpha}{d\tau}\frac{dx^\beta}{d\tau} = \frac{\lambda_1}{m} f^{\mu}_{\,\,\nu}\frac{dx^\nu}{d\tau},
\end{equation}
if we identify $\lambda_1$ as being proportional to electric charge. Therefore, the Lorentz force acquires a space-time interpretation: A charged particle deviates from geodesic motion in order to satisfy the local constraints imposed on physical time along the trajectory. Note that the quantity $f_{\mu\nu}$ is responsible for the Lorentz force and therefore the emergence of the field $\phi_\mu$ alone is not sufficient to produce an electromagnetic deflection. For example, consider the emergence of $\phi_\mu$ as a result of a coordinate transformation from a frame in which $f_{\mu\nu} = 0$. An example of this type of field is the $\phi_\mu$ field that emerges as a result of a coordinate transformation from the laboratory frame to the rotating frame. Since the condition $f_{\mu\nu} = 0$ is preserved under an arbitrary coordinate transformation, then according to Equation (\ref{geodesicequations2}), this field will have no effect on the motion of charged particles. On the other hand, when $f_{\mu\nu} \neq 0$ (non-integrable $d\sigma$) charged particles will deflect according to the Lorentz force term in Equation (\ref{geodesicequations2}).
\section{Weak Field Limit}
In this section we consider the weak-field approximation \cite{misnerthornewheeler} of the new set of field equations (\ref{new_field_equations}). In this approximation the metric may be written as a linear perturbation from the flat space-time metric, $\eta_{\mu\nu}$:
\begin{equation}
g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu},
\end{equation}
where $\left|h_{\mu\nu}\right|\ll 1$ is the small perturbation. The field equations for the $f^{\mu\nu}$ become:
\begin{equation}
\label{linearized_maxwell}
f^{\mu\nu}_{\;\; ,\nu}=j^\mu.
\end{equation}
We solve for the metric terms in a region outside the source $j^{\mu}$ so that in the region of interest:
\begin{equation}
\label{linearized_maxwell_no_source}
f^{\mu\nu}_{\;\; ,\nu}=0,
\end{equation}
and therefore the terms $t^{\mu\nu}$ vanish. As a result, the remaining field equations in the `Lorentz gauge' ($\bar{h}^{\mu\nu}_{\;\;,\nu}=0$) are:
\begin{equation}
\label{linearized_einstein}
\Box \bar{h}_{\mu\nu} = -2\kappa \left( T_{\mu\nu} - \frac{2\lambda_0}{\kappa}f_{\mu\alpha}f_{\nu}^{\;\alpha} \right),
\end{equation}
where $\Box = \partial_\mu \partial^\mu$ and $\bar{h}_{\mu\nu}\equiv h_{\mu\nu} - \frac{1}{2}\eta_{\mu\nu}h$, with $h=\eta^{\mu\nu}h_{\mu\nu}$. Therefore, in the weak-field limit the equations of the proposed reformulation of general relativity are very similar to the equations of general relativity in the presence of an electromagnetic field; this is a result of the vanishing of the $t^{\mu\nu}$ in regions for which $j^{\mu}=0$. The difference in the two theories, however, should lead to measurably different predictions.
We follow the usual treatment and assume that the perturbation $h_{\mu\nu}$ is expressed in isotropic space coordinates so that $h_s=h_{11}=h_{22}=h_{33} $ and that the matter is slowly moving with low density and negligible pressure:
\begin{equation}
T^{\mu\nu} = \rho u^\mu u^\nu,
\end{equation}
where $u^\mu$ is the 4-velocity and $\rho$ is the matter density. Therefore, the linearized field equations (\ref{linearized_einstein}) become:
\begin{eqnarray}
\label{linearized_einstein2}
\Box \Phi &=& -\frac{\kappa}{2}\left(\rho - \frac{2\lambda_0}{\kappa}f_{0\alpha}f_{0}^{\;\alpha} \right)\nonumber \\
\Box \vec{h} &=& 2\kappa \rho\left(\vec{v} + \frac{2\lambda_0}{\kappa \rho}\vec{f}^{(\psi)} \right),
\end{eqnarray}
where $h_{00} = h_s = 2\Phi$, $\vec{h}=\left\{h_{01},h_{02},h_{03}\right\}$, $\vec{v}$ is the velocity of the source, and $\vec{f}^{(\psi)}= \left\{f_{0\alpha}f_{1}^{\;\alpha},f_{0\alpha}f_{2}^{\;\alpha},f_{0\alpha}f_{3}^{\;\alpha}\right\}$. When $f_{0\alpha}f_{0}^{\;\alpha}=0$ and $\vec{f}^{(\psi)}=0$ these equations reduce to the standard linearized gravitational equations. In this case the equations may be solved to produce (neglecting retardation effects):
\begin{eqnarray}
\label{standard_solutions}
\Phi\left(\vec{r}\right) &=& - G\int\frac{\rho\left(\vec{r}^{\prime}\right)\,d^3\vec{r}^{\prime}}{\left|\vec{r} - \vec{r}^{\prime} \right|} \nonumber \\
\vec{h}\left(\vec{r}\right) &=& 4G\int\frac{\rho\left(\vec{r}^{\prime}\right)\vec{v}\left(\vec{r}^{\prime}\right)\,d^3\vec{r}^{\prime}}{\left|\vec{r} - \vec{r}^{\prime} \right|}.
\end{eqnarray}
This results in the well-known Lense-Thirring line element:
\begin{equation}
\label{lense_thirring}
ds^2 = \left( 1 + 2\Phi \right) dt^2 - \left(1 - 2\Phi \right)d\vec{r}^{\,2} + 2\vec{h}\cdot d\vec{r}\,dt.
\end{equation}
However, when $f_{0\alpha}f_{0}^{\;\alpha}\neq 0$ or $\vec{f}^{(\psi)}\neq0$ one must recognize the additional dependence on the metric that emerges in Equation (\ref{linearized_einstein2}). In this case the solutions of Equations (\ref{linearized_einstein2}) are not given by Equations (\ref{standard_solutions}). We immediately see that the above reformulation of general relativity suggests that the Lense-Thirring effect in the presence of an electromagnetic field will differ slightly from that predicted by standard general relativity.
\section{Implications for Outstanding Problems}
In this section we discuss the implications of our reformulation of general relativity on some of the aforementioned outstanding problems.
\subsection{The Pioneer Anomaly}
As is well known, analyses of radio Doppler and ranging data from the Pioneer missions indicate that there is an anomalous blueshift in the detected microwave signal, which can either be attributed to an anomalous acceleration of the spacecraft $a_P\sim 8.7\times 10^{-8} \mathrm{cm}\,\mathrm{s}^{-2}$ directed towards the sun or to an acceleration of clocks $a_t = a_P/c \sim 2.9 \times 10^{-18} \textrm{s}^{-1}$ \cite{andersonetal1988}. While Page et al.\ \cite{pageetal2009} have recently argued that the current ephemeris of Pluto does not preclude the existence of the Pioneer effect, it is unlikely that such an acceleration, which is four orders of magnitude greater than the largest relativistic corrections to Newtonian gravity, has a gravitational origin \cite{iorio2007}. Such an acceleration would require a violation of the weak equivalence principle at the outer radii of the Solar System. Therefore, if the anomalous blueshift is a result of unexplained physics and not a systematic error, then it is most likely due to a clock acceleration that does not manifest itself in a physical acceleration of this magnitude in our solar system. In this section, we show how the proposed reformulation of gravity naturally accounts for this anomaly.
We start by writing the Schwarzschild line-element, which is also a solution of the free-field field equations in the proposed reformulation of general relativity:
\begin{equation}
\label{schwarzschild}
ds^2 = \left(1-\frac{2MG}{r}\right)dt^2 - \left(1-\frac{2MG}{r}\right)^{-1}dr^2-r^2d\Omega^2,
\end{equation}
where $M=2\times 10^{33}\,\textrm{gm}$ is the mass of the sun. We assume all quantities are independent of time and are spherically symmetric. According to Equation (\ref{eq:EM_field_identification}) there will be an electric field present due to the radial dependence of the quantity $\psi_0$:
\begin{equation}
\psi_0 = g_0 + \phi_0 \approx 1 - \frac{MG}{r} + \phi_0.
\end{equation}
We first consider the case where the electric field at the spacecraft vanishes. In this case, $\psi_0$ must be constant; therefore we conclude $\phi_0 = \frac{MG}{r} + a_0$, where $a_0$ is an arbitrary constant, which can be set to zero.
According to Equation (\ref{local_speed_light}) the local speed of light at the location of the spacecraft is:
\begin{equation}
\label{local_speed_light_pioneer}
\tilde{c} = c\left(1 + \frac{\phi_0}{g_0} \right) \simeq c\left( 1 + \frac{MG}{c^2r} \right),
\end{equation}
where we have now restored $c$ explicitly in all terms. Because traditional general relativity takes the local speed of light to be $c$ rather than $\tilde{c}$, it neglects the following distance of light travel:
\begin{equation}
dL = \frac{MG}{cr}dt,
\end{equation}
which will result in attributing an apparent acceleration to the spacecraft:
\begin{equation}
\label{anomalous_acceleration}
\frac{d^2L}{dt^2} = -\frac{MGv}{cr^2},
\end{equation}
where $v$ is the velocity of the spacecraft. Using $v=12\, \textrm{km}\,\textrm{s}^{-1}$ and $r=20\, \textrm{AU}$, we obtain for the `anomalous acceleration':
\begin{equation}
\frac{d^2L}{dt^2} = - 5.9\times 10^{-8} \,\textrm{cm}\,\textrm{s}^{-2},
\end{equation}
which is to be compared to the anomalous Pioneer acceleration $a_P\sim -8.7\times 10^{-8} \mathrm{cm}\,\mathrm{s}^{-2}$. Note that the acceleration derived above is not a `real' acceleration of the spacecraft, but is only apparent due to the incorrect speed of light used in the equations of traditional general relativity. We can derive the same result by considering the acceleration of the clocks at the location of the spacecraft (cf.\ Equation (\ref{scaled_times})):
\begin{equation}
\tilde{dt}^{(\textrm{Pioneer})} = \frac{dt}{\left(1 + \frac{\phi_0}{g_0} \right)} \simeq dt\left(1-\phi_0\right),
\end{equation}
so that the clocks on the Pioneer spacecraft are accelerated relative to the clocks of traditional relativity according to:
\begin{equation}
\frac{d^2\tilde{t}^{(\textrm{Pioneer})}}{dt^2} = \frac{MGv}{c^2r^2} = \frac{1}{c}\left|\frac{d^2L}{dt^2}\right|.
\end{equation}
While the simple model presented above provides encouraging results at $20\,\textrm{AU}$, it predicts an $r^{-2}$ dependence of the anomalous acceleration that is not observed in the data. For example, Equation (\ref{anomalous_acceleration}) predicts an anomalous acceleration of $- 1.48\times 10^{-8} \,\textrm{cm}\,\textrm{s}^{-2}$ at $40\,\textrm{AU}$. On the other hand, the Pioneer's anomalous acceleration has been observed to be approximately constant for radii $20 - 70 \,\textrm{AU}$. This radial dependence can be traced back to the assumption that the electromagnetic field vanishes at the radii of interest. A more complete analysis requires a model for the electromagnetic field at radii $20 - 70 \,\textrm{AU}$.
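The numerical estimates above can be reproduced directly from Equation (\ref{anomalous_acceleration}). The short sketch below (cgs units; standard rounded values for $G$, $c$, and the astronomical unit) recovers the acceleration near $-5.9\times10^{-8}\,\textrm{cm}\,\textrm{s}^{-2}$ at $20\,\textrm{AU}$ and the $r^{-2}$ falloff at $40\,\textrm{AU}$:

```python
# cgs units throughout; standard reference values (rounded)
G = 6.674e-8        # gravitational constant, cm^3 g^-1 s^-2
M = 2e33            # solar mass used in the text, g
c = 3e10            # speed of light, cm s^-1
v = 1.2e6           # spacecraft velocity, cm s^-1 (12 km/s)
AU = 1.496e13       # astronomical unit, cm

def apparent_acceleration(r_cm):
    """Apparent acceleration d^2 L / dt^2 = -M G v / (c r^2)."""
    return -M * G * v / (c * r_cm**2)

a20 = apparent_acceleration(20 * AU)
a40 = apparent_acceleration(40 * AU)

print(f"a(20 AU) = {a20:.2e} cm/s^2")   # close to the quoted -5.9e-8
print(f"a(40 AU) = {a40:.2e} cm/s^2")   # close to the quoted -1.48e-8
assert abs(a40 / a20 - 0.25) < 1e-12    # doubling r quarters the apparent acceleration
```

With these rounded constants the computed values differ from the quoted ones only in the last digit, reflecting rounding of the inputs rather than any difference in the formula.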
\subsection{The Dark Matter Problem}
As first discovered by Zwicky in the 1930s \cite{zwicky1933,zwicky1937}, velocities on the galactic scale are much larger than those predicted by general relativity when the source of the gravitational field is taken to be the observed visible matter (see also, e.g., \cite{einastoetal} and \cite{rubinetal}). Zwicky postulated, and it is now generally accepted, that a considerable amount of non-visible matter must be present in the extragalactic regime in order to provide the additional acceleration required to maintain these excessive velocities. This non-visible matter is commonly called dark matter and is believed to resolve acceleration discrepancies observed in systems ranging from dwarf spheroidal galaxies with visible masses $\sim 10^7 M_{\odot}$ to clusters of galaxies with observed masses $\sim 10^{14} M_{\odot}$. Furthermore, dark matter is believed to play a key role in the structure formation of the universe and in primordial nucleosynthesis, and to significantly affect the anisotropy of the cosmic microwave background. Excellent reviews of the dark matter problem are given in Refs. \cite{turner,silk}.
Despite thirty years of laboratory experiments and astronomical observation, dark matter has never been observed directly \cite{Bertoneetal}; its existence is only inferred indirectly due to its purported gravitational effects on visible matter. Modifications of gravitational theory have been proposed \cite{finzi,tohline,sanders1984,sanders1986,goldman,kuhnandkruglyak} that may eliminate the need for dark matter, and perhaps the most well-known is the modified Newtonian dynamics (MOND) theory \cite{milgrom1983a,milgrom1983b,milgrom1983c}. MOND is characterized by an acceleration scale and predicts departures from a Newtonian force law in the extragalactic regime where dynamical accelerations are small. Recently, a relativistic generalization of MOND, Tensor-Vector-Scalar (TeVeS), was proposed \cite{bekenstein2004} that resolves some of the earlier problems of the MOND theory. However, TeVeS has not been experimentally confirmed. In this section we show how the proposed reformulation of gravity naturally accounts for the `anomalous' acceleration observed in rotating spiral galaxies, without the need to postulate dark matter.
We model a galaxy rotating in the azimuth ($\hat{\phi}$) direction with the Schwarzschild line element (\ref{schwarzschild}) at radius $r \gg r_0$, where $r_0$ is the effective radius of the visible matter distribution; the quantity $M$ now refers to the total visible mass of the galaxy contained within $r<r_0$. Note that we are assuming the weak-field frame-dragging term, $g_3 \simeq 2GJ/r^2$, where $J$ is the angular momentum of the galaxy, is negligible at $r \gg r_0$. According to the proposed reformulation of general relativity, the electromagnetic field is a result of the coordinate-dependence of $\psi_\mu = g_\mu + \phi_\mu$. Since we are assuming the $g_i$ are negligible ($r\gg r_0$) we can write $\psi_i \simeq \phi_i$.
We are particularly interested in the quantity $\phi_3$ since, according to Equation (\ref{eq:phi_mu_roles2}), it modifies the one-way speed of light in the azimuth ($\hat{\phi}$) direction, in exactly the same way the quantities $g_i$ are responsible for a difference in the one-way and two-way speeds of light in the conventional theory. In other words, traditional general relativity will incorrectly attribute an additional velocity $\phi_3$ to the rotational motion of the galaxy when inferring velocity from Doppler shifts of electromagnetic signals. At the tail-end of the galactic rotation curve ($r\gg r_0$) this additional velocity is approximately constant. Therefore, at $r\gg r_0$ we assume $\phi_3 = A_0$, where $A_0$ is a constant. As a result, $\psi_3 = g_3 + \phi_3 \simeq \phi_3 = A_0$. In the plane of the galaxy, this corresponds to a perpendicular magnetic field with a $r^{-1}$ dependence:
\begin{equation}
B_\theta = -\frac{cA_0}{\alpha r}.
\end{equation}
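The $r^{-1}$ dependence can be verified directly. Assuming the physical azimuthal component of the vector potential is the constant $A_\phi = cA_0/\alpha$ (the identification implicit in the expression above, with $\alpha$ the proportionality constant between the $\phi_\mu$ field and the classical electromagnetic field), the curl in spherical coordinates gives, for a potential with no radial component,
\begin{equation}
B_\theta = \left(\nabla\times\vec{A}\right)_\theta = -\frac{1}{r}\frac{\partial}{\partial r}\left(rA_\phi\right) = -\frac{A_\phi}{r} = -\frac{cA_0}{\alpha r}.
\end{equation}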
As $r\rightarrow r_0$, one expects $\phi_3 \neq \textrm{constant}$ because the frame-dragging term, $g_3$, cannot be considered negligible at these radii. If we assume the magnetic field retains its same functional form as $r\rightarrow r_0$, then $\phi_3$ would need to be:
\begin{equation}
\phi_3 = A_0 - 2GJ/r^2,
\end{equation}
in order to compensate for the $g_3$ term. A more complete analysis will require a detailed model of the galactic magnetic field as well as a better model of the galactic gravitational field \cite{neugebauermeinel1,neugebauermeinel2,neugebauermeinel3}. Nevertheless, we observe how the presence of a galactic electromagnetic field can modify the local one-way speed of light and lead to seemingly anomalous measurements of the galactic rotational velocities. It should be emphasized that this difference in velocity is not due to a real acceleration of the rotating matter, but is due to an incorrect quantification of the azimuthal one-way speed of light in traditional general relativity. Hence, there is no need to postulate dark matter to account for the anomalous acceleration observed in rotating spiral galaxies.
\subsection{The Dark Energy Problem}
The Friedmann-Lema\^itre $\Lambda\textrm{CDM}$ model of cosmology has been accepted by the scientific community as the new Standard Model of Cosmology \cite{steinhardtostriker,bahcalletal}. It supersedes the previous Standard Model of Cosmology, embracing all of its accomplishments, and claims additional success. This model agrees closely with a wide range of observations, including measurements of the abundance of primordial elements, CMB anisotropies, the age of the universe, the luminosity of supernovae, and the large scale structure of the universe. According to the $\Lambda\textrm{CDM}$ model, the universe is spatially flat and was initiated with the Big Bang, a state of infinite density and temperature, approximately $15 \times 10^9$ years ago. This was followed by a potential, or vacuum, energy-dominated (inflation) phase, a radiation-dominated phase, and a matter-dominated phase. It is believed that the universe is presently transitioning from the matter-dominated phase to a cosmological constant-dominated phase.
First reported in Refs. \cite{perlmutteretal, riess1, riess2}, observations of Type Ia Supernovae (SNe Ia) indicate that the universe is accelerating. The $\Lambda\textrm{CDM}$ model attributes this acceleration to the cosmological constant, $\Lambda$, which was originally introduced into general relativity by Einstein \cite{einstein1917} in order to permit homogeneous, static solutions of the field equations. However, the introduction of the cosmological constant brings a number of problems in its wake, including the well-known cosmological constant, or fine-tuning, problem. This results from the observation that the contribution to the vacuum energy density from quantum fields behaves like a cosmological constant and is, according to modern particle theories, orders of magnitude larger than the measured cosmological constant, which is crudely approximated by $\Lambda \approx H_0^2$, where $H_0$ is the present value of the Hubble parameter. Consequently, considerable effort is being exerted to replace $\Lambda$ in the $\Lambda\textrm{CDM}$ model with more general forms of dark energy that are typically described by scalar fields such as quintessence, K-essence, tachyon, phantom and dilatonic models (see \cite{copelandetal} for an excellent review).
We have seen above that the cosmological constant emerges in the proposed reformulation of general relativity and, because of the constraint (\ref{eq:constraint}), it cancels out of the field equations. Therefore, a deeper understanding of the constraint and its relationship with quantum fields can shed light on the remarkable vanishing of the various contributions to the vacuum energy. In other words, the vanishing of the various contributions to the vacuum energy may be attributed to a symmetry principle: all contributions to the energy density are subject to the requirement that they preserve the local properties of time in the universe. Thus, the $\psi_\mu$ field, along with the other quantum fields, is subject to the constraint, $f_0 = f_{\mu\nu}f^{\mu\nu}$, which guarantees the vanishing of the cosmological constant.
At first glance, a theory that predicts a vanishing cosmological constant may seem to contradict the supernova observations. However, this is not the case because the proposed theory (even with a vanishing cosmological constant) can account for the apparent acceleration of the universe. Let us consider the general solution to the field equations (\ref{new_field_equations}) for a homogeneous and isotropic cosmological model with vanishing electromagnetic field ($f_{\mu\nu}=0$):
\begin{eqnarray}
ds^2 &=& dt^2 - a(t)^2 d\Sigma^2 \nonumber \\
\phi_0 &=& \phi_0(t),
\end{eqnarray}
where $ds^2$ is the Robertson-Walker line element, $d\Sigma^2$ represents the three-dimensional line element of uniform curvature, $a(t)$ is the scale factor, and $\phi_0(t)$ is an arbitrary function of time. Note that a vanishing electromagnetic field does not force the $\phi_\mu$ field to vanish entirely; rather, a general homogeneous and isotropic solution with vanishing electromagnetic field admits $\phi_0=\phi_0(t)$. Therefore, the general cosmological model predicted by the proposed reformulation of general relativity naturally includes a speed of light that varies with time:
\begin{equation}
\label{cosmological_speed_light}
c(t) = c\left(1 + \frac{\phi_0(t)}{g_0} \right) = c\left(1 + \phi_0(t) \right).
\end{equation}
Such a variation of the speed of light can account for the apparent acceleration of the expansion of the universe \cite{sanejouand2009}, without requiring the introduction of a cosmological constant. Note that the proposed theory shares similarities with Variable Speed of Light (VSL) theories \cite{magueijo2003}, and therefore, it can also provide insight into other cosmological problems, such as the horizon, flatness, homogeneity and isotropy problems \cite{albrechtMagueijo1999}.
\subsection{Quantum Gravity}
The proposed reformulation of general relativity provides an invariant definition of time and consequently may shed light on the `problem of time' in quantum gravity \cite{isham1993}. Therefore, it is constructive to consider a quantum theory of gravity based on the theory presented above. While it is beyond the scope of the present paper to develop a complete theory of quantum gravity, we discuss some of its salient features in this section.
Our starting point is the ADM formulation of general relativity \cite{ADM1962}, which is based on the following decomposition of the metric tensor:
\begin{equation}\left(g_{\mu\nu} \right) = \left( \begin{array}{cc}
-N^2 +N_iN^i & N_j \\
N_i & h_{ij} \end{array} \right)
\end{equation}
and
\begin{equation}
\left(g^{\mu\nu} \right) = \left( \begin{array}{cc}
-N^{-2} & N^{-2}N^j \\
N^{-2}N^i & h^{ij}-N^{-2}N^iN^j \end{array} \right),
\end{equation}
where $N$ and $N_i$ are commonly known as the lapse function and shift vector, respectively, $h_{ik}$ is the induced 3-metric on a hypersurface $\Sigma(t)$ at constant $t$, and the following relations hold: $h_{ik}h^{kj}=\delta_i^{\;j}$, $N^i = h^{ij}N_j$, $-g = N^2 h$, and $h=\textrm{det}\left(h_{ij}\right)$. The traditional gravitational Lagrangian, $R\sqrt{-g}$, can be written in terms of the canonical variable set $\left\{N, N_i, h_{ij}\right\}$:
\begin{equation}
\label{ADM_lagrangian}
\mathcal{L}_\textrm{ADM} = Nh^{1/2}\left({}^{(3)}R+K_{ij}K^{ij} - K^2 \right),
\end{equation}
where $K_{ij}=\frac{1}{2}N^{-1}\left(N_{i.j}+N_{j.i}-h_{ij,0} \right)$ is the second fundamental form, $K^{ij}=h^{ik}h^{jl}K_{kl}$, $K=h^{ij}K_{ij}$ and dots denote covariant differentiation based on the 3-metric $h_{ij}$.
The canonical momenta corresponding to the variables $\left\{N, N_i, h_{ij}\right\}$ are:
\begin{eqnarray}
\label{canonical_momenta}
\pi &=& \frac{\partial \mathcal{L}_\textrm{ADM}}{\partial N_{,0}} = 0 \nonumber \\
\pi^{i} &=& \frac{\partial \mathcal{L}_\textrm{ADM}}{\partial N_{i,0}} = 0 \nonumber \\
\pi^{ij} &=& \frac{\partial \mathcal{L}_\textrm{ADM}}{\partial h_{ij,0}} = -\sqrt{h}\left(K^{ij} - h^{ij}K\right).
\end{eqnarray}
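For example, the third of these relations follows by differentiating (\ref{ADM_lagrangian}) with respect to $h_{ij,0}$, which enters only through the second fundamental form. Since $\partial K_{kl}/\partial h_{ij,0} = -\frac{1}{2}N^{-1}\delta_{kl}^{\;\;ij}$, with $\delta_{kl}^{\;\;ij}$ the symmetrized Kronecker delta, one finds
\begin{equation}
\pi^{ij} = 2Nh^{1/2}\left(K^{kl} - h^{kl}K\right)\frac{\partial K_{kl}}{\partial h_{ij,0}} = -\sqrt{h}\left(K^{ij} - h^{ij}K\right).
\end{equation}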
The first two equations are known as primary constraints; the momenta $\pi$ and $\pi^i$ vanish because the Lagrangian is independent of the velocities $N_{,0}$ and $N_{i,0}$. The Hamiltonian is calculated in the usual way:
\begin{eqnarray}
\label{ADM_hamiltonian}
\mathcal{H}_\textrm{ADM} &=& \pi N_{,0} + \pi^iN_{i,0} + \pi^{ij}h_{ij,0} - \mathcal{L}_\textrm{ADM} \nonumber \\
\mathcal{H}_\textrm{ADM} &=& \pi N_{,0} + \pi^iN_{i,0} + N\mathcal{H} + N_i\chi^i
\end{eqnarray}
where $\mathcal{H} = h^{1/2}\left(K_{ij}K^{ij} - K^2- {}^{(3)}R\right)$ and $\chi^i = -2\pi^{ij}_{\;\;,j} - h^{il}\left(2h_{jl,k}-h_{jk,l} \right) \pi^{jk}$. Since $\pi$ and $\pi^i$ vanish, the ADM Hamiltonian can be written:
\begin{equation}
\label{ADM_hamiltonian2}
\mathcal{H}_\textrm{ADM} = N\mathcal{H} + N_i\chi^i.
\end{equation}
It is straightforward to derive Einstein's free-field equations by taking the Poisson bracket of the dynamical variables $\left\{N, N_i, h_{ij}\right\}$ with the Hamiltonian (\ref{ADM_hamiltonian2}) and enforcing the primary constraints.
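For instance, the Poisson bracket of $h_{ij}$ with (\ref{ADM_hamiltonian2}) reproduces the standard ADM evolution equation for the 3-metric,
\begin{equation}
h_{ij,0} = \left\{h_{ij},\mathcal{H}_\textrm{ADM}\right\} = 2Nh^{-1/2}\left(\pi_{ij}-\tfrac{1}{2}h_{ij}h_{kl}\pi^{kl}\right) + N_{i.j} + N_{j.i},
\end{equation}
where $\pi_{ij}=h_{ik}h_{jl}\pi^{kl}$; this is simply the definition of $K_{ij}$ inverted with the help of (\ref{canonical_momenta}).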
By taking the Poisson bracket of the primary constraints with the Hamiltonian, one obtains the secondary constraints:
\begin{eqnarray}
\label{secondary_constraints_traditional}
\mathcal{H} &=& 0 \nonumber \\
\chi^i &=& 0.
\end{eqnarray}
These secondary constraints restrict the dynamics in order to preserve the primary constraint equations for all time. These equations are often called the Hamiltonian constraint and the momentum constraint, respectively. We see that the ADM Hamiltonian is the sum of the secondary constraints, with arbitrary Lagrange multipliers $N$ and $N_i$, respectively.
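Explicitly, since the Hamiltonian (\ref{ADM_hamiltonian2}) is linear in $N$ and $N_i$, the requirement that the primary constraints be preserved in time gives
\begin{eqnarray}
\pi_{,0} &=& \left\{\pi,\mathcal{H}_\textrm{ADM}\right\} = -\frac{\partial \mathcal{H}_\textrm{ADM}}{\partial N} = -\mathcal{H} = 0 \nonumber \\
\pi^i_{\;,0} &=& \left\{\pi^i,\mathcal{H}_\textrm{ADM}\right\} = -\frac{\partial \mathcal{H}_\textrm{ADM}}{\partial N_i} = -\chi^i = 0.
\end{eqnarray}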
The Wheeler-DeWitt theory of quantum gravity follows by elevating the Poisson brackets to commutators and turning the constraints into conditions on the state vector $\Psi$ \cite{dewitt1967}. This leads to the following commutation relations:
\begin{eqnarray}
\label{commutation_relations}
\left[N,\pi^{\prime}\right] &=& i\hbar\delta\left(\vec{x},\vec{x^{\prime}}\right) \nonumber \\
\left[N_i,\pi^{j^{\prime}}\right] &=& i\hbar\delta_i^{\;j^{\prime}} \nonumber \\
\left[h_{ij},\pi^{k^{\prime}l^{\prime}}\right] &=& i\hbar \delta_{ij}^{\;\;k^{\prime}l^{\prime}},
\end{eqnarray}
and the well-known Wheeler-DeWitt equations:
\begin{eqnarray}
\label{wheeler_dewitt}
\pi\Psi &=& 0 \nonumber \\
\pi^i\Psi &=& 0 \nonumber \\
\mathcal{H}\Psi &=& 0 \nonumber \\
\chi^i\Psi &=& 0,
\end{eqnarray}
where
\begin{eqnarray}
\delta_i^{\;j^{\prime}} &\equiv& \delta_i^{\;j} \delta\left(\vec{x},\vec{x}^{\prime} \right) \nonumber \\
\delta_{ij}^{\;\;k^{\prime}l^{\prime}} &\equiv& \delta_{ij}^{\;\;kl} \delta\left(\vec{x},\vec{x}^{\prime} \right) \nonumber \\
\delta_{ij}^{\;\;kl} &\equiv& \frac{1}{2}\left(\delta_{i}^{\;k} \delta_{j}^{\;l} +\delta_{i}^{\;l} \delta_{j}^{\;k} \right).
\end{eqnarray}
As is well known, these equations are ill-defined.
We now turn to the equations of quantum gravity in the proposed reformulation of general relativity. Because the dynamics are now constrained by the additional constraint (\ref{eq:constraint}), the Lagrangian of the new theory is:
\begin{equation}
\mathcal{L}^{\star} = \mathcal{L}_\textrm{ADM} + Nh^{1/2}\lambda_0\left(f_{\alpha\beta}f^{\alpha\beta}-f_0 \right).
\end{equation}
Note that the Lagrangian is no longer independent of the velocities $N_{,0}$ and $N_{i,0}$, and therefore, the canonical momenta (\ref{canonical_momenta}) now become:
\begin{eqnarray}
\label{canonical_momenta_new}
\pi &=& \frac{\partial \mathcal{L}^{\star}}{\partial N_{,0}} = Nh^{1/2}\lambda_0\frac{\partial \left(f_{\mu\nu}f^{\mu\nu}\right)}{\partial N_{,0}} \nonumber \\
\pi^{i} &=& \frac{\partial \mathcal{L}^{\star}}{\partial N_{i,0}} = Nh^{1/2}\lambda_0\frac{\partial \left(f_{\mu\nu}f^{\mu\nu}\right)}{\partial N_{i,0}} \nonumber \\
\pi^{ij} &=& \frac{\partial \mathcal{L}^{\star}}{\partial h_{ij,0}} = -\sqrt{h}\left(K^{ij} - h^{ij}K\right) +Nh^{1/2}\lambda_0\frac{\partial \left(f_{\mu\nu}f^{\mu\nu}\right)}{\partial h_{ij,0}}
\end{eqnarray}
where the comma denotes a Lie-derivative in the direction of the time vector. We treat $\lambda_0$ as a new canonical coordinate, which introduces the primary constraint:
\begin{equation}
\label{new_primary_constraint}
\pi_{\lambda_0} = \frac{\partial \mathcal{L}^{\star}}{\partial \lambda_{0,0}} = 0.
\end{equation}
The new Hamiltonian is:
\begin{equation}
\mathcal{H}^{\star} = \pi N_{,0} + \pi^i N_{i,0} + \pi^{ij}h_{ij,0} + \pi_{\lambda_0}\lambda_{0,0} - \mathcal{L}^{\star} .
\end{equation}
Note that the canonical momenta $\pi$ and $\pi^i$ no longer vanish and do not act as primary constraints; however, $\pi_{\lambda_0}$, the momentum conjugate to $\lambda_0$, does vanish. By requiring that the Poisson bracket of the new primary constraint with the Hamiltonian $\mathcal{H}^{\star}$ vanishes, we obtain the secondary constraint:
\begin{equation}
f \equiv f_{\alpha\beta}f^{\alpha\beta}-f_0 =0.
\end{equation}
By taking the Poisson bracket of the secondary constraint $f$ with the Hamiltonian $\mathcal{H}^{\star}$, and iterating this process, one obtains a set of secondary constraints $\Phi_k$.
The Wheeler-DeWitt equations now become:
\begin{eqnarray}
\label{wheeler_dewitt_add}
\pi_{\lambda_0}\Psi &=& 0 \nonumber \\
\Phi_k\Psi &=& 0.
\end{eqnarray}
The commutation relations must now be formulated in terms of Dirac brackets instead of Poisson brackets \cite{dirac1964}:
\begin{equation}
\label{dirac_bracket_definition}
\left\{ f,g \right\}_{D} = \left\{ f,g \right\} - \left\{f,\Phi_k\right\}M^{-1}_{kl}\left\{\Phi_l,g\right\},
\end{equation}
where $\Phi_k$ are the second-class constraints and $M_{ij}=\left\{\Phi_i,\Phi_j\right\}$. Therefore, the commutation relations are:
\begin{eqnarray}
\label{new_commutation_relations}
\left[\lambda_0,\pi_{\lambda_0}^{\prime}\right] &=& i\hbar\left\{\lambda_0,\pi_{\lambda_0}^{\prime}\right\}_{D} \nonumber \\
\left[N,\pi^{\prime}\right] &=& i\hbar\left\{N,\pi^{\prime}\right\}_{D} \nonumber \\
\left[N_i,\pi^{j^{\prime}}\right] &=& i\hbar\left\{N_i,\pi^{j^{\prime}}\right\}_{D} \nonumber \\
\left[h_{ij},\pi^{k^{\prime}l^{\prime}}\right] &=& i\hbar \left\{h_{ij},\pi^{k^{\prime}l^{\prime}}\right\}_{D}.
\end{eqnarray}
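As a simple illustration of the definition (\ref{dirac_bracket_definition}), consider a system with two canonical pairs $(q_1,p_1)$, $(q_2,p_2)$ and the second-class constraints $\Phi_1=q_2$, $\Phi_2=p_2$. Then $M_{12}=\left\{q_2,p_2\right\}=1=-M_{21}$, and
\begin{equation}
\left\{q_1,p_1\right\}_{D} = 1, \qquad \left\{q_2,p_2\right\}_{D} = 0,
\end{equation}
so the constrained pair drops out of the quantum theory while the physical pair retains its canonical commutator.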
\section{\label{sec:discussion}Discussion}
We have argued that the time discontinuity paradox and the existence of CTCs in certain exact solutions of general relativity are manifestations of a fundamental crisis in the foundations of Einstein's general relativity. Therefore, we were forced to reformulate general relativity in a manner that is consistent with our physical experience of time. Note that this conclusion is in sharp contradistinction to the widespread claim that the existence of CTCs in solutions to Einstein's equations forces one to accept the possibility of time travel. We are arguing the converse, namely, that time travel has not been experimentally observed, and therefore CTCs (along with the time discontinuity) must be expunged from general relativity. Additional reasons for revisiting these paradoxes include `the problem of time' in quantum gravity and the observation that rotation is a general property of astrophysical systems exhibiting the dark matter problem.
We showed that we can resolve these paradoxes with a single postulate, namely, the preservation of the local properties of time. As a result, the electromagnetic field emerges as a `gauge' field that preserves the local properties of time in gravitational fields. In this work, we introduced this postulate into general relativity by demanding $d\sigma$ remain a scalar invariant under arbitrary four-dimensional coordinate transformations. This resulted in replacing the non-covariant definition of simultaneity in traditional general relativity with a covariant definition. On the other hand, the postulate to preserve the local properties of time was introduced into the field equations by adding the constraint $f_0 = f_{\alpha\beta}f^{\alpha\beta}$ to the action. While this constraint was introduced in order to prevent CTCs from emerging, we have not proven that this will be the case in general. More work is needed to understand the role of CTCs in solutions of the new field equations. If CTCs do arise in the proposed reformulation of general relativity, then one may need to introduce a stronger constraint into the variational principle. For example, another invariant exists, $\star f_{\mu\nu}f^{\mu\nu}$, where $\star$ denotes the dual, that can be preserved in the action principle. Therefore, more work may be needed to refine the postulate of the preservation of the local properties of time in a way that forces a unique set of field equations consistent with our physical experience of time.
The proposed theory does not allow a clear separation of the gravitational and electromagnetic fields; the electromagnetic field $\psi_\mu$ (and hence $A_\mu$) is composed of both $g_{\mu}$ and $\phi_\mu$. Therefore, the generally covariant electromagnetic field is composed of terms from the standard gravitational tensor, as well as the new degrees of freedom $\phi_\mu$. Note that a general coordinate transformation can introduce the field $\phi_\mu$ into space-time; however, it cannot introduce non-integrability of $d\sigma$ (it cannot make $f_{\mu\nu}=0 \rightarrow f_{\mu\nu}\ne 0$). For example, the $\phi_\mu$ field can emerge as a result of a coordinate transformation from the laboratory frame to the rotating frame. Since the condition $f_{\mu\nu} = 0$ is preserved under an arbitrary coordinate transformation, then according to Equation (\ref{geodesicequations2}), this field will have no effect on the motion of charged particles. On the other hand, the usual electromagnetic source term $j_\mu\psi^\mu$, which must be consistent with the constraint $f_0 = f_{\alpha\beta}f^{\alpha\beta}$, can produce electromagnetic fields that deflect charged particles.
A close examination of simultaneity revealed the space-time interpretation of the $\phi_\mu$ field. The quantity $\phi_0$ modifies the local speed of light, whereas the quantities $\phi_i$ (with the appropriate $\phi_0$-dependent scaling) shift the definition of simultaneity and define (along with the $g_i$) the one-way speed of light. Therefore, the proposed theory predicts variations of the speed of light in the presence of non-zero $\phi_\mu$, for both $f_{\mu\nu}=0$ and $f_{\mu\nu}\neq 0$. In particular, the proposed theory predicts variations of the one-way and two-way speeds of light in the presence of a classical electromagnetic field. Unfortunately, we cannot quantify the magnitude of such effects because the proportionality constant between the $\phi_\mu$ field and the classical electromagnetic field, $\alpha$, is proportional to the Lagrange multiplier $\lambda_0$. Nevertheless, experimental verification of the proposed theory can be sought in measurements of the variation of the speed of light due to $\phi_0$ and Sagnac-like effects due to the quantities $\phi_i$, which can in turn provide an estimate of the Lagrange multiplier $\lambda_0$\footnote{Similarly, astrophysical measurements of the above predicted anomalous accelerations along with precise knowledge of the associated electromagnetic fields can provide an estimate of the Lagrange multiplier $\lambda_0$.}.
There are numerous examples of astrophysical electromagnetic fields with unknown origin, ranging from the Earth's magnetic field to magnetic fields on galactic scales. The proposed theory provides a new framework for addressing these anomalous electromagnetic fields. The theory outlined above predicts electromagnetic fields when $f_{\mu\nu}\neq0$. This condition can be the result of the usual electromagnetic source term in the action $j_{\mu}\psi^{\mu}$, where $j_{\mu}$ is the electromagnetic four-current. However, one must also recognize the fact that the source term cannot be prescribed arbitrarily, and must be defined to be consistent with the quantity $f_0$. Therefore, one can view the constraint as the `source' of the electromagnetic field and consequently derive the structure of astrophysical electromagnetic fields in order to satisfy the constraint in the field equations. When $f_0$ is non-zero then $f_{\mu\nu}$ must also be non-zero. In addition, even when $f_0$ vanishes $f_{\mu\nu}$ can be nonzero. Therefore, either $j_{\mu}$ or $f_0$ can be considered the `source' of the electromagnetic field. A deeper understanding of the constraint can provide insight into the origin of seemingly anomalous astrophysical electromagnetic fields.
In summary, the proposed reformulation of general relativity provides a rich framework for addressing a number of outstanding problems in contemporary gravitational physics, including: the unification of gravitation and electromagnetism; the resolution of the time discontinuity paradox; the removal of closed timelike curves from solutions of Einstein's equations; dark energy and other cosmological problems; the dark matter problem; the Pioneer anomaly; the problem of quantum gravity; and, the existence of anomalous astrophysical electromagnetic fields. Since the proposed theory depends on both a Lagrange multiplier field $\lambda_0$ as well as the externally-defined quantity $f_0$, more work is needed to give it more predictive capabilities. This points to the need to develop a complete theory of quantum gravity based on the proposed theory.
\bibliographystyle{unsrt}
\label{sec:introduction}
Planets orbiting subgiant stars offer the possibility to analyze a broad set of physical processes that are not at play with main-sequence stars. These phenomena range from the
atmospheric expansion and evaporation mechanisms,
the orbital period decay,
the influence of stellar mass loss on the orbital evolution of planetary systems, as well as instabilities related to the evolution of stellar binaries \citep[e.g.][]{icko1991}. Given their intermediate evolutionary state, well characterized subgiant stars can place important constraints on several physical processes that depend on the position in the Hertzsprung-Russell diagram \citep[e.g.][]{godoy2021}.
The occurrence of planets around subgiant stars is debated in the literature. The Lick, Keck and California radial velocity planet searches \citep{johnson2006,johnson2007,johnson2008,johnson2010a,
johnson2010b,johnson2011a,johnson2011b}
targeted about 500 very bright (V < 8.5) subgiants during the past two decades and reached two main conclusions: subgiant stars present (1) a higher occurrence of giant planets and (2) a lower occurrence of short-period giant planets (hot Jupiters) with respect to main-sequence planet hosts.
Planets found around subgiants by these surveys are usually massive (>2 M$_{\rm J}$) and their larger occurrence rate with respect to giant planets around main-sequence stars suggests that the formation of massive planets is promoted around intermediate-mass subgiant stars \citep[1.5<M/M$_{\odot}$<2, e.g.][]{bowler2010}. At the same time, the lack of close-in Jupiters indicates that, during post-main-sequence evolution, planets with orbital separations beyond $\sim$1 AU are more likely to survive than planets closer to their hosts and that orbital evolution mechanisms critically depend on the mass of the planets and of the planets' hosts \citep[e.g.][]{villaver2009}.
\citet{lillobox2016} pointed out that planetary companions to subgiant stars with semi-major axis smaller than 0.5 AU tend to be inner components of multi-planetary systems and suggested that close-in Jupiters (a<0.06 AU) are engulfed by their host stars as they evolve off the main sequence. More recently, \citet{grunblatt2019} performed a systematic search for planetary transits in a sample of 2476 low-luminosity red giant branch stars observed during the NASA {\it K2} mission and tentatively found a higher fraction of planets with inflated radii (R>1 R$_{\rm J}$) and short orbital periods (P<10 days) around evolved stars than around main-sequence stars. Their results suggest that close-in planets larger than Jupiter survive the subgiant phase at least until their host stars are substantially evolved (R>5-6 R$_{\odot}$).
\begin{figure}
\includegraphics[width=\columnwidth]{tesscut.png}
\caption{
{\it TESS} Target Pixel File (TPF) centered on TIC~257060897 and relative to the first cadence of Sector~14. The image represents an area of
$\sim$4.2 square arcmin around the target. Sources in {\it Gaia} EDR3 are represented by the red dots, scaled inversely proportionally to their difference of apparent $G$-band magnitude with respect to the target and corrected for proper motion to the epoch of the TPF. The yellow circle shows our adopted photometric aperture.
}
\label{fig:tesscut}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{ASIAGO_REFERENCE.png}
\caption{
An area of $\sim6^{\prime}\times7^{\prime}$ centered on TIC~257060897 (indicated by the magenta cross) as imaged by the Asiago Schmidt 67/92 telescope. The image is displayed with the same orientation of the TESS image. Red open circles represent {\it Gaia} EDR3 sources.
}
\label{fig:ASIAGO_reference}
\end{figure}
The advent of the NASA {\it TESS} satellite
\citep{ricker2015} represents an important opportunity for the study of planetary systems around evolved stars. {\it TESS} provides short-cadence (2 minutes) photometry for a sample of about 200,000 pre-selected targets across the entire sky but also delivers Full Frame Images with a cadence of 30 min (during the nominal mission) and 10 min (during the extended mission). In \citet{montalto2020} we described a new project to exploit {\it TESS} Full Frame Images. We are monitoring a sample of about 2.6 million FGKM dwarfs and subgiants to search for transiting planets and to globally characterize their variability properties. Subgiant stars represent nearly 50\% of this set, increasing by a factor of $\sim$500 the number of evolved stars analyzed so far to search for transiting planets. We expect therefore that {\it TESS} will significantly contribute to the discovery of new planetary systems orbiting these stars. Subgiant stars are also primary targets of the next space-based planetary transit search mission PLATO \citep{rauer2014}.
In this work, we present the first discovery of this project:
a new short-period transiting planet around the subgiant star TIC~257060897\footnote{Recently this object has been included in the TESS Object of Interest (TOI) list as TOI~4138.01. It was alerted on June 23, 2021.}.
Table~\ref{tab:stellar_properties} summarizes some basic properties of the target star.
The procedure we followed to identify planetary transits was described in \citet{montalto2020}. We searched for planets around dwarf and subgiant stars selected following the criteria described in \citet{montalto2021} using
the box-fitting least squares algorithm (BLS) of \citet{kovacs2002} and applied a random forest classifier to isolate plausible transiting planetary candidates. We also applied vetting criteria related to the centroid motion and local stellar density, and inspected {\it Gaia} root-mean-square radial velocity measurements, whenever available, to rule out obvious eclipsing binaries.
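For illustration, the core idea of a box-fitting search can be sketched in a few lines of Python: phase-fold the light curve on a grid of trial periods and score each period by the depth of the faintest phase bin relative to the median bin level. This is only a minimal, pure-NumPy sketch of the idea behind the BLS algorithm of \citet{kovacs2002}, run on synthetic data with illustrative values; it is not the pipeline used in this work, which additionally handles detrending, duration fitting and the vetting steps described above.

```python
import numpy as np

def box_search(t, y, periods, n_bins=50):
    """Minimal BLS-like search: phase-fold the light curve on each trial
    period and score it by the depth of the faintest well-sampled phase
    bin below the median bin level."""
    scores = np.empty(len(periods))
    for k, period in enumerate(periods):
        idx = ((t % period) / period * n_bins).astype(int)
        sums = np.bincount(idx, weights=y, minlength=n_bins)
        counts = np.bincount(idx, minlength=n_bins)
        good = counts >= 3                      # skip poorly sampled bins
        means = sums[good] / counts[good]
        scores[k] = np.median(means) - means.min()
    return periods[np.argmax(scores)], scores

# Synthetic example: 27 d at 30 min cadence, 1 per cent deep transits
# every 3 d (all values here are illustrative, not the TIC 257060897 data).
rng = np.random.default_rng(0)
t = np.arange(0.0, 27.0, 1.0 / 48.0)
y = 1.0 + rng.normal(0.0, 1.0e-3, t.size)
in_transit = np.abs((t - 1.0 + 1.5) % 3.0 - 1.5) < 0.05
y[in_transit] -= 0.01

trial_periods = np.linspace(0.5, 5.0, 2000)
best_period, scores = box_search(t, y, trial_periods)  # recovers ~3 d
```

Because the transit depth (1 per cent) dominates the per-bin noise, the score peaks at the injected period; sub-harmonics such as 1.5 d are suppressed since their deep bins mix in-transit and out-of-transit points.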
In Sect.~\ref{sec:observations}, we describe the photometric and spectroscopic observations we acquired. In Sect.~\ref{sec:data_analysis}, we describe our reduction procedure. In Sect.~\ref{sec:spectroscopic_parameters}, we explain how we determined the spectroscopic parameters of the host star, in Sect.~\ref{sec:stellar_parameters} the stellar parameters and in Sect.~\ref{sec:planetary_parameters} the planetary parameters.
In Sect.~\ref{sec:stellar_activity}, we analyze the stellar activity.
In Sect.~\ref{sec:discussion}, we discuss our results and in Sect.~\ref{sec:conclusions} we conclude our analysis.
\begin{table}
\centering
\caption{Identifiers, astrometric and photometric measurements of the host star.}
\label{tab:stellar_properties}
\begin{tabular}{lcc}
\hline
\hline
Parameter & Value & Source \\
\hline
{\it Gaia} & 1697129530714536320 & {\it Gaia} EDR3 \\
TYC & TYC 4417-1588-1 & Simbad \\
2MASS & J15100767+7242372 & Simbad \\
TIC & 257060897 & TIC v8.1 \\
TOI & 4138 & ExoFOP \\
$\alpha$(J2016) & 15:10:7.718 & {\it Gaia} EDR3\\
$\delta$(J2016) & +72:42:37.12 & {\it Gaia} EDR3\\
$\pi$ (mas) & 1.97$\pm$0.01 & {\it Gaia} EDR3\\
$\mu_{\alpha}$ (mas yr$^{-1}$) & 13.51$\pm$0.01 & {\it Gaia} EDR3\\
$\mu_{\delta}$ (mas yr$^{-1}$) & -7.78$\pm$0.02 & {\it Gaia} EDR3\\
{\it TESS} & 11.263$\pm$0.007 & TIC v8.1\\
{\it G} & 11.6617$\pm$0.0002 & {\it Gaia} EDR3\\
{\it G$_{BP}$} & 11.9595$\pm$0.0006 & {\it Gaia} EDR3\\
{\it G$_{RP}$} & 11.2016$\pm$0.0003 & {\it Gaia} EDR3\\
{\it B} & 12.6$\pm$0.3 & TIC v8.1\\
{\it V} & 11.81$\pm$0.02 & TIC v8.1\\
{\it J} & 10.70$\pm$0.02 & TIC v8.1\\
{\it H} & 10.45$\pm$0.02 & TIC v8.1\\
{\it K$\rm_s$} & 10.39$\pm$0.02 & TIC v8.1\\
{\it W1} & 10.35$\pm$0.02 & TIC v8.1\\
{\it W2} & 10.38$\pm$0.02 & TIC v8.1\\
{\it W3} & 10.40$\pm$0.05 & TIC v8.1\\
{\it W4} & 9.52& TIC v8.1\\
\hline
\end{tabular}
\end{table}
\begin{figure*}
\includegraphics[width=2\columnwidth]{TIC257060897_full_lightcurve.png}
\caption{
\emph{Top:} {\it TESS} lightcurve corrected for systematics with eigenvector analysis. \emph{Bottom:} final lightcurve normalized by a B-spline fitted on out-of-transit data. The lightcurves are offset vertically by an arbitrary amount for clarity.
}
\label{fig:TOI257060897_full_lightcurve}
\end{figure*}
\section{Observations}
\label{sec:observations}
\subsection{Photometry}
TIC~257060897 was imaged by the {\it TESS} satellite
between July 2018 and July 2020, as reported below. We used {\it TESS} Full Frame Images to discover this object. Subsequently, we performed a ground-based follow-up using the 67/92 cm Schmidt telescope in Asiago, Italy and the 35.6 cm CROW telescope in Portalegre, Portugal.
\subsubsection{{\it TESS} photometry}
The {\it TESS} satellite imaged TIC~257060897 during the second year of operation in seven sectors: 14, 15, 16, 20, 21, 22 and 26. Full Frame Images were acquired with a cadence of 30 min. In total the satellite
collected 8428 images of the target. The first image was
taken on July 18, 2019 at 20:44 UT and the last image
on July 4, 2020 at 14:43 UT. In total, 47 transits were observed. Figure~\ref{fig:tesscut} shows the Target Pixel File (TPF) for TIC~257060897 relative to the first cadence of Sector 14.
The image covers an area of $\sim$4.2 square arcmin around the target. The red dots denote sources from {\it Gaia} EDR3, corrected for proper motion at the TPF epoch. Their size is scaled inversely with their difference in apparent {\it Gaia} G-band magnitude with respect to the target. We used a modified version of \texttt{tpfplotter}
\citep{aller2020} to generate this figure.
\subsubsection{Asiago 67/92 cm Schmidt telescope}
A partial transit covering the egress phase was observed with the Schmidt telescope in Cima Ekar on March 2, 2021. The telescope has a correcting plate of 67 cm and a spherical mirror of 92 cm, with a focal length of 215 cm. It is equipped with a KAF-16803 detector with an active area of 4096$\times$4096 pixels, covering a field of view of $\sim$1 square degree with a pixel scale of 0.87 arcsec pix$^{-1}$.
The telescope is completely robotized. We used the Sloan r$^{\prime}$ filter acquiring 382 images between 18:14 UT of March 2, 2021 and 01:17 UT of March 3, 2021. We used an exposure time of 25 sec. Figure~\ref{fig:ASIAGO_reference} shows an image of the sky region with dimension $\sim7^{\prime}\times6^{\prime}$ centered on TIC~257060897 (indicated by the magenta cross) as obtained with the Asiago Schmidt 67/92 telescope.
\subsubsection{CROW Observatory, Portalegre}
The telescope is a Celestron C14 Schmidt-Cassegrain with an aperture of 356 mm and a focal length of 2135 mm (f/6). The images were acquired with an SBIG ST-10XME camera equipped with a KAF-3200ME CCD operated at $-20\,^{\circ}$C. We used the Sloan r$^{\prime}$ filter. The telescope is completely robotized and is operated by the Atalaia group \& CROW Observatory, Portalegre, Portugal. The data were analyzed by J. Gregorio. We observed three transits with this setup. A partial transit covering the ingress phase was observed between 22:44 UT on March 20, 2021 and 04:50 UT on March 21, 2021. A full transit was observed between 20:55 UT on May 3, 2021 and 04:22 UT on May 4, 2021.
A partial transit covering the egress phase was observed between 23:01 UT on June 5, 2021 and 02:28 UT on June 6, 2021. The exposure time was fixed to 120 sec for the first and second visits and to 150 sec for the third one. A total of 90, 162 and 63 images were acquired on the first, second and third nights, respectively.
\subsection{Spectroscopy}
Spectroscopic observations were obtained with the HARPS-N \citep{cosentino2012} spectrograph\footnote{Program ID: A41TAC\_24, PI: M. ~Montalto} at the Telescopio Nazionale Galileo (TNG) at the Observatorio del Roque de los Muchachos (La Palma). We acquired 11 measurements with an exposure time of 12.6 min or 15 min obtaining a S/N$\sim$20 at 5500$\,$\AA. The measurements were acquired between May 16, 2020 and March 25, 2021, exploiting a time sharing agreement with the GAPS
({\it Global Architecture of Planetary Systems})
program \citep{covino2013,benatti2018}.
\begin{table}
\centering
\caption{HARPS-N radial velocities of TIC~257060897.}
\label{tab:spectroscopic_observations}
\begin{tabular}{ccc}
\hline
\hline
BJD$_{\textrm{TDB}}$ & RV(km s$^{-1}$) & $\rm\sigma_{RV}$ (km s$^{-1}$) \\
\hline
2458986.421420916 & -11.580 & 0.005 \\
2459026.478923034 & -11.573 & 0.005 \\
2459028.532127090 & -11.719 & 0.005 \\
2459029.430919364 & -11.629 & 0.006 \\
2459051.412596867 & -11.632 & 0.005 \\
2459072.456649624 & -11.72 & 0.01 \\
2459099.378091395 & -11.589 & 0.009 \\
2459272.722442626 & -11.665 & 0.005 \\
2459296.505446991 & -11.653 & 0.005 \\
2459297.669279696 & -11.596 & 0.009 \\
2459298.609597649 & -11.69 & 0.01 \\
\hline
\end{tabular}
\end{table}
\begin{figure}
\includegraphics[width=0.9\columnwidth]{SED.png}
\caption{
Optical and near-infrared broadband photometric measurements of the target star (black circles) and best-fit model (red open circles).
}
\label{fig:SED}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{TESS_LC.png}
\caption{
\emph{Top:} {\it TESS} light curve of TIC~257060897 folded with the best fit ephemerides. The best fit transit model is denoted by the red curve. \emph{Bottom:} residuals of the model fit.
}
\label{fig:TESS_LC}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{ASIAGO_LC.png}
\caption{
\emph{Top:} light curve of TIC~257060897 obtained with the Asiago 67/92 cm Schmidt telescope. The best fit transit model is
denoted by the red curve.
\emph{Bottom:} residuals of the model fit.
}
\label{fig:ASIAGO_LC}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{CROW1_LC.png}
\caption{
\emph{Top:} light curve of TIC~257060897 obtained with the CROW telescope on March 20, 2021. The best fit model (transit model + Gaussian process model) is
denoted by the red curve.
\emph{Middle:} light curve of TIC~257060897
after subtracting the best fit Gaussian process model. The best fit transit model is
denoted by the red curve.
\emph{Bottom:} residuals of the fit.
}
\label{fig:CROW1_LC}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{CROW2_LC.png}
\caption{
\emph{Top:} light curve of TIC~257060897 obtained with the CROW telescope on May 3, 2021. The best fit model (transit model + Gaussian process model) is
denoted by the red curve.
\emph{Middle:} light curve of TIC~257060897
after subtracting the best fit Gaussian process model. The best fit transit model is
denoted by the red curve.
\emph{Bottom:} residuals of the fit.
}
\label{fig:CROW2_LC}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{CROW3_LC.png}
\caption{
\emph{Top:} light curve of TIC~257060897 obtained with the CROW telescope on June 5, 2021. The best fit model (transit model + Gaussian process model) is
denoted by the red curve.
\emph{Middle:} light curve of TIC~257060897
after subtracting the best fit Gaussian process model. The best fit transit model is
denoted by the red curve.
\emph{Bottom:} residuals of the fit.
}
\label{fig:CROW3_LC}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{HARPS_RV.png}
\caption{
\emph{Top:} the two diagrams show HARPS-N radial velocity measurements as a function of time (TJD=BJD-2457000) separated in two different time intervals to better visualize them. \emph{Middle:} radial velocities folded with the best-fit ephemerides and best fit model (continuous line). \emph{Bottom:} residuals of the fit.
}
\label{fig:HARPS_RV}
\end{figure}
\section{Data analysis}
\label{sec:data_analysis}
\subsection{Photometry}
The {\it TESS} images were analyzed with the \texttt{DIAMANTE} pipeline following the procedure described in \citet{montalto2020}. In brief, the {\it TESS} FFIs were analyzed with a difference-imaging approach in which a stacked reference image, convolved with an optimal kernel, was subtracted from each image.
The photometry was extracted using a circular aperture with a radius of 2 pixels, chosen after testing aperture radii between 1 pix and 4 pix. The lightcurves from different sectors are merged by accounting for sector-by-sector photometric zero-point variations, then corrected for systematics on a sector-by-sector basis using a best set of eigenlightcurves, and finally normalized by a B-spline function fitted on out-of-transit data. In Fig.~\ref{fig:TOI257060897_full_lightcurve} we show the eigenvector-corrected lightcurve (top) and the final B-splined lightcurve (bottom). We analyzed only the data that were not flagged by the pipeline \citep[see ][]{montalto2020}, yielding 7997 measurements.
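The out-of-transit B-spline normalization step can be sketched as follows. This is a minimal illustration, not the \texttt{DIAMANTE} pipeline itself: the cadence grid, noise level and smoothing value are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import splev, splrep

rng = np.random.default_rng(0)
time = np.linspace(0.0, 27.0, 1000)                      # days, one sector of cadences (illustrative)
trend = 1.0 + 0.002 * np.sin(2.0 * np.pi * time / 10.0)  # slow trend to be removed
flux = trend + rng.normal(0.0, 1e-4, time.size)          # white noise at the 100 ppm level
in_transit = np.zeros(time.size, dtype=bool)             # in-transit mask (no transits in this toy case)

# Fit a cubic B-spline on out-of-transit points only, then normalize the whole lightcurve by it
tck = splrep(time[~in_transit], flux[~in_transit], s=time.size * 1e-8)
norm_flux = flux / splev(time, tck)
```

Fitting the spline only on out-of-transit cadences avoids suppressing the transit depth when the full lightcurve is divided by the fitted trend.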
The analysis of the Asiago data was performed with custom built software. The analysis of the CROW Observatory data was done using the
AstroImageJ software\footnote{\url{https://www.astro.louisville.edu/software/astroimagej/}}. In both cases the fluxes of the target and a set of comparison stars were derived by simple aperture photometry and differential photometry was performed.
\subsection{Spectroscopy}
The spectroscopic data were reduced by the HARPS-N Data Reduction Software
\citep[DRS v3.7, ][]{lovis2007} using a G2V template mask. Table~\ref{tab:spectroscopic_observations} reports the radial velocities extracted by the pipeline.
\section{Spectroscopic parameters}
\label{sec:spectroscopic_parameters}
We measured effective temperature (T$\rm_{eff}$), surface gravity (log g), iron abundance [Fe/H] and
microturbulence velocity ($\xi$)
using the equivalent width method. We used the software \texttt{StePar} \citep{tabernero2019}, which
implements a grid of MARCS model atmospheres \citep{gustafsson2008} and the MOOG \citep{sneden1973} radiative transfer code to compute stellar atmospheric parameters by means of a downhill simplex minimisation algorithm that minimises a quadratic form composed of the excitation and ionisation equilibrium of Fe. Equivalent widths were measured with \texttt{ARES}~v2
\citep{sousa2015} from the coadded spectrum obtained from the individual HARPS-N measurements used for the radial velocity measurements. The coadded spectrum had a S/N$\sim$60 at
5500 \AA. We used the FeI and FeII line list provided by the authors for the case of the Sun.
Using this approach we obtained the parameters reported in Table~\ref{tab:spectroscopic_parameters}.
\subsection{Empirical spectral library}
\label{sec:empirical_spectral_library}
We also compared the spectroscopic parameters
we derived in the previous section with the ones obtained using the spectra of the empirical library
of \citet{yee2017}. This library includes 404 stars observed with Keck/HIRES by the California Planet Search. We used the software \texttt{SpecMatch-Emp}
\citep{yee2017} to perform the comparison between our stacked spectrum of TIC~257060897 and the library spectra. In Fig.~\ref{fig:TIC257060897_EmpiricalLibrary.png} we represent with the magenta colour the three empirical library spectra\footnote{The spectra of star HD 31523, HD84737 and KIC 12258514} most highly correlated with the target spectrum
in the region of the Mgb triplet. With the red colour we indicate the target spectrum, and in blue the best-fit linear combination of the three reference spectra reported above. Finally, in black at the bottom we show the difference between the target spectrum and the linearly combined reference spectra. In this case we obtained: T$\rm_{eff}$=(5967$\pm$110) K,
log$\,$g=(4.1$\pm$0.1) dex and [Fe/H]=(0.19$\pm$0.09) dex, all consistent within 1$\sigma$ with the spectroscopic parameters previously derived and adopted.
\begin{figure}
\includegraphics[width=\columnwidth]{TIC257060897_EmpiricalLibrary.png}
\caption{
Comparison of the target spectrum in the Mgb triplet region with empirical
spectra in the \citet{yee2017} library. The target's spectrum is depicted in red. The three spectra of the library most highly correlated with the target's spectrum are shown in magenta, while the best-fit
linear combination of them is depicted in blue. The residuals between the best fit and the target's spectrum are represented in black.
}
\label{fig:TIC257060897_EmpiricalLibrary.png}
\end{figure}
\begin{table}
\centering
\caption{Spectroscopic parameters of TIC~257060897.}
\label{tab:spectroscopic_parameters}
\begin{tabular}{cccc}
\hline
\hline
T$\rm_{eff}$ & log$\,$g & [Fe/H] & $\rm \xi$ \\
(K) & (dex) & (dex) & (km s$\rm^{-1}$) \\
\hline
6128$\pm$57 & 4.2$\pm$0.1 & +0.20$\pm$0.04 & 1.28$\pm$0.07\\
\hline
\end{tabular}
\end{table}
\begin{table*}
\centering
\caption{System parameters relative to TIC~257060897.}
\label{tab:system_parameters_A}
\begin{tabular}{llccc}
\hline
\hline
Parameter & Symbol & Value & Priors & Units \\
\hline
{\it Fitted parameters} & & & & \\
Transit Epoch (BJD$-$2457000) & $T_0$ & 1708.9983$\pm$0.0003 & $\mathcal{U}$(1708.8, 1709.1) & days \\
Orbital Period & $P$ & 3.660028$\pm$0.000006 & $\mathcal{U}$(3.6, 3.7) & days \\
Planet-to-star radius ratio & $p=\frac{R_p}{R_{\star}}$ & 0.0841$\pm$0.0009 & $\mathcal{U}$(0.000010; 0.5) & - \\
Impact parameter & $b$ & 0.42$\pm$0.08 & $\mathcal{U}$(0; 2) & - \\
Stellar reflex velocity & $K$ & 74$\pm$3 & $\mathcal{J}$(0.5; 2000) & m s$^{-1}$ \\
Center-of-mass velocity & $\gamma$ & -11.653$\pm$0.002 & $\mathcal{U}$(-21.720; -1.573) & km s$^{-1}$\\
$\sqrt{e}\cos\omega$ & $\sqrt{e}\cos\omega$ & 0.08$\pm$0.08 & $\mathcal{U}$(-1; 1) & - \\
$\sqrt{e}\sin\omega$ & $\sqrt{e}\sin\omega$ & 0.0$\pm$0.2 & $\mathcal{U}$(-1; 1) & - \\
Stellar density & $\rho_{\star}$ & 0.22$\pm$0.01 & $\mathcal{N}$(0.22; 0.01) & $\rho_{\odot}$\\
Radial velocity jitter (HARPS-N) & $\sigma_{\textrm{HARPS-N}}$ & 3$\pm$2 & $\mathcal{U}$(0.05; 1000) & m s$^{-1}$ \\
Jitter error (TESS) & $\sigma_{\textrm{TESS}}$ & 373$\pm$18 & $\mathcal{U}$(4, 419000) & ppm \\
Jitter error (ASIAGO) & $\sigma_{\textrm{ASIAGO}}$ & 1052$\pm$531 & $\mathcal{U}$(38, 389000) & ppm \\
Jitter error (CROW1) & $\sigma_{\textrm{CROW1}}$ & 430$\pm$356 & $\mathcal{U}$(32, 327200) & ppm \\
Jitter error (CROW2) & $\sigma_{\textrm{CROW2}}$ & 305$\pm$244 & $\mathcal{U}$(38, 415800) & ppm \\
Jitter error (CROW3) & $\sigma_{\textrm{CROW3}}$ & 1449$\pm$256 & $\mathcal{U}$(13, 132500) & ppm \\
GP$\rm_{\log \rho}$ (CROW) & GP$\rm_{\log \rho}$ parameter (Mat\'ern Kernel) & -2.1$\pm$0.4 & $\mathcal{U}$(-3, 3) & - \\
GP$\rm_{\log \sigma}$ (CROW) & GP$\rm_{\log \sigma}$ parameter (Mat\'ern Kernel) & -5.6$\pm$0.3 & $\mathcal{U}$(-6, 6) & ppm \\
Parameter related to linear limb darkening (TESS) & $q_{1,\textrm{TESS}}$ & 0.3$\pm$0.1 & $\mathcal{U}$(0, 1) & - \\
Parameter related to quadratic limb darkening (TESS) & $q_{2,\textrm{TESS}}$ & 0.3$\pm$0.2 & $\mathcal{U}$(0, 1) & - \\
Parameter related to linear limb darkening (ASIAGO) & $q_{1,\textrm{ASIAGO}}$ & 0.7$\pm$0.2 & $\mathcal{U}$(0, 1) & - \\
Parameter related to quadratic limb darkening (ASIAGO) & $q_{2,\textrm{ASIAGO}}$ & 0.5$\pm$0.3 & $\mathcal{U}$(0, 1) & - \\
Parameter related to linear limb darkening (CROW) & $q_{1,\textrm{CROW}}$ & 0.3$\pm$0.2 & $\mathcal{U}$(0, 1) & - \\
Parameter related to quadratic limb darkening (CROW) & $q_{2,\textrm{CROW}}$ & 0.4$\pm$0.3 & $\mathcal{U}$(0, 1) & - \\
\hline
{\it Derived parameters} & & & & \\
Orbital inclination & $i$ & 86.0$\pm$0.7 & - & $^{\circ}$\\
Stellar mass & $M_{\star}$ & 1.32$\pm$0.04 & - & M$_{\odot}$ \\
Stellar radius & $R_{\star}$ & 1.82$\pm$0.05 & - & R$_{\odot}$ \\
Scaled semi-major axis of the orbit & $\frac{a}{R_{\star}}$ & 6.05$\pm$0.09 & - & - \\
Semi-major axis & a & 0.051$\pm$0.002 & - & AU \\
Eccentricity & $e$ & 0.03$\pm$0.02 & - & - \\
Argument of periastron & $\omega$ & 20$\pm$72 & - & $^{\circ}$\\
Extinction in the visible & $A_{V}$ & 0.08$\pm$0.02 & - & - \\
Luminosity & $\log~L_{*}$ & 0.61$\pm$0.02 & - & L$\rm_{\odot}$ \\
Distance & $d$ & 498$\pm$13 & - & pc \\
Age & $\log~Age$ & 9.54$\pm$0.04 & - & $\log$(yr) \\
Planet mass & $m_p$ & 0.67$\pm$0.03 & - & M$\rm_{jup}$\\
Planet radius & $r_p$ & 1.49$\pm$0.04 & - & R$\rm_{jup}$\\
Planet surface gravity & $\log g_{p}$ & 2.87$\pm$0.03 & - & dex (cgs) \\
Planet density & $\rho_p$ & 0.25$\pm$0.02 & - & g cm$\rm ^{-3}$\\
Planet equil. temp. (A=0) & $T_{eq}$ & 1762$\pm$21 & - & K \\
Total duration & T$_{41}$ & 0.194$\pm$0.005 & - & days \\
Duration of total transit phase & T$_{32}$ & 0.158$\pm$0.006 & - & days \\
Linear limb darkening (TESS) & $\mu_{1,\textrm{TESS}}$ & 0.3$\pm$0.1 & - & - \\
Quadratic limb darkening (TESS) & $\mu_{2,\textrm{TESS}}$ & 0.2$\pm$0.3 & - & - \\
Linear limb darkening (ASIAGO) & $\mu_{1,\textrm{ASIAGO}}$ & 0.8$\pm$0.3 & - & - \\
Quadratic limb darkening (ASIAGO) & $\mu_{2,\textrm{ASIAGO}}$ & 0.0$\pm$0.4 & - & - \\
Linear limb darkening (CROW) & $\mu_{1,\textrm{AT1}}$ & 0.4$\pm$0.3 & - & - \\
Quadratic limb darkening (CROW) & $\mu_{2,\textrm{AT1}}$ & 0.1$\pm$0.3 & - & - \\
\hline
\end{tabular}
\end{table*}
\begin{figure}
\includegraphics[width=\columnwidth]{Track_M1_30.png}
\caption{
Stellar track of a star with mass M=1.3 M$_{\odot}$ and
metallicity [M/H]=0.2. The luminosity and the effective temperature of the target are indicated by the black dot.
The diagram shows also some critical evolutionary phases along the track (red dots). Point A: beginning of the main sequence; point B: the hydrogen burning is almost ended and a small contraction phase begins; point C: the small contraction ends, the hydrogen is exhausted in the core and the star moves towards the RGB.
}
\label{fig:iso}
\end{figure}
\section{Stellar parameters}
\label{sec:stellar_parameters}
We derived stellar parameters using a Bayesian approach and our custom software.
In particular, we used
the {\it Gaia} EDR3 \citep{riello2021}, 2MASS \citep{skrutskie2006, cohen2003} and ALLWISE (W1 and W2) photometry \citep{wright2010, jarrett2011}. We imposed a Gaussian prior on the effective temperature, the gravity, the metallicity using the values reported in Table~\ref{tab:spectroscopic_parameters}.
We imposed a uniform prior on the distance: [$d$-3$\times\,ed$; $d$+3$\times\,ed$], where
$d$ was the distance obtained from the simple inversion of the parallax, and $ed$ was the semi-difference between the upper and lower distance estimates obtained by subtracting and adding the parallax standard error to the {\it Gaia} EDR3 parallax value.
Such parallax value was first corrected by the zero point bias discussed in \citet{lindegren2021} using the software provided by the authors\footnote{\url{https://www.cosmos.esa.int/web/gaia/edr3-code}}. We found a value equal to -0.044953 mas
for the bias.
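The construction of this distance prior can be written out explicitly. In the sketch below the parallax value and its error are illustrative placeholders (not the measured values for this target); only the zero-point bias is taken from the text, and whether the error is applied before or after the bias correction is an assumption.

```python
# Illustrative Gaia EDR3 parallax and error (mas); not the measured values for this target
plx, plx_err = 1.963, 0.05
zp = -0.044953                      # zero-point bias (mas) quoted in the text

plx_corr = plx - zp                 # bias-corrected parallax
d = 1000.0 / plx_corr               # distance from simple parallax inversion, pc

# Semi-difference of the upper/lower distance estimates, then a uniform prior of +/- 3 ed
ed = 0.5 * (1000.0 / (plx_corr - plx_err) - 1000.0 / (plx_corr + plx_err))
prior_lo, prior_hi = d - 3.0 * ed, d + 3.0 * ed
```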
We also imposed a uniform prior on the interstellar extinction A$\rm_V$. To construct this prior we first calculated the expected value of the reddening for our target
using the reddening map of \citet{lallement2018}. At the position of TIC~257060897 we obtained {\it E(B-V)}=0.02$\pm$0.01. We then assumed a standard reddening law and obtained A$\rm_V=3.1\,E(B-V)=0.06\pm0.03$. We then considered as the plausible interval for the optical extinction the values between [0; A$\rm_V$+3$\times\sigma_{A_V}$]=[0; 0.15]. To calculate the expected broadband photometry we used the Padova library of stellar isochrones \citep[PARSEC,][]{bressan2012}. We first restricted the age range of the stellar isochrones to log Age = [9.4; 9.8] using a {\it Gaia} absolute colour-magnitude diagram, and generated a set of isochrones spanning this range with a logarithmic step of 0.01 dex. We varied the metallicity of the isochrone set between [M/H] = [0.05; 0.3] in steps of 0.01 dex. For each value of the age and of the metallicity we considered nine values of the optical extinction, A$\rm_V$=[0.00, 0.01, 0.02, 0.03, 0.04, 0.05, 0.10, 0.15, 0.20], and generated the corresponding models. Linear interpolation was used to derive the broadband photometry corresponding to any intermediate value of the reddening. For each value of the effective temperature, gravity and metallicity generated by the algorithm we identified the stellar model with the closest stellar parameters in our
isochrone set. We then calculated from this model the broadband photometry applying to the model magnitudes the simulated distance modulus and extinction. We also calculated the parallax (from the simulated distance value). We then compared these simulated values of the broadband photometry and of the parallax with the observed ones. The log-likelihood function we adopted to evaluate the model performance was equal to:
$\ln\mathcal{L} = -\frac{1}{2}\sum_{i=1}^{N_{\rm obs}} \left(\frac{o_i-s_i}{\sigma_{o_i}}\right)^2$, where $o_i$ and $s_i$ are the observed and simulated values and $\sigma_{o_i}$ the observational uncertainties. For any simulated model we also registered the value of the stellar mass, stellar radius, luminosity and age. The posterior distributions of the parameters were obtained using the nested sampling method implemented in the \texttt{MultiNest}
package \citep{feroz2008,buchner2014,feroz2009,feroz2019}. We used 250 live points. The result of the fit is shown in Fig.~\ref{fig:SED}. All photometric data are well reproduced. The reduced chi-square of the fit ($\rm\chi_r$) is equal to $\rm\chi_r$=1.2. The best fit stellar mass and radius are M$_{\star}$=(1.32$\pm$0.04) M$_{\odot}$ and R$_{\star}$=(1.82$\pm$0.05) R$_{\odot}$, respectively.
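This Gaussian log-likelihood is straightforward to evaluate; a minimal sketch follows, in which the data vector mixes broadband magnitudes and a parallax purely for illustration (the numbers are not the actual measurements):

```python
import numpy as np

def log_likelihood(observed, simulated, sigma):
    """Gaussian log-likelihood up to an additive constant: -chi^2 / 2."""
    r = (np.asarray(observed) - np.asarray(simulated)) / np.asarray(sigma)
    return -0.5 * np.sum(r ** 2)

# Toy data vector: three broadband magnitudes plus the parallax (illustrative values)
obs = np.array([10.70, 10.45, 10.39, 2.008])    # J, H, Ks (mag) and parallax (mas)
sim = np.array([10.71, 10.44, 10.40, 2.010])    # simulated magnitudes and parallax
err = np.array([0.02, 0.02, 0.02, 0.02])        # observational uncertainties

lnL = log_likelihood(obs, sim, err)
```

In the actual fit, a sampler such as \texttt{MultiNest} proposes the model parameters and this function scores each proposed model against the data.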
We also obtained a distance (d) equal to d=(498$\pm$13) pc,
an extinction equal to A$\rm_V$=(0.08$\pm$0.02) and an age
equal to log Age=(9.54$\pm$0.04).
The results of our analysis are reported in Table~\ref{tab:system_parameters_A}. In Figure~\ref{fig:iso}, we also present the stellar track of a star with mass M=1.3 M$_{\odot}$ and metallicity [M/H]=0.2. The diagram also shows some critical evolutionary phases along the track (red dots). In particular, point A denotes the beginning of the main sequence (the pre-main-sequence phase was neglected). At point B the hydrogen burning is almost ended, and a small contraction phase begins here for intermediate-mass and massive stars
\citep[M$\gtrsim$1.25$\,$M$_{\odot}$, e.g.][]{kippenhahn1994}. At point C the small contraction ends, the hydrogen is exhausted in the core and the star moves toward the RGB.
The location of the target star in Figure~\ref{fig:iso} (black dot) in between points B and C suggests
that this object has already entered a phase of instability where the hydrogen in its core is nearly completely exhausted and the core is slightly contracting before igniting the hydrogen shell. The evolution across these phases is very fast (see Sect.~\ref{sec:discussion}).
By using the empirical spectral library described in Sec.~\ref{sec:empirical_spectral_library}
we also derived the stellar parameters obtaining
R$_{\star}$=(1.7$\pm$0.2) R$_{\odot}$, M$_{\star}$=(1.20$\pm$0.08) M$_{\odot}$, log Age=9.7, v$\,$sin$\,$i=1 km$\,$s$^{-1}$.
The estimated v$\,$sin$\,$i and stellar radius imply a rotation period of $\sim92$ days (assuming the inclination of the stellar rotation axis is identical to the inclination of the planetary orbit).
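The quoted rotation period follows directly from $P = 2\pi R_{\star}/v$ with $v = v\,\sin i$ (i.e. $\sin i \simeq 1$, a good approximation for the orbital inclination of $86^{\circ}$); a minimal check with the adopted radius:

```python
import math

R_SUN_KM = 6.957e5           # nominal solar radius, km

r_star = 1.82                # adopted stellar radius, R_sun
vsini = 1.0                  # km/s, from the SpecMatch-Emp analysis

# P = 2*pi*R / v, assuming the equatorial velocity equals vsini
p_rot_days = 2.0 * math.pi * r_star * R_SUN_KM / vsini / 86400.0
```

With these inputs the period comes out at $\sim$92 days, as stated in the text.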
The Lomb-Scargle periodogram of the eigenvector-corrected out-of-transit {\it TESS} data is rather flat beyond $\sim$30 days, as shown in Fig.~\ref{fig:oot_ls}, whereas some structure is visible at shorter periods, although it is not associated with a clear modulation.
\begin{figure}
\includegraphics[width=\columnwidth]{OOT_LS.png}
\caption{
The Lomb-Scargle periodogram of the eigenvector corrected out-of-transit data obtained by TESS.
}
\label{fig:oot_ls}
\end{figure}
An alternative method to determine the stellar parameters is described in \citet{montalto2021} and employed for the construction of the all-sky PLATO input catalogue (asPIC1.1).
In this case we obtained
T$\rm_{eff}$=(5946$\pm$254) K,
R$_{\star}$=(2.0$\pm$0.2) R$_{\odot}$ and M$_{\star}$=(1.3$\pm$0.1) M$_{\odot}$.
These estimates are all compatible with the parameters obtained by the Bayesian approach described above, which was finally adopted in our analysis.
\section{Planetary parameters}
\label{sec:planetary_parameters}
Planetary parameters were obtained performing a simultaneous fit
of both spectroscopic and photometric data with the software
\texttt{PyORBIT} \citep{malavolta2016,malavolta2018}.
For the transiting planet we assumed a Keplerian orbit with free eccentricity, following the parametrization of \citet{eastman2013}.
Transit models were computed with the \texttt{batman} package \citep{kreidberg2015}, following the prescriptions of \citet{benatti2019}.
For TESS long-cadence data, we took into account the distortion due to the extended integration time \citep{kipping2010} by averaging the light-curve model over ten evenly-spaced points computed within each 1800~s exposure. For the other datasets, this correction was not deemed necessary.
We used independent limb-darkening quadratic coefficients for each instrument. We sampled limb-darkening coefficients with the method described in \citet{kipping2013} and used uninformative priors for all parameters except the stellar density, for which we used a Gaussian prior following the results of Sect.~\ref{sec:stellar_parameters}. Each dataset came with its own jitter parameter to absorb underestimated white-noise errors and unaccounted sources of red noise, with each CROW transit treated as an independent dataset.
We explored the possibility of modelling instrumental systematic effects in the light curves with a Gaussian process (GP) using a Matérn-like kernel, as implemented in the code \texttt{celerite} \citep{ambikasaran2014,foreman2017}.
We performed model selection among different combinations of datasets with/without GP by computing the Bayesian evidence through Dynamic Nested Sampling \citep{higson2019}, after implementing \texttt{dynesty} \citep{speagle2020} into \texttt{PyORBIT}. The favoured model foresees the use of a GP for the CROW data only, with hyperparameters shared among the three transits. This model was moderately favoured over the use of independent hyperparameters for each CROW transit ($\Delta \ln\mathcal{Z} = 4.9 \pm 0.8$) or an additional GP for the Asiago light curve ($\Delta \ln\mathcal{Z} = 4.2 \pm 0.8$), while it was strongly favoured over any other combination ($\Delta \ln\mathcal{Z} > 10 $). Regarding radial velocities, we preferred to leave any possible activity-related signal to be absorbed by the jitter parameter, rather than using a GP, due to the relatively small number of observations.
The posterior distributions of the parameters were obtained using \texttt{emcee} \citep{foremanmackay2013}, employing the most favoured model according to the Bayesian evidence. The model encompassed 23 parameters, for which we used 92 MCMC walkers. We ran the MCMC for 100000 steps, conservatively discarding the first 25000 steps as burn-in and applying a thinning factor of 100. The confidence intervals were estimated by taking the 15.86th and 84.13th percentiles of the posterior, reported together with the median values in Table~\ref{tab:system_parameters_A}. We obtained that the transiting body is a Jupiter-like planet with a mass m$\rm_p=$(0.67$\pm$0.03) M$\rm_{j}$ and a radius r$\rm_p=$(1.49$\pm$0.04) R$\rm_{j}$, yielding a density $\rho_p$=(0.25$\pm$0.02) g cm$^{-3}$. The resulting eccentricity is $e=$(0.03$\pm$0.02), consistent with a circular orbit according to the criterion of \citet{lucy1971}. The best fit model is represented in Figs.~\ref{fig:TESS_LC}-\ref{fig:HARPS_RV}.
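The 15.86th and 84.13th percentiles quoted above correspond to the $\pm1\sigma$ interval of a Gaussian posterior. The extraction step can be sketched on a synthetic chain (the numbers below are illustrative, not the actual posterior samples):

```python
import numpy as np

# Synthetic posterior chain standing in for the thinned MCMC samples (illustrative)
rng = np.random.default_rng(42)
chain = rng.normal(loc=0.67, scale=0.03, size=200_000)   # e.g. samples of m_p in M_jup

# Median and 15.86th/84.13th percentiles give the value and its +/- 1-sigma interval
lo, med, hi = np.percentile(chain, [15.86, 50.0, 84.13])
value, err_minus, err_plus = med, med - lo, hi - med
```

For a Gaussian chain the two half-widths coincide with the standard deviation; for skewed posteriors they give asymmetric error bars.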
\section{Stellar activity}
\label{sec:stellar_activity}
\subsection{Chromospheric activity}
\label{sec:chromospheric_activity}
In Fig.~\ref{fig:bis_logR} (top) we present the chromospheric activity index log R$^{\prime}\rm_{HK}$ as a function of the radial velocity measurements. To calculate this index we first calculated the $S\rm_{CaII}$ index with the software \texttt{ACTIN}
\footnote{\url{https://github.com/gomesdasilva/ACTIN}}
\citep{dasilva2018} and then followed the procedure reported in
\citet{dasilva2021} to calibrate the $S\rm_{CaII}$ index to the Mount Wilson scale and to calculate the photospheric and bolometric
corrected chromospheric emission ratio R$^{\prime}\rm_{HK}$. Errors on the individual measurements are obtained from error propagation of the formulas reported in \citet{dasilva2021}\footnote{We also considered a minimum error of 0.0005 dex on the Mount Wilson index, as suggested in \citet{dasilva2021}.}. The Pearson correlation coefficient between the log R$^{\prime}\rm_{HK}$ index and the radial velocities is 0.23 (p-value=0.5416), indicating a negligible correlation between these two quantities. The average log R$^{\prime}\rm_{HK}$ is
$\langle \log R^{\prime}_{\rm HK}\rangle$=($-5.06\pm0.05$). According to the classification proposed by \citet{henry1996}, during our observations
TIC~257060897 was inactive, consistent with the moderately old age derived from stellar models in Sect.~\ref{sec:stellar_parameters}.
\subsection{Bisector span}
\label{sec:bisector_span}
In Fig.~\ref{fig:bis_logR} (bottom) we report the bisector span \citep{queloz2001} vs the radial velocity measurements. This quantity, calculated by the HARPS-N pipeline, is a measure of the line asymmetry and may indicate issues related to activity and/or multiplicity. The error on the bisector has been assumed equal to twice the error on the radial velocities. Also in this case we measured a negligible correlation between the radial velocities and the bisector span (r$\rm_{pearson}$=0.44, p-value=0.1808).
\begin{figure}
\includegraphics[width=\columnwidth]{BIS_LOGR.png}
\caption{
\emph{Top: } the chromospheric activity index of TIC~257060897 vs the radial velocities measurements. \emph{Bottom: } the bisector span vs the radial velocities.
}
\label{fig:bis_logR}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{radius_mass.png}
\caption{
Planetary radius vs planetary mass for the 250 known transiting planets with mass $>0.5$ M$\rm_{J}$ and with masses and radii measured with a precision better than 10$\%$ (open circles). The position of TIC~257060897b in this
diagram is indicated by the red dot.
}
\label{fig:radius_mass}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{density_mass.png}
\caption{
Planetary density vs planetary mass for the same sample of planets presented in Fig.~\ref{fig:radius_mass}. The position of TIC~257060897b in this
diagram is indicated by the red dot.
}
\label{fig:density_mass}
\end{figure}
\section{Discussion}
\label{sec:discussion}
The lack of correlation between RVs vs log R$^{\prime}\rm_{HK}$ and RVs vs BIS$\rm_{span}$
supports the Keplerian origin of the RV variations
and the planetary nature of the transiting object.
TIC~257060897b is an inflated hot-Jupiter planet with a very low density and surface gravity. In
Fig.~\ref{fig:radius_mass}, we present the
radius against the mass of all 250 known transiting exoplanets (with mass $>0.5$ M$\rm_{J}$) having a precision better than 10$\%$ in both radius and mass\footnote{This list was retrieved from the NASA exoplanet archive: \url{http://exoplanetarchive.ipac.caltech.edu} on September 21, 2021. Whenever a planet had multiple entries in the table we averaged the quantities of interest among the different entries.} and in Fig.~\ref{fig:density_mass} the density vs the mass for the same planet sample. These figures show that TIC~257060897b has one of the largest radii and lowest densities among the transiting exoplanets known so far, making it an extreme planetary system in the context of known hot-Jupiter planets.
The inflated radius of TIC~257060897b
may be related to the evolutionary state of its host star. Since the beginning of the main sequence this star has increased its luminosity by 70$\%$ in about 3.5 Gyr, and over the next 130 Myr it will further increase its luminosity by 30$\%$. TIC~257060897 is therefore in a phase of extremely rapid luminosity increase (see Fig.~\ref{fig:iso}), and the atmosphere of the close-in exoplanet may have reacted by puffing up in response to the increased energy input from the host.
The argument of re-inflation of close-in giant planets has been extensively discussed
in \citet{hartman2016}, who demonstrated that inflated-radius hot Jupiters are preferentially found around more evolved host stars and that this is not due to any kind of observational bias. TIC~257060897b is a new member of the class of inflated planets around moderately evolved stars and therefore naturally supports the idea of re-inflation.
In Figure~\ref{fig:InflatedJupiters}, we show that planets with inflated radii (R$>$1.45 R$\rm_J$, green dots) are preferentially found around the most evolved stars also in the sample we analyzed.
As discussed in \citet{hartman2016}, this is likely a consequence of the well-established correlation between equilibrium temperature and planetary radius, once one accounts for the fact that planets around evolved stars are generally more highly irradiated than planets around main-sequence stars (at the same orbital distance). TIC~257060897b appears to follow such a correlation (Fig.~\ref{fig:RadiusTeq}).
The important theoretical implication of re-inflation is that the incident energy must be deposited deep in the planetary interior to permit the rapid expansion of the atmosphere \citep{lopez2016}, and present theories of radius inflation are still not able to convincingly explain how this could happen, irrespective of the mechanism considered \citep[e.g.][]{thorngren2018}. Re-inflation is probably furnishing an important clue on where to look to further improve the theory. This is especially true if the sample of inflated planets around moderately evolved stars continues to grow in the future, which also demonstrates the importance of targeting subgiant and giant stars to discover new planets around them \citep{lopez2016}.
Considering the mass of the host, the likely fate of this planetary system is the engulfment of the planet in the stellar envelope \citep{villaver2009}, but the exact time of engulfment depends on several factors, among which the mass, the internal structure of the planet and its initial orbit around the parent star seem to play a crucial role \citep[e.g.][]{villaver2014}.
The inflated radius of TIC~257060897b and the possibility that systems like this one may be capable of surviving at least up to the base of the red-giant branch \citep{villaver2014} suggest that they may also be precursors of super-Jupiter planets at short orbital period around low-luminosity red-giant-branch stars, as recently discussed in the literature \citep{grunblatt2019}.
The low density and high equilibrium temperature of
TIC~257060897b make this object an attractive target in the context of exoplanet atmospheric studies.
The planet's equilibrium temperature ($T_{eq}$)
was calculated assuming zero albedo and full day-night heat redistribution according to
\begin{equation}
T_{eq} = T_{\ast}\sqrt{\frac{R_{\ast}}{a}}\Big(\frac{1}{4}\Big)^{1/4}
\end{equation}
\noindent
where $a$ is the orbital semi-major axis given in the same units as $R_{\ast}$ (the stellar radius) and $T_{\ast}$ is the host star effective temperature.
Assuming a molecular hydrogen (H$_2$)-dominated, cloud-free atmosphere, we can estimate the scale height of the planetary atmosphere,
$H=\frac{k_b\,T_{eq}}{\mu\,g}$, where $k_b$ is the Boltzmann constant, $T\rm_{eq}$ is the planet equilibrium temperature, $\mu$ is the mean molecular mass and $g$ is the surface gravity. We obtain
H = (985 $\pm$ 70) km. The amplitude of spectral features in transmission is $\sim 4pH/R\rm_{s}=(182\pm14)$~ppm \citep{kreidberg2018}, where $p$ is the radius ratio and R$\rm_{s}$ is the radius of the star.
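As a cross-check, the equilibrium temperature and scale height can be recomputed from the quoted system parameters. This is a sketch, not the paper's pipeline: the semi-major axis is rederived here from Kepler's third law, and the mean molecular mass is assumed to be $\mu = 2.3$ amu for an H$_2$-dominated atmosphere; small offsets from the published values reflect these rounded inputs.

```python
import math

# System parameters quoted in this paper, converted to SI units
G      = 6.674e-11          # m^3 kg^-1 s^-2
k_B    = 1.381e-23          # J/K
amu    = 1.6605e-27         # kg
T_star = 6128.0             # K
M_star = 1.32 * 1.989e30    # kg
R_star = 1.82 * 6.957e8     # m
P      = 3.66 * 86400.0     # s (orbital period)
M_p    = 0.67 * 1.898e27    # kg
R_p    = 1.49 * 7.149e7     # m

# Semi-major axis from Kepler's third law
a = (G * M_star * P**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)

# Equilibrium temperature: zero albedo, full day-night heat redistribution
T_eq = T_star * math.sqrt(R_star / a) * 0.25**0.25

# Scale height H = k_B T_eq / (mu g), with mu = 2.3 amu (assumed)
g = G * M_p / R_p**2
H = k_B * T_eq / (2.3 * amu * g)

print(f"a = {a/1.496e11:.3f} AU, T_eq = {T_eq:.0f} K, H = {H/1e3:.0f} km")
```

With these inputs the sketch gives $T_{eq}\sim1770$ K and $H$ within $\sim$15\% of the quoted 985 km; the residual difference is driven by the assumed $\mu$ and the rounding of the inputs.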
We also calculated the Transmission Spectroscopy Metric
\citep[TSM; ][]{kempton2018}, a parameter which quantifies the expected signal-to-noise ratio in transmission spectroscopy for a given planet and makes it possible to assess its suitability for future atmospheric characterization, in particular with the James Webb Space Telescope (JWST). The analytic expression for this parameter is:
\begin{equation}
\mbox{TSM} = S \times \frac{R_p^3T_{eq}}{M_pR_{\ast}^2} \times 10^{-m\rm_J/5}
\end{equation}
\noindent
where S is a normalization constant to match the more detailed work of
\citet{louie2018}, $R_p$ is the radius of the planet in units of Earth radii, $M_p$
is the mass of the planet in units of Earth masses, $R_{\ast}$ is the radius of the host
star in units of solar radii, $m\rm_J$ is the apparent magnitude of the host star in the J band and T$_{eq}$ is expressed in Kelvin. Following \citet{martin2021} we adopted the scale factor S = 1.15. For TIC 257060897b we found a value of
TSM = $98\pm11$. Jupiter and sub-Jupiter planets with TSM values greater than 90 are considered suitable for transmission spectroscopy observations with JWST, and TIC~257060897b belongs to the 70$\rm^{th}$ percentile of the cumulative distribution of TSM values of the planets
we considered in Figs.~\ref{fig:radius_mass}--\ref{fig:RadiusTeq}. Moreover, with an ecliptic latitude of 74.415$^{\circ}$, TIC~257060897b is near the northern JWST continuous viewing zone and will be observable for at least 197 continuous days per year \citep{gardner2006}.
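Since the host's J-band magnitude is not quoted in this section, one can instead invert the published TSM value to see which $m\rm_J$ it implies. This is a consistency check under an assumed $T_{eq}\approx1770$ K (from the zero-albedo formula with the quoted stellar parameters), not a measurement:

```python
import math

# Published values from this section; T_eq is an assumption (see lead-in)
S, TSM = 1.15, 98.0
R_p  = 1.49 * 11.209      # planet radius in Earth radii
M_p  = 0.67 * 317.83      # planet mass in Earth masses
R_s  = 1.82               # stellar radius in solar radii
T_eq = 1770.0             # K (assumed)

# TSM = S * R_p^3 * T_eq / (M_p * R_s^2) * 10^(-m_J / 5); solve for m_J
base = S * R_p**3 * T_eq / (M_p * R_s**2)
m_J  = -5.0 * math.log10(TSM / base)
print(f"implied m_J ~ {m_J:.1f}")
```

Plugging the actual J magnitude into the same expression recovers the TSM directly.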
Finally, we note that the host star of TIC~257060897b is also metal-rich. The Jupiter planet frequency--metallicity correlation \citep[e.g.][]{santos2004, fischer2005} predicts that Jupiter-like planets should be preferentially found around metal-rich stars. This fact is considered as evidence in favour of the core-accretion planet formation model \citep[e.g.][]{pollack1996}; TIC~257060897b may therefore have formed through this mechanism in the outer stellar disk and then migrated inward to the location where we now observe it.
\begin{figure}
\includegraphics[width=\columnwidth]{InflatedJupiters.png}
\caption{
Stellar density vs effective temperature diagram for the same sample of planets presented in Fig.~\ref{fig:radius_mass}.
The position of TIC~257060897b in this diagram is indicated by the red dot, while inflated Jupiter planets with radius $>$1.45 R$\rm_{J}$ are represented by the green dots. Magenta and blue crosses show a 100 Myr and a 13 Gyr solar metallicity isochrone from the Padova database.
}
\label{fig:InflatedJupiters}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{radius_teq.png}
\caption{
Planetary radius vs planetary equilibrium temperature diagram for the same sample of planets presented in Fig.~\ref{fig:radius_mass}. The position of TIC~257060897b in this diagram is indicated by the red dot.
}
\label{fig:RadiusTeq}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
We report the discovery of TIC~257060897b, an inflated, low-density hot Jupiter orbiting a rapidly evolving subgiant star and detected using {\it TESS} Full Frame Images. We performed photometric and spectroscopic ground-based follow-up observations which allow us to conclude that the host star is an intermediate-age ($\sim$3.5 Gyr), metal-rich, subgiant star with T$\rm_{eff}$=(6128$\pm$57) K, log~g=(4.2$\pm$0.1) and [Fe/H]=(0.20$\pm$0.04), implying M$_{\star}$=(1.32$\pm$0.04) M$_{\odot}$ and R$_{\star}$=(1.82$\pm$0.05) R$_{\odot}$.
The transiting body is a giant planet with mass m$\rm_p=$(0.67$\pm$0.03) M$\rm_{J}$ and radius r$\rm_p=$(1.49$\pm$0.04) R$\rm_{J}$, yielding a density $\rho_p$=(0.25$\pm$0.02) g cm$^{-3}$,
and orbiting its star every $\sim$3.66 days.
TIC~257060897b is one of the lowest-density hot Jupiters known so far. It is also an excellent target for
atmospheric characterization with the James Webb Space Telescope. We suggest that the inflated radius of this object may be related
to the fast increase of luminosity of its host star as it evolves off the main sequence,
and that systems like TIC~257060897b could be precursors of the inflated-radius short-period planets found around low-luminosity
red-giant-branch stars, as recently debated in the literature.
\section*{Acknowledgements}
Based on observations made with the Italian {\it Telescopio Nazionale Galileo} (TNG) operated by
the {\it Fundaci\'on Galileo Galilei} (FGG) of the {\it Istituto Nazionale di Astrofisica} (INAF) at
the {\it Observatorio del Roque de los Muchachos} (La Palma, Canary Islands, Spain) and
on observations collected at the Schmidt telescope (Asiago, Italy) of the INAF - Osservatorio
Astronomico di Padova. We thank the GAPS collaboration for the time sharing agreement and for handling the scheduling
and execution of the observations. MM is grateful to the TNG staff for the prompt support during the preparation
and execution of the observations. MM, GP, VN, VG, RC acknowledge support from PLATO ASI-INAF
agreements n.2015-019-R0-2015 and n. 2015-019-R.1-2018. This work made use of \texttt{tpfplotter} by
J. Lillo-Box (publicly available in \url{http://www.github.com/jlillo/tpfplotter}), which also
made use of the python packages \texttt{astropy}, \texttt{lightkurve}, \texttt{matplotlib} and \texttt{numpy}.
\section{Data Availability}
The light curves and spectroscopic data presented in this article are available online as electronic tables.
\bibliographystyle{mnras}
\typeout{}
\section{Introduction}
If Supersymmetry (SUSY) is realized at the TeV-scale or below it will
be probed at the Large Hadron Collider (LHC). Experimental
studies will be possible through the direct production of SUSY
particles. In particular colored particles will be copiously
produced, so squark and gluino production can play an important role
in the hunt for SUSY. In the following we will focus on the production
of a squark--anti-squark pair,
\begin{equation}
P~P \to \tilde{Q}^a~\tilde{Q}^{a*}\, X\quad (\tilde{Q} \neq \tilde{t}, \tilde{b})\, . \,
\label{Eq:Process}
\end{equation}
The lowest order
cross section for the process~(\ref{Eq:Process})
is of $\mathcal{O}(\alpha_s^2)$ and was computed in the early 1980's~\cite{Tree,Tree2,Tree3,Tree4}.
The dominant NLO corrections, of $\mathcal{O}(\alpha^3_s)$,
were calculated in Ref.~\cite{Beenakker1996}. They are positive and sizable, typically from 20\% to 30\%
of the lowest order prediction.\\
There are also $\mathcal{O}(\alpha_s \alpha)$ and $\mathcal{O}(\alpha^2)$
corrections to diagonal squark pair production from $q \bar{q}$ annihilation~\cite{Drees}.
Contributions of $\mathcal{O}(\alpha^2)$ are obtained by squaring the tree-level EW graphs,
while $\mathcal{O}(\alpha_s \alpha)$ corrections arise from the interference of the tree-level EW
diagrams with the tree-level QCD ones. The latter vanish for $\tilde{Q}=\tilde{t}$, but they
can become sizable if $\tilde{Q} \neq \tilde{t}$.\\
NLO electroweak (EW) contributions were found to be significant
in the case of top-squark pair production, with effects up to 20\%~\cite{StopEW, Beccaria:2007dt}. In the
case of the process~(\ref{Eq:Process})
NLO EW corrections can reach the same size as the tree-level EW contributions of $\mathcal{O}(\alpha_s\alpha)$
and $\mathcal{O}(\alpha^2)$~\cite{SquarkEW}.
\section{EW contributions}
Diagrams and corresponding amplitudes for the EW contributions to the process~(\ref{Eq:Process})
are generated using \verb|FeynArts|~\cite{FeynArts,FeynArts2} while the algebraic manipulations and the numerical evaluation of the loop integrals
are performed with the help of \verb|FormCalc| and \verb|LoopTools|~\cite{FormCalc,FormCalc2}.
IR and collinear singularities are regularized by means of mass regularization.
\subsection{Tree level EW contributions}
Tree-level EW contributions to the process~(\ref{Eq:Process}) are of $\mathcal{O}(\alpha_s \alpha)$
and $\mathcal{O}(\alpha^2)$. The interference of the tree-level electroweak and the tree-level QCD diagrams gives rise to terms of
order $\mathcal{O}(\alpha_s\alpha)$, while $\mathcal{O}(\alpha^2)$ terms are obtained by squaring the aforementioned
tree-level EW graphs.
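The order counting described above can be summarized by expanding the square of the total tree-level amplitude, with $\mathcal{M}_{\rm QCD}=\mathcal{O}(\alpha_s)$ and $\mathcal{M}_{\rm EW}=\mathcal{O}(\alpha)$:

```latex
\begin{equation}
|\mathcal{M}_{\rm QCD}+\mathcal{M}_{\rm EW}|^2
= \underbrace{|\mathcal{M}_{\rm QCD}|^2}_{\mathcal{O}(\alpha_s^2)}
+ \underbrace{2\,\mathrm{Re}\!\left(\mathcal{M}_{\rm QCD}\,\mathcal{M}_{\rm EW}^{*}\right)}_{\mathcal{O}(\alpha_s\alpha)}
+ \underbrace{|\mathcal{M}_{\rm EW}|^2}_{\mathcal{O}(\alpha^2)}
\end{equation}
```

The interference term is the one that vanishes for $\tilde{Q}=\tilde{t}$ but can become sizable otherwise.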
We consider also the photon-induced partonic process $\gamma g \to \tilde{Q}^a\tilde{Q}^{a *}$, which contributes
at $\mathcal{O}(\alpha_s \alpha)$, owing to the
non-zero photon density in the proton which stems from the inclusion of NLO QED effects into the DGLAP equations
for the parton distribution functions (PDFs).
\subsection{NLO EW contributions}
\begin{figure}
\includegraphics[height=.2\textheight]{PLOT/UPL_Ch}
\includegraphics[height=.2\textheight]{PLOT/UPL_Per}
\caption{Invariant mass distribution of $u^L u^{L*}$
production for the SUSY parameter point corresponding to SPS1a$'$. The left panel shows the contributions
of the different channels. $\delta$ is the EW contribution relative to
the LO one.}
\label{Fig01}
\end{figure}
\begin{figure}
\includegraphics[height=.2\textheight]{PLOT/UPR_Ch}
\includegraphics[height=.2\textheight]{PLOT/UPR_Per}
\caption{Same as Fig.~\ref{Fig01} but for $u^R u^{R*}$ production}
\label{Fig02}
\end{figure}
NLO EW corrections arise from three different channels: gluon fusion, quark--anti-quark annihilation,
and quark--gluon fusion.
\subsubsection{Virtual Corrections}
Virtual corrections originate from the interference of the tree-level diagrams with the one-loop EW graphs.
In the case of $q \bar{q}$ annihilation channels, the interference between tree-level EW diagrams and QCD one loop graphs
has to be considered as well. Particularly interesting is the $q \bar{q}$ annihilation channel when the quark belongs to the same
SU(2) doublet as the produced squark. In this case many types of interference occur between amplitudes of
$\mathcal{O}(\alpha_s\alpha)$ and $\mathcal{O}(\alpha_s)$ and between $\mathcal{O}(\alpha_s^2)$ and $\mathcal{O}(\alpha)$ amplitudes. This is
related
to the presence of EW tree-level diagrams with $t$-channel neutralino and chargino exchange and of QCD tree-level diagrams with $t$-channel
gluino exchange. \\
The on-shell scheme~\cite{RzehakHollik,DennerHab} has been used to
renormalize masses and wavefunctions of the squarks, of the quarks, and of the gluino.
The strong coupling $\alpha_s$ is renormalized in the
$\overline{\mbox{MS}}$ scheme. The contribution of the
massive particles (top, squarks, and gluino) to the running of $\alpha_s$
has been subtracted at zero momentum transfer.
Dimensional regularization spoils SUSY at higher order;
at one loop SUSY can be restored by adding a finite counterterm for the
renormalization of the squark-quark-gluino Yukawa coupling.
\subsubsection{Real Corrections}
IR- and collinear-finite results are obtained by including
the processes of real photon emission. In the case of $q \bar{q}$ annihilation,
real gluon emission of $\mathcal{O}(\alpha^2_s \alpha)$ has to be considered as well.
The treatment of IR and collinear divergences has been performed
using two different methods: phase space slicing and dipole subtraction~\cite{Dipole}. They give
results in good numerical agreement. \\
\noindent
IR singularities drop out in the sum of virtual and real corrections. Surviving collinear singularities have to be
factorized and absorbed into the definition
of the PDF of the quarks. \\
Other contributions of $\mathcal{O}(\alpha_s^2\alpha)$ arise from the
processes of real quark emission from quark--gluon fusion. These contributions exhibit divergences
when the outgoing quark is emitted collinearly to the gluon. Such singularities are extracted using the two
aforementioned methods and have been absorbed into the PDFs. In specific SUSY scenarios, the gauginos
appearing in intermediate states can become on-shell. The poles in the resonant propagators are regularized by
introducing the width of the corresponding particle.
\section{Numerical Results}
\begin{figure}
\includegraphics[height=.2\textheight]{PLOT/DOL_Ch}
\includegraphics[height=.2\textheight]{PLOT/DOL_Per}
\caption{Same as Fig.~\ref{Fig01} but for $d^L d^{L*}$ production}
\label{Fig03}
\end{figure}
\begin{figure}
\includegraphics[height=.2\textheight]{PLOT/CHL_Ch}
\includegraphics[height=.2\textheight]{PLOT/CHL_Per}
\caption{Same as Fig.~\ref{Fig01} but for $c^L c^{L*}$ production}
\label{Fig04}
\end{figure}
For illustration of the EW effects, we study the pair production of the
squarks $\tilde{u}^R$, $\tilde{u}^L$, $\tilde{d}^L$
and $\tilde{c}^L$, focusing on the SPS1a$'$ point of the MSSM parameter space,
suggested by the SPA convention~\cite{SPA}. A more comprehensive analysis
can be found in Ref.~\cite{SquarkEW}. \\
\noindent
Figs.~\ref{Fig01}--\ref{Fig04} contain the invariant mass distribution of the squark--anti-squark pair
for the different squark species. In the low invariant mass region
the EW corrections are positive; they decrease as $M_{\mbox{\tiny inv}}$
increases and become negative in the high invariant mass region. The contribution of the $g\gamma$ channel
is independent of the chirality of the produced squark and is determined only by its charge.
In the case of production of same-chirality and same-isospin squarks, {\it e.g.} $u^Lu^{L*}$
and $c^Lc^{L*}$, the corresponding contributions of $gg$ and $g \gamma$ channels are equal ({\it c.f.} Fig.~\ref{Fig01} and
Fig.~\ref{Fig04}), owing to the mass degeneracy of the produced squarks
\footnote{Such degeneracy arises because quarks belonging to the
first two generations are treated as massless.}.
Comparing Fig.~\ref{Fig01} and Fig.~\ref{Fig04}, one can understand the
key role of the $q \bar{q}$ annihilation channels when
the quark belongs to the same $SU(2)$ doublet as the produced squark. In the case of $u^Lu^{L*}$ production the contribution of these channels is
negative and sizeable
while in the case of $c^L c^{L*}$ production it is suppressed by the PDFs rendering the impact of the $q \bar q$ channels negligible.
\bibliographystyle{aipproc}
\section{Introduction}
Dark matter is thought to be an important component of the Universe
and research into its nature is actively pursued using a variety of
techniques. Dark matter may be weakly interacting massive particles
(WIMPs) which would tend to accumulate at the bottom of gravitational
potential wells, such as galaxies, where they could undergo
self-annihilation processes. Depending on WIMP mass and branching
ratios, a measurable flux of high energy gamma rays could result.
The Draco dwarf spheroidal galaxy has long garnered interest as a
potential source of concentrated dark matter~\cite{Tyler}. Draco has
one of the highest known mass-to-light ratios ($M/L$), perhaps as high
as $500 M_\odot/L_\odot$~\cite{Wu}. Current observations
are consistent with a cuspy density profile~\cite{Lokas},
which would enhance the gamma-ray production rate. Furthermore, since
Draco is a satellite of the Milky Way, its relative proximity to the
Earth ($d \sim 75$~kpc)~\cite{Bonanos} might allow for the detection of
such gamma rays.
\section{STACEE Observations of Draco}
\begin{table*}[ht]
\begin{center}
\begin{tabular}{|l|r|r|r|r|} \hline
~ & ON events & OFF events & Excess & Significance \\ \hline
After Time Cuts & 177498 & 177273 & 225 & $+0.39\sigma$ \\
+ grid ratio Cut & 3094 & 3120 & -26 & $-0.33\sigma$ \\ \hline
\end{tabular}
\caption{Data summary of STACEE observations of Draco during the 2005-2006
observing season, representing approximately $3.67\times10^{4}~s$
of livetime.
\label{datasum}}
\end{center}
\end{table*}
\begin{figure}[bt]
\begin{center}
\includegraphics[width=0.45\textwidth]{earea}
\caption{Effective area curves for STACEE observations of Draco.
The blue (solid) line represents the STACEE effective area without cuts, the red (dashed)
line represents the STACEE effective area including the grid-ratio cut.
\label{effarea}}
\end{center}
\end{figure}
The Solar Tower Atmospheric Cherenkov Effect Experiment (STACEE) is a
gamma-ray telescope operating at the National Solar Thermal Test
Facility (NSTTF) in Albuquerque, NM. STACEE is a wavefront-sampling
Cherenkov telescope which uses 64 of the mirrors in the NSTTF
heliostat array for a total of $\sim 2400~m^2$ of collecting surface.
Cherenkov light from gamma-induced air showers is reflected off the
heliostats onto secondary mirrors on a tower on the south side of the
field. These secondaries focus the light onto photomultiplier tubes
(PMTs) in such a way that each PMT sees the light from a single
heliostat. Pulses from the PMTs are split, with one copy
discriminated and used in the formation of a trigger and the other
digitized using a 1 GS/s FADC. The trigger selects showers that
deposit light evenly over the heliostat field, which favors those
showers initiated by gamma rays over those resulting from charged
cosmic rays, the most important background for the STACEE experiment.
For a more complete description of the STACEE experiment, see
\cite{Gingrich}.
The basic unit of observation for STACEE is the ``ON-OFF'' pair; 28 minutes
on-source and 28 minutes off-source. Both observations view the same path
across the sky in local coordinates (altitude and azimuth), but separated by
30 minutes in celestial coordinates (right ascension). The
off-source observation allows for a measurement of the local
background conditions. We compute the significance of an observation as
\begin{equation}
\sigma = \frac{ON-OFF}{\sqrt{ON+OFF}}
\approx \frac{S}{\sqrt{2B}} \label{sig}
\end{equation}
where $S$ is the signal and $B$ is the background.
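Eq.~(\ref{sig}) is simple enough to check directly against the counts in Table~\ref{datasum} (a quick sketch, not part of the STACEE analysis chain):

```python
import math

def significance(n_on, n_off):
    # Eq. (1): excess over the Poisson fluctuation of the summed counts
    return (n_on - n_off) / math.sqrt(n_on + n_off)

# Counts from Table 1
sig_time_cuts = significance(177498, 177273)  # after time cuts
sig_grid_cut  = significance(3094, 3120)      # + grid-ratio cut
print(f"{sig_time_cuts:+.2f} sigma, {sig_grid_cut:+.2f} sigma")
```

This reproduces the $-0.33\sigma$ of the table directly; the first row comes out at $+0.38\sigma$, consistent with the quoted $+0.39\sigma$ to within rounding of the underlying counts.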
STACEE observations of Draco total 35 ``ON-OFF'' pairs, of which approximately 10 hours
of livetime remain after excluding periods with bad weather and known
technical difficulties. Our data set is summarized in Table \ref{datasum}.
\section{Data Analysis}
Our raw background trigger rate from cosmic rays is approximately 5
Hz. In order to reduce this, we perform a grid-ratio cut which
preferentially removes hadron-induced showers. This technique has
been used successfully by the CELESTE experiment~\cite{Brion} and our
implementation is described in more detail in~\cite{Kildea}. A
simplistic description of the technique is that the ``smoothness'' of
a shower is measured by the height-to-width ratio ($H/W$) of the sum
of pulses from all 64 channels in the detector. This quantity depends
on the relative timing of each FADC trace, which depends on the
assumed impact point of the shower core (i.e., the extrapolated shower
axis). The grid-ratio cut is based on how sharply peaked the $H/W$
distribution as a function of assumed core position is. Gamma-ray
showers, smooth and symmetric, are expected to produce narrower $H/W$
distributions than hadronic showers, which result in broader, clumpier
deposits of Cherenkov light. Applied to our 2002-2004 Crab data, the
grid-ratio cut improves the detection significance from $4.8\sigma$ to
$8.1\sigma$\cite{Lindner}.
As seen in Table \ref{datasum}, we do not detect an excess gamma-ray signal
from Draco in our data set. We derive an upper limit for the
flux from Draco given a measure of our detector response to a candidate
source spectrum. We discuss two possible source spectra, an $E^{-2.2}$
power law (suggested by the gamma-ray flux from the galactic
center\cite{Hooper}) and a candidate dark matter spectrum taken from
Tyler\cite{Tyler}, with an energy-dependent shape given by
\begin{equation}
\phi(x<1) = \frac{e^{-8x}}{x(x^{1.5} + 0.00014)},
\label{tspec}
\end{equation}
where $ x = E / m_\chi c^2 $. This gives a sharp cutoff at the energy
corresponding to the candidate WIMP mass.
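Eq.~(\ref{tspec}) can be evaluated directly to see the behavior it encodes: a steep fall from the low-energy divergence toward the kinematic cutoff at $x=1$. This is a minimal sketch of the spectral shape only; the overall normalization is fixed elsewhere in the analysis.

```python
import math

def tyler_phi(x):
    """Dimensionless spectral shape of Eq. (2), valid for 0 < x < 1
    with x = E / (m_chi c^2); zero above the WIMP-mass cutoff."""
    if not 0.0 < x < 1.0:
        return 0.0
    return math.exp(-8.0 * x) / (x * (x**1.5 + 0.00014))

# The shape drops by many orders of magnitude between x = 0.01 and x = 0.9
vals = [tyler_phi(x) for x in (0.01, 0.1, 0.5, 0.9)]
print([f"{v:.3g}" for v in vals])
```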
\begin{figure*}[tb]
\begin{center}
\includegraphics[width=0.45\textwidth]{resp}
\includegraphics[width=0.45\textwidth]{resptyler}
\caption{Response Functions for STACEE observations of Draco
given two candidate spectra. The left figure
corresponds to an $E^{-2.2}$ spectrum, and the
right figure corresponds to a Tyler spectrum
(Eq.~\ref{tspec}) for an example
$300~GeV/c^2$ WIMP. The blue (solid) curves represent the
STACEE sensitivity without cuts, the red (dashed) curves
include the grid-ratio cut.
\label{detresp}}
\end{center}
\end{figure*}
{\bf Power Law Spectrum:}
Figure~\ref{effarea} shows effective area curves for STACEE
observations of Draco. Gamma-ray showers were simulated using the
Monte Carlo chain of the CORSIKA air shower simulation together with
our own optical ray-tracing model for the heliostats, secondaries, and
PMTs, and a simulation of the electronics~\cite{Lindner, Fortin}. The
effective area is given by the product of the probability that a
shower triggers our detector and the area over which the simulated
showers were generated.
STACEE has an energy-dependent response, which means that the sensitivity to a
given source depends on its energy spectrum.
Figure~\ref{detresp} shows the result of convolving the effective area
curves with candidate spectra. As is customary, we define the
energy thresholds of our measurements as the peak of these curves.
The flux limit is defined by
\begin{equation}
N_{UL} = T \int^\infty_0 A(E)\,\Phi(E)\, dE
\end{equation}
where $N_{UL}$ is given by the 95\% upper limit of the excess
$N_{ON}-N_{OFF}$, $T$ is the livetime, and $A(E)$ is the effective
area. The differential flux, $\Phi(E) = C \phi(E)$, contains a
normalization constant with units of $[cm^{-2}~s^{-1}~GeV^{-1}]$.
For the data given in Table \ref{datasum}, including
the grid-ratio cut, $N_{UL} = 138$ and the resulting upper limit is
\mbox{\bf $\Phi(E)<4\times10^{-8}~\left(\frac{E}{GeV}\right)^{-2.2}$}
\mbox{\bf $\gamma~s^{-1}~cm^{-2}~GeV^{-1}$}
at a characteristic energy of 220 GeV.
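The inversion that yields the normalization can be sketched numerically. The effective-area table below is a hypothetical placeholder standing in for the simulated curve (its values are not tabulated in the text), so the resulting $C$ is illustrative only; the published limit is $4\times10^{-8}$.

```python
N_UL = 138.0   # 95% CL upper limit on the excess counts (grid-ratio cut applied)
T = 3.67e4     # livetime in seconds

# Hypothetical effective-area samples (E in GeV, A in cm^2) -- placeholder
# values only, NOT the real simulated detector response.
area = [(100.0, 1e7), (200.0, 5e7), (400.0, 8e7), (800.0, 6e7), (1600.0, 3e7)]

def integrand(E, A):
    # A(E) * phi(E) for the assumed E^-2.2 power-law shape (E in GeV)
    return A * E**-2.2

# Trapezoidal estimate of the integral in the flux-limit equation
I = sum(0.5 * (integrand(*area[i]) + integrand(*area[i + 1])) * (area[i + 1][0] - area[i][0])
        for i in range(len(area) - 1))

C = N_UL / (T * I)  # normalization, gamma s^-1 cm^-2 GeV^-1
print(f"C ~ {C:.1e}")
```

Substituting the true simulated $A(E)$ in place of the placeholder table is what produces the published normalization.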
Figure \ref{flim} shows a comparison of this limit with the published
upper limit of the Whipple collaboration\cite{Hall} and our own Crab spectrum.
{\bf Tyler Spectrum:}
Tyler provides a prescription for converting a flux limit into a
cross-section limit (\mbox{$<\sigma v>_{\chi\bar\chi}$}) assuming a spherical
isothermal halo model where the mass density is given by $\rho_{halo} = Ar^{-2}$. We avoid a divergence at the
center by including a constant-density core physically motivated by an
equilibrium between infalling particles and annihilation:
\begin{equation}
R_{min} = R_{ext} <\sigma v>^{1/2}_{\chi\bar\chi}
\left[ \frac{\rho_{halo}}{4 \pi G m_\chi^2} \right]^{1/4}
\end{equation}
where $R_{ext}$ is the outer radius of Draco
and $<\sigma v>_{\chi\bar\chi}$ is the expectation value of the
self-annihilation rate, given by the product of the cross-section and
the velocity of the WIMPs in the halo.
We then substitute this into our flux equation
\begin{equation}
\Phi_\gamma(E) = \frac{4A^2}{3d^2R_{min}} <\sigma v>_{\chi\bar\chi}
\phi_\gamma(E)
\end{equation}
and solve for the self-annihilation rate. The resulting limits are shown in Figure
\ref{dmlim}.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.45\textwidth]{flim}
\caption{STACEE Flux limits for a $E^{-2.2}$ spectrum
as applied to Draco (blue). For comparison, also shown is the energy spectrum of the Crab Nebula (green) as measured by STACEE during 2003-2005 which is well fit by the form: $\frac{dN}{dE}=1.2\times10^{-7}~E^{-2.23}$
\label{flim}}
\end{center}
\end{figure}
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.45\textwidth]{dmlim}
\caption{Upper limits on the WIMP self-annihilation rate
(cross-section multiplied by halo WIMP velocity) for the Tyler
spectrum as a function of $m_\chi$ as applied to the STACEE Draco
observations.
\label{dmlim}}
\end{center}
\end{figure}
\section{Conclusions}
STACEE does not detect a significant signal from Draco, and sets upper
limits on the self-annihilation cross-section for WIMPs with rest-mass energy greater
than about 150 GeV.
{\bf Acknowledgments:}
Many thanks go to the staff of the National Solar Tower Test Facility, who
have made this work possible.
This work was funded in part by the U.S. National Science Foundation, the
Natural Sciences and Engineering Research Council of Canada, Fonds Quebecois
de la Recherche sur la Nature et les Technologies, the Research
Corporation, and the University of California at Los Angeles.
\bibliographystyle{unsrt}
\section{Introduction}
The article by Bell\cite{bell1964} in which he proposed his now-famous
inequalities, whose content we will see below, is perhaps still today
the most frequently cited physics article. The article by Einstein, Podolsky
and Rosen\cite{EPR}, against which these inequalities are said to be pitted, is not far
behind, but it is Bell's article, written 30 years later, that
restored its notoriety. However, if Bell brought the EPR article back to life,
it was only to immediately restrict its scope and even destroy
its meaning.
\begin{wrapfigure}[16]{r}[34pt]{12 cm}
\centering
\includegraphics[ scale = 0.7]{bellfig1.ps}
\end{wrapfigure}
It is thus reported that Bohr and Einstein disagreed on the
interpretation of quantum mechanics but that Bell, with his
inequalities, made it possible to settle the matter, and that experiment proved
{\sl definitively} Bohr right once the expected violation
of these inequalities had been confirmed, starting in 1971 and more strongly in 1982
with the experiments of Alain Aspect\cite{Aspe82}.

Note that if Bell's article holds a record for popularity, one
could also say that it holds a record for reach over time:
written in 1964 about an article from 1935, it led to
experiments in the 1970s and 1980s, and it is still frequently cited
today, in 2014. A singular history in the scientific
world.

Going back closely over the original texts, we will try to
retrace the chain of arguments and examine whether they really fit
within this story, whose timeline, spread over half a
century, is presented in Figure 1 above, where the important
contributions to the subject are indicated at the top and, at the bottom, the
deaths of the contributors.
\section{EPR and Bell: a wholly cautious beginning}
Let us begin with the title of Bell's article\footnote{Bell's
text is printed in blue to ease reading}:
\begin{quotation}
{\color{blue}\sl On the Einstein Podolsky Rosen paradox}.
\end{quotation}
Note the caution of this title, which in no way announces the
confrontation to follow. It merely mentions the subject, the article by
Einstein, Podolsky and Rosen, and qualifies it as a paradox. Why
this caution and why this qualifier\footnote{According to the TLF dictionary:
``A statement surprising in its substance and/or its form, which contradicts received
ideas, common opinion, prejudices''}? And let us set Bell's
title against that of Einstein, Podolsky and Rosen\footnote{the
texts taken from EPR or from Einstein are printed in blue and
underlined}:
\begin{quotation}
{\color{blue}\sl\uline{ Can quantum-mechanical description of physical reality be
considered complete?}}
\end{quotation}
Great caution there too, then: a simple question. And perhaps
this caution in the title chosen by EPR (we shall have to understand
why) explains Bell's own... in his title!
As for the paradox, a question cannot constitute a paradox;
at most, the answer can.
And indeed, the answer is there in EPR, right at the end of the abstract:
\begin{quotation}
{ \color{blue}\sl\uline{ One is thus led to conclude that the description of reality as
given by a wave function is not complete}}.
\end{quotation}
Here indeed is something to ``contradict common opinion'' and
``prejudices''; one may well speak of a paradox, even if the word does not
appear in EPR, with the nuance that it is not a mere assertion
but a genuine demonstration. It remains to see precisely of what.

Even if the answer in the abstract is more precise than the question in the title
(it is the exhaustive character of the wave function that is called into
question), one must ask what led Einstein and
his colleagues to choose the interrogative mode for the title of
the article when both the text and the abstract defend an answer so
clearly negative. And why, also, make it bear on the most general
notion of the {\sl description of physical reality}? We shall
return to this.
\section{A closer look at EPR}
Before continuing with Bell, it becomes necessary to analyze the EPR
article in more detail, and to distinguish what is demonstration from what
is commentary. A demonstration is eternal; commentary
depends on the era, the context, and the questions being asked at the time.

The article first lays down the two conditions that, for the authors (and for
us), found the success of a physical theory. {\bf 1) Is the theory
correct, i.e., are its predictions in agreement with the reality of observations? 2)
Is it complete, i.e., does every element of reality find a
counterpart in the theory?}

EPR write explicitly that it is the second question that is
addressed in their article. Although it is not stated, one may think
that the examination of question 2) presupposes a positive answer to
question 1). That was in any case Einstein's opinion, expressed
many times well before 1935 as well as well after (see below).

The article then offers a long exposition to associate, in general,
for an isolated physical system, the reality of a quantity for that
system with the predictability of that quantity. Why this
insistence? Because quantum mechanics, to which EPR rigorously
adhere, does not claim to uncover the reality of things
but only to predict the result of a measurement or the probability
distribution of its results. To a quantity whose measurement result
is predictable, EPR therefore associate a physical
reality. Quantum mechanics does not object to this.

Let us stress that EPR, unlike their contemporaries, do speak
of an ``objective reality'', strongly contested at the time (and still
today!), yet they faithfully take up and use the
precepts of quantum mechanics. Those of their era, the same
as today's:
\begin{itemize}
\item
The concept of a state, characterized by a wave function that provides
all the available information about the state.
\item
The concept of an observable and its corresponding operator.
\item
The special feature of an eigenstate, which alone permits predicting with
certainty the value of the corresponding observable.
\item
The process of ``measurement'' (for EPR it is clearly a process,
with no temptation to appeal to the observer's consciousness!) and the
change of state (reduction of the wave packet) to which it generally leads
(see Appendix A) through an uncontrollable interaction with the measuring apparatus.
\item
Finally, the impossibility for a state to be simultaneously an eigenstate of
two operators that do not commute, position and momentum for
instance; it is this pair that will be used below: an eigenstate
of $X$ makes it possible to know $x$ with certainty but says nothing,
beyond a probability distribution, about the conjugate variable $P_x$.
\end{itemize}
Tout cet arsenal est mis en route par EPR, mais il est utilisé dans le
cas particulier où le système étudié est composé de deux éléments. Là
est l'originalité du cas examiné et la surprise du résultat
démontré. Disons tout de suite que la réponse démontrée est une
alternative et dont les deux termes sont embarassants, là est la
subtilité. Embarassants pour EPR et embarassants pour (presque!) tout
le monde. On verra tout de même que l'un plus que l'autre. Et c'est de
cet ambarras général que peut résulter finalement la conclusion que la
mécanique quantique n'est pas complète. Le titre d'EPR pose au moins
la question. La démonstration va aller plus loin.
\section{The heart of the demonstration: the EPR alternative}
The demonstration is of course to be found in the original article,
but we can give here the thread of its course (figure 2).
Systems I and II, initially separated, are sent into an interaction
zone where a particular composite wave function {\Large
$\Psi_{I,II}$} is produced, after which systems I and II separate
again. Let us stress that systems I and II separate, but the wave
function remains one as long as no measurement is performed on either
system. It no longer is afterwards, as we are about to see. System I
is directed, at will, towards one of two measuring devices: MSP to
measure $P_I$, or MSQ to measure $Q_I$, P and Q being conjugate
quantities, here momentum and position. The wave function {\Large
$\Psi_{I,II}$} undergoes a ``reduction of the wave packet'' (wave
collapse in EPR) as soon as the apparatus MSP gives a result
$p_I$. But {\Large $\Psi_{I,II}$} is so prepared that II is, as a
consequence, itself in a P state ($p_{II}$, which depends on the value
$p_I$ found on I). Likewise, if it is the apparatus MSQ that is
presented to I, then as soon as MSQ gives a result $q_I$, II is
consequently in a Q state ($q_{II}$, which depends on the value $q_I$
found on I).
\includegraphics[angle=-90,scale=0.5]{bellfig2.ps}
EPR thus demonstrate that the state of II depends on a measurement
performed on I, with which it does not interact. A contradiction in
terms, perfectly demonstrated within the framework of QM. How can one
escape it? Either II can be at once in a P state and in a Q state,
which contradicts QM completely: it would then not be incomplete, it
would be inconsistent. Or QM introduces ``actions at a distance'',
which no longer allow one to assert, as EPR assume:
\begin{quotation}
{ \color{blue} \sl\uline{ {\ldots} we have two systems I and II , which we permit to
interact from the time t=0 to t=T, {\bf after which time we suppose that
there is no longer any interaction between the two parts.}}}
\end{quotation}
It is this demonstration in the form of an alternative that is eternal
{\ldots} within the framework of quantum mechanics, in 1935 as
today. Of course, the choices within the alternative, or the
commentary, may differ between yesterday and today!
\section{Quite simply a dilemma.}
For EPR in 1935, clearly nothing is acceptable: neither that p and q
could both be determined, nor that actions at a distance should
thereby be demonstrated. Rejecting this last possibility, however,
they assert only:
\begin{quotation}
{ \color{blue} \sl \uline{ We are thus forced to conclude that the quantum-mechanical
description of physical reality given by wave-functions is not
complete.}}
\end{quotation}
They are careful, on the other hand, not to deduce that P and Q could
be determined at the same time: they have too much confidence in what
QM predicts. Here is what Einstein would say about it later (in 1949,
it is true, well after 1935, but 15 years before 1964!):
\begin{quotation}
{ \color{blue}\sl\uline{ ``This theory is until now the only one which
unites the corpuscular and undulatory dual character of matter in a
logically satisfactory fashion; and the (testable) relations which it
contains are, within the limits fixed by the uncertainty relation,
{\sl complete}. The formal relations which are given in this theory,
i.e. its entire mathematical formalism, will probably have to be
contained, in the form of logical inferences, in every useful future
theory''{\normalfont\cite{eins1949} Einstein's reply, page
666-667. } }}
\end{quotation}
That is perhaps why, wishing to keep some distance from the formal
content of his demonstration, he retains in the title only a general
questioning, an interrogation even, of the completeness of the quantum
description of the physical world.
Remark: if QM does introduce actions at a distance, it does so
surreptitiously, without saying anything about them, without defining
the nature of these interactions, their range, finite or infinite,
their propagation speed, finite or infinite, or their articulation
with the existing framework of relativity. Then, indeed, QM is
incomplete.
\section{What Einstein wants and does not want, and Bell's reading
of it}
We can now return to Bell's article, beginning with the introduction.
\begin{quotation}
{\color{blue}\sl ``The paradox of Einstein, Podolsky, and Rosen was
advanced as an argument that quantum mechanics could not be a complete
theory but should be supplemented by additional variables. These
additional variables were to restore to the theory causality and
locality''}
\end{quotation}
As we have seen: quantum mechanics incomplete, yes. But one would
search the EPR text in vain for the word, for the very idea, of
additional variables, of hidden parameters. How can Bell write this?
Here is what Einstein would say a little later:
\begin{quote}
{ \color{blue}\sl\uline{ ``I do not believe that one can arrive at a
description of individual systems simply by completing the present
quantum theory. The superposition principle and the statistical
interpretation are inseparably bound together. If one thinks that the
statistical interpretation must be superseded, one cannot keep the
Schrödinger equation, whose linearity implies the superposition of
states''}} Letter to Kupperman, November 1953
(\cite{bali1989} page 233).
\end{quote}
That was in 1953, 18 years after EPR but 11 years before Bell!
Let us come back to the terms causality and locality. Einstein is
clearly attached to what may be called causality (he does not himself
use the term). His {\color{blue}\sl\uline{``God does not play
dice''}}
has often been recalled, and Einstein always affirmed the objective of
treating {\color{blue}\sl\uline{``individual cases''}}:
\begin{quotation}
{\color{blue}\sl\uline{ ``I am, in fact, and contrary to almost all
contemporary theoretical physicists, firmly convinced that the
essentially statistical character of contemporary quantum theory is
solely to be ascribed to the fact that this theory operates with an
incomplete description of physical
systems''}{\em\cite{eins1949} Einstein's reply, page 666. }}
\end{quotation}
But none of this appears in the EPR text, which is focused on
something else entirely.
As for the term locality, likewise absent from the EPR text, it is
invoked here by Bell in order, in fact, to introduce its opposite,
``non-locality'': a way, with a single word, of acknowledging the EPR
demonstration without accepting its conclusion, namely that quantum
mechanics is incomplete. What, indeed, is the scientific foundation of
this ``non-locality''?
It is a term still found today, and still just as poorly founded
scientifically, even if it refers to a reality now beyond dispute
(just as indisputable as the dormitive virtue of opium, dear to
Molière) of, or in, quantum mechanics. A reality brought to light and
demonstrated in 1935 by EPR!
Bell continues:
\begin{quotation}
{\color{blue}\sl ``In this note that idea will be formulated
mathematically and shown to be incompatible with the statistical
predictions of quantum mechanics''}
\end{quotation}
How can one mathematically formulate an idea that is not present (in
EPR)? Bell attempts it in the sentence that follows:
\begin{quotation}
{\color{blue}\sl ``It is the requirement of locality, or more
precisely that the result of a measurement on one system be unaffected
by operations on a distant system with which it has interacted in the
past, that creates the essential difficulty''}
\end{quotation}
But expressed in this way, is this ``essential difficulty'' not
everyone's?
Here we do find again the heart of the EPR article, but once more:
neither paradox nor difficulty, rather the demonstration of an action
at a distance, non-locality if one likes, but one that no
supplementary parameter can either make disappear or explain.
What EPR demonstrate is that the state of II (a P state or a Q state)
is modified by the triggering of an apparatus on I and by the
reduction of the wave packet associated with it; no measurement on II
is needed, and such a measurement, whatever it is, will statistically
give what knowledge of the state of II predicts.
\section{The need for a new participant: David Bohm}
How are we to understand these discrepancies (to say the least)
between Bell and the EPR with which he claims to engage? The reason is
that in the meantime (1951-1957) there appeared, with David Bohm, a
quantum theory equipped with supplementary parameters which make it
directly deterministic, but at the price of actions at a distance, of
instantaneous changes quite unknown elsewhere\cite{Brog52}. This is
what Bell writes next:
\begin{quotation}
{ \color{blue}\sl ``Moreover, a hidden variable interpretation of
elementary quantum theory has been explicitly constructed. That
particular interpretation has indeed a grossly non-local structure.''}
\end{quotation}
Yes, this may well reproduce the EPR effects; let us recall, however,
that supplementary parameters, local or not, are not what would
satisfy EPR, or in any case Einstein.
Bell ends his introduction:
\begin{quotation}
{\color{blue}\sl``It is characteristic, according to the result to be
proved here, of any such theory which reproduces exactly the quantum
mechanical predictions''}
\end{quotation}
and Bell returns to Bohm, this time with Aharonov, for an
article\cite{bohm1957}
directly related to EPR:
\begin{quotation}
{\color{blue}\sl``With the example advocated by Bohm and Aharonov, the
EPR argument is the following''}
\end{quotation}
But what exactly is the relation of Bohm and Aharonov to EPR? All of
EPR? Nothing but EPR? The answer is not simple, not as simple as Bohm
himself claims:
\begin{quotation}
{\sl ``{\ldots} EPR have given an example of a hypothetical experiment
capable of testing certain apparently paradoxical predictions of the
current quantum theory. In order to illustrate this experiment we
shall consider a special example which permits us to present the
arguments of EPR in a simplified form. ''}
\end{quotation}
For, on the one hand, EPR propose no experiment to be tested, and on
the other hand, as we shall see, one cannot say either that the
experiment proposed by Bohm is a simplified version of the EPR thought
experiment.
But let us take up Bohm's contribution in more detail, since he is the
one summoned, and begin with the title:
\begin{quotation}
{\sl Discussion of Experimental Proof for the Paradox of Einstein,
Rosen, and Podolsky }
\end{quotation}
This title is unexpected, since it focuses attention on an experiment
reported in the third part of the article, one that will demonstrate
that the EPR paradox (actions at a distance) is quite real. But this
experiment is carried out with polarised photons, not with spin-1/2
particles as proposed in parts 1 and 2 of the article. It is to these
two parts that Bell refers, not at all to the experimental part, the
one that justifies the title (and the one, too, that in a certain way
might seem to close the debate on the existence of the ``paradox''!).
We too shall concentrate on these two parts.
\section{Bohm and EPR: what do they share, how do they differ?}
A molecule of total spin 0 is composed of two atoms A and B of spin 1/2.
The wave function of the system is then:
\[ \psi = \frac{1}{\sqrt
2}[\psi_{+}(1)\psi_{-}(2) - \psi_{-}(1)\psi_{+}(2) ] \]
The two atoms are separated by an operation which conserves spin.
The spin of one atom, A, is measured along an arbitrary direction, and
the + or - answer of this measurement allows one to deduce the answer
that would be found on B if it were measured along the same
direction. The state of B is modified by the measurement on A. Here
there is indeed an equivalence with EPR, and here too the same
reasoning can be unrolled: if another direction is chosen for the
measurement on A, then B too is projected into a state polarised along
another direction, whereas QM forbids an atom (B) to be in a spin
eigenstate along two directions, just as with EPR it forbids being
simultaneously in an eigenstate of P and of Q.
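Bohm's spin version can be checked numerically. The following sketch (an illustration added here, not part of Bohm's article; it assumes only numpy) builds the singlet state written above and verifies that the expectation value of the product of spin components along directions $\vec a$ and $\vec b$ is $-\vec a\cdot\vec b$, hence perfect anticorrelation whenever the same direction is measured on both atoms:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def sigma_dot(n):
    """Spin component along the unit vector n."""
    return n[0] * sx + n[1] * sy + n[2] * sz

# Singlet state (1/sqrt 2)(|+-> - |-+>)
up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)
psi = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

def correlation(a, b):
    """Quantum expectation <(sigma_1.a)(sigma_2.b)> in the singlet state."""
    op = np.kron(sigma_dot(a), sigma_dot(b))
    return float(np.real(psi.conj() @ op @ psi))

a = np.array([0.0, 0.0, 1.0])
theta = 0.7
b = np.array([np.sin(theta), 0.0, np.cos(theta)])

print(correlation(a, a))   # -1: perfect anticorrelation along the same axis
print(correlation(a, b))   # equals -cos(theta), i.e. -a.b
```

The result is direction-independent, which is exactly what lets the reasoning above be repeated for any choice of measurement axis on A.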
Bohm says he notes a difficulty specific to his proposal (one may
think, however, that it carries over to EPR). QM attributes the spin
fluctuations of A along directions other than the one measured to the
uncontrollable interaction with the measuring apparatus acting on A.
But this interaction would now also have to produce the same
fluctuations on B (``with which it does not interact''). We are back
to actions at a distance.
There is nevertheless an important difference with EPR: the
possibility of envisaging a way out of the paradox through the
hypothesis of an ad hoc mechanism, invented for the occasion. Let us
examine what Bohm proposes, whose objective is for him perfectly
clear:
\begin{quotation}
{\sl {\ldots} There exists at present no experimental proof that the
paradoxical behavior described by ERP (sic) will really occur.}
\end{quotation}
He then attributes the idea behind the following proposal to Einstein
himself, in a private communication (Einstein had died two years
earlier):
\begin{quotation}{\sl namely, that the current formulation of the many-body problem in
quantum mechanics may break down when particles are far enough apart.}
\end{quotation}
Bohm takes up this idea with the example he uses:
\begin{quotation}
{\sl {\ldots} , we may consider {\ldots} that after the molecule of
spin zero decomposes, {\ldots} we suppose that in any individual case,
the spin of each atom becomes definite in {\normalfont some}
direction, while that of the other atom is opposite. The wave function
will be the product :}
\[ \psi = \psi_{+\theta,\phi}(1)\psi_{-\theta,\phi}(2) \]
{\sl where $\psi_{+\theta,\phi}(1)$ is a wave function of particle A
whose spin is positive in the direction given by $\theta $ and $\phi
$}
\end{quotation}
Bohm then points out that under this hypothesis there is no longer
conservation of total spin in any individual case, but only on
average!
\begin{quotation}
{\sl {\ldots} , but the model described above has the advantage of
avoiding the paradox of ERP}
\end{quotation}
There is no equivalent for EPR, at least no equivalent that does not
call the whole edifice of QM into question. At best, for EPR, systems
I and II would indeed have to separate carrying with them the double
information $x$ and $p_x$, which QM radically forbids.
Let us insist on the essential role played in EPR by the reduction of
the wave packet. At work on I at the moment of the measurement on I of
$x$ or of $p_x$ (in conformity with Bohr's complementarity principle),
it operates on II as well; there lies the surprise: the discovery, one
might say, of a complementarity at a distance! I and II are separated,
but the wave function remains one. There lies the novelty, there lies
the problem. One then sees clearly how Bohm, in part 2 of his article,
tries to provide a solution to this new situation: the systems
separate and so does the wave function; it does not remain one. An
interesting proposal, but completely outside quantum mechanics. It is
an original invention, the introduction of a self-measurement of
sorts. One may note, however, against this invention, that measurement
in quantum mechanics results from the interaction of a micro-object
with a macroscopic element (a measuring apparatus or not) that is
completely absent during this separation of the two atoms.
So, is Bohm a simplified version of EPR?
If we take up the three parts of his article:
1) The thought experiment with the molecule of total spin zero. A
genuine equivalence with EPR: all of QM and nothing but QM.
2) The spontaneous symmetry breaking and the corresponding
introduction of a supplementary parameter, as an attempt to bypass the
conclusions of EPR at the price of inventing a mechanism outside QM.
3) An experiment with polarised photons which confirms the existence
of the ``EPR paradox'' but which Bell will ignore completely.
With 1), as with EPR, there is no possibility of introducing a
supplementary parameter.
With 2), it enters naturally, but at the cost of stepping radically
outside QM.
We shall see that Bell puts to the test yet another introduction of
supplementary parameters, but one likewise completely outside QM. Bell
does not appeal to an explicit mechanism as Bohm does. His conclusions
are more general, it is true, but they make no reference to physics
(and not only quantum physics).
\section{Bell in detail}
But are these considerations, established from the texts, reflected in
the calculations Bell uses to establish his inequalities and in the
model underlying that calculation (the model which introduces the
supplementary parameters)? Obviously yes, as we shall see.
The model first.
Bell grounds it\footnote{Note that Bell is not unaware of what
Einstein defended well after 1935, here in 1949, in a document used
several times in this article.} in a declaration made by Einstein in
1949 (\cite{eins1949} page 85):
\begin{quotation}{\color{blue}\sl\uline{But on one supposition, absolutely hold fast :
the real factual situation of the system $S_2$ is independent of what
is done with the system $S_1$, which is spatially separated from the
former }}
\end{quotation}
Here we do find the assertion that speaking of two {\bf separated}
systems has a meaning. For Einstein, but for each of us too, no?
But it is Bell himself, and not Einstein at all, who derives from this
declaration the necessity of introducing a supplementary parameter
$\lambda$ into the preparation of the state, in order to {\bf
predetermine} the results of the two measurements that are to
follow. Bell does so after having taken it as reasonable that the
orientation of one polariser does not influence the result of the
measurement on the other.
The introduction of supplementary parameters does not at all conform
to Einstein's wishes (see above); it does conform, on the other hand
(with the predetermination that results from it), to the hypothesis
and the model put forward in Bohm 2). It is this hypothesis that Bell
generalises, in a sense, by freeing himself from any reference to an
underlying mechanism (no spontaneous breaking, no self-measurement to
give birth to the parameter). The supplementary parameter in the
common past is posited at the outset, without its existence being tied
to a known or invented mechanism as in Bohm 2). As in Bohm 2), on the
other hand, there is with Bell no reduction of the wave packet:
nothing happens at the moment of the measurement on A. There lies the
break with EPR and with quantum mechanics.
Thus Bell writes:
\begin{quotation}
{\color{blue}\sl The vital assumption is that the result B for particle 2 does not
depend on the setting $ \overrightarrow {a} $, of the magnet for
particle 1, nor A on B.}
\end{quotation}
With EPR, it is on the contrary the presence of the measuring
apparatus for P, or of that for Q, and the collapse that follows, that
determine the result on B. The one cannot be an extension of the
other.
Let us continue with Bell:
\begin{quotation}
{\color{blue}\sl The result A of measuring $
\overrightarrow{\sigma_1}.\overrightarrow{a} $ is then determined by
$\overrightarrow{a}$ and $\lambda $, and the result B of measuring $
\overrightarrow{\sigma_2}.\overrightarrow{b} $ is then determined by
$\overrightarrow{b}$ and $\lambda $}
\end{quotation}
The mean value of the product of the two components
$\overrightarrow{\sigma_1}.\overrightarrow{a}$ and
$\overrightarrow{\sigma_2}.\overrightarrow{b}$ is then:
\begin{equation}
P(\overrightarrow{a},\overrightarrow{b})=\int d\lambda \varrho(\lambda)
A(\overrightarrow{a},\lambda)B(\overrightarrow{b},\lambda)
\end{equation}
where $\varrho(\lambda)$ is the probability distribution of $\lambda$.
It is this expression that Bell will show to be incompatible with the
value expected from quantum mechanics:
\[ <
\overrightarrow{\sigma_1}.\overrightarrow{a}.\overrightarrow{\sigma_2}.\overrightarrow{b}>
= - \overrightarrow{a}.\overrightarrow{b}\]
We know that the incompatibility requires three measurement directions
to be used. We know that experiment confirms this incompatibility. But
that is not what we focus on here.
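Bell's ``particular'' model, the one yielding $-1+2\theta/\pi$, can be made concrete. A standard realisation (a sketch for illustration; the explicit sign-function form is an assumption of this example, not a quotation of Bell) takes $\lambda$ to be a unit vector uniformly distributed on the sphere, with predetermined results $A(\overrightarrow{a},\lambda)=\mathrm{sign}(\overrightarrow{a}\cdot\lambda)$ and $B(\overrightarrow{b},\lambda)=-\mathrm{sign}(\overrightarrow{b}\cdot\lambda)$:

```python
import numpy as np

rng = np.random.default_rng(0)

def hv_correlation(a, b, n=400_000):
    """Monte Carlo estimate of P(a,b) = E[A(a,lam) B(b,lam)] for the
    deterministic model A = sign(a.lam), B = -sign(b.lam),
    with lam uniform on the unit sphere."""
    lam = rng.normal(size=(n, 3))
    lam /= np.linalg.norm(lam, axis=1, keepdims=True)
    A = np.sign(lam @ a)
    B = -np.sign(lam @ b)
    return float(np.mean(A * B))

theta = np.pi / 3
a = np.array([0.0, 0.0, 1.0])
b = np.array([np.sin(theta), 0.0, np.cos(theta)])

print(hv_correlation(a, b))   # ~ -1 + 2*theta/pi = -1/3 here
print(-np.cos(theta))         # QM prediction: -0.5
```

At $\theta = 0$ and $\theta = \pi/2$ the model agrees with QM ($-1$ and $0$); in between, the straight line $-1+2\theta/\pi$ departs from $-\cos\theta$, which is the gap Bell's three-direction argument exploits.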
Let us return to the starting point of the model put to the test by
Bell, quoted above:
\begin{quotation}
{\color{blue}\sl The result A of measuring $
\overrightarrow{\sigma_1}.\overrightarrow{a} $ is then determined by
$\overrightarrow{a}$ and $\lambda $, and the result B of measuring $
\overrightarrow{\sigma_2}.\overrightarrow{b} $ is then determined by
$\overrightarrow{b}$ and $\lambda $}
\end{quotation}
One sees very clearly that this model cannot reproduce the EPR thought
experiment. With EPR, indeed, it is the measurement on I that produces
a change of state on II, the determination of the nature of a state
(on I too, of course). Nothing of the kind with Bell.
We show in Appendix B that the same formula (1) of Bell is obtained,
up to a sign, if one claims to look for a hidden parameter $\lambda$
that would determine the results of two successive polarisation
measurements on the same atom. The reader will be convinced that in
that case the search is absurd and unrelated to quantum
mechanics. With the Bohm-Bell model, one simply remains unrelated to
quantum mechanics.
\section{In summary}
\begin{tabular}{|c|c|c|c|c|}
\hline
 & & $P(\overrightarrow{a},\overrightarrow{b})$ =
$< \overrightarrow{\sigma_1}.\overrightarrow{a}.\overrightarrow{\sigma_2}.\overrightarrow{b}>
$ &$ P(\overrightarrow{a},\overrightarrow{a})$ & $
P(\overrightarrow{a},\overrightarrow{a_{\bot}})$ \\
\hline
\hline
With collapse & QM (Bohm1 and EPR) & $ - \overrightarrow{a}.\overrightarrow{b} =
-\cos(\theta)$ & -1
 & 0 \\
\hline
\multirow{3}{*}{Without collapse}& Bohm2 & $ -1/3\,
\overrightarrow{a}.\overrightarrow{b} $ & -1/3 & 0 \\
 & Bell, specific model & $ -1 + 2 \theta/\pi $ & -1 & 0 \\
 & Bell, general model & $ P(\overrightarrow{a},\overrightarrow{b})=\int d\lambda
\varrho(\lambda)
A(\overrightarrow{a},\lambda)B(\overrightarrow{b},\lambda) $
 & -1 & 0 \\
\hline
\end{tabular}
\vspace{0.5 cm}
These results are collected in Table I. Only QM, with EPR applied to
the Bohm1 case, includes, as it must, the recourse to the reduction of
the wave packet that lies at the very origin of the paradox. Next
comes the Bohm2 model, which corresponds to the spontaneous symmetry
breaking at the moment the components separate. Then come two models
belonging to Bell's problematic: a general dependence on a
supplementary parameter not attached to any supposed physical
mechanism. In the first of these, a specific distribution of the
parameter and a specific dependence on it of the values found, $
A(\overrightarrow{a},\lambda) $ and $ B(\overrightarrow{b},\lambda) $,
are chosen. In the second and last model, the most general expression,
where the explicit dependences are abandoned.
For each of these four choices we list the mean value $
<\overrightarrow{\sigma_1}.\overrightarrow{a}.\overrightarrow{\sigma_2}.\overrightarrow{b}>
= P (\overrightarrow{a},\overrightarrow{b})$ and the particular values
$ P (\overrightarrow{a},\overrightarrow{a})$ and $ P
(\overrightarrow{a},\overrightarrow{a_{\bot}})$. Bell then shows with
the last two columns that QM remains compatible if one restricts
oneself to these settings alone (a single one for Bohm2). Bell shows
that, by contrast, the expression $-1+2\theta/\pi$ obtained with the
explicit dependences differs markedly from QM as soon as one moves
away from these particular conditions ( $
\overrightarrow{a}.\overrightarrow{b} = 1 $ or $
\overrightarrow{a}.\overrightarrow{b} = 0 $).
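The Bohm2 entry $-\frac{1}{3}\,\overrightarrow{a}.\overrightarrow{b}$ can also be checked numerically. In this sketch (an illustration under stated assumptions, not Bohm's own calculation) each pair leaves the source with a definite, uniformly random spin direction $\lambda$ for atom A and $-\lambda$ for atom B; each atom is then measured independently with the one-particle quantum rule, outcome $+1$ with probability $\cos^2(\alpha/2)=(1+\cos\alpha)/2$:

```python
import numpy as np

rng = np.random.default_rng(1)

def bohm2_correlation(a, b, n=400_000):
    """Spontaneous-breaking model: atom A leaves with a definite spin
    direction lam (uniform on the sphere), atom B with -lam; each is
    then measured independently with the one-particle quantum rule."""
    lam = rng.normal(size=(n, 3))
    lam /= np.linalg.norm(lam, axis=1, keepdims=True)
    p_a = (1 + lam @ a) / 2        # P(A = +1) for spin definite along lam
    p_b = (1 - lam @ b) / 2        # P(B = +1) for spin definite along -lam
    A = np.where(rng.random(n) < p_a, 1.0, -1.0)
    B = np.where(rng.random(n) < p_b, 1.0, -1.0)
    return float(np.mean(A * B))

a = np.array([0.0, 0.0, 1.0])
print(bohm2_correlation(a, a))     # ~ -1/3, not -1
```

One recovers $P(\overrightarrow{a},\overrightarrow{a})\approx -1/3$ rather than $-1$: as Bohm notes, total spin is then conserved only on average, which is precisely what separates the Bohm2 row from QM.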
With this specific dependence abandoned, that is, once it reaches the
greatest generality, the deterministic model with a supplementary
parameter in the common past can come much closer to QM, and a trick
of sorts is needed to tell them apart: the choice of three measurement
directions is necessary, and it leads to the famous inequalities which
QM therefore violates.
What is surprising is not this violation, but that this whole arsenal
should be needed at all, so far removed is the model put to the test
from QM and, let us recall once more, from Einstein's requirements!
These inequalities confirm the EPR demonstration; they do not
contradict it.
\section{Taking stock}
Nearly 30 years separate EPR from Bell, with Bohm at mid-course. But
what were the immediate reactions to EPR? One must of course cite
Bohr, who replied immediately with the same title, the same
interrogation one should say. Not the same answer, no! And then Furry
and, above all, Schrödinger\cite{schr35}, who responded with great
honesty and in great detail. And up to Bohm (and sometimes up to
today!) the content is the same. The tone\footnote{This tone, that of
certainty, of outright closure even, is found in most of the actors;
some quotations are collected in \cite{rous2006}.} was set by the
much-quoted Bohr:
\begin{quotation}
{\sl \ldots a viewpoint termed `complementarity'' is explained from which
quantum mechanical description of physical phenomena would seem to
fulfill, within its scope, all rational demands of completeness}
(\cite{Bohr35} résumé, page 696).
{\sl Such an argumentation [that of EPR] however, would hardly seem suited
to affect the soundness of quantum-mechanical description, which is
based on a coherent mathematical formalism covering automatically any
procedure of measurement like that indicated } (\cite{Bohr35}, page
696).
\end{quotation}
But analysed lucidly by Bohm in 1957:
\begin{quotation}
{\sl It is clear that in Bohr's point of view, no paradox can arise in the
hypothetical experiment of ERP. For the system of two atoms plus the
apparatus which is used to observe their spins is, in any case,
basically unseparable and unanalysable, so that the question of how
the correlations come about simply has no meaning.}
\end{quotation}
But what EPR brought out is that the apparatus observing the spins is
in fact two apparatuses, as far from each other as one wishes. Bohr's
argument is thereby considerably weakened, or at least more exposed to
doubt.
While Bohm says he accepts Bohr's point of view, he nevertheless specifies:
\begin{quotation}
{\sl {\ldots}; but we differ, in that we suppose that this combined system
is at least conceptually analysable into components which satisfy
appropriate laws.}
\end{quotation}
Bohm then states the alternative: either actions at a distance, or a
deeper physics of which QM would be only an approximation.
Bohm ends this part 2 of his article, anticipating what is to come in
part 3, and concludes with the greatest clarity\footnote{In
1951\cite{bohm1951}, when Bohm introduces the coupled-atom model (the
simplified version of EPR!), he does not yet hold this point of view
and is still an unreserved defender of Bohr's position.}:
\begin{quotation}
{\sl In sum, then, the quantum theory of the many-body problem implies the
possibility of a rather strange kind of correlation in the properties
of distant things. As we shall see in the next section, experiments
proving the presence of this kind of correlation already exist.
Any attempt to interpret (sic) the quantum mechanics and to understand
its meaning must therefore take such correlations into account.}
\end{quotation}
We are in 1957. For Bohm, then, there exist surprising correlations
whose understanding remains to be built.
Perhaps one then understands why Bell in 1964 is interested mainly in
part 1 of Bohm's article, the one that provides him with the model to
put to the test; a little in part 2, in order to eliminate it as too
far removed from QM; and not at all in part 3. In that part 3 Bohm
already concludes what Bell will still want to put to the test, and
Bell does so with a very general model, but one with little connection
to QM (yes, without collapse).
\section{Einstein and quantum mechanics: four strong points.}
Beyond the relation to Bell, it may be interesting to schematise
Einstein's point of view on quantum mechanics, of which he was, let us
recall, one of the major discoverers.
1) The description that quantum mechanics provides, and the
predictions deduced from it, are correct.
2) Quantum mechanics is insufficient, unfinished, incomplete, because
it does not treat individual cases.
3) No tinkering is possible, no supplementary parameters. To reach the
complete theory, a refoundation, a reconstruction, is needed.
4) The demonstration of the EPR alternative and the question of
actions at a distance.
One may say that the trace of the first three points runs through the
whole half-century from 1905 to 1955; for the fourth, it is more
complicated. While it is inevitably schematic to rigidify into four
points what were fifty years of interventions on the subject, at least
this scheme is built from texts that can be checked! Longer extracts
can be found in \cite{rous2006}.
For points 2 and 3, a parallel with relativity can be attempted.
Classical mechanics might be called incomplete, but one does not pass
to relativistic mechanics by adding supplementary
parameters. Classical mechanics becomes an approximation of
relativistic mechanics, and experimental situations distinguish one
from the other. For QM, on the other hand, it would suffice that the
mechanics of the future encompass QM while eliminating, even if only
in principle, its probabilistic character. Here is the kind of
consideration Einstein offered on the subject:
\begin{quotation}
{\color{blue}\uline{\sl``It seems to me, in any case, that the
continuous-discontinuous alternative is a genuine alternative; that is
to say, no compromise is possible here. {\ldots} In such a theory
there is no room for space and time, but only for numbers, numerical
constructions, and rules for forming them on the basis of algebraic
rules excluding the limiting process. Which way will prove right, only
the quality of the result will tell us''}} Letter to Joachim,
\cite{bali1989} page 256.
\end{quotation}
Concerning actions at a distance, here is what he said about them in 1949:
\begin{quotation}
{\color{blue} \uline{\sl I close these expositions, which have grown
rather lengthy, concerning the interpretation of quantum theory with
the reproduction of a brief conversation which I had with an important
theoretical physicist.}}
{\color{blue}\uline{\sl
He : ``I am inclined to believe in
telepathy.''}}
{\color{blue}\uline{\sl
I : `` This has probably more to do with physics than with psychology.''
}}
{\color{blue}\uline{\sl
He : ``Yes''
}}
\end{quotation}
\cite{eins1949} page 683.
\section{More questions than answers, and more doubt too.}
Einstein is one of the major architects of the construction of QM.
Yet, in 1935, he is marginalised within it. Why then is he the one who
discovers these correlations, this new physical phenomenon, and why is
his discovery ignored{\ldots} until Bohm in 1957 and Bell in 1964? But
also, why not the others?
\begin{wrapfigure}[11]{r}[34pt]{10 cm}
\includegraphics[ scale = 0.9]{bellfig3.ps}
\end{wrapfigure}
From the beginning, Einstein is preoccupied with the status of the
wave function. He is always tempted to grant it a statistical value,
since otherwise there is a problem with relativity at the moment of
the collapse. He often used the very simple thought experiment (one
more!) sketched in the figure opposite (figure 3). A particle or a
photon is diffracted when passing through a small aperture $O$ in a
screen $S$, then detected on a half-spherical photographic plate $P$.
And his comment (here at the 1927 Solvay congress):
{ \color{blue}\sl\uline{But the interpretation according to which $|\Psi|^2 $ expresses the
probability that {\em this} particle is located at a given place
presupposes a quite particular mechanism of action at a distance,
which prevents the wave continuously distributed in space from
producing an effect at {\em two} places on the screen.}}
What is the answer to this perfectly legitimate question, which arises
as soon as some share of reality is granted to the wave function? It
is precisely to say that the wave function is not real (a state
in Hilbert space). Schrödinger, whose formulation is the most
detailed and the most honest, says that it is ``the catalogue of the
possible answers of the measurement with their probabilities''. The
answer is also to say, with Bohr (as Bohm rightly recalls), that a
material ensemble is not analysable into its parts {\ldots} as long as
the measurement has not been accomplished. One cannot, and one does
not want to, know what happens, but only what the result will be. One
flees physics to take refuge in interpretation\footnote{Nor can one
pass over in silence the decades during which, asserted more or less
clearly, a role was attributed to the observer, when it was not to his
consciousness (see numerous quotations in \cite{rous2003}).}!
One sees clearly how EPR make this position somewhat more difficult to
hold; this is expressed clearly by Bohm, 22 years later, as we saw
above. And the answer will not be the abandonment of this posture (we
do not want to know anything) but the refuge in a word, non-locality,
an expression completely absent with Bohr and today completely
assimilated since (at least) Bell. One must note, however insufficient
it may be, that with this word we nevertheless move closer to reality,
to physics: we already want to see, before knowing.
\section{By way of conclusion}
In 1935, Einstein and his colleagues Podolsky and Rosen demonstrate
the existence of an unusual/astonishing/unexpected property of QM, and
they then question its overall coherence, its completeness. What is it
about? When the two elements of a suitably prepared composite system
separate, the wave function remains one, and it can lead to the
production of correlations at the moment when measurements, and the
corresponding collapse, are performed. This discovery went unnoticed,
drowned in the vague concepts of complementarity and non-separability.
It will take 22 years (the world, it is true, will be turned upside
down in that interval) and the death of Einstein for attention to be
drawn, with Bohm, to this discovery as the discovery of a physical
phenomenon demanding a specific understanding.
If its understanding did not advance at that point, at least a word
was invented to describe it: non-locality, somewhat more explicit and
more specific to a physical phenomenon [than the older non-separability].
With Bohm [following, perhaps, an idea of Einstein's], the possibility
is examined of avoiding the paradox/phenomenon by modifying/completing
QM in a radical way, through the intervention of a spontaneous
symmetry breaking at the moment (which moment?) of the separation of
the components, this breaking resulting in an additional parameter in
the almost-common past, the common direction of polarisation. The
results of this hypothesis are very different from those of QM, as
Bell will show later.
But Bohm then clearly asserts, relying on experimental results, that
the phenomenon of action at a distance, the paradox, is indeed real.
In 1964, Bell nevertheless takes up the idea of parameters hidden in
the common past, but generalises its basis: there is no longer any
mechanism generating this parameter, at once the weakness and the
strength of his hypothesis. He then shows that such a model can, in
its results, come much closer to those of QM than Bohm's model, since
inequalities involving three measurement directions are needed to tell
them apart. The fact remains that this model, despite the proximity of
its results to those of QM, is so far removed from the latter that it
can in no way be regarded as an extension of it. Nor can it be
regarded as an answer to the original EPR article, so essential is the
measurement/collapse to the paradox demonstrated there, and so
completely absent is it from Bell's model.
It is nevertheless this article by Bell, and the inequalities it
establishes, that will feed experiments and commentaries on this
subject for decades, up to the present day.
And to return to the question raised at the beginning of this article,
it seems quite difficult to fit the chain of arguments exchanged
during this long period into the story that is usually told!
\section*{Appendix A: Measurement, collapse or reduction of the wave packet?}
Nothing is really satisfactory: ``measurement'' may suggest a role for
the observer; ``collapse'' or ``reduction of the wave packet'' may
suggest that everything lies in the representation, in Hilbert space.
The vocabulary is ambiguous because the thing itself is not as clear
as one would wish. And this resonates with the delicate questions of
wave/particle duality, or with those concerning the status of the wave
function.
Let us keep ``collapse'', more specific than ``reduction'', shorter if
one wants to append ``of the wave packet'', and apt to mark the
concrete and objective character of a process. One may keep
``measurement'' when the value of a parameter is determined, with or
without the presence of an observer.
\newpage
\vspace{1.5cm}
\large
\hspace{4cm}\fbox{\parbox{5cm}{
\centerline{\bf Measurement giving}
\centerline{\bf the result {\sl a$_n$}}}}
\begin{pspicture}(16,5)
\psline[linewidth=.06]{c->}(6.6,5)(6.6,4)
\psline[linewidth=.06]{c-}(6.5,4)(6.5,0)
\psline[linewidth=.06]{c-}(6.7,4)(6.7,0)
\psline[linewidth=.05]{c->}(0,0)(13.5,0)
\psline[linewidth=.05]{c-}(1,.3)(1,-.3)
\psline[linewidth=.05]{c-}(11.75,.3)(11.75,-.3)
\rput{0}(13.7,0){{\em t}}
\rput{0}(11.75,-.5){{\em t$_1$}}
\rput{0}(6.6,-.5){{\em t$_0$}}
\rput{0}(1,-0.5){0}
\rput[bl]{0}(0.,1){$\ket{\psi(0)}$}
\rput[bl]{0}(4.8,1){$\ket{\psi(t_o)}$}
\rput[bl]{0}(6.9,3){$\ket{u_n}$}
\rput[bl]{0}(11.,3){$\ket{\psi'(t_1)}$}
\multips{0}(1.8,1)(.8,0){3}{
\pscurve[linewidth=.05](0,.15)(0.2,0)(.4,.15)(.6,.3)(.8,0.15)
}
\psline[linewidth=.05]{c->}(4.2,1.15)(4.5,1.15)
\multips{0}(8.,3)(.8,0){3}{
\pscurve[linewidth=.05](0,.15)(0.2,0)(.4,.15)(.6,.3)(.8,0.15)
}
\psline[linewidth=.05]{c->}(10.4,3.15)(10.7,3.15)
\end{pspicture}
\vspace{7.mm}
\large
\noindent FIGURE 2
{\bf During a measurement, at time {\sl t$_0$}, of the observable {\sl A}
giving the result {\sl a$_n$}, the state vector of the system undergoes an
abrupt modification and becomes $\ket{u_n}$. It then evolves from this
new initial state. }
\vspace{9mm}
Figure 2, reproduced from reference \cite{Cohe77} page 221, presents
the question of the {\sl measurement}. The state vector (the wave
function) evolves deterministically (Schrödinger equation) from the
initial preparation at {\em t} = 0 until the {\bf measurement} at
{\sl t = t$_o$}. It then undergoes an abrupt probabilistic change
towards one of the eigenvectors $\ket{u_n}$ of A, the one associated
with the eigenvalue a$_n$ found. The state vector (the wave function)
is projected onto one of these eigenvectors and renormalised. It then
resumes a deterministic evolution from {\sl t = t$_o$} until (for
example) {\sl t = t$_1$}, where a new measurement may possibly be
performed, etc\ldots
This presentation is nevertheless not sufficient, and for two reasons:
\begin{itemize}
\item
the circumstances that cause the passage from one behaviour to the
other are not discussed at all (it is not so simple, it is true!). We
shall simply specify, for our part, that a macroscopic object,
possibly part of a measuring apparatus, comes to interact with the
microscopic particle to which the wave function is associated. Let us
insist, however, that most reductions take place in the universe
without any observer being present, so that it is entirely exceptional
for one to occur inside a measuring apparatus.
\item
nor is it noted that often (always?) the measurement of a parameter
amounts to the localisation of the particle in a detector, or in a
part of a detector, so that the spatial wave function is also modified
at the moment of the measurement, sought and described in figure 2, of
a parameter other than the localisation.
\end{itemize}
We have spoken of the localisation of the particle; it would be better
to say reduction of the localisation, since one never reaches a
point-like localisation, which has no meaning. It is this reduction
that occurs so often in the universe outside any measuring apparatus.
So yes, why not speak of {\sl \guillemotleft collapse
\guillemotright} for the general phenomenon, and keep {\sl
\guillemotleft measurement \guillemotright} ... when there is a
measurement, that is to say a determination of the value of a
parameter by this {\sl \guillemotleft collapse \guillemotright}.
But precisely, in the preceding examination of the EPR and Bell
articles, we have spoken a great deal about measurement, this
reduction/localisation, this very exceptional collapse.
\section {Appendix B: spin, QM and additional parameters.}
In what follows we examine in somewhat more detail the properties of
spin in QM, and the possibilities of accounting for the same results
with a (classical) model with additional parameters.
\subsection*{A single spin measurement}
Consider a spin-1/2 particle.
It is prepared in a pure state of polarisation along the direction of
a unit vector $\overrightarrow{p}.$ The measurement
$\overrightarrow{\sigma}.\overrightarrow{a}$ of the polarisation of
this particle along a direction $\overrightarrow{a} $ will give +1
or -1 with mean value
\[ \langle \overrightarrow{\sigma}.\overrightarrow{a} \rangle = \cos(\theta) \]
where $ \theta$ is the angle between $ \overrightarrow{p}$ and
$\overrightarrow{a}.$
Indeed, QM does not allow one to predict the value found, -1 or +1,
but only its mean value.
If $\overrightarrow{a}$ is identical to $\overrightarrow{p}$, the
polarisation state is unchanged. With $\overrightarrow{a}$ different
from $\overrightarrow{p}$, if the measurement gave +1 the state is
polarised along $+\overrightarrow{a}$; if it gave -1, the state is
polarised along $-\overrightarrow{a}$. This is fully in keeping with a
good measurement! We are now ready to make a second measurement.
\subsection*{A double measurement, or two measurements?}
Suppose then that after the first measurement a new measurement is
performed along a direction $\overrightarrow{b}$, different from
$\overrightarrow{a}$, and that the first gave, say, +1. Again, the
result along $\overrightarrow{b}$ will be +1 or -1, unpredictable
except on average, since this time
\[ \langle \overrightarrow{\sigma}.\overrightarrow{b} \rangle = \cos(\theta') \]
where $ \theta'$ is the angle between $ \overrightarrow{a}$ and
$\overrightarrow{b}.$ (If the result of the measurement along
$\overrightarrow{a}$ had been -1 instead of +1, $ \theta'$ would have
been the angle between $ -\overrightarrow{a}$ and $\overrightarrow{b}.$)
Note that, for the averages at least, the result depends on $
\overrightarrow{a}$ and on $\overrightarrow{b}$ but no longer at all
on $\overrightarrow{p}$, whose memory may be said to have been, in a
way, erased by the first measurement.
For QM, the measurement process, with its projection/renormalisation,
is such that each measurement gives the system a fresh start.
\subsection*{Completing QM? Introduction of additional parameters}
Can one imagine, beyond QM therefore, that a parameter $\lambda$ whose
value is unknown to us would determine the result of the
reduction/localisation (a simple localisation, or the value found in a
measurement) for each event?
The value predicted (with certainty!) for the mean by QM would be
respected thanks to a particular distribution
$D(\overrightarrow{\lambda})$ of the values of $\lambda$, each
possible result of the measurement being associated with a part of the
distribution $D(\lambda)$.
This distribution $D(\overrightarrow{\lambda})$ must obviously depend
on the initial state; on the other hand, the rule that determines the
result must depend on the particular arrangement of the macroscopic
object that caused the reduction/localisation and, for a measurement,
on the choice of the apparatus and on the values of the parameters
that may characterise it. But what seems absolutely necessary in order
to conform to QM and to its measurement process
(projection/renormalisation) is that the distribution D($\lambda$) be
reinitialised by each measurement.
Let us be specific in the case of a spin measurement as discussed above.
The distribution $D$ depends on $\overrightarrow{p}$, the vector along
which the last measurement was made, and on $\overrightarrow{a}$, the
one that is now going to be used (the presence of the measuring
apparatus, in a way).
Can one then complete QM and find a distribution
$D(\overrightarrow{\lambda})$ that meets our objectives? For a single
measurement (on an initially polarised state) the answer is yes.
Yes, it is possible; this does not say, of course, that it is reality.
It is possible. Let us show that it is possible, that one can define a
distribution $D(\lambda)$ which can be split into $D^+(\lambda)$, for
which the result of the measurement is +1, and $D^-(\lambda)$, for
which it is -1.
\subsection*{Starting with the single measurement}
We first define the distribution
$D(\overrightarrow{\lambda};\overrightarrow{p})$ as a uniform
distribution of $\overrightarrow{\lambda}$ over the half-sphere
$\overrightarrow{\lambda}.\overrightarrow{p} \geq 0$.
We now separate this distribution into its two subsets
$D^+(\lambda)$ and $D^-(\lambda)$. Let $\overrightarrow{a'} $ be a
unit vector depending on $\overrightarrow{a}$ and on
$\overrightarrow{p}$ in such a way that:
\begin{enumerate}
\item
the result of the measurement, $ \overrightarrow{\sigma}.\overrightarrow{a}=
\mathrm{sign}(\overrightarrow{\lambda}.\overrightarrow{a'} )$, is determined
by the value of $\overrightarrow{\lambda}$;
\item
the mean of this result conforms to the predictions of QM.
\end{enumerate}
Let $\Theta '$ be the angle between $ \overrightarrow{a'}$ and
$\overrightarrow{p}$, and $\Theta $ the angle between $
\overrightarrow{a}$ and $\overrightarrow{p}$.
We must have
\[ 1- \frac{2\Theta'}{\pi} = \cos(\Theta). \]
It is thus the angle $\Theta '$ between $ \overrightarrow{a'}$ and
$\overrightarrow{p}$ that is determined (infinitely many directions
satisfy this condition).
It was John Bell who proposed this little model. Von Neumann had
nevertheless demonstrated that hidden parameters were incompatible
with QM, but it was also John Bell who showed that Von Neumann's
supposed demonstration relied on an assumption that Bell proved to be
without foundation.
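Bell's single-measurement model lends itself to a direct numerical check. The Python sketch below is our own illustration, not Bell's code: taking $\overrightarrow{p}$ along the $z$-axis, it samples $\overrightarrow{\lambda}$ uniformly on the hemisphere $\overrightarrow{\lambda}.\overrightarrow{p} \geq 0$, tilts $\overrightarrow{a'}$ away from $\overrightarrow{p}$ by the angle $\Theta' = \frac{\pi}{2}(1-\cos\Theta)$ dictated by the condition above, and verifies that the empirical mean of $\mathrm{sign}(\overrightarrow{\lambda}.\overrightarrow{a'})$ reproduces $\cos\Theta$.

```python
import numpy as np

rng = np.random.default_rng(0)

def bell_model_mean(theta, n=200_000):
    """Monte Carlo mean of sign(lam . a') for lam uniform on the
    hemisphere lam . p >= 0 (p along the z-axis), with a' tilted away
    from p by Theta' = (pi/2) * (1 - cos(theta))."""
    theta_p = 0.5 * np.pi * (1.0 - np.cos(theta))   # angle between a' and p
    a_prime = np.array([np.sin(theta_p), 0.0, np.cos(theta_p)])
    # uniform points on the unit sphere, folded onto the upper hemisphere
    lam = rng.normal(size=(n, 3))
    lam /= np.linalg.norm(lam, axis=1, keepdims=True)
    lam[lam[:, 2] < 0] *= -1.0
    return float(np.mean(np.sign(lam @ a_prime)))

for theta in (0.0, np.pi / 3, np.pi / 2):
    print(f"Theta = {theta:.3f}:  model mean = {bell_model_mean(theta):+.3f},"
          f"  QM value = {np.cos(theta):+.3f}")
```

The agreement is within Monte Carlo noise for every angle, which is exactly the content of the condition $1-\frac{2\Theta'}{\pi}=\cos\Theta$.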
\subsection*{... and the double measurement?} The remark made above
applies directly: the first measurement has made the system ``forget''
the initial state; this is the essential projection/normalisation
process, and the new parameters cannot be common to before and after
this first measurement. It is therefore useless, absurd even, to look
for a model that would make it possible, starting from a distribution
$D(\overrightarrow{\lambda})$, to predict the result of the
measurement along $\overrightarrow{p} $, then along
$\overrightarrow{a}$... and why not along any other direction, the
infinite sequence of possible measurements, ... future and past! One
would moreover want the mean value $\langle
(\overrightarrow{\sigma}.\overrightarrow{p})\,(\overrightarrow{\sigma}
.\overrightarrow{a}) \rangle = \overrightarrow{a}.\overrightarrow{p}$
predicted by QM to be respected.
Useless, absurd, but is it possible?
Let us formalise the question: the result A of the measurement
$\overrightarrow{\sigma}.\overrightarrow{a}$ is determined by $
\overrightarrow{a}$ and $\lambda$.
The result B of the measurement
$\overrightarrow{\sigma}.\overrightarrow{b}$ is determined by
$\overrightarrow{b}$ and $\lambda$:
\begin{equation}
A(\overrightarrow{a},\lambda)=\pm 1, \qquad B(\overrightarrow{b},\lambda)= \pm 1
\end{equation}
and the mean of the results obtained conforms to the predictions of QM:
\begin{equation}
P(\overrightarrow{a},\overrightarrow{b})=\int d\lambda\, \varrho(\lambda)\,
A(\overrightarrow{a},\lambda)\,B(\overrightarrow{b},\lambda).
\end{equation}
One then notices that equations 1 and 2 above are the same (up to a
sign for the second) as those used by John Bell in the article
examined above. Is this a surprise? No: in both cases, there is no
reduction of the wave packet!
\section{Introduction}
\label{secintro}
Consider the classic finite-horizon Linear Quadratic (LQ) optimal control problem. In particular, consider the discrete linear time-invariant system governed by the difference equation
\begin{eqnarray}
\label{eqsys}
x_{t+1} = A\,x_t+B\,u_t,
\end{eqnarray}
where $A \in {\mathbb{R}}^{n \times n}$ and $B \in {\mathbb{R}}^{n \times m}$, and where, for all $t \ge 0$, $x_t\in {\mathbb{R}}^n$ represents the state and $u_t \in {\mathbb{R}}^m$ represents the control input. Let the initial state $x_0\in {\mathbb{R}}^n$ be given. The problem is to find a sequence of inputs $u_t$, with $t = 0,1, \ldots,T-1$, minimising the cost function
\begin{eqnarray}
\label{cost}
J(x_0,u) \stackrel{\text{\tiny def}}{=} \sum_{t=0}^{T-1} \left[ \begin{array}{cc} x_t^{\scalebox{.6}{\mbox{T}}} \;&\; u_t^{\scalebox{.6}{\mbox{T}}} \emat \left[ \begin{array}{cc} Q & S \\ S^{\scalebox{.6}{\mbox{T}}} & R \emat \left[ \begin{array}{c} x_t \\ u_t \emat+x_T^{\scalebox{.6}{\mbox{T}}}\,P\,x_T.
\end{eqnarray}
We assume that the weight matrices $Q\in {\mathbb{R}}^{n \times n}$, $S \in {\mathbb{R}}^{n \times m}$ and $R \in {\mathbb{R}}^{m \times m}$ are such that the {\em Popov matrix} $\Pi$ is symmetric and positive semidefinite, i.e.,
\begin{eqnarray}
\Pi \stackrel{\text{\tiny def}}{=} \left[ \begin{array}{cc} Q & S \\ S^{\scalebox{.6}{\mbox{T}}} & R \emat =\Pi^{\scalebox{.6}{\mbox{T}}} \ge 0.
\end{eqnarray}
We also assume that $P=P^{\scalebox{.6}{\mbox{T}}} \ge 0$.
The set of matrices $\Sigma=(A,B,\Pi)$ is often referred to as {\em Popov triple}, see e.g. \cite{Ionescu-OW-99}.
We recall that, for any time $t$, the set ${\cal U}_t$ of all optimal inputs can be {parameterised in terms of an arbitrary $m$-dimensional signal $v_t$ as}
${\cal U}_t=\{-K_t\,x_t+G_t\,v_t\}$, where\footnote{The symbol $M^\dagger$ denotes the Moore-Penrose pseudo-inverse of matrix $M$.}
\begin{eqnarray}
K_t & = & (R+B^{\scalebox{.6}{\mbox{T}}}\,X_{t+1}\,B)^\dagger (S^{\scalebox{.6}{\mbox{T}}}+B^{\scalebox{.6}{\mbox{T}}}\,X_{t+1}\,A), \\
G_t & = & I_m-(R+B^{\scalebox{.6}{\mbox{T}}}\,X_{t+1}\,B)^\dagger (R+B^{\scalebox{.6}{\mbox{T}}}\,X_{t+1}\,B),
\end{eqnarray}
in which $X_t$ is the solution of the Generalised Riccati Difference Equation GRDE($\Sigma$)
\begin{eqnarray}
\label{grde}
X_{t} = A^{\scalebox{.6}{\mbox{T}}}\,X_{t+1}\,A-(A^{\scalebox{.6}{\mbox{T}}}\,X_{t+1}\,B+S)(R+B^{\scalebox{.6}{\mbox{T}}}\,X_{t+1}\,B)^\dagger(B^{\scalebox{.6}{\mbox{T}}}\,X_{t+1}\,A+S^{\scalebox{.6}{\mbox{T}}})+Q \qquad
\end{eqnarray}
iterated backwards from $t=T-1$ to $t=0$ using the terminal condition
\begin{eqnarray}
\label{term}
X_T=P,
\end{eqnarray}
see \cite{Rappaport-S-71}. The equation characterising the set of optimal state trajectories is
\begin{eqnarray*}
x_{t+1}=(A-B\,K_t)\,x_t-B\,G_t\,v_t.
\end{eqnarray*}
The optimal cost is $J^\ast=x_0^{\scalebox{.6}{\mbox{T}}}\,X_0\,x_0$.\\[-2mm]
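As a minimal numerical illustration of the recursion above, the following Python sketch (ours, for illustration only) iterates the GRDE backwards from the terminal condition $X_T=P$ and collects the gains $K_t$; the Moore-Penrose pseudo-inverse is computed with \texttt{numpy.linalg.pinv}, and the Popov triple is a toy double integrator with identity weights.

```python
import numpy as np

def grde_backward(A, B, Q, S, R, P, T):
    """Iterate the generalised Riccati difference equation backwards
    from the terminal condition X_T = P, using the Moore-Penrose
    pseudo-inverse, and return X_0 together with the gains K_t."""
    X = P.copy()
    gains = []
    for _ in range(T):
        Rx = R + B.T @ X @ B
        Sx = A.T @ X @ B + S
        gains.append(np.linalg.pinv(Rx) @ Sx.T)            # K_t
        X = A.T @ X @ A - Sx @ np.linalg.pinv(Rx) @ Sx.T + Q
    gains.reverse()                                        # gains[t] = K_t
    return X, gains

# toy Popov triple: a double integrator with identity weights
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2); S = np.zeros((2, 1)); R = np.array([[1.0]]); P = np.eye(2)

X0, gains = grde_backward(A, B, Q, S, R, P, T=50)
x0 = np.array([1.0, 0.0])
print("optimal cost x0' X0 x0 =", float(x0 @ X0 @ x0))
```

Each backward step preserves the symmetry and positive semidefiniteness of $X_t$, and the returned $X_0$ yields the optimal cost $x_0^{\scalebox{.6}{\mbox{T}}} X_0\, x_0$.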
Despite the fact that it has been known for several decades that the generalised discrete Riccati difference equation provides the solution of the classic finite-horizon LQ problem,
this equation has not been studied with the same attention and thoroughness devoted to the standard discrete Riccati difference equation.
The purpose of this paper is to start filling this gap. In particular, we show a reduction technique for this equation that allows one to compute its solution by solving a smaller equation with the same recursive structure, with obvious computational advantages.
In order to carry out this task, several ancillary results on the corresponding generalised Riccati equation are established, which extend those valid for standard discrete algebraic Riccati equations presented in \cite{Ferrante-W-07} and \cite{Ferrante-04}.
In particular, these results show that the nilpotent part of the closed-loop matrix is independent of the particular solution of the generalised algebraic Riccati equation. Moreover, we provide a necessary and sufficient condition, expressed solely in terms of the problem data, for the existence of this nilpotent part of the closed-loop matrix. This condition, which is straightforward for the standard algebraic Riccati equation, becomes more involved, and more interesting, for the generalised Riccati equation.
{We then show that all solutions of the generalised algebraic Riccati equation coincide along the largest eigenspace associated with the eigenvalue at the origin of the closed-loop matrix, and that this subspace can be employed to decompose the generalised Riccati difference equation into a nilpotent part, whose solution converges to the zero matrix in a finite number of steps {(not greater than $n$)}, and a part which corresponds to a non-singular closed-loop matrix and is therefore easy to handle with the standard tools of linear-quadratic optimal control.}
{ As a consequence, our analysis permits a generalisation of a long series of results
aiming at a closed-form representation of the optimal control, see \cite{Ferrante-N-05,Ferrante-N-06,EZ,Ferrante-Ntogramatzidis-IEEE-2012} and, for the continuous-time counterpart, \cite{Ferrante-M-N,Ferrante-Ntog-EJC-07,Ferrante-N-10}.
}
{
Our analysis of the GRDE is based on the general theory on generalised {\em algebraic} Riccati equation presented in \cite{Stoorvogel-S-98} and on some recent developments derived in \cite{Ferrante-N-12-sub,Ferrante-N-12-sub2}. }
\section{The Generalised Discrete Algebraic Riccati Equation}
We begin this section by recalling two standard linear algebra results that are used in the derivations throughout the paper.
{
\begin{lemma}
\label{lem1}
Consider $P=\left[ \begin{smallmatrix} P_{11} & P_{12} \\[1mm] P_{12}^{\scalebox{.6}{\mbox{T}}} & P_{22} \end{smallmatrix} \right]=P^{\scalebox{.6}{\mbox{T}}} \ge0$. Then,
\begin{enumerate}
\item $\,\ker P_{12} \supseteq \ker P_{22}$;
\item $\,P_{12}\,P_{22}^\dagger\,P_{22}= P_{12}$;
\item $\,P_{12}\,(I-P_{22}^\dagger P_{22})=0$;
\item $\,P_{11}-P_{12} P_{22}^\dagger P_{12}^{\scalebox{.6}{\mbox{T}}} \ge 0$.
\end{enumerate}
\end{lemma}
%
\begin{lemma}
\label{lem1bis}
Consider $P=\left[ \begin{smallmatrix} P_{11} & P_{12} \\[1mm] P_{21} & P_{22} \end{smallmatrix} \right]$ where $P_{11}$ and $P_{22}$ are square and $P_{22}$ is non-singular. Then,
\begin{eqnarray}
\label{det11}
\det\,P=\det\, P_{22}\,\cdot\,\det(P_{11}-P_{12}\, P_{22}^{-1}\, P_{21}).
\end{eqnarray}
\end{lemma}
%
}
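The items of Lemma \ref{lem1} are easy to verify numerically. The short Python sketch below (the random example is ours) builds a positive semidefinite matrix with a deliberately singular block $P_{22}$, so that the pseudo-inverse is genuinely needed, and checks items {\em 2} and {\em 4}.

```python
import numpy as np

rng = np.random.default_rng(1)

# a rank-deficient PSD matrix whose trailing 2x2 block P22 is singular
L = rng.normal(size=(5, 3))
L[4, :] = 0.0                      # forces a zero last row/column in P
P = L @ L.T
P11, P12, P22 = P[:3, :3], P[:3, 3:], P[3:, 3:]

# item 2 of Lemma 1: P12 P22^+ P22 = P12
assert np.allclose(P12 @ np.linalg.pinv(P22) @ P22, P12)

# item 4 of Lemma 1: the generalised Schur complement is PSD
schur = P11 - P12 @ np.linalg.pinv(P22) @ P12.T
assert float(np.min(np.linalg.eigvalsh(schur))) > -1e-8

print("Lemma 1, items 2 and 4, verified on a random singular example")
```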
We now introduce the so-called Generalised Discrete Algebraic Riccati Equation GDARE($\Sigma$), defined as
\begin{eqnarray}
\label{gdare}
X= A^{\scalebox{.6}{\mbox{T}}}\,X\,A-(A^{\scalebox{.6}{\mbox{T}}}\,X\,B+S)(R+B^{\scalebox{.6}{\mbox{T}}}\,X\,B)^\dagger(B^{\scalebox{.6}{\mbox{T}}}\,X\,A+S^{\scalebox{.6}{\mbox{T}}})+Q.
\end{eqnarray}
The algebraic equation (\ref{gdare}) subject to the constraint
\begin{eqnarray}
\label{kercond}
\ker (R+B^{\scalebox{.6}{\mbox{T}}}\,X\,B) \subseteq \ker (A^{\scalebox{.6}{\mbox{T}}}\,X\,B+S)
\end{eqnarray}
is usually referred to as Constrained Generalised Discrete Algebraic Riccati Equation CGDARE($\Sigma$):
\begin{eqnarray}
\label{cgdare}
\left\{ \begin{array}{ll}
X= A^{\scalebox{.6}{\mbox{T}}}\,X\,A-(A^{\scalebox{.6}{\mbox{T}}}\,X\,B+S)(R+B^{\scalebox{.6}{\mbox{T}}}\,X\,B)^\dagger(B^{\scalebox{.6}{\mbox{T}}}\,X\,A+S^{\scalebox{.6}{\mbox{T}}})+Q \\
\ker (R+B^{\scalebox{.6}{\mbox{T}}}\,X\,B) \subseteq \ker (A^{\scalebox{.6}{\mbox{T}}}\,X\,B+S) \end{array} \right.
\end{eqnarray}
It is obvious that CGDARE($\Sigma$) constitutes a generalisation of the classic Discrete Riccati Algebraic Equation DARE($\Sigma$)
\begin{eqnarray}
\label{dare}
X= A^{\scalebox{.6}{\mbox{T}}}\,X\,A-(A^{\scalebox{.6}{\mbox{T}}}\,X\,B+S)(R+B^{\scalebox{.6}{\mbox{T}}}\,X\,B)^{-1}(B^{\scalebox{.6}{\mbox{T}}}\,X\,A+S^{\scalebox{.6}{\mbox{T}}})+Q,
\end{eqnarray}
in the sense that any solution of DARE($\Sigma$) is also a solution of CGDARE($\Sigma$) but the {\em vice-versa} is not true in general. Importantly, however, the inertia of $R+B^{\scalebox{.6}{\mbox{T}}}\,X\,B$ is independent of the particular solution of the CGDARE($\Sigma$), \cite[Theorem 2.4]{Stoorvogel-S-98}. This implies that a given CGDARE($\Sigma$) cannot have one solution $X=X^{\scalebox{.6}{\mbox{T}}}$ such that $R+B^{\scalebox{.6}{\mbox{T}}} X\,B$ is non-singular and another solution $Y=Y^{\scalebox{.6}{\mbox{T}}}$ for which $R+B^{\scalebox{.6}{\mbox{T}}} Y\,B$ is singular. As such, {\bf i)} if $X$ is a solution of DARE($\Sigma$), then all solutions of CGDARE($\Sigma$) will also satisfy DARE($\Sigma$) and, {\bf ii)} if $X$ is a solution of
CGDARE($\Sigma$) such that $R+B^{\scalebox{.6}{\mbox{T}}}\,X\,B$ is singular, then DARE($\Sigma$) does not admit solutions. \\
{To simplify the notation, for any $X=X^{\scalebox{.6}{\mbox{T}}}\in {\mathbb{R}}^{n \times n}$ we define
\begin{eqnarray*}
R_X & \stackrel{\text{\tiny def}}{=} & R+B^{\scalebox{.6}{\mbox{T}}}\,X\,B\\
S_X & \stackrel{\text{\tiny def}}{=} & A^{\scalebox{.6}{\mbox{T}}}\,X\,B+S\\
K_X & \stackrel{\text{\tiny def}}{=} &(R+B^{\scalebox{.6}{\mbox{T}}}\,X\,B)^\dagger \, (B^{\scalebox{.6}{\mbox{T}}}\,X\,A+S^{\scalebox{.6}{\mbox{T}}})=R_X^\dagger S_X^{\scalebox{.6}{\mbox{T}}}\\
A_X& \stackrel{\text{\tiny def}}{=} & A-B\,K_X
\end{eqnarray*}
so that (\ref{kercond}) can be written as $\ker R_X \subseteq \ker S_X$.%
}
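With this notation, a candidate solution $X$ can be tested numerically against CGDARE($\Sigma$): one evaluates the residual of the Riccati equation together with the kernel condition, the latter in the equivalent form $S_X(I-R_X^\dagger R_X)=0$ (cf. item {\em 3} of Lemma \ref{lem1}, since $I-R_X^\dagger R_X$ projects onto $\ker R_X$). The Python sketch below is our own; for the example it obtains a fixed point simply by iterating the difference equation on a toy double-integrator triple until it settles.

```python
import numpy as np

def cgdare_residual(A, B, Q, S, R, X):
    """Residual of GDARE(Sigma) at X, and a boolean for the kernel
    condition ker R_X <= ker S_X, checked as S_X (I - R_X^+ R_X) = 0."""
    Rx = R + B.T @ X @ B
    Sx = A.T @ X @ B + S
    res = A.T @ X @ A - Sx @ np.linalg.pinv(Rx) @ Sx.T + Q - X
    proj = np.eye(Rx.shape[0]) - np.linalg.pinv(Rx) @ Rx  # projector on ker R_X
    return res, bool(np.allclose(Sx @ proj, 0.0))

# toy triple; a fixed point is reached by iterating the recursion
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2); S = np.zeros((2, 1)); R = np.array([[1.0]])

X = np.eye(2)
for _ in range(500):
    Rx = R + B.T @ X @ B
    Sx = A.T @ X @ B + S
    X = A.T @ X @ A - Sx @ np.linalg.pinv(Rx) @ Sx.T + Q

res, ker_ok = cgdare_residual(A, B, Q, S, R, X)
print("max |residual| =", float(np.max(np.abs(res))), " kernel condition:", ker_ok)
```

In this example $R_X$ is invertible, so the kernel condition holds trivially; the same check applies unchanged when $R_X$ is singular.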
\section{GDARE and the extended symplectic pencil}
{In this section we adapt the analysis carried out in \cite{Ferrante-W-07} for standard discrete algebraic Riccati equations to the case of CGDARE($\Sigma$).}
Consider the so-called extended symplectic pencil $N-z\,M$, where
\begin{eqnarray*}
M\stackrel{\text{\tiny def}}{=}\left[ \begin{array}{ccc}
I_n & O & O \\
O & -A^{\scalebox{.6}{\mbox{T}}} & O \\
O & -B^{\scalebox{.6}{\mbox{T}}} & O \end{array} \right] \qquad \textrm{and} \qquad
N\stackrel{\text{\tiny def}}{=} \left[ \begin{array}{ccc}
A & O & B \\
Q & -I_n & S \\
S^{\scalebox{.6}{\mbox{T}}} & O & R \end{array} \right].
\end{eqnarray*}
{ This pencil is an extension of the standard symplectic pencil, to which it reduces (see \cite{Wim,Ferrante-L-98}) when the matrix $R$ is invertible.}
We begin by giving a necessary and sufficient condition for $N$ to be singular. We will also show that, unlike the case in which the pencil $N-z\,M$ is regular, {the singularity of $N$ is not equivalent to the fact that the matrix pencil $N-z\,M$ has a generalised eigenvalue at zero.}
\begin{lemma}
\label{lem31}
Matrix $N$ is singular if and only if at least one of the two matrices $R$ and $A-B\,R^\dagger\,S^{\scalebox{.6}{\mbox{T}}}$ is singular.
\end{lemma}
\noindent{\bf{\em Proof:}\ \ }
First note that $N$ is singular if and only if so is $\left[ \begin{smallmatrix} A & B \\[1mm] S^{\scalebox{.6}{\mbox{T}}} & R \esmat$.
To see this fact, consider the left null-spaces.
Clearly, $\left[ \begin{array}{ccc} v_1{^{\scalebox{.6}{\mbox{T}}}} & v_2{^{\scalebox{.6}{\mbox{T}}}} & v_3{^{\scalebox{.6}{\mbox{T}}}} \emat\,N=0$ if and only if $v_2=0$ and $\left[ \begin{array}{cc} v_1{^{\scalebox{.6}{\mbox{T}}}} & v_3{^{\scalebox{.6}{\mbox{T}}}} \emat\,\left[ \begin{smallmatrix} A & B \\[1mm] S^{\scalebox{.6}{\mbox{T}}} & R \esmat=0$.\\ Now, if $R$ is singular, a non-zero vector $v_3$ exists such that $v_3{^{\scalebox{.6}{\mbox{T}}}}\,R=0$. {Since from {\em (1)} in Lemma \ref{lem1} applied to the Popov matrix $\left[ \begin{smallmatrix} Q & S \\[1mm] S^{\scalebox{.6}{\mbox{T}}} & R \esmat$ the subspace inclusion
$\ker R \subseteq \ker S$ holds}, we also have $\left[ \begin{array}{cc} 0\; & v_3{^{\scalebox{.6}{\mbox{T}}}} \emat \left[ \begin{smallmatrix} A & B \\[1mm] S^{\scalebox{.6}{\mbox{T}}} & R \esmat=0$.
{If $R$ is invertible but $A-B\,R^\dagger\,S^{\scalebox{.6}{\mbox{T}}}=A-B\,R^{-1}\,S^{\scalebox{.6}{\mbox{T}}}$ is singular, from (\ref{det11}) in Lemma \ref{lem1bis} the matrix $\left[ \begin{smallmatrix} A & B \\[1mm] S^{\scalebox{.6}{\mbox{T}}} & R \esmat$ is singular, and therefore so is $N$.
{\em Vice-versa}, if both $R$ and $A-B\,R^{-1}\,S^{\scalebox{.6}{\mbox{T}}}$ are non-singular, $\left[ \begin{smallmatrix} A & B \\[1mm] S^{\scalebox{.6}{\mbox{T}}} & R \esmat$
is non-singular in view of (\ref{det11}) in Lemma \ref{lem1bis}. Thus, $N$ is invertible.\hspace*{\fill}~\QED\par\endtrivlist\unskip}
\ \\[-2mm]
The following theorem (see \cite{Ferrante-N-12-sub2} for a proof) presents a useful decomposition of the extended symplectic pencil that parallels the classic one -- see e.g. \cite{Ferrante-W-07} -- which is valid in the case in which the pencil $N-z\,M$ is regular.
\begin{theorem}
\label{the0}
Let $X$ be a symmetric solution of CGDARE($\Sigma$). Let also $K_X$ be the associated gain and $A_X$ be the associated closed-loop matrix.
Two invertible matrices $U_X$ and $V_X$ of suitable sizes exist such that
\begin{eqnarray}
\label{decomp}
U_X\,(N-z\,M)\,V_X=\left[ \begin{array}{ccc}
A_X-z\,I_n & O & B \\
O & I_n-z\,A_X^{\scalebox{.6}{\mbox{T}}} & O \\
O & -z\,B^{\scalebox{.6}{\mbox{T}}} & R_X \end{array} \right].
\end{eqnarray}
\end{theorem}
From Theorem \ref{the0} we find that if $X$ is a solution of CGDARE($\Sigma$), in view of the triangular structure obtained above we have
\begin{eqnarray}
\label{det}
\det(N-z\,M)=(-1)^n \cdot \det(A_X-z\,I_n)\cdot \det(I_n-z\,A_X^{\scalebox{.6}{\mbox{T}}}) \cdot \det R_X.
\end{eqnarray}
When $R_X$ is non-singular, the dynamics represented by this matrix pencil are decomposed into a part governed by the generalised eigenstructure of $A_X-z\,I_n$, a part governed by the finite generalised eigenstructure of $I_n-z\,A_X^{\scalebox{.6}{\mbox{T}}}$, and a part which corresponds to the dynamics of the eigenvalues at infinity.
When $X$ is a solution of DARE($\Sigma$), the generalised eigenvalues\footnote{Recall that a generalised eigenvalue of a matrix pencil $N-z\,M$ is a value of $z \in {\mathbb{C}}$ for which the rank of the matrix pencil $N-z\,M$ is lower than its normal rank.} of {$N-z\,M$} are given by the eigenvalues of $A_X$, the reciprocals of the non-zero eigenvalues of $A_X$, and {a generalised eigenvalue} at infinity whose algebraic multiplicity is equal to $m$ plus the algebraic multiplicity of the eigenvalue of $A_X$ at the origin.
The matrix pencil $I_n-z\,A_X^{\scalebox{.6}{\mbox{T}}}$ has no generalised eigenvalues at $z=0$. This means that $z=0$ is a generalised eigenvalue of the matrix pencil $U_X\,(N-z\,M)\,V_X$ if and only if it is a generalised eigenvalue of the matrix pencil $A_X-z\,I_n$, because certainly
$z=0$ cannot cause the rank of $I_n-z\,A_X^{\scalebox{.6}{\mbox{T}}}$
to be smaller than its normal rank and because the normal rank of $N-z\,M$ is $2\,n+m$. This means that the Kronecker eigenstructure of the eigenvalue at the origin of $U_X\,(N-z\,M)\,V_X$ coincides with the Jordan eigenstructure of the eigenvalue at the origin of the closed-loop matrix $A_X$. Since the generalised eigenvalues of $N-z\,M$ do not depend on the particular solution $X=X^{\scalebox{.6}{\mbox{T}}}$ of CGDARE($\Sigma$), the same holds for the generalised eigenvalues and the Kronecker structure of $U_X\,(N-z\,M)\,V_X$ for any non-singular $U_X$ and $V_X$. Therefore, the nilpotent structure of the closed-loop matrix $A_X$ -- which is the Jordan eigenstructure of the generalised eigenvalue at the origin of $A_X$ -- if any, is independent of the particular solution $X=X^{\scalebox{.6}{\mbox{T}}}$ of CGDARE($\Sigma$). Moreover, since
\begin{eqnarray}
\label{newN}
U_X\,N\,V_X=\left[ \begin{array}{ccc}
A_X & O & B \\
O & I_n & O \\
O & O & R_X \end{array} \right],
\end{eqnarray}
we see that, when $R_X$ is invertible, $N$ is singular if and only if $A_X$ is singular. { Since from Lemma \ref{lem31} matrix
$N$ is singular if and only if at least one of the two matrices $R$ and $A-B\,R^\dagger\,S^{\scalebox{.6}{\mbox{T}}}$ is singular, we also have the following result. }
\begin{lemma}{{\bf (see e.g. \cite{Ferrante-04})}}
\label{prep1}
Let $R_X$ be invertible. Then, $A_X$ is singular if and only if at least one of the two matrices $R$ and $A-B\,R^\dagger\,S^{\scalebox{.6}{\mbox{T}}}$ is singular.
\end{lemma}
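Lemma \ref{prep1} can be illustrated numerically. The sketch below is a toy example of ours, not taken from the paper: it takes $S=0$ and forces a zero column in $A$, so that $A-B\,R^\dagger\,S^{\scalebox{.6}{\mbox{T}}}=A$ is singular, and then checks that the closed-loop matrix associated with the stabilising solution returned by SciPy's \texttt{solve\_discrete\_are} inherits the eigenvalue at the origin.

```python
# Sketch (illustrative): with S = 0 and A singular, Lemma prep1
# predicts that the closed-loop matrix A_X is singular as well.
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(2)
n, m = 4, 2
A = 0.5 * rng.standard_normal((n, n))
A[:, 0] = 0.0                         # force A e_1 = 0, so A is singular
B = rng.standard_normal((n, m))
Q, R = np.eye(n), np.eye(m)           # S = 0, R invertible so R_X > 0

X = solve_discrete_are(A, B, Q, R)
RX = R + B.T @ X @ B
AX = A - B @ np.linalg.solve(RX, B.T @ X @ A)   # closed-loop matrix

# A e_1 = 0 implies A_X e_1 = 0: the zero eigenvalue survives the feedback
assert np.allclose(AX[:, 0], 0.0)
assert abs(np.linalg.det(AX)) < 1e-9
```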
However, when the matrix $R_X$ is singular, it is no longer true that
$A_X$ is singular if and only if $R$ or $A-B\,R^\dagger\,S^{\scalebox{.6}{\mbox{T}}}$ is singular. Indeed, (\ref{newN}) shows that the algebraic multiplicity of the eigenvalue at the origin of $N$ is equal to the sum of the algebraic multiplicities of the eigenvalue at the origin of $A_X$ and $R_X$. Therefore, the fact that $N$ is singular does not necessarily imply that $A_X$ is singular.
{Indeed, Lemma \ref{prep1} can be generalised to the case where $R_X$ is possibly singular as follows.}
\begin{proposition}
\label{prep2}
The closed-loop matrix $A_X$ is singular if and only if $\operatorname{rank} R < \operatorname{rank} R_X$ or $A-B\,R^\dagger\,S^{\scalebox{.6}{\mbox{T}}}$ is singular.
\end{proposition}
\noindent{\bf{\em Proof:}\ \ } Given a square matrix $Z$, let us denote by $\mu(Z)$ the algebraic multiplicity of its eigenvalue at the origin. Then, we know from (\ref{newN}) that $\mu(N)=\mu \left( \left[ \begin{smallmatrix} A & B \\[1mm] S^{\scalebox{.6}{\mbox{T}}} & R \esmat \right)=\mu(A_X)+\mu(R_X)$. Consider a basis in the input space that isolates the invertible part of $R$. In other words, in this basis $R$ is written as $R=\left[ \begin{smallmatrix} R_1 & O \\[1mm] O & O \esmat$ where $R_1$ is invertible, while $B=\left[ \begin{array}{cc} B_1 & B_2 \emat$ and $S=\left[ \begin{array}{cc} S_1 & O \emat$ are partitioned accordingly. It follows that $\mu\left( \left[ \begin{smallmatrix} A & B \\[1mm] S^{\scalebox{.6}{\mbox{T}}} & R \esmat \right)=\mu(R)+\mu\left( \left[ \begin{smallmatrix} A & B_1 \\[1mm] S_1^{\scalebox{.6}{\mbox{T}}} & R_1 \esmat \right)$. As such,
\begin{eqnarray}
\label{mu}
\mu(A_X)=\mu \left( \left[ \begin{array}{cc} A & B \\ S^{\scalebox{.6}{\mbox{T}}} & R \emat \right)-\mu(R_X)=\mu \left( \left[ \begin{array}{cc} A & B_1 \\ S_1^{\scalebox{.6}{\mbox{T}}} & R_1 \emat \right)+\mu(R)-\mu(R_X).
\end{eqnarray}
First, we show that if $\operatorname{rank} R < \operatorname{rank} R_X$, then $A_X$ is singular. Since $R$ and $R_X$ are symmetric, $\operatorname{rank} R < \operatorname{rank} R_X$ implies $\mu(R)>\mu(R_X)$, so that (\ref{mu}) gives $\mu(A_X)>0$. \\
Let now $A-B\,R^\dagger\,S^{\scalebox{.6}{\mbox{T}}}$ be singular, and let $\operatorname{rank} R = \operatorname{rank} R_X$. From (\ref{mu}) we find that $\mu(A_X)=\mu \left( \left[ \begin{smallmatrix} A & B_1 \\[1mm] S_1^{\scalebox{.6}{\mbox{T}}} & R_1 \esmat\right)$. However, $A-B\,R^\dagger\,S^{\scalebox{.6}{\mbox{T}}}=A-B_1\,R_1^{-1}\,S_1^{\scalebox{.6}{\mbox{T}}}$. If $A-B\,R^\dagger\,S^{\scalebox{.6}{\mbox{T}}}$ is singular, there exists a non-zero vector $k$ such that $\left[ \begin{array}{cc} k^{\scalebox{.6}{\mbox{T}}} & -k^{\scalebox{.6}{\mbox{T}}}\,B_1\,R_1^{-1}\emat \left[ \begin{smallmatrix} A & B_1 \\[1mm] S_1^{\scalebox{.6}{\mbox{T}}} & R_1 \esmat=0$. Hence, $\mu \left(\left[ \begin{smallmatrix} A & B_1 \\[1mm] S_1^{\scalebox{.6}{\mbox{T}}} & R_1 \esmat\right)>0$, and therefore also $\mu(A_X)>0$. \\
To prove that the converse is true, it suffices to show that if $A-B\,R^\dagger\,S^{\scalebox{.6}{\mbox{T}}}$ is non-singular
and $\operatorname{rank} R = \operatorname{rank} R_X$, then $A_X$ is non-singular. To this end, we observe that $\operatorname{rank} R=\operatorname{rank} R_X$ is equivalent to $\mu(R)=\mu(R_X)$ because $R$ and $R_X$ are symmetric. Thus, in view of (\ref{mu}), it suffices to show that if $A-B\,R^\dagger\,S^{\scalebox{.6}{\mbox{T}}}$ is non-singular, then $\mu \left(\left[ \begin{smallmatrix} A & B_1 \\[1mm] S_1^{\scalebox{.6}{\mbox{T}}} & R_1 \esmat\right)=0$. Indeed, {assume that $A-B\,R^\dagger\,S^{\scalebox{.6}{\mbox{T}}}=A-B_1\,R_1^{-1}\,S_1^{\scalebox{.6}{\mbox{T}}}$ is non-singular, and} take a vector $\left[ \begin{smallmatrix} v_1^{\scalebox{.6}{\mbox{T}}} & v_2^{\scalebox{.6}{\mbox{T}}} \esmat$ such that $\left[ \begin{smallmatrix} v_1^{\scalebox{.6}{\mbox{T}}} & v_2^{\scalebox{.6}{\mbox{T}}} \esmat\left[ \begin{smallmatrix} A & B_1 \\[1mm] S_1^{\scalebox{.6}{\mbox{T}}} & R_1 \esmat =0$. Then, since $R_1$ is invertible we get
$v_2^{\scalebox{.6}{\mbox{T}}}=-v_1^{\scalebox{.6}{\mbox{T}}}\,B_1\,R_1^{-1}$ and $v_1^{\scalebox{.6}{\mbox{T}}}\,(A-B_1\,R_1^{-1}\,S_1^{\scalebox{.6}{\mbox{T}}})=0$. Hence, $v_1=0$ since $A-B_1\,R_1^{-1}\,S_1^{\scalebox{.6}{\mbox{T}}}$ is non-singular, and therefore also $v_2=0$.
\hspace*{\fill}~\QED\par\endtrivlist\unskip
\begin{remark}
{\em
We recall that $\mu(R_X)$ is invariant for any {symmetric} solution $X$ of CGDARE($\Sigma$), \cite{Stoorvogel-S-98}.
Hence, as a direct consequence of (\ref{mu}), we have that $\mu(A_X)$ is the same for any {symmetric} solution $X$ of CGDARE($\Sigma$). This means, in particular, that the closed-loop matrix corresponding to a given {symmetric} solution of CGDARE($\Sigma$) is singular if and only if the closed-loop matrix corresponding to any other {symmetric} solution of CGDARE($\Sigma$) is singular.
In the next section we show that a stronger result holds: when present, the zero eigenvalue has the same Jordan structure for any pair $A_X$ and $A_Y$ of closed-loop matrices corresponding to any pair $X,Y$ of {symmetric} solutions of CGDARE($\Sigma$). Moreover, the generalised eigenspaces corresponding to the zero eigenvalue of $A_X$ and $A_Y$ coincide. The restriction of $A_X$ and $A_Y$ to this generalised eigenspace also coincide. Finally, $X$ and $Y$ coincide along this generalised eigenspace.
}
\end{remark}
\section{The subspace where all solutions coincide}
Given a solution $X=X^{\scalebox{.6}{\mbox{T}}}$ of CGDARE($\Sigma$), we denote by ${\cal U}$ the generalised eigenspace corresponding to the eigenvalue at the origin of $A_X$, i.e., ${\cal U} \stackrel{\text{\tiny def}}{=} \ker (A_X)^n$. {Notice that, in principle, ${\cal U}$ could depend on the particular solution $X$.
{In this section, and in particular in Theorem \ref{main},} we want to prove not only that ${\cal U}$ does {\em not} depend on the particular solution $X$, but also that
all solutions of CGDARE($\Sigma$) are coincident along ${\cal U}$. In other words, given two solutions
$X=X^{\scalebox{.6}{\mbox{T}}}$ and $Y=Y^{\scalebox{.6}{\mbox{T}}}$ of CGDARE($\Sigma$), we show that $\ker (A_X)^n=\ker (A_Y)^n$ and, given a basis matrix\footnote{Given a subspace ${\cal S}$, a basis matrix $S$ of ${\cal S}$ is such that $\operatorname{im} S={\cal S}$ and $\ker S=\{0\}$.} $U$ of the subspace ${\cal U}=\ker (A_X)^n=\ker (A_Y)^n$,} the change of coordinate matrix $T=[\,U\;\;\;U_c\,]$ yields
\begin{eqnarray}
T^{-1}\,X\,T=\left[ \begin{array}{cc} X_{11} & X_{12} \\ X_{12}^{\scalebox{.6}{\mbox{T}}} & X_{22} \end{array} \right] \quad \textrm{and} \quad
T^{-1}\,Y\,T=\left[ \begin{array}{cc} X_{11} & X_{12} \\ X_{12}^{\scalebox{.6}{\mbox{T}}} & Y_{22} \end{array} \right]. \label{alph}
\end{eqnarray}
{We begin by presenting a first simple result.
\begin{lemma}
Two symmetric solutions $X$ and $Y$ of CGDARE($\Sigma$) are coincident along the subspace ${\cal U}$ if and only if
${\cal U} \subseteq \ker (X-Y)$.
\end{lemma}
\noindent{\bf{\em Proof:}\ \ }
Suppose $X$ and $Y$ are coincident along the subspace ${\cal U}$, and are already written in the basis defined by $T$ in (\ref{alph}). In this basis ${\cal U}$ can be written as ${\cal U}=\operatorname{im} \left[ \begin{smallmatrix} I \\[1mm] O \end{smallmatrix} \right]$. If (\ref{alph}) holds, then we can write $X-Y=\left[ \begin{smallmatrix} O & O \\[1mm] O & \star \esmat$. Then, $(X-Y)\,{\cal U}=\left[ \begin{smallmatrix} O & O \\[1mm] O & \star \esmat\left[ \begin{smallmatrix} I \\[1mm] O \end{smallmatrix} \right]=\{0\}$. {\em Vice-versa}, if $(X-Y)\,{\cal U}=\{0\}$ and we write $X-Y=\left[ \begin{smallmatrix} \Delta_{11} & \Delta_{12} \\[1mm] \Delta_{12}^{\scalebox{.6}{\mbox{T}}} & \Delta_{22} \esmat$, we find that $\left[ \begin{smallmatrix} \Delta_{11} & \Delta_{12} \\[1mm] \Delta_{12}^{\scalebox{.6}{\mbox{T}}} & \Delta_{22} \esmat\left[ \begin{smallmatrix} I \\[1mm] O \end{smallmatrix} \right]=\{0\}$ implies $\Delta_{11}=0$ and $\Delta_{12}=0$.
\hspace*{\fill}~\QED\par\endtrivlist\unskip
}
\ \\
{We now present two results that will be useful to prove Theorem \ref{main}.} Let $X=X^{\scalebox{.6}{\mbox{T}}}\in {\mathbb{R}}^{n \times n}$. Similarly to \cite{Ferrante-W-07}, we define the function
\begin{eqnarray}
\label{gdaredef}
{\cal D}(X)\stackrel{\text{\tiny def}}{=} X-A^{\scalebox{.6}{\mbox{T}}}\,X\,A+(A^{\scalebox{.6}{\mbox{T}}}\,X\,B+S)(R+B^{\scalebox{.6}{\mbox{T}}}\,X\,B)^\dagger(B^{\scalebox{.6}{\mbox{T}}}\,X\,A+S^{\scalebox{.6}{\mbox{T}}})-Q.
\end{eqnarray}
If in particular $X=X^{\scalebox{.6}{\mbox{T}}}$ is a solution of GDARE($\Sigma$), then ${\cal D}(X)=0$.
{Recall that we have defined $R_X = R+B^{\scalebox{.6}{\mbox{T}}}\,X\,B$, $S_X = A^{\scalebox{.6}{\mbox{T}}}\,X\,B+S$ and $R_Y = R+B^{\scalebox{.6}{\mbox{T}}}\,Y\,B$, $S_Y = A^{\scalebox{.6}{\mbox{T}}}\,Y\,B+S$.}
\begin{lemma}
\label{41}
Let $X=X^{\scalebox{.6}{\mbox{T}}}\in {\mathbb{R}}^{n \times n}$ and $Y=Y^{\scalebox{.6}{\mbox{T}}}\in {\mathbb{R}}^{n \times n}$ be such that (\ref{kercond}) holds, i.e.,
\begin{eqnarray}
\ker R_X \subseteq \ker S_X \label{kercondX} \\
\ker R_Y \subseteq \ker S_Y.\label{kercondY}
\end{eqnarray}
Let $A_X = A-B\,K_X$ with $K_X = R_X^\dagger\,S_X^{\scalebox{.6}{\mbox{T}}}$ and $A_Y = A-B\,K_Y$ with $K_Y = R_Y^\dagger\,S_Y^{\scalebox{.6}{\mbox{T}}}$.
Moreover, let us define the difference $\Delta\stackrel{\text{\tiny def}}{=} X-Y$. Then,
\begin{eqnarray}
\label{ionescu}
{\cal D}(X)-{\cal D}(Y)=\Delta-A_Y^{\scalebox{.6}{\mbox{T}}}\,\Delta\,A_Y+A_Y^{\scalebox{.6}{\mbox{T}}}\,\Delta\,B\,R_X^\dagger\, B^{\scalebox{.6}{\mbox{T}}}\,\Delta\, A_Y.
\end{eqnarray}
\end{lemma}
The proof can be found in \cite[p.382]{Abou-Kandil-FIJ-03}.
{The following lemma is the counterpart of Lemma 2.2 in \cite{Ferrante-W-07} where the standard DARE was considered.}
\begin{lemma}
\label{lemWF}
Let $X=X^{\scalebox{.6}{\mbox{T}}}\in {\mathbb{R}}^{n \times n}$ and $Y=Y^{\scalebox{.6}{\mbox{T}}}\in {\mathbb{R}}^{n \times n}$ be such that (\ref{kercondX}-\ref{kercondY}) hold. {Let $\Delta=X-Y$.} Then,
\begin{eqnarray}
{\cal D}(X)-{\cal D}(Y)=\Delta-A_Y^{\scalebox{.6}{\mbox{T}}}\,\Delta \, A_X.
\end{eqnarray}
\end{lemma}
\noindent{\bf{\em Proof:}\ \ }
First, notice that
\begin{eqnarray*}
A_Y^{\scalebox{.6}{\mbox{T}}} \Delta\, B = [A^{\scalebox{.6}{\mbox{T}}}-(A^{\scalebox{.6}{\mbox{T}}} Y \,B +S)\,R_Y^\dagger B^{\scalebox{.6}{\mbox{T}}}] \Delta \,B.
\end{eqnarray*}
We now show that $\ker R_X \subseteq \ker (A_Y^{\scalebox{.6}{\mbox{T}}} \Delta\, B)$. To this end, let $P_X$ be a basis of the null-space of $R_X$. Hence, $(R+B^{\scalebox{.6}{\mbox{T}}} X B)P_X=0$. Then,
\begin{eqnarray*}
A_Y^{\scalebox{.6}{\mbox{T}}}\,\Delta\,B\,P_X &= & \left(A^{\scalebox{.6}{\mbox{T}}}-(A^{\scalebox{.6}{\mbox{T}}}\,Y\,B+S)\,R_Y^\dagger\,B^{\scalebox{.6}{\mbox{T}}}\right)\,(X-Y)\,B\,P_X \\
&= & A^{\scalebox{.6}{\mbox{T}}}\,X\,B\,P_X-(A^{\scalebox{.6}{\mbox{T}}}\,Y\,B+S)\,R_Y^\dagger\,B^{\scalebox{.6}{\mbox{T}}}\,X\,B\,P_X -A^{\scalebox{.6}{\mbox{T}}}\,Y\,B\,P_X \\
&& +(A^{\scalebox{.6}{\mbox{T}}} Y \,B +S)\,R_Y^\dagger\,B^{\scalebox{.6}{\mbox{T}}}\,Y\,B\,P_X \\
&& +(A^{\scalebox{.6}{\mbox{T}}} Y \,B +S)\,R_Y^\dagger\,R\,P_X- (A^{\scalebox{.6}{\mbox{T}}} Y \,B +S)\,R_Y^\dagger\,R\,P_X \\
& = &A^{\scalebox{.6}{\mbox{T}}}\,X\,B\,P_X+(A^{\scalebox{.6}{\mbox{T}}} Y \,B +S)\,R_Y^\dagger\,R_Y\,P_X-A^{\scalebox{.6}{\mbox{T}}}\,Y\,B\,P_X \\
& = & A^{\scalebox{.6}{\mbox{T}}}\,X\,B\,P_X+S_Y\,P_X-A^{\scalebox{.6}{\mbox{T}}}\,Y\,B\,P_X=(A^{\scalebox{.6}{\mbox{T}}}\,X\,B+S)\,P_X,
\end{eqnarray*}
which is zero since $\ker R_X \subseteq \ker S_X$ in view of { (\ref{kercondX}) in Lemma \ref{41}}. Now we want to prove that
\begin{eqnarray}
\label{eqQ}
A_Y^{\scalebox{.6}{\mbox{T}}} \Delta \,(A_Y-A_X) = A_Y^{\scalebox{.6}{\mbox{T}}} \, \Delta\, B\,R_X^\dagger \, B^{\scalebox{.6}{\mbox{T}}}\, \Delta\,A_Y.
\end{eqnarray}
Consider the term
\begin{eqnarray}
\label{ok}
A_Y^{\scalebox{.6}{\mbox{T}}} \Delta (A_Y-A_X) = A_Y^{\scalebox{.6}{\mbox{T}}} \Delta\, B \,(R_X^\dagger S_X^{\scalebox{.6}{\mbox{T}}}-R_Y^\dagger S_Y^{\scalebox{.6}{\mbox{T}}}).
\end{eqnarray}
{
Since $R_X^\dagger R_X$ is an orthogonal projection that projects onto $\operatorname{im} R_X^{\scalebox{.6}{\mbox{T}}} =\operatorname{im} R_X$, we have $\ker R_X=\operatorname{im} (I_m-R_X^\dagger R_X)$. Since, as we have shown, $\ker R_X \subseteq \ker (A_Y^{\scalebox{.6}{\mbox{T}}} \Delta\, B)$, from $\ker R_X=\operatorname{im} (I_m-R_X^\dagger R_X)$ we also have}
$A_Y^{\scalebox{.6}{\mbox{T}}} \Delta\, B \,(I_m-R_X^\dagger R_X)=0$, which means that $A_Y^{\scalebox{.6}{\mbox{T}}} \Delta\, B\,R_X^\dagger\, R_X=A_Y^{\scalebox{.6}{\mbox{T}}} \,\Delta\,B$. We use this fact on (\ref{ok}) to get
\begin{eqnarray}
A_Y^{\scalebox{.6}{\mbox{T}}} \Delta (A_Y\!-\!A_X) & \!=\! & A_Y^{\scalebox{.6}{\mbox{T}}} \Delta\, B\,R_X^\dagger [ (B^{\scalebox{.6}{\mbox{T}}} X A+S^{\scalebox{.6}{\mbox{T}}})-R_X\,R_Y^\dagger (B^{\scalebox{.6}{\mbox{T}}} Y A+S^{\scalebox{.6}{\mbox{T}}})] \nonumber \\
& \!=\! &A_Y^{\scalebox{.6}{\mbox{T}}} \Delta\, B\,R_X^\dagger [ (B^{\scalebox{.6}{\mbox{T}}} X A\!+\!S^{\scalebox{.6}{\mbox{T}}}\!-\!B^{\scalebox{.6}{\mbox{T}}}\,Y\,A\!+\!B^{\scalebox{.6}{\mbox{T}}} Y \,A)\!-\!R_X\,R_Y^\dagger (B^{\scalebox{.6}{\mbox{T}}} Y A\!+\!S^{\scalebox{.6}{\mbox{T}}})] \nonumber \\
& \!=\! & A_Y^{\scalebox{.6}{\mbox{T}}} \Delta\, B\,R_X^\dagger [ B^{\scalebox{.6}{\mbox{T}}} \Delta\,A+(I_m-R_X\,R_Y^\dagger) (B^{\scalebox{.6}{\mbox{T}}} Y A+S^{\scalebox{.6}{\mbox{T}}})]. \label{final1}
\end{eqnarray}
Since $R_X=R+B^{\scalebox{.6}{\mbox{T}}} X \,B-B^{\scalebox{.6}{\mbox{T}}} Y \,B+B^{\scalebox{.6}{\mbox{T}}} Y \,B=R_Y+B^{\scalebox{.6}{\mbox{T}}}\,\Delta\,B$, eq. (\ref{final1}) becomes
\begin{eqnarray*}
A_Y^{\scalebox{.6}{\mbox{T}}} \Delta (A_Y-A_X) &=& A_Y^{\scalebox{.6}{\mbox{T}}} \Delta\, B\,R_X^\dagger [B^{\scalebox{.6}{\mbox{T}}} \Delta\,A + (I_m-R_Y\,R_Y^\dagger-B^{\scalebox{.6}{\mbox{T}}} \Delta\,B\,R_Y^\dagger)(B^{\scalebox{.6}{\mbox{T}}} Y A+S^{\scalebox{.6}{\mbox{T}}})] \\
&=& A_Y^{\scalebox{.6}{\mbox{T}}} \Delta\, B\,R_X^\dagger\, B^{\scalebox{.6}{\mbox{T}}}\, \Delta\,\left(A - B\,R_Y^\dagger\,(B^{\scalebox{.6}{\mbox{T}}} Y A+S^{\scalebox{.6}{\mbox{T}}})\right)= A_Y^{\scalebox{.6}{\mbox{T}}}\,\Delta\, B\,R_X^\dagger\, B^{\scalebox{.6}{\mbox{T}}}\, \Delta\,A_Y,
\end{eqnarray*}
since from Lemma \ref{lem1} we have $(I_m-R_Y\,R_Y^\dagger)(B^{\scalebox{.6}{\mbox{T}}} Y A+S^{\scalebox{.6}{\mbox{T}}})=0$ in view of $\ker R_Y \subseteq \ker (A^{\scalebox{.6}{\mbox{T}}} Y\,B+S)$, and $A_Y=A - B\,R_Y^\dagger\,S_Y^{\scalebox{.6}{\mbox{T}}}$ with $S_Y^{\scalebox{.6}{\mbox{T}}}=B^{\scalebox{.6}{\mbox{T}}} Y A+S^{\scalebox{.6}{\mbox{T}}}$. This establishes (\ref{eqQ}). Plugging (\ref{eqQ}) into (\ref{ionescu}) yields
\begin{eqnarray*}
{\cal D}(X)-{\cal D}(Y)=\Delta-A_Y^{\scalebox{.6}{\mbox{T}}}\,\Delta A_Y+A_Y^{\scalebox{.6}{\mbox{T}}} \Delta (A_Y-A_X)=\Delta-A_Y^{\scalebox{.6}{\mbox{T}}}\,\Delta A_X.
\end{eqnarray*}
\hspace*{\fill}~\QED\par\endtrivlist\unskip
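Both (\ref{ionescu}) and the identity of Lemma \ref{lemWF} are purely algebraic: they hold for any pair of symmetric matrices satisfying (\ref{kercondX})-(\ref{kercondY}), not only for solutions of CGDARE($\Sigma$). The following sketch checks them on random data (with $R>0$ and small random symmetric $X$, $Y$, the matrices $R_X$ and $R_Y$ are invertible, so the kernel conditions hold trivially); all numerical choices below are ours.

```python
# Sketch: verify D(X) - D(Y) = Δ - A_Y' Δ A_Y + A_Y' Δ B R_X^{-1} B' Δ A_Y
# (Lemma 41) and D(X) - D(Y) = Δ - A_Y' Δ A_X (Lemma lemWF)
# on random symmetric X, Y with invertible R_X, R_Y.
import numpy as np

rng = np.random.default_rng(3)
n, m = 5, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
S = rng.standard_normal((n, m))
W = rng.standard_normal((n, n)); Q = W + W.T    # any symmetric Q
R = np.eye(m)                                   # R > 0

def sym(scale=0.1):
    Z = scale * rng.standard_normal((n, n))
    return Z + Z.T

def closed_loop(X):                             # R_X and A_X = A - B K_X
    RX = R + B.T @ X @ B
    return RX, A - B @ np.linalg.solve(RX, B.T @ X @ A + S.T)

def D(X):                                       # the map in (gdaredef)
    RX, _ = closed_loop(X)
    SX = A.T @ X @ B + S
    return X - A.T @ X @ A + SX @ np.linalg.solve(RX, SX.T) - Q

X, Y = sym(), sym()
Delta = X - Y
RX, AX = closed_loop(X)
RY, AY = closed_loop(Y)

lhs = D(X) - D(Y)
rhs_41 = (Delta - AY.T @ Delta @ AY
          + AY.T @ Delta @ B @ np.linalg.solve(RX, B.T @ Delta @ AY))
rhs_WF = Delta - AY.T @ Delta @ AX
assert np.allclose(lhs, rhs_41)
assert np.allclose(lhs, rhs_WF)
```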
Now we are ready to prove the main result of this section. This result {extends the analysis of Proposition 2.1 in \cite{Ferrante-W-07} to solutions of CGDARE($\Sigma$).}
\begin{theorem}
\label{main}
Let ${\cal U} = \ker (A_X)^n$ denote the generalised eigenspace corresponding to the eigenvalue at the origin of $A_X$. Then
\begin{enumerate}
\item All solutions of CGDARE($\Sigma$) are coincident along ${\cal U}$, i.e., given two solutions $X$ and $Y$ of CGDARE($\Sigma$),
\[
(X-Y)\,{\cal U}=\{0\};
\]
\item ${\cal U}$ does not depend on the solution $X$ of CGDARE($\Sigma$), i.e., given two solutions $X$ and $Y$ of CGDARE($\Sigma$), there holds
\begin{eqnarray*}
\ker (A_X)^n=\ker (A_Y)^n.
\end{eqnarray*}
\end{enumerate}
\end{theorem}
\noindent{\bf{\em Proof:}\ \ } Let us prove {\em (1)}. Consider a non-singular $T \in {\mathbb{R}}^{n \times n}$. Define the new quintuple
\begin{eqnarray*}
\tilde{A} \stackrel{\text{\tiny def}}{=} T^{-1}\,A\,T, \qquad \tilde{B}\stackrel{\text{\tiny def}}{=} T^{-1}\,B, \quad \tilde{Q}\stackrel{\text{\tiny def}}{=} T^{\scalebox{.6}{\mbox{T}}}\,Q\,T, \quad \tilde{S}\stackrel{\text{\tiny def}}{=} T^{\scalebox{.6}{\mbox{T}}} S, \quad \tilde{R}\stackrel{\text{\tiny def}}{=} R.
\end{eqnarray*}
It is straightforward to see that $X$ satisfies GDARE($\Sigma$) with respect to $(A,B,Q,R,S)$ if and only if $\tilde{X}\stackrel{\text{\tiny def}}{=} T^{\scalebox{.6}{\mbox{T}}} X\,T$ satisfies GDARE($\Sigma$) with respect to $(\tilde{A},\tilde{B},\tilde{Q},\tilde{R},\tilde{S})$, which for the sake of simplicity is denoted by $\tilde{{\cal D}}$, so that $\tilde{{\cal D}}(\tilde{X})=0$. The closed-loop matrix in the new basis is related to the closed-loop matrix in the original basis by
{
$$
\tilde{A}_{\tilde{X}} = \tilde{A}-\tilde{B}\,(\tilde{R}+\tilde{B}^{\scalebox{.6}{\mbox{T}}} \tilde{X} \,\tilde{B})^\dagger (\tilde{B}^{\scalebox{.6}{\mbox{T}}} \tilde{X} \,\tilde{A}+\tilde{S}^{\scalebox{.6}{\mbox{T}}}) =T^{-1}\,A_X\,T
$$
}
Moreover, if $\tilde{{\cal U}}=\ker (\tilde{A}_{\tilde{X}})^n$, then $\tilde{{\cal U}}=T^{-1}\, {\cal U}$ since $(\tilde{A}_{\tilde{X}})^n \tilde{{\cal U}}=0$ is equivalent to $T^{-1} (A_X)^n T\,\tilde{{\cal U}}=T^{-1} (A_X)^n\,{\cal U}=0$.
We choose an orthogonal change of coordinate matrix $T$ as $T=[\,U\;\;\;U_c\,]$, where $U$ is a basis matrix of ${\cal U}$. In this new basis
\begin{eqnarray*}
\tilde{A}_{\tilde{X}}=T^{-1}\,A_X\,T = \left[ \begin{array}{cc} U & U_c \end{array} \right]^{\scalebox{.6}{\mbox{T}}} A_X \left[ \begin{array}{cc} U & U_c \end{array} \right] \\
= \left[ \begin{array}{cc} U^{\scalebox{.6}{\mbox{T}}} A_X\,U & \star \\ U_c^{\scalebox{.6}{\mbox{T}}} A_X\,U & \star \end{array} \right]=
\left[ \begin{array}{cc} U^{\scalebox{.6}{\mbox{T}}} A_X\,U & \star \\ O & U_c^{\scalebox{.6}{\mbox{T}}} A_X\,U_c \end{array} \right],
\end{eqnarray*}
where the zero in the bottom left corner is due to the fact that ${\cal U}=\operatorname{im} U$ is $A_X$-invariant, so that the columns of $A_X\,U$ lie in ${\cal U}$ and are therefore orthogonal to the rows of $U_c^{\scalebox{.6}{\mbox{T}}}$. Moreover, the submatrix $N_0 \stackrel{\text{\tiny def}}{=} U^{\scalebox{.6}{\mbox{T}}} A_X\,U$ is nilpotent with the same nilpotency index\footnote{With a slight abuse of nomenclature, we use the term {\em nilpotency index} of a matrix $M$ to refer to the smallest integer $\nu$ for which $\ker (M)^\nu=\ker (M)^{\nu+1}$, which is defined also when $M$ is not nilpotent.} as $A_X$.
Notice also that $H_X \stackrel{\text{\tiny def}}{=} U_c^{\scalebox{.6}{\mbox{T}}} A_X\,U_c$ is non-singular.
Let $\tilde{X}$ be a solution of CGDARE($\tilde{\Sigma}$) in this new basis, and let it be partitioned as
\begin{eqnarray*}
\tilde{X}=\left[ \begin{array}{cc} \tilde{X}_{11} & \tilde{X}_{12} \\ \tilde{X}_{12}^{\scalebox{.6}{\mbox{T}}} & \tilde{X}_{22} \end{array} \right],
\end{eqnarray*}
where $\tilde{X}_{11}$ is $\nu \times \nu$, with $\nu=\textrm{dim} \,{\cal U}$. Consider another solution $\tilde{Y}$ of CGDARE($\tilde{\Sigma}$), partitioned as $\tilde{Y}=\left[ \begin{smallmatrix} \tilde{Y}_{11} & \tilde{Y}_{12} \\[1mm] \tilde{Y}_{12}^{\scalebox{.6}{\mbox{T}}} & \tilde{Y}_{22} \esmat$. Let $\Delta \stackrel{\text{\tiny def}}{=} \tilde{X}-\tilde{Y}$ be partitioned in the same way. Since $\tilde{X}$ and $\tilde{Y}$ are both solutions of CGDARE($\tilde{\Sigma}$), we get $\tilde{{\cal D}}(\tilde{X})=\tilde{{\cal D}}(\tilde{Y})=0$. Thus, in view of Lemma \ref{lemWF}, there holds
\begin{eqnarray}
\label{eqalpha}
\Delta-\tilde{A}_{\tilde{Y}}^{\scalebox{.6}{\mbox{T}}} \,\Delta \,\tilde{A}_{\tilde{X}}=0.
\end{eqnarray}
If $\Delta$ is partitioned as $\Delta=[\,\Delta_1\;\;\;\Delta_2\,]$ where $\Delta_1$ has $\nu$ columns, eq. (\ref{eqalpha}) becomes
\[
\left[ \begin{array}{cc} \Delta_1 & \Delta_2 \end{array} \right]-\tilde{A}_{\tilde{Y}}^{\scalebox{.6}{\mbox{T}}} \left[ \begin{array}{cc} \Delta_1 & \Delta_2 \end{array} \right]\left[ \begin{array}{cc} N_0 & \star \\ O & H_X \end{array} \right]=
\left[ \begin{array}{cc} \Delta_1 -\tilde{A}_{\tilde{Y}}^{\scalebox{.6}{\mbox{T}}} \Delta_1\,N_0 & \star \end{array} \right]=0,
\]
from which we get $\Delta_1=\tilde{A}_{\tilde{Y}}^{\scalebox{.6}{\mbox{T}}}\, \Delta_1\,N_0$. Thus,
\begin{eqnarray*}
\Delta_1 = \tilde{A}_{\tilde{Y}}^{\scalebox{.6}{\mbox{T}}} \Delta_1\,N_0=(\tilde{A}_{\tilde{Y}}^{\scalebox{.6}{\mbox{T}}})^2 \Delta_1\,N_0^2= \ldots =
(\tilde{A}_{\tilde{Y}}^{\scalebox{.6}{\mbox{T}}})^n \Delta_1\,(N_0)^n,
\end{eqnarray*}
which is equal to zero since $(N_0)^n$ is the zero matrix. Hence, $\Delta_1=0$. Thus, we have also
\[
\Delta\,{{\cal U}}=\left[ \begin{array}{cc} O \; &\; \star \emat \left( \operatorname{im}\left[ \begin{array}{c} I \\ O \emat\right)=\{0\}.
\]
Since $\Delta$ is symmetric, we get
\begin{eqnarray*}
\tilde{X}-\tilde{Y}=\left[ \begin{array}{cc} \tilde{X}_{11} & \tilde{X}_{12} \\ \tilde{X}_{12}^{\scalebox{.6}{\mbox{T}}} & \tilde{X}_{22} \end{array} \right]-\left[ \begin{array}{cc} \tilde{Y}_{11} & \tilde{Y}_{12} \\ \tilde{Y}_{12}^{\scalebox{.6}{\mbox{T}}} & \tilde{Y}_{22} \end{array} \right]=\left[ \begin{array}{cc} O & O \\ O & \tilde{X}_{22}-\tilde{Y}_{22} \end{array} \right],
\end{eqnarray*}
which leads to $\tilde{X}_{11}=\tilde{Y}_{11}$ and $\tilde{X}_{12}=\tilde{Y}_{12}$. \\
Let us prove {\em (2)}.
Since $\ker R_Y$ coincides with $\ker R_X$ by virtue of \cite[Theorem 4.3]{Ferrante-N-12-sub}, we find
\begin{eqnarray}
A_X-A_Y & =& B\,(R_Y^\dagger S_Y^{\scalebox{.6}{\mbox{T}}}-R_X^\dagger \,S_X^{\scalebox{.6}{\mbox{T}}}) \nonumber \\
& = & B\,R_Y^\dagger (S_Y^{\scalebox{.6}{\mbox{T}}}-R_Y\,R_X^\dagger \,S_X^{\scalebox{.6}{\mbox{T}}}). \label{sec}
\end{eqnarray}
Plugging
\begin{eqnarray}
\label{eqsy}
S_Y^{\scalebox{.6}{\mbox{T}}}=B^{\scalebox{.6}{\mbox{T}}} \,Y\,A+S^{\scalebox{.6}{\mbox{T}}}=B^{\scalebox{.6}{\mbox{T}}}\,X\,A+S^{\scalebox{.6}{\mbox{T}}}-B^{\scalebox{.6}{\mbox{T}}} \,\Delta\,A =S_X^{\scalebox{.6}{\mbox{T}}}-B^{\scalebox{.6}{\mbox{T}}}\,\Delta\,A
\end{eqnarray}
and
\begin{eqnarray}
\label{eqry}
R_Y =R+ B^{\scalebox{.6}{\mbox{T}}} \,X\,B-B^{\scalebox{.6}{\mbox{T}}}\,\Delta\,B=R_X-B^{\scalebox{.6}{\mbox{T}}} \,\Delta\,B
\end{eqnarray}
into (\ref{sec}) yields
\begin{eqnarray*}
A_X-A_Y & = & B\,R_Y^\dagger (-B^{\scalebox{.6}{\mbox{T}}}\,\Delta\,A+B^{\scalebox{.6}{\mbox{T}}}\,\Delta\,B\,R_X^\dagger \,S_X^{\scalebox{.6}{\mbox{T}}}) \\
& = & -B\,R_Y^\dagger B^{\scalebox{.6}{\mbox{T}}}\,\Delta\,A_X,
\end{eqnarray*}
where we used $R_X\,R_X^\dagger\,S_X^{\scalebox{.6}{\mbox{T}}}=S_X^{\scalebox{.6}{\mbox{T}}}$, which follows from $\ker R_X \subseteq \ker S_X$. This means that the identity
\[
A_Y=A_X+B\,R_Y^\dagger B^{\scalebox{.6}{\mbox{T}}}\,\Delta\,A_X
\]
holds. By partitioning $\Delta=\left[ \begin{smallmatrix} O & \star \\[1mm]
O & \star \esmat$, we find that also $B\,R_Y^\dagger B^{\scalebox{.6}{\mbox{T}}}\,\Delta=\left[ \begin{smallmatrix} O & \star \\[1mm]
O & \star \esmat$, so that
\begin{eqnarray*}
A_Y &=& A_X+B\,R_Y^\dagger B^{\scalebox{.6}{\mbox{T}}}\,\Delta\,A_X \\
& = & \left[ \begin{array}{cc} N_0 & \star \\ O & H_X \emat +\left[ \begin{array}{cc} O \;&\; \star \\ O & \star \emat\left[ \begin{array}{cc} N_0 & \star \\ O & H_X \emat =\left[ \begin{array}{cc} N_0 & \star \\ O & H_Y \emat.
\end{eqnarray*}
Thus, $\ker (A_Y)^n \supseteq \ker (A_X)^n$. If we interchange the role of $X$ and $Y$, we obtain the opposite inclusion
$\ker (A_Y)^n \subseteq \ker (A_X)^n$. Notice, in passing, that this also implies that $H_Y$ is non-singular.
\hspace*{\fill}~\QED\par\endtrivlist\unskip
\section{The Generalised Riccati Difference Equation}
Consider the GRDE($\Sigma$) along with the terminal condition $X_T=P=P^{\scalebox{.6}{\mbox{T}}}\ge 0$. Let us define
\begin{eqnarray*}
{\cal R}(X) \stackrel{\text{\tiny def}}{=} A^{\scalebox{.6}{\mbox{T}}}\,X\,A-(A^{\scalebox{.6}{\mbox{T}}}\,X\,B+S)(R+B^{\scalebox{.6}{\mbox{T}}} X\,B)^\dagger(B^{\scalebox{.6}{\mbox{T}}}\,X\,A+S^{\scalebox{.6}{\mbox{T}}})+Q.
\end{eqnarray*}
With this definition, GRDE($\Sigma$) can be written as $X_{t}={\cal R}(X_{t+1})$. Moreover, GDARE($\Sigma$) can be written as
\begin{eqnarray*}
{\cal D}(X)=X-{\cal R}(X)=0.
\end{eqnarray*}
We have the following important result.
\begin{theorem}
\label{th51}
Let $X_\circ=X_\circ^{\scalebox{.6}{\mbox{T}}}$ be a solution of CGDARE($\Sigma$). Let $\nu$ be the index of nilpotency of $A_{X_\circ}$. Moreover, let $X_t$ be a solution of (\ref{grde}-\ref{term}) and define $\Delta_t \stackrel{\text{\tiny def}}{=} X_t-X_\circ$. Then, for $\tau \ge \nu$, we have $\Delta_{T-\tau}\,{\cal U}=\{0\}$.
\end{theorem}
\noindent{\bf{\em Proof:}\ \ }
Since $X_\circ=X_\circ^{\scalebox{.6}{\mbox{T}}}$ is a solution of CGDARE($\Sigma$), we have ${\cal D}(X_\circ)=0$. This is equivalent to saying that $X_\circ={\cal R}(X_\circ)$. From the definition of $\Delta_t$ we get in particular $\Delta_T=X_T-X_\circ$. With these definitions in mind, we find
\begin{eqnarray}
\Delta_t &=& {\cal R}(X_{t+1})-{\cal R}(X_\circ)=X_{t+1}-{\cal D}(X_{t+1})-X_\circ \nonumber \\
&= & \Delta_{t+1}-{\cal D}(X_{t+1})=\Delta_{t+1}-{\cal D}(X_{t+1})+{\cal D}(X_\circ) \nonumber \\
&=& \Delta_{t+1}-[{\cal D}(X_{t+1})-{\cal D}(X_\circ)]. \label{pippo}
\end{eqnarray}
However, we know from (\ref{ionescu}) that
\begin{eqnarray}
\label{ionescu1}
&&{\cal D}(X_{t+1})-{\cal D}(X_\circ) \nonumber \\
&& \hspace{.3cm} = \Delta_{t+1}-A_{X_\circ}^{\scalebox{.6}{\mbox{T}}}\,[\Delta_{t+1}-\Delta_{t+1}\,B\, (R+B^{\scalebox{.6}{\mbox{T}}} X_{t+1} B)^\dagger B^{\scalebox{.6}{\mbox{T}}}\,\Delta_{t+1} ]A_{X_\circ},
\end{eqnarray}
which, once plugged into (\ref{pippo}), gives
\begin{eqnarray}
\Delta_t &=& \Delta_{t+1}-\Delta_{t+1}+A_{X_\circ}^{\scalebox{.6}{\mbox{T}}}\,[\Delta_{t+1}-\Delta_{t+1}\,B\, (R+B^{\scalebox{.6}{\mbox{T}}} X_{t+1} B)^\dagger B^{\scalebox{.6}{\mbox{T}}}\,\Delta_{t+1} ]A_{X_\circ} \nonumber \\
&=& A_{X_\circ}^{\scalebox{.6}{\mbox{T}}}\,[I_n-\Delta_{t+1}\,B\, (R+B^{\scalebox{.6}{\mbox{T}}} X_{t+1} B)^\dagger B^{\scalebox{.6}{\mbox{T}}}\,] \Delta_{t+1} A_{X_\circ}=F_{t+1}\,\Delta_{t+1} \,A_{X_\circ}, \label{Riccati}
\end{eqnarray}
where
\[
F_{t+1} \stackrel{\text{\tiny def}}{=} A_{X_\circ}^{\scalebox{.6}{\mbox{T}}}-A_{X_\circ}^{\scalebox{.6}{\mbox{T}}}\Delta_{t+1}\,B\, (R+B^{\scalebox{.6}{\mbox{T}}} X_{t+1} B)^\dagger B^{\scalebox{.6}{\mbox{T}}}.
\]
It follows that we can write
\begin{eqnarray}
\Delta_{T-1} &=& F_T\,\Delta_T\,A_{X_\circ}, \nonumber \\
\Delta_{T-2} & =& F_{T-1}\,\Delta_{T-1}\,A_{X_\circ}=F_{T-1}\,F_T\,\Delta_T\,(A_{X_\circ})^2, \nonumber \\
& \vdots & \nonumber \\
\Delta_{T-\tau} & = & \left(\prod_{i=T-\tau+1}^T F_i\right)\,\Delta_T\,(A_{X_\circ})^{\tau}. \label{ultima}
\end{eqnarray}
This shows that for $\tau \ge \nu$ we have $\ker \Delta_{T-\tau} \supseteq \ker (A_{X_\circ})^n$. \hspace*{\fill}~\QED\par\endtrivlist\unskip
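Theorem \ref{th51} can be illustrated numerically in the simplest non-trivial case. In the sketch below (all modelling choices are assumptions of ours, not from the paper) we take $S=0$ and force a zero column in $A$, so that $e_1 \in \ker A_{X_\circ}$; after a single backward step of GRDE($\Sigma$) the difference $\Delta_{T-1}$ must annihilate $e_1$, while $\Delta_T$ in general does not.

```python
# Sketch: after one backward GRDE step, X_t - X_circ annihilates
# a vector of ker A_Xcirc, the tau = 1 instance of Theorem th51.
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(4)
n, m = 4, 2
A = 0.5 * rng.standard_normal((n, n))
A[:, 0] = 0.0                                   # A e_1 = 0
B = rng.standard_normal((n, m))
Q, R = np.eye(n), np.eye(m)                     # S = 0

Xc = solve_discrete_are(A, B, Q, R)             # reference solution X_circ
AXc = A - B @ np.linalg.solve(R + B.T @ Xc @ B, B.T @ Xc @ A)
v = np.eye(n)[:, 0]                             # e_1 lies in ker A_Xcirc
assert np.allclose(AXc @ v, 0.0)

def riccati_step(X):                            # X_t = R(X_{t+1}), with S = 0
    RX = R + B.T @ X @ B
    return A.T @ X @ A - A.T @ X @ B @ np.linalg.solve(RX, B.T @ X @ A) + Q

P = np.diag([5.0, 4.0, 3.0, 2.0])               # terminal condition X_T = P
X_T1 = riccati_step(P)                          # one backward step: X_{T-1}

# Delta_T v != 0 in general, but Delta_{T-1} v = 0 since tau = 1 >= nu here
assert np.linalg.norm((P - Xc) @ v) > 1e-3
assert np.linalg.norm((X_T1 - Xc) @ v) < 1e-6
```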
\ \\[-2mm]
Now we show that the result given in Theorem \ref{th51} can be used to obtain a reduction for the generalised discrete-time Riccati difference equation. Consider the same basis induced by the change of coordinates used in Theorem \ref{main}, so that the first $\nu$ components of this basis span the subspace ${\cal U}=\ker (A_X)^n$. The closed-loop matrix in this basis can be written as
\begin{eqnarray*}
{A}_{{X_\circ}} = \left[ \begin{array}{cc} N_0 \;& \star \\ O & Z \end{array} \right],
\end{eqnarray*}
where $N_0$ is nilpotent and $Z$ is non-singular. Hence, $({A}_{{X_\circ}})^{\nu}=\left[ \begin{smallmatrix} O & \star \\[1mm] O & Z^{\nu} \esmat$, where we recall that $\nu$ is the nilpotency index of $A_{X_\circ}$. By writing (\ref{ultima}) in this basis, for $\tau \ge \nu$ we find
\begin{eqnarray*}
{\Delta}_{T-\tau}=\left[ \begin{array}{cc} \star \;&\; \star \\ \star \;&\; \star \emat \left[ \begin{array}{cc} O\; &\; \star \\ O \;&\; Z^{\tau} \end{array} \right]
=
\left[ \begin{array}{cc} O \;&\; \star \\ O \;&\; \star \end{array} \right]=\left[ \begin{array}{cc} O \;& \;O \\ O\; &\; \star \end{array} \right],
\end{eqnarray*}
where the last equality follows from the fact that ${\Delta}_{T-\tau}$ is symmetric.
Now, let us rewrite the Riccati difference equation (\ref{Riccati}) as
\begin{eqnarray}
\Delta_t= A_{X_\circ}^{\scalebox{.6}{\mbox{T}}}\,\Delta_{t+1} A_{X_\circ} -A_{X_\circ}^{\scalebox{.6}{\mbox{T}}}\,\Delta_{t+1}\,B (R+B^{\scalebox{.6}{\mbox{T}}} X_{t+1} B)^\dagger B^{\scalebox{.6}{\mbox{T}}}\, \Delta_{t+1} A_{X_\circ}.
\end{eqnarray}
For $t \le T-\nu$, we get $\Delta_t=\left[ \begin{smallmatrix} O & O \\[1mm] O & \Psi_t \esmat$, and the previous equation becomes
\begin{eqnarray*}
\left[ \begin{array}{cc} \! O \; & O \! \\ \! O \; & \Psi_t \! \emat & = &
\left[ \begin{array}{cc} \! N_0^{\scalebox{.6}{\mbox{T}}} & O \! \\ \! \star \! & \! Z^{\scalebox{.6}{\mbox{T}}} \! \end{array} \right]\left[ \begin{array}{cc} \! O \; & O \! \\ \! O \; & \Psi_{t+1} \! \emat\left[ \begin{array}{cc} \! N_0 & \; \star \! \\ \! O & \; Z \! \end{array} \right]\\
& & -
\left[ \begin{array}{cc} \! N_0^{\scalebox{.6}{\mbox{T}}} & O \! \\ \! \star \! & \! Z^{\scalebox{.6}{\mbox{T}}} \! \end{array} \right]\left[ \begin{array}{cc} \! O & O \! \\ \! O & \Psi_{t+1} \! \emat B\,(R+B^{\scalebox{.6}{\mbox{T}}} X_{t+1}\,B)^\dagger B^{\scalebox{.6}{\mbox{T}}} \left[ \begin{array}{cc} \! O & O \! \\ \! O & \Psi_{t+1} \! \emat\left[ \begin{array}{cc} \! N_0 & \; \star \! \\ \! O \! & \! Z \! \emat \\
& = &
\left[ \begin{array}{cc} \! O & O \! \\ \! O & Z^{\scalebox{.6}{\mbox{T}}} \! \, \Psi_{t+1} \, Z \! \emat \\
& & -
\left[ \begin{array}{cc} \! O & O \! \\ \! O & Z^{\scalebox{.6}{\mbox{T}}} \, \Psi_{t+1} \! \emat \! \! \left[ \begin{array}{c} \! B_1 \! \\ \! B_2 \! \emat \! \! \left(R \! + \! \left[ \begin{array}{cc} \! B_1^{\scalebox{.6}{\mbox{T}}} & B_2^{\scalebox{.6}{\mbox{T}}} \! \emat \! (\Delta_{t+1} \! + \! X_\circ) \! \left[ \begin{array}{c} \! B_1 \! \\ \! B_2 \! \emat \right)^\dagger\! \left[ \begin{array}{cc} \! B_1^{\scalebox{.6}{\mbox{T}}} & B_2^{\scalebox{.6}{\mbox{T}}} \! \emat \! \! \left[ \begin{array}{cc} \! O & O \! \\ \! O & \Psi_{t+1} Z \! \emat\! \! .
\end{eqnarray*}
By partitioning $X_\circ$ as { $X_\circ=\left[ \begin{smallmatrix} X_{\circ,11} & X_{\circ,12} \\[1mm] X_{\circ,12}^{\scalebox{.6}{\mbox{T}}} & X_{\circ,22} \esmat$}, we get
\begin{eqnarray*}
\left[ \begin{array}{cc} O & O \\ O & \Psi_t \emat &\!=\!&
\left[ \begin{array}{cc} O & O \\ O & Z^{\scalebox{.6}{\mbox{T}}} \, \Psi_{t+1} \, Z\emat \!-\!
\left[ \begin{array}{cc} O & O \\ O & Z^{\scalebox{.6}{\mbox{T}}} \, \Psi_{t+1} \emat \!\!\left[ \begin{array}{cc} \star & \star \\ \star & B_2\,(R_0\!+\!B_2^{\scalebox{.6}{\mbox{T}}}\,\Psi_{t+1}\,B_2)^\dagger \,B_2^{\scalebox{.6}{\mbox{T}}} \emat\!\! \left[ \begin{array}{cc} O & O \\ O & \Psi_{t+1} \, Z\emat \\
&\!=\!&
\left[ \begin{array}{cc} O & O \\ O & Z^{\scalebox{.6}{\mbox{T}}} \, \Psi_{t+1} \, Z\emat -
\left[ \begin{array}{cc} O & O \\ O & Z^{\scalebox{.6}{\mbox{T}}} \, \Psi_{t+1}\,B_2\,(R_0+B_2^{\scalebox{.6}{\mbox{T}}}\,\Psi_{t+1}\,B_2)^\dagger \,B_2^{\scalebox{.6}{\mbox{T}}}\,\Psi_{t+1} \, Z \emat,
\end{eqnarray*}
where $R_0 \stackrel{\text{\tiny def}}{=} R+B_2^{\scalebox{.6}{\mbox{T}}} \,X_{\circ,22}\,B_2$. Therefore, $\Psi_t$ satisfies the reduced homogeneous Riccati difference equation
\begin{eqnarray}
\label{reduced}
\Psi_t=Z^{\scalebox{.6}{\mbox{T}}} \, \Psi_{t+1} \, Z- Z^{\scalebox{.6}{\mbox{T}}} \, \Psi_{t+1}\,B_2\,(R_0+B_2^{\scalebox{.6}{\mbox{T}}}\,\Psi_{t+1}\,B_2)^\dagger \,B_2^{\scalebox{.6}{\mbox{T}}}\,\Psi_{t+1} \, Z.
\end{eqnarray}
The associated generalised discrete Riccati algebraic equation is
\begin{eqnarray}
\label{homog}
\Psi- Z^{\scalebox{.6}{\mbox{T}}} \, \Psi \, Z+Z^{\scalebox{.6}{\mbox{T}}} \, \Psi\,B_2\,(R_0+B_2^{\scalebox{.6}{\mbox{T}}}\,\Psi\,B_2)^\dagger \,B_2^{\scalebox{.6}{\mbox{T}}}\,\Psi \, Z=0.
\end{eqnarray}
Being homogeneous, this equation admits the solution $\Psi=0$. This fact has two important consequences:
\begin{itemize}
\item The closed-loop matrix associated with this solution is clearly $Z$, which is non-singular. On the other hand, we know that the nilpotent part of the closed-loop matrix is independent of the particular solution of CGDARE($\Sigma$) considered. This means that all solutions of (\ref{homog}) have a closed-loop matrix that is non-singular;
\item Given any solution $\Psi$ of (\ref{homog}), the null-space of $R_0+B_2^{\scalebox{.6}{\mbox{T}}}\,\Psi\,B_2$ coincides with that of $R_0$: this null-space does not depend on the particular solution of (\ref{homog}) considered, and $\Psi=0$ is itself a solution of (\ref{homog}).
\end{itemize}
As a result of this discussion, it turns out that given a reference solution $X_\circ$ of CGDARE($\Sigma$), the solution of GDRE($\Sigma$) with terminal condition $X_T=P$ can be computed backward as follows:
\begin{enumerate}
\item For the first $\nu$ steps, i.e., from $t=T$ to $t=T-\nu$, $X_t$ is computed by iterating the GDRE($\Sigma$) starting from the terminal condition $X_T=P$;
\item In the basis that isolates the nilpotent part of $A_X$, we have
\[
\Delta_{T-\nu}=\left[ \begin{array}{cc} O & O \\ O & \Psi_{T-\nu} \emat.
\]
From $t=T-\nu-1$ to $t=0$, the solution of GDRE($\Sigma$) can be found iterating the reduced order GDRE in (\ref{reduced}) starting from the terminal condition $\Psi_{T-\nu}$.
\end{enumerate}
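As a numerical illustration, the two backward iterations can be sketched in Python. The GDRE step below is written in the standard Popov-triple form $(A,B,Q,R,S)$ and is intended only as a sketch, not as the paper's exact notation; the Moore--Penrose pseudoinverse handles a possibly singular $R+B^{\scalebox{.6}{\mbox{T}}}X_{t+1}B$:

```python
import numpy as np

def gdre_step(X_next, A, B, Q, R, S):
    """One backward step of the generalised Riccati difference equation
    (standard Popov-triple form, assumed here for illustration); the
    pseudoinverse allows R + B^T X B to be singular."""
    G = np.linalg.pinv(R + B.T @ X_next @ B)
    L = A.T @ X_next @ B + S
    return A.T @ X_next @ A + Q - L @ G @ L.T

def reduced_step(Psi_next, Z, B2, R0):
    """One backward step of the reduced homogeneous equation (reduced):
    Psi_t = Z^T Psi Z - Z^T Psi B2 (R0 + B2^T Psi B2)^+ B2^T Psi Z."""
    M = np.linalg.pinv(R0 + B2.T @ Psi_next @ B2)
    return (Z.T @ Psi_next @ Z
            - Z.T @ Psi_next @ B2 @ M @ B2.T @ Psi_next @ Z)
```

The first $\nu$ backward steps use \texttt{gdre\_step} starting from the terminal condition $X_T=P$; the remaining steps iterate \texttt{reduced\_step} from $\Psi_{T-\nu}$.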
\begin{remark}
{\em The advantage of using the reduced-order generalised Riccati difference equation (\ref{reduced}) is that the closed-loop matrix of any solution of the associated generalised discrete Riccati algebraic equation is non-singular. Hence, when the reduced-order pencil given by the Popov triple $\left(Z,B_2,\left[ \begin{smallmatrix} 0 & 0 \\[1mm] 0 & R_0 \esmat\right)$ is regular, the solution of the
reduced-order generalised Riccati difference equation (\ref{reduced}) can also be computed in closed form, using the results in \cite{Ferrante-N-06}. Indeed, consider a solution $\Psi$ of (\ref{homog}) with its non-singular closed-loop matrix $A_\Psi$ and let $Y$ be the corresponding solution of the closed-loop Hermitian Stein equation
\begin{eqnarray}
\label{stein}
A_\Psi\,Y\,A_\Psi^{{\scalebox{.6}{\mbox{T}}}}-Y+B_2\,(R_0+B_2^{{\scalebox{.6}{\mbox{T}}}}\,\Psi\,B_2)^{-1}B_2^{{\scalebox{.6}{\mbox{T}}}}=0.
\end{eqnarray}
The set of solutions of the extended symplectic difference equation for the reduced system is parameterised in terms of $K_1,K_2 \in \mathbb{R}^{(n-\nu) \times (n-\nu)}$ as
\begin{eqnarray}
\label{param}
\left[ \begin{array}{c} \Xi_{t} \\ \Lambda_t \\ \Omega_t \emat=
\left[ \begin{array}{c} \! I_{n-\nu} \! \\ \! \Psi \! \\ \! -K_\Psi \! \end{array} \right] (A_\Psi)^t \,K_1 \!+\! \left[ \begin{array}{c} \! Y\,A_\Psi^{{\scalebox{.6}{\mbox{T}}}} \! \\ \! (\Psi\,Y-I_{n-\nu})A_\Psi^{{\scalebox{.6}{\mbox{T}}}} \! \\ \! -K_{\star} \! \end{array} \right] (A_\Psi^{{\scalebox{.6}{\mbox{T}}}})^{T-t-1}\, K_2, & \!\quad 0 \le t \le T,\quad
\end{eqnarray}
where $K_\star \stackrel{\text{\tiny def}}{=} K_{\Psi}\,Y\,A_{\Psi}^{\scalebox{.6}{\mbox{T}}} - (R_0+B_2^{\scalebox{.6}{\mbox{T}}} \,\Psi\,B_2)^{-1}\,B_2^{\scalebox{.6}{\mbox{T}}}$.
The parameter matrices $K_1$ and $K_2$ can be chosen so that the terminal conditions $\Xi_T=I_{n-\nu}$ and $\Lambda_T=\Psi_{T-\nu}$ hold. Such values exist because $A_\Psi$ is non-singular, and are given by
\begin{eqnarray*}
K_1 &=& (A_\Psi)^{-T}\left(I_{n-\nu}-Y\,(\Psi-\Psi_{T-\nu}) \right) \\
K_2 & = & \Psi-\Psi_{T-\nu}.
\end{eqnarray*}
Then, the solution of (\ref{reduced}) is given by $\Psi_t=\Lambda_t\,\Xi_t^{-1}$.
}
\end{remark}
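The closed-form recipe of the remark can likewise be mirrored numerically. The Python sketch below is our own construction (with a vectorised Stein solver rather than the methods of \cite{Ferrante-N-06}); it uses the reference solution $\Psi=0$, for which the closed-loop matrix is $A_\Psi=Z$:

```python
import numpy as np

def stein_solve(A, Q):
    """Solve the Stein equation A Y A^T - Y + Q = 0 by vectorisation:
    (I - A kron A) vec(Y) = vec(Q). Assumes no pair of eigenvalues of A
    has product equal to one (a sketch, not an optimised solver)."""
    n = A.shape[0]
    M = np.eye(n * n) - np.kron(A, A)
    return np.linalg.solve(M, Q.reshape(-1)).reshape(n, n)

def psi_sequence(Psi, A_Psi, Y, Psi_term, T):
    """Evaluate Psi_t = Lambda_t Xi_t^{-1}, t = 0..T, with K1, K2 fixed
    by the terminal conditions Xi_T = I and Lambda_T = Psi_term."""
    n = Psi.shape[0]
    K2 = Psi - Psi_term
    K1 = np.linalg.matrix_power(np.linalg.inv(A_Psi), T) @ (np.eye(n) - Y @ K2)
    seq = []
    for t in range(T + 1):
        At = np.linalg.matrix_power(A_Psi, t)
        Bt = np.linalg.matrix_power(A_Psi.T, T - t - 1)  # negative power at t = T
        Xi = At @ K1 + Y @ A_Psi.T @ Bt @ K2
        Lam = Psi @ At @ K1 + (Psi @ Y - np.eye(n)) @ A_Psi.T @ Bt @ K2
        seq.append(Lam @ np.linalg.inv(Xi))
    return seq
```

With $K_1$ and $K_2$ as above, $\Xi_T=I_{n-\nu}$ and $\Lambda_T=\Psi_{T-\nu}$ hold by construction, and each $\Psi_t=\Lambda_t\,\Xi_t^{-1}$ satisfies the reduced equation (\ref{reduced}).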
{
\section{Concluding remarks}
%
In this paper we have considered the generalised Riccati difference equation with a terminal condition, which arises in finite-horizon LQ optimal control. In particular, we have shown that it is possible to identify and deflate the singular part of this equation using the corresponding generalised Riccati algebraic equation. This technique has two advantages: it reduces the dimension of the Riccati equation at hand, and it renders the reduced problem non-singular, so that it can be handled with the standard tools of finite-horizon LQ theory. }
\section{Introduction}
The properties of particle dark matter remain unknown. Searches with direct detection experiments are one of the most promising ways of detecting dark matter through an interaction other than gravity. A positive detection is expected to yield information on the particle mass, the cross-section and information on the form of the interaction~\cite{Peter:2013aha,Catena:2014epa}. Although there has not yet been a conclusive detection~\cite{Bernabei:2013xsa, Pradler:2012qt, Aalseth:2014eft, Davis:2014bla, Angloher:2011uu, Brown:2011dp,Kuzniak:2012zm,Agnese:2013rvf}, direct detection experiments have demonstrated a remarkable record of increasing their sensitivity by an order of magnitude approximately every three years and this increase is expected to continue over the next decade~\cite{Cushman:2013zza}. Two-phase xenon experiments have proven to be particularly sensitive and we are approaching the tonne-scale era with the LUX~\cite{Akerib:2012ys} and XENON1T~\cite{Aprile:2012zx} experiments, and funding has been secured for the approximately five-tonne successor experiments, LZ~\cite{Malling:2011va,Akerib:2015cja} and XENONnT~\cite{Aprile:2014zvw,Aprile:2015uzo}. There is also a longer-term proposal for DARWIN~\cite{Baudis:2012bc}, an even larger $\sim20$-tonne experiment whose aim is to explore all of the dark matter parameter space not limited by neutrino backgrounds~\cite{Billard:2013qya}.
Multi-tonne xenon experiments bring new opportunities to search for rare signals. This is for two reasons. Firstly, the larger target mass means that there are more xenon nuclei for the dark matter to scatter off and secondly, larger experiments allow for backgrounds to be significantly reduced, even down to the irreducible background from coherent neutrino scattering. This is because more of the liquid xenon can be used to self-shield the fiducial volume where dark matter signals are searched for.
The canonical search with direct detection experiments is the elastic scattering process depicted in the left diagram of figure~\ref{fig:scattering}. The interaction with the dark matter particle causes the xenon nucleus to recoil with an energy typically in the range $1$\,--\,$100~\mathrm{keV}$. Since some nuclear isotopes have excitations in this energy range, it was long ago realised that these nuclear excitations could also play a role in the detection of dark matter~\cite{Goodman:1984dc, Ellis:1988nb}. In this case, some part of the energy transferred from the dark matter particle causes the excitation of the nucleus while the other part causes the nucleus to recoil. The excited nucleus then decays emitting a photon. This process is depicted in the right panel of figure~\ref{fig:scattering}. For experiments with xenon, there are two isotopes of interest, $^{129}\mathrm{Xe}$ and $^{131}\mathrm{Xe}$, which make up $26.4\%$ and $21.2\%$ of natural xenon and have an excitation energy and lifetime of $39.6~\mathrm{keV}$ and $80.2~\mathrm{keV}$, and $0.97~\mathrm{ns}$ and $0.48~\mathrm{ns}$ respectively. In this process the recoil energy of the nucleus and the energy from the prompt de-excitation of the nuclear isotope are measured. The experimental resolution of xenon detectors is $\sim10~\mathrm{ns}$~\cite{Aprile:2011dd} so the short lifetimes mean that the recoil and de-excitation cannot be separately resolved. However, the mean free path of the de-excitation photon is $\mathcal{O}(1)~\mathrm{mm}$~\cite{Malling:2014wza}, comparable to the spatial resolution of xenon detectors~\cite{Aprile:2011dd}, so a dedicated pulse-shape analysis may partially resolve the nuclear recoil and the photon energy deposition for a fraction of the events. We leave an analysis of the pulse-shape for the future and here make the assumption that the detector cannot resolve the recoil energy and photon energy i.e.\ it is only the total energy that is measured.
\begin{figure}[t!]
\centering
\includegraphics[width=0.43\columnwidth]{diagram2e.pdf}
\includegraphics[width=0.43\columnwidth]{diagram1e.pdf}
\caption{The left and right diagrams depict two dark matter signals at a direct detection experiment. The left diagram shows the canonical elastic scattering process where the dark matter simply causes the nucleus to recoil; an experiment measures the number of events and the nuclear recoil energy. The right diagram depicts the inelastic scattering process. In this case, the dark matter excites the xenon isotope which then promptly decays emitting a photon. For the~$^{129}\mathrm{Xe}$ and~$^{131}\mathrm{Xe}$ isotopes of xenon, the photon/excitation energies are $39.6~\mathrm{keV}$ and $80.2~\mathrm{keV}$ respectively. We assume that the photon mean free path is sufficiently short that the experiment measures the recoil of the nucleus at the same time as the prompt de-excitation photon.
\label{fig:scattering}}
\end{figure}
Nuclear structure functions are required in order to accurately predict the cross-section for dark matter to excite the nucleus. It is only recently that precision shell-model calculations of the structure functions for xenon isotopes have become available~\cite{Klos:2013rwa,Baudis:2013bba,Vietze:2014vsa} (see also~\cite{Toivanen:2008zz,Toivanen:2009zza} for earlier calculations). The contribution of different nucleons to inelastic scattering does not add coherently, so this process does not benefit from the nucleon-number--squared ($\sim 10^4$) enhancement of elastic spin-independent interactions~\cite{Vietze:2014vsa}. The current absence of any elastic signal means that for these interactions, experiments would have to improve their sensitivity by at least this factor to begin to see the inelastic signal. Such a large sensitivity gain is unlikely to be achieved in the foreseeable future. In contrast, the structure functions for elastic and inelastic processes are more comparable for the axial-vector interaction, with the elastic structure function being only around $10$ times larger~\cite{Baudis:2013bba}. This is because elastic scattering for the axial-vector interaction is spin-dependent and also does not have the nucleon-number--squared enhancement~\cite{Klos:2013rwa}. The initial discovery of dark matter will not be made with the inelastic process for the axial-vector interaction (because of the additional suppression of the inelastic rate from the structure function and the scattering kinematics). However, a detection of the inelastic signal would provide additional information to complement the elastic scattering signal. As a trivial example, while the elastic signal may come from either a spin-independent or spin-dependent interaction, the inelastic signal will only be detectable for a spin-dependent interaction, so its detection would strongly point to a spin-dependent interaction.
Further implications of measuring the inelastic signal are left for a future paper.
A number of {\it single-phase} xenon experiments have searched for the $39.6~\mathrm{keV}$ de-excitation from the $^{129}\mathrm{Xe}$ isotope~\cite{Belli:1991mx,Bernabei:2000qn,Uchida:2014cnn} (see also~\cite{Vergados:2013raa}). However these experiments generally set weaker limits than {\it two-phase} xenon experiments because they do not have the same ability as two-phase experiments to discriminate between signal and background processes. No search or sensitivity study has been carried out for a {\it two-phase} xenon detector. This is the aim of this paper: to characterise the inelastic scattering signal for a {\it two-phase} xenon detector, quantify the sensitivity of upcoming tonne-scale experiments to this inelastic process and assess whether a future detection can be made.
Our paper is structured as follows. In section~\ref{sec:model} we recap the basic principles of dark matter scattering and describe how we model the elastic and inelastic signals in two benchmark xenon detectors. In section~\ref{sec:backgrounds} we discuss the main backgrounds and calculate both the signal and background distributions in terms of the parameters that a xenon detector measures. Section~\ref{sec:discovery} describes our frequentist method for calculating the sensitivity of upcoming tonne-scale experiments while section~\ref{sec:results} contains our main results. We end with a discussion of interesting follow-up studies and our conclusions in section~\ref{sec:con}. A number of short appendices gather the formulae that we use for the generation of photons and electrons for nuclear and electronic interactions, a check of the statistical method that we employ, an explicit demonstration that the LUX neutron-only limits are generally stronger than the PICO proton-only limits and finally, a check of our results under an alternative dark matter halo model.
\section{Modelling elastic and inelastic recoils of xenon}
\label{sec:model}
In this section we first review the usual formalism for elastic scattering of dark matter with a xenon nucleus in terms of the recoil energy of the nucleus. We show that this is easily extended to the case of inelastic scattering. Xenon detectors do not directly measure the energy but rather the scintillation light. We describe our modelling of the generation and detection of the scintillation light, which is based on the NEST formalism~\cite{Szydagis:2011tk,Szydagis:2013sih,Mock:2013ila,Lenardo:2014cva}. We then describe the properties of present and upcoming tonne-scale direct detection experiments and discuss the observable signals and their rate.
\subsection{Scattering rates}
The differential event rate for both elastic and inelastic scattering of dark matter with a xenon nucleus of mass $m_A$ in the detector frame may be written as
\begin{equation}
\label{eq:dRdE}
\frac{d R}{d E_{\rm{R}}}=\frac{1}{m_A} \frac{\rho_{\rm{DM}}}{m_{\rm{DM}}} \int_{v_{\rm{min}}} d^3v \, v f_{\rm{DM}}(\vec{v}+\vec{v}_{\rm{E}}) \frac{d \sigma}{d E_{\rm{R}}} \;,
\end{equation}
where $E_{\rm{R}}$ is the recoil energy of the xenon nucleus, $m_{\rm{DM}}$ is the dark matter mass, $\rho_{\rm{DM}}=0.3~\mathrm{GeV}/\mathrm{cm}^3$ is the local dark matter density~\cite{Read:2014qva}, $v$ and $\vec{v}$ are the dark matter speed and velocity, and~$f_{\rm{DM}}(\vec{v})$ is the dark matter velocity distribution in the galactic frame. We assume the isothermal Standard Halo Model so that~$f_{\rm{DM}}(\vec{v})\propto\exp{\left(-v^2/v_0^2 \right)}$ is a Maxwell-Boltzmann distribution in the galactic frame with a hard cut-off at the galactic escape speed~$v_{\rm{esc}}$, for which we assume~$v_{\rm{esc}}=550~\mathrm{km}/\mathrm{s}$~\cite{Piffl:2013mla}. The solar circular speed is by convention taken as~$v_0=220~\mathrm{km}/\mathrm{s}$ and we boost from the galactic frame to the detector rest frame with $\vec{v}_{\rm{E}}=(0,v_0,0)+\vec{v}_{\rm{pec}}+\vec{v}_{\rm{e}}$, where $\vec{v}_{\rm{pec}}=(11.1,12.2,7.3)~\mathrm{km}/\mathrm{s}$~\cite{Schoenrich:2009bx} and we use the expression for $\vec{v}_{\rm{e}}$ from~\cite{McCabe:2013kea}.
The minimum speed to recoil with an energy $E_{\rm{R}}$ additionally depends on the excitation energy~$E^*$:
\begin{equation}
v_{\rm{min}}=\sqrt{\frac{m_A E_{\rm{R}}}{2 \mu_{A}^2}}+\frac{E^*}{\sqrt{2 m_A E_{\rm{R}}}}\;,
\end{equation}
where~$\mu_{A}$ is the nucleus--dark matter reduced mass. The minimum speed increases with~$E^*$ since part of the kinetic energy of the incoming dark matter particle is required to excite the nucleus. This means that for the same~$E_{\rm{R}}$, the elastic and inelastic scattering processes probe different parts of~$f_{\rm{DM}}(\vec{v})$~\cite{Baudis:2013bba}.
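This kinematic threshold is easy to evaluate numerically. The short Python sketch below is purely illustrative; the unit conversions and the approximate $^{131}\mathrm{Xe}$ mass of $122~\mathrm{GeV}$ are our assumptions, not values from the text:

```python
import numpy as np

def v_min(E_R_keV, m_dm_GeV, m_A_GeV=122.0, E_star_keV=0.0):
    """Minimum dark matter speed (km/s) needed to produce a nuclear
    recoil of E_R_keV, optionally exciting the nucleus by E_star_keV.
    m_A_GeV ~ 122 is an approximate Xe-131 mass (our assumption)."""
    c = 2.998e5                       # speed of light in km/s
    mu = m_A_GeV * m_dm_GeV / (m_A_GeV + m_dm_GeV)
    E_R = E_R_keV * 1e-6              # keV -> GeV (natural units)
    E_star = E_star_keV * 1e-6
    return c * (np.sqrt(m_A_GeV * E_R / (2.0 * mu ** 2))
                + E_star / np.sqrt(2.0 * m_A_GeV * E_R))
```

For $m_{\rm{DM}}=100~\mathrm{GeV}$ and $E_{\rm{R}}=10~\mathrm{keV}$ this gives $v_{\rm{min}}\approx135~\mathrm{km/s}$ for elastic scattering, but $\approx620~\mathrm{km/s}$ once the $80.2~\mathrm{keV}$ excitation of $^{131}\mathrm{Xe}$ is included; the divergence of the second term at small $E_{\rm{R}}$ is responsible for the vanishing of the inelastic spectra at low recoil energies.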
In this paper we only consider axial-vector interactions of the type
\begin{equation}
\label{eq:A-V}
\mathcal{L}\propto -\bar{\chi}\gamma^{\mu}\gamma^5 \chi \cdot \sum_q A_q \bar{\psi}_q \gamma_{\mu}\gamma^5\psi_q\;,
\end{equation}
where~$\chi$ is the dark matter (here assumed to be a fermion),~$\psi_q$ are the light-quark fields ($q=u,d,s$) and $A_q$ are the (model-dependent) dark matter-quark coupling constants. The total spin-dependent differential cross-section applicable for this operator can generally be written as
\begin{equation}
\frac{d \sigma}{d E_{\rm{R}}}=\sum_{A=^{129}\mathrm{Xe},\,^{131}\mathrm{Xe}}\frac{4 \pi}{3}\frac{m_A}{2 \mu_n^2} \frac{\sigma^0_n}{v^2}\frac{f_A}{2 J_A+1}S^n_A(E_{\rm{R}})\;,
\end{equation}
where~$\mu_n$ is the nucleon-dark matter reduced mass, the sum is over the isotopes that have spin, $f_A$ is the fractional abundance of the xenon isotope, $J_A$ is the ground-state spin of the isotope ($J_{129}=1/2$ and $J_{131}=3/2$) and $\sigma^0_n$ is the elastic cross-section to scatter off a point-like neutron in the limit of zero-momentum transfer. The structure factors~$S_{A}(E_{\rm{R}})$ describe how the dark matter interacts with the nucleus and depend on the isotope. We take the central values of the one\,$+$\,two-body expressions from~\cite{Klos:2013rwa} and~\cite{Baudis:2013bba} for elastic and inelastic scattering respectively. In both cases we only consider the neutron structure factors~$S^n_{A}(E_{\rm{R}})$ since the proton structure factors are always at least a factor of 10 smaller.
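Since $d\sigma/dE_{\rm{R}}\propto 1/v^2$, the velocity integral in eq.~(\ref{eq:dRdE}) reduces to the mean inverse speed $\eta(v_{\rm{min}})=\int_{v_{\rm{min}}}d^3v\,f_{\rm{DM}}(\vec{v}+\vec{v}_{\rm{E}})/v$, which has a well-known closed form for the truncated Maxwell--Boltzmann halo. The Python sketch below assumes a fixed, time-averaged Earth speed $v_{\rm{E}}\approx245~\mathrm{km/s}$ rather than the full vector boost used in the paper:

```python
from math import erf, exp, pi, sqrt

def eta_shm(vmin, v0=220.0, vesc=550.0, vE=245.0):
    """Mean inverse speed <1/v> (in (km/s)^-1) for a Maxwell-Boltzmann
    halo truncated at vesc and boosted by a constant Earth speed vE;
    standard analytic result for the Standard Halo Model."""
    x, y, z = vmin / v0, vE / v0, vesc / v0
    N = erf(z) - 2.0 * z * exp(-z * z) / sqrt(pi)  # cut-off normalisation
    if x >= y + z:
        return 0.0
    if x <= z - y:
        val = erf(x + y) - erf(x - y) - 4.0 * y * exp(-z * z) / sqrt(pi)
    else:
        val = erf(z) - erf(x - y) - 2.0 * (y + z - x) * exp(-z * z) / sqrt(pi)
    return val / (2.0 * N * y * v0)
```

$\eta$ decreases monotonically and vanishes for $v_{\rm{min}}>v_{\rm{esc}}+v_{\rm{E}}$; combined with the larger $v_{\rm{min}}$ of the inelastic channel, this produces the suppression of the inelastic spectra seen in figure~\ref{fig:spectrum}.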
\begin{figure}[t!]
\centering
\includegraphics[width=0.49\columnwidth]{drde100.pdf}
\includegraphics[width=0.49\columnwidth]{drde1000.pdf}
\caption{The left and right panels show the recoil spectra for the elastic and inelastic processes in terms of~$E_{\rm{R}}$, the energy of the recoiling nucleus, for two values of the dark matter mass~$m_{\rm{DM}}$ and the scattering cross-section~$\sigma^0_n$. The various curves show the individual rates for the xenon isotopes that participate in the scattering for the axial-vector (spin-dependent) interaction, namely~$^{129}\mathrm{Xe}$ and~$^{131}\mathrm{Xe}$, together with the total rate. The elastic rate always dominates implying that the initial discovery of dark matter will always be made with the elastic scattering process. The difference between the elastic and inelastic rates are smaller for heavier particles suggesting that it should be easier to find evidence for the inelastic process with heavier particles. We remind the reader that~$E_{\rm{R}}$ is not directly measured in a two-phase xenon detector, rather, it is the scintillation signals~$\mathrm{S1}$ and~$\mathrm{S2}$. \label{fig:spectrum}}
\end{figure}
The recoil spectra as a function of the xenon nucleus's recoil energy $E_{\rm{R}}$ are shown in figure~\ref{fig:spectrum}. The left and right panels show the spectra for $m_{\rm{DM}}=100~\mathrm{GeV}$ and $\sigma_n^0=10^{-40}~\mathrm{cm}^2$, and $m_{\rm{DM}}=1000~\mathrm{GeV}$ and $\sigma_n^0=10^{-39}~\mathrm{cm}^2$ respectively. The total elastic and inelastic spectra are shown by the black dotted and black dashed lines respectively. The orange and green lines show the contribution of $^{129}\mathrm{Xe}$ and $^{131}\mathrm{Xe}$ to the elastic spectrum, while the blue and red lines show the contribution of $^{129}\mathrm{Xe}$ and $^{131}\mathrm{Xe}$ to the inelastic spectrum. The elastic spectrum is always larger than the inelastic spectrum, with the most noticeable difference at small $E_{\rm{R}}$. The inelastic spectrum drops to zero at small $E_{\rm{R}}$ because energy and momentum conservation do not allow for the xenon nucleus to be excited while remaining at rest after the dark matter interaction. The larger elastic scattering rate implies that for the axial-vector interaction, a discovery of dark matter will always first be made with the elastic scattering process.
The elastic spectrum is dominated by scattering with $^{129}\mathrm{Xe}$ at low energy, while scattering with $^{129}\mathrm{Xe}$ dominates for all energies in the inelastic spectrum. Comparing the left and right panels, we see that the inelastic spectra are closer to the elastic spectra for $m_{\rm{DM}}=1000~\mathrm{GeV}$. At low energies and for the mass values shown, the elastic spectra display the characteristic scaling $dR/dE_{\rm{R}}\propto \sigma_n^0\, m_{\rm{DM}}^{-1}$. This scaling does not continue at higher recoil energies because for smaller masses, it is only the particles in the tail of the Maxwell-Boltzmann distribution that have sufficient kinetic energy to induce higher recoil energies of the xenon nucleus, thus producing an additional suppression in the rate (this is manifested mathematically through a higher value of $v_{\rm{min}}$ for smaller $m_{\rm{DM}}$). The inelastic spectra also do not show the characteristic scaling at any energy for these masses. This is for the same reason as in the elastic case, namely, many more incoming dark matter particles have a larger kinetic energy for higher masses. This is especially noticeable for the~$^{131}\mathrm{Xe}$ spectra where there is a factor $\sim6$ difference between the~$^{129}\mathrm{Xe}$ and~$^{131}\mathrm{Xe}$ spectra at~$m_{\rm{DM}}=100~\mathrm{GeV}$, while only a factor~$\sim3$ at~$m_{\rm{DM}}=1000~\mathrm{GeV}$. This suggests that it should be easier to find evidence for the inelastic process for heavier dark matter particles.
This discussion so far has only considered the recoil energy of the nucleus and has not accounted for the energy deposited by the photon from the de-excitation process. Although the de-excitation will not change the total (integrated) scattering rate, it must be accounted for when modelling the signal that a two-phase xenon detector measures. The next subsections address how we model this.
\subsection{Generating light and charge signals}
\label{sec:gensignal}
Two-phase xenon detectors do not directly measure the energy. Instead, a particle interacting in the fiducial volume of the liquid xenon produces two measurable signals referred to as the S1 and S2 signal.\footnote{We only use the position-corrected S1 and S2 signals (sometimes denoted cS1 and cS2)~\cite{Aprile:2012vw}.} An interaction in the liquid xenon produces ions and excitons which produce photons and electrons. The quantity S1 is a measure of the number of photoelectrons (PE) from the prompt scintillation due to the photons in the liquid xenon. The electrons are drifted in an electric field to the xenon gas phase, where they are extracted and accelerated. These extracted electrons create a secondary scintillation, denoted as S2. Electronic and nuclear events produce different characteristic S1 and S2 signals, which allows two-phase xenon detectors to discriminate between these two event classes. Canonical dark matter interactions result in nuclear events while most background events are electronic events. This ability to discriminate between nuclear and electronic events is one important reason why two-phase xenon detectors have been so successful in constraining the dark matter scattering cross-section.
For an energy deposition $E$, the expectation values for $\mathrm{S1}$ and $\mathrm{S2}$ can be expressed as $\langle\mathrm{S1}\rangle=g_1 \langle n_{\gamma}(E) \rangle$ and $\langle\mathrm{S2}\rangle=g_2\langle n_e (E) \rangle$ respectively, where the measurement gains $g_1$ and $g_2$ relate the number of produced photons $n_{\gamma}(E)$ and electrons $n_e (E)$ to the expected number of detected PEs.\footnote{The $g_1$ and $g_2$ notation is not used uniformly. It is typically used by the LUX collaboration but different notation exists elsewhere. For instance, refs.~\cite{Baudis:2014naa,Schumann:2015cpa} use $\epsilon$ instead of $g$. Ref.~\cite{Baudis:2014naa} also describes how this notation relates to the description in terms of~$\mathcal{L}_{\rm{eff}}$ and $Q_y$, parameters which some may find more familiar.} The gain $g_1$ is the probability that a photon produced at the centre of the detector strikes a photomultiplier tube (PMT) and is converted to a~PE. The gain $g_2=\epsilon \times Y$ is the product of the probability of extracting an electron from the liquid to the gas ($\epsilon$) and the amplification factor ($Y$) converting a single ionisation electron to photoelectrons. The S2 signal measured from the bottom PMTs ($\mathrm{S2_b}$) is usually used because the light collection efficiency is more homogeneous on these PMTs~\cite{Aprile:2012vw}. We therefore use $\mathrm{S2_b}$ and assume that it is related to the total $\mathrm{S2}$ signal by $\mathrm{S2_b}=0.43\times \mathrm{S2}$, as found in XENON100 and LUX~\cite{abrown,Akerib:2013tjd}. We will discuss realistic values of $g_1$ and $g_2$ in the next subsection and for now, leave them as free parameters in our discussion.
To simulate signal processes observed by a two-phase xenon detector in a realistic fashion, we generate the signal with a Monte Carlo simulation along the lines of ref.~\cite{Dahl:2009nta}. We use the NEST phenomenological model~\cite{Szydagis:2011tk,Szydagis:2013sih,Mock:2013ila,Lenardo:2014cva} to model the average number of photons~$n_{\gamma}(E)$ and electrons~$n_e(E)$ produced by an electronic- or nuclear-type interaction. In addition to their dependence on the energy~$E$, $n_{\gamma}(E)$ and $n_e(E)$ also depend on the electric drift field applied across the liquid, which varies for different experiments. The specific formulae used in our modelling are given in appendix~\ref{app:meanyields}. We must include fluctuation effects, which can be divided into two types: intrinsic and detector fluctuations. We discuss the implementation of each in turn beginning with the intrinsic fluctuations.
Our signal generation begins by drawing the energy~$E$ of the incident particle from the input energy spectrum. For dark matter events, this is simply the recoil spectrum eq.~\eqref{eq:dRdE} (as in fig.~\ref{fig:spectrum}), while the background distributions are discussed in section~\ref{sec:backgrounds} (displayed in fig.~\ref{fig:backrates}). For this energy, we find the total number of quanta $N_{\mathrm{quanta}}$ by drawing from a Normal distribution with mean $n_{\mathrm{quanta}}$ and variance $F\cdot n_{\mathrm{quanta}}$, where $F=0.05$ is the Fano factor~\cite{Doke:1976zz}. We next separate $N_{\mathrm{quanta}}$ into excitons and ions. The number of ions $N_{\mathrm{i}}$ is drawn from a binomial distribution with $N_{\mathrm{quanta}}$ trials and a probability $(1+n_{\mathrm{ex}}/n_{\mathrm{i}})^{-1}$ that an ion is produced. The number of excitons $N_{\mathrm{ex}}$ is simply $N_{\mathrm{ex}}=N_{\mathrm{quanta}}-N_{\mathrm{i}}$. Our expressions for~$n_{\gamma}(E)$ and~$n_e(E)$ also include the effect of recombination fluctuations; we assume that the number of ions that recombine~$N^{\mathrm{recom}}_{\mathrm{i}}$ follows a Normal distribution with mean~$r N_{\mathrm{i}}$ and variance $\sigma_R^2=(1-r)C N_{\mathrm{i}}^2$, where $C=0.0056$~\cite{Lenardo:2014cva}. Our final result is that $n_{e}(E)=N_{\mathrm{i}}-N^{\mathrm{recom}}_{\mathrm{i}}$ and $n_{\gamma}(E)=f_l(N_{\mathrm{ex}}+N^{\mathrm{recom}}_{\mathrm{i}})$, where $f_l$ is a quenching factor. The Monte Carlo process is the same for both nuclear and electronic recoils; the difference is that $n_{\mathrm{quanta}}$, $n_{\mathrm{ex}}$, $n_{\mathrm{i}}$, $r$ and $f_l$ differ for nuclear and electronic recoils. The calculation of the mean quantities $n_{\mathrm{quanta}}$, $n_{\mathrm{ex}}$, $n_{\mathrm{i}}$, $r$ and the quenching factor $f_l$ are described in appendix~\ref{app:meanyields}. 
There is also a small difference between gamma- and beta-electronic interactions that we account for by rescaling $n_{\gamma}(E)$ and $n_{e}(E)$ calculated for a beta-interaction to obtain the result for the gamma-interaction. We perform this rescaling at this point, after the intrinsic fluctuations. Further details are given in appendix~\ref{app:meanyields}.
We next include detector fluctuations in our calculation of~$\mathrm{S1}$ and~$\mathrm{S2_b}$. For~$\mathrm{S1}$, the number of photoelectrons~$N_{\mathrm{PE}}$ is drawn from a binomial distribution with~$n_{\gamma}(E)$ trials and success probability~$g_1$. The final result for~$\mathrm{S1}$ also accounts for the PMT resolution:~$\mathrm{S1}$ is drawn from a Normal distribution with mean~$N_{\mathrm{PE}}$ and variance~$\sigma^2_{\mathrm{PMT}} N_{\mathrm{PE}}$. For~$\mathrm{S2_b}$, the number of electrons~$N_e$ that are extracted from the liquid to the gas phase follows a binomial distribution with~$n_{e}(E)$ trials and success probability~$\epsilon$. To account for the amplification factor from converting ionisation electrons to photoelectrons, we draw from a Normal distribution with mean~$0.43\cdot Y\cdot N_e$ and variance~$\sigma^2_{\rm{PE_b}} N_e$ (the factor $0.43$ is the factor that relates $\mathrm{S2}$ and $\mathrm{S2_b}$).
When generating the $\mathrm{S1}$ signal from the inelastic scattering process, we combine the number of photons from the nuclear recoil with the photons from the de-excitation gamma-ray with energy $E^*$ after the intrinsic fluctuations (for which the two processes are treated independently) but before including the detector fluctuations. An analogous procedure is performed for $\mathrm{S2_b}$ except it is electrons that we combine before including the detector fluctuations.
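The detector-fluctuation stage of this chain can be sketched in a few lines of Python. The gains correspond to the \textit{XenonB1000}-style values and the resolutions to those quoted for LUX in the next subsection; this is our illustrative reconstruction of the procedure described above, not the collaborations' simulation code:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_s1_s2b(n_gamma, n_e, g1=0.12, eps=1.0, Y=50.0,
                    sigma_pmt=0.5, sigma_peb=3.6):
    """Detector-fluctuation stage only: turn photon/electron counts
    (already including the intrinsic Fano and recombination
    fluctuations) into one simulated (S1, S2_b) pair."""
    # S1: binomial photon detection with probability g1,
    # then Normal smearing for the PMT resolution
    n_pe = rng.binomial(int(n_gamma), g1)
    s1 = rng.normal(n_pe, sigma_pmt * np.sqrt(n_pe))
    # S2_b: binomial liquid-to-gas extraction with probability eps,
    # then amplification to bottom-array photoelectrons (S2_b = 0.43 S2)
    n_ext = rng.binomial(int(n_e), eps)
    s2b = rng.normal(0.43 * Y * n_ext, sigma_peb * np.sqrt(n_ext))
    return s1, s2b
```

On average this reproduces $\langle\mathrm{S1}\rangle=g_1\,n_{\gamma}$ and $\langle\mathrm{S2_b}\rangle=0.43\,g_2\,n_e$ with $g_2=\epsilon\,Y$. For an inelastic event, the photon and electron counts of the nuclear recoil and of the $E^*$ de-excitation would be summed before this stage, as described above.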
\subsection{Two-phase xenon detector parameters}
\label{sec:detect}
The LUX and XENON collaborations have produced the most sensitive xenon detectors. The current LUX detector has a fiducial mass of $118~\mathrm{kg}$ and has collected an exposure of 0.028 tonne-years~\cite{Akerib:2013tjd}, with an ultimate aim of collecting around 0.2 tonne-years~\cite{Akerib:2012ys}. The applied drift field of $181~\mathrm{V/cm}$ is lower than in previous experiments. For instance, ZEPLIN-III~\cite{Akimov:2011tj} and XENON100~\cite{Aprile:2012nq} had fields of $3400~\mathrm{V/cm}$ and $530~\mathrm{V/cm}$ respectively. However, the LUX light collection efficiency is much higher than in previous detectors, corresponding to a value of $g_1\approx0.12~\mathrm{PE}/\gamma$~\cite{Szydagis:2014xog,Huang2015}. Unfortunately the electron extraction efficiency is lower than was anticipated, with $\epsilon\approx 50\%$~\cite{Szydagis:2014xog,Huang2015}. The amplification factor is $Y\approx 24.6~\mathrm{PE}/e$~\cite{Akerib:2013tjd} so that $g_2\approx12~\mathrm{PE}/e$. The energy resolutions are $\sigma_{\mathrm{PMT}}\approx0.5~\mathrm{PE}/\gamma$ and $\sigma_{\mathrm{PE_b}}\approx 3.6~\mathrm{PE}/e$~\cite{Dobi:2014wza}.
XENON1T is the successor to XENON100. It will have a fiducial mass of approximately $1~\mathrm{tonne}$, a design drift field of~$1000~\mathrm{V/cm}$ and a light collection efficiency similar to LUX~\cite{Aprile:2012zx}. It is expected that the extraction efficiency $\epsilon$ will be 100\%, as achieved in XENON100. The amplification factor and resolutions will be similar to those in LUX and XENON100~\cite{Aprile:2013blg}.
The follow-up to LUX is LZ~\cite{Malling:2011va,Akerib:2015cja}, with a projected fiducial mass of approximately $5.6~\mathrm{tonnes}$ and a drift field of $700~\mathrm{V/cm}$~\cite{Kudryavtsev:2015vja}. XENONnT is the successor of XENON1T and is designed to have similar characteristics as XENON1T but with a total mass of approximately $7~\mathrm{tonnes}$~\cite{Aprile:2014zvw}. As XENONnT and LZ will run for a number of years, an exposure of 15 tonne-years is readily achievable. Finally, there are plans for DARWIN, a much larger experiment with a fiducial mass of around 20 tonnes~\cite{Baudis:2012bc}. Studies assuming a drift field of 500~V/cm and an ultimate exposure of 200 tonne-years have been performed~\cite{Schumann:2015cpa}. This large exposure gives an indication of the ultimate reach of xenon detectors.
Future collaborations will obviously aim to optimise their respective detectors. It may be difficult to increase or even maintain the light collection efficiency because larger detectors collect a smaller fraction of the scintillation signal. LZ's proposal is that~$g_1>0.075$~\cite{Akerib:2015cja} and DARWIN studies have assumed the value reached in LUX~\cite{Schumann:2015cpa}. It should be possible to maintain an extraction efficiency close to unity and the amplification factor may be as large as $Y=50~\mathrm{PE}/e$~\cite{Akerib:2015cja}.
In the remainder of this paper, we show results for two benchmark scenarios, {\it XenonA200} and {\it XenonB1000}, which should bracket the expected performance of upcoming experiments:
\begin{itemize}
\item {\it XenonA200} corresponds to a detector with lower drift field and lower extraction efficiency. We assume a drift field of $200~\mathrm{V/cm}$ and the parameters $g_1=0.07~\mathrm{PE}/\gamma$, $\epsilon=50\%$, $Y=25~\mathrm{PE}/e$ so that $g_2=12.5~\mathrm{PE}/e$.
\item {\it XenonB1000} corresponds to a detector with a higher drift field and perfect extraction efficiency. We assume a drift field of $1000~\mathrm{V/cm}$ and the parameters $g_1=0.12~\mathrm{PE}/\gamma$, $\epsilon=100\%$, $Y=50~\mathrm{PE}/e$ so that $g_2=50~\mathrm{PE}/e$.
\end{itemize}
In both scenarios, we assume that $\sigma_{\mathrm{PMT}}=0.5~\mathrm{PE}/\gamma$ and $\sigma_{\mathrm{PE_b}}= 3.6~\mathrm{PE}/e$ and that $\mathrm{S2}_{\mathrm{b}}=0.43\times \mathrm{S2}$. Our benchmark exposure is 15 tonne-years unless stated otherwise. Finally, we assume that all measurement efficiencies are $100\%$ since the signals of interest are far from thresholds (where the efficiencies begin to deviate from $100\%$).
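The quoted $g_2$ values follow from the extraction efficiency and amplification factor via $g_2=\epsilon\,Y$; a minimal consistency check of the two benchmarks:

```python
# The S2 gain is the product of the electron extraction efficiency
# and the gas-gap amplification factor: g2 = epsilon * Y  [PE / e].
def g2(extraction_eff, amplification):
    return extraction_eff * amplification

benchmarks = {
    "XenonA200":  {"g1": 0.07, "eps": 0.50, "Y": 25.0},  # drift field 200 V/cm
    "XenonB1000": {"g1": 0.12, "eps": 1.00, "Y": 50.0},  # drift field 1000 V/cm
}

for params in benchmarks.values():
    params["g2"] = g2(params["eps"], params["Y"])
```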
\subsection{Observable signals and their rate}
\begin{figure}[t!]
\centering
\includegraphics[width=0.49\columnwidth]{S1S2b200.pdf}
\includegraphics[width=0.49\columnwidth]{S1S2b1000.pdf}
\caption{The solid and dashed contours show where 68\% and 95\% of events occur in terms of the observable scintillation signals~$\mathrm{S1}$ and~$\mathrm{S2_b}$ for the fixed input energies indicated. The left and right panels show results for the {\it XenonA200} and {\it XenonB1000} benchmark scenarios. The $\mathrm{keV}^{\beta}_{\mathrm{ER}}$, $\mathrm{keV}^{\gamma}_{\mathrm{ER}}$ and $\mathrm{keV}_{\mathrm{NR}}$ labels indicate that the energy originates from a beta-electronic recoil, a gamma-electronic recoil and a nuclear recoil, respectively. The brown and orange contours show the signal region for an inelastic scattering event, which includes energy from both a nuclear recoil and a de-excitation photon. Inelastic signal events have higher $\mathrm{S1}$ and $\mathrm{S2}$ values than elastic scattering events, for which the search window is typically $\mathrm{S1}\leq30~\mathrm{PE}$. All of the contours are tilted because of recombination fluctuations. Note the change of scale on both axes between the two panels.
\label{fig:S1S2plot}}
\end{figure}
Having described our procedure for generating light and charge signals, we can proceed to generate the observable signals for our two benchmark scenarios. The solid and dashed contours in figure~\ref{fig:S1S2plot} show where 68\% and 95\% of events occur in the $\mathrm{S1}$\,-\,$\mathrm{S2_b}$ plane for fixed input energies. The left and right panels correspond to the {\it XenonA200} and {\it XenonB1000} benchmark scenarios, respectively. The $\mathrm{keV}^{\beta}_{\mathrm{ER}}$, $\mathrm{keV}^{\gamma}_{\mathrm{ER}}$ and $\mathrm{keV}_{\mathrm{NR}}$ labels in figure~\ref{fig:S1S2plot} indicate that the energy originates from a beta-electronic recoil, a gamma-electronic recoil and a nuclear recoil, respectively. The black contours show the signal region for a nuclear recoil of~$40~\mathrm{keV}$, the red and blue contours show the signal region for a $39.6~\mathrm{keV}$ electronic event induced by a beta- and gamma-ray, respectively, and the purple and pink contours show the signal region for an $80.2~\mathrm{keV}$ electronic event induced by a beta- and gamma-ray, respectively. The brown and orange contours show the signal region for an inelastic scattering event: in this case the nuclear recoil energy is $40~\mathrm{keV}$ and the gamma-electronic energy is $39.6~\mathrm{keV}$ and $80.2~\mathrm{keV}$, corresponding to the energies of the photon emitted in the de-excitation of the $^{129}\mathrm{Xe}$ and $^{131}\mathrm{Xe}$ isotopes, respectively.
We first discuss the features common to both panels. These features are well known properties of two-phase xenon detectors and are reviewed in much more detail in~\cite{Chepel:2012sj}. We see that a nuclear recoil typically produces a much smaller $\mathrm{S1}$ and $\mathrm{S2_b}$ signal compared to an electronic recoil of the same energy. The usual XENON100 and LUX dark matter searches for elastic scattering define a $\mathrm{S1}$ search window up to $30~\mathrm{PE}$~\cite{Aprile:2012nq,Akerib:2013tjd}; we see that for inelastic signals, we will have to consider much higher values of $\mathrm{S1}$. The difference between a gamma- and beta-interaction of the same energy is relatively small, $\mathcal{O}(10\%)$, for both~$\mathrm{S1}$ and~$\mathrm{S2_b}$. It is also apparent that adding a nuclear recoil to an electronic recoil only slightly increases the $\mathrm{S1}$ and $\mathrm{S2_b}$ signals compared to a pure electronic recoil. Both panels show that the contours are tilted, matching the behaviour observed with real data (see e.g.~\cite{Xiao:2015psa}). This is especially obvious in the events where the $\mathrm{S1}$ and $\mathrm{S2_b}$ signals are dominated by electronic recoils while for nuclear recoils, the tilt is much smaller. The origin of the tilt is well-known: it is a result of recombination fluctuations, which are 100\% anti-correlated in scintillation~$(\mathrm{S1})$ and ionisation~$(\mathrm{S2})$~\cite{Aprile:2007qd}. In contrast, the detector fluctuations smear along constant $\mathrm{S1}$ and constant $\mathrm{S2}$ only. The tilt is smaller for the nuclear recoil region because the detector fluctuations are larger than recombination fluctuations.
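The origin of the tilt can be made explicit with a toy model (illustrative numbers, not a fit to data): event-by-event fluctuations in the recombination fraction move quanta between the scintillation and ionisation channels, so the photon and electron counts are fully anti-correlated.

```python
import numpy as np

rng = np.random.default_rng(3)

# An event produces n_i electron-ion pairs and n_ex excitons; a
# fluctuating fraction r of the pairs recombine and add to the
# scintillation signal, while the rest escape to form the
# ionisation signal.
n_i, n_ex = 2000, 400
r = np.clip(rng.normal(0.5, 0.05, size=5000), 0.0, 1.0)

n_photons = n_ex + r * n_i      # scintillation quanta  (-> S1)
n_electrons = (1.0 - r) * n_i   # escaping electrons    (-> S2)

# In this toy model the two signals are perfectly anti-correlated,
# which is what tilts the contours in the S1 - S2b plane.
rho = np.corrcoef(n_photons, n_electrons)[0, 1]
```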
We next discuss the features that differ between the panels. The first important difference is that the~$\mathrm{S1}$ and~$\mathrm{S2_b}$ signals are much larger in the right panel for all configurations (note that the scales differ in the two panels). The larger~$\mathrm{S2_b}$ signal is a result of two effects. The first is that the extraction efficiency~$\epsilon$ and amplification factor~$Y$ are both twice as large, and therefore~$g_2$ is four times larger for {\it XenonB1000}. If this were the only effect, then $\mathrm{S2_b}$ would be four times larger for the same input energy. In fact, we see that~$\mathrm{S2_b}$ is around six times larger in the right panel. This is because the larger drift field also increases~$\mathrm{S2_b}$. The larger drift field reduces the recombination fraction so more of the ionisation electrons survive to form the~$\mathrm{S2_b}$ signal. The higher drift field also reduces the~$\mathrm{S1}$ signal for the same reason; fewer electron-ion pairs recombine, producing less prompt scintillation light. However,~$g_1$ is~70\% larger for {\it XenonB1000}, which compensates for the reduction from the higher drift field. This is why the~$\mathrm{S1}$ values are actually~$\sim20\%$ larger in the right panel. Finally, an important difference is that there is less overlap between the contours from an inelastic signal (brown and orange contours) and the contours from a potential beta-background source (red and purple contours) for {\it XenonB1000}. This greater separation is again an effect of the higher drift field. We will see in section~\ref{sec:results} that this better discrimination is ultimately responsible for the greater sensitivity of the {\it XenonB1000} benchmark scenario.
\begin{figure}[t!]
\centering
\includegraphics[width=0.49\columnwidth]{dRdS1spec_A200.pdf}
\includegraphics[width=0.49\columnwidth]{dRdS1spec_B1000.pdf}
\caption{The left and right panels show the recoil spectra for the elastic and inelastic processes in terms of~$\mathrm{S1}$ for the two benchmark scenarios {\it XenonA200} and {\it XenonB1000} described in section~\ref{sec:detect}. These spectra are for~$m_{\rm{DM}}=100~\mathrm{GeV}$ and~$\sigma_n^0=10^{-40}~\mathrm{cm}^2$. While the elastic spectrum falls off rapidly, the inelastic spectrum has two distinct peaks. The first peak is dominated by the $39.6~\mathrm{keV}$ de-excitation from~$^{129}\mathrm{Xe}$ while the second peak is dominated by the $80.2~\mathrm{keV}$ de-excitation from~$^{131}\mathrm{Xe}$. Similar to the left panel of figure~\ref{fig:spectrum}, the inelastic rate is suppressed by about two (three) orders of magnitude with respect to the elastic rate for the~$^{129}\mathrm{Xe}$ $\left(^{131}\mathrm{Xe}\right)$ isotope. \label{fig:dRdS1spect}}
\end{figure}
We previously only gave the differential event rate in terms of the xenon recoil energy $E_{\rm{R}}$ (cf.~eq.~\eqref{eq:dRdE} and figure~\ref{fig:spectrum}). We are now in a position to calculate the event rate in terms of the observable quantities $\mathrm{S1}$ and $\mathrm{S2_b}$. The differential event rate is~\cite{Aprile:2012vw}
\begin{equation}
\label{eq:dRdS1}
\frac{d^2 R}{d \mathrm{S1}\, d \mathrm{S2_b}}=\int d E_{\rm{R}} \frac{dR}{d E_{\rm{R}}} \mathrm{pdf}(\mathrm{S1}, \mathrm{S2_b}|E_{\rm{R}})\;,
\end{equation}
where $dR/dE_{\rm{R}}$ is eq.~\eqref{eq:dRdE} and $\mathrm{pdf}(\mathrm{S1}, \mathrm{S2_b}|E_{\rm{R}})$ is the probability density function, which we generate with the Monte Carlo process described in section~\ref{sec:gensignal}.
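Schematically, this integral is evaluated by Monte Carlo: sample recoil energies, weight each by $dR/dE_{\rm{R}}$, push each through the detector model, and histogram the resulting observables. A toy sketch follows; the exponential spectrum and the Gaussian smearing are placeholders standing in for the real spectrum and for the full detector Monte Carlo described above.

```python
import numpy as np

rng = np.random.default_rng(1)

def dRdE(E):
    """Toy stand-in for the recoil spectrum dR/dE_R (falling exponential)."""
    return np.exp(-E / 15.0)           # arbitrary units, E in keV

def detector_sample(E, rng):
    """Toy stand-in for pdf(S1, S2b | E_R): Gaussian smearing about
    assumed linear light and charge yields."""
    s1 = rng.normal(5.0 * E, np.sqrt(5.0 * E))
    s2b = rng.normal(30.0 * E, 3.0 * np.sqrt(30.0 * E))
    return s1, s2b

# Sample energies uniformly, weight by dR/dE, smear, then histogram S1:
# the weighted histogram approximates dR/dS1 (integrated over S2b).
E = rng.uniform(1.0, 100.0, size=20000)
w = dRdE(E)
s1, s2b = detector_sample(E, rng)
dRdS1, edges = np.histogram(s1, bins=50, range=(0.0, 500.0),
                            weights=w, density=True)
```

A two-dimensional histogram of `(s1, s2b)` with the same weights would approximate the full double-differential rate.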
As an example of our results, we show in figure~\ref{fig:dRdS1spect} the differential event rate in terms of~$\mathrm{S1}$ for $m_{\rm{DM}}=100~\mathrm{GeV}$ and~$\sigma_n^0=10^{-40}~\mathrm{cm}^2$ (additionally, the results for $m_{\rm{DM}}=1000~\mathrm{GeV}$ and~$\sigma_n^0=10^{-39}~\mathrm{cm}^2$ are shown in figure~\ref{fig:drdsrates}). The left and right panels correspond to the {\it XenonA200} and {\it XenonB1000} benchmark scenarios, respectively. The $dR/d\mathrm{S1}$ spectrum is obtained by additionally integrating eq.~\eqref{eq:dRdS1} over $\mathrm{S2_b}$. The colour scheme of the lines matches figure~\ref{fig:spectrum}: the total elastic and inelastic spectra are shown by the black dotted and black dashed lines, respectively. The orange and green lines show the contribution of $^{129}\mathrm{Xe}$ and $^{131}\mathrm{Xe}$ to the elastic spectrum, while the blue and red lines show the contribution of $^{129}\mathrm{Xe}$ and $^{131}\mathrm{Xe}$ to the inelastic spectrum. As in the left panel of figure~\ref{fig:spectrum}, the inelastic rate for scattering with~$^{129}\mathrm{Xe}$ $\left(^{131}\mathrm{Xe}\right)$ is suppressed by about two (three) orders of magnitude with respect to the elastic rate.
Figure~\ref{fig:dRdS1spect} shows the well known fact that the elastic spectrum falls off rapidly with~$\mathrm{S1}$. In contrast, the inelastic spectrum has two distinct peaks whose origin is clear. The first peak is due to the $39.6~\mathrm{keV}$ de-excitation photon from the~$^{129}\mathrm{Xe}$ isotope while the second peak is from the $80.2~\mathrm{keV}$ de-excitation photon from the~$^{131}\mathrm{Xe}$ isotope. The peak $\mathrm{S1}$ values agree with the values shown in figure~\ref{fig:S1S2plot}. The peak differential rate is slightly higher for the inelastic process in the left panel (corresponding to {\it XenonA200}) because the spectrum is slightly more peaked in $\mathrm{S1}$ (the integrated rate is the same).
\section{Background rates}
\label{sec:backgrounds}
Our ultimate aim is to assess the discovery potential of the inelastic signal. In order to do this, the background signals must be quantified. A comprehensive study of the backgrounds for tonne-scale xenon detectors was performed in~\cite{Baudis:2013qla} and similar rates and distributions were also presented by the LZ~collaboration~\cite{Beltrame:2014,Akerib:2015cja}. We summarise the relevant results for our study and refer the reader to~\cite{Baudis:2013qla,Beltrame:2014,Akerib:2015cja} for further details.
The dominant backgrounds are those that produce electronic recoils in the signal range of interest, $\mathrm{S1}\leq600~\mathrm{PE}$, which corresponds to an energy range of approximately $0-300$~keV. As discussed in~\cite{Baudis:2013qla,Beltrame:2014}, the dominant background rates are, in order of decreasing importance, the $2\nu\beta\beta$-decay of $^{136}\mathrm{Xe}$, elastic neutrino-electron scattering from~$pp$ and~$^7\mathrm{Be}$ solar neutrinos, decays of~$^{85}\mathrm{Kr}$ and~$^{222}\mathrm{Rn}$ and finally, radioactivity from detector materials. All of these backgrounds are beta-electronic sources~\cite{Malling:2014wza}. The background rates and their uncertainties used in this study are shown in figure~\ref{fig:backrates}. We comment on each of the rates in turn.
\begin{figure}[t!]
\centering
\includegraphics[width=0.49\columnwidth]{backgroundrates.pdf}
\caption{The main backgrounds and their rates for tonne-scale xenon experiments. The dominant background rate is from the $2\nu\beta\beta$-decay of the $^{136}\mathrm{Xe}$ isotope, which has an abundance of $8.86\%$ in natural xenon. There are two irreducible backgrounds from~$pp$ and~$^7\mathrm{Be}$ solar neutrinos scattering on atomic electrons. The percentage figure associated with each background is the uncertainty in the normalisation of that rate.
\label{fig:backrates}}
\end{figure}
We recalculated the background rates from the $2\nu\beta\beta$-decay of~$^{136}\mathrm{Xe}$ and the elastic scattering from~$pp$ and~$^7\mathrm{Be}$ solar neutrinos using updated parameters. For the $2\nu\beta\beta$-decay, the neutrinos escape the detector while the beta-particles contribute to the background rate. We calculated the rate assuming the $^{136}\mathrm{Xe}$ abundance is that of natural xenon: $8.86\%$. We use the most accurate measurement of the $^{136}\mathrm{Xe}$ half-life by EXO-200, who found~$T_{1/2}=(2.165\pm0.059)\times10^{21}~\mathrm{yr}$~\cite{Albert:2013gpz}, where we only quote the dominant systematic error. We use the distribution of the summed energies of the beta-particles from~\cite{Kotila:2012zza}. Our~$2\nu\beta\beta$ rate is in good agreement with that shown in~\cite{Baudis:2013qla,Beltrame:2014}. The dominant uncertainty of this rate is from~$T_{1/2}$, at the level of~$3\%$. In comparison, the abundance of the~$^{136}\mathrm{Xe}$ isotope can be measured with a~$0.05\%$ accuracy~\cite{brownsimgen}.
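As a cross-check, the total $2\nu\beta\beta$ activity per tonne of natural xenon follows directly from the measured half-life and the isotopic abundance. A minimal sketch, which yields of order $10^5$ decays per tonne-year:

```python
import math

N_A = 6.02214076e23     # Avogadro constant [1/mol]
M_XE = 131.293          # mean molar mass of natural xenon [g/mol]
ABUND_136 = 0.0886      # natural abundance of Xe-136
T_HALF = 2.165e21       # EXO-200 half-life of the Xe-136 2nbb decay [yr]

def decays_per_tonne_year():
    """Total 2nbb decays of Xe-136 per tonne of natural xenon per year."""
    n_136 = 1.0e6 / M_XE * N_A * ABUND_136   # Xe-136 atoms in one tonne
    return n_136 * math.log(2.0) / T_HALF
```

The background rate in figure~\ref{fig:backrates} is this total activity distributed over the summed beta-energy spectrum.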
The largest flux of solar neutrinos is from the $pp$ chain. The~$pp$ flux measured by Borexino, $(6.6\pm0.7)\times10^{10}~\mathrm{cm}^{-2}\mathrm{s}^{-1}$~\cite{Bellini:2014uqa}, is in good agreement with the prediction of the Standard Solar Model (SSM) $6.03\times(1\pm0.06)\times10^{10}~\mathrm{cm}^{-2}\mathrm{s}^{-1}$~\cite{Serenelli:2011py}. The SSM prediction is well understood so we use the theoretical flux and error in our calculation. The second largest rate from solar neutrinos is from~$^7\mathrm{Be}$ neutrinos. Borexino measured a flux of $(4.84\pm0.24)\times10^{9}~\mathrm{cm}^{-2}\mathrm{s}^{-1}$~\cite{Bellini:2011rx}, also in good agreement with the SSM prediction~\cite{Serenelli:2011py}. We use the measured value and error in our calculation. We use the neutrino-electron scattering cross-section from~\cite{Marciano:2003eq}. Finally, we use the electron neutrino survival probabilities listed in~\cite{Bellini:2013lnn}. Our~$pp$ spectrum agrees well with~\cite{Baudis:2013qla,Beltrame:2014}. Our~$^7\mathrm{Be}$ spectrum agrees well with~\cite{Beltrame:2014} but is about a factor~1.6 smaller than in~\cite{Baudis:2013qla}. The origin of the discrepancy is unclear.\footnote{There is a typo in the neutrino-electron cross-section formula in~\cite{Baudis:2013qla} (the last term has the wrong dimensions) but this is not the origin of the discrepancy.} The third largest rate from solar neutrinos is from $^{13}\mathrm{N}$ neutrinos. This rate is approximately 300 times smaller than the $pp$ rate so it is a good approximation to ignore the contribution to the rate from all solar neutrinos except the~$pp$ and~$^7\mathrm{Be}$ neutrinos.
We use the~$^{85}\mathrm{Kr}$,~$^{222}\mathrm{Rn}$ and detector material background rates from the study in~\cite{Baudis:2013qla}, which assumes a~$^{85}\mathrm{Kr}$ contamination of~$0.1~\mathrm{ppt}$ and a~$^{222}\mathrm{Rn}$ level of~$0.1~\mu\mathrm{Bq}/\mathrm{kg}$. While the~$^{222}\mathrm{Rn}$ rate is similar in the LZ study, the~$^{85}\mathrm{Kr}$ rate is a factor four smaller~\cite{Massoli:2015}. We use the result from~\cite{Baudis:2013qla} because the assumptions entering the calculation are clearer. XENON100 and EXO-200 have measured the~$^{85}\mathrm{Kr}$ contamination and~$^{222}\mathrm{Rn}$ level with an accuracy of~$17\%$~\cite{Lindemann:2013kna} and~$10\%$~\cite{Albert:2013gpz} respectively, and we assume the same accuracy will be achieved in the future.
The detector material background rate is reduced by self-shielding of the liquid xenon so larger detectors, which have more xenon with which to shield, have a smaller rate. The rate reported here was for a DARWIN study and assumes a 14~tonne fiducial mass. The rate for LZ with a 5.6~tonne fiducial mass is about three times larger~\cite{Beltrame:2014}. As before, we use the result from~\cite{Baudis:2013qla} because the assumptions entering the calculation are clearer. In any case, as this rate is always subdominant, a factor three difference has an almost negligible impact on our results. Both the material and $^{222}\mathrm{Rn}$ background rates begin to increase after $\sim170~\mathrm{keV}$; however, they always remain subdominant to the~$2\nu\beta\beta$-decay rate~\cite{Baudis:2013qla}. The detector material rate for XENON1T is predicted to an accuracy of~$10\%$~\cite{Massoli:2015} and we assume the same accuracy will be achieved in future experiments.
Before leaving this sub-section, we briefly comment on background sources that do not produce electronic recoils. It is also possible that neutrons may directly excite the~$^{129}\mathrm{Xe}$ and~$^{131}\mathrm{Xe}$ isotopes, creating another background source (that neutrons can excite the signal is actually an advantage since, at least in principle, it allows the signal region to be calibrated in a real detector). The self-shielding of the liquid xenon and a dedicated muon-veto system to reject muon-induced neutrons means that the neutron rate can be reduced to less than $5\times10^{-5}~\mathrm{counts/t/yr/keV}$ for single-scatter neutrons that elastically scatter off xenon~\cite{Schumann:2015cpa}. For comparison, the dark matter signal rate that we consider in this paper is $\sim10^{-2}~\mathrm{counts/t/yr/keV}$ for $\sigma_n^0\simeq10^{-40}~\mathrm{cm}^2$ (cf.~fig.~\ref{fig:drdsrates}). Although there are no detailed studies that discuss inelastic neutron scattering (and such a study is beyond the scope of this paper), the inelastic neutron scattering cross-section is generally of the same order of magnitude as the elastic scattering cross-section~\cite{NDS}. Therefore, we assume that the inelastic neutron scattering rate is similar to the elastic scattering rate and thus always significantly smaller than the dark matter signal rate, so we ignore this background contribution in our study. A more detailed study to confirm this assumption is desirable.
\subsection{Comparing background and signal rates}
\label{subsec:comp}
We now have everything we need to model the signal and the background for the {\it XenonA200} and {\it XenonB1000} detector scenarios. The left and right panels in figure~\ref{fig:drdsrates} show the background and signal rates as a function of~$\mathrm{S1}$ for the two benchmark scenarios. The blue and red lines show the signal rate corresponding to inelastic scattering off the~$^{129}\mathrm{Xe}$ and~$^{131}\mathrm{Xe}$ isotopes for~$m_{\rm{DM}}=1000~\mathrm{GeV}$ and~$\sigma_n^0=10^{-39}~\mathrm{cm}^2$. Comparing the background rates in the left and right panels, we see only minor differences. In both cases the dominant background is from the $2\nu\beta\beta$-decay of~$^{136}\mathrm{Xe}$ (solid purple line). The most obvious difference is in the rate from detector materials (dotted grey line), where for {\it XenonB1000}, the rate is higher for~$\mathrm{S1}$ values corresponding to the $80.2~\mathrm{keV}$ de-excitation. The main point to take away from both panels of figure~\ref{fig:drdsrates} is that the signal rate is always at least 30 times smaller than the background rate. This demonstrates that observing this signal with a single-phase xenon experiment that only measures the~$\mathrm{S1}$ signal will be very challenging.
\begin{figure}[t!]
\centering
\includegraphics[width=0.49\columnwidth]{dRdSback200.pdf}
\includegraphics[width=0.49\columnwidth]{dRdSback1000.pdf}
\caption{The left and right panels show the background and inelastic signal rates for the two benchmark scenarios {\it XenonA200} and {\it XenonB1000} described in section~\ref{sec:detect}. The~$^{129}\mathrm{Xe}$ (blue line) and~$^{131}\mathrm{Xe}$ (red line) inelastic spectra are for~$m_{\rm{DM}}=1000~\mathrm{GeV}$ and~$\sigma_n^0=10^{-39}~\mathrm{cm}^2$. The dominant background rate is from the $2\nu\beta\beta$-decay of the $^{136}\mathrm{Xe}$ isotope (solid purple line), which is always at least~30 times larger than the dark matter signal. These panels demonstrate that observing the inelastic signal with a single-phase xenon experiment that only measures the~S1 signal will be very challenging because of the large background rate.
\label{fig:drdsrates}}
\end{figure}
Two-phase experiments provide additional information in the form of the~$\mathrm{S2}$ signal. In figure~\ref{fig:StwoSone} we therefore plot the signal and background distributions in the~$\log_{10}\left(\mathrm{S2_b}/\mathrm{S1} \right)$ -- $\mathrm{S1}$ plane traditionally used by two-phase xenon experiments. The black and purple lines show the electronic and nuclear recoil bands, respectively. The solid lines show the median while the dashed lines show $\pm1.28 \sigma$ around the median, such that~$10\%$ of events are above and~$10\%$ below the dashed lines. The bands are calculated by passing a constant energy spectrum through our detector simulations for nuclear and beta-electronic recoils. The overall shape of the bands, and in particular that they separate at large~$\mathrm{S1}$, matches the behaviour observed with real detectors (see e.g.~\cite{Dahl:2009nta}). The blue and red contours indicate where 68\% and 95\% of events occur for inelastic scattering off the~$^{129}\mathrm{Xe}$ and~$^{131}\mathrm{Xe}$ isotopes for~$m_{\rm{DM}}=1000~\mathrm{GeV}$ and~$\sigma_n^0=10^{-39}~\mathrm{cm}^2$, respectively. Unlike the contour regions shown in figure~\ref{fig:S1S2plot}, these contours are not elliptical but have a more extended shape. This shape change arises because figure~\ref{fig:StwoSone} includes the effect of all possible recoil energies of the nucleus while figure~\ref{fig:S1S2plot} was for a single nuclear recoil energy. Finally, the circles and triangles show the simulated events expected for an exposure of one tonne-year and the dark matter parameters mentioned above. The open grey circles show the electronic background events, which as expected from figure~\ref{fig:drdsrates}, become more abundant at higher values of~$\mathrm{S1}$. The filled blue and filled red triangles show the inelastic events from the $39.6~\mathrm{keV}$ and $80.2~\mathrm{keV}$ de-excitation after scattering off the~$^{129}\mathrm{Xe}$ and~$^{131}\mathrm{Xe}$ isotopes. 
The open green triangles show the events from elastic scattering off xenon, which are more abundant at smaller values of~$\mathrm{S1}$.
\begin{figure}[t!]
\centering
\includegraphics[width=0.49\columnwidth]{200_events_1tyr.pdf}
\includegraphics[width=0.49\columnwidth]{1000_events_1tyr.pdf}
\caption{The left and right panels show a simulation of the background and signal regions for the {\it XenonA200} and {\it XenonB1000} benchmark scenarios described in section~\ref{sec:detect}. The black and purple bands show the electronic and nuclear recoil bands, which contain~$80\%$ of the background and elastic scattering dark matter signals. The blue and red contours show where~68\% and~95\% of events occur for inelastic scattering with the~$^{129}\mathrm{Xe}$ and~$^{131}\mathrm{Xe}$ isotopes for~$m_{\rm{DM}}=1000~\mathrm{GeV}$ and~$\sigma_n^0=10^{-39}~\mathrm{cm}^2$. The circles and triangles show the simulated events expected for an exposure of one tonne-year. The open grey circles show the background events, the filled blue and filled red triangles show the inelastic events arising from inelastic scattering off the~$^{129}\mathrm{Xe}$ and~$^{131}\mathrm{Xe}$ isotopes, while the open green triangles show the events from elastic scattering off xenon. Two-phase xenon experiments allow for some discrimination between the inelastic signal and the background events because the signal region extends below the electronic recoil band.
\label{fig:StwoSone}}
\end{figure}
Both panels of figure~\ref{fig:StwoSone} show that the signal and background distributions are slightly displaced. The displacement occurs for two reasons. The first is that nuclear recoils have a lower~$\mathrm{S2_b}$ for the same~$\mathrm{S1}$ compared to an electronic event (this is why the nuclear band is below the electronic recoil band) and the second is that gamma-electronic interactions have a higher~$\mathrm{S1}$ and lower~$\mathrm{S2_b}$ than a beta-interaction of the same energy (as shown in figure~\ref{fig:S1S2plot}). Both effects mean that the inelastic signal region lies below the electronic recoil band. This displacement is crucial as it allows for some discrimination between signal and background events. This means that two-phase xenon detectors should have a significantly better sensitivity than single-phase detectors.
Finally, we discuss the differences between the two benchmark scenarios. All of the signals have larger~$\mathrm{S1}$ and~$\mathrm{S2_b}$ values for the {\it XenonB1000} scenario because of the larger~$g_1$ and~$g_2$ values. The extent to which the signal regions extend below the electronic recoil band depends on the detector parameters, particularly the applied electric drift field. For {\it XenonA200} (left panel), which has a drift field of $200~\mathrm{V/cm}$, $78\%$ of the~$^{129}\mathrm{Xe}$ inelastic signal events fall below the lower dashed line of the electronic recoil band. In comparison, for {\it XenonB1000} (right panel) where the drift field is $1000~\mathrm{V/cm}$, $92\%$ of the~$^{129}\mathrm{Xe}$ inelastic signal events fall below this line. For the~$^{131}\mathrm{Xe}$ signal region, $92\%$ of the events fall below the lower dashed line of the electronic recoil band for both the {\it XenonA200} and {\it XenonB1000} benchmark scenarios so we expect a similar sensitivity to the inelastic signal from the~$^{131}\mathrm{Xe}$ isotope for these scenarios.
To demonstrate that the drift field is primarily responsible for the separation of the signal region and electronic recoil band, we repeated this analysis with the detector parameters of {\it XenonA200} but with a drift field of $1000~\mathrm{V/cm}$ instead of $200~\mathrm{V/cm}$. In this case, we found that $91\%$ of the~$^{129}\mathrm{Xe}$ inelastic signal events fall below the lower dashed line of the electronic recoil band, similar to the $92\%$ obtained for {\it XenonB1000}. Similarly, for the detector parameters of {\it XenonB1000} but with a drift field of $200~\mathrm{V/cm}$ instead of $1000~\mathrm{V/cm}$, we found a value of $80\%$, similar to the $78\%$ obtained for {\it XenonA200}.
Figure~\ref{fig:StwoSone} was generated for~$m_{\rm{DM}}=1000~\mathrm{GeV}$ and~$\sigma_n^0=10^{-39}~\mathrm{cm}^2$ but similar signal regions hold for other masses. This should not be too surprising since most of the~$\mathrm{S1}$ and~$\mathrm{S2_b}$ signal originates from the de-excitation photon whose energy is always the same. The primary change is that the ratio of the~$^{129}\mathrm{Xe}$ to~$^{131}\mathrm{Xe}$ rate is larger for smaller mass values (cf.~figures~\ref{fig:dRdS1spect} and~\ref{fig:drdsrates}).
\section{Characterising the detection sensitivity}
\label{sec:discovery}
In this section we describe our method for characterising the sensitivity of two-phase xenon experiments to the inelastic scattering process. We will do this by calculating the `discovery limit' or as we will call it, the discovery reach. This was introduced in~\cite{Billard:2011zj} and has been used extensively to characterise the limiting effect of the neutrino background (see e.g.~\cite{Billard:2013qya}). We first describe the formalism behind this frequentist approach and then provide specific details of our calculation.
The discovery reach is the smallest cross-section for which~90\% of experiments make at least a~$3\sigma$ discovery of the signal under consideration. To calculate it, we make use of the frequentist test statistic for the discovery of a positive signal~\cite{Cowan:2010js}:
\begin{equation}
q_0=\begin{cases}
-2\ln \lambda(0) &\hat{\sigma}_n^0\geq 0\\
0 &\hat{\sigma}_n^0<0
\end{cases}
\end{equation}
where the profile likelihood ratio is
\begin{equation}
\lambda(0)=\frac{L(\sigma_n^0=0,\hat{\hat{\vec{A}}}_{\mathrm{BG}})}{L(\hat{\sigma}_n^0,\hat{\vec{A}}_{\mathrm{BG}})}
\end{equation}
and a single hat (\;$\hat{}$\;) denotes the parameter value that maximises the extended likelihood $L$ unconditionally, while a double hat (\;$\hat{\hat{}}$\;) denotes the value that maximises $L$ subject to the constraint $\sigma_n^0=0$. Here $\vec{A}_{\mathrm{BG}}=\{A_{2 \nu \beta \beta}, A_{pp}, A_{\mathrm{Kr}}, A_{\mathrm{Rn}}, A_{\mathrm{Be}}, A_{\mathrm{mat}} \}$ are the amplitudes of the six background components discussed in section~\ref{sec:backgrounds}.
In our case the extended likelihood~\cite{Barlow:1990vc} (for a given value of the dark matter mass) is
\begin{equation}
\begin{split}
L(\sigma_n^0,\vec{A}_{\mathrm{BG}})&=\frac{\left(\mu_{\mathrm{DM}}+\sum^6_{j=1} \mu_{\mathrm{BG}j}\right)^N}{N!} \exp\left({-\mu_{\mathrm{DM}}-\sum^6_{j=1} \mu_{\mathrm{BG}j}} \right)\cdot \prod^6_{m=1} L_m(A_{\mathrm{BG}m})\\
&\cdot\prod^N_{i=1}\Biggl[\frac{\mu_{\mathrm{DM}}}{\mu_{\mathrm{DM}}+\sum^6_{k=1} \mu_{\mathrm{BG}k}} f_{\mathrm{DM}}(\mathrm{S1}_i,\log_{10}(\mathrm{S2_b}/\mathrm{S1})_i)\\
&\qquad+\sum^6_{j=1}\frac{\mu_{\mathrm{BG}j}}{\mu_{\mathrm{DM}}+\sum^6_{k=1} \mu_{\mathrm{BG}k} } f_{\mathrm{BG}j} (\mathrm{S1}_i,\log_{10}(\mathrm{S2_b}/\mathrm{S1})_i) \Biggr]\;,
\end{split}
\end{equation}
where $\mu_{\mathrm{DM}}\propto \sigma_n^0$ and $\mu_{\mathrm{BG}j} \propto A_{\mathrm{BG} j}$ are the mean number of events from dark matter and the background processes respectively, $f_{\mathrm{DM}}$ and $f_{\mathrm{BG}}$ are the unit normalised two-dimensional probability distribution functions for the signal and background processes in the $\mathrm{S1}$ -- $\log_{10}(\mathrm{S2_b}/\mathrm{S1})$ plane, $N$ is the total number of observed events and $\{\mathrm{S1}_i,\log_{10}(\mathrm{S2_b}/\mathrm{S1})_i\}$ are the values for a single event. Finally, $L_m(A_{\mathrm{BG}m})$ are the individual likelihood functions for the background normalisations, which we assume are Normal distributions with a standard deviation given by the respective error quoted in figure~\ref{fig:backrates}. As we are dealing with hypothetical experiments, we generate the unit normalised two-dimensional probability distribution functions $f_{\mathrm{DM}}$ and $f_{\mathrm{BG}j}$ from Monte Carlo by generating approximately two~million events for each process in the $\mathrm{S1}$ -- $\log_{10}(\mathrm{S2_b}/\mathrm{S1})$ plane.
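Once the pdfs have been tabulated, the statistical part of this likelihood is straightforward to evaluate; a minimal sketch follows, in which the Gaussian constraint terms $L_m$ on the background normalisations are omitted and constant factors such as $N!$ are dropped.

```python
import numpy as np

def neg_log_likelihood(mu_dm, mu_bg, f_dm, f_bg):
    """Extended unbinned negative log-likelihood, up to constants.

    mu_dm : expected dark matter events (proportional to sigma_n^0)
    mu_bg : length-6 array of expected background counts
    f_dm  : f_DM evaluated at each observed (S1, log10(S2b/S1)) point
    f_bg  : (6, N) array of f_BGj evaluated at the same points
    """
    mu_tot = mu_dm + mu_bg.sum()
    per_event = mu_dm * f_dm + mu_bg @ f_bg   # mixture density per event
    return mu_tot - np.sum(np.log(per_event))
```

Minimising this over $\mu_{\mathrm{DM}}$ and the background amplitudes (with the constraint terms restored) gives the quantities entering the profile likelihood ratio.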
The results of Wilks~\cite{Wilks:1938dza} and Wald~\cite{Wald:1943} allow us to relate the significance with which we can reject the background-only hypothesis $(\sigma_n^0=0)$ to the test statistic in a simple way:
\begin{equation}
\label{eq:Z0}
Z_0=\sqrt{q_0}\;,
\end{equation}
where $Z_0$ is the number of standard deviations. In appendix~\ref{app:WilksWald}, we explicitly demonstrate that the approximation of Wald is good, so that eq.~\eqref{eq:Z0} is accurate. To obtain the discovery reach for each value of $m_{\rm{DM}}$, we simulate a minimum of~2500 mock experiments and find the cross-section $\sigma_n^0$ for which 90\% of experiments have $Z_0 \geq 3$. We will present a separate discovery reach for inelastic scattering off the~$^{129}\mathrm{Xe}$ and~$^{131}\mathrm{Xe}$ isotopes. We are able to do this because, as figure~\ref{fig:StwoSone} shows, the two signal regions are well separated from each other and also from the nuclear recoil band, which contains events from the elastic scattering process. The profile likelihood analysis takes into account the expected dark matter signal in the $\mathrm{S1}$ -- $\log_{10}(\mathrm{S2_b}/\mathrm{S1})$ plane, so no cuts to identify a signal region are required. However, in practice, to improve the run-time of our calculations, for each discovery reach calculation we restrict our analysis to the~$\mathrm{S1}$ and~$\log_{10}(\mathrm{S2_b}/\mathrm{S1})$ values around the dark matter signal region of interest. The restricted region is chosen to contain at least~$95\%$ of the events for each dark matter signal under consideration. By trying different regions, we found that our results are not sensitive to any reasonable choice. In appendix~\ref{app:cutcount}, we also provide a discovery reach calculation using a more conservative cut-and-count method. This serves as a useful cross-check against the profile likelihood analysis.
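The discovery-reach procedure can be illustrated with a drastically simplified toy: a single-bin counting experiment with a known (fixed) background, for which $q_0$ has a closed form and $Z_0=\sqrt{q_0}$. The background level, scan step and toy counts below are invented for illustration; the full analysis profiles the six background amplitudes and uses the two-dimensional pdfs.

```python
import math
import random

def z0(n_obs, b):
    """Asymptotic discovery significance for one Poisson bin with known
    background b: Z0 = sqrt(q0), q0 = 2*(N*ln(N/b) + b - N) for N > b."""
    if n_obs <= b:
        return 0.0
    q0 = 2.0 * (n_obs * math.log(n_obs / b) + b - n_obs)
    return math.sqrt(q0)

def poisson(lam, rng):
    """Poisson sample via Knuth's multiplication method (fine for small lam)."""
    k, p, limit = 0, 1.0, math.exp(-lam)
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def discovery_fraction(s, b, n_toys=2000):
    """Fraction of mock experiments reaching Z0 >= 3 for expected signal s."""
    rng = random.Random(0)
    return sum(z0(poisson(s + b, rng), b) >= 3.0 for _ in range(n_toys)) / n_toys

# Scan the expected signal (proportional to the cross-section) until 90%
# of the mock experiments make at least a 3-sigma discovery.
b, s = 2.0, 0.0
while discovery_fraction(s, b) < 0.9:
    s += 0.5
```

For this particular toy background the scan terminates near ten expected signal events; in the real analysis the scanned quantity is $\sigma_n^0$ and the significance comes from the profile likelihood ratio rather than a closed form.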
\section{Discovery reach for two-phase xenon detectors}
\label{sec:results}
We present in figure~\ref{fig:limit} the main results of this paper. The blue and red lines show the discovery reach for detecting inelastic scattering off the~$^{129}\mathrm{Xe}$ and~$^{131}\mathrm{Xe}$ isotopes for an exposure of 15~tonne-years. These lines show the smallest cross-section for which~90\% of experiments are able to make at least a~$3\sigma$ discovery of the inelastic signal. The left and right panels show the results for the {\it XenonA200} and {\it XenonB1000} benchmark scenarios (described in section~\ref{sec:detect}). The black dashed line shows the LUX 90\%~CL limit on the spin-dependent dark matter-neutron cross-section from their reanalysis of the 2013 search for dark matter that elastically scatters with the xenon isotopes~\cite{Akerib:2013tjd,Akerib:2015rjg,Akerib:2016lao}. The black dot-dashed line shows the projected limit from the XENON1T search for elastically scattering dark matter particles after an exposure of two tonne-years, which should be achieved by 2018.
\begin{figure}[t!]
\centering
\includegraphics[width=0.49\columnwidth]{limit_15tyr_200.pdf}
\includegraphics[width=0.49\columnwidth]{limit_15tyr_1000.pdf}
\caption{This figure shows the sensitivity of two-phase xenon experiments to the inelastic scattering process, which is our main result. The blue and red lines in both panels show the discovery reach for inelastically scattering off the~$^{129}\mathrm{Xe}$ and~$^{131}\mathrm{Xe}$ isotopes, respectively. The left and right panels show the results for the {\it XenonA200} and {\it XenonB1000} benchmark scenarios assuming a 15~tonne-year exposure. In the parameter space above these lines, 90\% of experiments will make at least a~$3 \sigma$ detection of the inelastic signal. The discovery reach for inelastically scattering off the~$^{129}\mathrm{Xe}$ isotope is better for the {\it XenonB1000} scenario, while for scattering off~$^{131}\mathrm{Xe}$, both benchmark scenarios have similar sensitivity. Also shown is the LUX exclusion limit (black dashed line) from their search for elastically scattering dark matter and the projected exclusion limit from XENON1T assuming a two tonne-year exposure (black dot-dashed line). \label{fig:limit}}
\end{figure}
For both xenon isotopes and both benchmark scenarios, figure~\ref{fig:limit} shows that the discovery reach of the inelastic signal is below the current LUX exclusion limit for a dark matter mass greater than~$\sim100$~GeV. This means that for dark matter particles that are heavier than this, it is possible for the inelastic signal to be detected by a future two-phase xenon detector that collects an exposure of 15~tonne-years (such as LZ or XENONnT). The parameter space where the inelastic signal may be detected is populated by many dark matter models, including neutralino scenarios where the higgsino component is large (see e.g.~\cite{Cohen:2010gj,Chalons:2012xf,Bertone:2015tza}). XENON1T is expected to be significantly more sensitive than LUX and will probe all of the parameter space where the inelastic signal may be detected with a 15~tonne-year exposure. Therefore, if the inelastic signal is ever to be detected with this exposure, XENON1T should find evidence for the elastic scattering dark matter signal by~2018.
Comparing the left and right panels of figure~\ref{fig:limit}, we see that the discovery reach for inelastic scattering off the~$^{131}\mathrm{Xe}$ isotope (red line) is similar for both benchmark scenarios. This is because the ability to discriminate between signal and background processes is similar for both scenarios. In contrast, the discovery reach for inelastic scattering off the~$^{129}\mathrm{Xe}$ isotope (blue line) is a factor~$\sim3.5$ lower for the {\it XenonB1000} benchmark scenario. This is because, as figure~\ref{fig:StwoSone} shows, more of the signal region lies below the electronic recoil band for the {\it XenonB1000} scenario so the discrimination power is better (cf.~the discussion at the end of section~\ref{subsec:comp} where the discrimination power was quantified).
\begin{figure}[t!]
\centering
\includegraphics[width=0.49\columnwidth]{xe136_200.pdf}
\includegraphics[width=0.49\columnwidth]{xe136_1000.pdf} \\ \vspace{3mm}
\includegraphics[width=0.49\columnwidth]{exp_200.pdf}
\includegraphics[width=0.49\columnwidth]{exp_1000.pdf}
\caption{The upper panels show how the discovery reach changes as the rate of the main background, the $2\nu\beta\beta$-decay of the $^{136}\mathrm{Xe}$ isotope, is varied. The cross-section is normalised to the discovery reach for an abundance of~$8.86\%$, the~$^{136}\mathrm{Xe}$ abundance of natural xenon. The lower panels show the variation in the discovery reach for different exposures, normalised to the cross-section for a 15~tonne-year exposure. In each panel, we show the discovery reach for two values of the dark matter mass and find that the variation is independent of the mass. The left and right panels show the result for our two benchmark scenarios. They show that the variation is only weakly dependent on the detector parameters. By reducing the~$^{136}\mathrm{Xe}$ abundance to 1\%, the same sensitivity can be achieved with an exposure that is $\sim35\%$ smaller. The inelastic signal search regions are not background free so the discovery reach scales only as $\sim(\mathrm{exposure})^{-0.7}$. \label{fig:variations}}
\end{figure}
The discovery reach shown in figure~\ref{fig:limit} assumes an exposure of 15 tonne-years and the background rates discussed in section~\ref{sec:backgrounds}. We now explore how the discovery reach changes as we vary these assumptions. Firstly, we examine variations in the background rate. As we showed in section~\ref{sec:backgrounds}, the dominant background is from the $2\nu\beta\beta$-decay of the $^{136}\mathrm{Xe}$ isotope so it is possible to reduce this background rate by reducing the abundance of the~$^{136}\mathrm{Xe}$ isotope. Depleting (or enriching) xenon of the~$^{136}\mathrm{Xe}$ isotope is relatively straightforward, as demonstrated by experiments that use xenon enriched in~$^{136}\mathrm{Xe}$ to search for neutrinoless double beta decay. The upper two panels of figure~\ref{fig:variations} show how the discovery reach changes as we vary the~$^{136}\mathrm{Xe}$ abundance. The left and right panels show the results for the {\it XenonA200} and {\it XenonB1000} benchmark scenarios, respectively. The results are shown for two dark matter mass values and we plot the discovery reach cross-section normalised to the discovery reach assuming that the fractional abundance of~$^{136}\mathrm{Xe}$ is~$8.86\%$, which is the abundance in natural xenon and is the value that we have assumed throughout the paper. As expected, the discovery reach extends to smaller values of the cross-section as the fractional abundance is reduced. Figure~\ref{fig:variations} shows that the variation does not depend on the dark matter mass and is only weakly dependent on the benchmark scenario. For both scenarios, we see that lowering the~$^{136}\mathrm{Xe}$ abundance to~$1\%$ means that the smallest cross-section for which the inelastic signal may be discovered is reduced by $\sim35\%$.
Secondly, we examine variations in the exposure. These results are shown in the lower two panels of figure~\ref{fig:variations}, where the discovery reach cross-section has been normalised to the discovery reach assuming a 15~tonne-year exposure. Figure~\ref{fig:variations} again demonstrates that the variation does not depend on the dark matter mass and is only weakly dependent on the benchmark scenario. The discovery reach for inelastically scattering off~$^{129}\mathrm{Xe}$ for the {\it XenonA200} scenario decreases more slowly as the exposure increases compared to the other scenarios because this signal region is most dominated by background processes (cf.~the discussion at the end of section~\ref{subsec:comp}). For a background free signal region, the improvement in the discovery reach is expected to scale as $(\mathrm{exposure})^{-1}$. As there is always some background contamination for the inelastic signals and the benchmark scenarios that we consider, we instead find that the discovery reach scales as $\sim(\mathrm{exposure})^{-0.7}$. In practice, this means that for a 200 tonne-year exposure, a benchmark exposure used in sensitivity studies for DARWIN, the various discovery reach lines presented in figure~\ref{fig:limit} should be lowered by a factor~$\sim 5$.
We end this section by returning to the question of whether a two tonne-year exposure of XENON1T can probe all of the parameter space where the inelastic signal may be discovered. With the scaling of the exposure determined in figure~\ref{fig:variations}, we find that the exposure required to reach the XENON1T exclusion limit for all scenarios except the~$^{129}\mathrm{Xe}$ signal in the {\it XenonB1000} benchmark scenario is~$\sim 500$~tonne-year. Such a large exposure is unlikely to be achieved in the foreseeable future. For the~$^{129}\mathrm{Xe}$ signal in the {\it XenonB1000} scenario, an exposure of approximately $\{225, 90, 70 \}$~tonne-year for $m_{\rm{DM}}=\{150, 1000, 10000 \}~\mathrm{GeV}$ is required for the discovery cross-section to reach the XENON1T limit shown in figure~\ref{fig:limit}. The exposure can be reduced to approximately $\{165, 60, 45 \}$~tonne-year if the~$^{136}\mathrm{Xe}$ abundance is reduced to~$1\%$. This demonstrates that it may be possible to discover inelastic scattering off the~$^{129}\mathrm{Xe}$ isotope even for cross-sections below the XENON1T limit, but only for optimal detector parameters (as in the {\it XenonB1000} scenario) and with large exposures that will only be achieved with detectors such as DARWIN.
\section{Conclusions and outlook}
\label{sec:con}
The canonical search for dark matter with direct detection experiments is for an elastic scattering process where the dark matter simply causes the nucleus to recoil. It was long ago realised that low-lying inelastic transitions of the nucleus may also play an important role since the dark matter's kinetic energy is sufficient to excite the target nucleus. In this instance, rather than just measuring the recoil of the nucleus, direct detection experiments measure the nuclear recoil energy together with the photon-energy released when the nucleus transitions back to the ground state (see figure~\ref{fig:scattering}).
The inelastic scattering rate does not have the nucleon-number--squared enhancement~$(\sim10^4)$ found with elastic {\it spin-independent} interactions so the inelastic signal will only be measurable for {\it spin-dependent} interactions, whose elastic scattering rate also does not have the nucleon-number--squared enhancement. Two-phase xenon detectors are an excellent probe of the elastic and inelastic spin-dependent interaction, having two isotopes sensitive to these processes,~$^{129}\mathrm{Xe}$ and~$^{131}\mathrm{Xe}$, that each comprise approximately~$25\%$ of natural xenon and have low-lying excitations at $39.6~\mathrm{keV}$ and $80.2~\mathrm{keV}$, respectively. The purpose of this paper was to quantify the sensitivity of future tonne-scale two-phase xenon experiments, such as LZ, XENONnT and DARWIN, to the inelastic signal. We do this for the axial-vector interaction (eq.~\ref{eq:A-V}), for which accurate calculations of the nuclear structure functions are available.
We considered two benchmark scenarios, {\it XenonA200} and {\it XenonB1000} (described in section~\ref{sec:detect}), whose most important difference is the applied drift field of $200~\mathrm{V/cm}$ and $1000~\mathrm{V/cm}$, respectively. The parameters in these scenarios were chosen because they should bracket the performance of future experiments. We implemented a realistic Monte Carlo simulation of a two-phase xenon detector to model these scenarios, relying on the NEST phenomenological model to describe the interactions of the nucleus and photon in liquid xenon. This was vital so that we could translate energies into the measurable quantities, the primary (S1) and secondary (S2) scintillation signals (see figure~\ref{fig:S1S2plot}). We also had to quantify the background rates, finding that the $2\nu\beta\beta$-decay of~$^{136}\mathrm{Xe}$ dominates (see figure~\ref{fig:backrates}).
We demonstrated that two-phase xenon detectors allow for some discrimination between the inelastic signal and the background events because the signal region has a smaller $\log_{10}\left(\mathrm{S2_b}/\mathrm{S1} \right)$ value compared to the main backgrounds (see figure~\ref{fig:StwoSone}). Our main results were shown in figures~\ref{fig:limit} and~\ref{fig:variations}, where we quantified the sensitivity of our benchmark scenarios to the inelastic signal in terms of the discovery reach, which is the smallest cross-section for which~90\% of experiments detect the signal with at least a~$3\sigma$ significance. This cross-section is below the current LUX exclusion limit for a dark matter mass greater than~$\sim100$~GeV, implying that for dark matter particles that are heavier than this, it is possible for the inelastic signal to be detected with a future two-phase xenon detector. Except in the case of optimal detector parameters (as in the {\it XenonB1000} scenario) and large exposures (more than 50~tonne-years), XENON1T, with a two tonne-year exposure, will probe all of the parameter space where the inelastic signal may be detected with their search for elastically scattering dark matter.
We end by discussing some of the possible extensions of this work. Firstly, we were only able to consider the inelastic signal from the axial-vector interaction since this is the only spin-dependent interaction for which the inelastic structure functions have been calculated. It would be desirable to calculate the discovery reach of other spin-dependent operators such as VA $(\bar{\chi}\gamma^{\mu}\chi \bar{\psi}_q\gamma_{\mu} \gamma^5\psi_q)$ or SP $(\bar{\chi}\chi \bar{\psi}_q \gamma^5\psi_q)$. Secondly, in this work we assumed that nuclear recoil and photon scintillation signals could not be distinguished. It may be possible that for some fraction of the events, the photon travels sufficiently far from the initial interaction to give a distinctive pulse shape different from background events. This would further improve the sensitivity to inelastic scattering signals. Thirdly, we focussed solely on two-phase xenon experiments as these are better able to distinguish between signal and background signals compared to single-phase xenon detectors. However it may be possible that tonne-scale single-phase detectors can improve their sensitivity to the inelastic signal by performing an annual modulation search or by modelling the~$\mathrm{S1}$ pulse shape. Finally, we have stated that a detection of the inelastic signal together with the elastic signal would point strongly to a spin-dependent interaction over a spin-independent interaction from a single xenon experiment. It would be desirable to have a dedicated study to quantify this statement and to concretely demonstrate how a measurement of both signals would help pin down the nature of the interaction between dark matter and the Standard Model particles.
\acknowledgments{
I am particularly grateful to Nassim Bozorgnia for discussions that reignited my interest in this topic and for providing data from the EAGLE simulation, and to Andrew Brown for answering {\it many} of my naive questions. I'm also grateful for discussions with Rafael Lang and Simon Fiorucci at the Rencontres de Blois conference, with Martin Hoferichter, Philipp Klos and Achim Schwenk at the Mainz Institute for Theoretical Physics (MITP), with Alastair Currie and Marc Schumann at the TAUP conference, to Felix Kahlhoefer for comments on an early draft of this manuscript, and to Sebastian Liem for discussions on statistics. This work is part of the research programme of the Foundation for Fundamental Research on Matter~(FOM), which is part of the Netherlands Organisation for Scientific Research~(NWO).
}
\section{Introduction} Let $S(\infty)$ denote the group
whose elements are finite permutations of $\{1,2,3,\ldots\}$. The
group $S(\infty)$ is called the infinite symmetric group, and it
is a model example of a ``big'' group. The harmonic analysis for
such groups is an active topic of modern research, with
connections to different areas of mathematics from enumerative
combinatorics to random growth models and to the theory of
Painlev\'e equations. A theory of harmonic analysis on the
infinite symmetric and infinite-dimensional unitary groups was
developed by Kerov, Olshanski and Vershik \cite{KOV1, KOV2},
Borodin \cite{Borodin-1}, Borodin and Olshanski
\cite{Borodin-2,Borodin-3}. For an introduction to harmonic
analysis on the infinite symmetric group see Olshanski
\cite{olshanski2003}. The paper by Borodin and Deift
\cite{BorodinDeift} studies differential equations arising in the
context of harmonic analysis on the infinite-dimensional unitary
group, and the paper by Borodin and Olshanski \cite{BorodinCombin}
describes the link to problems of enumerative combinatorics, and
to certain random growth models. For very recent works on the
subject see, for example, Vershik and Tsilevich \cite{vershik},
Borodin and Kuan \cite{BorodinKuan}.
Set
$$ G=S(\infty)\times S(\infty),
$$
$$
K=\diag S(\infty)=\left\{(g,g)\in G \mid g\in
S(\infty)\right\}\subset G.
$$
Then $(G,K)$ is an infinite dimensional Gelfand pair in the sense
of Olshanski \cite{olshanskiGelfandPairs}. It can be shown that
the biregular spherical representation of $(G,K)$ in the space
$\ell^2\left(S(\infty)\right)$ is irreducible. Thus the
conventional scheme of noncommutative harmonic analysis is not
applicable to the case of the infinite symmetric group.
In 1993, Kerov, Olshanski and Vershik \cite{KOV1} (Kerov,
Olshanski and Vershik \cite{KOV2} contains the details)
constructed a family $\{T_z: z\in\mathbb C\}$ of unitary representations
of the bisymmetric infinite group $G=S(\infty)\times
S(\infty)$. Each representation $T_z$ acts in the Hilbert space
$L^2(\mathfrak{S},\mu_t)$, where $\mathfrak{S}$ is a certain compact space called the space of
virtual permutations, and $\mu_t$ is a distinguished $G$-invariant probability measure on
$\mathfrak{S}$ (here $t=|z|^2$). The representations $T_z$
(called the generalized regular representations) are reducible.
Moreover, it is possible to extend the definition of $T_z$ to the
limit values $z=0$ and $z=\infty$, and it turns out that
$T_{\infty}$ is equivalent to the biregular representation
of $S(\infty)\times S(\infty)$. Thus, the family $\{T_z\}$ can be
viewed as a deformation of the biregular representation.
Once the representations $T_z$
are constructed, the main problem of harmonic analysis on the
infinite symmetric group is the decomposition of the generalized
regular representations $T_z$ into irreducible ones.
One of the initial steps in this direction can be described as
follows. Let $\textbf{1}$ denote the function on $\mathfrak{S}$
identically equal to 1. Consider this function as a vector of
$L^2(\mathfrak{S},\mu_t)$. Then $\textbf{1}$ is a spherical
vector, and the pair $(T_z,\textbf{1})$ is a spherical
representation of the pair $(G,K)$, see, for example, Olshanski
\cite{olshanski2003}, Section 2. The spherical function of
$(T_z,\textbf{1})$ is the matrix coefficient
$(T_z(g_1,g_2)\textbf{1},\textbf{1})$, where $(g_1,g_2)\in
S(\infty)\times S(\infty)$. Set
$$
\chi_z(g)=\left(T_z(g,e)\textbf{1},\textbf{1}\right),\; g\in
S(\infty).
$$
The function $\chi_z$ can be understood as a character of the
group $S(\infty)$ corresponding to $T_z$. Kerov, Olshanski and
Vershik \cite{KOV1,KOV2} found the restriction
of
$\chi_z$ to $S(n)$ in terms of irreducible characters of $S(n)$.
Namely, let $\mathbb Y_n$ be the set of Young diagrams with $n$ boxes.
For $\lambda\in\mathbb Y_n$ denote by $\chi^{\lambda}$ the corresponding
irreducible character of the symmetric group $S(n)$ of degree
$n$. Then for any $n=1,2,\ldots$ the following formula holds true
\begin{equation}\label{EquationHiDecomposition}
\chi_z\biggl|_{S(n)}=\sum\limits_{\lambda\in\mathbb Y_n}M^{(n)}_{z,\bar{z}}(\lambda)\frac{\chi^{\lambda}}{\chi^{\lambda}(e)}.
\end{equation}
In this formula $M^{(n)}_{z,\bar{z}}$ is a probability measure
(called the $z$-measure) on the set of Young diagrams with $n$
boxes, or on the set of integer partitions of $n$. Formula
(\ref{EquationHiDecomposition}) defines the $z$-measure
$M^{(n)}_{z,\bar{z}}$ as a weight attached to the corresponding
Young diagram in the decomposition of the restriction of $\chi_z$
to $S(n)$ into irreducible characters of $S(n)$. Expression
(\ref{EquationHiDecomposition}) makes it possible to reduce the
problem of decomposing $T_z$ into irreducible components to the
computation of spectral counterparts of
$M^{(n)}_{z,\bar{z}}$.
In addition to their role in the harmonic analysis on the infinite
symmetric group the $z$-measures described above are quite
interesting objects by themselves. It is possible to introduce
more general objects, namely measures $M_{z,z'}^{(n)}$ on Young diagrams
with $n$ boxes. Such measures depend on two complex parameters
$z,z'$. If $z'=\bar{z}$, then $M_{z,z'}^{(n)}$ coincide with the
$z$-measures in equation (\ref{EquationHiDecomposition}). Under
suitable restrictions on $z$ and $z'$ the weights $M_{z,z'}^{(n)}$
are nonnegative and their sum is equal to 1. Thus $M^{(n)}_{z,z'}$
can be understood as probability measures on $\mathbb Y_n$. For special
values of parameters $z,z'$ the $z$-measures turn into discrete
orthogonal polynomial ensembles which in turn related to
interesting probabilistic models, see Borodin and Olshanski
\cite{BorodinCombin}. In addition, the $z$-measures are a
particular case of the Schur measures introduced by Okounkov in
\cite{okounkov1}. The $z$-measures $M_{z,z'}^{(n)}$ were studied
in detail in a series of papers by Borodin and Olshanski
\cite{Borodin-4, BorodinCombin, BO1, BO}, in Okounkov
\cite{okounkov0}, and in Borodin, Olshanski, and Strahov
\cite{BorodinOlshanskiStrahov}.
Moreover, as follows from Kerov \cite{kerov} and Borodin and
Olshanski \cite{BO1}, it is natural to consider a deformation
$M_{z,z',\theta}^{(n)}$ of $M_{z,z'}^{(n)}$, where $\theta>0$ is
called the parameter of deformation (or the Jack parameter). Then
the measures $M^{(n)}_{z,z'}$ can be thought of as the $z$-measures
with the Jack parameter $\theta=1$. It is shown in Borodin and
Olshanski \cite{BO1} that $M_{z,z',\theta}^{(n)}$ are in many ways
similar to log-gas (random-matrix) models with arbitrary
$\beta=2\theta$. In particular, if $\theta=2$ or $\theta=1/2$ one
expects that $M_{z,z',\theta}^{(n)}$ will lead to Pfaffian point
processes, similar to ensembles of Random Matrix Theory of
$\beta=4$ or $\beta=1$ symmetry types, see Borodin and Strahov
\cite{BS}, Strahov \cite{strahov} for the available results in
this direction.
It is the purpose of the present paper to describe the origin of
$z$-measures with the Jack parameters $\theta=2$ and
$\theta=1/2$ in representation theory. First we recall the
notion of the $z$-measures with an arbitrary Jack parameter
$\theta>0$. Then we consider the symmetric group $S(2n)$ viewed
as the group of permutations of the set
$\{-n,\ldots,-1,1,\ldots,n\}$, and its subgroup $H(n)$ defined as
the centralizer of the product of transpositions $(-n,n),
(-n+1,n-1),\ldots ,(-1,1)$. The group $H(n)$ is called the
hyperoctahedral group of degree $n$. One knows that $(S(2n),
H(n))$ are Gelfand pairs, and their inductive limit,
$(S(2\infty),H(\infty))$, is an infinite dimensional Gelfand pair,
see Olshanski \cite{olshanskiGelfandPairs}. We describe the
construction of a family of unitary spherical representations
$T_{z,\frac{1}{2}}$ of the infinite dimensional Gelfand pair
$(S(2\infty),H(\infty))$ and show that $z$-measures with the Jack
parameters $\theta=1/2$ appear as coefficients in the
decomposition of the spherical functions of $T_{z,\frac{1}{2}}$
into spherical functions of the Gelfand pair $(S(2n),H(n))$. Due
to the fact that $z$-measures with the Jack parameters $\theta=2$
and $\theta=1/2$ are related to each other in a very simple way,
see Proposition \ref{PropositionMSymmetries}, the construction
described above provides a representation-theoretic interpretation
for $z$-measures with the Jack parameter $\theta=2$ as well.
Therefore, it is natural to refer to such $z$-measures as the
$z$-measures for the infinite dimensional Gelfand pair
$(S(2\infty),H(\infty))$, or, more precisely, as the
$z$-measures of the representation $T_{z,\frac{1}{2}}$.
The fact that these measures play a role in the harmonic analysis
was mentioned in Borodin and Olshanski \cite{BO1}, and in our
explanation of this representation-theoretic aspect we used many
ideas from Olshanski \cite{olshanskiletter}.
\textbf{Acknowledgements} I am grateful to Grigori Olshanski for
numerous discussions and many valuable comments at different
stages of this work.
\section{The $z$-measures on partitions with the general
parameter $\theta>0$}\label{Sectionztheta} We use Macdonald
\cite{macdonald} as a basic reference for the notations related to
integer partitions and to symmetric functions. In particular,
every decomposition
$$
\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_l):\;
n=\lambda_1+\lambda_2+\ldots+\lambda_{l},
$$
where $\lambda_1\geq\lambda_2\geq\ldots\geq\lambda_l$ are positive
integers, is called an integer partition. We identify integer
partitions with the corresponding Young diagrams. The set of
Young diagrams with $n$ boxes is denoted by $\mathbb Y_n$.
Following Borodin and Olshanski \cite{BO1}, Section 1, and Kerov
\cite{kerov} let $M_{z,z',\theta}^{(n)}$ be a complex measure on
$\mathbb Y_n$ defined by
\begin{equation}\label{EquationVer4zmeasuren}
M_{z,z',\theta}^{(n)}(\lambda)=\frac{n!(z)_{\lambda,\theta}(z')_{\lambda,\theta}}{(t)_nH(\lambda,\theta)H'(\lambda,\theta)},
\end{equation}
where $n=1,2,\ldots $, and where we use the following notation
\begin{itemize}
\item $z,z'\in\mathbb C$ and $\theta>0$ are parameters, the parameter
$t$ is defined by
$$
t=\frac{zz'}{\theta}.
$$
\item $(t)_n$ stands for the Pochhammer symbol,
$$
(t)_n=t(t+1)\ldots (t+n-1)=\frac{\Gamma(t+n)}{\Gamma(t)}.
$$
\item
$(z)_{\lambda,\theta}$ is a multidimensional analogue of the
Pochhammer symbol defined by
$$
(z)_{\lambda,\theta}=\prod\limits_{(i,j)\in\lambda}(z+(j-1)-(i-1)\theta)
=\prod\limits_{i=1}^{l(\lambda)}(z-(i-1)\theta)_{\lambda_i}.
$$
Here $(i,j)\in\lambda$ stands for the box in the $i$th row
and the $j$th column of the Young diagram $\lambda$, and we
denote by $l(\lambda)$ the number of nonempty rows in the
Young diagram $\lambda$.
\item
$$
H(\lambda,\theta)=\prod\limits_{(i,j)\in\lambda}\left((\lambda_i-j)+(\lambda_j'-i)\theta+1\right),
$$
$$
H'(\lambda,\theta)=\prod\limits_{(i,j)\in\lambda}\left((\lambda_i-j)+(\lambda_j'-i)\theta+\theta\right),
$$
where $\lambda'$ denotes the transposed diagram.
\end{itemize}
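The ingredients above can be transcribed directly into code; the following sketch (an illustrative transcription, not taken from the cited papers) evaluates the weight (\ref{EquationVer4zmeasuren}) at small diagrams. For $\lambda=(1)$ the weight equals $1$, and for $n=2$ the two weights already sum to one.

```python
from math import prod

def conjugate(la):
    """Transposed (conjugate) Young diagram lambda'."""
    return [sum(1 for li in la if li > j) for j in range(la[0])]

def pochhammer(t, n):
    """Pochhammer symbol (t)_n = t(t+1)...(t+n-1)."""
    return prod(t + k for k in range(n))

def poch_jack(z, la, theta):
    """(z)_{lambda,theta}: product of z+(j-1)-(i-1)*theta over boxes (i,j)."""
    return prod(z + j - i * theta for i, li in enumerate(la) for j in range(li))

def hook_prod(la, theta, shift):
    """H(lambda,theta) for shift=1 and H'(lambda,theta) for shift=theta."""
    lap = conjugate(la)
    return prod((la[i] - (j + 1)) + (lap[j] - (i + 1)) * theta + shift
                for i, li in enumerate(la) for j in range(li))

def z_measure(la, z, zp, theta):
    """The weight M^{(n)}_{z,z',theta}(lambda) from the definition above."""
    n = sum(la)
    t = z * zp / theta
    num = prod(range(1, n + 1)) * poch_jack(z, la, theta) * poch_jack(zp, la, theta)
    return num / (pochhammer(t, n) * hook_prod(la, theta, 1.0)
                  * hook_prod(la, theta, theta))

w2, w11 = z_measure([2], 2.0, 3.0, 0.5), z_measure([1, 1], 2.0, 3.0, 0.5)
```

For these sample values $z=2$, $z'=3$, $\theta=1/2$ both weights are positive and $w_2 + w_{11} = 1$.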
\begin{prop}\label{PropositionHH}
The following symmetry relations hold true
$$
H(\lambda,\theta)=\theta^{|\lambda|}H'(\lambda',\frac{1}{\theta}),\;\;(z)_{\lambda,\theta}
=(-\theta)^{|\lambda|}\left(-\frac{z}{\theta}\right)_{\lambda',\frac{1}{\theta}}.
$$
Here $|\lambda|$ stands for the number of boxes in the diagram
$\lambda$.
\end{prop}
\begin{proof}
These relations follow immediately from definitions of
$H(\lambda,\theta)$ and $(z)_{\lambda,\theta}$.
\end{proof}
\begin{prop}\label{PropositionMSymmetries}
We have
$$
M_{z,z',\theta}^{(n)}(\lambda)=M_{-z/\theta,-z'/\theta,1/\theta}^{(n)}(\lambda').
$$
\end{prop}
\begin{proof}
Use definition of $M_{z,z',\theta}^{(n)}(\lambda)$, equation
(\ref{EquationVer4zmeasuren}), and apply Proposition
\ref{PropositionHH}.
\end{proof}
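The identity is also easy to confirm numerically. The following self-contained sketch evaluates both sides of the symmetry relation directly from definition (\ref{EquationVer4zmeasuren}); the diagram and parameter values are chosen arbitrarily.

```python
from math import prod

def conj(la):
    """Transposed Young diagram."""
    return [sum(1 for li in la if li > j) for j in range(la[0])]

def z_measure(la, z, zp, theta):
    """Direct evaluation of M^{(n)}_{z,z',theta}(lambda)."""
    lap = conj(la)
    boxes = [(i, j) for i, li in enumerate(la) for j in range(li)]  # 0-based
    pj = lambda w: prod(w + j - i * theta for (i, j) in boxes)
    hp = lambda s: prod((la[i] - j - 1) + (lap[j] - i - 1) * theta + s
                        for (i, j) in boxes)
    n = len(boxes)
    t = z * zp / theta
    poch_t = prod(t + k for k in range(n))
    return prod(range(1, n + 1)) * pj(z) * pj(zp) / (poch_t * hp(1.0) * hp(theta))

la, z, zp, th = [3, 1], 1.7, 2.3, 0.5
lhs = z_measure(la, z, zp, th)
rhs = z_measure(conj(la), -z / th, -zp / th, 1.0 / th)
```

Note that $t=zz'/\theta$ is invariant under the substitution $(z,z',\theta)\mapsto(-z/\theta,-z'/\theta,1/\theta)$, which is why the Pochhammer factor $(t)_n$ matches on both sides.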
\begin{prop}\label{Prop1.3}
We have
$$
\sum\limits_{\lambda\in\mathbb Y_n}M_{z,z',\theta}^{(n)}(\lambda)=1.
$$
\end{prop}
\begin{proof}
See Kerov \cite{kerov}, Borodin and Olshanski
\cite{BO1,BOHARMONICFUNCTIONS}.
\end{proof}
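For small $n$ the normalisation can be verified by brute force: enumerate $\mathbb Y_n$ and sum the weights. The sketch below is self-contained, with arbitrarily chosen parameter values.

```python
from math import prod

def partitions(n, max_part=None):
    """All partitions of n, parts in weakly decreasing order."""
    max_part = n if max_part is None else max_part
    if n == 0:
        yield []
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield [first] + rest

def z_measure(la, z, zp, theta):
    """Direct evaluation of M^{(n)}_{z,z',theta}(lambda)."""
    lap = [sum(1 for li in la if li > j) for j in range(la[0])]
    boxes = [(i, j) for i, li in enumerate(la) for j in range(li)]  # 0-based
    pj = lambda w: prod(w + j - i * theta for (i, j) in boxes)
    hp = lambda s: prod((la[i] - j - 1) + (lap[j] - i - 1) * theta + s
                        for (i, j) in boxes)
    n = len(boxes)
    t = z * zp / theta
    poch_t = prod(t + k for k in range(n))
    return prod(range(1, n + 1)) * pj(z) * pj(zp) / (poch_t * hp(1.0) * hp(theta))

# Sum over all five Young diagrams with four boxes.
total = sum(z_measure(la, 1.5, 2.5, 2.0) for la in partitions(4))
```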
\begin{prop}
If the parameters $z, z'$ satisfy one of the three conditions listed
below, then the measure $M_{z,z',\theta}^{(n)}$ defined by
expression (\ref{EquationVer4zmeasuren}) is a probability measure
on $\mathbb Y_n$. The conditions are as follows.\begin{itemize}
\item The principal series:
$z\in\mathbb C\setminus(\mathbb Z_{\leq 0}+\mathbb Z_{\geq 0}\theta)$ and $z'=\bar z$.
\item The complementary series: the parameter $\theta$ is a rational number, and both $z,z'$
are real numbers lying in one of the intervals between two
consecutive numbers from the lattice $\mathbb Z+\mathbb Z\theta$.
\item The degenerate series: $z,z'$ satisfy one of the
following conditions\\
(1) $(z=m\theta, z'>(m-1)\theta)$ or $(z'=m\theta,
z>(m-1)\theta)$;\\
(2) $(z=-m, z'<-m+1)$ or $(z'=-m,
z<-m+1)$.
\end{itemize}
\end{prop}
\begin{proof} See Propositions 1.2, 1.3 in Borodin and Olshanski
\cite{BO1}.
\end{proof}
Thus, if the conditions in the Proposition above are satisfied,
then $M_{z,z',\theta}^{(n)}$ is a probability measure defined on
$\mathbb Y_n$, as follows from Proposition \ref{Prop1.3}. In the case when
$z, z'$ are taken either from the principal series or from the
complementary series, we refer to $z, z'$ as admissible
parameters of the $z$-measure under consideration. We will refer
to $M_{z,z',\theta}^{(n)}(\lambda)$ as the $z$-measure with the
deformation (Jack) parameter $\theta$.
\begin{rem}
When both $z,z'$ go to infinity, expression
(\ref{EquationVer4zmeasuren}) has a limit
\begin{equation}\label{EquationPlancherelInfy}
M_{\infty,\infty,\theta}^{(n)}(\lambda)=\frac{n!\theta^{n}}{H(\lambda,\theta)H'(\lambda,\theta)}
\end{equation}
called the Plancherel measure on $\mathbb Y_n$ with general $\theta>0$.
Statistics of the Plancherel measure with the general Jack
parameter $\theta>0$ is discussed in many papers, see, for
example, a very recent paper by Matsumoto \cite{matsumoto}, and
references therein. Matsumoto \cite{matsumoto} compares limiting
distributions of rows of random partitions with distributions of
certain random variables from a traceless Gaussian
$\beta$-ensemble.
\end{rem}
\section{The spaces $ X(n)$ and their projective limit}
\subsection{The homogeneous space $X(n)=H(n)\setminus S(2n)$}
Let $S(2n)$ be the permutation group of $2n$ symbols realized as
that of the set $\{-n,\ldots,-1,1,\ldots,n\}$. Let $\breve{t}\in
S(2n)$ be the product of the transpositions $(-n,n),(-n+1,n-1),
\ldots, (-1,1)$. By definition, the group $H(n)$ is the
centralizer of $\breve{t}$ in $S(2n)$. We can write
$$
H(n)=\left\{\sigma\biggl|\sigma\in S(2n), \sigma
\breve{t}\sigma^{-1}=\breve{t}\right\}.
$$
The group $H(n)$ is called the hyperoctahedral group of degree
$n$.
Set $X(n)=H(n)\setminus S(2n)$. Thus $X(n)$ is the space of right
cosets of the subgroup $H(n)$ in $S(2n)$.
It is not hard to check that the set $X(n)$ can be realized as the
set of all pairings of $\{-n,\ldots,-1,1,\ldots,n\}$ into $n$
unordered pairs. Thus every element $\breve{x}$ of $X(n)$ is
representable as a collection of $n$ unordered pairs,
\begin{equation}\label{representationofelement}
\breve{x}\in X(n)\longleftrightarrow
\breve{x}=\biggl\{\{i_1,i_2\},\ldots,\{i_{2n-1},i_{2n}\}\biggr\},
\end{equation}
where $i_1,i_2,\ldots,i_{2n}$ are distinct elements of the set
$\{-n,\ldots,-1,1,\ldots,n\}$.
For example, if $n=2$, then $S(4)$ is the permutation group of
$\{-2,-1,1,2\}$, the element $\breve{t}$ is the product of
transpositions $(-2,-1)$ and $(1,2)$, the subgroup $H(2)$ is
\begin{equation}
\begin{split}
H(2)=\biggl\{&\left(
\begin{array}{cccc}
-2 & -1 & 1 & 2 \\
-2 & -1 & 1 & 2 \\
\end{array}\right),\; \left(
\begin{array}{cccc}
-2 & -1 & 1 & 2 \\
2 & -1 & 1 & -2 \\
\end{array}\right),\\
&\left(
\begin{array}{cccc}
-2 & -1 & 1 & 2 \\
-2 & 1 & -1 & 2 \\
\end{array}\right),\;
\left(
\begin{array}{cccc}
-2 & -1 & 1 & 2 \\
2 & 1 & -1 & -2 \\
\end{array}\right),\\
&\left(
\begin{array}{cccc}
-2 & -1 & 1 & 2 \\
-1 & -2 & 2 & 1 \\
\end{array}\right),\;
\left(
\begin{array}{cccc}
-2 & -1 & 1 & 2 \\
1 & -2 & 2 & -1 \\
\end{array}\right),\\
&\left(
\begin{array}{cccc}
-2 & -1 & 1 & 2 \\
-1 & 2 & -2 & 1 \\
\end{array}\right),\;
\left(
\begin{array}{cccc}
-2 & -1 & 1 & 2 \\
1 & 2 & -2 & -1 \\
\end{array}\right)
\biggr\},
\end{split}
\nonumber
\end{equation}
and the set $X(2)$ is the set consisting of three elements, namely
$$
\biggl\{\{-2,-1\},\{1,2\}\biggr\},
\biggl\{\{-2,1\},\{-1,2\}\biggr\}, \;\mbox{and}\;
\biggl\{\{-2,2\},\{-1,1\}\biggr\}.
$$
So each element of $X(2)$ is the pairing of $\{-2,-1,1,2\}$ into
(two) unordered pairs.
We have
\begin{equation}
|X(n)|=\frac{|S(2n)|}{|H(n)|}=\frac{(2n)!}{2^nn!}=1\cdot
3\cdot\ldots \cdot (2n-1). \nonumber
\end{equation}
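As an elementary consistency check, the count $|X(n)|=1\cdot 3\cdot\ldots\cdot(2n-1)$ can be confirmed by brute-force enumeration. The following Python sketch is purely illustrative and not part of the text; the function names (\texttt{pairings}, \texttt{X}, \texttt{odd\_double\_factorial}) are ours.

```python
def pairings(elements):
    """Enumerate all partitions of `elements` into unordered pairs."""
    if not elements:
        yield frozenset()
        return
    first, rest = elements[0], elements[1:]
    for k, partner in enumerate(rest):
        remaining = rest[:k] + rest[k + 1:]
        for tail in pairings(remaining):
            yield tail | {frozenset({first, partner})}

def X(n):
    """X(n): all pairings of {-n, ..., -1, 1, ..., n} into n unordered pairs."""
    return list(pairings([i for i in range(-n, n + 1) if i != 0]))

def odd_double_factorial(n):
    """1 * 3 * 5 * ... * (2n - 1)."""
    result = 1
    for k in range(1, 2 * n, 2):
        result *= k
    return result

# |X(n)| = (2n)! / (2^n n!) = 1 * 3 * ... * (2n - 1)
for n in range(1, 6):
    assert len(X(n)) == odd_double_factorial(n)
print([len(X(n)) for n in range(1, 5)])  # [1, 3, 15, 105]
```

For $n=2$ the enumeration returns exactly the three pairings listed above.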
\subsection{Canonical projections $p_{n,n+1}:X(n+1)\rightarrow X(n)$. The projective limit of the spaces $X(n)$}
\label{SubsectionSpaceX}
Given an element $\breve{x}'\in X(n+1)$ we define its derived
element $\breve{x}\in X(n)$ as follows. Represent $\breve{x}'$ as $n+1$
unordered pairs, as explained in the previous Section. If
$n+1$ and $-n-1$ are in the same pair, then $\breve{x}$ is
obtained from $\breve{x}'$ by deleting this pair. Suppose that
$n+1$ and $-n-1$ are in different pairs. Then $\breve{x}'$ can be
written as
$$
\breve{x}'=\biggl\{\{i_1,i_2\},\ldots, \{i_m,-n-1\},\ldots,
\{i_k,n+1\},\ldots, \{i_{2n+1},i_{2n+2}\}\biggr\}.
$$
In this case $\breve{x}$ is obtained from $\breve{x}'$ by removing
$-n-1$ and $n+1$ from the pairs $\{i_m,-n-1\}$ and $\{i_k, n+1\}$,
respectively, and by replacing these two pairs, $\{i_m,-n-1\}$
and $\{i_k, n+1\}$, by the single pair $\{i_m,i_k\}$. The map
$\breve{x}'\rightarrow \breve{x}$, denoted by $p_{n,n+1}$, will be
referred to as the canonical projection of $X(n+1)$ onto $X(n)$.
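The two cases in the definition of $p_{n,n+1}$ can be sketched in a few lines of Python. This is an illustration only; \texttt{project} is our own name for $p_{n,n+1}$, and pairings are encoded as sets of frozensets.

```python
def project(pairing, m):
    """Canonical projection p_{m-1,m}: X(m) -> X(m-1), deleting m and -m."""
    pairs = [frozenset(p) for p in pairing]
    pos = next(p for p in pairs if m in p)
    neg = next(p for p in pairs if -m in p)
    if pos == neg:
        # {-m, m} is itself a pair of the pairing: simply delete it
        return frozenset(p for p in pairs if p != pos)
    # otherwise remove m and -m and merge the two leftover partners
    (i_k,) = pos - {m}
    (i_m,) = neg - {-m}
    kept = [p for p in pairs if p != pos and p != neg]
    return frozenset(kept + [frozenset({i_m, i_k})])

# both kinds of elements of X(2) project onto the unique element of X(1)
assert project([{-2, 2}, {-1, 1}], 2) == frozenset({frozenset({-1, 1})})
assert project([{-2, 1}, {-1, 2}], 2) == frozenset({frozenset({-1, 1})})
```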
Consider the sequence
$$
X(1)\leftarrow\ldots\leftarrow X(n)\leftarrow
X(n+1)\leftarrow\ldots
$$
of canonical projections, and let
$$
X=\varprojlim X(n)
$$
denote the projective limit of the sets $X(n)$. By definition, the
elements of $X$ are arbitrary sequences $\breve{x}=(\breve{x}_1,
\breve{x}_2,\ldots )$, such that $\breve{x}_n\in X(n)$, and
$p_{n,n+1}(\breve{x}_{n+1})=\breve{x}_n$. The set $X$ is a closed
subset of the compact space of all sequences $(\breve{x}_n)$,
therefore, it is a compact space itself.
In what follows we denote by $p_n$ the projection $X\rightarrow
X(n)$ defined by $p_n(\breve{x})=\breve{x}_n$.
\subsection{Cycles. Representation of elements of $X(n)$ in terms of
arrow configurations on circles}\label{SectionArrows} Let $\breve{x}$ be
an element of $X(n)$. Then $\breve{x}$ can be identified with arrow
configurations on circles. Such arrow configurations can be
constructed as follows. Once $\breve{x}$ is written as a collection of
$n$ unordered pairs, one can represent $\breve{x}$ as a union of cycles
of the form
\begin{equation}\label{formcycle}
j_1\rightarrow -j_2\rightarrow j_2\rightarrow -j_3\rightarrow
j_3\rightarrow\ldots\rightarrow -j_k \rightarrow j_k \rightarrow
-j_1\rightarrow j_1,
\end{equation}
where $j_1,j_2,\ldots ,j_k$ are nonzero integers from the set
$\{-n,\ldots,-1,1,\ldots,n\}$ with pairwise distinct absolute
values. For example, take
\begin{equation}\label{element}
\breve{x}=\biggl\{\{1,3\},\{-2,5\}, \{2,-1\},
\{-3,-5\},\{4,-6\},\{-4,6\}\biggr\}.
\end{equation}
Then $\breve{x}\in X(6)$, and it is possible to think about $\breve{x}$ as a
union of two cycles, namely
\begin{equation}\label{circle1}
1\rightarrow 3\rightarrow-3\rightarrow-5\rightarrow
5\rightarrow-2\rightarrow 2\rightarrow-1\rightarrow 1, \nonumber
\end{equation}
and
\begin{equation}\label{circle2}
4\rightarrow-6\rightarrow 6\rightarrow -4\rightarrow 4. \nonumber
\end{equation}
Cycle (\ref{formcycle}) can be represented as a circle with
attached arrows. Namely, we put on a circle points labelled by
$|j_1|$, $|j_2|$,$\ldots$, $|j_k|$, and attach arrows to these
points according to the following rules. The arrow attached to
$|j_1|$ is directed clockwise. If the next integer in the cycle
(\ref{formcycle}), $j_2$, has the same sign as $j_1$, then the
direction of the arrow attached to $|j_2|$ is the same as the
direction of the arrow attached to $|j_1|$, i.e. clockwise.
Otherwise, if the sign of $j_2$ is opposite to the sign of $j_1$,
the direction of the arrow attached to $|j_2|$ is opposite to the
direction of the arrow attached to $|j_1|$, i.e. counterclockwise.
Next, if the integer $j_3$ has the same sign as $j_2$, then the
direction of the arrow attached to $|j_3|$ is the same as the
direction of the arrow attached to $|j_2|$, etc. For example, the
representation of the element $\breve{x}$ defined by (\ref{element})
in terms of arrow configurations on circles is shown on Fig. 1.
\begin{figure}
\begin{picture}(100,150)
\put(-10,100){\circle{200}}
\put(100,100){\circle{200}}
\put(-10,120){\circle*{2}} \put(-10,80){\circle*{2}}
\put(-30,100){\circle*{2}} \put(10,100){\circle*{2}}
\put(-10,124){$1$} \put(-10,69){$5$} \put(-42,100){$2$}
\put(14,100){$3$}
\put(-10,120){\vector(1,0){10}} \put(-10,80){\vector(-1,0){10}}
\put(-30,100){\vector(0,1){10}} \put(10,100){\vector(0,1){10}}
\put(100,120){\circle*{2}} \put(100,80){\circle*{2}}
\put(100,120){\vector(1,0){10}} \put(100,80){\vector(-1,0){10}}
\put(100,124){$4$} \put(100,69){$6$}
\end{picture}
\caption{The representation of the element
$$
\breve{x}=\biggl\{\{1,3\},\{-2,5\}, \{2,-1\},
\{-3,-5\},\{4,-6\},\{-4,6\}\biggr\}
$$
in terms of arrow configurations on circles. The first circle
(from the left) represents cycle $1\rightarrow
3\rightarrow-3\rightarrow-5\rightarrow 5\rightarrow-2\rightarrow
2\rightarrow-1\rightarrow 1$, and the second circle represents
cycle $4\rightarrow-6\rightarrow 6\rightarrow -4\rightarrow 4$.}
\end{figure}
In this representation the projection $p_{n,n+1}:X(n+1)\rightarrow
X(n)$ is reduced to removing the point $n+1$ together with the
attached arrow.
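The cycle decomposition can be computed by repeatedly following the map $j\mapsto$ (partner of $-j$) until it returns to its starting point; both $j$ and $-j$ lie on the same circle. A short illustrative Python sketch (the names are ours), checked against the worked example above:

```python
def cycle_count(pairing):
    """[x]_n: number of cycles j1 -> -j2 -> j2 -> ... -> -j1 -> j1."""
    partner = {}
    for p in pairing:
        a, b = tuple(p)
        partner[a], partner[b] = b, a
    seen, cycles = set(), 0
    for start in partner:
        if start in seen:
            continue
        cycles += 1
        j = start
        while j not in seen:
            # j and -j sit on the same circle of the arrow configuration
            seen.update({j, -j})
            j = partner[-j]
    return cycles

# the element of X(6) from the example in the text: a union of two cycles
x = [{1, 3}, {-2, 5}, {2, -1}, {-3, -5}, {4, -6}, {-4, 6}]
assert cycle_count(x) == 2
assert cycle_count([{-1, 1}]) == 1
assert cycle_count([{-2, -1}, {1, 2}]) == 1
```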
\section{The $t$-measures on $X$}
\subsection{Probability measures $\mu_t^{(n)}$ on $X(n)$, and $\mu_t$ on $X$}
The measures $\mu_t^{(n)}$ on the spaces $X(n)$ are natural
analogues of the Ewens measures on the group $S(n)$ described in
Kerov, Olshanski and Vershik \cite{KOV2}.
\begin{defn}\label{DefinitionEwensMeasures} For $t>0$ we set
$$
\mu_t^{(n)}(\breve{x})=\frac{t^{[\breve{x}]_n}}{t(t+2)\ldots
(t+2n-2)},
$$
where $\breve{x}\in X(n)$, and $[\breve{x}]_n$ denotes the number
of cycles in $\breve{x}$, or the number of circles in the
representation of $\breve{x}$ in terms of arrow configurations, see
Section \ref{SectionArrows}.
\end{defn}
\begin{prop}\label{PROPOSITION4.2}
a) We have
\begin{equation}\label{normtequation}
\sum\limits_{\breve{x}\in X(n)}\mu_t^{(n)}(\breve{x})=1.
\end{equation}
Thus $\mu_t^{(n)}(\breve{x})$ can be understood as a probability
measure
on $X(n)$. \\
b) Given $t>0$, the canonical projections $p_{n,n+1}$ preserve the
measures $\mu_t^{(n)}(\breve{x})$, which means that the condition
\begin{equation}\label{mutnproperty}
\mu_t^{(n+1)}\biggl(\{\breve{x}'\;\vert\; \breve{x}'\in
X(n+1),p_{n,n+1}(\breve{x}')=\breve{x}\}\biggr)=\mu_t^{(n)}(\breve{x})
\end{equation}
is satisfied for each $ \breve{x}\in X(n)$.
\end{prop}
\begin{proof}
If $n=1$, then $X(1)$ consists of only one element, namely
$\{-1,1\}$, and from Definition \ref{DefinitionEwensMeasures} we
immediately see that equation (\ref{normtequation}) is satisfied
in this case.
Let $\breve{x}$ be an arbitrary element of $X(n)$. Represent
$\breve{x}$ in terms of circles with attached arrows, as it is
explained in Section \ref{SectionArrows}. Consider the set
\begin{equation}\label{set}
\left\{\breve{x}'|\; \breve{x}'\in X(n+1),
p_{n,n+1}(\breve{x}')=\breve{x}\right\}.
\end{equation}
It is not hard to see that this set consists of $2n+1$ points.
Indeed, given $\breve{x}\in X(n)$ we can obtain $\breve{x}'$ from
set (\ref{set}) (i.e. $\breve{x}'$ which lies above $\breve{x}$
with respect to the canonical projection $p_{n,n+1}$) by adding
an arrow to an existing circle in $2n$ ways, or by creating a new
circle.
If $\breve{x}'$ is obtained from $\breve{x}$ by creating a new
circle, then
$$
[\breve{x}']_{n+1}=[\breve{x}]_n+1,\;\; \mbox{and}\;\;
t^{[\breve{x}']_{n+1}}=t^{[\breve{x}]_{n}+1}.
$$
If $\breve{x}'$ is obtained from $\breve{x}$ by adding an arrow to
an existing circle, then
$$
[\breve{x}']_{n+1}=[\breve{x}]_n.
$$
Therefore, the relation
$$
\sum\limits_{\breve{x}'\in
X(n+1)}t^{[\breve{x}']_{n+1}}=(t+2n)\sum\limits_{\breve{x}\in
X(n)}t^{[\breve{x}]_{n}}
$$
is satisfied. From the recurrence relation above we obtain
$$
\sum\limits_{\breve{x}'\in
X(n+1)}t^{[\breve{x}']_{n+1}}=t(t+2)\ldots (t+2n).
$$
This formula is equivalent to equation (\ref{normtequation}), and
the first statement of the Proposition is proved.
Let us now prove the second statement of the Proposition. We need
to show that the condition (\ref{mutnproperty}) is satisfied for
each $\breve{x}\in X(n)$. We have
\begin{equation}\label{z1}
\mu_t^{(n+1)}\biggl(\{\breve{x}'\;\vert\; \breve{x}'\in
X(n+1),p_{n,n+1}(\breve{x}')=\breve{x}\}\biggr)=\sum\limits_{\breve{x}':\breve{x}'\in
X(n+1),
p_{n,n+1}(\breve{x}')=\breve{x}}\frac{t^{[\breve{x}']_{n+1}}}{t(t+2)\ldots
(t+2n)},
\end{equation}
where we have used the definition of $\mu_t^{(n)}$, Definition
\ref{DefinitionEwensMeasures}. By the same argument as in the
proof of the first statement of the Proposition the sum in the
righthand side of equation (\ref{z1}) can be decomposed into two
sums. The first sum runs over those $\breve{x}'$ that are
obtained from $\breve{x}$ by adding an arrow to one of the
existing circles of $\breve{x}$. This sum is equal to
\begin{equation}\label{z2}
\frac{(2n)t^{[\breve{x}]_{n}}}{t(t+2)\ldots (t+2n)}.
\end{equation}
The second sum runs over those $\breve{x}'$ that are obtained from
$\breve{x}$ by creating a new circle. There is only one such
$\breve{x}'$, and its contribution is
\begin{equation}\label{z3}
\frac{t\; t^{[\breve{x}]_{n}}}{t(t+2)\ldots (t+2n)}.
\end{equation}
Adding expressions (\ref{z2}) and (\ref{z3}) we obtain
$$
\mu_t^{(n+1)}\biggl(\{\breve{x}'\;\vert\; \breve{x}'\in
X(n+1),p_{n,n+1}(\breve{x}')=\breve{x}\}\biggr)=\frac{(2n)t^{[\breve{x}]_{n}}}{t(t+2)\ldots
(t+2n)}+\frac{t\; t^{[\breve{x}]_{n}}}{t(t+2)\ldots (t+2n)},
$$
and the righthand side of the above equation is
$\mu_t^{(n)}(\breve{x})$.
\end{proof}
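The normalization identity $\sum_{\breve{x}\in X(n)}t^{[\breve{x}]_n}=t(t+2)\cdots(t+2n-2)$ and the recurrence used in the proof can be verified exactly for small $n$ by enumeration. The following Python sketch, in exact rational arithmetic, is an illustration only (all names are ours):

```python
from fractions import Fraction

def pairings(elems):
    """Enumerate pairings as tuples of ordered-pair tuples."""
    if not elems:
        yield ()
        return
    first, rest = elems[0], elems[1:]
    for k, p in enumerate(rest):
        for tail in pairings(rest[:k] + rest[k + 1:]):
            yield ((first, p),) + tail

def cycle_count(pairing):
    """Number of cycles [x]_n of a pairing."""
    partner = {}
    for a, b in pairing:
        partner[a], partner[b] = b, a
    seen, cycles = set(), 0
    for start in partner:
        if start in seen:
            continue
        cycles += 1
        j = start
        while j not in seen:
            seen.update({j, -j})
            j = partner[-j]
    return cycles

def Z(n, t):
    """Partition function: sum of t^{[x]_n} over x in X(n)."""
    ground = [i for i in range(-n, n + 1) if i != 0]
    return sum(t ** cycle_count(x) for x in pairings(ground))

t = Fraction(3, 2)
for n in range(1, 5):
    # normalization: Z(n, t) = t (t + 2) ... (t + 2n - 2), so mu_t^{(n)} sums to 1
    denom = Fraction(1)
    for k in range(n):
        denom *= t + 2 * k
    assert Z(n, t) == denom
    # the recurrence from the proof
    assert Z(n + 1, t) == (t + 2 * n) * Z(n, t)
```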
It follows from Proposition \ref{PROPOSITION4.2} that for any
given $t>0$, the canonical projection $p_{n-1,n}$ preserves the
measures $\mu_t^{(n)}$. Hence the measure
$$
\mu_t=\varprojlim \mu_t^{(n)}
$$
on $X$ is correctly defined, and it is a probability measure. Note
that as in the case considered in Kerov, Olshanski, and Vershik
\cite{KOV2}, Section 2, the probability space $(X,\mu_t)$ is
closely related to the Chinese Restaurant Process construction, see
Aldous \cite{Aldous}, Pitman \cite{Pitman}.
\subsection{The group $S(2\infty)$ and its action on the space $X$}
First we describe the right action of the group $S(2n)$ on the
space $X(n)$, and then we extend it to the right action of
$S(2\infty)$ on $X$.
Let $\breve{x}_n\in X(n)$. Then $\breve{x}_n$ can be written as a
collection of $n$ unordered pairs (equation
(\ref{representationofelement})). Let $g$ be a permutation from
$S(2n)$,
$$
g:\;\; \left(\begin{array}{ccccc}
-n & -n+1 & \ldots & n-1 & n \\
g(-n) & g(-n+1) & \ldots & g(n-1) & g(n) \\
\end{array}\right).
$$
The right action of the group $S(2n)$ on the space $X(n)$ is
defined by
$$
\breve{x}_n\cdot
g=\biggl\{\{g(i_1),g(i_2)\},\{g(i_3),g(i_4)\},\ldots,
\{g(i_{2n-1}),g(i_{2n})\}\biggr\}.
$$
\begin{prop}The canonical projection $p_{n,n+1}$ is equivariant
with respect to the right action of the group $S(2n)$ on the space
$X(n)$, which means
$$
p_{n,n+1}(\breve{x}\cdot g)=p_{n,n+1}(\breve{x})\cdot g,
$$
for all $\breve{x}\in X(n+1)$, and all $g\in S(2n)$.
\end{prop}
\begin{proof}
Let $\breve{x}$ be an arbitrary element of $X(n+1)$. Represent
$\breve{x}\in X(n+1)$ in terms of configurations of arrows on
circles, as it is described in Section \ref{SectionArrows}. In
this representation the right action of an element $g$ from
$S(2n)$ on $\breve{x}$ is reduced to permutations of the numbers $1,2,\ldots,
n$ on the circles, and to changes in the directions of the arrows
attached to these numbers. The number $n+1$ and the direction of
the arrow attached to $n+1$ remain unaffected by the action of
$S(2n)$. Since $p_{n,n+1}(\breve{x})$ is obtained from $\breve{x}$
by deleting $n+1$ together with the attached arrow, the statement
of the Proposition follows.
\end{proof}
Since the canonical projection $p_{n,n+1}$ is equivariant, the
right action of $S(2n)$ on $X(n)$ can be extended to the right
action of $S(2\infty)$ on $X$. For $n=1,2,\ldots $ we identify
$S(2n)$ with the subgroup of permutations $g\in S(2n+2)$
preserving the elements $-n-1$ and $n+1$ of the set
$\{-n-1,-n,\ldots,-1,1,\ldots,n,n+1\}$, i.e.
$$
S(2n)=\biggl\{g\biggl|g\in S(2n+2),\; g(-n-1)=-n-1,\; \mbox{and}\;
g(n+1)=n+1 \biggr\}.
$$
Let $S(2)\subset S(4)\subset S(6)\subset\ldots $ be the collection of
such subgroups. Set
$$
S(2\infty)=\bigcup_{n=1}^{\infty}S(2n).
$$
Thus $S(2\infty)$ is the inductive limit of subgroups $S(2n)$,
$$
S(2\infty)=\varinjlim S(2n).
$$
If $\breve{x}=(\breve{x}_1,\breve{x}_2,\ldots )\in X$, and $g\in
S(2\infty)$, then the right action of $S(2\infty)$ on
$X=\varprojlim X(n)$,
$$X\times
S(2\infty)\longrightarrow X,
$$ is defined as $\breve{x}\cdot g=\breve{y}$, where
$\breve{x}_n\cdot g=\breve{y}_n$ for all $n$ so large that $g$
lies in $S(2n)$.
\begin{prop}\label{Proposition1.444444}We have
$$
p_n(\breve{x}\cdot g)=p_n(\breve{x})\cdot g
$$
for all $\breve{x}\in X$, $g\in S(2\infty)$, and for all $n$ so
large that $g\in S(2n)$.
\end{prop}
\begin{proof}The claim follows immediately from the very
definition of the projection $p_n$, and of the right action of
$S(2\infty)$ on $X$.
\end{proof}
\subsection{The fundamental cocycle}\label{SectionCocycle}
Recall that $[.]_n$ denotes
the number of cycles in the cycle representation of an element
from $X(n)$ (see Section \ref{SectionArrows} where the cycle
structure of the elements from $X(n)$ was introduced).
\begin{prop}
For any $\breve{x}=(\breve{x}_n)\in X$, and $g\in S(2\infty)$, the
quantity
$$
c(\breve{x};g)=[p_n(\breve{x}\cdot
g)]_n-[p_n(\breve{x})]_n=[p_n(\breve{x})\cdot g]_n-[p_n(\breve{x})]_n
$$
does not depend on $n$ provided that $n$ is so large that $g\in
S(2n)$.
\end{prop}
\begin{proof} Let $g$ be an element of $S(2n)$. To prove the
Proposition it is enough to show that the condition
\begin{equation}\label{pcompatibility}
[p_{n,n+1}(\breve{x})\cdot g]_n-[p_{n,n+1}(\breve{x})]_n=[\breve{x}\cdot
g]_{n+1}-[\breve{x}]_{n+1}
\end{equation}
is satisfied for any element $\breve{x}$ of $X(n+1)$. Since $g\in S(2n)$
can always be represented as a product of transpositions, and
since $p_{n,n+1}$ is equivariant with respect to the right action
of the group $S(2n)$, it is enough to prove (\ref{pcompatibility})
for the case when $g$ is a transposition. Thus we assume that $g$
is a transposition $(ij)\in S(2n)$, where $i$ and $j$ are two
different elements of the set $\{-n,\ldots,-1,1,\ldots,n\}$.
Let $\breve{x}$ be an element of $X(n+1)$. Write $\breve{x}$ as a collection
of cycles as it is explained in Section \ref{SectionArrows}.
Assume that both $i$ and $j$ belong to the same cycle of $\breve{x}$. We
check that the multiplication of $\breve{x}$ by $(i,j)$ from the right
either splits this cycle into two, or transforms it into a
different cycle. Thus we have
$$
[\breve{x}\cdot g]_{n+1}-[\breve{x}]_{n+1}=1\;\;\mbox{or}\;\; 0.
$$
The value of the difference $[\breve{x}\cdot g]_{n+1}-[\breve{x}]_{n+1}$
depends on the mutual configuration of $-i, i, -j$, and $j$ in the
cycle containing $i, j$. More explicitly, if the pair with $-i$ is
situated to the left of the pair with $i$, and, at the same
time, the pair with $-j$ is situated to the left of the pair
with $j$, then the value of $[\breve{x}\cdot g]_{n+1}-[\breve{x}]_{n+1}$ is
$1$. In this case the cycle under consideration has the form
$$
k_1\rightarrow\ldots \rightarrow k_{m}\rightarrow -i \rightarrow
i\rightarrow-k_{m+1}\rightarrow\ldots\rightarrow
k_{p}\rightarrow-j\rightarrow j\rightarrow -k_{p+1}\rightarrow
\ldots \rightarrow -k_1\rightarrow k_1,
$$
or the form
$$
k_1\rightarrow\ldots \rightarrow k_{m}\rightarrow -j \rightarrow
j\rightarrow-k_{m+1}\rightarrow\ldots\rightarrow
k_{p}\rightarrow-i\rightarrow i\rightarrow -k_{p+1}\rightarrow
\ldots \rightarrow -k_1\rightarrow k_1,
$$
and the corresponding mutual configuration of $-i, i, -j$, and $j$
is
$$
\{.,-i\}\{i,.\}...\{.,-j\}\{j,.\},
$$
or
$$
\{.,-j\}\{j,.\}...\{.,-i\}\{i,.\}.
$$
If in the cycle under consideration the pair with $-i$ stands
to the right of the pair with $i$, and, at the same time, the
pair with $-j$ is situated to the right of the pair with $j$,
then the value of $[\breve{x}\cdot g]_{n+1}-[\breve{x}]_{n+1}$ is $1$ as well.
In this case the mutual configuration of $-i, i, -j$, and $j$ is
$$
\{.,i\}\{-i,.\}...\{.,j\}\{-j,.\},
$$
or
$$
\{.,j\}\{-j,.\}...\{.,i\}\{-i,.\}.
$$
Otherwise, if the mutual configuration of $-i, i, -j$, and $j$ is
different from those described above, then $[\breve{x}\cdot
g]_{n+1}-[\breve{x}]_{n+1}=0$.
On the other hand, the numbers $i$ and $j$ belong to one and
the same cycle of $p_{n,n+1}(\breve{x})$ if and only if they belong to
one and the same cycle of $\breve{x}$. Moreover, if $i$ and $j$ belong
to the same cycle of $\breve{x}$, then the mutual configuration of $-i,
i, -j$, and $j$ is the same as in $p_{n,n+1}(\breve{x})$. Thus we
conclude that equation (\ref{pcompatibility}) holds true if $i$
and $j$ belong to the same cycle of $\breve{x}$.
If $i$ and $j$ belong to different cycles then the two cycles of
$\breve{x}$ containing the elements $i$ and $j$ merge into a single
cycle of the product $\breve{x}\cdot (ij)$, and we clearly have
$$
[\breve{x}\cdot (ij)]_{n+1}-[\breve{x}]_{n+1}=-1.
$$
The same equation holds true if we replace $\breve{x}$ by
$p_{n,n+1}(\breve{x})$, so equation (\ref{pcompatibility}) holds true
when $i$ and $j$ belong to different cycles as well.
\end{proof}
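The level-independence expressed by equation (\ref{pcompatibility}) can be tested by brute force for transpositions: lift each $\breve{x}\in X(n)$ to $X(n+1)$ by adjoining the pair $\{-n-1,n+1\}$ (one particular element of the fibre over $\breve{x}$) and compare the changes in cycle counts. The Python sketch below is an illustration only (all names are ours):

```python
from itertools import combinations

def pairings(elems):
    """Enumerate pairings as tuples of ordered-pair tuples."""
    if not elems:
        yield ()
        return
    first, rest = elems[0], elems[1:]
    for k, p in enumerate(rest):
        for tail in pairings(rest[:k] + rest[k + 1:]):
            yield ((first, p),) + tail

def cycle_count(pairing):
    """Number of cycles [x]_n of a pairing."""
    partner = {}
    for a, b in pairing:
        partner[a], partner[b] = b, a
    seen, cycles = set(), 0
    for start in partner:
        if start in seen:
            continue
        cycles += 1
        j = start
        while j not in seen:
            seen.update({j, -j})
            j = partner[-j]
    return cycles

def act(pairing, i, j):
    """Right action of the transposition (i j) on a pairing."""
    swap = {i: j, j: i}
    return tuple(tuple(swap.get(a, a) for a in p) for p in pairing)

n = 2
ground = [i for i in range(-n, n + 1) if i != 0]
for x in pairings(ground):
    x_up = x + ((-(n + 1), n + 1),)   # lift x to X(n+1) by a new pair
    for i, j in combinations(ground, 2):
        diff_n = cycle_count(act(x, i, j)) - cycle_count(x)
        diff_n1 = cycle_count(act(x_up, i, j)) - cycle_count(x_up)
        assert diff_n in (-1, 0, 1)
        assert diff_n1 == diff_n   # the cocycle does not depend on the level
```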
\subsection{Quasiinvariance of $\mu_t$}
\begin{prop} Each of the measures $\mu_t$, $0<t<\infty$, is
quasiinvariant with respect to the action of $S(2\infty)$ on the
space $X=\varprojlim X(n)$. More precisely,
$$
\frac{\mu_t(d\breve{x}\cdot g)}{\mu_t(d\breve{x})}=t^{c(\breve{x};g)};\;\;\breve{x}\in
X,\; g\in S(2\infty),
$$
where $c(\breve{x};g)$ is the fundamental cocycle of Section
\ref{SectionCocycle}.
\end{prop}
\begin{proof}
We need to check that
\begin{equation}\label{1.7-1.7.2}
\mu_t(V\cdot g)=\int_{V}t^{c(\breve{x};g)}\mu_t(d\breve{x}),\;\; g\in
S(2\infty)
\end{equation}
for every Borel subset $V\subseteq X$. Choose $m$ so large that
$g\in S(2m)$, and let $n\geq m$. Take $\breve{y}\in X(n)$, and set
$V_n(\breve{y})=p_n^{-1}(\breve{y})\subset X$. This is a cylinder set. It is
enough to check equation (\ref{1.7-1.7.2}) for $V=V_n(\breve{y})$. Note
that $V_n(\breve{y})\cdot g=V_n(\breve{y}\cdot g)$. This follows from the fact
that the projection $p_n$ is equivariant with respect to the right
action of the group, see Proposition \ref{Proposition1.444444}.
From the definition of $\mu_t$ we conclude that
$\mu_t(V_n(\breve{y}))=\mu_t^{(n)}(\{\breve{y}\})$, hence
$$
\mu_t\left(V_n(\breve{y})\cdot g\right)=\mu_t^{(n)}(\{\breve{y}\cdot g\}).
$$
On the other hand,
$$
c(\breve{x};g)=[p_n(\breve{x}\cdot g)]_n-[p_n(\breve{x})]_n=[\breve{y}\cdot g]_n-[\breve{y}]_n
$$
for all $\breve{x}\in V_n(\breve{y})$. Therefore, equation (\ref{1.7-1.7.2})
takes the form
$$
\mu_t^{(n)}\left(\left\{\breve{y}\cdot g\right\}\right)=t^{[\breve{y}\cdot
g]_n-[\breve{y}]_n}\mu_t^{(n)}\left(\{\breve{y}\}\right).
$$
Using the very definition of $\mu_t^{(n)}$ we check that the
equation just written above holds true. Therefore, equation
(\ref{1.7-1.7.2}) holds true as well.
\end{proof}
\section{The representations $T_{z,\frac{1}{2}}$}
The aim of this Section is to introduce a family
$T_{z,\frac{1}{2}}$ of unitary representations of the group
$S(2\infty)$. These representations are parameterized by points
$z\in \mathbb C\setminus\{0\}$, and can be viewed as the analogues of the
generalized regular representations introduced in Kerov,
Olshanski, and Vershik \cite{KOV1, KOV2}. As in the case of the
generalized regular representations, each element of the family
$T_{z,\frac{1}{2}}$ can be approximated by the regular
representation of the group $S(2n)$. This enables us to give an
explicit formula for the restriction of the spherical function of
the representation $T_{z,\frac{1}{2}}$ to $S(2n)$, and to
introduce the measures on Young diagrams associated with
representations $T_{z,\frac{1}{2}}$. Then it will be shown that
these measures can be understood as the $z$-measures with the Jack
parameter $\theta=\frac{1}{2}$ in the notation of Section
\ref{Sectionztheta}. Thus the $z$-measures with the Jack parameter
$\theta=\frac{1}{2}$ will be associated to representations
$T_{z,\frac{1}{2}}$ in a similar way as the $z$-measures with the
Jack parameter $\theta=1$ are associated with generalized regular
representations in Kerov, Olshanski, and Vershik \cite{KOV2},
Section 4.
\subsection{Definition of $T_{z,\frac{1}{2}}$}
Let $(\mathfrak{X},\Sigma,\mu)$ be a measurable space. Let $G$ be a group
which acts on $\mathfrak{X}$ from the right, and preserves the Borel
structure. Assume that the measure $\mu$ is quasiinvariant, i.e.
the condition
$$
d\mu(\breve{x}\cdot g)=\delta(\breve{x};g)d\mu(\breve{x})
$$
is satisfied for some nonnegative $\mu$-integrable function
$\delta(\breve{x};g)$ on $\mathfrak{X}$, and for every $g$, $g\in G$. Set
\begin{equation}\label{3.1-3.1}
\left(T(g) f\right)(\breve{x})=\tau(\breve{x};g)f(\breve{x}\cdot g),\; f\in
L^{2}(\mathfrak{X},\mu),
\end{equation}
where $|\tau(\breve{x};g)|^2=\delta(\breve{x};g)$. If
$$\tau(\breve{x};g_1g_2)=\tau(\breve{x}\cdot
g_1;g_2)\tau(\breve{x};g_1),\; \breve{x}\in\mathfrak{X}, g_1,g_2\in G,
$$
then equation (\ref{3.1-3.1}) defines a unitary representation $T$
of $G$ acting in the Hilbert space $L^2(\mathfrak{X};\mu)$. The function
$\tau(\breve{x};g)$ is called a multiplicative cocycle.
Let $z\in\mathbb C$ be a nonzero complex number. We apply the general
construction described above for the space $\mathfrak{X}=X$, the group
$G=S(2\infty)$, the measure $\mu=\mu_t$ (where $t=|z|^2$), and the
cocycle $\tau(\breve{x};g)=z^{c(\breve{x};g)}$. In this way we get a unitary
representation of $S(2\infty)$, $T_{z,\frac{1}{2}}$, acting in the
Hilbert space $L^2(X,\mu_t)$ according to the formula
$$
\left(T_{z,\frac{1}{2}}(g)f\right)(\breve{x})=z^{c(\breve{x};g)}f(\breve{x}\cdot
g),\; f\in L^2(X,\mu_t),\;\breve{x}\in X,\; g\in S(2\infty).
$$
\subsection{Approximation by quasi-regular representations}
\begin{defn} For $n=1,2,\ldots $ let $\mu_1^{(n)}$ denote the
normalized Haar measure on $X(n)$. The regular representation
$Reg^n$ of the group $S(2n)$ acting in the Hilbert space
$L^{2}(X(n),\mu_1^{(n)})$ is defined by
$$
\left(Reg^n(g)f\right)(\breve{x})=f(\breve{x}\cdot g),\; \breve{x}\in X(n),\; g\in
S(2n),\; f\in L^2(X(n),\mu_1^{(n)}).
$$
\end{defn}
\begin{prop}\label{Proposition3.3-3.2} The representations $Reg^n$ and
$T_{z,\frac{1}{2}}|_{L^{2}(X(n),\mu_t^{(n)})}$ of $S(2n)$ are
equivalent.
\end{prop}
\begin{proof} Set
\begin{equation}\label{3.3-3.2.1}
F_z^{(n)}(\breve{x})=\left(\frac{1\cdot 3\cdot \ldots \cdot
(2n-1)}{t\cdot (t+2)\cdot\ldots\cdot(t+2n-2)
}\right)^{1/2}z^{[\breve{x}]_n},\; \breve{x}\in X(n),
\end{equation}
and denote by $f_z^{(n)}$ the operator
of multiplication by $F_z^{(n)}$. Since
$$
|F_z^{(n)}(\breve{x})|^2=\frac{\mu_t^{(n)}(\breve{x})}{\mu_1^{(n)}(\breve{x})},
$$
the operator $f_z^{(n)}$ carries $L^2(X(n),\mu_t^{(n)})$ onto
$L^2(X(n),\mu_1^{(n)})$, and defines an isometry. Moreover, it is
straightforward to check that $f_z^{(n)}$ intertwines for the
$S(2n)$-representations $Reg^n$ and
$T_{z,\frac{1}{2}}|_{L^{2}(X(n),\mu_t^{(n)})}$.
\end{proof}
Next we need the notion of the inductive limits of
representations. Let $G(1)\subseteq G(2)\subseteq\ldots $ be a
collection of finite groups, and set $G=\bigcup_{n=1}^{\infty}
G(n)$. Thus $G$ is the inductive limit of the groups $G(n)$.
Assume that for each $n$ a unitary representation $T_n$ of $G(n)$
is defined. Denote by $H(T_n)$ the Hilbert space in which the
representation $T_n$ acts, and denote by $H$ the Hilbert
completion of the space $\bigcup_{n=1}^{\infty}H(T_n)$. We also
assume that an isometric embedding $\alpha_n: H(T_n)\rightarrow
H(T_{n+1})$ is given, and that this embedding is intertwining for
the $G(n)$-representations $T_n$ and $T_{n+1}|_{G(n)}$.
\begin{defn}
A unitary representation $T$ of the group $G$ acting in the
Hilbert space $H$, and uniquely defined by
$$
T(g)\xi=T_n(g)\xi,\;\mbox{if}\; g\in G(n)\;\mbox{and}\;\xi\in
H(T_n)
$$
is called the inductive limit of representations $\{T_n\}$.
\end{defn}
Consider the following diagram
$$
\begin{array}{cccccccccc}
H(T_1)&\stackrel{f_1}{\longrightarrow}& H(T_2) &
\stackrel{f_2}{\longrightarrow}& H(T_3)
&\stackrel{f_3}{\longrightarrow}&\ldots\\
\vert &&\vert &&\vert \\
\vert\lefteqn{F_1}&& \vert\lefteqn{F_2}&& \vert\lefteqn{F_3}\\
\downarrow && \downarrow && \downarrow\\
H(S_1)&\stackrel{\rho_1}{\longrightarrow}& H(S_2) &
\stackrel{\rho_2}{\longrightarrow}& H(S_3)
&\stackrel{\rho_3}{\longrightarrow}&\ldots
\end{array}
$$
Here $\{T_n\}_{n=1}^{\infty}$ and $\{S_n\}_{n=1}^{\infty}$ are
collections of representations of $G(1), G(2),\ldots $, and $
G(1)\subseteq G(2)\subseteq\ldots $. The following fact is almost
obvious, and we formulate it as a Proposition without proof.
\begin{prop}\label{PropositionA1}
Assume that for each $n=1,2,\ldots $ the following conditions are
satisfied\begin{itemize}
\item The linear map $F_n$ is a unitary operator from $H(T_n)$ onto $H(S_n)$, which is intertwining for $T_n$
and $S_n$.
\item The linear map $f_n$ is an isometric embedding of
$H(T_n)$ into $H(T_{n+1})$, which is intertwining for the
$G(n)$-representations $T_n$ and $T_{n+1}|_{G(n)}$.
\item The map $\rho_n$ is an isometric embedding of $H(S_n)$
into $H(S_{n+1})$ such that the diagram
$$
\begin{array}{ccc}
H(T_n)&\stackrel{f_n}{\longrightarrow}& H(T_{n+1})\\
\vert &&\vert \\
\vert\lefteqn{F_n}&& \vert\lefteqn{F_{n+1}}\\
\downarrow && \downarrow\\
H(S_n)&\stackrel{\rho_n}{\longrightarrow}& H(S_{n+1})
\end{array}
$$
is commutative, i.e., the condition
$f_n=F_{n+1}^{-1}\circ\rho_n\circ F_n$ holds true.
\end{itemize}
Then the inductive limits of $\{T_n\}_{n=1}^{\infty}$, and of
$\{S_n\}_{n=1}^{\infty}$ are well-defined, and these inductive
limits are equivalent.
\end{prop}
\begin{prop}
Define the operator $L_z^{(n)}$,
$$ L_z^{(n)}:
L^{2}(X(n),\mu_1^{(n)})\longrightarrow L^{2}(X(n+1),\mu_1^{(n+1)})
$$
as follows: if $f\in L^{2}(X(n),\mu_1^{(n)})$, and $\breve{x}\in
X(n+1)$, then
\begin{equation}\label{Lz}
\left(L_z^{(n)}f\right)(\breve{x})=
\left\{%
\begin{array}{lll}
z\sqrt{\frac{2n+1}{2n+t}}f(\breve{x}), & \breve{x}\in X(n)\subset X(n+1), \\
\\
\sqrt{\frac{2n+1}{2n+t}}f(p_{n,n+1}(\breve{x})), & \breve{x}\in X(n+1)\setminus X(n).\\
\end{array}%
\right.
\end{equation}
For any nonzero complex number $z$ the operator $L_z^{(n)}$
provides an isometric embedding
$L^{2}(X(n),\mu_1^{(n)})\longrightarrow L^{2}(X(n+1),\mu_1^{(n+1)})$
which intertwines for the $S(2n)$-representations $Reg^n$ and
$Reg^{n+1}|_{S(2n)}$. Let $T_{z,\frac{1}{2}}'$ denote the
inductive limit of the representations $Reg^n$ with respect to the
embedding
$$
\begin{array}{ccccc}
L^{2}(X(1),\mu_1^{(1)})&\stackrel{L_z^{(1)}}{\longrightarrow}&
L^{2}(X(2),\mu_1^{(2)})&\stackrel{L_z^{(2)}}{\longrightarrow}&\ldots
\end{array}
$$
Then the representations $T_{z,\frac{1}{2}}'$ and
$T_{z,\frac{1}{2}}$ are equivalent.
\end{prop}
\begin{proof}
For $f\in L^{2}(X(n),\mu_t^{(n)})$, and $\breve{x}\in X(n+1)$ set
$$
\left(\alpha^{(n)}f\right)(\breve{x})=f(p_{n,n+1}(\breve{x})).
$$
Then $\alpha^{(n)}$ is an isometric embedding of
$L^{2}(X(n),\mu_t^{(n)})$ into $L^{2}(X(n+1),\mu_t^{(n+1)})$.
Using the definition of the representation $T_{z,\frac{1}{2}}$ it
is straightforward to verify that $\alpha^{(n)}$ intertwines for
the $S(2n)$-representations
$T_{z,\frac{1}{2}}|_{L^{2}(X(n),\mu_t^{(n)})}$ and
$T_{z,\frac{1}{2}}|_{L^{2}(X(n+1),\mu_t^{(n+1)})}$. This enables
us to consider $T_{z,\frac{1}{2}}$ as the inductive limit of
$S(2n)$-representations of
$T_{z,\frac{1}{2}}|_{L^{2}(X(n),\mu_t^{(n)})}$. Now examine the
following diagram
$$
\begin{array}{cccccccccc}
L^{2}(X(1),\mu_t^{(1)})&\stackrel{\alpha^{(1)}}{\longrightarrow}&
L^{2}(X(2),\mu_t^{(2)}) &
\stackrel{\alpha^{(2)}}{\longrightarrow}& L^{2}(X(3),\mu_t^{(3)})
&\stackrel{\alpha^{(3)}}{\longrightarrow} &\ldots\\
\vert &&\vert &&\vert \\
\vert\lefteqn{f_z^{(1)}}&& \vert\lefteqn{f_z^{(2)}}&& \vert\lefteqn{f_z^{(3)}}\\
\downarrow && \downarrow && \downarrow\\
L^{2}(X(1),\mu_1^{(1)})&\stackrel{L^{(1)}_z}{\longrightarrow}&
L^{2}(X(2),\mu_1^{(2)}) & \stackrel{L^{(2)}_z}{\longrightarrow}&
L^{2}(X(3),\mu_1^{(3)}) &\stackrel{L^{(3)}_z}{\longrightarrow}
&\ldots
\end{array}
$$
where the operators $f_z^{(n)}$ are that of multiplications by
$F_z^{(n)}$ introduced in the proof of Proposition
\ref{Proposition3.3-3.2}. Recall that $f_z^{(n)}$ intertwines for
the $S(2n)$-representations $Reg^n$ and
$T_{z,\frac{1}{2}}|_{L^{2}(X(n),\mu_t^{(n)})}$. We determine
$L_z^{(n)}$ from the condition of commutativity of the diagram
$$
\begin{array}{ccc}
L^{2}(X(n),\mu_t^{(n)}) &
\stackrel{\alpha^{(n)}}{\longrightarrow}&
L^{2}(X(n+1),\mu_t^{(n+1)})\\
\vert &&\vert \\
\vert\lefteqn{f_z^{(n)}}&& \vert\lefteqn{f_z^{(n+1)}}\\
\downarrow && \downarrow\\
L^{2}(X(n),\mu_1^{(n)})&\stackrel{L^{(n)}_z}{\longrightarrow}&
L^{2}(X(n+1),\mu_1^{(n+1)})
\end{array}
$$
and obtain that $L^{(n)}_z$ is given by formula (\ref{Lz}).
Moreover, from equation (\ref{Lz}) we see that $L_z^{(n)}$ defines
an isometric embedding of $L^{2}(X(n),\mu_1^{(n)})$ into
$L^{2}(X(n+1),\mu_1^{(n+1)})$. Now we use Proposition
\ref{PropositionA1} to conclude that the inductive limit
$T_{z,\frac{1}{2}}'$ of the representations $Reg^n$ with respect
to the embedding
$$
\begin{array}{ccccc}
L^{2}(X(1),\mu_1^{(1)})&\stackrel{L_z^{(1)}}{\longrightarrow}&
L^{2}(X(2),\mu_1^{(2)})&\stackrel{L_z^{(2)}}{\longrightarrow}&\ldots
\end{array}
$$
is well-defined, and it is equivalent to $T_{z,\frac{1}{2}}$.
\end{proof}
\subsection{A formula for the spherical function of $T_{z,\frac{1}{2}}$}
Let $(G,K)$ be a Gelfand pair, and let $T$ be a unitary
representation of $G$ acting in the Hilbert space $H(T)$. Assume
that $\xi$ is a unit vector in $H(T)$ such that $\xi$ is
$K$-invariant, and such that the span of vectors of the form
$T(g)\xi$ (where $g\in G$) is dense in $H(T)$. In this case $\xi$
is called the spherical vector, and the matrix coefficient
$(T(g)\xi,\xi)$ is called the spherical function of the
representation $T$. Two spherical representations are equivalent
if and only if their spherical functions coincide.
\begin{prop}
Denote by $\varphi_z$ the spherical function of
$T_{z,\frac{1}{2}}$. Then we have
\begin{equation}\label{sphericalfunctionTz}
\varphi_z|_{S(2n)}(g)=\left(Reg^n(g)F_z^{(n)},F^{(n)}_z\right)_{L^{2}(X(n),\mu_1^{(n)})}.
\end{equation}
\end{prop}
\begin{proof}
Let $f_0\equiv 1$ be a unit vector, and let us consider $f_0$ as
an element of $L^{2}(X(n),\mu_t^{(n)})$. Then we find
$$
\left(T_{z,\frac{1}{2}}(g)f_0\right)(\breve{x})=z^{c(\breve{x};g)},\; \breve{x}\in
X(n),\; g\in S(2n).
$$
If $g\in H(n)$, then $c(\breve{x};g)=0$. In this case we obtain that
$f_0$ is invariant under the action of $H(n)$, so $f_0$ can be
understood as the cyclic vector of the $S(2n)$-representation
$T_{z,\frac{1}{2}}|_{L^{2}(X(n),\mu_t^{(n)})}$. On the other hand,
the $S(2n)$-representation
$T_{z,\frac{1}{2}}|_{L^{2}(X(n),\mu_t^{(n)})}$ is equivalent to
$Reg^n$. This representation, $Reg^n$, acts in the space
$L^{2}(X(n),\mu_1^{(n)})$, and from the proof of Proposition
\ref{Proposition3.3-3.2} we conclude that the cyclic vector of the
$S(2n)$-representation $Reg^n$ is $F_z^{(n)}$ defined by formula
(\ref{3.3-3.2.1}). This gives the expression for the spherical
function of $T_{z,\frac{1}{2}}$ in the statement of the
Proposition.
\end{proof}
\section{Definition of $z$-measures associated with the representations $T_{z,\frac{1}{2}}$}
\subsection{The space $C(S(2n), H(n))$}
Consider the set of functions on $S(2n)$ constant on each double
coset $H(n)gH(n)$ in $S(2n)$. We shall denote this set by
$C(S(2n),H(n))$. Therefore,
$$
C(S(2n),H(n))=\left\{f|f(hgh')=f(g),\;\mbox{where}\; h, h'\in
H(n),\;\; \mbox{and}\;\; g\in S(2n)\right\}.
$$
We equip $C(S(2n),H(n))$ with the scalar product $<.,>_{S(2n)}$
defined by
$$
<f_1,f_2>_{S(2n)}=\frac{1}{|S(2n)|}\sum\limits_{g\in
S(2n)}f_1(g)\overline{f_2(g)}.
$$
\begin{prop}\label{Proposition4.1-4.1.1}
The space $C(S(2n), H(n))$ is isometrically isomorphic to the
space $L^2(X(n),\mu_1^{(n)})^{H(n)}$ defined as a subset of
functions from $L^2(X(n),\mu_1^{(n)})$ invariant with respect to
the right action of $H(n)$,
\begin{equation}
\begin{split}
L^2(X(n),\mu_1^{(n)})^{H(n)}=\biggl\{f|f\in L^2(X(n),\mu_1^{(n)}),
f(\breve{x})=f(\breve{x}\cdot h),\\
\;\mbox{where}\; \breve{x}\in X(n),\;\;
\mbox{and}\;\; h\in H(n)\biggr\}.
\end{split}
\nonumber
\end{equation}
\begin{proof} The claim of the Proposition is almost trivial. Indeed,
the fact that $C(S(2n),H(n))$ is isomorphic to
$L^2(X(n),\mu_1^{(n)})^{H(n)}$ is obvious from the definition of
these spaces. We have
\begin{equation}
\begin{split}
<f_1,f_2>_{S(2n)}&=\frac{1}{|S(2n)|}\sum\limits_{g\in
S(2n)}f_1(g)\overline{f_2(g)}\\
&=\frac{1}{|X(n)|}\sum\limits_{\breve{x}\in
X(n)}f_1(\breve{x})\overline{f_2(\breve{x})}=(f_1,f_2)_{L^2(X(n),\mu_1^{(n)})^{H(n)}},
\end{split}
\nonumber
\end{equation}
for any two functions $f_1, f_2$ from $C(S(2n),H(n))$. Therefore,
the isomorphism between $C(S(2n),H(n))$ and
$L^2(X(n),\mu_1^{(n)})^{H(n)}$ is isometric.
\end{proof}
\end{prop}
\subsection{The spherical functions of the Gelfand pair
$(S(2n),H(n))$} It is known (see Macdonald \cite{macdonald},
Section VII.2) that $(S(2n),H(n))$ is a Gelfand pair. In
particular, this implies that there is an orthogonal basis
$\{w^{\lambda}\}$ in $C(S(2n),H(n))$ whose elements,
$w^{\lambda}$, are the spherical functions of $(S(2n),H(n))$. The
elements $w^{\lambda}$ are parameterized by Young diagrams with
$n$ boxes, and are defined by
$$
w^{\lambda}(g)=\frac{1}{|H(n)|}\sum\limits_{h\in
H(n)}\chi^{2\lambda}(gh),
$$
see Macdonald \cite{macdonald}, Sections VII.1 and VII.2. Here
$\chi^{2\lambda}$ is the character of the irreducible
$S(2n)$-module corresponding to
$2\lambda=(2\lambda_1,2\lambda_2,\ldots )$. By Proposition
\ref{Proposition4.1-4.1.1} the spherical functions $w^{\lambda}$
define an orthogonal basis in $L^2(X(n),\mu_1^{(n)})^{H(n)}$.
Besides, the zonal spherical functions $w^{\lambda}$ satisfy
the following relations
\begin{equation}
w^{\lambda}(e)=1,\;\;\mbox{for any}\;\;\lambda\in\mathbb Y_n,
\end{equation}
\begin{equation}\label{4.2-4.1}
(w^{\lambda},w^{\mu})_{L^2(X(n),\mu_1^{(n)})^{H(n)}}=\frac{\delta_{\lambda,\mu}}{\dim
2\lambda},
\end{equation}
\begin{equation}\label{4.2-4.2}
\frac{1}{|X(n)|}\sum\limits_{\breve{x}\in X(n)}w^{\lambda}(\breve{x}\cdot
g)w^{\mu}(\breve{x})=\delta_{\lambda,\mu}\frac{w^{\lambda}(g)}{\dim
2\lambda},\;\;g\in S(2n).
\end{equation}
Here $\dim 2\lambda=\chi^{2\lambda}(e)$. The relations just
written above follow from general properties of spherical
functions, see Macdonald \cite{macdonald}, Section VII.1.
\subsection{The z-measures $M^{(n)}_{z,\frac{1}{2}}$ of the representation $T_{z,\frac{1}{2}}$}
\begin{defn}\label{Definition of zmeasure} Let $z$ be a nonzero complex number, $\lambda$ be a
Young diagram with $n$ boxes, and let
$$
\tilde{w}^{\lambda}=\left(\dim 2\lambda\right)^{1/2}\cdot
w^{\lambda}
$$
be the normalized zonal spherical
function of the Gelfand pair $(S(2n), H(n))$ parameterized by
$\lambda$. Set
\begin{equation}\label{4.3-4.3.1}
M_{z,\frac{1}{2}}^{(n)}(\lambda)=\left|(F_z^{(n)},\tilde{w}^{\lambda})_{L^{2}(X(n),\mu_1^{(n)})}\right|^2,
\end{equation}
where $F_z^{(n)}$ is a vector from $L^2(X(n),\mu_1^{(n)})$ defined
by equation (\ref{3.3-3.2.1}). The function
$M_{z,\frac{1}{2}}^{(n)}$ defined on the set of Young diagrams
with $n$ boxes is called the $z$-measure of the representation
$T_{z,\frac{1}{2}}$.
\end{defn}
The relation with the representation $T_{z,\frac{1}{2}}$ is clear
from the following Proposition.
\begin{prop} Denote by $\varphi_z$ the spherical function of
$T_{z,\frac{1}{2}}$. We have
\begin{equation}\label{4.3-4.4}
\varphi_z|_{S(2n)}(g)=\sum\limits_{|\lambda|=n}M_{z,\frac{1}{2}}^{(n)}(\lambda)w^{\lambda}(g),\;\;
g\in S(2n).
\end{equation}
\end{prop}
\begin{proof} The functions $\{\tilde{w}^{\lambda}\}$ define an
orthonormal basis in $L^2(X(n),\mu_1^{(n)})^{H(n)}$. On the other
hand, we can check that $F_z^{(n)}$ is an element of
$L^2(X(n),\mu_1^{(n)})^{H(n)}$. Therefore, we must have
\begin{equation}\label{4.3-4.5}
F_z^{(n)}(\breve{x})=\sum\limits_{|\lambda|=n}a_z^{(n)}(\lambda)\tilde{w}^{\lambda}(\breve{x}),\;
\breve{x}\in X(n).
\end{equation}
We insert expression (\ref{4.3-4.5}) into formula
(\ref{sphericalfunctionTz}). This gives
\begin{equation}
\varphi_z|_{S(2n)}(g)=\frac{1}{|X(n)|}\sum\limits_{\breve{x}\in
X(n)}\sum\limits_{|\lambda|=n}\sum\limits_{|\mu|=n}\overline{a_z^{(n)}(\lambda)}a_z^{(n)}(\mu)\tilde{w}^{\lambda}(\breve{x}\cdot
g)\tilde{w}^{\mu}(\breve{x}). \nonumber
\end{equation}
Using equation (\ref{4.2-4.2}) we find that
\begin{equation}\label{M1}
\varphi_z|_{S(2n)}(g)=\sum\limits_{|\lambda|=n}|a_z^{(n)}(\lambda)|^2w^{\lambda}(
g).
\end{equation}
From equations (\ref{4.3-4.3.1}) and (\ref{4.3-4.5}) we see that
\begin{equation}\label{M2}
M_{z,\frac{1}{2}}^{(n)}(\lambda)=|a_z^{(n)}(\lambda)|^2,
\end{equation} which gives the formula in the statement of the
Proposition.
\end{proof}
\begin{cor} We have
$$
\sum\limits_{|\lambda|=n}M_{z,\frac{1}{2}}^{(n)}(\lambda)=1,
$$
i.e. $M_{z,\frac{1}{2}}^{(n)}(\lambda)$ can be understood as a
probability measure on the set of Young diagrams with $n$ boxes.
\end{cor}
\begin{proof}
This follows from equations (\ref{M1}), (\ref{M2}), and from the
fact that
$$
\varphi_z|_{S(2n)}(e)=w^{\lambda}(e)=1.
$$
\end{proof}
\subsection{An explicit formula for $M^{(n)}_{z,\frac{1}{2}}$}
\begin{prop}\label{TheoremExplicitFormulaForMz}(Olshanski \cite{olshanskiletter}) The $z$-measure
$M_{z,\frac{1}{2}}^{(n)}(\lambda)$ admits the following explicit
formula
$$
M_{z,\frac{1}{2}}^{(n)}(\lambda)=\frac{n!}{\left(\frac{z\bar
z}{2}\right)_n}\cdot
\frac{\prod\limits_{(i,j)\in\lambda}(z+2(j-1)-(i-1))(\bar{z}+2(j-1)-(i-1))}{h(2\lambda)},
$$
where $h(2\lambda)$ denotes the product of the hook-lengths of
$2\lambda=(2\lambda_1,2\lambda_2,\ldots )$, and $(.)_n$ stands for
the Pochhammer symbol,
$$
(a)_n=a(a+1)\ldots (a+n-1)=\frac{\Gamma(a+n)}{\Gamma(a)}.
$$
In particular, it follows that $M_{z,\frac{1}{2}}^{(n)}(\lambda)$
is exactly the $z$-measure with the Jack parameter $\theta=1/2$ in
the notation of Section \ref{Sectionztheta},
$$
M_{z,\frac{1}{2}}^{(n)}(\lambda)=M^{(n)}_{z,\bar
z,\theta=\frac{1}{2}}(\lambda).
$$
\end{prop}
\begin{proof}
We start from formula (\ref{4.3-4.3.1}), and observe that this
formula can be rewritten as
\begin{equation}\label{Mzt}
M_{z,\frac{1}{2}}^{(n)}(\lambda)=\frac{1}{[(2n)!]^2}\left|\widehat{(F_z^{(n)},\tilde{w}^{\lambda})}\right|^2,
\end{equation}
where $F_z^{(n)}$, $\tilde{w}^{\lambda}$ are understood as two
functions from $C(S(2n),H(n))$, and the scalar product
$\widehat{(f_1,f_2)}$ is defined by
$$
\widehat{(f_1,f_2)}=\sum\limits_{g\in
S(2n)}f_1(g)\overline{f_2(g)}.
$$
To compute the scalar product in equation (\ref{Mzt}), we use the
characteristic map,
$$
C(S(2n), H(n))\stackrel{ch''}{\longrightarrow} \Lambda_{\mathbb C}^n,
$$
introduced in Macdonald \cite{macdonald},
Section VII.2. Here $\Lambda^n$ denotes the set of homogeneous
symmetric polynomials of degree $n$, and $\Lambda^n_{\mathbb C}$ is the
linear span of these polynomials with complex coefficients. The
characteristic map, $ch''$, is defined by
\begin{equation}\label{ch''}
ch''(f)=|H(n)|\sum\limits_{|\rho|=n}z_{\rho}^{-1}2^{-l(\rho)}p_{\rho}f(\rho).
\end{equation}
Here the symbol $z_{\rho}$ is defined by
$$
z_{\rho}=\prod\limits_{i\geq 1}i^{m_i}\cdot m_i!,
$$
where $m_i=m_i(\rho)$ is the number of parts of $\rho$ equal to
$i$. In equation (\ref{ch''}) $l(\rho)$ stands for the number of
nonzero parts in $\rho$, $p_{\rho}=p_{\rho_1}p_{\rho_2}\ldots $,
where $p_k$ stands for the $k$th power sum, and $f(\rho)$ is the value
of $f$ at elements of the double coset parameterized by the Young
diagram $\rho$, see Macdonald, Section VII.2. The map $ch''$ is an
isometry of $C(S(2n), H(n))$ onto $\Lambda_{\mathbb C}^n$. Therefore,
\begin{equation}\label{t1-1}
M_{z,\frac{1}{2}}^{(n)}(\lambda)=\frac{1}{[(2n)!]^2}\left|(ch''(F_z^{(n)}),ch''(\tilde{w}^{\lambda}))\right|^2,
\end{equation}
where the scalar product is defined by
$$
(p_{\rho},p_{\sigma})=\delta_{\rho\sigma}2^{l(\rho)}z_{\rho}.
$$
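Since $n!/z_\rho$ counts the permutations of cycle type $\rho$, the quantities $z_\rho$ satisfy $\sum_{|\rho|=n}1/z_\rho=1$; this gives a convenient sanity check on any implementation of the combinatorics entering $ch''$. A minimal Python sketch (the helper names are ours, not from the text):

```python
from math import factorial
from collections import Counter
from fractions import Fraction

def partitions(n, max_part=None):
    """Yield the partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def z_rho(rho):
    """z_rho = prod_i i^{m_i} m_i!, with m_i the multiplicity of i in rho."""
    out = 1
    for i, m in Counter(rho).items():
        out *= i**m * factorial(m)
    return out

# n!/z_rho counts permutations of cycle type rho, so the 1/z_rho sum to 1
for n in range(1, 8):
    assert sum(Fraction(1, z_rho(rho)) for rho in partitions(n)) == 1
```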
It remains to find $ch''(F_z^{(n)})$, $ch''(\tilde{w}^{\lambda})$,
and to compute the scalar product in the righthand side of
equation (\ref{t1-1}). We have
\begin{equation}\label{t1-2}
ch''(\tilde{w}^{\lambda})=(\dim
2\lambda)^{1/2}J_{\lambda}^{(\alpha=2)},
\end{equation}
where $J_{\lambda}^{(\alpha)}$ stands for the Jack polynomial with
the Jack parameter $\alpha$ parameterized by the Young diagram
$\lambda$ (in notation of Macdonald, Section VI). In order to find
$ch''(F_z^{(n)})$ it is enough to obtain a formula for
$ch''(N^{[.]_n})$. We have
$$
ch''(N^{[.]_n})=|H(n)|\sum\limits_{|\rho|=n}z_{\rho}^{-1}p_{\rho}\left(\frac{N}{2}\right)^{l(\rho)}.
$$
Since
$$
\left(\frac{N}{2}\right)^{l(\rho)}=p_{\rho}(\underset{N/2}{\underbrace{1,\ldots,1}}),
$$
we can use equation (1.4) in Section I.4 of Macdonald
\cite{macdonald}, and write
$$
ch''(N^{[.]_n})=|H(n)|\left\{\prod\limits_{i=1}^{\infty}(1-x_i)^{-\frac{N}{2}}\right\}_n.
$$
Here $\{.\}_n$ denotes the component of degree $n$. Now we have
$$
\prod\limits_{i=1}^{\infty}(1-x_i)^{-\frac{N}{2}}=\sum\limits_{\lambda}
\frac{1}{h(2\lambda)}J_{\lambda}^{(2)}(x)J_{\lambda}^{(2)}(\underset{N/2}{\underbrace{1,\ldots,1}}).
$$
The value
$J_{\lambda}^{(2)}(\underset{N/2}{\underbrace{1,\ldots,1}})$ is
known,
$$
J_{\lambda}^{(2)}(\underset{N/2}{\underbrace{1,\ldots,1}})=\prod\limits_{(i,j)\in\lambda}(N+2(j-1)-(i-1)).
$$
This gives us the following formula
$$
\prod\limits_{i=1}^{\infty}(1-x_i)^{-\frac{N}{2}}=
\sum\limits_{|\lambda|=n}
\frac{1}{h(2\lambda)}J_{\lambda}^{(2)}(x)\prod\limits_{(i,j)\in\lambda}(N+2(j-1)-(i-1)),
$$
and we obtain
\begin{equation}\label{t1-3}
\begin{split}
&ch''(F_z^{(n)})=\left(\frac{1\cdot 3\cdot\ldots\cdot
(2n-1)}{t(t+2)\ldots (t+2n-2)}\right)^{1/2}ch''(z^{[.]_n})\\
&=\left(\frac{1\cdot 3\cdot\ldots\cdot (2n-1)}{t(t+2)\ldots
(t+2n-2)}\right)^{1/2}|H(n)|\sum\limits_{|\lambda|=n}
\frac{1}{h(2\lambda)}J_{\lambda}^{(2)}(x)\prod\limits_{(i,j)\in\lambda}(z+2(j-1)-(i-1)).
\end{split}
\end{equation}
Finally, using the orthogonality relation
$$
(J_{\lambda}^{(2)},J_{\mu}^{(2)})=\delta_{\lambda\mu}h(2\lambda)
$$
we find from equations (\ref{t1-1})-(\ref{t1-3}) that
\begin{equation}
\begin{split}
M_{z,\frac{1}{2}}^{(n)}(\lambda)=&\frac{|H(n)|^2}{[(2n)!]^2}\left(\frac{1\cdot
3\cdot\ldots\cdot
(2n-1)}{t(t+2)\ldots (t+2n-2)}\right)\dim 2\lambda\\
&\times\prod\limits_{(i,j)\in\lambda}(z+2(j-1)-(i-1))(\bar
z+2(j-1)-(i-1)).
\end{split}
\nonumber
\end{equation}
Noting that $|H(n)|=2^nn!$, and that $\dim
2\lambda=\frac{(2n)!}{h(2\lambda)}$, we arrive at the first formula
in the statement of the Proposition. The fact that
$M_{z,\frac{1}{2}}^{(n)}(\lambda)$ coincides with the $z$-measure
with $\theta=1/2$ in the notation of the Section
\ref{Sectionztheta} can now be checked directly using formulae for
$H(\lambda,\theta)$ and $H'(\lambda,\theta)$ stated in Section
\ref{Sectionztheta}.
\end{proof}
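As a numerical cross-check, the explicit formula above can be summed over all $\lambda$ with $|\lambda|=n$ and compared with the normalization established earlier. A small Python sketch (the helper names are ours; floating point is used, so the check holds up to rounding):

```python
from math import factorial

def partitions(n, max_part=None):
    """Yield the partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def hook_product(lam):
    """Product of the hook lengths of the Young diagram lam (0-indexed cells)."""
    if not lam:
        return 1
    conj = [sum(1 for p in lam if p > j) for j in range(lam[0])]
    h = 1
    for i, row in enumerate(lam):
        for j in range(row):
            h *= (row - j - 1) + (conj[j] - i - 1) + 1  # arm + leg + 1
    return h

def pochhammer(a, n):
    out = 1.0
    for k in range(n):
        out *= a + k
    return out

def z_measure(lam, z):
    """The explicit formula for M_{z,1/2}^{(n)}(lam), with n = |lam|."""
    n = sum(lam)
    zbar = z.conjugate()
    box = 1.0
    for i, row in enumerate(lam):
        for j in range(row):
            # content factor (z + 2(j-1) - (i-1)) with 0-indexed i, j
            box *= ((z + 2 * j - i) * (zbar + 2 * j - i)).real
    return (factorial(n) / pochhammer((z * zbar).real / 2, n)
            * box / hook_product(tuple(2 * p for p in lam)))

# the z-measure is a probability measure on Young diagrams with n boxes
z = 1.3 + 0.8j
for n in range(1, 6):
    assert abs(sum(z_measure(lam, z) for lam in partitions(n)) - 1.0) < 1e-9
```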
\section{Introduction}
Progress on our understanding of the neutrinos continues to be exhilarating. This progress is due mainly to experiments on neutrino oscillation. Here, we explain the physics of oscillation in vacuum and in matter.
\section{The physics of neutrino oscillation}
Treatments of the physics of neutrino oscillation may be found in, for example, [1, 2]. Here, we give a slightly modified treatment, and explain some points that have caused puzzlement, such as the fact that, even if neutrinos are their own antiparticles, their interaction with matter can still cause a difference between neutrino and so-called ``antineutrino'' oscillations.
We assume that the couplings of the neutrinos and charged leptons to the $W$ boson are correctly described by the Standard Model, extended to take leptonic mixing into account. These couplings are then summarized by the Lagrangian
\beeq
{\cal L}_W = -\frac{g}{\sqrt{2}} \sum_{{\stackrel{\alpha=e,\mu,\tau}{\sss i=1,2,3}}} (\overline{\ell_{L\alpha}} \gamma^\lambda U_{\alpha i} \nu_{Li} W_\lambda^- + \overline{\nu_{Li}} \gamma^\lambda U_{\alpha i}^* \ell_{L\alpha} W_\lambda^+) ~~.
\label{eq2.1}
\eeeq
Here, $L$ denotes left-handed chiral projection, $\ell_\alpha$ is the charged-lepton mass eigenstate of flavor $\alpha$ ($\ell_e$ is the electron, $\ell_\mu$ the muon, and $\ell_\tau$ the tau), and $\nu_i$ is a neutrino mass eigenstate. The constant $g$ is the semiweak coupling constant, and $U$ is the leptonic mixing matrix [3].
Supposing, as assumed by \Eq{2.1}, that there are only three charged-lepton mass eigenstates, and three neutrino mass eigenstates, $U$ is $3\times3$, and may be written as
\beeq
U = \left[ \begin{array}{ccc}
U_{e1} & U_{e2} & U_{e3} \\
U_{\mu 1} & U_{\mu 2} & U_{\mu 3} \\
U_{\tau 1} & U_{\tau 2} & U_{\tau 3}
\end{array}\right] ~~ .
\label{eqI.3}
\eeeq
In the extended Standard Model, the $3\times 3$ mixing matrix $U$ is unitary, and we shall assume that this is also true in nature. However, we note that if there are ``sterile'' neutrinos (neutrinos that do not couple to the $W$ or $Z$ boson), then there are $N>3$ neutrino mass eigenstates, and the leptonic mixing matrix $U$ that is unitary is $N\times N$, rather than $3\times 3$. The $3\times 3$ matrix of \Eq{I.3} is then just a submatrix, and is not unitary [4].
Supposing that the unitary mixing matrix is $N\times N$, not because of the existence of sterile neutrinos but because there are $N$ conventional lepton generations, how many physically-significant parameters does $U$ contain? To see how many, we note first that an $N\times N$ complex matrix contains $N^2$ entries, each of which may have a real and an imaginary part. Thus, the matrix can be fully specified by $2N^2$ real parameters. If the matrix is unitary, then each of its columns must be a vector of unit length: $\sum_\alpha |U_{\alpha i}|^2 = 1; \;i=1,N$. Together, these conditions are $N$ constraints. In addition, each pair of columns in $U$ must be orthogonal vectors: $\sum_\alpha U_{\alpha i}^* \,U_{\alpha j} = 0; \; i,j=1,N$ with $i\neq j$. Taking into account that each of these $N(N-1)/2$ orthogonality conditions has both a real and an imaginary part, we see that these conditions impose $N(N-1)$ constraints. Thus, the number of independent parameters in a general $N\times N$ unitary matrix is $2N^2 -N -N(N-1) = N^2$.
However, in the case of our unitary matrix, $U$, some of these parameters may be removed. From \Eq{2.1}, $\langle\ell_\alpha |{\cal L}_W| \nu_i W^-\!\rangle \;\propto U_{\alpha i}$. Now, without affecting the physics, we are always free to redefine the state $\langle\ell_\alpha|$ by multiplying it by a phase factor: $\langle\ell_\alpha| \rightarrow \langle\ell_\alpha^\prime| = \langle\ell_\alpha| e^{-i \varphi_\alpha}$. Clearly, this has the effect of multiplying the $U_{\alpha i}$, for all $i$, by the same factor: $U_{\alpha i} \rightarrow U_{\alpha i}^\prime = e^{-i \varphi_\alpha}U_{\alpha i}$. If there are $N \; \ell_\alpha$, this phase redefinition of them may be used to remove $N$ phases from $U$. It might be thought that analogous phase redefinition of the neutrinos $\nu_i$ could be used to remove additional phases.
However, unlike the quarks and charged leptons, the neutrino mass eigenstates $\nu_i$ may be their own antiparticles: $\overline{\nu_i } = \nu_i$. This possibility motivates the search for neutrinoless nuclear double beta decay, as discussed at this school by K. Zuber. If $\overline{\nu_i } = \nu_i$, then physically significant phases cannot be eliminated by phase redefinition of the $\nu_i$ [5]. To allow for the possibility that $\overline{\nu_i } = \nu_i$, we shall retain the phases that can be eliminated only when $\overline{\nu_i } \neq \nu_i$. Then $U$ is left with $N^2 - N$ physically significant parameters. These are commonly chosen to be ``mixing angles''---parameters that would be present even if $U$ were real---and complex phase factors. To see how many of the parameters are mixing angles, and how many are phases, let us imagine for a moment that $U$ is real.
Then it can be fully specified by its $N^2$ real entries. These are subject to the unitarity requirement that the N columns of $U$ all have unit length: $\sum_\alpha U_{\alpha i}^2 = 1,\; i=1,N$, and the requirement that all $N(N-1)/2$ pairs of columns be orthogonal: $\sum_\alpha U_{\alpha i} U_{\alpha j} = 0,\; i,j=1,N$ with $i\neq j$. Hence, a real mixing matrix $U$ for $N$ generations has $N^2-N-N(N-1)/2 = N(N-1)/2$ physically significant parameters, and a complex one has this number of mixing angles. Since a complex $U$ has $N(N-1)$ physically significant parameters in all, the fact that $N(N-1)/2$ of them are mixing angles means that the remaining $N(N-1)/2$ must be complex phase factors.
In summary, a complex $N\times N$ unitary mixing matrix $U$ for $N$ lepton generations may contain---
\begin{center}
\begin{tabular}{cc}
$N(N-1)/2$ & mixing angles \\
$N(N-1)/2$ & complex phase factors \\
\hline
$N(N-1)$ & physically significant parameters in all
\end{tabular}
\end{center}
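The counting in the table is elementary but easy to mis-remember; the following trivial check just re-derives it for several $N$:

```python
for N in range(2, 8):
    total_real = 2 * N * N          # real parameters of a complex N x N matrix
    unit_norm = N                   # one unit-length condition per column
    orthogonality = N * (N - 1)     # real + imaginary parts of the pairings
    unitary_params = total_real - unit_norm - orthogonality
    assert unitary_params == N * N  # a general N x N unitary has N^2 parameters
    # removing N charged-lepton rephasings leaves N^2 - N parameters,
    # split evenly between mixing angles and complex phases:
    angles = N * (N - 1) // 2
    phases = N * (N - 1) // 2
    assert angles + phases == unitary_params - N
```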
Throughout most of these lecture notes, we will assume that $N=3$. Then the mixing matrix contains three mixing angles and three complex phase factors. It can be shown that this matrix can be written in the form
\begin{eqnarray}
U & = & \left[ \begin{array}{ccc}
1 & 0 & 0 \\
0 & c_{23} & s_{23} \\
0 & -s_{23} & c_{23}
\end{array} \right] \times \left[ \begin{array}{ccc}
c_{13} & 0 & s_{13}e^{-i\delta} \\
0 & 1 & 0 \\
-s_{13}e^{i\delta} & 0 & c_{13}
\end{array} \right] \times \left[ \begin{array}{ccc}
c_{12} & s_{12} & 0 \\
-s_{12} & c_{12} & 0 \\
0 & 0 & 1
\end{array} \right] \nonumber \\
& \times & \left[ \begin{array}{ccc}
e^{i\xi_1 /2} & 0 & 0 \\
0 & e^{i\xi_2 /2} & 0 \\
0 & 0 & 1
\end{array} \right] ~~ .
\label{eqI.5.1}
\end{eqnarray}
Here, $c_{ij} \equiv \cos \theta_{ij}$ and $s_{ij} \equiv \sin \theta_{ij}$, where the $\theta_{ij}$ are the three mixing angles. The quantities $\delta,\; \xi_1$, and $\xi_2$ are the three complex phases.
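As a check on \Eq{I.5.1}, one can multiply the four factors numerically and verify that the result is unitary for arbitrary angles and phases, so that each mass eigenstate's flavor fractions $|U_{\alpha i}|^2$ sum to one. A pure-Python sketch (the function and variable names are ours):

```python
import cmath
from math import cos, sin

def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def pmns(th12, th13, th23, delta, xi1=0.0, xi2=0.0):
    """Build U as the product of the three rotations and the phase matrix."""
    c12, s12 = cos(th12), sin(th12)
    c13, s13 = cos(th13), sin(th13)
    c23, s23 = cos(th23), sin(th23)
    R23 = [[1, 0, 0], [0, c23, s23], [0, -s23, c23]]
    U13 = [[c13, 0, s13 * cmath.exp(-1j * delta)],
           [0, 1, 0],
           [-s13 * cmath.exp(1j * delta), 0, c13]]
    R12 = [[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]]
    P = [[cmath.exp(1j * xi1 / 2), 0, 0],
         [0, cmath.exp(1j * xi2 / 2), 0],
         [0, 0, 1]]
    return mat_mul(mat_mul(mat_mul(R23, U13), R12), P)

U = pmns(0.59, 0.15, 0.84, 1.2, 0.3, 0.7)  # arbitrary angles and phases
# U Udagger should be the identity
for i in range(3):
    for j in range(3):
        entry = sum(U[i][k] * U[j][k].conjugate() for k in range(3))
        assert abs(entry - (1 if i == j else 0)) < 1e-12
# flavor fractions of each mass eigenstate sum to one
for i in range(3):
    assert abs(sum(abs(U[a][i]) ** 2 for a in range(3)) - 1) < 1e-12
```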
From \Eq{2.1}, we observe that the amplitude for the decay $W^+ \rightarrow \overline{\ell_\alpha} + \nu_i$ to yield the particular charged-lepton mass eigenstate $ \overline{\ell_\alpha}$ in combination with the particular neutrino mass eigenstate $\nu_i$ is proportional to $U_{\alpha i}^*$. Thus, if we define the ``neutrino state of flavor $\alpha$'', $|\nu_\alpha\rangle$, with $\alpha = e, \mu$, or $\tau$, to be the neutrino state that accompanies the particular charged lepton $ \overline{\ell_\alpha}$ in leptonic $W^+$ decay, then we must have
\beeq
|\nu_\alpha\rangle = \sum_{i=1}^3 U_{\alpha i}^* \, |\nu_i\rangle ~~ .
\label{eq2.2}
\eeeq
From \Eq{2.1}, the amplitude for this $\nu_\alpha$ to interact and produce the particular charged-lepton $\ell_\beta$ is proportional to
\beeq
\sum_{i=1}^3 U_{\beta i} U_{\alpha i}^* = \delta_{\beta\alpha} ~~ ,
\label{eq2.3}
\eeeq
where we have invoked the unitarity of $U$. We see that when a $\nu_e$, the neutrino born in a $W^+$ decay that produced an $\bar{e}$, interacts and produces a second charged lepton, the latter can only be an $e$. Similarly for $\nu_\mu$ and $\nu_\tau$.
We may invert \Eq{2.2} to obtain
\beeq
|\nu_i\rangle = \sum_{\alpha = e, \mu, \tau} U_{\alpha i} |\nu_\alpha\rangle ~~ .
\label{eqI2.1}
\eeeq
This expresses the mass eigenstate $|\nu_i\rangle$ in terms of the states of definite flavor, $|\nu_\alpha\rangle$. We see that the flavor-$\alpha$ fraction of $|\nu_i\rangle$ is simply $|U_{\alpha i}|^2$.
\subsection{Neutrino oscillation in vacuum}
Consider the vacuum neutrino oscillation experiment depicted schematically in the upper part of Figure \ref{f1}.
\begin{figure}[!htbp]
\centering
\includegraphics[width=14cm]{SUSSP61_Fig1}
\caption{Neutrino flavor change (oscillation) in vacuum. ``Amp'' denotes an amplitude.}
\label{f1}
\end{figure}
A neutrino source produces, via $W$ exchange, the charged lepton $ \overline{\ell_\alpha}$ of flavor $\alpha$, plus an accompanying neutrino that, by definition, must be a $\nu_\alpha$. The neutrino then propagates, in vacuum, a distance $L$ to a target/detector. There, it interacts via $W$ exchange and produces a second charged lepton $\ell_\beta$ of flavor $\beta$. Thus, at the moment of its interaction in the detector, the neutrino is a $\nu_\beta$. If the flavors $\alpha$ and $\beta$ are different, then, during the neutrino's trip to the detector, it has changed, or ``oscillated'', from a $\nu_\alpha$ into a $\nu_\beta$.
In the neutrino mass eigenstate basis, the particle that travels from the neutrino source to the detector is one or another of the mass eigenstates $\nu_i$. In a given event, we will not know which $\nu_i$ was actually involved. Hence, the amplitude for the oscillation $\nu_\alpha \rightarrow \nu_\beta$, Amp\,($\nu_\alpha \rightarrow \nu_\beta$), is a coherent sum over the contributions of all the $\nu_i$, as shown in the lower part of Figure~\ref{f1}. The contribution of an individual $\nu_i$ is a product of three factors. The first is the amplitude for the neutrino produced together with the charged lepton $\overline{\ell_\alpha} $ to be, in particular, a $\nu_i$. From \Eq{2.1}, this amplitude is $U_{\alpha i}^*$, as indicated in Figure~\ref{f1}. The second factor, Prop\,($\nu_i$), is the amplitude for the mass eigenstate $\nu_i$ to propagate from the source to the detector. The final factor is the amplitude for the charged lepton created when the $\nu_i$ interacts in the detector to be, in particular, an $\ell_\beta$. From \Eq{2.1}, this amplitude is $U_{\beta i}$.
From elementary quantum mechanics, the propagation amplitude Prop($\nu_i$) is simply $\exp{[-im_i\tau_i]}$, where $m_i$ is the mass of $\nu_i$, and $\tau_i$ is the proper time that elapses in the $\nu_i$ rest frame during its propagation. By Lorentz invariance, $m_i\tau_i = E_i t - p_i L$, where $L$ is the lab-frame distance between the neutrino source and the detector, $t$ is the lab-frame time taken for the beam to traverse this distance, and $E_i$ and $p_i$ are, respectively, the lab-frame energy and momentum of the $\nu_i$ component of the beam.
Once the absolute square $|$Amp$\,(\nu_\alpha \rightarrow \nu_\beta)|^2$ is taken to compute the probability for the oscillation $\nu_\alpha \rightarrow \nu_\beta$, only the {\em relative} phases of the propagation amplitudes Prop\,($\nu_i$) for different mass eigenstates will have physical consequences. From the discussion above, the relative phase of Prop\,($\nu_1$) and Prop\,($\nu_2$), $\delta\phi(12)$ is given by
\begin{eqnarray}
\delta\phi(12) & = & (E_2 t - p_2 L) - (E_1t-p_1L) \nonumber \\
& = & (p_1 - p_2)L - (E_1 - E_2)t ~~ .
\label{eq2.4}
\end{eqnarray}
In practice, experiments do not measure the transit time $t$. However, Lipkin has shown [6] that, to an excellent approximation, $t$ may be taken to be $L/\bar{v}$, where
\beeq
\bar{v} \equiv \frac{p_1 + p_2}{E_1 + E_2}
\label{eq2.5}
\eeeq
is an approximation to the average of the velocities of the $\nu_1$ and $\nu_2$ components of the beam. We then have
\begin{eqnarray}
\delta\phi(12) & \cong & \frac{p_1^2 - p_2^2}{p_1 + p_2}L - \frac{E_1^2-E_2^2}{p_1 + p_2}L \nonumber \\
& = & (m_2^2-m_1^2)\frac{L}{p_1 + p_2} \cong (m_2^2-m_1^2)\frac{L}{2E} ~~ ,
\label{eq2.6}
\end{eqnarray}
where, in the last step, we have used the fact that for highly relativistic neutrinos, $p_1$ and $p_2$ are both approximately equal to the beam energy $E$. We conclude that all the relative phases in Amp($\,\nu_\alpha \rightarrow \nu_\beta$) will be correct if we take
\beeq
\mathrm{Prop}\,(\nu_i) = e^{-im_i^2\,L/2E} ~~.
\label{eq2.7}
\eeeq
Combining the factors that appear in the lower part of Figure~\ref{f1}, we have
\beeq
\mathrm{Amp}\,(\nu_\alpha \rightarrow \nu_\beta) = \sum_i U_{\alpha i}^* e^{-im_i^2\,L/2E} U_{\beta i} ~~ .
\label{eq2.8}
\eeeq
Squaring, and making judicious use of the unitarity of $U$, we find that the probability of $\nu_\alpha \rightarrow \nu_\beta$, P\,($\nu_\alpha\rightarrow\nu_\beta$), is given by
\begin{eqnarray}
\mathrm{P}(\nu_\alpha\rightarrow\nu_\beta) & = &|\mathrm{Amp}\,(\nu_\alpha\rightarrow\nu_\beta)|^2 \nonumber \\
& = & \delta_{\alpha\beta} - 4\sum_{i>j} \Re (U_{\alpha i}^*U_{\beta i}U_{\alpha j}U_{\beta j}^*) \sin^2(\Delta m^2_{ij}L/4E) \nonumber \\
& & \phantom{ \delta_{\alpha\beta}}+ 2\sum_{i>j} \Im (U_{\alpha i}^*U_{\beta i}U_{\alpha j}U_{\beta j}^*) \sin(\Delta m^2_{ij} L/2E) ~~ .
\label{eq2.9}
\end{eqnarray}
Here, $\Delta m^2_{ij} \equiv m^2_{i} - m^2_{j}$ is the splitting between the squared masses of $\nu_i$ and $\nu_j$. It is clear from the derivation of \Eq{2.9} that this expression would hold for any number of flavors and equal number of mass eigenstates.
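It is straightforward to verify numerically that the expanded form of \Eq{2.9} agrees with the squared amplitude of \Eq{2.8}, and that the resulting probabilities sum to one over the final flavor. A sketch in pure Python (the explicit matrix form and the parameter values are ours, chosen only to produce a generic unitary $U$):

```python
import cmath
from math import cos, sin

def build_u(th12, th13, th23, delta):
    """A generic 3x3 unitary mixing matrix (standard parameterization)."""
    c12, s12 = cos(th12), sin(th12)
    c13, s13 = cos(th13), sin(th13)
    c23, s23 = cos(th23), sin(th23)
    ei = cmath.exp(1j * delta)
    return [
        [c12 * c13, s12 * c13, s13 / ei],
        [-s12 * c23 - c12 * s23 * s13 * ei,
         c12 * c23 - s12 * s23 * s13 * ei, s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * ei,
         -c12 * s23 - s12 * c23 * s13 * ei, c23 * c13],
    ]

def prob_amp(U, m2, alpha, beta, L_over_2E):
    """|Amp|^2 with Prop(nu_i) = exp(-i m_i^2 L/2E), as in Eq. (2.8)."""
    amp = sum(U[alpha][i].conjugate() * cmath.exp(-1j * m2[i] * L_over_2E)
              * U[beta][i] for i in range(3))
    return abs(amp) ** 2

def prob_expanded(U, m2, alpha, beta, L_over_2E):
    """The real/imaginary-part expansion of Eq. (2.9)."""
    p = 1.0 if alpha == beta else 0.0
    for i in range(3):
        for j in range(i):
            q = (U[alpha][i].conjugate() * U[beta][i]
                 * U[alpha][j] * U[beta][j].conjugate())
            dm2 = m2[i] - m2[j]
            p += -4.0 * q.real * sin(dm2 * L_over_2E / 2) ** 2  # Dm^2 L/4E
            p += 2.0 * q.imag * sin(dm2 * L_over_2E)            # Dm^2 L/2E
    return p

U = build_u(0.59, 0.15, 0.84, 1.2)   # arbitrary angles and phase
m2 = [0.0, 1.0, 3.5]                 # arbitrary (mass)^2 values
L_over_2E = 0.7                      # arbitrary kinematic factor L/2E

for a in range(3):
    for b in range(3):
        assert abs(prob_amp(U, m2, a, b, L_over_2E)
                   - prob_expanded(U, m2, a, b, L_over_2E)) < 1e-9
    # oscillation only redistributes flux among the flavors
    assert abs(sum(prob_amp(U, m2, a, b, L_over_2E) for b in range(3)) - 1) < 1e-9
```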
Given that the particles described by the oscillation probability of \Eq{2.9} are born with an $\overline{\ell_\alpha}$ and convert into an $\ell_\beta$ in the detector, they are {\em neutrinos}, rather than {\em antineutrinos} (should there be a difference). To obtain the corresponding oscillation probability for antineutrinos, we observe that $\overline{\nu_\alpha}\rightarrow\overline{\nu_\beta}$ is the CPT-mirror image of $\nu_\beta\rightarrow\nu_\alpha$. Thus, if CPT invariance holds,
\beeq
\mathrm{P}(\overline{\nu_\alpha}\rightarrow\overline{\nu_\beta}) = \mathrm{P}(\nu_\beta\rightarrow\nu_\alpha) ~~ .
\label{eq2.10}
\eeeq
Now, from \Eq{2.9}, we see that
\beeq
\mathrm{P}(\nu_\beta\rightarrow\nu_\alpha;\; U) = \mathrm{P}(\nu_\alpha\rightarrow\nu_\beta;\; U^*) ~~ .
\label{eq2.11}
\eeeq
Hence, assuming CPT invariance holds,
\beeq
\mathrm{P}(\overline{\nu_\alpha}\rightarrow\overline{\nu_\beta};\; U) = \mathrm{P}(\nu_\alpha\rightarrow\nu_\beta;\; U^*) ~~ .
\label{eq2.12}
\eeeq
That is, the probability for oscillation of an antineutrino is the same as that for a neutrino, except that the mixing matrix $U$ is replaced by its complex conjugate. Thus, from \Eq{2.9},
\begin{eqnarray}
\mathrm{P}(\optbar{\nu_\alpha}\rightarrow\optbar{\nu_\beta}) & = & \delta_{\alpha\beta} - 4\sum_{i>j} \Re (U_{\alpha i}^*U_{\beta i}U_{\alpha j}U_{\beta j}^*) \sin^2(\Delta m^2_{ij} L/4E) \nonumber \\
& & \phantom{ \delta_{\alpha\beta}}\raisebox{-.8ex}{$\stackrel{+}{{\sss (}-{\sss )}}$} 2\sum_{i>j} \Im (U_{\alpha i}^* U_{\beta i}U_{\alpha j}U_{\beta j}^*) \sin(\Delta m^2_{ij} L/2E) ~~ .
\label{eq2.13}
\end{eqnarray}
We see that if $U$ is not real, the probabilities for $\nu_\alpha\rightarrow\nu_\beta$ and for the corresponding antineutrino oscillation, $\overline{\nu_\alpha}\rightarrow\overline{\nu_\beta}$, will in general differ. Since $\nu_\alpha\rightarrow\nu_\beta$ and $\overline{\nu_\alpha} \rightarrow \overline{\nu_\beta}$ are CP-mirror-image processes, this difference will be a violation of CP invariance.
As \Eq{2.13} makes clear, neutrino oscillation in vacuum from one flavor $\alpha$ into a different one $\beta$ implies nonzero mass splittings $\Delta m^2_{ij}$, hence nonzero neutrino masses. It also implies nontrivial leptonic mixing. That is, the mixing matrix $U$ cannot be diagonal.
Including the so-far omitted factors of $\hbar$ and $c$, we have
\beeq
\Delta m^2_{ij} \frac{L}{4E} = 1.27 \, \Delta m^2_{ij} \mathrm{(eV}^2) \frac{L\mathrm{(km)}} {E\mathrm{(GeV)}} ~~ .
\label{eq2.14}
\eeeq
From \Eq{2.13}, if the $U$ matrix cooperates, the probability for $\nu_\alpha\rightarrow\nu_\beta,\; \beta \neq \alpha$, will be appreciable if the kinematical phase difference in \Eq{2.14} is ${\cal O}(1)$ or larger. This requires only that for some $ij$,
\beeq
\Delta m^2_{ij} \mathrm{(eV}^2)\;\raisebox{-.6ex}{$\stackrel{>}{\sim}$}\; \frac {E\mathrm{(GeV)}}{L\mathrm{(km)}} ~~ .
\label{eq2.15}
\eeeq
Thus, for example, an experiment that studies 1\,GeV neutrinos that travel a distance $L \sim 10^4$km, the diameter of the earth, will be sensitive to neutrino (mass)$^2$ splittings $\Delta m^2_{ij}$ as small as 10$^{-4}$eV$^2$. Through quantum interference between neutrino mass eigenstates of different masses, neutrino oscillation gives us sensitivity to very tiny (mass)$^2$ splittings. However, as \Eq{2.13} underscores, oscillation cannot determine the masses $m_i$ of the individual mass eigenstates. To learn those will require another approach.
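The numerical coefficient in \Eq{2.14} is simply $1/4\hbar c$ expressed in the mixed units $\mathrm{eV}^2\cdot\mathrm{km}/\mathrm{GeV}$; restoring the factors explicitly (the value of $\hbar c$ below is the standard one):

```python
# hbar*c = 0.1973... GeV*fm; since 1 eV^2 = 1e-18 GeV^2 and 1 km = 1e18 fm,
# the combination (1 eV^2)(1 km)/(1 GeV) equals exactly 1 GeV*fm, and the
# dimensionless kinematic phase Dm^2 L / 4E carries the coefficient
hbar_c = 0.1973269804  # GeV*fm

factor = (1.0e-18 * 1.0e18) / (4.0 * hbar_c)
assert abs(factor - 1.27) < 0.005  # reproduces the 1.27 of the text
```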
There are basically two kinds of neutrino oscillation experiments. In the first, an {\em appearance} experiment, one starts with a beam of neutrinos that initially are purely of flavor $\alpha$, and looks for the appearance in this beam of neutrinos of a new flavor $\beta,\; \beta \neq \alpha$, that were not originally present in the beam. In the second kind of experiment, a {\em disappearance} experiment, one starts with a known flux of $\nu_\alpha$, and looks to see whether some of the initial $\nu_\alpha$ flux disappears as the beam travels.
By the definition of ``probability'', the probability that a neutrino changes flavor, plus the probability that it does not change flavor, must equal unity. That is, we must have
\beeq
\sum_\beta \mathrm{P}(\nu_\alpha\rightarrow\nu_\beta) = \sum_\beta \mathrm{P}(\overline{\nu_\alpha}\rightarrow\overline{\nu_\beta}) =1 ~~ ,
\label{eq2.16}
\eeeq
where the sum is over all final flavors $\beta$, including the initial flavor $\alpha$. From the unitarity of $U$, which implies that $\sum_\beta U_{\beta i}U_{\beta j}^* = \delta_{ij}$, it immediately follows that the oscillation probabilities of \Eq{2.13} do obey this constraint.
Neutrino flavor oscillation does not change the total flux in a neutrino beam. It merely redistributes it among the flavors. However, if we create a beam of neutrinos that at birth are of some active (i.e., weakly interacting) flavor, (muon neutrinos, for example), and some of these neutrinos oscillate into sterile (i.e., non-interacting) flavors, then some of the total {\em active} neutrino flux will have disappeared.
The combination of the CPT-invariance constraint of \Eq{2.10} and the probability constraint of \Eq{2.16} has powerful consequences for CP violation. To see this, consider the CP-violating differences
\beeq
\Delta_{\alpha\beta} \equiv \mathrm{P}(\nu_\alpha\rightarrow\nu_\beta) - \mathrm{P}(\overline{\nu_\alpha}\rightarrow\overline{\nu_\beta}) ~~ .
\label{eq2.17}
\eeeq
If CPT invariance holds, then from \Eq{2.10}
\beeq
\Delta_{\beta\alpha} = -\Delta_{\alpha\beta} ~~ .
\label{eq2.18}
\eeeq
In particular,
\beeq
\Delta_{\alpha\alpha} = 0 ~~ .
\label{eq2.19}
\eeeq
That is, there can be no CP-violating difference between the survival probabilities P$(\nu_\alpha\rightarrow\nu_\alpha)$ and P$(\overline{\nu_\alpha}\rightarrow\overline{\nu_\alpha})$. Hence, there can be no observable CP violation in a disappearance experiment. Now, from \Eq{2.16}, it follows that
\beeq
\sum_\beta \Delta_{\alpha\beta} = 0 ~~ ,
\label{eq2.20}
\eeeq
where the sum runs over all flavors, including $\beta=\alpha$. However, in view of \Eq{2.19}, \Eq{2.20} implies that
\beeq
\sum_{\beta\neq\alpha} \Delta_{\alpha\beta} = 0 ~~ .
\label{eq2.21}
\eeeq
If there are only three neutrino flavors, $\nu_e, \;\nu_\mu$, and $\nu_\tau$, then this constraint implies that, in particular,
\beeq
\Delta_{e\mu} + \Delta_{e\tau} = 0 \hspace{1.5cm} \mathrm{and} \hspace{1.5cm}
\Delta_{\mu e} + \Delta_{\mu\tau} = 0 ~~ .
\label{eq2.22}
\eeeq
From these relations and \Eq{2.18}, we see that
\beeq
\Delta_{e\mu} = \Delta_{\mu\tau} = \Delta_{\tau e} = -\Delta_{\mu e} = -\Delta_{\tau\mu} = -\Delta_{e\tau} \equiv \Delta ~~ .
\label{eq2.23}
\eeeq
In summary, if CPT holds, then the CP-violating difference $\Delta_{\alpha\beta} = \mathrm{P}(\nu_\alpha\rightarrow\nu_\beta) - \mathrm{P}(\overline{\nu_\alpha}\rightarrow\overline{\nu_\beta})$ can be nonvanishing only for $\beta \neq \alpha$. If, in addition, there are only three flavors, then the six possibly-nonvanishing $\Delta_{\alpha\beta}$, shown in \Eq{2.23}, must all be equal, apart from a predicted minus sign [7]. (If there are more than three flavors, then \Eq{2.23} need not hold.)
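The equalities of \Eq{2.23} can also be confirmed numerically for any unitary $3\times 3$ mixing matrix. The sketch below (illustrative angles and phases, not fitted values) computes the CP-violating differences $\Delta_{\alpha\beta}$ from the amplitudes of \Eq{2.23b}, using $U \rightarrow U^*$ for the antineutrino amplitudes:

```python
import cmath, math

def mixing_matrix(th12, th13, th23, delta):
    s12, c12 = math.sin(th12), math.cos(th12)
    s13, c13 = math.sin(th13), math.cos(th13)
    s23, c23 = math.sin(th23), math.cos(th23)
    em, ep = cmath.exp(-1j*delta), cmath.exp(1j*delta)
    return [
        [c12*c13,                    s12*c13,                    s13*em],
        [-s12*c23 - c12*s23*s13*ep,  c12*c23 - s12*s23*s13*ep,   s23*c13],
        [s12*s23 - c12*c23*s13*ep,  -c12*s23 - s12*c23*s13*ep,   c23*c13],
    ]

def cp_diff(U, a, b, ph):
    """Delta_ab = P(nu_a -> nu_b) - P(nubar_a -> nubar_b), Eqs. (2.17), (2.23b)."""
    nu   = sum(U[a][i].conjugate()*cmath.exp(-1j*ph[i])*U[b][i] for i in range(3))
    anti = sum(U[a][i]*cmath.exp(-1j*ph[i])*U[b][i].conjugate() for i in range(3))
    return abs(nu)**2 - abs(anti)**2

e, mu, tau = 0, 1, 2
U = mixing_matrix(0.59, 0.15, 0.78, 1.2)
ph = [0.0, 0.4, 2.3]
# Eq. (2.23): Delta_emu = Delta_mutau = Delta_taue = -Delta_mue, etc.
```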
Counter to intuition, the CP-violating difference $\Delta_{\alpha\beta} \equiv \mathrm{P}(\nu_\alpha\rightarrow\nu_\beta) - \mathrm{P}(\overline{\nu_\alpha}\rightarrow\overline{\nu_\beta})$ between neutrino and what we conventionally call ``antineutrino'' oscillation probabilities can still be nonvanishing even when the $\nu_i$ are identical to their antiparticles.
Indeed, $\Delta_{\alpha\beta}$ is actually completely independent of whether the $\nu_i$ are their own antiparticles or not. We illustrate this by comparing the processes $\nu_\mu \rightarrow \nu_e$ and
``$\overline{\nu_\mu} \rightarrow \overline{\nu_e}$'', depicted in Figure~\ref{f1-1}.
\begin{figure}[!htbp]
\centering
\includegraphics[width=14cm]{SUSSP61_Fig1-1}
\caption{The CP-mirror-image oscillations $\nu_\mu \rightarrow \nu_e$ and ``$\overline{\nu_\mu} \rightarrow \overline{\nu_e}$''. In each process, the particle that travels down the beamline is one or another of the mass eigenstates, and the amplitude is a coherent sum over the contributions of these eigenstates, as indicated. In ``$\overline{\nu_\mu} \rightarrow \overline{\nu_e}$'', the mass eigenstate $\overline{\nu_i}$ may or may not be identical, apart from its polarization, to the corresponding $\nu_i$ in $\nu_\mu \rightarrow \nu_e$. The propagator for this $\overline{\nu_i}$, $\exp(-im^2_i L/2E)$, is identical to that for the corresponding $\nu_i$ in either case. The elements of the $U$ matrix that, according to \Eq{2.1}, appear at the beam-particle production and detection vertices are shown.}
\label{f1-1}
\end{figure}
In $\nu_\mu \rightarrow \nu_e$, the neutrino is created together with a $\mu^+$ in $\pi^+$ decay. After traveling down a beamline to a detector, it is detected via its production of an $e^-$. In the corresponding ``antineutrino'' oscillation, ``$\overline{\nu_\mu} \rightarrow \overline{\nu_e}$'', the particle that travels down the beamline is created together with a $\mu^-$ in $\pi^-$ decay, and is detected via its production in the detector of an $e^+$.
One never directly observes the particle that travels down the beamline; it is an intermediate state. In terms of the charged leptons that one does (or at least can) observe, $\nu_\mu \rightarrow \nu_e$ and ``$\overline{\nu_\mu} \rightarrow \overline{\nu_e}$'' are clearly different, CP-mirror-image processes: the first involves a $\mu^+$ and $e^-$, while the second involves a $\mu^-$ and $e^+$.
Thus, even if $\overline{\nu_i} = \nu_i$, $\nu_\mu \rightarrow \nu_e$ and ``$\overline{\nu_\mu} \rightarrow \overline{\nu_e}$'' can have different probabilities, and if they do, the difference is a violation of CP invariance.
Even if $\overline{\nu_i} = \nu_i$, the beam particle will create an $e^-$ in $\nu_\mu \rightarrow \nu_e$, but an $e^+$ in ``$\overline{\nu_\mu} \rightarrow \overline{\nu_e}$'', because it is oppositely polarized in the two processes. Due to the chirally left-handed structure of the weak interaction, reflected in the Lagrangian of \Eq{2.1}, the beam particle will have helicity $h = -1/2$ in the first process, but $h = +1/2$ in the second. Due to this same parity-violating left-handed structure, the $h = -1/2$ beam particle will create an $e^-$ (via the first term in \Eq{2.1}) in $\nu_\mu \rightarrow \nu_e$, while the $h = +1/2$ beam particle will create an $e^+$ (via the second term in \Eq{2.1}) in ``$\overline{\nu_\mu} \rightarrow \overline{\nu_e}$''.
From the amplitude factors displayed in Figure~\ref{f1-1}, we see that while
\beeq
\mathrm{Amp} (\nu_\mu \rightarrow \nu_e) = \sum_i U_{\mu i}^* e^{-im^2_i \frac{L}{2E}} U_{ei} ~~ ,
\eeeq
\beeq
\mathrm{Amp} (\overline{\nu_\mu} \rightarrow \overline{\nu_e}) = \sum_i U_{\mu i} e^{-im^2_i \frac{L}{2E}} U_{ei}^*~~ .
\label{eq2.23b}
\eeeq
These expressions hold whether $\overline{\nu_i} = \nu_i$ or not. Thus, in either case, if the CP-violating phase $\delta$ in \Eq{I.5.1} is not zero or $\pi$, so that $U$ is complex, the interference terms in P($\nu_\mu \rightarrow \nu_e$) and P($\overline{\nu_\mu} \rightarrow \overline{\nu_e}$) will differ. As a result, the CP-violating difference P($\nu_\mu \rightarrow \nu_e$) -- P($\overline{\nu_\mu} \rightarrow \overline{\nu_e}$) will be nonzero. Furthermore, the value of this difference will not depend on whether $\overline{\nu_i} = \nu_i$, and this value will be correctly implied by \Eq{2.13}, which holds regardless of whether $\overline{\nu_i} = \nu_i$.
The general expression for P$(\optbar{\nu_\alpha} \rightarrow \optbar{\nu_\beta})$, \Eq{2.13}, simplifies considerably in some important special cases. One such case is the simplified world in which there are only two charged leptons, say $e$ and $\mu$, two corresponding neutrinos of definite flavor, $\nu_e$ and $\nu_\mu$, and two neutrino mass eigenstates, $\nu_1$ and $\nu_2$, that make up $\nu_e$ and $\nu_\mu$. From our earlier analysis of the number of parameters in a mixing matrix, we know that the $2 \times 2$ unitary mixing matrix $U$ for this two-flavor world may contain one mixing angle and one complex phase factor. It may easily be shown that $U$ may be written in the form
\beeq
U \equiv \left[ \begin{array}{cc}
U_{e1} & U_{e2} \\ U_{\mu 1} & U_{\mu 2}
\end{array} \right] =
\left[ \begin{array}{cc}
\phantom{-}\cos\theta & \sin\theta \\ -\sin\theta & \cos\theta
\end{array} \right] \times
\left[ \begin{array}{cc}
e^{i\xi /2} & 0 \\ 0 & 1
\end{array} \right] ~~,
\label{eqI.7}
\eeeq
where $\theta$ is the mixing angle and $\xi$ is the phase.
With $\Delta m^2_{21} \equiv \Delta m^2$ the sole (mass)$^2$ splitting in the problem, we find from the $U$ of \Eq{I.7} and the general expression of \Eq{2.13} that
\beeq
\mathrm{P}(\optbar{\nu_e}\rightarrow\optbar{\nu_\mu}) = \mathrm{P}(\optbar{\nu_\mu}\rightarrow\optbar{\nu_e}) = \sin^2 2\theta \sin^2(\Delta m^2 L/4E) ~~ ,
\label{eq2.24}
\eeeq
and that
\beeq
\mathrm{P}(\optbar{\nu_e}\rightarrow\optbar{\nu_e}) = \mathrm{P}(\optbar{\nu_\mu}\rightarrow\optbar{\nu_\mu}) = 1 - \sin^2 2\theta \sin^2(\Delta m^2 L/4E) ~~ .
\label{eq2.25}
\eeeq
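In practical units the phase in Eqs.\ (\ref{eq2.24}) and (\ref{eq2.25}) becomes $\Delta m^2 L/4E = 1.267\,\Delta m^2[\mathrm{eV}^2]\,L[\mathrm{km}]/E[\mathrm{GeV}]$ (the standard $1/4\hbar c$ conversion). A minimal numerical sketch, checking probability conservation and the location of the first oscillation maximum at phase $\pi/2$:

```python
import math

def phase(dm2_eV2, L_km, E_GeV):
    # Dm^2 L / 4E in practical units (1.267 = 1/(4 hbar c)).
    return 1.267 * dm2_eV2 * L_km / E_GeV

def p_trans(sin2_2th, dm2, L, E):        # Eq. (2.24)
    return sin2_2th * math.sin(phase(dm2, L, E))**2

def p_surv(sin2_2th, dm2, L, E):         # Eq. (2.25)
    return 1.0 - p_trans(sin2_2th, dm2, L, E)

# First oscillation maximum: phase = pi/2.
dm2, E = 2.4e-3, 1.0                     # eV^2, GeV (illustrative)
L_max = (math.pi / 2) / (1.267 * dm2 / E)  # ~517 km
```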
As we know, the real world contains (at least) three charged leptons $\ell_\alpha$, three corresponding neutrinos of definite flavor $\nu_\alpha$, and three underlying neutrino mass eigenstates $\nu_i$ that make up the $\nu_\alpha$. Thus, the two-neutrino oscillation formulae of Eqs.\ (\ref{eq2.24}) and (\ref{eq2.25}) do not apply. However, if there are only three flavors, then under certain circumstances, rather similar simple formulae do apply. To see this, we note that the three-neutrino (mass)$^2$ spectrum has been observed to have the form shown in Figure~\ref{f2} [2].
\begin{figure}[!htbp]
\centering
\includegraphics[width=14cm]{SUSSP61_Fig2}
\caption{The three-neutrino (mass)$^2$ spectrum.}
\label{f2}
\end{figure}
The splitting $\Delta m^2_{21}$, which drives the behavior of solar neutrinos, is roughly 30 times smaller than $\Delta m^2_{32} \cong \Delta m^2_{31}$, which drives the behavior of atmospheric neutrinos. (It is not known whether the closely-spaced pair $\nu_1$-$\nu_2$ is at the bottom or the top of the spectrum.) If an experiment is performed with $L/E$ such that $\Delta m^2_{32}\,L/E = {\cal O}(1)$, then $\Delta m^2_{21}\,L/E \ll 1$, and in first approximation, this experiment cannot ``see'' the small splitting $\Delta m^2_{21}$. Neglecting this small splitting in \Eq{2.13}, this equation and the unitarity of $U$ imply that, for $\beta \neq \alpha$,
\beeq
\mathrm{P}(\optbar{\nu_\alpha}\rightarrow\optbar{\nu_\beta}) \cong 4|U_{\alpha 3}U_{\beta 3}|^2 \sin^2(\Delta m^2_{32} L/4E) ~~ .
\label{eq2.26}
\eeeq
Similarly, they imply that, for $\beta = \alpha$,
\beeq
\mathrm{P}(\optbar{\nu_\alpha}\rightarrow\optbar{\nu_\alpha}) \cong 1 - 4|U_{\alpha 3}|^2 (1-|U_{\alpha 3}|^2) \sin^2(\Delta m^2_{32} L/4E) ~~ .
\label{eq2.27}
\eeeq
We see that, by measuring these simple oscillation probabilities, experiments with $\Delta m^2_{32}\, L/4E = {\cal O}(1)$ can determine the flavor content of the isolated member of the spectrum, $\nu_3$.
\subsection{Neutrino oscillation in matter}\label{s2.2}
Inside matter, the coherent forward scattering of neutrinos from the electrons, protons, and neutrons that make up the matter leads to neutrino effective masses and mixing angles that differ from their vacuum counterparts. As a result, inside matter, the probabilities for neutrino oscillations differ from their vacuum counterparts.
The Standard-Model interactions between neutrinos and other particles do not change flavor. Thus, barring hypothetical non-Standard-Model flavor-changing interactions, the observation of neutrino flavor change implies neutrino mass and leptonic mixing, even if the observation involves neutrinos passing through matter.
Neutrino propagation in matter may be conveniently treated via the laboratory-frame Schr\"{o}dinger time-evolution equation
\beeq
i \frac{\partial}{\partial t} \Psi(t) = {\cal H} \Psi(t) ~~ .
\label{eq2.28}
\eeeq
Here, $t$ is the time, and $\Psi(t)$ is a multi-component neutrino wave function. Its $\alpha$ component, $\Psi_\alpha (t)$, is the amplitude for the neutrino to have flavor $\alpha$ at time $t$. If there are $N$ flavors, the Hamiltonian ${\cal H}$ is an $N \times N$ matrix in flavor space. In matter, this matrix includes interaction energies arising from neutrino-matter interactions mediated by $W$ or $Z$ exchange. According to the Standard Model, the $Z$-mediated interactions neither change neutrino flavor nor depend on the flavor. Thus, they add to ${\cal H}$ a term proportional to the identity matrix. Such a term shifts all the eigenvalues of ${\cal H}$ by a common amount, leaving the splittings between the eigenvalues unchanged. Now, as we have seen when discussing neutrino flavor oscillation in vacuum, the amplitude for oscillation depends only on the {\em relative} phases of the different neutrino eigenstates. This means that it depends only on the splittings between the eigenvalues, and will not be affected by an interaction that merely shifts all the eigenvalues by the same amount. Thus, if our purpose is to treat neutrino flavor oscillation, we may omit the $Z$-exchange contribution to ${\cal H}$.
The $W$-exchange contribution is another matter. From the Standard Model, it follows that coherent forward $\nu_e$-electron scattering via the $W$-exchange diagram of Figure~\ref{f3} adds to the $\nu_e$-$\nu_e$ element of ${\cal H}$, ${\cal H}_{\nu_e \nu_e}$, an interaction energy
\beeq
V = \sqrt{2} G_F N_e ~~ .
\label{eq2.29}
\eeeq
\begin{figure}[!htbp]
\centering
\includegraphics[scale=1.00]{SUSSP61_Fig3}
\caption{The $W$-exchange interaction that modifies neutrino flavor oscillation in matter.}
\label{f3}
\end{figure}
Here, $G_F$ is the Fermi coupling constant, and $N_e$ is the number of electrons per unit volume in the matter through which the neutrinos are passing. The Fermi constant appears in $V$ because it is a measure of the amplitude for the diagram in Figure~\ref{f3}, and the density $N_e$ appears because the coherent scattering amplitude will obviously depend on how many electrons are present to contribute. The Standard Model tells us that for antineutrinos in matter, $V$ is replaced by $-V$.
Since $\nu_e$ is the only neutrino flavor that couples to an electron and a $W$, $W$-mediated $\nu - e$ scattering affects {\em only} the $\nu_e$-$\nu_e$ element of ${\cal H}$. Thus, its contribution to ${\cal H}$ is not proportional to the identity matrix, and it does affect neutrino flavor oscillation.
Neutrino flavor change in matter is illustrated by the case where there are only two significant flavors, say $\nu_e$ and $\nu_\mu$, and, correspondingly, two significant mass eigenstates. The Hamiltonian ${\cal H}$ in the Schr\"{o}dinger equation, \Eq{2.28}, is then a $2\times 2$ matrix in $\nu_e$-$\nu_\mu$ space. Taking into account $\nu_e$-$e$ scattering via $W$ exchange, but omitting some irrelevant contributions that are proportional to the identity matrix, we readily find [1] that
\beeq
{\cal H} = \frac{\Delta m^2_M}{4E} \left[ \begin{array}{cc}
-\cos 2\theta_M & \sin 2\theta_M \\
\phantom{-}\sin 2\theta_M & \cos 2\theta_M
\end{array} \right] ~~ .
\label{eq2.30}
\eeeq
Here, $E$ is the energy of the neutrinos, and $\Delta m^2_M$ and $\theta_M$ are, respectively, the effective (mass)$^2$ splitting and the effective mixing angle in matter. These effective quantities are related to their vacuum counterparts, $\Delta m^2$ and $\theta$, by
\beeq
\Delta m^2_M = \Delta m^2 \sqrt{\sin^2 2\theta + (\cos 2\theta - x_\nu)^2}
\label{eq2.31}
\eeeq
and
\beeq
\sin^2 2\theta_M = \frac{\sin^2 2\theta}{\sin^2 2\theta + (\cos 2\theta - x_\nu)^2} ~~ .
\label{eq2.32}
\eeeq
In these expressions,
\beeq
x_\nu \equiv \frac{2\sqrt{2} G_F N_e E}{\Delta m^2}
\label{eq2.33}
\eeeq
is a measure of the importance of the effects of matter. In vacuum, $N_e$ and consequently $x_\nu$ vanish, and, as confirmed by Eqs.\ (\ref{eq2.31}) and (\ref{eq2.32}), $\Delta m^2_M$ and $\theta_M$ revert to their vacuum values, $\Delta m^2$ and $\theta$, respectively.
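Eqs.\ (\ref{eq2.31})--(\ref{eq2.33}) can be checked directly. The sketch below (which assumes a first-quadrant vacuum angle, $\cos 2\theta > 0$, when taking the square root) verifies that $x_\nu = 0$ recovers the vacuum parameters and that at the resonance $x_\nu = \cos 2\theta$ the effective mixing becomes maximal, $\sin^2 2\theta_M = 1$:

```python
import math

def matter_params(dm2, sin2_2th, x):
    """(Dm^2_M, sin^2 2theta_M) from Eqs. (2.31)-(2.32); x is x_nu of Eq. (2.33).
    Assumes cos 2theta > 0 when inverting sin^2 2theta."""
    cos_2th = math.sqrt(1.0 - sin2_2th)
    root = math.sqrt(sin2_2th + (cos_2th - x)**2)
    return dm2 * root, sin2_2th / root**2

dm2, s2 = 2.4e-3, 0.10                               # illustrative vacuum values
vac = matter_params(dm2, s2, 0.0)                    # x = 0: vacuum limit
res = matter_params(dm2, s2, math.sqrt(1.0 - s2))    # x = cos 2theta: resonance
```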
Imagine an accelerator-generated neutrino beam that travels a distance $L \sim $1000\,km through the earth to a detector. The electron density $N_e$ encountered by this beam will be that of the earth's mantle, and approximately constant. Then $x_\nu,\; \Delta m^2_M ,\; \theta_M,\; $ and ${\cal H}$ will all be $\sim$\,position-independent. From Eqs.\ (\ref{eq2.30}) and (\ref{eq2.28}) and straightforward quantum mechanics, it follows that
\beeq
\mathrm{P}(\nu_e \rightarrow \nu_\mu) = \mathrm{P}(\nu_\mu \rightarrow \nu_e) = \sin^2 2\theta_M \sin^2(\Delta m^2_M L/4E) ~~ .
\label{eqI.4.1}
\eeeq
This is the usual two-neutrino oscillation result, \Eq{2.24}, except that the vacuum parameters $\theta$ and $\Delta m^2$ are replaced by their counterparts in matter, $\theta_M$ and $\Delta m^2_M$. If $N_e \rightarrow 0$, so that $x_\nu \rightarrow 0$, the oscillation probabilities in matter of \Eq{I.4.1} become the vacuum probabilities of \Eq{2.24}, as they must.
The size of the effect of matter may be judged by the size of $x_\nu$. For the illustrative beam that we are considering, the actual three-neutrino vacuum (mass)$^2$ splitting $\Delta m^2$ that will most strongly influence flavor oscillation is probably the large one, $\Delta m^2_{31} \cong \Delta m^2_{32}$. Experimentally, $\Delta m^2_{31} \simeq 2.4 \times 10^{-3}$\,eV$^2$ [8]. For this $\Delta m^2$, we find from \Eq{2.33} that
\beeq
|x_\nu| \simeq E / 12\,\mathrm{GeV} ~~.
\label{eqI.4.2}
\eeeq
Thus, for $E = 0.5$\,GeV, the matter effect is quite small, for $E = 2$\,GeV it is modest, and for $E = 20$\,GeV it is large.
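The estimate of \Eq{I.4.2} can be reproduced from \Eq{2.33} with standard constants. In the sketch below, the density and electron fraction ($\rho \simeq 2.6\,$g/cm$^3$, $Y_e \simeq 0.5$) are illustrative inputs chosen to reproduce the quoted $|x_\nu| \simeq E/12\,$GeV; they are not taken from the text:

```python
import math

def x_nu(E_GeV, dm2_eV2, rho_g_cm3=2.6, Ye=0.5):
    """|x_nu| = 2 sqrt(2) G_F N_e E / Dm^2, Eq. (2.33).
    rho and Ye are illustrative; chosen to match |x_nu| ~ E / 12 GeV."""
    GF = 1.1664e-5                         # Fermi constant, GeV^-2
    hbarc = 1.97327e-14                    # GeV cm
    ne_cm3 = rho_g_cm3 * Ye * 6.0221e23    # electrons per cm^3
    ne = ne_cm3 * hbarc**3                 # electron density in GeV^3
    dm2 = dm2_eV2 * 1e-18                  # GeV^2
    return 2.0 * math.sqrt(2.0) * GF * ne * E_GeV / dm2

# For Dm^2 = 2.4e-3 eV^2: x_nu(12 GeV) ~ 1, x_nu(0.5 GeV) << 1.
```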
As already mentioned, when antineutrinos, rather than neutrinos, propagate through matter, the interaction energy $V$ is replaced by $-V$. It follows readily that, as a result, $x_\nu$, \Eq{2.33}, is replaced in Eqs.~(\ref{eq2.31})-(\ref{eq2.32}) by
\beeq
x_{\bar{\nu}} \equiv -x_\nu ~~ .
\label{eq2.34}
\eeeq
We see that this change has the consequence that, within matter, the effective (mass)$^2$ splitting and the effective mixing angle for antineutrinos are different than they are for neutrinos. As a result, within matter the flavor oscillation of an antineutrino beam will differ from that of a neutrino beam. The two-flavor Hamiltonian ${\cal H}$ is still given by \Eq{2.30}, and the two-flavor oscillation probability in matter of constant density is still given by \Eq{I.4.1}, but the quantities $\Delta m^2_M$ and $\theta_M$ have different values than they did in the neutrino case.
Earlier, we raised the possibility that neutrinos are their own antiparticles. That is, we imagined that, {\em for a given momentum $\vec{p}$ and helicity $h$,} perhaps each neutrino mass eigenstate $\nu_i$ is identical to its antiparticle: $\nu_i(\vec{p},h) = \overline{\nu_i}(\vec{p},h)$. Suppose that this is indeed the case. Will the interaction with matter still cause the flavor oscillation of an ``antineutrino'' beam to differ from that of a ``neutrino'' beam within matter? In practical terms the answer is ``yes''. The reason is that, in practice, the ``neutrino'' and ``antineutrino'' beams that we study are never of the same helicity. A ``neutrino'' is the particle ``$\nu$'' produced, for example, in the decay $W^+ \rightarrow e^+ + \nu$. As already noted, owing to the chirally left-handed structure of the weak interaction, this ``$\nu$'' will be of left-handed helicity: $h = -1/2$. In contrast, an ``antineutrino'' is the particle ``$\bar{\nu}$'' produced in $W^- \rightarrow e^- + \bar{\nu}$. As already noted, owing to the same structure of the weak interaction, this ``$\bar{\nu}$'' will be of right-handed helicity: $h = +1/2$. Since the weak interaction is not invariant under parity, the interaction in matter of the left-handed ``$\nu$'' and the right-handed ``$\bar{\nu}$'' will be quite different, even if helicity is the only difference between the ``$\nu$'' and the ``$\bar{\nu}$''. Only the first term on the right-hand side of \Eq{2.1} can couple an incoming left-handed beam particle to an electron, while only the second term can couple an incoming right-handed beam particle. These two terms lead to different scattering amplitudes. These amplitudes do not depend on whether the ``$\nu$'' and ``$\bar{\nu}$'' beams differ only in helicity, or in some other way as well.
Future accelerator neutrino experiments hope to study $\nu_\mu \rightarrow \nu_e$ and $\overline{\nu_\mu } \rightarrow \overline{\nu_e}$ in matter under conditions where all three of the known neutrino mass eigenstates $\nu_{1, 2, 3}$, or equivalently both of the known splittings $\Delta m^2_{31}$ and $\Delta m^2_{21}$, play significant roles. The oscillation probabilities are then more complicated than the expression of \Eq{I.4.1}. However, since $\alpha \equiv \Delta m^2_{21} / \Delta m^2_{31} \sim 1/30$ [2] and $\sin^2 2\theta_{13} < 0.2$ [9], the probability for $\nu_\mu \rightarrow \nu_e$ in matter is well approximated by [10]
\beeq
P(\nu_\mu \rightarrow \nu_e) \cong \sin^2 2\theta_{13}\,T_1 - \alpha \sin 2\theta_{13}\,T_2 +\alpha \sin 2\theta_{13}\,T_3 + \alpha^2 T_4 ~~,
\label{eq2.40}
\eeeq
where
\beeq
T_1 = \sin^2 \theta_{23} \frac{\sin^2 [(1-x_\nu)\Delta]}{(1-x_\nu)^2} ~~ ,
\label{eq2.41}
\eeeq
\beeq
T_2 = \sin\delta \sin 2\theta_{12} \sin 2\theta_{23} \sin\Delta \frac{\sin(x_\nu\Delta )}{x_\nu} \frac{\sin [(1-x_\nu)\Delta ]}{(1-x_\nu)} ~~ ,
\label{eq2.42}
\eeeq
\beeq
T_3 = \cos\delta \sin 2\theta_{12} \sin 2\theta_{23} \cos\Delta \frac{\sin(x_\nu\Delta)}{x_\nu} \frac{\sin [(1-x_\nu)\Delta ]}{(1-x_\nu)} ~~ ,
\label{eq2.43}
\eeeq
and
\beeq
T_4 = \cos^2 \theta_{23}\sin^2 2\theta_{12} \frac{\sin^2 (x_\nu\Delta)}{x_\nu^2} ~~ .
\label{eq2.44}
\eeeq
In these expressions, $\Delta \equiv \Delta m^2_{31} L/4E$ is the kinematical phase of the oscillation, and $x_\nu$ is the matter-effect quantity defined by \Eq{2.33}, with $\Delta m^2$ now taken to be $\Delta m^2_{31}$. In the appearance probability P($\nu_\mu \rightarrow \nu_e$), the $T_1$ term represents the oscillation due to the splitting $\Delta m^2_{31}$, the $T_4$ term represents the oscillation due to the splitting $\Delta m^2_{21}$, and the $T_2$ and $T_3$ terms are the CP-violating and CP-conserving interference terms, respectively.
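The expansion of Eqs.~(\ref{eq2.40})-(\ref{eq2.44}) is straightforward to evaluate numerically. The sketch below (illustrative parameter values; $x_\nu \neq 0$ is assumed to avoid the removable singularity in $\sin(x_\nu\Delta)/x_\nu$) computes P($\nu_\mu \rightarrow \nu_e$) and its antineutrino counterpart, obtained by flipping the signs of both $x_\nu$ and $\delta$:

```python
import math

def p_mu_to_e(th12, th13, th23, delta, alpha, D, x):
    """P(nu_mu -> nu_e) from Eqs. (2.40)-(2.44); D = Dm^2_31 L/4E, x = x_nu.
    For antineutrinos use x -> -x and delta -> -delta (x != 0 assumed)."""
    s2_12, s2_23 = math.sin(2*th12), math.sin(2*th23)
    f = math.sin((1.0 - x) * D) / (1.0 - x)
    g = math.sin(x * D) / x
    T1 = math.sin(th23)**2 * f**2                                 # Eq. (2.41)
    T2 = math.sin(delta) * s2_12 * s2_23 * math.sin(D) * g * f    # Eq. (2.42)
    T3 = math.cos(delta) * s2_12 * s2_23 * math.cos(D) * g * f    # Eq. (2.43)
    T4 = math.cos(th23)**2 * s2_12**2 * g**2                      # Eq. (2.44)
    s2_13 = math.sin(2*th13)
    return s2_13**2*T1 - alpha*s2_13*T2 + alpha*s2_13*T3 + alpha**2*T4

# Illustrative values: D = pi/2 (first maximum), modest matter effect x = 0.1.
args = dict(th12=0.59, th13=0.15, th23=0.785, alpha=1.0/30.0, D=math.pi/2)
p_nu  = p_mu_to_e(delta=1.2,  x=+0.1, **args)
p_bar = p_mu_to_e(delta=-1.2, x=-0.1, **args)   # CP + matter asymmetry
```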
The probability for the corresponding antineutrino oscillation, P($\overline{\nu_\mu } \rightarrow \overline{\nu_e}$), is the same as the probability P($\nu_\mu \rightarrow \nu_e$) given by
Eqs.~(\ref{eq2.40})-(\ref{eq2.44}), but with $x_\nu$ replaced by $x_{\bar{\nu}} = -x_\nu$ and $\sin \delta$ by $-\sin \delta$: both the matter effect and CP violation lead to a difference
between the $\nu_\mu \rightarrow \nu_e$ and $\overline{\nu_\mu } \rightarrow \overline{\nu_e}$ oscillation probabilities. In view of the dependence of $x_\nu$ on $\Delta m^2_{31}$, and in particular on the sign of $\Delta m^2_{31}$, the matter effect can reveal whether the neutrino mass spectrum has the closely-spaced $\nu_1$-$\nu_2$ pair at the bottom or the top (see Figure~\ref{f2}).
However, to determine the nature of the spectrum, and to establish the presence of CP violation, it obviously will be necessary to disentangle the matter effect from CP violation in the neutrino-antineutrino oscillation probability difference that is actually observed. To this end, complementary measurements will be extremely important. These can take
advantage of the differing dependences on the matter effect and on CP violation in
P($\nu_\mu \rightarrow \nu_e$) and P($\overline{\nu_\mu } \rightarrow \overline{\nu_e}$).
\section*{Acknowledgments}
It is a pleasure to thank H. Lipkin, S. Parke, and L. Stodolsky for useful conversations relevant to the physics of these lectures. I am grateful to Susan Kayser for her crucial role in the preparation of the manuscript.
\section*{References}
\frenchspacing
\begin{small}
\reference{1.} Kayser, B., ``Neutrino Physics'', in the {\it Proceedings of the SLAC Summer Institute of 2004}, eConf {\bf C040802}, L004 (2004); hep-ph/0506165.
\reference{2.} Kayser, B., ``Neutrino Mass, Mixing, and Flavor Change'', to appear in the 2008 edition of the {\it Review of Particle Physics}, by The Particle Data Group. This reference includes the phenomenology of neutrino oscillation and a summary of what we have learned about the neutrinos so far from experiment.
\reference{3.} This matrix is sometimes referred to as the Maki-Nakagawa-Sakata matrix, or as the Pontecorvo-Maki-Nakagawa-Sakata matrix, in recognition of the pioneering contributions of these scientists to the physics of mixing and oscillation. \\
See Maki, Z., Nakagawa, M., and Sakata, S., {\it Prog.\ Theor.\ Phys.\ } {\bf 28}, 870 (1962); \\
Pontecorvo, B., {\it Zh.\ Eksp.\ Teor.\ Fiz.\ }{\bf 53}, 1717 (1967) [{\it Sov.\ Phys.\ JETP\ }{\bf 26}, 984 (1968)].
\reference{4.} For a discussion of the possibility of a nonunitary leptonic mixing matrix,
see Antusch, S. {\it et al.}, {\it JHEP\ } {\bf 0610}, 084, (2006).
\reference{5.} Kayser, B., ``CP Effects When Neutrinos Are Their Own Antiparticles'', in {\it CP Violation}, ed. C. Jarlskog (World Scientific, Singapore, 1989) p. 334.
\reference{6.} Lipkin, H., \plb{642}{06}{366}.
\reference{7.} We thank S. Petcov for a long-ago conversation on how to obtain this result in a simple way.
\reference{8.} The MINOS Collaboration (Michael, D. {\it et al.}), \prl {97}{06}{191801}, and talks by MINOS collaboration members updating their results.
\reference{9.} The CHOOZ Collaboration (Apollonio, M. {\it et al.}), {\it Eur.\ Phys.\ J.\ }{\bf C27}, 331 (2003); \\
Fogli, G. {\it et al.}, {\it Prog.\ Part.\ Nucl.\ Phys.\ }{\bf 57}, 742 (2006).
\reference{10.} Cervera, A. {\it et al.}, {\it Nucl.\ Phys.\ } {\bf B579}, 17 (2000); \\
Freund, M., \prd{64}{01}{053003}.
\end{small}
\end{document}
\section{Introduction}\label{sect:introduction}
The derivation of non-local form factors in the semiclassical theory
of massive matter fields on a classical curved background has several
interesting applications. The calculation in the higher derivative
vacuum sector \cite{apco,fervi} (see also \cite{Codello:2012kq})
supports the idea of the gravitational decoupling which is relevant for the
graceful exit from the general version of anomaly induced inflation
\cite{susykey,Shocom,asta}. Indeed, this mechanism is not sufficient
for deriving the Starobinsky inflation \cite{star,star83} from
quantum corrections, but one can hope that a more detailed study of
the gravitational decoupling may be useful for constructing the
corresponding field theoretical model \cite{StabInstab}.
An important application of the effective approach to quantum field
theory in curved spacetime is the possible running of cosmological
and Newton's constants at low energies, such as the typical energy
scale in the late cosmology (which we shall call IR). If such a
running takes place, there could be measurable implications in both
cosmology (see e.g.\ \cite{CC-Gruni}) and astrophysics (see, e.g.,
\cite{RotCurves}). Unfortunately, from the quantum field theory
side, there is no way to consistently calculate such a running. The
reason is that the existing methods of quantum calculations in curved
space are essentially based on the expansion of all quantities around
the flat space-time. For instance, the normal coordinate expansion
and the Schwinger-DeWitt technique are based on expansions in powers
of the curvature tensor and its covariant derivatives. Such an
expansion is not sufficient to establish the physical running of the
cosmological and Newton's constants. An observation of such a
running requires at least the expansion around space-times of
constant nonzero curvature \cite{DCCrun}, which is not available,
except some special cases \cite{Verd}, which are not sufficient to
observe the decoupling. In the case when a variation with respect to the scale of
the cosmological and Newton's constants does not take place, there
would be a discrepancy between the well established running of these
constants in the Minimal Subtraction ($\overline{\rm MS}$) renormalization scheme
\cite{nelpan82,buch84} (see \cite{book} for an introduction) and
the absence of the non-local form factors for the corresponding terms
in the effective action.
The reason why there are no non-local form factors in the
zero and second-derivative sectors of the gravitational action
can be easily seen from the comparison with the fourth-derivative
terms \cite{apco}. The non-local form factors can emerge in the
square of the Weyl tensor
$C_{\al\be\rho\si}\,k_1\big(\textstyle{\frac{\Box}{m^2}}\big)
C^{\al\be\rho\si}$, or
in the square of the scalar curvature
$R\,k_2\big(\textstyle{\frac{\Box}{m^2}}\big)R$.
At the same time it is unclear how to introduce such a form factor
for the cosmological constant, because the d'Alembert operator
acting on a constant gives zero. Furthermore, if a non-local form
factor is inserted into the Einstein-Hilbert action, a function of
$\Box$ acting on $R$ is equivalent to a sum of the series of surface
terms. The simplest solution which was proposed in \cite{apco} was
to replace the cosmological constant by the non-local expressions
\beq
R_{\al\be}\,\frac{1}{\Box^2}\,R^{\al\be}
\qquad
\mbox{and}
\qquad
R\,\frac{1}{\Box^2}\,R,
\label{CCreplace}
\eeq
which have the same global scaling as a constant. Similar
replacement can be done for the Einstein-Hilbert Lagrangian
by using the terms
\beq
R_{\al\be}\,\frac{1}{\Box}\,R_{\al\be}
\qquad
\mbox{and}
\qquad
R\,\frac{1}{\Box}\,R \,.
\label{EHreplace}
\eeq
The problem with this approach is that the semiclassical form factors
cannot be derived for the terms \eqref{CCreplace} and \eqref{EHreplace}
within the existing field theoretical methods. Thus, the interesting
cosmological applications of the models based on \eqref{CCreplace} and
\eqref{EHreplace} which were considered in \cite{Maggiore} are as
phenomenological as the non-covariant running which is considered
in \cite{CC-Gruni,DCCrun}, and the unique advantage, from the
conceptual point of view, is that those are covariant expressions,
which are easier to work with. In fact these structures are becoming increasingly
of interest even in the context of quantum gravity, in which they might play the role
of template to reconstruct the effective action \cite{Knorr:2018kog}.
Recently an alternative approach to the physical running of the
inverse Newton's constant has been initiated in \cite{Ribeiro:2018pyo}
which is based on \cite{Codello:2012kq}.
The consideration was performed for the two dimensional ($2D$) case
and is related to some older works by Avramidi and collaborators \cite{Avramidi:2007zz,Avramidi:1997hy}.
The idea is to derive the non-local form factors for the Einstein-Hilbert
term, regardless of the fact that the corresponding structures will be
total derivatives.\footnote{Similar structures have already been explored in the quantum gravity literature \cite{Hamber:2010an,Hamber:2011kc}.}
There is a serious justification of this approach,
but we postpone this part of the discussions for the last section. In what
follows we generalize the calculations of \cite{Ribeiro:2018pyo} to
four dimensions ($4D$) and perform full consideration of the
non-local terms. For the sake of completeness we checked all the
non-local contributions for higher derivative terms, which are well
known from \cite{apco,fervi} and \cite{Codello:2012kq}. One of
the reasons for this is the detailed discussion of the distinctions and
similarities between the form factors for $R$, $\Box R$ and $R^2$
terms. As we know from previous work (see, e.g., the discussion in
\cite{anom2003} with special emphasis on the role of non-local
form factors in massive semiclassical theory), the renormalization of
the surface terms results in the finite non-surface contributions, and
the explicit form of the non-local surface terms derived here makes
our understanding of this relation more detailed.
The outline of the paper is as follows. In Sec.~\ref{sect:non-local-effective-action} we discuss the structure
of the effective action and its renormalization, and construct
the necessary equations to observe the gravitational version of the
Appelquist-Carazzone theorem \cite{AC}
for the Newton constant in $4D$
curved space. In Secs. \ref{sect:scalar}, \ref{sect:dirac} and
\ref{sect:proca} we give explicit formulas for nonminimally coupled
scalars, Dirac spinors and Proca fields respectively. Finally, in Sec.~\ref{sect:conclusions}
we draw our conclusions, present a general
analysis of the results and comment on possible physical
interpretations and the prospects of further developments.
The two Appendices are included to further clarify the main text:
in Appendix \ref{sect:heat-kernel} we briefly present the heat
kernel method which is used for the computations, while in Appendix
\ref{sect:uv-structure} we survey the ultraviolet structure of the
effective action and its physical implications.
\section{Nonlocal effective action}\label{sect:non-local-effective-action}
We are interested in the contribution to the vacuum effective action
of a set of free massive matter fields which includes $n_{\rm s}$
nonminimally coupled scalars, $n_{\rm f}$ Dirac fermions and
$n_{\rm p}$ Proca fields. The integration of the free matter field
fluctuations on a curved background leads to the expression
\beq
\Ga[g] &=& n_{\rm s} \Gamma_{\rm s}[g]
+ n_{\rm f} \Gamma_{\rm f}[g] + n_{\rm p} \Gamma_{\rm p}[g]\,,
\eeq
in which $\Gamma_{\rm s}[g]$, $\Gamma_{\rm f}[g]$ and
$\Gamma_{\rm p}[g]$ denote the individual contributions for a
single field of each matter species. The individual contributions
are\footnote{Starting from this section we assume the Wick rotation
and all notations are Euclidean. The positively defined Laplacian
operator $\D$ is defined in Appendix A and
$R_{\mu\nu}=\pa_\la \Ga^\la_{\mu\nu}+\dots$. At the same time
in all physical discussions we use pseudo-Euclidean notations.}
\begin{equation}
\begin{split}
&
\Gamma_{\rm s}[g]
= \frac{1}{2} \Tr_{\rm s} \ln \left( \D + \xi R +m_{\rm s}^2\right) ,
\\
&
\Gamma_{\rm f}[g] = -\Tr_{\rm f} \ln \left(\slashed D+m_{\rm f}\right) ,
\\
&
\Gamma_{\rm p}[g] = \frac{1}{2}\Tr_{\rm v} \ln\left( \delta_\mu^\nu \D
+\nabla_\mu\nabla^\nu + R_\mu{}^\nu + \delta_\mu^\nu m_{\rm v}^2\right) ,
\end{split}
\end{equation}
in which each trace is taken over the appropriate degrees of freedom
and $\D$ is defined as positive in Euclidean space. A little work is needed
to cast all functional traces in the same form. Squaring the Dirac operator
we arrive at the expression
\begin{equation}
\begin{split}
\Gamma_{\rm f}[g] &= -\frac{1}{2} \Tr_{\rm f}
\ln \left( \D + \frac{R}{4} + m_{\rm f}^2\right)\,.
\end{split}
\end{equation}
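For the reader's convenience, we recall the standard squaring argument behind the last formula. Since conjugation with $\gamma_5$ flips the sign of the Dirac operator, $\Tr_{\rm f} \ln (\slashed D + m_{\rm f}) = \Tr_{\rm f} \ln(-\slashed D + m_{\rm f})$, and therefore
\begin{equation}
\Tr_{\rm f} \ln \left(\slashed D+m_{\rm f}\right)
= \frac{1}{2} \Tr_{\rm f} \ln \left[\left(m_{\rm f}-\slashed D\right)
\left(m_{\rm f}+\slashed D\right)\right]
= \frac{1}{2} \Tr_{\rm f}
\ln \left( \D + \frac{R}{4} + m_{\rm f}^2\right),
\end{equation}
where in the last step we used the Lichnerowicz identity $-\slashed D^2 = \D + \frac{R}{4}$, valid in our Euclidean conventions.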
When dealing with the Proca operator we need to take care of the
longitudinal
modes, which can be done in at least two equivalent ways \cite{bavi85,BuGui}
and results in
\begin{equation}
\begin{split}
\Gamma_{\rm p}[g] &= \frac{1}{2} \Tr_{\rm v} \ln\left(\delta_\mu^\nu \D+R_\mu{}^\nu+ \delta_\mu^\nu m_{\rm v}^2\right)
- \frac{1}{2} \Tr_{\rm s} \ln\left(\D +m_{\rm v}^2\right)\,.
\end{split}
\end{equation}
Now each trace acts on the logarithm of an operator of Laplace-type
\beq
\Ga[g] &=& \frac{1}{2} \Tr \ln \left(\D +E +m^2\right)
\eeq
for an appropriate endomorphism $E$ acting on the field's bundle. A standard way to compute traces of Laplace-type operators
is to use the heat kernel. We can represent the above trace as an integral over the heat kernel proper time $s$,
\begin{equation}
\begin{split} \label{eq:effective-action-divergent}
\Ga[g] &= -\frac{1}{2} \tr \int_0^\infty \frac{{\rm d}s}{s} \int{\rm d}^4x\sqrt{g} ~ {\rm e}^{-sm^2} {\cal H}(s;x,x)\,,
\end{split}
\end{equation}
in which we have also separated the original trace into an integration
over spacetime and a trace over the internal indices, and introduced the local
heat kernel ${\cal H}(s;x,x')$ (see Appendix \ref{sect:heat-kernel} for a brief
explanation regarding the heat kernel technique).
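For completeness, we note that this proper-time representation follows from applying the Frullani integral
\begin{equation}
\ln \frac{\lambda}{\lambda_0}
\,=\, -\int_0^\infty \frac{{\rm d}s}{s}
\left({\rm e}^{-s\lambda} - {\rm e}^{-s\lambda_0}\right)
\end{equation}
to each eigenvalue $\lambda$ of the operator $\D + E + m^2$. The reference eigenvalue $\lambda_0$ produces a field-independent constant, which is omitted in \eqref{eq:effective-action-divergent}.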
The effective action \eqref{eq:effective-action-divergent} has ultraviolet
divergences, and a simple way to regulate them is through dimensional
regularization \cite{bro-cass}.
For this purpose we continue
the leading power $s^{-\frac{d}{2}}$ of the heat kernel
to a generic number $d$ of dimensions,
and introduce both a reference scale $\mu$ to preserve the mass dimension of all quantities
when leaving four dimensions and a small parameter $\epsilon=4-d$.
The result of this substitution is the regularized effective action
\begin{equation}
\begin{split} \label{eq:effective-action-regularized}
\Ga[g] &= -\frac{\mu^{\epsilon}}{2}
\tr \int_0^\infty \frac{{\rm d}s}{s}
\int{\rm d}^4x \sqrt{g}~ {\rm e}^{-sm^2} {\cal H}(s;x,x).
\end{split}
\end{equation}
Since all fields are massive the above effective action has no
infrared divergences, thanks to the exponential damping factor
caused by the mass for large values of $s$. However, there are
ultraviolet divergences which appear as inverse powers of
$\epsilon$ and require renormalization. We follow the
standard practice of subtracting poles of the parameter
$\bar{\epsilon}$, which is defined as
\begin{equation}
\begin{split}
\frac{1}{\bar{\epsilon}}
&
= \frac{1}{\ep} + \frac{1}{2}\ln \left(\frac{4\pi\mu^2}{m^2}\right)
- \frac{\ga}{2}
\end{split}
\end{equation}
(here $\ga$ is Euler's constant),
instead of simply subtracting the $\epsilon$ poles, exploiting the freedom in the
choice of the renormalization scheme.
In the process of regularization and renormalization it is often
convenient to deal with dimensionless quantities. Keeping in mind
that at the moment the energy scales at our disposal are the Laplacian
$\Delta_g$ and the mass $m^2$, we find it convenient to introduce the
following dimensionless operators
\begin{equation}
\label{eq:dimensionless-operators}
z = \frac{\D}{m^2}\,,
\qquad
a = \sqrt{\frac{4z}{4+z}}\,,
\qquad
Y = 1-\frac{1}{a} \ln\left|{\frac{1+a/2}{1-a/2}}\right|\,.
\end{equation}
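For later use, it is convenient to record the asymptotic behavior of these quantities, which can be verified directly from the definitions,
\begin{equation}
a \simeq \sqrt{z}\,, \quad Y \simeq -\frac{z}{12}
\quad {\rm for} \quad z \ll 1\,;
\qquad
a \to 2\,, \quad Y \simeq 1-\frac{1}{2} \ln z
\quad {\rm for} \quad z \gg 1\,.
\end{equation}
In particular, the combination $a\left(1-Y\right)$ approaches $\ln z$ in the ultraviolet limit, a fact that is used below.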
With the above definitions we have all the ingredients to discuss the
form that the effective action can take up to the second order in a
curvature expansion. We have that to this order the most general form
can be narrowed down to the sum of a local and a non-local part
\begin{equation}
\begin{split} \label{eq:effective-action-full}
\Ga[g] &=
\Ga_{\rm loc}[g]
+ \frac{m^2}{2(4\pi)^2}\int {\rm d}^4 x \sqrt{g}\,
B(z) R
\\
&
+ \frac{1}{2(4\pi)^2}\int {\rm d}^4 x \sqrt{g}\, \Bigl\{
C^{\mu\nu\alpha\beta} \, C_1(z) \, C_{\mu\nu\alpha\beta}
+ R \, C_2(z) \, R
\Bigr\}\,,
\end{split}
\end{equation}
in which $C_{\mu\nu\rho\theta}$ is the four dimensional Weyl tensor.
Since the divergences are local expressions, all dimensional poles are
contained in the local part of effective action $\Ga_{\rm loc}[g]$.
The renormalization can be performed through the introduction of
appropriate counter terms and generically results in a renormalized
action of the form
\begin{equation}
\begin{split}
\label{eq:effective-action-local-renormalized}
S_{\rm ren}[g]
&= \int {\rm d}^4 x \sqrt{g}\,\left\{b_0 + b_1 R + a_1 C^2 +a_2 {\cal E}_4
+a_3 \Box R + a_4 R^2\right\}\,,
\end{split}
\end{equation}
in which $\,{\cal E}_4$ is the operator associated with the Euler
characteristic, which is the Gauss-Bonnet topological term in $d=4$.
The renormalized action features the couplings that have to be
experimentally determined in order for the theory to be predictive.
The couplings include the cosmological constant $\Lambda$ and
the Newton's constant $G$ through the relations $b_0=2\Lambda G^{-1}$
and $b_1=-G^{-1}$.
The minimal subtraction ($\overline{\rm MS}$) procedure induces a
running of all the couplings which is encoded in beta functions that we
denote as $\beta^{\overline{\rm MS}}_g$ in which $g$ is any of the
couplings of \eqref{eq:effective-action-local-renormalized}. In what
follows we formulate the beta functions for the parameters $b_0$ and
$b_1$, instead of $\Lambda$ and $G$.
The one-loop renormalization group flow induced by the
minimal subtraction scheme beta functions
of the couplings of \eqref{eq:effective-action-local-renormalized}
has been known for a long time for all the field types listed in this
section. In this work we concentrate instead on the non-local
contributions of the effective action. In \eqref{eq:effective-action-full}
we have introduced three new covariant functions $B(z)$, $C_1(z)$
and $C_2(z)$ of the rescaled Laplacian $z$. These functions are known
as form factors of the effective action and represent a true physical
prediction which comes from the formalism: in fact one can imagine
picking a specific observable -- either from cosmology or from particle
physics -- and computing it in terms of the form factors themselves
\cite{Donoghue:2017pgk}. A simple way to understand the physical
consequences of the effective action, which is related to the general
concept of the renormalization group, is to use the form factors to
construct new non-local beta functions which are sensitive to the
presence of the mass scale $m^2$.
Let us first recall that the non-local form factors of the heat kernel of Appendix \ref{sect:heat-kernel},
and consequently the non-local contributions to the effective action \eqref{eq:effective-action-full},
are obtained for asymptotically flat Euclidean spacetimes in which curvatures are small
(schematically $\left|\nabla^2 {\cal R}\right| \gg \left|{\cal R}^2\right|$ for any curvature tensor ${\cal R}$) \cite{bavi90}.
In practice, the asymptotic flatness offers a special reference frame which can be used to construct meaningful
Fourier transformations and in which the expansion in curvatures can be related to
the expansion in fluctuations of the metric. In fact, this is precisely the frame in which the form factors are computed
in \cite{apco,fervi,Codello:2012kq}, even though the final expressions are always presented in a manifestly covariant form.
In short, this implies that the Laplace operator $\Delta_g$ is in one-to-one correspondence with
the square $q^2$ of a momentum $q_\mu$ of the asymptotic frame upon Fourier transformation.
This representation is especially useful for the renormalization group applications,
where one has to take derivatives with respect to the scale parameter.
The straightforward way to derive the beta functions is to subtract
the divergences at the scale $q=\left|q_\mu\right|$.
For convenience, let us define the
dimensionless scale $\hat{q}= q/m$ which is simply $q$ in units of the mass; by
definition after the Fourier transform $\hat{q}$ is related to
$z$ as $\hat{q}^2=z$ and the renormalization group flow is
parametrized by
\begin{equation}
\begin{split}
q\frac{\partial}{\partial q}
~ = ~ \hat{q} \frac{\partial}{\partial \hat{q}}
~ = ~ 2 z \frac{\partial}{\partial z}\,.
\end{split}
\end{equation}
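The last equality is a simple consequence of the chain rule: since $z = \hat{q}^2$, we have
\begin{equation}
\hat{q}\, \frac{\partial}{\partial \hat{q}}
= \hat{q}\, \frac{\partial z}{\partial \hat{q}}\, \frac{\partial}{\partial z}
= 2 \hat{q}^2\, \frac{\partial}{\partial z}
= 2 z\, \frac{\partial}{\partial z}\,.
\end{equation}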
Let us begin by discussing the renormalization group flow of the
terms that are quadratic in the curvatures which has been studied
in detail in \cite{apco,fervi}. A simple inspection suggests the
non-local generalization of the beta function of $a_1$
\begin{equation}
\begin{split}
\beta_{a_1}
~=~
2 z \,\frac{\partial}{\partial z}\,
\left[\frac{1}{2(4\pi)^2} C_1(z)\right]
~ = ~
\frac{z}{(4\pi)^2} C'_1(z)\,,
\end{split}
\end{equation}
in which we indicate the derivative with a prime. The same can
be done for the coupling $a_4$
\begin{equation}
\begin{split}
\beta_{a_4} ~= ~ \frac{z}{(4\pi)^2} C'_2(z)\,.
\end{split}
\end{equation}
In practice the form factors $C_1(z)$ and $C_2(z)$ play the role of
non-local scale-dependent generalizations of the couplings. Since
our heat kernel methods work on spaces that are asymptotically flat,
we do not have enough information to compute the running of the
topological term in this context (although it is still possible to
complement this result with standard Seeley-DeWitt methods).
Now let us turn our attention to the couplings of the terms that are
linear in the curvature $R$. On the one hand we have that the
renormalized action features two couplings -- $b_1$ and $a_3$ --
but on the other hand there is only a single form factor $B(z)$
acting on $R$ in \eqref{eq:effective-action-full}. Naively we are
tempted to define a master beta function
\begin{equation}\label{eq:def-psi}
\begin{split}
\Psi ~ = ~ \frac{1}{(4\pi)^2}\, z\,\partial_z \left[\frac{B(z)}{z}\right],
\end{split}
\end{equation}
which we denoted with a new symbol to avoid confusion. The running
function $\Psi$ is defined to take into account that in
\eqref{eq:effective-action-full} we are measuring a dimensionful
quantity -- the coefficient of $R$ -- in units of $m^2$ while instead
our rescaling should be done in units of $q^2$, hence the quotient
with $z=q^2/m^2$ that restores the right units. While we will find
it useful to study this object later on, at this stage it is not clear if its
renormalization group flow should be associated to the coupling
$b_1$ or to $a_3$.
Returning to \eqref{eq:effective-action-local-renormalized} it is easy
to see that if $m^2\gg q^2$ the operator $R$ will dominate
over the operator $\Box R$, and conversely if $m^2\ll q^2$ the operator
$\Box R$ will dominate over the operator $R$. This implies that in the
high energy limit $z\gg 1$ the function $\Psi$ should encode information
of $\beta_{a_3}$, while in the opposite limit $z \sim 0$ the function
$\Psi$ should encode information of $\beta_{b_1}$. This property is
discussed in more detail later. However, $\beta_{b_1}$ and $\beta_{a_3}$
have well known ultraviolet limits which we would like to preserve,
associated to $\overline{\rm MS}$ as we will also see later.
We find that the best solution is to define the following beta functions
\begin{equation}
\begin{split}
\label{eq:definitions-running}
\beta_{a_3} ~ = ~ -\frac{1}{(4\pi)^2} \,z\,\partial_z
\left[\frac{B(z)-B(0)}{z}\right],
\qquad
\beta_{b_1} ~ = ~ \frac{m^2}{(4\pi)^2} \,z\,
\partial_z\big[B(z)-B_\infty(z)\big].
\end{split}
\end{equation}
The first equation is implied by the comparison with
\eqref{eq:effective-action-local-renormalized} and
includes the removal of the constant part that should be attributed to $b_1$.
In the second equation
we subtract the dominating $\Box R$ effect from the running of
$B(z)$ in the form of $B_\infty(z)$
which is the leading logarithmic asymptotic behavior for
$z\simeq\infty$ of $B(z)$ itself. The leftover terms of the
subtraction are thus identified with the running of the operator
$R$ and hence with the coupling $b_1$. In the practical computations,
instead of subtracting the leading logarithm at infinity,
we will subtract the combination
\begin{equation}
\begin{split}
a(1-Y)\simeq \ln (z) \,,
\end{split}
\end{equation}
which is shown to be valid for $z\gg 1$ using the definitions
\eqref{eq:dimensionless-operators}. General features of the definitions
\eqref{eq:definitions-running} and their ultraviolet properties are
discussed in more detail in Appendix \ref{sect:uv-structure}.
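Let us also note that the definitions \eqref{eq:def-psi} and \eqref{eq:definitions-running} are related in a simple way. Using $z\,\partial_z \left[B(0)/z\right] = -B(0)/z$, one finds the exact identity
\begin{equation}
\beta_{a_3} \,=\, -\,\Psi \,-\, \frac{1}{(4\pi)^2}\, \frac{B(0)}{z}\,,
\end{equation}
so that in the ultraviolet limit $z \gg 1$ the two quantities differ only by a sign, $\beta_{a_3} \simeq -\Psi$.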
In the next section we present explicit results for the form factor and
the beta function for the Einstein-Hilbert term. The full set of the
form factors and the expressions for all the non-local beta functions
in the fourth-derivatives sector, and also the corresponding
$\overline{\rm MS}$ beta functions can be found in the
papers \cite{fervi,Codello:2012kq}.
All results will be collected in a mini review to appear shortly
\cite{minireview}.
However, there are still some general
properties that we can discuss here in anticipation. For all the
couplings and all the beta functions we can show that there are
sensible ultraviolet $z\gg 1$ and infrared $z \sim 0$ limits.
Each beta function satisfies the additional property
\begin{equation}
\begin{split}
\beta_g
&= \beta^{\overline{\rm MS}}_g
+ {\cal O}\left(\frac{m^2}{q^2}\right) \qquad {\rm for }
\qquad
q^2\gg m^2,
\end{split}
\end{equation}
where $g$ is any of the couplings.
Furthermore, all the renormalization group runnings are subject to
the effect of decoupling towards the infrared, meaning that when
$q^2$ goes below the $m^2$ threshold the fluctuations stop propagating
and have no effect on the quantum physics anymore. We have that
\begin{equation}
\begin{split}
\beta_g
&= {\cal O}\left(\frac{q^2}{m^2}\right) \qquad {\rm for }\qquad q^2 \ll m^2,
\end{split}
\end{equation}
which is the practical evidence of the Appelquist-Carazzone theorem
in four-dimensional curved space.
Finally, it is interesting to observe the practical implications of
the discussion on the function $\Psi(z)$. As argued above, the
limits $m^2\ll q^2$ and $m^2\gg q^2$ should see the operators
$\Box R$ and $R$ dominating the running $\Psi(z)$ respectively.
For all the matter types that we consider we have the following two
limits
\begin{equation}
\begin{split}
\Psi &= \begin{cases}
- \beta^{\overline{\rm MS}}_{a_3}
& \qquad {\rm for} \quad q^2 \gg m^2
\\
\frac{m^2}{q^2} ~ \beta^{\overline{\rm MS}}_{b_1}
& \qquad {\rm for} \quad q^2 \ll m^2
\end{cases}
\end{split}
\end{equation}
which reflect the previous consideration. Notice that while the
ultraviolet limit can be straightforwardly proven on the basis of
the definitions of $\beta^{\overline{\rm MS}}_{a_3}$ and $\Psi$,
the infrared limit is much less trivial. Notice also that the infrared
limit does not sharply decouple, because it grows with the square
of the mass, but this is to be expected since we are measuring a
massive quantity in units of $q$ for $q\to 0$. To get rid of the
divergence it is sufficient to switch to measuring the same quantity
in units of $m$ in the infrared.
\section{Nonminimally coupled scalar field}\label{sect:scalar}
The effective action of the nonminimally coupled scalar field
can be obtained specifying the endomorphism $E=\xi R$
in the non-local heat kernel expansion and then performing
the integration in $s$ \cite{Martini:2018ska}.
We find the local contributions of the regularized action to be
\begin{equation}
\begin{split}
\Ga_{\rm loc}[g] &=
\frac{1}{2(4\pi)^2} \int {\rm d}^4 x\sqrt{g} \, \Bigl\{
-m^4\Bigl(\frac{1}{\bar{\epsilon}}+\frac{3}{4}\Bigr)
- 2m^2\Bigl( \xi-\frac{1}{6} \Bigr)\frac{1}{\bar{\epsilon}} R
\\ & \qquad\qquad
+\frac{1}{3} \Bigl( \xi-\frac{1}{5} \Bigr)\frac{1}{\bar{\epsilon}} \Box R
-\frac{1}{60\bar{\epsilon}} C_{\mu\nu\rho\theta} C^{\mu\nu\rho\theta}
-\Bigl( \xi-\frac{1}{6} \Bigr)^2 \frac{1}{\bar{\epsilon}} R^2
\Bigr\}\,.
\end{split}
\end{equation}
The minimal subtraction of the divergences of local contributions
induces the following $\overline{\rm MS}$ beta functions
for the terms with up to one curvature
\begin{equation}
\begin{array}{lll}
\beta_{b_0}^{\overline{\rm MS}} = \frac{1}{(4\pi)^2} \frac{m^4}{2} \,,
& \qquad
\beta_{b_1}^{\overline{\rm MS}} = \frac{1}{(4\pi)^2} m^2 \xibar \,,
& \qquad
\beta_{a_3}^{\overline{\rm MS}} = - \frac{1}{(4\pi)^2} \frac{1}{6} \left(\xi-\frac{1}{5}\right)\,.
\end{array}
\end{equation}
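As a quick consistency check, recall that $\xibar = \xi - \frac{1}{6}$, in agreement with the coefficient of the $m^2 R$ pole above. At the conformal value $\xi = \frac{1}{6}$ we thus have
\begin{equation}
\beta_{b_1}^{\overline{\rm MS}}\Big|_{\xi=\frac{1}{6}} = 0\,,
\qquad
\beta_{a_3}^{\overline{\rm MS}}\Big|_{\xi=\frac{1}{6}}
= \frac{1}{(4\pi)^2}\, \frac{1}{180}\,,
\end{equation}
consistent with the vanishing of $\beta_{b_1}$ in the UV for $\xi=\frac{1}{6}$ shown in Fig.~\ref{figure:plots-scalar}.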
The non-local part of the effective action includes the following form factor
\begin{equation}
\begin{split}
\frac{B(z)}{z} &=
-\frac{4 Y}{15 a^4}+\frac{Y}{9 a^2}-\frac{1}{45 a^2}
+\frac{4}{675}+\xibar \left(-\frac{4 Y}{3 a^2}-\frac{1}{a^2}
+\frac{5}{36}\right)\,,
\end{split}
\end{equation}
while $C_1(z)$ and $C_2(z)$ confirm the results reported in
\cite{apco}. Using our definitions the non-local beta functions
of the couplings associated to the curvature $R$ are
\beq
\beta_{b_1}
&=&
\frac{z}{(4\pi)^2}\Bigl\{
\frac{2 Y}{5 a^4}-\frac{2 Y}{9 a^2}
+ \frac{1}{30 a^2}-\frac{aY}{180}
+\frac{a}{120}+\frac{Y}{24}-\frac{1}{40}
\nonumber
\\
&+&
\xibar \left(\frac{2 Y}{3 a^2}+\frac{a Y}{6}-\frac{a}{4}
-\frac{Y}{2}+\frac{1}{2}\right)
\Bigr\}
\label{Gff}
\eeq
and
\beq
\beta_{a_3} &=& \frac{1}{(4\pi)^2}
\Bigl\{
-\frac{2 Y}{3 a^4}
+\frac{Y}{3 a^2}-\frac{1}{18 a^2}
-\frac{Y}{24}+\frac{7}{360}
\nonumber
\\
&+&
\xibar
\left(-\frac{2 Y}{a^2}+\frac{Y}{2}-\frac{1}{6}\right)
\Bigr\}\,.
\label{a3ff}
\eeq
Eqs.~\eqref{Gff} and \eqref{a3ff} provide all the necessary
ingredients to study the Appelquist-Carazzone theorem for both
parameters.
Plots of these beta functions are given in Fig.~\ref{figure:plots-scalar}.
As we have explained in the Introduction, the most interesting case is
the decoupling theorem for the running of Newton's constant,
which is related to the inverse of $b_1=-G^{-1}$. The non-local
beta function of the couplings $b_1$ and $a_3$ in units of the
mass have the two limits
\begin{equation}
\begin{split}
\frac{\beta_{b_1}}{m^2} &= \begin{cases}
\frac{1}{(4\pi)^2}\xibar + \frac{1}{(4\pi)^2}
\left\{\left(\frac{3}{5}-\xi\right)
-\xi \ln\left(\frac{q^2}{m^2}\right)\right\} \frac{m^2}{q^2}
+{\cal O}\left(\frac{m^2}{q^2}\right)^{2}
&
\quad {\rm for} \quad q^2 \gg m^2,
\\
\frac{1}{(4\pi)^2}\left(\frac{4}{9}\xi-\frac{77}{900}\right)
\frac{q^2}{m^2} +{\cal O}\left(\frac{q^2}{m^2}\right)^{\frac{3}{2}}
&
\quad {\rm for} \quad q^2 \ll m^2
\end{cases}
\end{split}
\end{equation}
and
\begin{equation}
\begin{split}
\beta_{a_3} &= \begin{cases}
-\frac{1}{6(4\pi)^2}\left(\xi-\frac{1}{5}\right)
+ \frac{1}{(4\pi)^2} \left\{ \frac{5}{18}-2\xi
+ \xibar \ln\left(\frac{q^2}{m^2}\right)\right\} \frac{m^2}{q^2}
+{\cal O}\left(\frac{m^2}{q^2}\right)^{2}
& \,\,\, {\rm for} \,\,\, q^2 \gg m^2,
\\
\frac{1}{(4\pi)^2} \frac{1}{840}
\left(3-14\xi\right)\frac{q^2}{m^2}
+{\cal O}\left(\frac{q^2}{m^2}\right)^2
& \,\,\, {\rm for} \,\,\, q^2 \ll m^2.
\end{cases}
\end{split}
\end{equation}
The last expressions show standard quadratic decoupling in the IR
for both parameters,
exactly as in the usual QED situation \cite{AC} and as for the fourth
derivative non-surface gravitational terms \cite{apco,fervi}. In the high
energy limit (UV)
we meet the usual $\overline{\rm MS}$ beta function plus a small correction
to it.
\begin{figure}
\includegraphics[height=4.5cm]{beta-b1-scalar.eps}\qquad \includegraphics[height=4.5cm]{beta-a3-scalar.eps}
\caption{Plots of the beta functions $\beta_{b_1}$ and $\beta_{a_3}$ rescaled by a factor $(4\pi)^2$
that are induced by a single scalar field for the values $\xi=0$ (blue) and $\xi=\frac{1}{6}$ (yellow)
as a function of the variable $a$ defined in \eqref{eq:dimensionless-operators}. The plot ranges from the IR at $a=0$ ($q^2\ll m^2$) to the UV at $a=2$ ($q^2\gg m^2$).
The effects of the Appelquist-Carazzone theorem are seen on the left where the beta functions become zero. The beta function $\beta_{b_1}$ for the special conformal value $\xi=\frac{1}{6}$
is zero also in the UV.}
\label{figure:plots-scalar}
\end{figure}
\section{Dirac field}\label{sect:dirac}
The effective action of the minimally coupled Dirac fields requires the specification of the endomorphism $E=R/4$.
The final result turns out to be proportional to the dimension $d_\gamma$ of the Clifford algebra and hence to
the number of spinor components. We do not set $d_\gamma=4$, but choose instead to leave it arbitrary
so that the formulas can be generalized to other spinor species easily.
We find the local regularized action to be
\begin{equation}
\begin{split}
\Ga_{\rm loc}[g] &=\frac{d_\gamma}{2(4\pi)^2} \int {\rm d}^4 x\sqrt{g} \, \Bigl\{
m^4\Bigl(\frac{1}{\bar{\epsilon}}+\frac{3}{4}\Bigr)
+\frac{m^2}{6\bar{\epsilon}} R -\frac{1}{60\bar{\epsilon}} \Box R
-\frac{1}{40\bar{\epsilon}} C_{\mu\nu\rho\theta} C^{\mu\nu\rho\theta}
\Bigr\}\,.
\end{split}
\end{equation}
The minimal subtraction of the $1/\bar{\epsilon}$ divergences induces the following $\overline{\rm MS}$ beta functions
\begin{equation}
\begin{array}{lll}
\beta_{b_0}^{\overline{\rm MS}}
= -\frac{d_\gamma}{(4\pi)^2} \frac{m^4}{2} \,,
& \qquad
\beta_{b_1}^{\overline{\rm MS}}
= -\frac{d_\gamma}{(4\pi)^2} \frac{m^2}{12} \,,
& \qquad
\beta_{a_3}^{\overline{\rm MS}}
= \frac{d_\gamma}{(4\pi)^2} \frac{1}{120} \,.
\end{array}
\end{equation}
The non-local part of the effective action includes the following form factor
\begin{equation}
\begin{split}
\frac{B(z)}{z} &=
d_\gamma\Bigl\{-\frac{7}{400}+\frac{19}{180a^2}+\frac{4Y}{15a^4} \Bigr\}\,,
\end{split}
\end{equation}
while $C_1(z)$ and $C_2(z)$ agree with \cite{apco}.
The non-local beta functions are
\begin{equation}
\begin{split}
\beta_{b_1} &= \frac{d_\gamma z}{(4\pi)^2}\Bigl\{-\frac{2 Y}{5 a^4}+\frac{Y}{6 a^2}-\frac{1}{30 a^2}-\frac{a Y}{120}+\frac{a}{80}-\frac{1}{60} \Bigr\}\,,
\\
\beta_{a_3} &= \frac{d_\gamma}{(4\pi)^2} \Bigl\{ \frac{2 Y}{3 a^4}-\frac{Y}{6 a^2}+\frac{1}{18 a^2}-\frac{1}{180} \Bigr\}\,.
\end{split}
\end{equation}
As in the scalar case, the non-local beta functions of $b_1$ and $a_3$
have the two limits
\begin{equation}
\begin{split}
\frac{\beta_{b_1}}{m^2}
&= \begin{cases}
-\frac{d_\gamma}{(4\pi)^2}\frac{1}{12}
-\frac{d_\gamma}{(4\pi)^2}\left[\frac{7}{20}
-\frac{1}{4}\ln\left(\frac{q^2}{m^2}\right)\right]\frac{m^2}{q^2} +{\cal O}\left(\frac{m^2}{q^2}\right)^{{2}}
& \qquad {\rm for} \quad q^2 \gg m^2\,;
\\
-\frac{d_\gamma}{(4\pi)^2}\frac{23}{900} \frac{q^2}{m^2} +{\cal O}\left(\frac{q^2}{m^2}\right)^{\frac{3}{2}}
& \qquad {\rm for} \quad q^2 \ll m^2\,.
\end{cases}\\
\beta_{a_3} &= \begin{cases}
\frac{d_\gamma}{(4\pi)^2} \frac{1}{120} + \frac{d_\gamma}{(4\pi)^2}\left\{\frac{2}{9}
-\frac{1}{12}\ln\left(\frac{q^2}{m^2}\right)\right\}\frac{m^2}{q^2}
+{\cal O}\left(\frac{m^2}{q^2}\right)^{{2}}
& \qquad {\rm for} \quad q^2 \gg m^2 \,;
\\
\frac{d_\gamma}{(4\pi)^2} \frac{1}{1680} \frac{q^2}{m^2}
+{\cal O}\left(\frac{q^2}{m^2}\right)^2
& \qquad {\rm for} \quad q^2 \ll m^2\,.
\end{cases}
\end{split}
\end{equation}
Once again, there is a standard quadratic decoupling in the IR
for both parameters, while in the UV we find the $\overline{\rm MS}$
beta function and a sub-leading correction.
\section{Proca field}\label{sect:proca}
The minimally coupled Proca field can be understood as a
four-component vector field, with one of these components subtracted
through a single scalar ghost, so that it has effectively three degrees of
freedom in four dimensions. The local regularized action is
\begin{equation}
\begin{split}
\Ga_{\rm loc}[g] &=
\frac{1}{2(4\pi)^2} \int {\rm d}^4 x\sqrt{g} \, \Bigl\{
-m^4\Bigl(\frac{3}{\bar{\epsilon}}+\frac{9}{4}\Bigr)
-\frac{m^2}{\bar{\epsilon}} R +\frac{2}{15\bar{\epsilon}} \Box R
-\frac{13}{60\bar{\epsilon}} C_{\mu\nu\rho\theta} C^{\mu\nu\rho\theta}
-\frac{1}{36} R^2
\Bigr\}\,.
\end{split}
\end{equation}
The minimal subtraction of the $1/\bar{\epsilon}$ poles induces the
following $\overline{\rm MS}$ beta functions
\begin{equation}
\begin{array}{lll}
\beta_{b_0}^{\overline{\rm MS}} = \frac{1}{(4\pi)^2} \frac{3m^4}{2} \,,
& \qquad
\beta_{b_1}^{\overline{\rm MS}} = \frac{1}{(4\pi)^2} \frac{m^2}{2} \,,
& \qquad
\beta_{a_3}^{\overline{\rm MS}} = -\frac{1}{(4\pi)^2} \frac{1}{15} \,.
\end{array}
\end{equation}
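As a check of the counting of degrees of freedom, note that the cosmological constant contribution equals that of three minimally coupled scalars of the same mass,
\begin{equation}
\beta_{b_0}^{\overline{\rm MS}}\Big|_{\rm Proca}
= 3\, \beta_{b_0}^{\overline{\rm MS}}\Big|_{\rm scalar}
= \frac{1}{(4\pi)^2}\, \frac{3m^4}{2}\,,
\end{equation}
reflecting the four components of the vector minus the single scalar ghost.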
The non-local part of the effective action includes the following form factors
\begin{equation}
\begin{split}
\frac{B(z)}{z} &= \frac{157}{1800} -\frac{17}{30 a^2} -\frac{4 Y}{5 a^4}-\frac{Y}{3 a^2} \,,
\end{split}
\end{equation}
and $C_1(z)$ and $C_2(z)$ reproduce \cite{apco}. The non-local beta functions are
\begin{equation}
\begin{split}
\beta_{b_1} &= \frac{z}{(4\pi)^2} \left\{ \frac{6 Y}{5 a^4}-\frac{Y}{3 a^2}+\frac{1}{10 a^2}+\frac{a Y}{15}-\frac{a}{10}-\frac{Y}{8}+\frac{7}{40} \right\}\,,
\\
\beta_{a_3} &= \frac{1}{(4\pi)^2} \Bigl\{ -\frac{2 Y}{a^4}-\frac{1}{6 a^2}+\frac{Y}{8}-\frac{1}{40} \Bigr\}\,.
\end{split}
\end{equation}
The beta functions of $b_1$ and $a_3$ have the two limits
\begin{equation}
\begin{split}
\frac{\beta_{b_1}}{m^2} &= \begin{cases}
\frac{1}{(4\pi)^2}\frac{1}{2}+\frac{1}{(4\pi)^2}\left(\frac{4}{5}
-\ln\left(\frac{q^2}{m^2}\right)\right)\frac{m^2}{q^2} +{\cal O}\left(\frac{m^2}{q^2}\right)^{2}
& \qquad {\rm for} \quad q^2 \gg m^2\,;
\\
\frac{1}{(4\pi)^2}\frac{169}{900} \frac{q^2}{m^2}
+{\cal O}\left(\frac{q^2}{m^2}\right)^{\frac{3}{2}}
& \qquad {\rm for} \quad q^2 \ll m^2\,.
\end{cases}\\
\beta_{a_3} &= \begin{cases}
-\frac{1}{(4\pi)^2} \frac{1}{15}
- \frac{1}{(4\pi)^2}\left\{\frac{7}{6}
-\frac{1}{2}\ln\left(\frac{q^2}{m^2}\right)\right\}\frac{m^2}{q^2}
+{\cal O}\left(\frac{m^2}{q^2}\right)^{2}
& \qquad {\rm for} \quad q^2 \gg m^2 \,;
\\
-\frac{1}{(4\pi)^2} \frac{1}{168} \frac{q^2}{m^2}
+{\cal O}\left(\frac{q^2}{m^2}\right)^2 & \qquad {\rm for}
\quad q^2 \ll m^2\,.
\end{cases}
\end{split}
\end{equation}
We can observe that for the Proca field there is the same quadratic
decoupling for both couplings, and the same $\overline{\rm MS}$
beta function plus a small correction in the UV.
\section{Conclusions}
\label{sect:conclusions}
We computed the covariant non-local form factors of the Euclidean
effective action of nonminimal scalars, Dirac spinors and Proca fields
up to the second order of the curvature expansion on asymptotically
flat space. The calculations were performed by means of the heat kernel
method for massive quantum fields and an arbitrary external
metric. We checked explicitly that the results for the fourth-derivative
terms confirm the
previous ones derived in \cite{apco,fervi,Codello:2012kq}, which were
obtained by both Feynman diagrams and the heat kernel method, as presented
in the paper of Barvinsky and Vilkovisky \cite{bavi90}. We used the
results for the effective action to find suitable beta functions
which arise from the subtraction of the divergences at a physical
momentum scale $q^2$. These beta functions are special because
they display two important limits: in the ultraviolet they reproduce
the universal results coming from the minimal subtraction of the
poles of dimensional regularization, while in the infrared (IR)
limit $q^2 \ll m^2$ they exhibit a quadratic decoupling, as
expected from the Appelquist-Carazzone theorem. The decoupling
can be observed both for the inverse Newton constant and for $a_3$.
With respect to the global scaling, the $\,\Box R$ term behaves in the
same way as the $R^2$ term. It is well known that the finite contribution for
the $R^2$ term is linked to the divergences of the $\,\Box R$-term,
while the finite nonlocal contribution for the surface $\,\Box R$
term has smaller relevance than the one for the second derivative term.
The main new result of our work is the non-local form factor for the
Einstein-Hilbert term, which has the form $k(\Box)R$. For
non-zero mass $m$ of the quantum field, such a form factor can be
expanded into a power series in the ratio $\Box/m^2$, and thus it
represents a power series of total derivatives. If we forget that the
total derivatives do not contribute to the equations of motion,
these form factors show typical quadratic decoupling in the
IR limit $q^2 \ll m^2$. The same effect can be observed from
both form factors in the effective action and from the ``physical''
beta functions defined in the Momentum Subtraction scheme
of renormalization.
The relevant question is whether there is a way to construct
a physical application of the results for the total derivative terms.
In this respect we can note that the total derivative terms may be
relevant in the case of manifolds with boundaries. In
theoretical cosmology there are objects of this type, called domain
walls, and it would be interesting to consider the implications of
our results in this case. Even simpler is the situation in
cosmology. One can regard the cosmological spacetime of the
expanding universe as a manifold with a boundary (horizon) whose
size is defined by the inverse of the Hubble parameter. Taking this
into account, the natural interpretation is that we have, for the
Einstein-Hilbert term, the decoupling in the form of identification
\beq
\frac{q^2}{m^2}\,\,\longrightarrow \,\,\frac{H^2}{m^2}.
\label{idH}
\eeq
Indeed, the quadratic decoupling for the inverse Newton constant
in the IR is not what we need for the phenomenological models of
quantum corrections in cosmology \cite{CC-Gruni} or
astrophysics \cite{RotCurves}. Using the approach of \cite{CC-Gruni}
one can easily see that in this case the energy conservation law will
tell us that the cosmological constant does not show any significant
running in the IR. This is the result which some of the present authors
could not achieve in \cite{DCCrun}. In our opinion, however, this
conclusion cannot be seen as final, since it is based on the
qualitative and phenomenological identification of the scale \eqref{idH}.
Nevertheless, one can expect that the study based on surface terms can be
useful in the further exploration of this interesting subject.
\bigskip
\noindent\emph{Acknowledgements.}
The authors are grateful to Tiago G.~Ribeiro for involvement in the
early stages of the project. O.Z.\ is grateful to Carlo~Pagani for several
discussions on this and related topics. O.Z.\ acknowledges support
from the DFG under the Grant Za~958/2-1. The work of I.Sh.\ was
partially supported by Conselho Nacional de Desenvolvimento
Cient\'{i}fico e Tecnol\'{o}gico - CNPq (grant 303893/2014-1)
and Funda\c{c}\~{a}o de Amparo \`a Pesquisa de Minas Gerais -
FAPEMIG (project APQ-01205-16).
T.P.N.\ wishes to acknowledge
CAPES for the support through the PNPD program.
S.A.F.\ acknowledges support from the DAAD and the Ministerio
de Educaci\'on Argentino under the ALE-ARG program.
\section{Introduction}
\begin{figure}
\centering
\includegraphics[width=0.47\linewidth]{Bolt2/0020.png} \includegraphics[width=0.47\linewidth]{Bolt2/0232.png} \\
\vspace{0.1em}
\includegraphics[width=0.47\linewidth]{CarScale/0005.png} \includegraphics[width=0.47\linewidth]{CarScale/0202.png} \\
\includegraphics[width=0.8\linewidth]{fig1_mark.pdf} \\
\caption{Comparisons between one-stage Siamese-RPN~\cite{li2018high} and C-RPN on two challenging sequences: {\it Bolt2} (the top row) with similar distractors and {\it CarScale} (the bottom row) with large scale changes. We observe that C-RPN can distinguish the target from distractors, while Siamese-RPN drifts to the background in {\it Bolt2}. In addition, compared to using a single regressor in Siamese-RPN, multi-regression in C-RPN can better localize the target in presence of large scale changes in {\it CarScale}. Best viewed in color.}
\label{fig:1}
\vspace{-2mm}
\end{figure}
Visual tracking is one of the most fundamental problems in computer vision, and has a long list of applications such as robotics, human-machine interaction, intelligent vehicles, surveillance and so forth. Despite great advances in recent years, visual tracking remains challenging due to many factors, including occlusion, scale variation, etc.
Recently, Siamese networks have drawn great attention in the tracking community owing to their balanced accuracy and speed. By formulating object tracking as a matching problem, Siamese trackers~\cite{tao2016siamese,bertinetto2016fully,valmadre2017end,he2018twofold,held2016learning,li2018high,wang2018learning,zhu2018distractor} aim to learn {\it offline} a generic similarity function from a large set of videos. Among these methods, the work of~\cite{li2018high} proposes a one-stage Siamese-RPN for tracking by introducing the region proposal network (RPN), originally used for object detection~\cite{ren2015faster,liu2016ssd}, into the Siamese network. With the proposal extraction by RPN, this approach simultaneously performs classification and localization from multiple scales, achieving excellent performance. Besides, the use of RPN avoids applying the time-consuming pyramid for target scale estimation~\cite{bertinetto2016fully}, resulting in a super real-time solution.
\subsection{Problem and Motivation}
Despite having achieved promising results, Siamese-RPN may drift to the background, especially in the presence of similar semantic distractors (see Fig.~\ref{fig:1}). We identify two reasons accounting for this.
First, the distribution of training samples is imbalanced: (1) positive samples are far fewer than negative samples, leading to ineffective training of the Siamese network; and (2) most negative samples are easy negatives (non-similar non-semantic background) that contribute {\it little} useful information in learning a discriminative classifier~\cite{lin2017focal}. As a consequence, the classifier is dominated by the easily classified background samples, and degrades when encountering difficult similar semantic distractors.
Second, low-level spatial features are not fully explored. In Siamese-RPN (and other Siamese trackers), only features of the last layer, which contain more semantic information, are explored to distinguish target/background. In tracking, nevertheless, background distractors and the target may belong to the same category, and/or have similar semantic features~\cite{wang2015visual}. In such case, the high-level semantic features are less discriminative in distinguishing target/background.
In addition to the issues above, the one-stage Siamese-RPN applies a single regressor for target localization using pre-defined anchor boxes. These boxes are expected to work well when having a high overlap with the target. However, for {\it model-free} visual tracking, no prior information regarding the target object is known, and it is hard to estimate how the scale of the target changes. Using pre-defined coarse anchor boxes in a single-step regression is insufficient for accurate localization~\cite{gidaris2015object,cai2018cascade} (see again Fig.~\ref{fig:1}).
The class imbalance problem is addressed in two-stage object detector (\eg, Faster R-CNN~\cite{ren2015faster}). The first proposal stage rapidly filters out most background samples, and then the second classification stage adopts sampling heuristics such as a fixed foreground-to-background ratio to maintain a manageable balance between foreground and background. In addition, two steps of regressions achieve accurate localization even for objects with extreme shapes.
Motivated by the two-stage detector, we propose a multi-stage tracking framework by cascading a sequence of RPNs to solve the class imbalance problem, and meanwhile fully explore features across layers for robust visual tracking.
\subsection{Contribution}
As the {\bf first contribution}, we present a novel multi-stage tracking framework, the Siamese Cascaded RPN (C-RPN), to solve the problem of class imbalance by performing hard negative sampling~\cite{viola2001rapid,shrivastava2016training}. C-RPN consists of a sequence of RPNs cascaded from the high-level to the low-level layers in the Siamese network. In each stage (level), an RPN performs classification and localization, and outputs the classification scores and the regression offsets for the anchor boxes in this stage. The easy negative anchors are then filtered out, and the rest, treated as hard examples, are utilized as training samples for the RPN of the next stage. Through this process, C-RPN performs stage-by-stage hard negative sampling. As a result, the distributions of training samples are sequentially more balanced, and the classifiers of RPNs are sequentially more discriminative in distinguishing more difficult distractors (see Fig.~\ref{fig:1}).
{\bf Another benefit} of C-RPN is the more accurate target localization compared to the one-stage SiamRPN~\cite{li2018high}. Instead of using the pre-defined coarse anchor boxes in a single regression step, C-RPN consists of multiple steps of regressions due to multiple RPNs. In each stage, the anchor boxes (including {\it locations} and {\it sizes}) are adjusted by the regressor, which provides better initialization for the regressor of the next stage. As a consequence, C-RPN progressively refines the target bounding box, leading to better localization as shown in Fig.~\ref{fig:1}.
Leveraging features from different layers of a neural network has been proven beneficial for improving model discriminability~\cite{long2015fully,lin2017refinenet,lin2017feature}. To fully explore both the high-level semantic and the low-level spatial features for visual tracking, we make the {\bf second contribution} by designing a novel feature transfer block (FTB). Instead of separately using features from a single layer in one RPN, FTB enables us to fuse the high-level features into the low-level RPNs, which further improves their discriminative power in dealing with complex background, resulting in better performance of C-RPN. Fig.~\ref{fig:cpn} illustrates the framework of C-RPN.
Last but not least, the {\bf third contribution} is to implement a tracker based on the proposed C-RPN. In extensive experiments on six benchmarks, including OTB-2013~\cite{wu2013online}, OTB-2015~\cite{wu2015object}, VOT-2016~\cite{kristan2016visual}, VOT-2017~\cite{kristan2017visual}, LaSOT~\cite{fan2018lasot} and TrackingNet~\cite{muller2018trackingnet}, our C-RPN consistently achieves the state-of-the-art results and runs in real-time.
\begin{figure*}
\centering
\includegraphics[width=1\linewidth]{cascadeRPN.pdf}
\caption{Illustration of the architecture of C-RPN, including a Siamese network for feature extraction and cascaded regional proposal networks for sequential classifications and regressions. The FTB transfers the high-level semantic features for the low-level RPN, and ``A'' represents the set of anchor boxes, which are gradually refined stage by stage. Best viewed in color.}
\label{fig:cpn}
\vspace{-2mm}
\end{figure*}
\section{Related Work}
Visual tracking has been extensively researched in recent decades. In the following we discuss the most related work, and refer readers to~\cite{smeulders2013visual,yilmaz2006object,li2018deep} for recent surveys.
\vspace{0.12em}
\noindent{\bf Deep tracking.} Inspired by the successes in image classification~\cite{krizhevsky2012imagenet,he2016deep}, deep convolutional neural network (CNN) has been introduced into visual tracking and demonstrated excellent performances~\cite{wang2013learning,wang2015visual,nam2016learning,danelljan2017eco,fan2017sanet,ma2015hierarchical,danelljan2016beyond,song2018vital}. Wang {\it et al.}~\cite{wang2013learning} propose a stacked denoising autoencoder to learn generic feature representation for object appearance modeling in tracking. Wang {\it et al.}~\cite{wang2015visual} introduce a fully convolutional neural network tracking (FCNT) approach by transferring the pre-trained deep features to improve tracking accuracy. Ma {\it et al.}~\cite{ma2015hierarchical} replace hand-craft features in correlation filter tracking with deep features, achieving remarkable gains. Nam and Han~\cite{nam2016learning} propose a light architecture of CNNs with online fine-tuning to learn generic feature for tracking target. Fan and Ling~\cite{fan2017sanet} extend this approach by introducing a recurrent neural network (RNN) to capture object structure. Song {\it et al.}~\cite{song2018vital} apply adversary learning in CNN to learn richer representation for tracking. Danelljan {\it et al.}~\cite{danelljan2016beyond} propose continuous convolution filters for correlation filter tracking, and later optimize this method in~\cite{danelljan2017eco}.
\vspace{0.12em}
\noindent{\bf Siamese tracking.} The Siamese network has attracted increasing interest for visual tracking because of its balanced accuracy and speed. Tao {\it et al.}~\cite{tao2016siamese} utilize a Siamese network to learn offline a matching function from a large set of sequences, then use the fixed matching function to search for the target in a local region. Bertinetto {\it et al.}~\cite{bertinetto2016fully} introduce a fully convolutional Siamese network (SiamFC) for tracking by measuring the region-wise feature similarity between the target object and the candidate. Owing to its light structure and absence of model update, SiamFC runs efficiently at 80 fps. Held {\it et al.}~\cite{held2016learning} propose the GOTURN approach by learning a motion prediction model with the Siamese network. Valmadre {\it et al.}~\cite{valmadre2017end} use a Siamese network to learn the feature representation for correlation filter tracking. He {\it et al.}~\cite{he2018twofold} introduce a two-fold Siamese network for tracking. Wang {\it et al.}~\cite{wang2018learning} incorporate an attention mechanism into the Siamese network to learn a more discriminative metric for tracking. Notably, Li {\it et al.}~\cite{li2018high} combine the Siamese network with RPN, and propose a one-stage Siamese-RPN tracker, achieving excellent performance. Zhu {\it et al.}~\cite{zhu2018distractor} introduce more negative samples to train a distractor-aware Siamese-RPN tracker. Despite the improvement, this approach requires large extra training data from other domains.
\vspace{0.12em}
\noindent{\bf Multi-level features.} The features from different layers in a neural network contain different information. The high-level features consist of more abstract semantic cues, while the low-level features contain more detailed spatial information~\cite{long2015fully}. It has been shown that tracking benefits from multi-level features. In~\cite{ma2015hierarchical}, Ma {\it et al.} separately use features from three different layers for three correlation models, and fuse their outputs for the final tracking result. Wang {\it et al.}~\cite{wang2015visual} develop two regression models with features from two layers to distinguish similar semantic distractors.
\vspace{0.12em}
\noindent{\bf Our approach.} In this paper, we focus on solving the problem of class imbalance to improve model discriminability. Our approach is related to but different from the Siamese-RPN tracker~\cite{li2018high}, which applies a one-stage RPN for classification and localization and skips the data imbalance problem. In contrast, our approach cascades a sequence of RPNs to address the data imbalance by performing hard negative sampling, and progressively refines anchor boxes for better target localization using multi-regression. Our method is also related to~\cite{ma2015hierarchical,wang2015visual} in using multi-level features for tracking. However, unlike~\cite{ma2015hierarchical,wang2015visual}, in which multi-level features are separately used for independent models, we propose a feature transfer block to fuse features across layers for each RPN, improving its discriminative power in distinguishing the target object from complex background.
\section{Siamese Cascaded RPN (C-RPN)}
In this section, we detail the Siamese Cascaded RPN (referred to as C-RPN) as shown in Fig.~\ref{fig:cpn}.
C-RPN contains two subnetworks: the Siamese network and the cascaded RPN. The Siamese network is utilized to extract the features of the target template $\mathbf{z}$ and the search region $\mathbf{x}$. Afterwards, C-RPN feeds the features of $\mathbf{z}$ and $\mathbf{x}$ to each RPN. Instead of only using the features from one layer, we apply the feature transfer block (FTB) to fuse the features from high-level layers for each RPN. An RPN simultaneously performs classification and localization on the feature maps of $\mathbf{x}$. According to the classification scores and regression offsets, we filter out the easy negative anchors (\eg, an anchor whose negative confidence is larger than a preset threshold $\theta$), and refine the locations and sizes of the remaining anchors, which are used for training the RPN in the next stage.
\subsection{Siamese Network}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{RPN.pdf}
\caption{Architecture of RPN. Best viewed in color.}
\label{fig:rpn}
\vspace{-2mm}
\end{figure}
As in~\cite{bertinetto2016fully}, we adopt the modified AlexNet~\cite{krizhevsky2012imagenet} to develop our Siamese network. The Siamese network comprises two identical branches, the z-branch and the x-branch, which are employed to extract features from the target template $\mathbf{z}$ and the search region $\mathbf{x}$, respectively (see Fig.~\ref{fig:cpn}). The two branches share parameters to ensure that the same transformation is applied to both $\mathbf{z}$ and $\mathbf{x}$, which is crucial for the similarity metric learning. More details about the Siamese network can be found in~\cite{bertinetto2016fully}.
Different from~\cite{li2018high}, which only uses the features from the last layer of the Siamese network for tracking, we leverage the features from multiple levels to improve model robustness. For convenience in what follows, we denote $\varphi_{i}(\mathbf{z})$ and $\varphi_{i}(\mathbf{x})$ as the feature transformations of $\mathbf{z}$ and $\mathbf{x}$ from the conv-$i$ layer in the Siamese network with $N$ layers\footnote{For notation simplicity, we name each layer in the Siamese network in an {\bf inverse} order, \ie, conv-$N$, conv-$(N-1)$, $\cdots$, conv-$2$, conv-$1$ for the low-level to the high-level layers.}.
\subsection{One-Stage RPN in Siamese Network}
Before describing C-RPN, we first review the one-stage Siamese RPN tracker~\cite{li2018high}, which consists of two branches for classification and regression of anchors, as depicted in Fig.~\ref{fig:rpn}. It takes as inputs the feature transformations $\varphi_{1}(\mathbf{z})$ and $\varphi_{1}(\mathbf{x})$ of $\mathbf{z}$ and $\mathbf{x}$ from the last layer of the Siamese network, and outputs classification scores and regression offsets for anchors. For simplicity, we drop the subscripts of the feature transformations in what follows.
To ensure classification and regression for each anchor, two convolution layers are utilized to adjust the channels of $\varphi(\mathbf{z})$ into suitable forms, denoted as $[\varphi(\mathbf{z})]_{cls}$ and $[\varphi(\mathbf{z})]_{reg}$, for classification and regression, respectively. Likewise, we apply two convolution layers for $\varphi(\mathbf{x})$ but keep the channels unchanged, and obtain $[\varphi(\mathbf{x})]_{cls}$ and $[\varphi(\mathbf{x})]_{reg}$. Therefore, the classification scores $\{c_i\}$ and the regression offsets $\{r_i\}$ for each anchor can be computed as
\begin{equation}
\label{eq1}
\begin{split}
\{c_i\} &= \mathrm{corr}([\varphi(\mathbf{z})]_{cls}, [\varphi(\mathbf{x})]_{cls})\\
\{r_i\} &= \mathrm{corr}([\varphi(\mathbf{z})]_{reg}, [\varphi(\mathbf{x})]_{reg})
\end{split}
\end{equation}
where $i$ is the anchor index, and $\rm corr(\mathbf{a}, \mathbf{b})$ denotes the correlation between $\mathbf{a}$ and $\mathbf{b}$ in which $\mathbf{a}$ serves as the kernel. Each $c_i$ is a 2d vector, representing the negative and positive confidences of the $i^{\rm th}$ anchor. Similarly, each $r_i$ is a 4d vector which represents the offsets of the center location and size of the anchor to the groundtruth. Siamese RPN is trained with a multi-task loss consisting of two parts, \ie, the classification loss (\ie, softmax loss) and the regression loss (\ie, smooth $L_1$ loss). We refer readers to~\cite{li2018high,ren2015faster} for further details.
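As a rough illustration of Eq.~(\ref{eq1}), the following NumPy sketch computes classification and regression maps by correlating per-channel template kernels over a search feature map. The feature sizes, the random kernels, and the helper name \texttt{corr} are hypothetical stand-ins; in the actual tracker the kernels are produced by convolutions on $\varphi(\mathbf{z})$ and the correlation is implemented as a convolution layer.

```python
import numpy as np

def corr(kernel, feat):
    # Valid cross-correlation of a template kernel over a search feature map,
    # summed over channels: kernel (C, kh, kw), feat (C, H, W).
    C, kh, kw = kernel.shape
    _, H, W = feat.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(kernel * feat[:, y:y + kh, x:x + kw])
    return out

rng = np.random.default_rng(0)
x_feat = rng.standard_normal((8, 12, 12))  # stand-in for [phi(x)]

k = 5  # anchors per spatial position (ratios [0.33, 0.5, 1, 2, 3])
# One kernel per output channel: 2k classification maps, 4k regression maps.
cls_kernels = rng.standard_normal((2 * k, 8, 4, 4))
reg_kernels = rng.standard_normal((4 * k, 8, 4, 4))

cls_maps = np.stack([corr(kern, x_feat) for kern in cls_kernels])  # (2k, 9, 9)
reg_maps = np.stack([corr(kern, x_feat) for kern in reg_kernels])  # (4k, 9, 9)
```

Each spatial position of the output maps thus carries $2k$ classification scores and $4k$ regression offsets, one pair $(c_i, r_i)$ per anchor.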
\subsection{Cascaded RPN}
As mentioned earlier, previous Siamese trackers mostly ignore the problem of class imbalance, resulting in degenerated performance in presence of similar semantic distractors. Besides, they only use the high-level semantic features from the last layer, which does not fully explore multi-level features. To address these issues, we propose a multi-stage tracking framework by cascading a set of $L$ ($L \le N$) RPNs.
For RPN$_l$ in the $l^{\rm th}$ ($1 < l \le L$) stage, it receives fused features $\Phi_l(\mathbf{z})$ and $\Phi_l(\mathbf{x})$ of the conv-$l$ layer and the high-level layers from FTB, instead of features $\varphi_l(\mathbf{z})$ and $\varphi_l(\mathbf{x})$ from a single separate layer~\cite{li2018high,bertinetto2016fully}. The $\Phi_l(\mathbf{z})$ and $\Phi_l(\mathbf{x})$ are obtained as follows,
\begin{equation}
\label{eq4}
\begin{split}
\Phi_{l}(\mathbf{z}) &= \mathrm{FTB} \big(\Phi_{l-1}(\mathbf{z}), \varphi_{l}(\mathbf{z})\big) \\
\Phi_{l}(\mathbf{x}) &= \mathrm{FTB} \big(\Phi_{l-1}(\mathbf{x}), \varphi_{l}(\mathbf{x})\big)
\end{split}
\end{equation}
where $\mathrm{FTB}(\cdot,\cdot)$ denotes the FTB as described in Section~\ref{ftb}. For RPN$_1$, $\Phi_{1}(\mathbf{z}) = \varphi_{1}(\mathbf{z})$ and $\Phi_{1}(\mathbf{x}) = \varphi_{1}(\mathbf{x})$. Therefore, the classification scores $\{c_i^{l}\}$ and the regression offsets $\{r_i^{l}\}$ for anchors in stage $l$ are calculated as
\begin{equation}
\label{eq5}
\begin{split}
\{c_i^{l}\} &= \mathrm{corr}([\Phi_l(\mathbf{z})]_{cls}, [\Phi_l(\mathbf{x})]_{cls})\\
\{r_i^{l}\} &= \mathrm{corr}([\Phi_l(\mathbf{z})]_{reg}, [\Phi_l(\mathbf{x})]_{reg})
\end{split}
\end{equation}
where $[\Phi_{l}(\mathbf{z})]_{cls}$, $[\Phi_{l}(\mathbf{x})]_{cls}$, $[\Phi_{l}(\mathbf{z})]_{reg}$ and $[\Phi_{l}(\mathbf{x})]_{reg}$ are derived by performing convolutions on $\Phi_{l}(\mathbf{z})$ and $\Phi_{l}(\mathbf{x})$.
Let $A_{l}$ denote the anchor set in stage $l$. With the classification scores $\{c_i^{l}\}$, we can filter out anchors in $A_{l}$ whose negative confidences are larger than a preset threshold $\theta$, and the rest form a new anchor set $A_{l+1}$, which is employed for training RPN$_{l+1}$. For RPN$_1$, $A_1$ is pre-defined. Besides, in order to provide a better initialization for the regressor of RPN$_{l+1}$, we refine the center locations and sizes of the anchors in $A_{l+1}$ using the regression results $\{r_i^{l}\}$ of RPN$_{l}$, thus generating more accurate localization compared to the single-step regression in Siamese RPN~\cite{li2018high}, as illustrated in Fig.~\ref{fig:4}. Fig.~\ref{fig:cpn} shows the cascade architecture of C-RPN.
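The filtering-and-refinement step between stages can be sketched as follows. This is a hypothetical NumPy helper, not the actual implementation: anchors are in center format $(x, y, w, h)$, and the refinement applies the same update as the anchor adjustment equation below (Eq.~(\ref{eq8})).

```python
import numpy as np

def next_stage_anchors(anchors, cls_scores, reg_offsets, theta=0.95):
    """Filter easy negatives and refine the survivors for the next RPN stage.

    anchors:     (M, 4) boxes in center format (x, y, w, h)
    cls_scores:  (M, 2) softmax scores, column 0 = negative confidence
    reg_offsets: (M, 4) predicted (rx, ry, rw, rh)
    """
    keep = cls_scores[:, 0] <= theta           # drop anchors with neg. conf > theta
    a, r = anchors[keep], reg_offsets[keep]
    refined = np.empty_like(a)
    refined[:, 0] = a[:, 0] + a[:, 2] * r[:, 0]  # x <- x + w * rx
    refined[:, 1] = a[:, 1] + a[:, 3] * r[:, 1]  # y <- y + h * ry
    refined[:, 2] = a[:, 2] * np.exp(r[:, 2])    # w <- w * exp(rw)
    refined[:, 3] = a[:, 3] * np.exp(r[:, 3])    # h <- h * exp(rh)
    return refined
```

The surviving, refined boxes play the role of $A_{l+1}$ for the next stage.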
\begin{figure}
\centering
\includegraphics[width=0.47\linewidth]{Jump/0055.png} \includegraphics[width=0.47\linewidth]{Jump/0107.png} \\
\includegraphics[width=0.8\linewidth]{fig1_mark.pdf} \\
\caption{Localization using a single regressor and multiple regressors. The multiple regressors in C-RPN can better handle large scale changes for more accurate localization. Best viewed in color.}
\label{fig:4}
\vspace{-2mm}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{det_maps.pdf}\\
\caption{Response maps in different stages. Image (a) is the region of interest, and (b) shows the response maps obtained by RPN in three stages. We can see that RPN is sequentially more discriminative in distinguishing distractors. Best viewed in color.}
\label{fig:5}
\vspace{-2mm}
\end{figure}
The loss function $\ell_{{\rm RPN}_{l}}$ for RPN$_l$ is composed of classification loss function $L_{\rm cls}$ (softmax loss) and regression loss function $L_{\rm loc}$ (smooth $L_1$ loss) as follows,
\begin{equation}
\label{rpn_loss}
\ell_{{\rm RPN}_{l}}(\{c_i^{l}\}, \{r_i^{l}\}) = \sum_{i}L_{\rm cls}(c_i^{l}, c_i^{l*}) + \lambda \sum_{i}c_{i}^{l*}L_{\rm loc}(r_i^{l}, r_i^{l*})
\end{equation}
where $i$ is the anchor index in $A_l$ of stage $l$, $\lambda$ a weight to balance losses, $c_{i}^{l*}$ the label of anchor $i$, and $r_{i}^{l*}$ the true distance between anchor $i$ and groundtruth. Following~\cite{ren2015faster}, $r_{i}^{l*}=(r_{i(\rm x)}^{l*}, r_{i (\rm y)}^{l*}, r_{i (\rm w)}^{l*}, r_{i (\rm h)}^{l*})$ is a 4d vector, such that
\begin{equation}
\begin{aligned}
r_{i(\rm x)}^{l*} &= (x^*-x_a^{l})/w_a^{l} & r_{i(\rm y)}^{l*} &= (y^*-y_a^{l})/h_a^{l} \\
r_{i(\rm w)}^{l*} &= {\rm log}(w^*/w_a^{l}) & r_{i(\rm h)}^{l*} &= {\rm log}(h^*/h_a^{l})
\end{aligned}
\end{equation}
where $x$, $y$, $w$ and $h$ are the center coordinates of a box and its width and height. Variables $x^*$ and $x_a^{l}$ are for the groundtruth and the anchor of stage $l$ (likewise for $y$, $w$ and $h$). It is worth noting that, different from~\cite{li2018high}, which uses fixed anchors, the anchors in C-RPN are progressively adjusted by the regressor of the previous stage, and computed as
\begin{equation}
\label{eq8}
\begin{aligned}
x_a^{l} &= x_a^{l-1} + w_{a}^{l-1} r_{i(\rm x)}^{l-1} & y_a^{l} &= y_a^{l-1} + h_{a}^{l-1} r_{i(\rm y)}^{l-1} \\
w_a^{l} &= w_a^{l-1} {\rm exp}(r_{i(\rm w)}^{l-1}) & h_a^{l} &= h_a^{l-1} {\rm exp}(r_{i(\rm h)}^{l-1})
\end{aligned}
\end{equation}
For the anchor in the first stage, $x_a^{1}$, $y_a^{1}$, $w_a^{1}$ and $h_a^{1}$ are pre-defined.
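The regression targets and the anchor update are mutual inverses: applying the refinement of Eq.~(\ref{eq8}) with the exact offsets $r_i^{l*}$ recovers the groundtruth box. The small sketch below checks this round trip; the helper names are hypothetical and boxes are in center format $(x, y, w, h)$.

```python
import numpy as np

def regression_targets(gt, anchor):
    # r* = (rx, ry, rw, rh) of groundtruth (x*, y*, w*, h*) w.r.t. an anchor.
    xs, ys, ws, hs = gt
    xa, ya, wa, ha = anchor
    return np.array([(xs - xa) / wa, (ys - ya) / ha,
                     np.log(ws / wa), np.log(hs / ha)])

def refine_anchor(anchor, r):
    # Anchor update of Eq. (8): shift the center, rescale width/height.
    xa, ya, wa, ha = anchor
    return np.array([xa + wa * r[0], ya + ha * r[1],
                     wa * np.exp(r[2]), ha * np.exp(r[3])])
```

A regressor that predicts the targets perfectly therefore maps each anchor exactly onto the groundtruth, and intermediate predictions move the anchor toward it stage by stage.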
The above procedure forms the proposed cascaded RPN. Owing to the rejection of easy negative anchors, the distribution of training samples for each RPN is gradually more balanced. As a result, the classifier of each RPN is sequentially more discriminative in distinguishing difficult distractors. Besides, multi-level feature fusion further improves the discriminability in handling complex background. Fig.~\ref{fig:5} shows the discriminative power of different RPNs by demonstrating the detection response map in each stage.
The loss function $\ell_{\rm CRPN}$ of C-RPN consists of the loss functions of all RPN$_l$. For each RPN, the loss function is computed using Eq. (\ref{rpn_loss}), and $\ell_{\rm CRPN}$ is expressed as
\begin{equation}
\label{crpn_loss}
\ell_{\rm CRPN} = \sum_{l=1}^{L} \ell_{\rm RPN_{l}}
\end{equation}
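The per-stage loss of Eq.~(\ref{rpn_loss}) can be sketched as follows. The softmax cross-entropy and smooth-$L_1$ implementations are generic textbook forms, not the paper's code; the regression term is counted only for positive anchors via the label $c_i^{l*}$.

```python
import numpy as np

def smooth_l1(x):
    # Elementwise smooth L1: 0.5 x^2 for |x| < 1, |x| - 0.5 otherwise.
    ax = np.abs(x)
    return np.where(ax < 1, 0.5 * x * x, ax - 0.5)

def rpn_loss(cls_scores, labels, reg_pred, reg_targets, lam=1.0):
    """Per-stage loss: softmax cross-entropy over anchors plus smooth-L1
    regression, the latter only for positive anchors (labels == 1)."""
    # numerically stable log-softmax over the 2 classes
    z = cls_scores - cls_scores.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    cls_loss = -logp[np.arange(len(labels)), labels].sum()
    pos = (labels == 1).astype(float)
    reg_loss = (pos[:, None] * smooth_l1(reg_pred - reg_targets)).sum()
    return cls_loss + lam * reg_loss
```

The total loss $\ell_{\rm CRPN}$ is then simply the sum of this quantity over the $L$ stages.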
\subsection{Feature Transfer Block}
\label{ftb}
To effectively leverage multi-level features, we introduce FTB to fuse features across layers so that each RPN is able to share high-level semantic features to improve its discriminability. In detail, a deconvolution layer is used to match the feature dimensions of different sources. Then, the different features are fused using element-wise summation, followed by a ReLU layer. In order to ensure the same groundtruth for anchors in each RPN, we apply interpolation to rescale the fused features such that the output classification maps and regression maps have the same resolution for all RPNs. Fig.~\ref{fig:6} shows the feature transferring for RPN$_{l}$ ($l>1$).
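A minimal sketch of the fusion step, using nearest-neighbor upsampling as a stand-in for the learned deconvolution (the real block matches dimensions with a deconvolution layer and rescales the fused map by interpolation):

```python
import numpy as np

def ftb(high, low):
    """Feature transfer block sketch: upsample the high-level feature to the
    low-level spatial size, fuse by element-wise sum, then apply ReLU.

    high: (C, Hh, Wh) fused feature from the previous stage
    low:  (C, Hl, Wl) conv-l feature, with Hl >= Hh, Wl >= Wh
    """
    C, Hh, Wh = high.shape
    _, Hl, Wl = low.shape
    ys = np.arange(Hl) * Hh // Hl            # nearest-neighbor row indices
    xs = np.arange(Wl) * Wh // Wl            # nearest-neighbor column indices
    upsampled = high[:, ys][:, :, xs]        # (C, Hl, Wl)
    return np.maximum(upsampled + low, 0.0)  # element-wise sum + ReLU
```

In C-RPN this output serves as $\Phi_l$ of Eq.~(\ref{eq4}), feeding both the classification and the regression branches of RPN$_l$.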
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{FTB.pdf} \\
\caption{Overview of feature transfer block. Best viewed in color.}
\label{fig:6}
\vspace{-2mm}
\end{figure}
\subsection{Training and Tracking}
\noindent
{\bf Training.} The training of C-RPN is performed on the image pairs that are sampled within a random interval from the same sequence as in~\cite{li2018high}. The multi-task loss function in Eq. (\ref{crpn_loss}) enables us to train C-RPN in an end-to-end manner. Considering that the scale of target changes smoothly in two consecutive frames, we employ one scale with different ratios for each anchor. The ratios of anchors are set to $[0.33, 0.5, 1, 2, 3]$ as in~\cite{li2018high}.
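Anchor generation at one scale with the ratios above can be sketched as follows; the base area is a hypothetical choice for illustration, since the paper fixes only the ratios.

```python
import numpy as np

def anchors_at(cx, cy, base_area=64 * 64, ratios=(0.33, 0.5, 1, 2, 3)):
    """One scale, several aspect ratios: each anchor keeps ~base_area pixels,
    with ratio = w / h. Returns (len(ratios), 4) boxes in (cx, cy, w, h)."""
    boxes = []
    for r in ratios:
        h = np.sqrt(base_area / r)  # constant area: w * h = r * h^2 = base_area
        w = r * h
        boxes.append((cx, cy, w, h))
    return np.array(boxes)
```

Using a single scale reflects the assumption that the target scale changes smoothly between consecutive frames, so the regressors (rather than extra anchor scales) absorb the scale variation.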
For each RPN, we adopt the same strategy as in object detection~\cite{ren2015faster} to determine positive and negative training samples. We define positive samples as anchors whose intersection over union (IoU) with the groundtruth bounding box is larger than a threshold $\tau_{\rm pos}$, and negative samples as anchors whose IoU with the groundtruth bounding box is less than a threshold $\tau_{\rm neg}$. We generate at most 64 samples from one image pair.
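The IoU-based label assignment can be sketched as below (boxes in corner format $(x_1, y_1, x_2, y_2)$). Treating anchors that fall between the two thresholds as ignored is an assumption mirroring the practice of~\cite{ren2015faster}.

```python
import numpy as np

def iou(a, b):
    # IoU of two boxes in (x1, y1, x2, y2) corner format.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def assign_labels(anchors, gt, tau_pos=0.6, tau_neg=0.3):
    # 1 = positive, 0 = negative, -1 = ignored (between the two thresholds).
    labels = np.full(len(anchors), -1, dtype=int)
    for i, anc in enumerate(anchors):
        o = iou(anc, gt)
        if o > tau_pos:
            labels[i] = 1
        elif o < tau_neg:
            labels[i] = 0
    return labels
```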
\begin{algorithm}[!t]\small
\caption{Tracking with C-RPN}\label{crpn_alg}
{\bf Input:} frame sequences $\{\mathbf{X}_{t}\}_{t=1}^{T}$ and groundtruth bounding box $\mathbf{b}_1$ of $\mathbf{X}_1$, trained model C-RPN\;
{\bf Output:} Tracking results $\{\mathbf{b}_{t}\}_{t=2}^{T}$\;
Extract target template $\mathbf{z}$ in $\mathbf{X}_1$ using $\mathbf{b}_1$ \;
Extract features $\{\varphi_{l}(\mathbf{z})\}_{l=1}^{L}$ for $\mathbf{z}$ from C-RPN\;
Initialize anchors $A_1$\;
\For{$t=2$ {\rm to} $T$}
{
Extract the search region $\mathbf{x}$ in $\mathbf{X}_t$ using $\mathbf{b}_{t-1}$ \;
Extract features $\{\varphi_{l}(\mathbf{x})\}_{l=1}^{L}$ for $\mathbf{x}$ from C-RPN\;
\For {$l=1$ {\rm to} $L$}
{
\eIf {$l$ {\rm equals} 1}
{
$\Phi_{l}(\mathbf{z}) = \varphi_{l}(\mathbf{z})$, $\Phi_{l}(\mathbf{x}) = \varphi_{l}(\mathbf{x})$\;
}
{
$\Phi_{l}(\mathbf{z})$, $\Phi_{l}(\mathbf{x})$ $\leftarrow$ Eq. (\ref{eq4}) \;
}
$\{c_{i}^{l}\}$, $\{r_{i}^{l}\}$ $\leftarrow$ Eq. (\ref{eq5})\;
Remove any anchor $i$ from $A_l$ whose negative confidence $c_{i(\rm neg)}^{l} > \theta$ \;
$A_{l+1}$ $\leftarrow$ Refine the rest anchors in $A_{l}$ with $\{r_{i}^{l}\}$ using Eq. (\ref{eq8})\;
}
Target proposals $\leftarrow$ $A_{L+1}$ \;
Select the best proposal as the tracking result $\mathbf{b}_{t}$ using the strategies in~\cite{li2018high}\;
}
\end{algorithm}
\vspace{0.1 em}
\noindent {\bf Tracking.} We formulate tracking as multi-stage detection. For each video, we pre-compute the feature embeddings of the target template in the first frame. In a new frame, we extract a region of interest according to the result in the last frame, and then perform detection using C-RPN on this region. In each stage, an RPN outputs the classification scores and regression offsets for anchors. The anchors with negative scores larger than $\theta$ are discarded, and the rest are refined and passed to the RPN in the next stage. After the last stage $L$, the remaining anchors are regarded as target proposals, from which we determine the best one as the final tracking result using the strategies in~\cite{li2018high}. Alg.~\ref{crpn_alg} summarizes the tracking process of C-RPN.
\section{Experiments}
\noindent
{\bf Implementation detail.} C-RPN is implemented in Matlab using MatConvNet~\cite{vedaldi2015matconvnet} on a single Nvidia GTX 1080 with 8GB memory. The backbone Siamese network adopts the modified AlexNet~\cite{krizhevsky2012imagenet} by removing group convolutions. Instead of training from scratch, we borrow the parameters from the pretrained model on ImageNet~\cite{deng2009imagenet}. During training, the parameters of first two layers are frozen. The number $L$ of stages is set to 3. The thresholds $\theta$, $\tau_{\rm pos}$ and $\tau_{\rm neg}$ are empirically set to 0.95, 0.6 and 0.3. C-RPN is trained end-to-end over 50 epochs using SGD, and the learning rate is annealed geometrically at each epoch from $10^{-2}$ to $10^{-6}$. We train C-RPN using the training data from~\cite{fan2018lasot} for experiment under Protocol \uppercase\expandafter{\romannumeral2} on LaSOT~\cite{fan2018lasot}, and using VID~\cite{russakovsky2015imagenet} and YT-BB~\cite{real2017youtube} for other experiments.
Note that the comparison with Siamese-RPN~\cite{li2018high} is fair since the same training data is used for training.
\subsection{Experiments on OTB-2013 and OTB-2015}
\begin{figure}[t]
\centering
\includegraphics[width=0.495\linewidth]{otb/suc_otb_2013.pdf} \includegraphics[width=0.495\linewidth]{otb/suc_otb_2015.pdf}
\caption{Comparisons with state-of-the-art tracking approaches on OTB-2013~\cite{wu2013online} and OTB-2015~\cite{wu2015object}. C-RPN achieves the best results on both benchmarks. Best viewed in color.}
\label{fig:otb}
\vspace{-2mm}
\end{figure}
We conduct experiments on the popular OTB-2013~\cite{wu2013online} and OTB-2015~\cite{wu2015object} which consist of 51 and 100 fully annotated videos, respectively. C-RPN runs at around 36 fps.
Following~\cite{wu2013online}, we adopt the {\it precision} plot in {\it one-pass evaluation} (OPE) to assess different trackers. The comparison with 14 state-of-the-art trackers (SiamRPN~\cite{li2018high}, DaSiamRPN~\cite{zhu2018distractor}, TRACA~\cite{choi2018context}, ACT~\cite{chen2018real}, BACF~\cite{galoogahi2017learning}, ECO-HC~\cite{danelljan2017eco}, CREST~\cite{song2017crest}, SiamFC~\cite{bertinetto2016fully}, Staple~\cite{bertinetto2016staple}, PTAV~\cite{fan2017parallel}, SINT~\cite{tao2016siamese}, CFNet~\cite{valmadre2017end}, HDT~\cite{qi2016hedged} and HCFT~\cite{ma2015hierarchical}) is shown in Fig.~\ref{fig:otb}. C-RPN achieves the best performance on both benchmarks. Specifically, we obtain precision scores of 0.675 and 0.663 on OTB-2013 and OTB-2015, respectively. In comparison with the baseline one-stage SiamRPN with 0.658 and 0.637 precision scores, we obtain improvements of 1.9\% and 2.6\%, showing the advantage of multi-stage RPNs in accurate localization. DaSiamRPN uses extra negative training data from other domains to improve its ability to handle similar distractors, and obtains 0.655 and 0.658 precision scores. Without using extra training data, C-RPN outperforms DaSiamRPN by 2.0\% and 0.5\%. More results and comparisons on OTB-2013~\cite{wu2013online} and OTB-2015~\cite{wu2015object} are shown in the supplementary material.
\subsection{Experiments on VOT-2016 and VOT-2017}
\textbf{VOT-2016}~\cite{kristan2016visual} consists of 60 sequences, aiming at assessing the short-term performance of trackers. The overall performance of a tracking algorithm is evaluated using Expected Average Overlap (EAO) which takes both accuracy and robustness into account. The speed of a tracker is represented with a normalized speed (EFO).
We evaluate C-RPN on VOT-2016, and compare it with 11 trackers including the baseline SiamRPN~\cite{li2018high} and the other top ten approaches in VOT-2016. Fig.~\ref{fig:vot16} shows the EAO of different trackers. C-RPN achieves the best result, significantly outperforming the baseline SiamRPN and the other approaches. Tab.~\ref{tab:vot16} lists the detailed comparisons of different trackers on VOT-2016. From Tab.~\ref{tab:vot16}, we can see that C-RPN outperforms the other trackers in both accuracy and robustness, and runs efficiently.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{vot16/vot16.pdf}
\caption{Comparisons on VOT-2016~\cite{kristan2016visual}. Larger (right) value indicates better performance. Our C-RPN significantly outperforms the baseline and other approaches. Best viewed in color.}
\label{fig:vot16}
\end{figure}
\renewcommand\arraystretch{1.05}
\begin{table}[t]\small
\centering
\caption{Detailed comparisons on VOT-2016~\cite{kristan2016visual}. The best two results are highlighted in \textcolor{red}{\bf red} and \textcolor{blue}{\bf blue} fonts, respectively.}
\begin{tabular}{rcccc}
\hline
Tracker & EAO & Accuracy & Failure & EFO \\
\hline \hline
C-RPN & \textcolor[rgb]{ 1, 0, 0}{\textbf{0.363}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{0.594}} & 0.95 & 9.3 \\
\hline
SiamRPN~\cite{li2018high} & \textcolor[rgb]{ 0, 0, 1}{\textbf{0.344}} & 0.560 & 1.12 & \textcolor[rgb]{ 0, 0, 1}{\textbf{23.0}} \\
C-COT~\cite{danelljan2016beyond} & 0.331 & 0.539 & \textcolor[rgb]{ 0, 0, 1}{\textbf{0.85}} & 0.5 \\
TCNN~\cite{kristan2016visual} & 0.325 & 0.554 & 0.96 & 1.1 \\
SSAT~\cite{kristan2016visual} & 0.321 & \textcolor[rgb]{ 0, 0, 1}{\textbf{0.577}} & 1.04 & 0.5 \\
MLDF~\cite{kristan2016visual} & 0.311 & 0.490 & \textcolor[rgb]{ 1, 0, 0}{\textbf{0.83}} & 1.2 \\
Staple~\cite{bertinetto2016staple} & 0.295 & 0.544 & 1.35 & 11.1 \\
DDC~\cite{kristan2016visual} & 0.293 & 0.541 & 1.23 & 0.2 \\
EBT~\cite{zhu2016beyond} & 0.291 & 0.465 & 0.90 & 3.0 \\
SRBT~\cite{kristan2016visual} & 0.290 & 0.496 & 1.25 & 3.7 \\
STAPLEp~\cite{kristan2016visual} & 0.286 & 0.557 & 1.32 & \textcolor[rgb]{ 1, 0, 0}{\textbf{44.8}} \\
DNT~\cite{chi2017dual} & 0.278 & 0.515 & 1.18 & 1.1 \\
\hline
\end{tabular}%
\label{tab:vot16}%
\vspace{-2mm}
\end{table}%
\textbf{VOT-2017}~\cite{kristan2017visual} contains 60 sequences, obtained by replacing the 10 least challenging videos in VOT-2016~\cite{kristan2016visual} with 10 difficult sequences. Different from VOT-2016~\cite{kristan2016visual}, VOT-2017~\cite{kristan2017visual} introduces a new real-time experiment that takes into account both tracking performance and efficiency. We compare C-RPN with SiamRPN~\cite{li2018high} and the other top ten approaches in VOT-2017 using the EAO of the baseline and real-time experiments, as shown in Tab.~\ref{tab:vot17}. From Tab.~\ref{tab:vot17}, C-RPN achieves an EAO score of 0.289, which significantly outperforms the one-stage SiamRPN~\cite{li2018high} with an EAO score of 0.243. In addition, compared with LSART~\cite{sun2018learning} and CFWCR~\cite{kristan2017visual}, C-RPN shows competitive performance. In the real-time experiment, C-RPN obtains the best result with an EAO score of 0.273, outperforming all other trackers.
\renewcommand\arraystretch{1.05}
\begin{table}[t]\small
\centering
\caption{Comparisons on VOT-2017~\cite{kristan2017visual}. The best two results are highlighted in \textcolor{red}{\bf red} and \textcolor{blue}{\bf blue} fonts, respectively.}
\begin{tabular}{rcc}
\hline
Tracker & \tabincell{c}{Baseline EAO} & \tabincell{c}{Real-time EAO} \\
\hline\hline
C-RPN & 0.289 & \textcolor[rgb]{ 1, 0, 0}{\textbf{0.273}} \\
\hline
SiamRPN~\cite{li2018high} & 0.243 & \textcolor[rgb]{ 0, 0, 1}{\textbf{0.244}} \\
LSART~\cite{sun2018learning} & \textcolor[rgb]{ 1, 0, 0}{\textbf{0.323}} & 0.055 \\
CFWCR~\cite{kristan2017visual} & \textcolor[rgb]{ 0, 0, 1}{\textbf{0.303}} & 0.062 \\
CFCF~\cite{gundogdu2018good} & 0.286 & 0.059 \\
ECO~\cite{danelljan2017eco} & 0.280 & 0.078 \\
Gnet~\cite{kristan2017visual} & 0.274 & 0.060 \\
MCCT~\cite{kristan2017visual} & 0.270 & 0.061 \\
C-COT~\cite{danelljan2016beyond} & 0.267 & 0.058 \\
CSRDCF~\cite{lukezic2017discriminative} & 0.256 & 0.100 \\
SiamDCF~\cite{kristan2017visual} & 0.249 & 0.135 \\
MCPF~\cite{zhang2017multi} & 0.248 & 0.060 \\
\hline
\end{tabular}%
\label{tab:vot17}%
\vspace{-2mm}
\end{table}%
\subsection{Experiment on LaSOT}
LaSOT~\cite{fan2018lasot} is a recent large-scale dataset aiming at both training and evaluating trackers. We compare C-RPN to 35 approaches, including ECO~\cite{danelljan2017eco}, MDNet~\cite{nam2016learning}, SiamFC~\cite{bertinetto2016fully}, VITAL~\cite{song2018vital}, StructSiam~\cite{zhang2018structured}, TRACA~\cite{choi2018context}, BACF~\cite{galoogahi2017learning} and so forth. We refer readers to~\cite{fan2018lasot} for more details about the compared trackers. We do not compare C-RPN to Siamese-RPN~\cite{li2018high} because neither its implementation nor its results on LaSOT are available.
Following~\cite{fan2018lasot}, we report the results of {\it success} (SUC) for different trackers, as shown in Fig.~\ref{fig:lasot}. It shows that our C-RPN outperforms all other state-of-the-art trackers under both protocols. We achieve SUC scores of 0.459 and 0.455 under protocols \uppercase\expandafter{\romannumeral1} and \uppercase\expandafter{\romannumeral2}, outperforming the second best tracker MDNet, with SUC scores of 0.413 and 0.397, by 4.6\% and 5.8\%, respectively. In addition, C-RPN runs at around 23 fps on LaSOT, which is far more efficient than MDNet at around 1 fps. Compared with the Siamese network-based tracker SiamFC, with 0.358 and 0.336 SUC scores, C-RPN gains improvements of 11.1\% and 11.9\%. Due to limited space, we refer readers to the supplementary material for more details about results and comparisons on LaSOT.
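The SUC score is the area under the success curve, i.e. the fraction of frames whose bounding-box overlap with the ground truth exceeds a threshold, averaged over thresholds in $[0,1]$; a minimal sketch with $(x, y, w, h)$ boxes (function names and the 21-point threshold grid are ours, chosen for illustration):

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def success_score(pred_boxes, gt_boxes, thresholds=np.linspace(0, 1, 21)):
    """Area under the success curve: mean over IoU thresholds of the
    fraction of frames whose overlap exceeds the threshold."""
    overlaps = np.array([iou(p, g) for p, g in zip(pred_boxes, gt_boxes)])
    curve = [(overlaps > t).mean() for t in thresholds]
    return float(np.mean(curve))
```

Even a perfect tracker scores slightly below 1 with this strict-inequality convention, since no overlap exceeds the threshold 1.0.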
\begin{figure*}[!t]
\centering
\includegraphics[width=0.49\linewidth]{lasot/suc_all_bk.pdf}\includegraphics[width=0.49\linewidth]{lasot/suc.pdf}
\caption{Comparisons with state-of-the-art tracking methods on LaSOT~\cite{fan2018lasot}. C-RPN outperforms existing approaches on success by large margins under both protocols. Best viewed in color.}
\label{fig:lasot}
\vspace{-2mm}
\end{figure*}
\subsection{Experiment on TrackingNet}
TrackingNet~\cite{muller2018trackingnet} is proposed to assess the performance of trackers in the wild. We evaluate C-RPN on its testing set of 511 videos. Following~\cite{muller2018trackingnet}, we use three metrics, {\it precision} (PRE), {\it normalized precision} (NPRE) and {\it success} (SUC), for evaluation. Tab.~\ref{tab:tn} shows the comparison with the trackers achieving top PRE scores\footnote{The result of C-RPN on TrackingNet~\cite{muller2018trackingnet} is evaluated by the server provided by the organizer at \url{http://eval.tracking-net.org/web/challenges/challenge-page/39/leaderboard/42}. The results of compared trackers are reported from~\cite{muller2018trackingnet}. Full comparison is shown in the supplementary material.}, showing that C-RPN achieves the best results on all three metrics. Specifically, C-RPN obtains a PRE score of 0.619, an NPRE score of 0.746 and a SUC score of 0.669, outperforming the second best tracker MDNet, with a PRE score of 0.565, an NPRE score of 0.705 and a SUC score of 0.606, by 5.4\%, 4.1\% and 6.3\%, respectively. Besides, C-RPN runs efficiently at a speed of around 32 fps.
\renewcommand\arraystretch{1.05}
\begin{table}[t]\small
\centering
\caption{Comparisons on TrackingNet~\cite{muller2018trackingnet} with the best two results highlighted in \textcolor{red}{\bf red} and \textcolor{blue}{\bf blue} fonts, respectively.}
\begin{tabular}{rccc}
\hline
& PRE & NPRE & SUC \\
\hline \hline
C-RPN & \textcolor[rgb]{ 1, 0, 0}{\textbf{0.619}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{0.746}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{0.669}} \\
\hline
MDNet~\cite{nam2016learning} & \textcolor[rgb]{ 0, 0, 1}{\textbf{0.565}} & \textcolor[rgb]{ 0, 0, 1}{\textbf{0.705}} & \textcolor[rgb]{ 0, 0, 1}{\textbf{0.606}} \\
CFNet~\cite{valmadre2017end} & 0.533 & 0.654 & 0.578 \\
SiamFC~\cite{bertinetto2016fully} & 0.533 & 0.663 & 0.571 \\
ECO~\cite{danelljan2017eco} & 0.492 & 0.618 & 0.554 \\
CSRDCF~\cite{lukezic2017discriminative} & 0.480 & 0.622 & 0.534 \\
SAMF~\cite{li2014scale} & 0.477 & 0.598 & 0.504 \\
ECO-HC~\cite{danelljan2017eco} & 0.476 & 0.608 & 0.541 \\
Staple~\cite{bertinetto2016staple} & 0.470 & 0.603 & 0.528 \\
Staple\_CA~\cite{mueller2017context} & 0.468 & 0.605 & 0.529 \\
BACF~\cite{galoogahi2017learning} & 0.461 & 0.580 & 0.523 \\
\hline
\end{tabular}%
\label{tab:tn}%
\vspace{-1mm}
\end{table}%
\subsection{Ablation Experiment}
To validate the impact of different components, we conduct ablation experiments on LaSOT (Protocol \uppercase\expandafter{\romannumeral2})~\cite{fan2018lasot} and VOT-2017~\cite{kristan2017visual}.
\renewcommand\arraystretch{1.05}
\begin{table}[t]\small
\centering
\caption{Effect on the number of stages in C-RPN.}
\begin{tabular}{r@{}c@{}c@{}c@{}}
\hline
\# Stages & \multicolumn{1}{l}{One stage} & \multicolumn{1}{l}{Two stages} & \multicolumn{1}{l}{Three stages} \\
\hline \hline
SUC on LaSOT & 0.417 & 0.446 & 0.455 \\
Speed on LaSOT & 48 fps & 37 fps & 23 fps \\
\hline
EAO on VOT-2017 & 0.248 & 0.278 & 0.289 \\
\hline
\end{tabular}%
\label{tab:stage}%
\vspace{-1mm}
\end{table}%
\renewcommand\arraystretch{1.05}
\begin{table}[t]\small
\centering
\caption{Effect on negative anchor filtering (NAF) in C-RPN.}
\begin{tabular}{rcc}
\hline
& \multicolumn{1}{l}{C-RPN w/o NAF} & \multicolumn{1}{l}{C-RPN w/ NAF} \\
\hline\hline
SUC on LaSOT & 0.439 & 0.455 \\
\hline
EAO on VOT-2017 & 0.282 & 0.289\\
\hline
\end{tabular}%
\label{tab:easyneg}%
\vspace{-2mm}
\end{table}%
\renewcommand\arraystretch{1.05}
\begin{table}[t]\small
\centering
\caption{Effect on feature transfer block in C-RPN.}
\begin{tabular}{rcc}
\hline
& \multicolumn{1}{l}{C-RPN w/o FTB} & \multicolumn{1}{l}{C-RPN w/ FTB} \\
\hline\hline
SUC on LaSOT & 0.442 & 0.455 \\
\hline
EAO on VOT-2017 & 0.278 & 0.289\\
\hline
\end{tabular}%
\label{tab:ftb}%
\vspace{-2mm}
\end{table}%
\vspace{0.1 em}
\noindent
{\bf Number of stages?} As shown in Tab.~\ref{tab:stage}, adding the second stage significantly improves the one-stage baseline. The SUC on LaSOT is improved by 2.9\% from 0.417 to 0.446, and the EAO on VOT-2017 is increased by 3.5\% from 0.248 to 0.283. The third stage produces further 0.9\% and 0.6\% improvements on LaSOT and VOT-2017, respectively. We observe that the improvement from the second stage is larger than that from the third stage, suggesting that most of the difficult background is handled in the second stage. Adding more stages may lead to further improvements, but also increases the computational cost (the speed drops from 48 to 23 fps).
\vspace{0.1 em}
\noindent
{\bf Negative anchor filtering?} Filtering out the easy negatives aims to provide more balanced training samples for the RPN in the next stage. To show its effectiveness, we set the threshold $\theta$ to 1 such that all refined anchors are sent to the next stage. Tab.~\ref{tab:easyneg} shows that removing negative anchors in C-RPN improves the SUC on LaSOT by 1.6\% from 0.439 to 0.455, and the EAO on VOT-2017 by 0.7\% from 0.282 to 0.289, which evidences that balanced training samples are crucial for training a more discriminative RPN.
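The filtering step itself reduces to a threshold test; a minimal sketch, assuming each refined anchor carries a negative-class (background) probability from the current stage (names, data layout, and the $\theta=0.95$ value are ours, not from the C-RPN implementation):

```python
import numpy as np

def filter_easy_negatives(anchors, neg_probs, theta=0.95):
    """Pass to the next RPN stage only anchors whose negative-class
    probability is below theta; easy negatives (confidently background)
    are discarded. Setting theta = 1 keeps all anchors, i.e. disables NAF."""
    neg_probs = np.asarray(neg_probs)
    keep = neg_probs < theta
    return [a for a, k in zip(anchors, keep) if k]

anchors = ['a0', 'a1', 'a2', 'a3']
neg_probs = [0.99, 0.30, 0.97, 0.10]
print(filter_easy_negatives(anchors, neg_probs, theta=0.95))  # ['a1', 'a3']
```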
\vspace{0.1 em}
\noindent
{\bf Feature transfer block?} As demonstrated in Tab.~\ref{tab:ftb}, FTB improves the SUC on LaSOT by 1.3\% from 0.442 to 0.455 without losing much efficiency, and the EAO on VOT-2017 by 1.1\% from 0.278 to 0.289, validating the effectiveness of multi-level feature fusion in improving performance.
These studies show that each ingredient brings an individual improvement, and that all of them work together to produce the excellent overall tracking performance.
\vspace{-2mm}
\section{Conclusion}
In this paper, we propose C-RPN, a novel multi-stage framework for tracking. Compared with prior art, C-RPN demonstrates more robust performance in handling complex background, such as similar distractors, by performing hard negative sampling within a cascade architecture. In addition, the proposed FTB enables effective feature leverage across layers for a more discriminative representation. Moreover, C-RPN progressively refines the target bounding box through multiple steps of regression, leading to more accurate localization. In extensive experiments on six popular benchmarks, C-RPN consistently achieves state-of-the-art results while running in real-time.
\newpage
{\small
\bibliographystyle{ieee}
\section{Introduction}
The computational cost of the exact solution of the realistic extended many-electron problem is believed to be exponential in the number of degrees of freedom, necessitating the development of accurate approximate methods able to capture interacting electron physics.\cite{Dirac29}
While mature tools for obtaining ground-state energetics for both molecular and solid state problems exist,\cite{Kohn65,MartinInteracting16} solid state experiments are often performed at finite temperature and yield as the measured result not energy differences but single- and two-particle response functions, requiring a description of finite temperature excitations.
Many-body perturbation theory\cite{MartinInteracting16} accurately describes these phenomena where interactions are weak. However, many systems of interest are believed to be outside the regime of validity of perturbative approximations. In these systems, a non-perturbative solution is desired for a subset of the correlated degrees of freedom embedded into a background of more weakly correlated, perturbatively treated states.
Ideally such an embedding construct should be numerically tractable and defined in terms of one or more small parameters that allow its tuning from a crude but computationally cheap, approximate solution to the exact but exponentially expensive one.
Several such theories have been developed. They include the dynamical mean field theory (DMFT),\cite{Georges96,Kotliar06} its combination with electronic structure methods, such as LDA+DMFT~\cite{Anisimov97,Lichtenstein98,Sun02} and GW+DMFT~\cite{Biermann03,Biermann05}, the self-energy functional theory,\cite{Potthoff03} and most recently the self-energy embedding theory (SEET).\cite{Zgid15,Tran15b,Tran16} All of them require a compromise between accuracy and numerical tractability or time to solution.
In this paper, we show that SEET can be understood as a conserving functional approximation to an exact Luttinger-Ward functional.\cite{Luttinger60} This functional framework allows us to compare SEET to other functional approximations, to show in particular that DMFT, HF+DMFT, and GW+DMFT can be understood as special cases of SEET, and to illustrate how the additional freedom afforded by SEET can be employed to systematically improve results.
In particular, we focus on various aspects of electron `screening' and downfolding and how they are treated in various approximations.
This paper proceeds as follows. In Sec.~\ref{sec:System}, we introduce the system under study, the definition of SEET, DMFT, and several combinations of DMFT with many-body perturbation theory. In Sec.~\ref{sec:relationship}, we compare the different approaches based on their functionals. In Sec.~\ref{sec:Screening}, we focus in detail on various aspects of electron screening. We draw conclusions in Sec.~\ref{sec:Conclusions}.
\section{System and formalism}\label{sec:System}
We consider a system described by a Hamiltonian with full two-body interaction $v_{ijkl}$ and one-body terms $t_{ij}$ in a finite orbital basis:
\begin{align}
H=\sum_{ij}^Nt_{ij}a^{\dagger}_{i}a_{j}+\sum_{ijkl}^Nv_{ijkl}a^{\dagger}_{i}a^{\dagger}_{j}a_{l}a_{k}, \label{eqn:realistic_ham1}
\end{align}
where the indices $i$, $j$, $k$, and $l$ enumerate all $N$ basis orbitals present in the system. In case of a periodic system, Eq.~\ref{eqn:realistic_ham1} may in particular contain one-body terms connecting any orbital in any unit cell to any other orbital in any other unit cell, and general two-body integrals $v$ mixing interactions between any of the orbitals in any of the unit cells in the system.
Physical properties including thermodynamic quantities (energies and entropies), frequency-dependent single-particle quantities (Green's functions and self-energies) and two-particle quantities (susceptibilities) can be described in a functional approach.\cite{Luttinger60,Baym62,Albladh99,Potthoff06} In this approach, a $\Phi$-functional $\Phi[G]$ of the Green's function $G$, which contains all linked closed skeleton diagrams,\cite{Luttinger60} is used to express the grand potential as
\begin{align}
\Omega = \Phi - \text{Tr} \log G^{-1} - \text{Tr} \Sigma G,
\end{align}
and it satisfies
\begin{align}
\frac{\delta \Phi}{\delta G}=\Sigma[G], \label{eq:dphidg}
\end{align}
where the self-energy $\Sigma$ is defined with respect to a non-interacting Green's function $G_0$ via the Dyson equation
\begin{align}
G=G_0 + G_0 \Sigma G.\label{eq:dyson}
\end{align}
The functional formalism is useful because approximations to $\Phi$ that can be formulated as a subset of the terms of the exact $\Phi$ functional can be shown to respect the conservation laws of electron number, energy, momentum, and angular momentum by construction.\cite{Baym61,Baym62} In addition, $\Phi$-derivability ensures that quantities obtained by thermodynamic or coupling constant integration from non-interacting limits are consistent.\cite{Baym62} Functional theory therefore provides a convenient framework for constructing perturbative \cite{Baym62,Hedin65,Bickers89,Bickers91} and non-perturbative \cite{Georges96,Potthoff03,Zgid15,Tran15b,Tran16} diagrammatic approximations.
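Numerically, the Dyson equation of Eq.~\ref{eq:dyson} is solved frequency by frequency as a matrix inversion, $G(\mathrm{i}\omega_n) = (G_0^{-1}(\mathrm{i}\omega_n) - \Sigma(\mathrm{i}\omega_n))^{-1}$. A small numpy sketch on a fermionic Matsubara grid (the two-orbital Hamiltonian and static self-energy are toy values chosen purely for illustration):

```python
import numpy as np

def dyson(g0_inv, sigma):
    """Solve the Dyson equation G = (G0^{-1} - Sigma)^{-1} at one frequency."""
    return np.linalg.inv(g0_inv - sigma)

beta, n_w, mu = 10.0, 64, 0.0                      # inverse temperature, grid size, chemical potential
t = np.array([[0.0, -1.0], [-1.0, 0.0]])           # toy one-body Hamiltonian (a dimer)
sigma = 0.1 * np.eye(2)                            # toy static self-energy
iw = 1j * (2 * np.arange(n_w) + 1) * np.pi / beta  # fermionic Matsubara frequencies

G = np.array([dyson((w + mu) * np.eye(2) - t, sigma) for w in iw])
print(G.shape)  # (64, 2, 2)
```

In a real calculation $\Sigma$ is itself frequency dependent and the loop runs over a grid large enough to resolve the high-frequency tails.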
On the other hand, approximations based on a $\Phi$ functional do not guarantee self-consistency on the two-particle level, so that the vertex functions which appear in the calculation of the one-particle self-energy may not be the same as those generated by functional differentiation in two-particle correlation functions, and crossing symmetries may be violated.\cite{Bickers04,DeDominicis64,DeDominicis64B} The construction of methods for model systems that respect these symmetries by construction is an active topic of research.\cite{Rohringer16,vanLoon16}
The approximations we discuss in the following sections are all expressed in this functional form, making it straightforward to discuss and compare their respective assumptions, limits, and strengths.
\subsection{The Self-energy Embedding Theory}
\subsubsection{Self-energy Embedding Equations}
The self-energy embedding theory (SEET) \cite{Zgid15,Tran15b,Tran16} starts from the assumption that all orbitals present in the system can be separated into $M$ distinct orbital subsets $A_i$, each containing $N^A_i$ orbitals, and a remainder $R$ with $N^R$ orbitals, such that $N^A_i \ll N$, for each $i$, and $N=\sum_{i=1}^M N_i^A+N^R$.
We assume that the orbitals within each subset $A_i$ are more strongly correlated among each other than with other orbitals present in the system, so that their intra-subset correlations need to be obtained in a non-perturbative way.
Conversely, inter-set correlations between orbitals belonging to two different sets $A_i$ and $A_j$, $i\neq j$, and correlations belonging to the remainder $R$
are assumed to be weaker, such that they can be simulated perturbatively.
The choice of orbital subsets and subset size $N^A_i$ is general and will be commented on in Sec.~\ref{sec:orbitalselec}.
SEET first approximates the solution of the entire system using an affordable but potentially inaccurate $\Phi$-derivable method (weak coupling methods are a natural choice), and then corrects this approximation in the strongly correlated subspaces by a non-perturbative result. This is achieved by approximating the exact $\Phi$-functional as
\begin{align}\label{eq:SeetPhi}
\Phi^\text{SEET} = \Phi^\text{tot}_\text{weak} + \sum_{i=1}^{M} \Big([\Phi_\text{strong}^A]_i-[\Phi_\text{weak}^A]_i\Big).
\end{align}
Here, $\Phi^\text{tot}_\text{weak}$ denotes a solution of the entire system using a conserving low-order approximation, for instance self-consistent second order perturbation theory (GF2)\cite{Dahlen05,Zgid14,Rusakov16,Phillips15,Kananenka15,Kananenka16} or the GW method.\cite{Hedin65}
$\Phi^A$ denotes all those terms in $\Phi$ where all four indices $i,j,k,l$ of $v_{ijkl}$ are contained inside orbital subspace $A$.
$\Phi^A_\text{weak}$ is the approximation to $\Phi^A$ within the weak coupling method used for solving the entire system, and $\Phi^A_\text{strong}$ the approximation or exact solution of $\Phi^A$ obtained using the higher order method capable of describing `strong correlation'.
Since the self-energy is a functional derivative of the $\Phi^\text{SEET}$-functional,
the total self-energy $\Sigma$ contains diagrams from both the `strong' and `weak' coupling methods and can be written in a matrix form reflecting the separation of the system into different correlated blocks
\begin{eqnarray}\label{eqn:sigma_seet}
\Sigma^\text{SEET}=
\begin{bmatrix}
[\Sigma^\text{A}]_{1} & \Sigma^\text{int} & \dots &\dots &\dots\\
\Sigma^\text{int} & [\Sigma^\text{A}]_{2} & \Sigma^\text{int} & \dots &\dots \\
\dots & \dots & \dots & \dots &\dots\\
\dots & \dots &\Sigma^\text{int} & [\Sigma^\text{A}]_{M} & \Sigma^\text{int} \\
\dots & \dots & \dots & \Sigma^\text{int} & \Sigma^{R}
\end{bmatrix}
\end{eqnarray}
These blocks are obtained upon differentiation of the $\Phi^{SEET}$ functional according to Eq.~\ref{eq:dphidg} and have the following form
\begin{align}
[\Sigma^A]_{i}&=\Sigma^\text{tot}_\text{weak}+([\Sigma^{A}_\text{strong}]_{i}-[\Sigma^{A}_\text{weak}]_i),\label{eq:seet}\\
\Sigma^R&=\Sigma^{R}_\text{weak},\\
\Sigma^{int}&=\Sigma^{int}_\text{weak}.
\end{align}
Eq.~\ref{eqn:sigma_seet} describes a subspace self-energy consisting of a contribution from the strongly correlated subspace embedded into a weakly correlated self-energy generated by all orbitals outside the subspace. This embedding of the self-energy leads to the name `self-energy embedding theory'.
SEET satisfies the following limits:
\begin{itemize}
\item If the interaction $v_{ijkl}$ is zero or the temperature is infinity, $\beta=0$, the self-energy is zero and therefore the method becomes exact.
\item If $M=1$ and the only subspace $A$ includes all orbitals present in the system, $N^A=N$, so that no orbitals are left in the perturbatively treated subspace, $N^R=0$, then the entire system is solved using the strong correlation method and $\Phi^{SEET}=\Phi^{A}_\text{strong}$. Consequently, if the strong correlation method provides the exact solution, the exact solution of Eq.~\ref{eqn:realistic_ham1} is recovered.
\item In the limit of non-interacting subsystems, when the interactions between strongly correlated subspaces are zero, together with a condition $N^R=0$ and $\sum_i N^{A}_i=N$, SEET recovers the solution of the system with the strong correlation method since $\Phi^\text{SEET}=\sum_{i}^{M}[\Phi^{A}_\text{strong}]_i$.
\item If the correlated subspaces are not treated exactly but using the same `weak correlation' method as the rest of the system, the weak correlation solution for the full system is recovered since $\Phi^\text{tot}_\text{weak} = \Phi^\text{tot}_\text{weak} + \sum_{i=1}^{M} \Big([\Phi_\text{weak}^A]_i-[\Phi_\text{weak}^A]_i\Big)$.
\end{itemize}
While consideration of the exact limits is essential, the important practical question is whether (and where) one can expect SEET to be accurate away from these limits. As is evident from Eq.~\ref{eq:SeetPhi}, SEET becomes accurate where the diagrams considered by the lower-level method require no higher order corrections. This is the case in the high temperature, high energy, and high doping regimes, where the self-energy is perturbative. Additionally, SEET is accurate if all non-perturbative correlations are restricted to the correlated subspaces; its accuracy will therefore strongly depend on the choice of these subspaces.
Consequently, choosing the correlated subspaces is an important step in any SEET calculation.
\subsubsection{The choice of SEET subspaces}\label{sec:orbitalselec}
\begin{figure} [htb]
\includegraphics[width=\columnwidth]{SEET_spatial_energy_scheme.pdf}
\caption{Illustration of two choices of SEET subspaces. Top panel: Selection of orbital subspaces based on the energy/occupation scheme: partially occupied orbitals near the Fermi level ($\mu,\nu$) are included in the correlated subspace, any other contribution is excluded. Bottom panel: Selection of orbitals based on a localization criterion: sets of neighboring orbitals are chosen as the correlated subspace.}
\label{fig:seet_separation}
\end{figure}
In many techniques, the `strongly correlated' and `weakly correlated' orbital subsets are chosen a priori. An example is DMFT, where $\Phi$ is truncated to local degrees of freedom,\cite{Kotliar06} or LDA+DMFT methods\cite{Anisimov97,Georges04,Kotliar06} where certain `local' orbitals (usually orbitals with $d$ or $f$-like character) are considered to be `correlated', while `wider' $p$ and $s$ orbitals are considered to be non-interacting.
While the same ad hoc orbital choice can be used for self-energy embedding theory, SEET also offers a different approach to the selection of correlated orbitals and in particular makes an adaptive choice of correlated orbitals `a posteriori' possible, without the need to localize or `downfold' orbitals.
A simple criterion for identifying the degree of orbital correlation is given by the frequency-dependence of the self-energy: the larger the frequency dependent part, the more `non-Hartree-Fock' like an orbital is, and therefore the more it needs to be treated at the `strongly correlated' level.
Since Hartree-Fock only yields orbital occupancies of 0 and 2 (at zero $T$), any partial occupancy of an orbital obtained from diagonalizing the one-body density matrix of a perturbative approach (used in the first step of SEET) indicates some degree of correlation.
The larger the deviation from 0 or 2, the more ``strongly correlated'' an orbital is and the more likely it is to require a non-perturbative treatment.
Consequently, the SEET calculations in Ref.~\onlinecite{Zgid15,Tran15b,Tran16} added orbitals to the strongly correlated subspace $A$ using a criterion based on diagonalization of the one-body density matrix: chosen were those $N_A$ orbitals with the largest deviation of the occupancy from $0$ and $2$. This requires a basis transform of the hybridization function, non-interacting Hamiltonian, and two-body integrals into the basis that diagonalizes the one-body density matrix. While basis transforms for the two-body integrals are generally expensive, the transformed integrals are only necessary inside the correlated subspace, making the transform affordable in practice, such that the orbital transformation step is not a computational bottleneck.
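This occupancy-based selection can be sketched as follows (a toy example with a diagonal density matrix; the function name is ours):

```python
import numpy as np

def select_correlated_orbitals(dm, n_select):
    """Diagonalize the one-body density matrix and return the natural
    occupations, the natural-orbital transformation, and the indices of the
    n_select natural orbitals whose occupancies deviate most from 0 and 2."""
    occ, C = np.linalg.eigh(dm)                            # natural occupations / orbitals
    deviation = np.minimum(np.abs(occ), np.abs(2.0 - occ)) # distance from 0 and 2
    chosen = np.sort(np.argsort(deviation)[::-1][:n_select])
    return occ, C, chosen

# toy density matrix: two nearly filled, one half-filled, one nearly empty orbital
dm = np.diag([1.98, 1.0, 1.95, 0.02])
occ, C, chosen = select_correlated_orbitals(dm, n_select=1)
print(occ[chosen])  # [1.] -- the half-filled orbital is selected
```

The returned transformation $C$ is what would be used to rotate the hybridization function, one-body Hamiltonian, and (inside the subspace) the two-body integrals into the natural-orbital basis.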
Two possible subset selection schemes are illustrated graphically in Fig.~\ref{fig:seet_separation}. The upper panel shows a separation of orbitals based on an energy/occupation scheme, where mostly empty and mostly filled orbitals form the weakly correlated subspace, treated by a weak correlation method, while partially filled orbitals form the strongly correlated subspace, treated by a non-perturbative method. The lower panel shows an alternative separation based on distance, where orbitals localized around a central position are treated as strongly correlated, whereas orbitals farther away are treated as uncorrelated or weakly correlated.
\subsubsection{Self-consistent solution of the SEET equations}
The $\Phi^\text{SEET}$ functional of Eq.~\ref{eq:SeetPhi} defines the SEET approximation.\footnote{Potentially multiple self-consistent solutions exist, in analogy to Ref.~\onlinecite{Kozik15,Rusakov16,Welden16}.} It requires the specification of the $M$ correlated orbital subspaces $A_i$ and the subspace $R$, in addition to the `strong coupling' and the perturbative weak coupling diagrams. We now describe an algorithm that generates a self-consistent solution of the SEET equations.
First, the weak coupling method is used to self-consistently obtain the self-energy $\Sigma^\text{tot}_\text{weak}$ and functional $\Phi^\text{tot}_\text{weak}$ of the entire system from a given initial Green's function, {\it e.g.} the Hartree-Fock (HF) or density functional theory (DFT) approximation. The self-consistency of the weakly correlated method eliminates all memory of the initial starting point as it converges to a fixed point.
Upon convergence of the weakly correlated method, we choose the correlated subspaces according to Sec.~\ref{sec:orbitalselec}. We then compute $[\Sigma_\text{weak}^{A}]_i$ and $[\Phi_\text{weak}^{A}]_i$ in every orbital subspace $i$, {\it i.e.} the weak correlation approximation obtained with vertex indices exclusively contained in the correlated orbital subsets $A_i$.
In a next step, $[\Sigma_\text{strong}^A]_i$ needs to be obtained in each subspace $i$. To simplify notation, we select one particular subspace $A_i=A$ and absorb all other subspaces $A_j, j\neq i$, and the remaining weakly correlated orbitals in space $R$.
Using the non-interacting Green's function~\footnote{Note that there is a freedom of choice for the non-interacting Green's function. While we are using $G_0=(\omega-t)^{-1}$ here, where $t$ is the kinetic plus nuclear-electron attraction part of the Hamiltonian, in general other definitions of the one-body Hamiltonian are possible. One of the most commonly used definitions for realistic systems is $t=F$, where $F$ is a Fock matrix obtained from HF, GF2, or GW.}
in a block form
\begin{align}
G_0=\begin{pmatrix}\omega - t_{A}& -t_\text{int}\\
-t^{\dagger}_\text{int}& \omega - t_{R}\end{pmatrix}^{-1} \label{eq:g0eq}
\end{align}
and the Dyson equation $G = G_0+ G_0\Sigma G,$ we express the interacting Green's function as
\begin{align}\label{Gweak}
G^\text{tot}=\begin{pmatrix}(G_0^{-1})^{A}-\Sigma^{A}& (G_0^{-1})^\text{int}-\Sigma^\text{int}\\
\left[(G_0^{-1})^\text{int}-\Sigma^\text{int}\right]^{\dagger} &(G_0^{-1})^{R}-\Sigma^{R}\end{pmatrix}^{-1},
\end{align}
where $(G_0^{-1})^{A}$ denotes the inverse of the non-interacting Green's function restricted to the orbital subset $A$. Evaluation of $G^\text{tot}$
in the subset $A$ yields
\begin{align}
(G^\text{tot})^{A} &= \Big( (G_0^{-1})^{A} - \Sigma^{A} - \Delta \Big)^{-1}, \label{eq:subsetprop}
\end{align}
where $\Delta$ is defined as
\begin{align}
\Delta &=
\Big[\left[(G_0^{-1})^\text{int}-\Sigma^\text{int}\right]^{\dagger} \times \label{eq:hybfun}\\ \nonumber & \left[(G_0^{-1})^{R}-\Sigma^{R}\right]^{-1}\left[(G_0^{-1})^\text{int}-\Sigma^\text{int}\right]\Big].
\end{align}
Eq.~\ref{eq:SeetPhi}, Eq.~\ref{eq:subsetprop} and Eq.~\ref{eq:hybfun} show that the `strongly correlated' $A$-subspace problem can be entirely formulated in the strongly correlated subspace as a problem in which the original interactions $v_{ijkl}$ have been restricted to the subspace $A$, but for which the bare Green's functions have been modified from $G_0$ to new propagators $\mathcal{G}_0$ which contain a contribution from a frequency-dependent `hybridization function' $\Delta$. These propagators are defined as
\begin{align}\label{eqn:g0}
\mathcal{G}_0^{-1}= (G_0^{-1})^{A} - \Delta.
\end{align}
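At each frequency, $\Delta$ of Eq.~\ref{eq:hybfun} is a Schur complement of the $R$ block of $X = G_0^{-1}-\Sigma$; a minimal numpy sketch at a single frequency, written in Schur-complement form (the function name and index convention are ours):

```python
import numpy as np

def hybridization(g0_inv, sigma, idx_A):
    """Hybridization Delta for subspace A at one frequency:
    the Schur complement coupling A to the remainder R through X = G0^{-1} - Sigma."""
    X = g0_inv - sigma
    idx_R = np.setdiff1d(np.arange(X.shape[0]), idx_A)
    X_AR = X[np.ix_(idx_A, idx_R)]        # coupling block between A and R
    X_RR = X[np.ix_(idx_R, idx_R)]
    return X_AR @ np.linalg.inv(X_RR) @ X_AR.conj().T

# sanity check of the downfolding identity G_AA = (X_AA - Delta)^{-1}
rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))
X0 = M + M.T + 8.0 * np.eye(4)            # well-conditioned symmetric matrix
idx_A = np.array([0, 1])
delta = hybridization(X0, np.zeros((4, 4)), idx_A)
G_AA = np.linalg.inv(X0)[np.ix_(idx_A, idx_A)]
print(np.allclose(G_AA, np.linalg.inv(X0[np.ix_(idx_A, idx_A)] - delta)))  # True
```

The sanity check verifies exactly Eq.~\ref{eq:subsetprop}: projecting the inverse of the full matrix onto $A$ agrees with inverting the downfolded $A$-block.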
Problems of this type are known as quantum impurity problems. A quantum impurity solver will obtain an expression for a correlated $(G^\text{imp})^A$ given $\Delta$ (Eq.~\ref{eq:hybfun}) and $G_0$ (Eq.~\ref{eq:g0eq}) as well as a subset of interactions $v_{ijkl} \in A_i$ in either spatial or energy basis. Using the impurity problem Dyson equation, the self-energy for a strongly correlated orbital subset is obtained as
\begin{equation}\label{eqn:dyson_imp}
[\Sigma^A_\text{strong}]_i=\mathcal{G}_0^{-1}-((G^\text{imp})^A)^{-1}.
\end{equation}
Once this strongly correlated $\Sigma_\text{strong}^A$ is known, the total self-energy, $\Sigma^A$, in subspace $i$ is evaluated as
\begin{align}\label{eqn:sigma_tot}
[\Sigma^A]_{i}&=\Sigma^\text{tot}_\text{weak}+([\Sigma^{A}_\text{strong}]_{i}-[\Sigma^{A}_\text{weak}]_i).
\end{align}
We note in particular that there are contributions to the $A$-subspace self-energy from vertices and propagators with some indices outside of subspace $A$. These contributions are contained within $(\Sigma^\text{tot}_\text{weak}-[\Sigma^{A}_\text{weak}]_i)$ and are only treated at the perturbative level. We stress that these contributions provide an effective adjustment, caused by non-local interactions, to $[\Sigma^{A}_\text{strong}]_{i}$, which was evaluated using only the subset of local interactions $v_{ijkl} \in A_i$.
While quantum impurity models were originally formulated in the context of dilute impurities in a metal,\cite{Anderson61} they form the basis of many non-perturbative embedding schemes including DMFT.\cite{Georges96,Kotliar06} Impurity problems are numerically tractable, with accurate or numerically exact methods ranging from continuous-time quantum Monte Carlo\cite{Rubtsov05,Werner06,Werner06b,Gull08,Gull11} to exact diagonalization \cite{Caffarel94,ED_Liebsch}, configuration-interaction \cite{Zgid12}, and numerical renormalization group theory \cite{Bulla08} methods. The requirements for SEET impurity problems, {\it i.e.} general (`non-diagonal') hybridization functions $\Delta$, multiple impurity and bath orbitals, and general interactions $v_{ijkl}$ currently make methods based on the configuration interaction hierarchy\cite{Zgid11, Zgid12} most suitable for this task, despite the necessity to approximate the continuous hybridization function $\Delta$ by a set of discrete bath levels and bath couplings.
If multiple correlated spaces are present, separate impurity problems need to be solved in each subspace $A_i$, and correlated self-energies $[\Sigma^{A}]_i$ obtained. These self-energies are then used to update each $[\Sigma^{A}]_i$ block of the self-energy $\Sigma^\text{SEET}$ obtained with the weak coupling method according to Eq.~\ref{eqn:sigma_seet}, and the Green's function for the entire system is evaluated using the Dyson equation. Iteration of this procedure, alternating weak coupling steps that update $\Sigma^\text{int}$ and $\Sigma^{R}$ with impurity solver steps that obtain $[\Sigma^{A}]_i$, produces a converged $\Phi^\text{SEET}$ and $\Sigma^\text{SEET}$ of the form of Eq.~\ref{eq:SeetPhi}. Appendix~\ref{app:SEETAlgo} and Refs.~\onlinecite{Zgid15,Tran15b,Tran16} give detailed step-by-step instructions for the construction of the iterative procedure.
\section{Relationship to other functional based theories}\label{sec:relationship}
\subsection{DMFT}
DMFT~\cite{Metzner89,Georges92,Georges96} is a $\Phi$-derivable theory that can be cast as an approximation to the exact $\Phi$ functional \cite{Kotliar06}:
\begin{align}
\Phi_\text{DMFT} = \sum_{j=1}^M [\Phi_I]_j\label{eq:phidmft}
\end{align}
where $j$ denotes unit cells, and $[\Phi_I]_j$ contains all those diagrams of $\Phi$ where the interaction vertices have all four indices inside unit cell $j$. All diagrams in $\Phi$ connecting different unit cells, either via interactions or via propagators, are discarded.
As a consequence, $\Sigma_\text{DMFT} = \frac{\delta \Phi_\text{DMFT}}{\delta G}$ is purely local to every cell.
In a translationally invariant system where all unit cells are equal, $\Sigma_\text{DMFT}$ is independent of $I$, and only one impurity problem exists.
In analogy to Eq.~\ref{eq:subsetprop}, an impurity model with $G^\text{imp}=G_I$, $\Sigma^\text{imp}=\Sigma_I$ can be defined and the self-consistent solution of the Dyson equation $G=G_0 + G_0 \Sigma_\text{DMFT}G$ and the solution of the impurity problem leads to the DMFT approximation of Eq.~\ref{eqn:realistic_ham1}.\footnote{Note that it is also possible to consider DMFT as an approximation to the momentum conservation at the vertices, where all terms of $\Phi$ are considered but the propagators are replaced with local propagators.\cite{Hettler00} This violates momentum conservation at each vertex.}
Eq.~\ref{eq:phidmft} shows that DMFT can be understood as a special case of SEET in which the orbital subspaces $A_i$ are chosen to be the orbitals local to a unit cell, the `weak correlation' method is skipped so that $\Phi_\text{weak}=0$, and the strong-correlation problem is computed by the DMFT impurity solver. Correspondingly, DMFT will provide a good approximation to the physics of a correlated system as long as the following two criteria are fulfilled: first, the interactions are predominantly local; and second, self-energy contributions from non-local terms (interactions or propagators) are negligible.
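Schematically (using the decomposition of the SEET functional from Eq.~\ref{eq:SeetPhi}, written here in analogy with Eq.~\ref{eqn:sigma_tot}), this reduction reads
\begin{align}
\Phi_\text{SEET}=\Phi^\text{tot}_\text{weak}+\sum_{i}\left([\Phi^{A}_\text{strong}]_{i}-[\Phi^{A}_\text{weak}]_{i}\right)
\;\longrightarrow\;
\sum_{j=1}^M [\Phi_I]_j=\Phi_\text{DMFT},
\end{align}
where the arrow denotes setting $\Phi_\text{weak}=0$ and choosing the subspaces $A_i$ as the unit-cell orbital sets $I_j$.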
\subsection{HF+DMFT}
Similarly, HF+DMFT can be cast into this framework. The chosen correlated orbital subspaces $I_j$ are local to each unit cell, and the exact $\Phi$ is approximated as
\begin{align}
\Phi_\text{DMFT+HF} = \Phi^\text{tot}_\text{HF}+\sum_{j=1}^{M} \left([\Phi^I]_j - [\Phi_\text{HF}^I]_j\right) \label{eq:phidmfthf},
\end{align}
where $[\Phi_\text{HF}^I]_j$ is the HF $\Phi$-functional with vertex indices restricted to unit cell $j$.
To obtain a self-consistent $\Phi_\text{DMFT+HF}$, the Hartree--Fock equations are solved for the entire system and subsequently some or all local orbitals are chosen as the correlated subspace $I_j$. The impurity problem is then solved in the local subspace along the lines of DMFT.
Note that all the non-local contributions to the self-energy of the unit cells that are frequency independent are generated by $\Phi^\text{tot}_\text{HF}$.
Any higher order contributions to the self-energy that are frequency dependent have purely local vertices and there are no non-local frequency dependent self-energy terms in the $\Phi_\text{DMFT+HF}$ functional. Additionally, in the non-empirically adjusted HF+DMFT all impurity interactions remain the bare Coulomb interactions $v_{pqrs}$ and are local to the unit cell orbital subspaces $I_j$.
Consequently, any adjustment or renormalization of the frequency dependent $[\Sigma^{A}_\text{strong}]_i$ term due to the non-local effects that is present in SEET(ED-in-GF2 or ED-in-GW) is absent in HF+DMFT. This is the reason why spectral features and energies produced at the HF+DMFT or LDA+DMFT level using a bare, unrenormalized local Coulomb interaction are not recovered correctly. For small molecular systems, the incorrect energies resulting from employing HF+DMFT with bare Coulomb interactions can be found in Refs.~\onlinecite{nan_lin_prl,Tran16}.
\subsection{GW+DMFT}
GW+DMFT\cite{Sun02,Biermann03,GW_review_werner2016} is based on the premise that both non-local interactions and non-local correlations are important and cannot be discarded; however, the non-local interactions can be treated perturbatively without a significant loss of accuracy.
The starting point of the GW+DMFT procedure is the GW approximation \cite{Hedin65,Onida02}, for which the $\Phi$ functional consists of an infinite series of `bubble' polarization diagrams, $P=GG$, connected by bare interaction lines. This series of bubbles can be resummed into a frequency-dependent `screened' interaction $W=v+vPW$, where $v$ is the bare Coulomb interaction. The self-energy is approximated as $\Sigma=-GW$, so that in the GW approximation $\Phi[G]=-\frac{1}{2}GWG$.
As \textcite{Albladh99} showed, it is convenient to define a functional $\Psi$, which is a functional both of the Green's function $G$ and of the screened interaction $W$,\cite{Albladh99} as
\begin{align}
\Psi[G,W] = \Phi - \frac{1}{2}(PW-\log(1+PW))
\end{align}
which satisfies
\begin{align}
\left(\frac{\delta\Psi}{\delta W}\right)_G &= -\frac{1}{2}P,\\
\left(\frac{\delta\Psi}{\delta G}\right)_W &= \Sigma.
\end{align}
Together with the Dyson equation that relates $G$ to $\Sigma$, these expressions form a closed set of equations that allow the self-consistent computation of $\Sigma$ and $W$. We note that while these equations are $\Phi$ (and $\Psi$)-derivable, and should be solved in a self-consistent manner, the size and complexity of $W$ as well as the difficulty in carrying out the self-consistency necessitates additional approximations~\cite{QPGW_Schilfgaarde,PhysRevB.66.195215,Onida02,GW100} in the case of large realistic systems, which may not respect the conserving properties of Hedin's `fully self-consistent' $GW$ approximation. Notable cases where these equations have been solved self-consistently without any approximations are the electron gas,\cite{Holm98} atoms and small molecules,\cite{Albladh99,Dahlen04,Stan06,Koval14} and lattice model systems.\cite{Gukelberger15}
GW+DMFT then makes use of the fact that, given $W$ in all orbitals, there is a natural way of defining an `effective' $W$ in a subset $d$ of correlated orbitals:\cite{Aryasetiawan04} splitting the polarization into a contribution $P_d$ from the `correlated' orbitals and a contribution from all other orbitals, $P=P_d+P_r$, one can define a screened interaction $W_r$ which does not contain any $d$-to-$r$ processes and reformulate $W$ as
\begin{align}
W=[1-W_r P_d]^{-1}W_r,\\
W_r=[1-vP_r]^{-1}v.
\end{align}
This identity is general and independent of the GW approximation. It allows one to formulate non-perturbative corrections containing contributions from orbitals exclusively in the correlated subspace $d$ without double counting. Choosing as a subset of orbitals the ones that are local to the unit cell (or, equivalently, a subset of those local to the unit cell), it follows that\cite{Biermann05}
\begin{widetext}
\begin{align}
\Psi_\text{GW+DMFT} &= \Psi^\text{tot}_\text{GW} +\sum_{j=1}^{M} \left([\Psi^I(G_I, W_r)]_j-[\Psi^I_\text{GW}(G_I, W_r)]_j\right).
\end{align}
\end{widetext}
This defines the GW+DMFT approximation to the exact $\Psi$ functional.
The approximation is noteworthy because it is, as it is written, a diagrammatically sound method for solving realistic correlated many-body problems that includes renormalized interactions and non-perturbative local correlations. In practice, numerous technical and theoretical limitations exist. A fully self-consistent solution of the GW problem is technically very challenging. The various approximations employed (quasiparticles, no full self-consistency, etc) at the level of GW along with the difficulty of numerically solving multi-orbital impurity problems with general non-local time-dependent interactions means that the rigorous diagrammatic footing described above is severely approximated in practical implementations of the GW+DMFT method~\cite{Aryasetiawan98}.
\subsection{Comparison of SEET, DMFT, and GW+DMFT}\label{sec:Comparison}
The methods outlined above have several important commonalities. First, they require the self-consistent solution of a $\Phi$ (or $\Psi$)-derivable diagrammatic system. This implies that (provided the equations are actually solved to self-consistency) the important conservation laws are automatically fulfilled. They also consist of two-step procedures: an `outer loop' that entails the solution of a system using a `cheap' method (e.g. GW, GF2, or HF), and an `inner' loop that requires the solution of a quantum impurity problem using non-perturbative techniques. All methods become exact at infinite temperature, at zero interaction, and when the system decouples into separate impurity problems without any inter-impurity interactions.
However, there are several important distinctions between these methods. The first is the choice of correlated orbital space. In DMFT and its variants, correlated subspace orbitals are chosen {\it a priori} to be the local orbitals or a subset of the local orbitals. This was historically motivated by an exact limit of infinite coordination number, \cite{Metzner89,MullerHartmann89} where the self-energy can be shown to reduce to the local form. The locality approximation can be controlled by systematically extending the size of the unit cell in the real\cite{Lichtenstein00,Kotliar01} or reciprocal space,\cite{Hettler98,Maier05} or by introducing diagrammatic expansions in the non-local contributions.\cite{Kusunose06,Toschi07,Rubtsov08,Rubtsov12,Iskakov16}
In contrast, SEET uses insight from a low-order solution of the system to adaptively define the correlated subspace, {\it e.g.} via consideration of the elements of the diagonalized one-body density matrix that are different from 0 or 2. The control parameter used to converge SEET to the exact limit is the size of the correlated subspaces $N^A_i$, which can be systematically increased.
A second major difference between HF+DMFT, GW+DMFT, and SEET(ED-in-GF2 or ED-in-GW) is the way in which non-local interactions are treated. DMFT neglects any contribution from non-local interactions to the self-energy, here particularly any contributions from non-local interactions to the local self-energy are neglected. HF+DMFT evaluates the frequency-independent part of the non-local self-energy at the HF level, but any non-local frequency-dependent contribution to the self-energy is neglected, as both interactions and propagators in $\Phi_\text{DMFT}$ are chosen to be local.
\begin{figure}[htb]
\includegraphics[width=0.3\columnwidth]{latex-image-1.pdf}
\includegraphics[width=0.3\columnwidth]{latex-image-2.pdf}
\includegraphics[width=0.22\columnwidth]{latex-image-3.pdf}
\caption{Left panel: Example of a low order self-energy diagram contained in SEET(ED-in-GF2) but not in GW+DMFT. Middle panel: low order diagram contained in GW+DMFT and SEET(ED-in-GW) but not in SEET(ED-in-GF2). Right panel: low-order diagram not contained in GW+DMFT, SEET(ED-in-GW), and SEET(ED-in-GF2). Dashed lines denote interactions, solid lines Green's functions.}
\label{fig:lowestSeetNotGW}
\end{figure}
Both GW+DMFT and SEET(ED-in-GF2 or ED-in-GW) include frequency-dependent non-local correlations to some extent. Assuming that a local (rather than an energy) basis is chosen for SEET, the lowest order diagram contained in SEET(ED-in-GF2) but not in GW+DMFT is illustrated in the left panel of Fig.~\ref{fig:lowestSeetNotGW}.
Here, different indices are assumed to be in different unit cells. Conversely, SEET(ED-in-GF2) in a local basis would not include the diagram illustrated in the middle panel of Fig.~\ref{fig:lowestSeetNotGW}. DMFT could in principle be extended to include the second order exchange diagram, such that the diagram in the left panel is contained, while a formulation of SEET around GW, {\it i.e.} SEET(ED-in-GW), would include the middle panel of Fig.~\ref{fig:lowestSeetNotGW}. None of these methods includes the diagram illustrated in the right panel of Fig.~\ref{fig:lowestSeetNotGW}. As a commonly used basis for SEET is an energy basis, rather than a local basis, a detailed comparison in the practically relevant case is not straightforward.
A third major difference consists of the selection of a basis. As DMFT-type methods perform a local approximation, the choice of basis functions strongly influences the types of correlations that can be contained in DMFT. In contrast, the adaptive choice of SEET basis does not require a localization procedure.
Finally, the nature of the correlated impurity problem is rather different in SEET and GW+DMFT. GW+DMFT, due to its construction of a screened interaction, requires impurity solvers able to evaluate problems with fully general frequency-dependent interactions. While efficient Monte Carlo methods exist that solve impurity problems with frequency-dependent density-density interactions,\cite{Werner07,Werner10} efficient impurity solvers able to treat general frequency-dependent four-fermion interactions do not yet exist.
SEET, on the other hand, due to the use of the $\Phi$ functional, requires no frequency-dependence in the interactions. However, the rotation to the natural orbital basis in which the density matrix is diagonal usually mixes all orbitals and interactions, necessitating a treatment of the full four-fermion interaction terms (rather than just density-density interactions) with `off-diagonal' hybridization functions.
\section{non-local interactions, correlations, and screening}\label{sec:Screening}
Non-local interactions and non-local dynamical correlations (caused both by local and non-local interactions) alter the local low-energy physics. A combination of these effects is colloquially summed up under the term `screening', despite very different physical and diagrammatic origins.
As the methods discussed above treat `screening' to a different extent, we briefly discuss various aspects of it.
First, the `screened interaction' $W$ describes a way of re-summing certain classes of diagrams. $W$ then takes the role of the bare interaction $v$ in $\Phi$ and removes diagrams with repeated insertion of polarization parts, at the cost of introducing a frequency dependence.\cite{Hedin65} The need for formulating perturbation theories in powers of $W$ is motivated by a divergence of the perturbation theory in $v$, when truncated at any order, in the infinite system size (momentum $q\rightarrow 0$) limit of the electron gas.\cite{Mahan00} In contrast, a perturbation theory in $W$ removes this divergence and stays finite.\cite{Hedin65} Within GW and GW+DMFT, as well as within SEET(ED-in-GW), terms are included at least to lowest order in $W$, and $W$ is approximated by the lowest order $P$.
SEET(ED-in-GF2) is based on a GF2 starting point that is divergent for metallic systems in the thermodynamic limit, as it is formulated in terms of the bare $v$. However, any finite system will yield a convergent answer. Thus, for a finite system, in an energy basis, the identification of the correlated orbitals will add near-Fermi-surface states to the correlated subspace and converge as the subspace is enlarged.
A second, entirely different effect also commonly referred to as `screening' that leads to lowering of local bare Coulomb interactions is generated by the effect of non-local interactions on the local self-energy.\cite{Rusakov14}
If the total orbital space is divided into a correlated subspace and the remainder, the correlated subspace self-energy acquires contributions due to non-local interactions with vertices and propagators in the remainder. This effect is general and present both for the frequency independent and dependent contribution to the self-energy. It is best illustrated for the frequency independent Hartree-Fock contribution $\Sigma_{\infty}$ that can be separated into the following contributions:
\begin{eqnarray}
[\Sigma_{\infty}]_{ij\in A}&=&\sum_{kl}\gamma_{kl}(v_{ikjl}-0.5v_{iklj})\\ \nonumber
&=&[\Sigma_{\infty}]_{ij\in A}^\text{embedded}+[\Sigma_{\infty}]_{ij\in A}^\text{embedding},
\end{eqnarray}
\begin{eqnarray}
[\Sigma_{\infty}]_{ij\in A}^\text{embedded}=\sum_{kl \in A}\gamma_{kl}(v_{ikjl}-0.5v_{iklj}),
\end{eqnarray}
\begin{eqnarray}
[\Sigma_{\infty}]_{ij\in A}^\text{embedding}&=& \sum_{kl \in R} \gamma_{kl} (v_{ikjl}-0.5v_{iklj})\\ \nonumber
&+& \sum_{k \in A}\sum_{l \in R}\gamma_{kl}(v_{ikjl}-0.5v_{iklj}).
\end{eqnarray}
Here the matrix elements $[\Sigma_{\infty}]_{ij\in A}$ have an `embedded' contribution coming only from orbitals belonging to the subset $A$ and an `embedding' contribution where the summation runs over other orbitals $R$ that are not contained in the subset $A$.
A model with non-local interactions often appears to have a smaller local self-energy than the same model with only on-site interactions.\cite{Ayral13,vanLoon14} Similarly, a multi-orbital model where inter-orbital interactions are truncated to density-density interactions encounters its metal-to-insulator transition at a weaker interaction than one with the full interaction structure.\cite{Werner09,Medici11,Antipov12} As the DMFT approximation neglects all inter-unit-cell interactions inside the correlated subspace, and as technical limitations of the impurity solvers require restriction to density-density terms, the effective DMFT interactions are additionally lowered to account for these corrections.
In SEET, this method-dependent `screening' contribution that results in the lowering of the correlated orbital subspace self-energy is not caused by introducing effective interactions. Rather, the `embedded' subspace self-energy $[\Sigma^{A}]_{ij\in A}^\text{embedded}=[\Sigma^{A}_\text{strong}]$ is evaluated using the bare Coulomb interactions (transformed to the appropriate basis) and is `screened' due to the presence of the `embedding' self-energy, $[\Sigma]_{ij\in A}^\text{embedding}=[\Sigma^\text{tot}_\text{weak}]_{ij\in A}-[\Sigma^{A}_\text{weak}]$. Note that the internal summations in $[\Sigma^\text{tot}_\text{weak}]_{ij\in A}$ extend over the orbitals that are not present in the correlated subspace, thus accounting for all the effects of the non-local interactions on the total frequency dependent subspace self-energy, $[\Sigma^{A}]=[\Sigma^{A}]_{ij\in A}^\text{embedded}+[\Sigma]_{ij\in A}^\text{embedding}$.
\section{Conclusions}\label{sec:Conclusions}
We have discussed several diagrammatic approximations capable of describing a full Coulomb Hamiltonian. These approximate methods can then be used in {\em ab initio} calculations of realistic materials or molecular problems. We have paid particular attention to the functional interpretation and have shown that the DMFT-type approximations, where the correlated subspace orbitals are chosen to be local to the unit cell, are a subclass of a wider class of self-energy embedding theories, which can deal with both local and non-local orbitals in the correlated subspace.
We have also shown that relaxing the locality approximation of the self-energy leads to additional freedom in choosing `correlated' orbitals, and introduces a systematic small parameter that can be controlled in practice. Choosing the correlated orbital subspace as a set of one-body density matrix eigenvectors corresponding to eigenvalues with partial occupancy (most different from 0 or 2) provides an adaptive selection procedure.
While all the methods outlined here have a rigorous theoretical foundation, practical implementations of real-materials embedding calculations remain extremely difficult and the approximations needed to lower the computational cost typically break $\Phi$-derivability.
While some of these approximations have the potential to be removed with future increases of computational power, calculating frequency dependent renormalized interactions in GW+DMFT for impurity models remains challenging.
We therefore believe that embedding methods that do not rely on explicitly renormalized interactions in the correlated subspace, such as SEET(ED-in-GW) and SEET(ED-in-GF2), offer a promising route to the simulation of realistic materials with systematically improvable accuracy.
\begin{acknowledgments}
DZ and EG were supported by the Simons Foundation via the Simons Collaboration on the Many-Electron Problem. We thank Hugo Strand for insightful comments and a careful reading of the manuscript.
\end{acknowledgments}
\section{Introduction}
In a recent paper by E. Formenti and K. Perrot (FP) \cite{FP20}, whose title explicitly involves \virg{sandpiles on lattices}, the authors formalise a
$d$--dimensional lattice of \emph{cells} $\IZ^d$ on which \emph{configurations} are defined as mappings $c:\IZ^d\to\IN$ assigning to any cell $x\in\IZ^d$ of the lattice the finite (non-negative) number $c(x)\in\IN$ of sand grains located in that cell.
The collection of all configurations is denoted by $\IN^{\IZ^d}$. Any configuration $c\in\IN^{\IZ^d}$ can also be considered as a \emph{macrostate}, or simply a \emph{state}, of the sandpile model. Moreover, the non-negative number $c(x)\in\IN$ of sand granules located by the configuration $c$ at the cell $x\in\IZ^d$ of the lattice is the \emph{microstate} possessed by the involved cell. In this way, $\IN$ is the collection of all possible \emph{microstates} which can be assumed by any single cell.
Based on this general framework, the authors introduced:
\MP
(1) an invariant \emph{neighborhood} of any cell as a finite subset $\cN$ of $\IZ^d\setminus \{0^d\}$ (i.e., $\cN\incl \IZ^d$ s.t. $|\cN|<\infty$ and for any $x\in\cN$ it is $x\neq 0^d=(0,0,\ldots,0)$);
\\
(2) the \emph{distribution} of sand grains $\mathcal{D}:\cN\to\IN_+$ w.r.t. the neighborhood $\cN$ (let us stress that to any cell $x\in\cN$ of the neighborhood at least one granule must be distributed).
\MP
A $d$--dimensional \emph{sandpile model} is thus formalized by the triple $\para{d,\cN,\mathcal{D}}$, and on the basis of this notion one defines the quantity $\vartheta :=\sum_{x\in\cN}\mathcal{D}(x)$, called the \emph{stability threshold}.
The \emph{discrete time dynamical system} based on the \emph{state space} of all configurations $\IN^{\IZ^d}$ is defined by a \emph{global transition function} $F:\IN^{\IZ^d}\to\IN^{\IZ^d}$ associating to any input configuration $c\in\IN^{\IZ^d}$ (i.e., mapping $c:\IZ^d\to\IN$) the output configuration $c'=F(c)\in\IN^{\IZ^d}$ (i.e., mapping $F(c):\IZ^d\to\IN$), generated in the formal context $\para{d,\cN,\mathcal{D}}$ by the \emph{parallel} application to any cell $x$ of the \emph{local rule} defined by FP in \cite{FP20} through the formula:
$$
\oppA x\in\IZ^d,\; c'(x)=(F(c))(x):= c(x) -\vartheta \mathrm{H}(c(x)-\vartheta) + \sum_{y\in\cN} \mathcal{D}(y)\mathrm{H}(c(x+y)-\vartheta)
\leqno{(2)}
$$
(where $\mathrm{H}(r)=1$ if $r\in[0,+\infty)$, and $\mathrm{H}(r)=0$ otherwise, is the Heaviside function on the real domain $r\in\IR$).
In \cite{FP20} the authors claimed that $F$ is the global rule of the sandpile dynamics obtained by the \emph{parallel} application of the following \emph{local rule}: \virg{if a cell $x$ has at least $\vartheta$ grains, then it redistributes $\vartheta$ of its grains to its \emph{neighborhood} $x+\cN$ according to the distribution $\cD$.} This is a \virg{translation} into the supposed sandpile context of the following Goles statement about the \virg{chip firing game} \cite{Go92}: \virg{the application of the local rule consists of selecting a site [i.e., $x$] which has at least as many chips [i.e., FP grains] as its threshold $z_x$ [i.e., FP $\vartheta$] and passing one chip [i.e., FP grain] to each of its neighboring sites}.
In order to discuss from the foundational point of view the exact role of the local rule (2) as description of the parallel dynamics of some kind of \emph{generalized} sand pile (whatever the meaning to be attributed to the term \virg{generalized} -- but we will treat this in detail below), in the present paper we consider and deeply discuss the \emph{simplest} one--dimensional situation ($d=1$) characterized by the \emph{regular} neighborhood $\cN=\parg{-1,+1}$ and the \emph{constant} distribution function $\cD(x)=1$ for $x=\pm 1$, whose induced stability threshold is $\vartheta=2$. With respect to these choices the FP local rule (2) assumes the form
$$
\oppA x\in\IZ,\; c'(x)= c(x) -2 \mathrm{H}(c(x)-2) +\mathrm{H}(c(x-1) -2 ) +\mathrm{H}(c(x+1)-2)
\leqno{(2a)}
$$
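To make rule (2a) concrete, here is a minimal Python sketch (not part of \cite{FP20}; the function name, the finite window, and the convention that cells outside the window hold zero grains are our own assumptions) applying the rule in parallel to every cell:

```python
def fp_step(c):
    """One parallel application of the FP local rule (2a), with
    threshold theta = 2 and neighborhood {-1, +1}.  Cells outside
    the finite window are treated as holding 0 grains."""
    H = lambda r: 1 if r >= 0 else 0  # Heaviside function, H(0) = 1
    n = len(c)
    get = lambda x: c[x] if 0 <= x < n else 0
    return [c[x] - 2 * H(c[x] - 2)
            + H(get(x - 1) - 2) + H(get(x + 1) - 2)
            for x in range(n)]
```

For instance, `fp_step([0, 3, 0])` yields `[1, 1, 1]`: the middle cell fires its $\vartheta=2$ grains, one to each neighbour.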
This one-dimensional formulation will be the main subject of our investigation, organized in the following parts:
\begin{description}
\item[First part]
In which we introduce and discuss the standard approach to the one-dimensional sandpile dynamics, making reference to the seminal papers of Goles \cite{Go92} and Goles--Kiwi (GK) \cite{GK93}, the latter inspired by \cite{BTW87}, \cite{BTW88}.
First of all we observe that, although in the GK approach the one--dimensional lattice of cells is $\IN$, their parallel sandpile dynamics can be naturally extended, without any formal difficulty, to the one--dimensional lattice of cells $\IZ$. In section \ref{sc:1sp-par} we prove that this standard approach is governed by the global transition assigning to any initial configuration $c\in\IN^{\IZ}$ the next-time configuration $c'\in\IN^{\IZ}$ expressed by the local rule:
$$
\oppA x\in\IZ,\; c'(x) = c(x) + \mathrm{H}(c(x-1) - c(x) -2) - \mathrm{H}(c(x)-c(x+1) -2).
\leqno{\text{(1a)}}
$$
This is the correct one--dimensional sandpile dynamics of the standard approach currently adopted by the sandpile scientific community. But from a comparison of equations (2a) and (1a) it is immediate to conclude that the quantity $c(x)$ in the FP equation seems to have little to do with the sandpile number of grains located in cell $x$ of the lattice of the standard GK approach (or at least that is our opinion).
On the other hand, if according to \cite{GK93} one introduces the \emph{height difference between consecutive piles} $h(x)= c(x) - c(x+1)$, in section \ref{sec:da-c-a-h} we prove that from equation (1a) one obtains the following rule:
$$
\oppA x\in\IZ,\; h'(x) = h(x) -2 \mathrm{H}(h(x)-2) +\mathrm{H}(h(x-1)-2) + \mathrm{H}(h(x+1)-2)
\leqno{\text{(2h)}}
$$
which has the same form as the supposed FP sandpile one--dimensional local rule (2a), but which in the standard approach to sandpiles more properly describes height differences between consecutive piles. From this point of view, the FP interpretation of equation (2a), or more generally of equation (2), is in contrast with the standard interpretation given to the same equations by the sandpile scientific community (see the papers quoted above).
Of course, just as in Euclidean geometry triangles are triangles and circles are circles, without identifying circles with triangles, in the case presently discussed sandpile numbers of grains are numbers of grains and height differences are height differences, without any incorrect identification of height differences with numbers of grains.
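The correspondence between the two readings can be checked numerically. The following Python sketch (function names and the zero-padded finite window are our own conventions) applies rule (1a) to a configuration $c$ and verifies that the induced height differences $h(x)=c(x)-c(x+1)$ evolve exactly according to rule (2h):

```python
def H(r):
    # Heaviside function with H(0) = 1
    return 1 if r >= 0 else 0

def gk_step(c):
    """One parallel application of the GK local rule (1a); the
    configuration is held at 0 outside the finite window, so c
    should be zero-padded at both ends."""
    n = len(c)
    g = lambda x: c[x] if 0 <= x < n else 0
    return [c[x] + H(g(x - 1) - c[x] - 2) - H(c[x] - g(x + 1) - 2)
            for x in range(n)]

def heights(c):
    # height differences h(x) = c(x) - c(x+1) between consecutive piles
    return [c[x] - c[x + 1] for x in range(len(c) - 1)]

def rule_2h_step(h):
    """One parallel application of rule (2h) to a height-difference
    profile, with h = 0 outside the window."""
    n = len(h)
    g = lambda x: h[x] if 0 <= x < n else 0
    return [h[x] - 2 * H(h[x] - 2) + H(g(x - 1) - 2) + H(g(x + 1) - 2)
            for x in range(n)]

c = [0, 0, 5, 3, 0, 0, 0]          # zero-padded configuration
assert heights(gk_step(c)) == rule_2h_step(heights(c))
```

Taking heights after one step of (1a) and taking one step of (2h) on the initial heights give the same profile, as the derivation of section \ref{sec:da-c-a-h} predicts.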
\item[Second part]
In this second part, despite the correct GK interpretation of equation (2h) seen above as describing height differences between consecutive piles, we want to explore the possible consequences of the totally different FP interpretation of equation (2a) as describing some kind of granules, whose real identity we must determine (sand granules? ice granules?), subject to some kind of dynamics more complicated than the sandpile one.
The conclusion we arrive at in a formal way is that the correct interpretation of equation (2a) is as the local rule of a \emph{spurious symmetric icepile parallel dynamics}, also obtained by suitable \emph{sequential} applications of the following three kinds of rules:
\begin{enumerate}[(S{I}P1)]
\item
Vertical rules either from left to right $(VR)_d$ or its dual from right to left $(VR)_s$ typical of the symmetric sandpiles of \cite{FMP07}.
\item
Icepile horizontal rules, either of flowing from left to right $(HR)_d$ or its dual from right to left $(HR)_s$, in presence of horizontal plateaus.
\item
Bottom-up jump of an ice granule of one height either from left to right $(BT)_d$ or its dual from right to left $(BT)_s$.
\end{enumerate}
\end{description}
Rules (SIP1) and (SIP2) define the dynamics of symmetric \emph{icepiles}; it is rule (SIP3) which assigns the \emph{spurious} dynamical behaviour to this model, which in any case cannot be considered a model of sandpiles but rather a model of some new kind of granules.
\section{Standard Goles-Kiwi (GK) one-dimensional sandpiles formal model under the vertical local rule}\label{sc:1sp-mpdel}
Let us start this section with a quotation of E. Formenti, B. Masson and T. Pisokas (FMP) from \cite{FMP07} where it is clear that in this paper the authors follow the usual standard definition of sandpile model according to the Goles-Kiwi (GK) approach: \virg{A formal model of sandpiles, called SPM, has been introduced in \cite{GK93, GMP02, GMP02a}. Each column contains a certain number of sand grains. The evolution is based on a local interaction rule: a sand grain falls from a column $A$ to its right neighbor $B$ if $A$ contains at least two granules more than $B$: otherwise there is no movement. The SPM has been widely studied \cite{Br73, GK93, RS92, DRSV95, MN99, Mi99}.}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=3cm]{sand-pile-rule0.eps}
\end{center}
\caption{Typical local rule of the left--to--right granule movement of a sandpile}
\label{fig:lr-gr-mov}
\end{figure}
Coherently with the just quoted FMP statement we introduce now the standard definition of the sandpile \emph{local rule} formally describing the configuration updating in the one--dimensional case.
\begin{enumerate}[(NG1)]
\item
The \emph{one--dimensional lattice of cells} is the set $\IN$ of all non-negative integer numbers.
\item
A \emph{configuration} is a mapping $c:\IN\to\IN$ assigning to every cell $x\in\IN$ the number of granules $c(x)\in\IN$ located in this cell, under the non-increasing condition $c(0)\ge c(1)\ge \ldots \ge c(l-1)\neq 0$, with all the remaining $c(x)=0$ for $x\ge l$.
\\
The collection of these configurations will be denoted as $(\IN^\IN)_d$, with the subscript $d$ = decreasing.
\item
The update of the granule number of the generic pair of cells located in the places
$x, x+1\in\IN\times\IN$ is formalized by the so--called \emph{vertical local rule}:
\begin{multline*}
\text{(VR)}\qquad\text{If}\quad c(x)-c(x+1)\ge 2,\;\text{then we have the transition}
\\
(\ldots,c(x),c(x+1),\ldots)\to (\ldots, c(x)-1,c(x+1)+1,\ldots)\qquad
\end{multline*}
\end{enumerate}
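Purely as an illustration, outside the formal development, the pairwise updating (VR) may be transcribed in Python as follows (the function name and the list encoding of a configuration are our own choices):

```python
def apply_vr(c, x):
    """Vertical rule (VR): a granule tumbles from cell x to cell x+1
    iff the jump c(x) - c(x+1) is critical, i.e., at least 2."""
    c = list(c)  # work on a copy: the input configuration is left untouched
    if c[x] - c[x + 1] >= 2:
        c[x] -= 1
        c[x + 1] += 1
    return c
```

For instance, `apply_vr([5, 4, 2, 1, 0], 1)` solves the critical jump $4,2$ and returns `[5, 3, 3, 1, 0]`, while a sub-critical pair is left unchanged.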
Quoting from \cite{Go92}: \virg{The sandpile dynamics is defined from the introduction of a local rule which takes into account a critical threshold $\vartheta[=2]$. When the height difference [$c(x)-c(x+1)$] becomes higher than $\vartheta$, one grain of sand tumbles to the lower level. The threshold represents the maximum slope permitted without provoking an avalanche.}
It is immediate to realize that this definition, formalized in its \virg{compact} form for the one--dimensional lattice of cells $\IN$, at first glance seems difficult to generalize to the lattice of cells $\IZ$, where one takes into consideration configurations $c\in\IN^\IZ$ that do not necessarily satisfy the decreasing constraint.
With the aim of achieving this generalization, assuming a small variation of the Goles and Kiwi
equation at pag. 324 sec.\til 1.2 of \cite{GK93}, let us now consider the following definition.
\begin{definition}\label{df:sp-loc-rule}
The one--dimensional \emph{sandpile local rule} is defined as the configuration transition $(\ldots, c(x),\ldots)\to (\ldots, c'(y),\ldots)$, with $x$ and $y$ cells of a one--dimensional lattice, realized by the law:
$$
c'(y) = \begin{cases}
c(y)-\mathrm{H}(c(x)-c(x+1)-2)&\text{if}\; y=x
\\
c(y)+\mathrm{H}(c(x)-c(x+1)-2)&\text{if}\; y=x+1
\\
c(y) &\text{otherwise}
\end{cases}
\leqno{\text{(VR-a)}}
$$
\end{definition}
Taking into account the formal behavior of the Heaviside function involved in this equation
\begin{equation}\label{eq:H-cond}
\mathrm{H}(c(x)-c(x+1)-2) = \begin{cases}
1 &\text{if}\; c(x)-c(x+1) \ge 2
\\
0 &\text{if}\; c(x)-c(x+1) \le 1
\end{cases}
\end{equation}
the (VR-a) can be re-formulated in the following way:
\begin{align*}
(2.1a)\qquad \text{if}\;& c(x)-c(x+1)\ge 2,\quad\text{then}\quad c'(x)= c(x)-1\;\text{and}\;c'(x+1) = c(x+1)+1,
\\
(2.1b)\qquad \text{if}\;& c(x)-c(x+1)\le 1,\quad\text{then}\quad c'(x) = c(x)\;\text{and}\;c'(x+1) = c(x+1).
\end{align*}
In this way, the above definition {(VR-a)} can be equivalently formalized by the following two conditions:
\LP
{(VRa-1)}\quad If $c(x)-c(x+1)\ge 2$, then we have the transition
$$
(\ldots,c(x),c(x+1)\ldots) \to (\ldots, c(x)-1,c(x+1)+1,\ldots)
$$
{(VRa-2)}\quad If $c(x)-c(x+1)\le 1$, then we have the transition
$$
(\ldots,c(x),c(x+1)\ldots) \to (\ldots, c(x),c(x+1),\ldots)
$$
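As a sanity check of the equivalence between (VR-a) and the pair (VRa-1)--(VRa-2), the Heaviside-driven transition of Definition \ref{df:sp-loc-rule} may be sketched as follows (again an illustrative transcription, with names of our own choosing):

```python
def heaviside(n):
    """H(n) = 1 if n >= 0, H(n) = 0 if n < 0 (integer argument)."""
    return 1 if n >= 0 else 0

def apply_vra(c, x):
    """(VR-a): c'(x)   = c(x)   - H(c(x) - c(x+1) - 2),
               c'(x+1) = c(x+1) + H(c(x) - c(x+1) - 2)."""
    c = list(c)
    h = heaviside(c[x] - c[x + 1] - 2)  # 1 on a critical jump, 0 otherwise
    c[x] -= h
    c[x + 1] += h
    return c
```

Note that the rule is applicable also to increasing pairs: `apply_vra([1, 4], 0)` leaves the pair untouched, and in every case the total number of granules is conserved.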
\begin{remark}
As to this result we can state the following.
\LP
-- The (VR-a) can be applied to the general case of a not necessarily non-increasing configuration. Indeed, thanks to the behavior of the Heaviside function in equation \eqref{eq:H-cond}, whose definition is valid under the sub-critical condition $c(x)-c(x+1)\le 1$, and then a fortiori for $c(x)-c(x+1)\le 0$, the (VR-a) can be applied also to the particular case $c(x)\le c(x+1)$, that is, without requiring the configuration to be non-increasing.
\\
-- Another remark that underlines the importance of the (VR-a) with respect to the (VR) is that the variables $x$ and $y$, which appear in the (VR-a), are not constrained to range over $\IN$ (as in the (VR) case) but can range over $\IZ$, without \virg{conflicting} with the definition. In this way the (VR-a) can be applied to general configurations $c: \IZ\to \IN$, whose lattice of cells is the whole $\IZ$.
\end{remark}
The second point of this remark can be formulated as the following assumption.
\begin{itemize}
\item
{\it The sandpile local rule (VR-a) of definition \ref{df:sp-loc-rule} is applied to the collection $\IN^\IZ$ of all configurations defined on the whole one--dimensional lattice of cells $\IZ$, i.e., to all mappings $c:\IZ\to\IN$.}
\end{itemize}
The collection $\IN^\IZ$ of all configurations can be decomposed into the following two classes:
\begin{enumerate}
\item
A configuration $c\in\IN^\IZ$ is said to be \emph{stable} iff $\oppA x\in\IZ$, $c(x)-c(x+1)\le 1$;
\item
a configuration $c\in\IN^\IZ$ is \emph{unstable} iff $\oppE x_0\in\IZ$ s.t. $c(x_0)-c(x_0+1)\ge 2$.
\\
In the sequel we will say that the unstable configuration $c\in\IN^\IZ$ presents a \emph{critical jump} (or a \emph{critical slope}) in the pair of cells $x_0,x_0+1$.
\end{enumerate}
Moreover, in the development of the theory we are also interested in some particular subsets of configurations, according to the following definitions.
\begin{itemize}
\item
{\it The collection $(\IN^\IZ)_f$ of all configurations $c:\IZ\to\IN$ of \emph{finite support}, i.e., such that \emph{[}$\oppE x_0\in\IZ$ s.t.\til $c(x_0)\neq 0$ and $c(x)=0$ for all $x< x_0$\emph{]} and \emph{[}$\oppE x_f\in\IZ$ s.t.\til $c(x_f)\neq 0$ and $c(x)=0$ for all $x > x_f$\emph{]}. In this case the support of the configuration $c$ is the finite subset of cells supp$(c):=\parg{x\in\IZ: x_0\le x \le x_f}$.}
\item
The configuration $c:\IZ\to\IN$ is of \emph{(simply) connected support} iff it is of finite support and $\oppA x\in\text{supp}(c)$, $c(x)\neq 0$. Note that for any pair of points $a,b\in\text{supp}(c)$, with $a\le b$, the interval $\overline{a,b}:=\parg{z\in\IZ: a\le z\le b}$ is contained in $\text{supp}(c)$ (see Remark \ref{rk:simp-conn} below).
\item
Generalizing what was seen above for the non-increasing configurations over $\IN$, we will also take into account $(\IN^\IZ)_d$ as the peculiar space of \emph{finite support} configurations with non-increasing values (let $x_0$ be the first element in supp$(c)$ such that $c(x_0)\neq 0$; then $c(x_0)\ge c(x_0+1)\ge \ldots \ge c(x_f)\neq 0$ and $c(x)=0$ for all $x> x_f$).
\end{itemize}
\begin{remark}\label{rk:simp-conn}
Let us recall the definition of \emph{simply connected subset} of $\IZ$. First of all, a \emph{bounded interval} of extreme points $a,b\in\IZ$, with $a\lvertneqq b$, is defined as $\overline{a,b}:=\parg{z\in\IZ: a\le z\le b}$.
Then, a subset $A\incl\IZ$ is \emph{simply connected} iff for any pair of points $a,b\in A$ the corresponding bounded interval $\overline{a,b}\incl A$. Examples of simply connected subsets of $\IZ$ are the bounded intervals $\overline{x_0,x_f}$ (e.g., the supports of finite support configurations), and the unbounded intervals $\overline{x_0,\infty}:=\parg{z\in\IZ: x_0\le z}$ and $\overline{-\infty, x_0}:=\parg{z\in\IZ: z\le x_0}$. The subset $\overline{x_0,x_f}\cup \overline{y_0,y_f}$, with $x_f<y_0$, is not simply connected.
\end{remark}
For any configuration of finite support $c\in(\IN^\IZ)_f$ the sum $N(c):=\sum_{x\in\IZ} c(x)$ is finite and defines the \emph{total number of granules} present in the configuration. The following result is trivial to prove.
\begin{lemma}
Let us consider the condition (VRa-1) of Definition \ref{df:sp-loc-rule} which, under the condition of critical jump on site $x\in\IZ$, $c(x)-c(x+1)\ge 2$, characterizes the configuration transition $c=(\ldots,c(x),c(x+1),\ldots) \in\IN^\IZ \to c'=(\ldots,c(x)-1,c(x+1)+1,\ldots)\in\IN^\IZ$.
If configuration $c$ is of finite support with total number of granules $N(c)$, then configuration $c'$ possesses the same finite support with the same total number of granules $N(c')=N(c)$.
\\
(The total number of granules is an \emph{invariant} quantity of the system).
\end{lemma}
Of course, a configuration $c\in\IN^\IZ$ has no finite support, i.e., it is of \emph{infinite support}, iff $\oppA x\in\IZ$, $\oppE x_0\le x$ s.t.\til $c(x_0)\neq 0$, or $\oppA x\in\IZ$, $\oppE x_1\ge x$ s.t.\til $c(x_1)\neq 0$; in either case the total number of granules is $N(c)=+\infty$.
Having clarified the context $\IN^\IZ$ of the configuration space in which to develop the theory, let us recall that, as is well known, \virg{two dynamics can be defined [on the basis of a given local rule]: the sequential and the parallel update. The sequential one consists to update sites, one at time, in a prescribed order. For the parallel dynamics, all the sites are updated synchronously} \cite{Go92}.
As to this argument, let us also quote an adjustment to sandpiles from \cite{GK93}, originally related to the \emph{chip firing game} (see the inserted square brackets):
\virg{The dynamics associated to the [(VR-a)] can be \emph{sequential} or \emph{parallel}. The sequential one consists in updating the [cells], one by one in a [...] prescribed periodic order. The parallel dynamics, which is the most usual one in the context of cellular automata, consists in updating all the [cells] synchronously.}
In the same \cite{GK93} paper, but in the specific section 1.2, titled \emph{the sandpile model}, we can quote \virg{The sandpile model simulates the avalanches produced in a one-dimensional profile of a sandpile. [...] The dynamics is specified as follows: a grain of sand tumbles from site $x$ to site $x+1$, iff the height difference $c(x)-c(x+1)$, is at least [...] 2. Clearly, 2 represents a critical slope of the sandpile. If the local slope of the sandpile at a specific site is at least 2, then an avalanche will occur at that site.}
Finally, from \cite{CCB12}: \virg{A discrete time dynamical system of the sandpiles is introduced by a vertical rule, also called (VR) rule, which solves any jump from left to right, greater that or equal to two granules. The (VR) rule can be applied in a sequential or in a parallel procedure. In the first case only one jump is solved step-by-step, whereas in the parallel case all the jumps are solved during a unique step by a synchronous application of the (VR) rule.}
\subsection{One-dimensional sequential sandpiles on the lattice of cells $\IZ$}
\label{sc:1sp-seq}
Let us anticipate that in the present first part of the paper we have little interest in the sequential update procedure. Still, just to take a quick look at this topic, we will examine the simple situation of configurations having finite support on the one-dimensional lattice $\IZ$ of cells. In this particular case, as seen in the two quoted papers of Goles \cite{Go92} and Goles-Kiwi \cite{GK93}, the sequential procedure consists in fixing a given order on the support of any configuration, for instance from left to right, and then updating the involved microstates one at a time according to this order.
\footnote{In the case of a configuration with non-finite support, a possible hypothetical order for the sequential updating of the cells can be the following: $0,1, -1,2, -2, \ldots$ and so on, but according to a theoretically infinite procedure.}
This procedure can be better explained with an example, where we adopt the convention of inserting the symbol $|$ in a configuration $c\in\IN^\IZ$ to denote that the integer value at its right corresponds to the cell of position $0$ in the lattice $\IZ$, in symbols $c=(\ldots, c(-1),|c(0),c(+1),\ldots)$.
\begin{example}\label{ex:5421}
Let us consider the finite support configuration
$c_0=(\bar{0},|5,4,2,1,\bar{0})$ as the initial state of a procedure, consisting in the sequential application of the sandpile local vertical rule (VR-a) from left to right.
Precisely, one can perform a finite sequence of \emph{levels} according to the following steps.
\begin{description}
\item[Level $L=0$] consisting of the unique initial configuration $c_0$.
\item[Level $L=1$] consisting of the configuration $c_1=(\overline{0},|5,3,3,1,\overline{0})$ obtained by the application of the local vertical rule (VR-a) to the unique critical jump $4,2$ of the previous level $0$ configuration $c_0$.
\item[Level $L=2$] consisting of the two configurations $c_2^{(1)}=(\overline{0},|4,4,3,1,\overline{0})$ and $c_2^{(2)}= (\overline{0},|5,3,2,2,\overline{0})$, each obtained from the application of the local vertical rule (VR-a), the first on critical jump $5,3$ and the second on critical jump $3,1$.
\item[Level $L=3$] this level also consists of two configurations $c_3^{(1)}=(\overline{0},|4,4,2,2,\overline{0})$ and $c_3^{(2)}=(\overline{0},|5,3,2,1,1,\overline{0})$. While the second is the result of solving the single critical jump $2,0$ of configuration $c_2^{(2)}$, the first is the result of solving either of two critical jumps: $3,1$ of configuration $c_2^{(1)}$ or $5,3$ of configuration $c_2^{(2)}$.
\\ And so on. All other levels can be obtained straightforwardly, without giving their formal construction.
\end{description}
The whole procedure resulting from the levels just described can be represented by the following digraph of the configuration transitions $c_i\to c_{i+1}$ (or, adopting the Brylawski covering notation, $c_i\succ c_{i+1}$ \cite{Br73}) from a level $L=i$ to the next level $L=i+1$, in which only the supports of the involved configurations are highlighted.
{\footnotesize{
\begin{figure}[h!]
$$\xymatrix{
{} & 5,4,2,1 \ar[d] &{}
\\
{} & 5,3,3,1 \ar[dl]\ar[dr] & {}
\\
4,4,3,1\ar[d] & {} & 5,3,2,2\ar[dll]\ar[d]
\\
4,4,2,2\ar[d]\ar[drr] & {} & 5,3,2,1,1\ar[d]
\\
4,3,3,2 \ar[dr] & {} & 4,4,2,1,1 \ar[dl]
\\
{} & 4,3,3,1,1 \ar[d]& {}
\\
{} & 4,3,2,2,1 & {} }
$$
\caption{Digraph of the sequential dynamics generated by the initial configuration $5,4,2,1$ of Example \ref{ex:5421}.}
\label{fg:5421}
\end{figure}
}}
\newpage
According to \cite{CCB12}, the \emph{admissible} or \emph{possible paths} (also \emph{orbits} or \emph{trajectories}) starting from the initial configuration $c_0$ are finite sequences of configurations depending on the time variable $t$,
$\gamma_{c_0}\equiv c_0 , c_1,\ldots, c_t,\ldots, c_{eq}$, constructed
according to the following points: (AP1) the initial configuration at time $t=0$ is $c_0$; (AP2) the configuration $c_{t+1}$ at time $t+1$ is obtained from the configuration $c_t$ at time $t$ by the application of the local vertical rule (VR-a) to a single critical jump inside it; (AP3) the final configuration is the equilibrium configuration $c_{eq}$. \emph{Equilibrium} in the sense that it does not show any critical jump, and therefore it makes no sense to apply the local vertical rule to any of its cells: the procedure stops at this level of updating.
The following are all the admissible paths (orbits, trajectories) in the present example, each with initial configuration $c_0=(\overline{0},|5,4,2,1,\overline{0})$ and final equilibrium configuration $c_{eq}=(\overline{0},|4,3,2,2,1,\overline{0})$; note that this equilibrium configuration has no critical jump inside it (all its jumps are sub-critical, assuming only the values $-4$, $1$ and $0$).
\begin{align*}
\gamma_{c_0}^{(1)}\equiv\;&5,4,2,1\to 5,3,3,1\to 4,4,3,1\to 4,4,2,2\to 4,3,3,2\to 4,3,3,1,1\to 4,3,2,2,1
\\
\gamma_{c_0}^{(2)}\equiv\;&5,4,2,1\to 5,3,3,1\to 4,4,3,1\to 4,4,2,2\to 4,4,2,1,1\to 4,3,3,1,1\to 4,3,2,2,1
\\
\gamma_{c_0}^{(3)}\equiv\;&5,4,2,1\to 5,3,3,1\to 5,3,2,2\to 4,4,2,2\to 4,3,3,2\to 4,3,3,1,1\to 4,3,2,2,1
\\
\gamma_{c_0}^{(4)}\equiv\;&5,4,2,1\to 5,3,3,1\to 5,3,2,2\to 4,4,2,2\to 4,4,2,1,1\to 4,3,3,1,1\to 4,3,2,2,1
\\
\gamma_{c_0}^{(5)}\equiv\;&5,4,2,1\to 5,3,3,1\to 5,3,2,2\to 5,3,2,1,1\to 4,4,2,1,1\to 4,3,3,1,1\to 4,3,2,2,1
\end{align*}
\end{example}
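The level construction just described amounts to an exhaustive visit of the transition digraph of Figure \ref{fg:5421}; a small recursive sketch (function names and the trailing zero padding of the support are our own illustrative choices) enumerates all the admissible sequential evolutions from a given initial configuration:

```python
def critical_jumps(c):
    # positions x with a critical jump c(x) - c(x+1) >= 2
    return [x for x in range(len(c) - 1) if c[x] - c[x + 1] >= 2]

def successors(c):
    """All configurations obtained from c by solving one critical jump (VR-a)."""
    out = []
    for x in critical_jumps(c):
        nxt = list(c)
        nxt[x] -= 1
        nxt[x + 1] += 1
        out.append(tuple(nxt))
    return out

def all_paths(c):
    """Every admissible path from c down to an equilibrium configuration."""
    c = tuple(c)
    if not critical_jumps(c):          # no critical jump: equilibrium reached
        return [[c]]
    return [[c] + p for s in successors(c) for p in all_paths(s)]
```

On the zero-padded initial configuration $(5,4,2,1,0,0)$ the enumeration returns paths that all end in the equilibrium $(4,3,2,2,1,0)$ and all conserve the total number $N=12$ of granules.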
\subsection{One-dimensional parallel sandpiles on the lattice of cells $\IZ$ as a reformulation of the Goles-Kiwi approach on the lattice $\IN$}
\label{sc:1sp-par}
\par\noindent\\
Remaining in the context of the one-dimensional lattice of cells $\IZ$, whose configuration space is $\IN^\IZ$ (without any non-increasing requirement), in this section
we discuss the $\IN^\IZ$ sandpile model in which the \emph{parallel} updating of the initial number of grains $c(x)\in\IN$ in the cell placed at $x\in\IZ$ towards the final number of grains
$c'(x)\in\IN$, always in the cell placed at $x$, is obtained by the following \emph{local rule}, a re-formulation on the lattice $\IZ$ of equation (3) of \cite{GK93}, formalized by the authors in the restricted context of the lattice of cells $\IN$.
\\
$\oppA c\in\IN^\IZ$ and $\oppA x\in\IZ$,
$$
c'(x) = c(x) + \mathrm{H}(c(x-1) - c(x) -2) - \mathrm{H}(c(x)-c(x+1) -2).
\leqno{\text{(1a)}}
$$
Let us summarize the theoretical context of application of local rule (1a) adopted in the present paper.
\begin{enumerate}[(GSPM1)]
\item
the \emph{lattice of cells} is the whole set of integer numbers $\IZ$;
\item
\emph{configurations} are mappings $c:\IZ\to\IN$ associating to any cell of the lattice $x\in\IZ$ the number $c(x)\in\IN$ of sand grains located in it. The collection of all configurations is then $\IN^\IZ$;
\item
the \emph{local rule} (1a) is applied to any configuration $c\in\IN^\IZ$ in a parallel way generating the \emph{global transition function} $F:\IN^\IZ\to\IN^\IZ$.
\end{enumerate}
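Under the assumption that a finite support configuration is stored as a Python list (an illustrative encoding of ours, with one empty border cell added on each side at every step), the synchronous application of local rule (1a) may be sketched as:

```python
def heaviside(n):
    # H(n) = 1 for n >= 0, H(n) = 0 for n < 0
    return 1 if n >= 0 else 0

def parallel_step(c):
    """One synchronous application of local rule (1a):
    c'(x) = c(x) + H(c(x-1)-c(x)-2) - H(c(x)-c(x+1)-2)."""
    c = [0] + list(c) + [0]                  # one empty cell on each border
    g = lambda x: c[x] if 0 <= x < len(c) else 0
    return [g(x) + heaviside(g(x - 1) - g(x) - 2)
                 - heaviside(g(x) - g(x + 1) - 2)
            for x in range(len(c))]
```

For instance `parallel_step([1, 6, 4, 2, 2])` returns `[0, 1, 5, 4, 3, 1, 1]`: the three critical jumps $6,4$, $4,2$ and $2,0$ are solved simultaneously.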
In order to highlight the main differences between this (GSPM) context and the Goles-Kiwi approach, we now enumerate the main points of the latter.
\begin{enumerate}[(SPM1)]
\item
the \emph{lattice of cells} is the set of non negative integer numbers $\IN$;
\item
\emph{configurations} are mappings $c:\IN\to\IN$ associating to any cell of the lattice $x\in\IN$ the number $c(x)\in\IN$ of sand grains located in it, under the \emph{non-increasing} condition. In this case the collection of all configurations is denoted as $(\IN^\IN)_d$;
\item
the \emph{local rule} (1a) is then applied to any configuration $c\in(\IN^\IN)_d$ in a parallel way, generating the \emph{global transition function} $F:\IN^\IN\to\IN^\IN$.
\end{enumerate}
The local rule (1a) meets the main requirements for describing the dynamics of a one-dimensional sandpile on $\IZ$ stated in Definition \ref{df:sp-loc-rule}. Indeed, we have the following cases, depending on the values assumed by the Heaviside functions involved:
\begin{enumerate}[(SPZ1)]
\item
If the triple $c(x-1),c(x),c(x+1)$ is such that $$\mathrm{H}(c(x-1) - c(x) -2) = \mathrm{H}(c(x)-c(x+1) -2) = 1$$ then in equation (1a) one has $c'(x)=c(x)$. The identities involving the Heaviside functions translate into the following two non-negativity relationships:
$c(x-1) - c(x) -2\ge 0$ and $c(x)-c(x+1) -2\ge 0$, which lead to the inequality chain
$$
c(x+1)+2\le c(x) \le c(x-1) -2
$$
In other terms, it must be $c(x-1)=c(x)+h$, with $h\ge 2$, and $c(x+1)=c(x)-k$, with $k\ge 2$, corresponding to a triple $c(x)+h,c(x),c(x)-k$; under these conditions equation (1a) yields the result $c'(x)=c(x)$. Formally,
for any $h,k\ge 2$
$$
c(x)+h,c(x),c(x)-k \;\longrightarrow\; *\,,c(x),\,*
$$
where the symbol $*$ denotes an unknown value depending on the peculiar values assumed by $h$ and $k$.
From another point of view, with respect to the formulation (VR) of the vertical rule applied to the pair $c(x)+h,c(x)$ ($h\ge 2$), the central pile $c(x)$ gains a granule from the previous adjacent pile, but with respect to the pair $c(x),c(x)-k$ ($k\ge 2$) the same central pile $c(x)$ loses a granule towards the next adjacent pile. As a final result the total number of grains of the cell $x$ stays invariant: $c'(x)=c(x)$.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=7cm]{SPZ1.eps}
\end{center}
\caption{Triple $9,4,2$ as an example of the (SPZ1) situation. The local rule (VR) applied to the critical jump $9,4$ produces a gain of a granule in the central pile $4\to 4+1$, but the same rule applied to the critical jump $4,2$ produces a loss of a granule in the same pile $4\to 4-1$. The final result is that in the central pile there is no variation of the granule number, $9,4,2 \to *,4,*$ (the number $4$ of granules remains invariant).}
\label{fig:SPZ1}
\end{figure}
\item
If the triple $c(x-1),c(x),c(x+1)$ is such that $$\mathrm{H}(c(x-1) - c(x) -2) = \mathrm{H}(c(x)-c(x+1) -2) =0$$ then in equation (1a) one again has $c'(x)=c(x)$. But now the identities involving the Heaviside functions translate into the two negativity conditions:
$c(x-1) - c(x) -2 < 0$ and $c(x)-c(x+1) -2< 0$, or equivalently $c(x-1) - c(x) -1\le 0$ and $c(x)-c(x+1) -1\le 0$, which lead to the chain of inequalities
$$
c(x-1)-1\le c(x)\le c(x+1)+1
$$
\begin{figure}[h!]
\begin{center}
\includegraphics[width=7cm]{SPZ2.eps}
\end{center}
\vspace{-1cm}
\caption{Four examples of triples $c(x-1),c(x),c(x+1)$ satisfying the chain of inequalities $c(x-1)-1\le c(x) \le c(x+1)+1$ without any critical jump between the pairs of piles $x-1,x$ and $x,x+1$. The number of granules in the central cell remains invariant.}
\label{fig:SPZ2}
\end{figure}
\newpage
\item
If the triple $c(x-1),c(x),c(x+1)$ is such that
$$\mathrm{H}(c(x-1) - c(x) -2)=0\quad\text{and}\quad \mathrm{H}(c(x)-c(x+1) -2) = 1$$
then the central cell $x$ loses a granule, $c'(x)=c(x)-1$, and this happens when
\\
$c(x-1)-c(x)-2<0$ and $c(x)-c(x+1)-2\ge 0$, i.e., when
$$c(x-1)-1\le c(x)\quad\text{and}\quad c(x+1)+2\le c(x).$$
Let us note that from the second inequality and from the condition $c(x+1)\ge 0$ it follows that necessarily $c(x)\ge 2$, from which we have $c'(x)\ge 1$ (in any case strictly positive, $c'(x)>0$).
\vspace{-2cm}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=7cm]{SPZ3.eps}
\end{center}
\vspace{-1cm}
\caption{Three examples of triples $c(x-1),c(x),c(x+1)$ satisfying the conditions $c(x-1)-1\le c(x)$ and $c(x+1)+2\le c(x)$ with a critical jump at the pair of cells $x,x+1$, corresponding to a loss of one granule in the central cell.}
\label{fig:SPZ3}
\end{figure}
\item
If the triple $c(x-1),c(x),c(x+1)$ is such that
$$\mathrm{H}(c(x-1) - c(x) -2)=1\quad\text{and}\quad \mathrm{H}(c(x)-c(x+1) -2) = 0$$
then the central cell $x$ gains a granule, $c'(x)=c(x)+1$, and since $c(x)\ge 0$ we get that $c'(x)\ge 1$ (at any rate strictly positive $c'(x)>0$). This happens when
$c(x-1)-c(x)-2\ge 0$ and $c(x) -c(x+1)-2<0$, i.e., when
$$
c(x)\le c(x-1)-2\quad\text{and}\quad c(x)\le c(x+1)+1
$$
\vspace{-1cm}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=7cm]{SPZ4.eps}
\end{center}
\vspace{-1cm}
\caption{Three examples of triples $c(x-1),c(x),c(x+1)$ satisfying the conditions $c(x)\le c(x-1)-2$ and $c(x)\le c(x+1)+1$ with a critical jump at the pair of cells $x-1,x$, corresponding to a gain of one granule in the central cell.}
\label{fig:SPZ4}
\end{figure}
\newpage
\end{enumerate}
From the analysis of these four cases we see that the condition $c'(x)\ge 0$ holds for any $x\in\IZ$. This guarantees that the final configuration belongs to $\IN^\IZ$; in other words, the local rule (1a) generates a global transition $c\in\IN^\IZ\xrightarrow{F} c'\in\IN^\IZ$.
Moreover, owing to the fact that the Heaviside function can take only the two values 1 or 0, these four possibilities (i.e., 11,\,00,\,01,\,10) exhaust all the possible cases relative to the pair of terms $\mathrm{H}(c(x-1) - c(x) -2)$ and $\mathrm{H}(c(x)-c(x+1) -2)$ appearing in equation (1a), thus providing all the possible local behaviors of this equation.
The following is an example of global dynamics generated by the local rule (1a), in its only four possibilities (SPZ1)--(SPZ4).
\begin{example}
The initial configuration $c_0=(\bar{0},|1,6,4,2,2,0,\bar{0})$, by the parallel application of the local rule (1a), is transformed into the updated configuration $c_1=F(c_0)=(\bar{0},|1,5,4,3,1,1,\bar{0})$.
The configuration $c_0$ presents three critical jumps, $6,4$, $4,2$ and $2,0$, and the parallel global transition $c_0\xrightarrow{F}\;c_1$ is the result of the following single transitions of the involved triplets:
\begin{equation*}
\begin{tabular}{|c|c|}
\hline
\text{Transitions} & \text{Rules}
\\
\hline
$0,0,1\to *,0,*$ & \text{(SPZ2)}
\\
$0,1,6\to *,1,*$ & \text{(SPZ2)}
\\
$1,6,4\to *,5,*$ &\text{(SPZ3)}
\\
$6,4,2\to *,4,*$ &\text{(SPZ1)}
\\
$4,2,2\to *,3,*$ & \text{(SPZ4)}
\\
$2,2,0\to *,1,*$ & \text{(SPZ3)}
\\
$2,0,0\to *,1,*$ & \text{(SPZ4)}\\
\hline
\end{tabular}
\end{equation*}
Let us note that in the global transition $c_0\to c_1=F(c_0)$ each single granule movement happens, when it happens, from the left (where a granule is lost) towards the right (where a granule is gained).
For instance, the adjacent triple $6,4,2$ is transformed into the triple $5,4,3$ and the pair $2,0$ into the pair $1,1$.
All this is coherent with the parallel application of the vertical rule (1a).
\end{example}
\subsubsection{The transition from the one-dimensional case on $\IZ$ to the case on $\IN$}
\label{ss:Z-to-N}
In the one-dimensional context of the lattice of cells $\IZ$, described by the local rule of equation (1a), if one considers the input configuration $c=(\bar{0},0,c(x_0)\neq 0,\ldots)$, then in the output configuration the cell at place $x_0-1$ is in the microstate $c'(x_0-1) = 0 + \mathrm{H}(0-0-2) - \mathrm{H}(0-c(x_0)-2)= 0$.
Moreover, for all the other cells at places $x_0-n$, $n\ge 2$, trivially the corresponding microstate is $c'(x_0-n)=0$.
In conclusion we obtain the parallel global transition $(\overline{0},0,c(x_0)\neq 0,\ldots)\xrightarrow{F}\;(\overline{0},0,c'(x_0),\ldots)$, whatever the local transition of the microstate at place $x_0$, $c(x_0)\to c'(x_0)$, may be.
\begin{description}
\item[Conclusion 1]
If on the lattice of cells $\IZ$ one has a configuration in which all the cells from $-\infty$ to $x_0-1$ have \emph{no} granules (i.e., they are empty) while $c(x_0)\neq 0$, then the local rule (1a) guarantees that during the whole parallel update all these cells remain empty. Formally, under the parallel action of the local rule (1a) one has the following global transition.
Let $c(x_0)\neq 0$, then
$$
(\bar{0},0,c(x_0),\ldots)\xrightarrow{\;F\;} (\bar{0},0,c'(x_0),\ldots)
$$
Moreover, if one takes into account that $\mathrm{H}(c(x_0-1)-c(x_0)-2)=\mathrm{H}(-c(x_0)-2)=0$, the parallel sandpile global dynamics on $\IZ$ according to the local rule (1a) for configurations of the type
$(\bar{0},0,c(x_0),c(x_0+1),\ldots)$, with $c(x_0)\neq 0$, is formalized in the following local rule behaviour relative to the cell $x_0$:
$$
c'(x_0)= c(x_0) -\mathrm{H}(c(x_0)-c(x_0+1)-2)
$$
\end{description}
In other words, the parallel sandpile dynamics on $\IZ$ for configurations of the kind
\linebreak
$(\bar{0},0,c(x_0),c(x_0+1),\ldots)$ can be identified with the parallel sandpile dynamics on the lattice of cells $\IN(x_0)=(x_0,x_0+1,\ldots,x_0+n,\ldots)$ for configurations $(c(x_0),c(x_0+1),\ldots)$ \virg{specified by the following local rule:
\begin{align*}
(1-i)\qquad&c'(x_0)=c(x_0)-\mathrm{H}(c(x_0)-c(x_0+1)-2)
\\
(1-ii)\qquad&c'(x)=c(x)+\mathrm{H}(c(x-1) -c(x) -2) - \mathrm{H}(c(x) - c({x+1}) -2),\quad\oppA x>x_0.
\end{align*}
For this latter updating scheme we define the global transition function $F$ as
$$
F:\IN^{\IN(x_0)}\to\IN^{\IN(x_0)}, \quad c\to c'=F(c)
$$
where $\oppA x\in{\IN(x_0)}$, $c'(x)=(F(c))(x)$ is defined [according to the pair of equations] (1-i,1-ii).} (From Goles \& Kiwi \cite{GK93} in which these last considerations are treated in the particular case of $x_0=0$).
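A possible transcription of the pair (1-i)--(1-ii) on the half-lattice $\IN(x_0)$, with the leftmost listed cell playing the role of $x_0$ (the bookkeeping for a granule falling past the last listed cell is our own illustrative choice):

```python
def heaviside(n):
    return 1 if n >= 0 else 0

def step_half_lattice(c):
    """Rules (1-i)/(1-ii): the leftmost cell x0 has no incoming term,
    every other cell follows the full rule; a granule falling past the
    last listed cell opens a new one."""
    def right(x):                 # cells beyond the listed support are empty
        return c[x + 1] if x + 1 < len(c) else 0
    out = [c[0] - heaviside(c[0] - right(0) - 2)]          # rule (1-i)
    for x in range(1, len(c)):                             # rule (1-ii)
        out.append(c[x] + heaviside(c[x - 1] - c[x] - 2)
                        - heaviside(c[x] - right(x) - 2))
    if c and heaviside(c[-1] - 2):                         # support growth
        out.append(1)
    return out
```

For instance the configuration $5,4,3,0$ is updated to $5,4,2,1$, in agreement with the dynamics on $\IZ$ for configurations with empty cells on the left.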
\subsection{The cellular automata interpretation of the parallel sandpile model and related deterministic dynamics}
\label{sec:GK-CA}
\par\noindent\\
Coming back to the general sandpile theory on the one-dimensional lattice of cells $\IZ$, let us see how the sandpile local rule expressed by equation (1a) can be obtained in the context of a one-dimensional cellular automata (CA) model on the same lattice of cells.
Precisely, let us consider the one dimensional \emph{elementary} CA $\para{d,\mathcal{A},r,f}$ of \emph{dimension} $d=1$, based on the infinite \emph{alphabet} $\mathcal{A}=\IN$, of radius $r=1$ and \emph{local rule} given by the mapping $f:\IN^3\to\IN$, formally defined as follows:
\\
$\oppA (v,a,w)\in\IN^3$,
$$
f(v,a,w)= a + \mathrm{H}(v-a-2) - \mathrm{H}(a-w-2) =
\begin{cases}
a-1 &\text{if $v-1\le a$ and $w+2\le a$}
\\
a+1 &\text{if $a\le v-2$ and $a\le w+1$}
\\
a &\text{otherwise}
\end{cases}
$$
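The local rule $f$ and its three-case closed form can be cross-checked by direct computation; the following is an illustrative sketch, with the case labels referring to (SPZ1)--(SPZ4):

```python
def heaviside(n):
    return 1 if n >= 0 else 0

def f(v, a, w):
    """CA local rule of radius 1: f(v,a,w) = a + H(v-a-2) - H(a-w-2)."""
    return a + heaviside(v - a - 2) - heaviside(a - w - 2)
```

For instance `f(9, 4, 2) == 4` (case (SPZ1): the central value is unchanged), `f(0, 3, 1) == 2` (a loss, (SPZ3)), `f(5, 2, 3) == 3` (a gain, (SPZ4)), and `f(2, 2, 1) == 2` ((SPZ2)).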
Relatively to such elementary CA structure the \emph{discrete time dynamical system} (DTDS) is the pair $\para{\Omega,F_f}$ where the \emph{state space} is the collection $\Omega=\IN^\IZ$ of all bi-infinite sequences $c:\IZ\to \IN$ and the \emph{next state mapping} induced from the local rule is the mapping $F_f:\IN^\IZ\to\IN^\IZ$ transforming the input state $c\in\IN^\IZ$ into the output state $F_f(c)\in\IN^\IZ$ specified by the law
\\
$\oppA x\in\IZ$,
$$
[F_f(c)](x)=f\big(c(x-1),c(x),c(x+1)\big)
$$
Denoting $c'(x):=[F_f(c)](x)$, this output CA state can also be written as follows
\\
$\oppA x\in\IZ$,
$$
c'(x)=f(c(x-1),c(x),c(x+1))=c(x) + \mathrm{H}(c(x-1) - c(x) -2) - \mathrm{H}(c(x)-c(x+1) -2).
$$
which is formally just the final number of grains located in the cell $x$ expressed in subsection \ref{sc:1sp-par} by equation (1a) of the one-dimensional parallel approach to sandpiles (SP): $\oppA x\in\IZ$, $[F_f(c)](x)=[F(c)](x)$; in other words, we can identify the two maps $F_f=F$.
For any fixed configuration $c_0$ this CA next state mapping, or equivalently SP global transition function, $F$ induces an
\emph{orbit} (or \emph{trajectory}, or also \emph{path}) of initial state $c_0$,
$$
\gamma_{c_0}\equiv c_0\xrightarrow{F} c_1=F(c_0)\xrightarrow{F} c_2=F(c_1)=F^2(c_0)\xrightarrow{F}\ldots
$$
described by the sequence of configurations
$$\gamma_{c_0}:\IN\to\IN^\IZ,\: t\to \gamma_{c_0}(t)=c_t,$$
where the general state of this dynamical evolution is expressed by the law
$$
\oppA t\in\IN\setminus\{0\},\quad c_{t}=F(c_{t-1}) = F^t(c_0).
$$
This orbit satisfies the following two Cauchy conditions of a first order difference equation:
$$
\begin{cases} \gamma_{c_0}(t+1) =F(\gamma_{c_0}(t))&\text{for every time instant}\; t\in\IN
\\
\gamma_{c_0}(0)=c_0
\end{cases}
$$
In this dynamical context the following definition turns out to be very important.
\begin{definition}
By definition, $c_{eq}\in\IN^\IZ$ is an \emph{equilibrium configuration} iff $F(c_{eq})=c_{eq}$. Indeed, if at a time instant $\widehat t$ the dynamical evolution reaches this state, $F^{\widehat t}(c_0)=c_{eq}$, then at any successive time instant $t\ge \widehat t$ it is $F^t(c_0)=c_{eq}$.
\end{definition}
\begin{lemma}\label{lm:eq-conf}
The configuration $c_{eq}\in\IN^\IZ$ is an \emph{equilibrium configuration} of the parallel dynamics generated by the local rule (1a) iff it is a stable configuration, i.e., $\oppA x\in\IZ$, $c_{eq}(x)-c_{eq}(x+1)\le 1$ (which can be defined as condition of \emph{sub-critical} jump).
\end{lemma}
\begin{proof}
The condition of equilibrium is, by definition, $F(c_{eq})=c_{eq}$, and from equation (1a) this condition is equivalent to $\oppA x\in\IZ$, $\mathrm{H}(c_{eq}(x-1) - c_{eq}(x) -2) = \mathrm{H}(c_{eq}(x)-c_{eq}(x+1) -2)=0$. From the property of the Heaviside function, these two identities are equivalent to the two conditions
$\oppA x\in\IZ$, $c_{eq}(x)-c_{eq}(x+1)\le 1$ and $\oppA x\in\IZ$, $c_{eq}(x-1) - c_{eq}(x)\le 1$. But the second is nothing else than the first, since putting $x=\widehat{x}+1$ in the latter we get $\oppA\widehat{x}\in\IZ$, $c_{eq}(\widehat{x})-c_{eq}(\widehat{x}+1)\le 1$.
\end{proof}
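The stability criterion of Lemma \ref{lm:eq-conf} translates, for finite support configurations, into a one-line predicate (an illustrative sketch of ours, in which the cells outside the listed support are empty):

```python
def is_equilibrium(c):
    """A finite support configuration is an equilibrium iff every jump
    c(x) - c(x+1), border cells included, is sub-critical (<= 1)."""
    padded = [0] + list(c) + [0]
    return all(padded[x] - padded[x + 1] <= 1 for x in range(len(padded) - 1))
```

For instance the configuration of Example \ref{ex:1354321} passes the test, while $5,4,3$ fails it because of the jump $3,0$ at the right border.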
\begin{example}\label{ex:1354321}
Let us consider the configuration in $\IN^\IZ$ expressed by the sequence
$(\overline{0},1,3|5,4,3,2,1,\overline{0})$, depicted in the following figure.
\vspace{-2cm}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=7cm]{equil-543.eps}
\end{center}
\caption{Example of a finite support equilibrium configuration, according to the previous Lemma \ref{lm:eq-conf}.}
\label{fig:543}
\end{figure}
The following table shows the satisfaction of all the conditions $\oppA x\in\IZ$, $c(x)-c(x+1)\le 1$ expressed in Lemma \ref{lm:eq-conf} in order to have an equilibrium configuration.
\begin{equation*}
\begin{tabular}{|c|c|c|}
\hline
$c(x)$ & $c(x+1)$ & $c(x)-c(x+1)$ \\
\hline\hline
\mbox{\rule[0cm]{0cm}{2.5ex} $0$} & 1 & $-1$ \\
1 & 3 & $-2$\\
$3$ & $5$ & $-2$ \\
5 & 4& 1\\
4 & 3 & 1\\
3 & 2 & 1\\
2 & 1 & 1\\
1 & 0 & 1\\
\mbox{\rule[0cm]{0cm}{1.5ex} $0$} & 0 & 0\\
\hline
\end{tabular}
\end{equation*}
\end{example}
\begin{example}\label{ex:bool-equil}
The following is an interesting example of an equilibrium configuration in which each cell contains at most one sand granule, i.e., it is a Boolean configuration: $\oppA x\in\IZ$, $c(x)\in\parg{0,1}$.
\vspace{-4cm}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=8cm]{2equil-a.eps}
\end{center}
\vspace{-1cm}
\caption{Example of a Boolean equilibrium configuration.}
\label{fig:2-eq}
\end{figure}
Of course, any Boolean configuration is of equilibrium since it trivially presents sub-critical jumps $\oppA x\in\IZ$, $c(x)-c(x+1)\in\parg{-1,0,1}$.
\end{example}
\begin{example}
The configuration $(\overline{0}|5,4,3,\overline{0})$ is not an equilibrium configuration. Indeed, as shown by the following table, not all the conditions required by Lemma \ref{lm:eq-conf} are satisfied.
\begin{equation*}
\begin{tabular}{|c|c|c|}
\hline
$c(x)$ & $c(x+1)$ & $c(x)-c(x+1)$ \\
\hline\hline
\mbox{\rule[0cm]{0cm}{2.5ex} $5$} & 4 & $1$ \\
$4$ & $3$ & $1$ \\
3 & 0& 3\\
\mbox{\rule[0cm]{0cm}{1.5ex} $0$} & 0 & 0\\
\hline
\end{tabular}
\end{equation*}
In particular, rule (1a) generates the dynamical transition $(\overline{0}|5,4,3,0,\overline{0})\xrightarrow{F} (\overline{0}|5,4,2,1,\overline{0})$, as a consequence of the presence of the unique critical jump $3,0$.
\end{example}
\begin{example}\label{ex:815}
Let us consider the configuration $c_0=(\overline{0},|8,1,5,\overline{0})$ as initial state of the dynamical evolution
$c _t \xrightarrow{F} c_{t+1}$
obtained by the parallel application of the local rule (1a). The following table represents this dynamical evolution, which ends after $t=6$ time steps at the equilibrium configuration $c_{eq}=(\overline{0},|4,4,3,2,1,\overline{0})$; on the right we report the total number of granules of each configuration of the orbit.
\begin{align*}
c_0&=8,1,5 & N(c_0)&=14
\\
c_1&=7,2,4,1 & N(c_1)&=14
\\
c_2&=6,3,3,2 & N(c_2)&=14
\\
c_3&=5,4,3,1,1 & N(c_3)&=14
\\
c_4&=5,4,2,2,1 & N(c_4)&=14
\\
c_5&=5,3,3,2,1 & N(c_5)&=14
\\
c_6&=4,4,3,2,1=c_{eq} & N(c_6)&=14
\end{align*}
Of course, $\oppA t\ge 6$, $c_t=c_{eq}= (\overline{0},|4,4,3,2,1,\overline{0})$.
Moreover, this orbit satisfies the principle of \emph{invariance of the total number of granules}
$
\oppA t\in\IN,\; N({c_t}) =\sum_{x\in\IZ} c_t(x)=14.
$
\end{example}
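The orbit above can be reproduced with a few lines of Python (our sketch, for illustration; the list holds the finite support, and a grain moves one cell to the right wherever the jump is critical, i.e., at least $2$):

```python
def H(z):
    """Heaviside step: 1 if z >= 0, else 0."""
    return 1 if z >= 0 else 0

def step(c):
    """One parallel application of local rule (1a):
    c'(x) = c(x) + H(c(x-1)-c(x)-2) - H(c(x)-c(x+1)-2)."""
    p = [0] + list(c) + [0, 0]           # room for the rightward spill
    nxt = [p[x] + H(p[x-1] - p[x] - 2) - H(p[x] - p[x+1] - 2)
           for x in range(1, len(p) - 1)]
    while nxt and nxt[-1] == 0:          # drop unused padding
        nxt.pop()
    while nxt and nxt[0] == 0:
        nxt.pop(0)
    return nxt

def orbit(c0):
    """Iterate F from c0 until a fixed point (equilibrium) is reached."""
    states = [list(c0)]
    while (nxt := step(states[-1])) != states[-1]:
        states.append(nxt)
    return states
```

Running it on $c_0=8,1,5$ reproduces the seven configurations of the table above, together with the invariance of the total number of granules $N=14$.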
The final result about the invariance of the total number of grains during the dynamical evolution is not an \virg{accident} of this particular example. Indeed, the following general result can be proved.
\begin{proposition}\label{pr:N-invariance}
For any initial state $c_0$ the corresponding orbit $c:\IN\to\IN^\IZ$, $t\to c_t=F^t(c_0)$ satisfies the principle of \emph{invariance of the total number of granules}
$$
\oppA t\in\IN,\; N(c_t) =\sum_{x\in\IZ} c_t(x)= \sum_{x\in\IZ} c_0(x) = N(c_0).
$$
\end{proposition}
\begin{example}\label{ex:5421-N}
Here we refer back to Example \ref{ex:5421} discussed in subsection \ref{sc:1sp-seq}. In the left column of Figure \ref{fg:5421-bis} the sequential procedure is drawn with dashed lines and the parallel procedure with continuous lines. The central column of the same figure isolates the parallel dynamics alone, while the column on the right highlights the invariance of the total number of granules, $N=12$, for both the sequential and the parallel dynamics.
{\scriptsize{
\begin{figure}[h!]
$$\xymatrix{
{} & 5,4,2,1 \ar[d] &{}
&{}& 5,4,2,1 \ar[d] &{} & N=12
\\
{} & 5,3,3,1 \ar@{-->}[dl]\ar@{-->}[dr] \ar[ddl]& {}
&{}& 5,3,3,1 \ar[dd] &{} & N=12
\\
4,4,3,1\ar@{-->}[d] & {} & 5,3,2,2\ar@{-->}[dll]\ar@{-->}[d]
&{}& {} &{} & N=12
\\
4,4,2,2\ar@{-->}[d]\ar@{-->}[drr] \ar[ddr]& {} & 5,3,2,1,1\ar@{-->}[d]
&{}& 4,4,2,2 \ar[dd] &{} & N=12
\\
4,3,3,2 \ar@{-->}[dr] & {} & 4,4,2,1,1 \ar@{-->}[dl]
&{}& &{} & N=12
\\
{} & 4,3,3,1,1 \ar[d]& {}
&{}& 4,3,3,1,1\ar[d] &{} & N=12
\\
{} & 4,3,2,2,1 & {}
&{}& 4,3,2,2,1 &{} & N=12
}
$$
\caption{Sequential (dashed lines) and parallel (continuous lines) dynamics from the initial state $5,4,2,1$; the total number of granules $N=12$ is invariant.}
\label{fg:5421-bis}
\end{figure}
}}
\newpage
Of course, $(\overline{0},|4,3,2,2,1,\overline{0})$ is the equilibrium configuration of the parallel dynamics since all the sub-critical jump conditions expressed by Lemma \ref{lm:eq-conf} are satisfied.
\end{example}
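The confluence shown by the figure, namely that every maximal sequential (VR) path from $5,4,2,1$ ends in the same fixed point, can also be verified exhaustively. The following Python sketch (ours, for illustration) explores the whole non-deterministic sequential dynamics:

```python
def successors(c):
    """All configurations reachable with one vertical rule (VR) move:
    a grain falls from x to x+1 wherever the jump c(x)-c(x+1) is
    critical (>= 2)."""
    c = list(c) + [0]                    # the support may grow by one cell
    out = []
    for x in range(len(c) - 1):
        if c[x] - c[x + 1] >= 2:
            nxt = c[:]
            nxt[x] -= 1
            nxt[x + 1] += 1
            while nxt and nxt[-1] == 0:  # drop the unused padding zero
                nxt.pop()
            out.append(tuple(nxt))
    return out

def fixed_points(c0):
    """All equilibria reachable by the sequential dynamics from c0."""
    seen, fixed, stack = set(), set(), [tuple(c0)]
    while stack:
        c = stack.pop()
        if c in seen:
            continue
        seen.add(c)
        succ = successors(c)
        if succ:
            stack.extend(succ)
        else:
            fixed.add(c)
    return fixed
```

Both for $5,4,2,1$ and for the concentrated state $6$ the set of reachable fixed points is a singleton, in agreement with the confluence result of \cite{GK93}.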
\subsection{Sandpile dynamics of initial state whose support is perfect (finite and simply connected)}
\par\noindent\\
Let us observe that in both examples \ref{ex:815} and \ref{ex:5421-N} treated in subsection \ref{sec:GK-CA} the dynamics involved converge to an equilibrium configuration in a finite number of steps, keeping the total number of granules constant.
In this section we want to show that this behavior is not exceptional: both are particular cases of the general behavior of the sandpile dynamical evolution whose initial configuration $c_0\in\IN^\IZ$ has perfect (i.e., finite and simply connected) support and total number of granules $N$.
Before addressing this topic, let us introduce an interesting result, where for simplicity we denote by $[0,N-1]=\parg{0,1,\ldots,N-1}$ (resp., $[0,N]=\parg{0,1,2,\ldots,N}$).
\begin{lemma}\label{lm:N!}
The collection $[0,N]^{[0,N-1]}$ is of finite cardinality equal to $N!$. Formally,
$$
|[0,N]^{[0,N-1]}|=N!
$$
\end{lemma}
\begin{proof}
In order to prove this relationship we adopt the procedure used in statistical thermodynamics.
We have to count the number of mappings $\parg{0,1,2,\ldots,N-1}\to\parg{0,1,2,\ldots,N}$. Let us consider the co-domain $\parg{0,1,2,\ldots,N}$ as consisting of $N+1$ originally empty boxes, and the domain $\parg{0,1,2,\ldots,N-1}$ as consisting of $N$ distinguishable balls. Fixing the first box $y=0$, it can be filled by a ball in $N$ different ways.
At this point there remain $N-1$ balls with which the second box $y=1$ can be filled.
Thus, the pair of boxes $y=0,1$ can be filled by balls in $N (N-1)$ different ways, leaving $N-2$ balls available.
Then, the third box $y=2$ can be filled in $N-2$ different ways, and so the triple of boxes $y=0,1,2$ can be filled in $N(N-1)(N-2)$ different ways.
Continuing in this way until the balls are exhausted in filling all the available boxes, we arrive at the desired relationship.
\end{proof}
In the sequel let us denote by $\Omega(N)$ the \emph{state space} collection of \emph{all} bi-infinite configurations $c=(\overline{0},|c(0),\ldots,c(l),\overline{0})\in\IN^\IZ$ such that:
\\
(1) they have $N$ as total number of granules ($\sum c(x)=N$);
\\
(2) according to subsection \ref{ss:Z-to-N} they can be considered as elements of $\IN^\IN$ in the sense that
$c(x)=0$ for every $x< 0$;
\\
(3) they are of perfect (i.e., finite and simply connected) support ($\oppE l\in\IN\setminus\{0\}$ s.t. $c(x)\neq 0$ for every $0\le x\le l$ and $c(x)=0$ for every $x>l$);
\\
(4) of length at most equal to $N$ ($l\le N$).
\begin{remark}
In a generic configuration $c=(\overline{0},|c(0),c(1),\ldots,c(x),\ldots, c(l),\overline{0})\in\Omega(N)$, the condition $c(0)+c(1)+\ldots+c(x)+\ldots+c(l)=N$, with $c(x)\ge 0$ for every $x$, defines $c$ as a \emph{generalized partition} of $N$.
So from this point of view $\Omega(N)$ is the collection of all \emph{generalized partitions} of $N$.
According to \cite{Br73} a \emph{partition} of $N$ (also \emph{ordered partition} according to \cite{GK93}) is a generalized partition satisfying the further condition of non-increasing: $c(0)\ge c(1)\ge \ldots \ge c(x)\ge \ldots \ge c(l)$.
The collection of all ordered partitions of $N$ will be denoted by $S(N)$. So $S(N)\incl \Omega(N)$.
\end{remark}
Let us start this general investigation from the case in which the initial configuration $c_0\in\Omega(N)$, i.e., it is of the kind $c_0=(\overline{0},|c_0(0),\ldots,c_0(l_0),\overline{0})$, with $\sum_{k=0}^{l_0}c_0(k)=N$ and $\oppA 0\le j\le l_0$, $c_0(j)\neq 0$, underlining some important properties that characterize the dynamics.
\begin{enumerate}[(Pr1)]
\item
The initial configuration $c_0\in\Omega(N)$ will \virg{vary} between the two extremal cases
\linebreak
$0:=(\overline{0},|\underbrace{(1,1,\ldots,1)}_{N\ \text{times}},\overline{0})$
and
$1:=(\overline{0},|N,\overline{0})$.
\item
From the condition $\sum_{k=0}^{l_0}c_0(k)=N$ it follows that $\oppA k$, $c_0(k)\le N$.
\item
From Proposition \ref{pr:N-invariance}, which assures the invariance of the total number of granules during the dynamical evolution, it follows that any state of the orbit $\gamma_{c_0}(t)=c_t$, for time $t\in\IN$, is of the kind $c_t=(\overline{0},|c_t(0),\ldots, c_t(l_t),\overline{0})$, with $0<l_t\le N$ and $\sum_{x=0}^{l_t}c_t(x)=N$. So any state $\gamma_{c_0}(t)=c_t\in\Omega(N)$.
\end{enumerate}
The previous properties allow us to consider $[0,N]^{[0,N-1]}$ as the framework of the state space $\Omega(N)$, and so also of $S(N)$, instead of $\IN^\IZ$.
\\
Summarizing, we consider the following three sets:
\begin{equation*}
\begin{tabular}{ll}
$[0,N]^{[0,N-1]}$&collection of mappings $c$ from $[0,N-1]$ to $[0,N]$,
\\
$\Omega(N)$ &collection of mappings $c$ from $[0,N-1]$ to $[0,N]$, s.t.\til $\sum_x c(x)=N$,
\\
$S(N)$ &collection of mappings $c$ from $[0,N-1]$ to $[0,N]$, s.t.\til $\sum_x c(x)=N$, non-increasing.
\end{tabular}
\end{equation*}
From the trivial chain of set inclusions $S(N)\incl \Omega(N)\incl [0,N]^{[0,N-1]} $ and the fact that the cardinality of $[0,N]^{[0,N-1]}$ is finite and equal to $N!$ (Lemma \ref{lm:N!}), we have that also $S(N)$ and $\Omega(N)$ have finite cardinality with
\begin{equation}\label{eq:ub-SO}
|S(N)|\le |\Omega(N)| \le N!
\end{equation}
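The chain of inclusions, and hence the bound \eqref{eq:ub-SO}, can be checked for small $N$ by brute-force enumeration. In the Python sketch below (ours, for illustration) $\Omega(N)$ is realized as the set of compositions of $N$, i.e., tuples of positive parts summing to $N$, in accordance with conditions (1)--(4):

```python
from math import factorial

def omega(n):
    """Generalized partitions of n: ordered tuples of positive integers
    summing to n (finite, simply connected support of length <= n)."""
    if n == 0:
        return [()]
    return [(k,) + rest for k in range(1, n + 1) for rest in omega(n - k)]

def s(n):
    """Ordered partitions S(n): the non-increasing elements of omega(n)."""
    return [c for c in omega(n)
            if all(c[i] >= c[i + 1] for i in range(len(c) - 1))]
```

In fact $|\Omega(N)|=2^{N-1}$, well below the crude upper bound $N!$ as soon as $N\ge 3$.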
\section{The dynamical evolution from a non-increasing sandpile initial configuration}
\par\noindent\\
Let us summarize in the present section the dynamical evolution starting from a non-increasing initial state
as described in section 2, entitled \virg{\emph{General result about sandpiles}}, of \cite{GK93}.
To be precise, let us consider the collection of all the \emph{ordered partitions} of $N$
$$
S(N)=\Big\{c\in\IN^\IN:\oppA x\in\IN,\;c(x)\ge c(x+1),\,\sum_{x\in\IN} c(x)=N\Big\}\incl\Omega(N)
$$
Let us recall the two sandpile dynamics introduced in section \ref{sc:1sp-mpdel}:
\begin{enumerate}
\item[(S)]
The \emph{sequential} dynamics updates the cells of any configuration, one by one, in a prescribed order, in general from left to right, according to the \emph{vertical local rule} (VR).
\item[(P)]
The \emph{parallel} dynamics updates all the cells of any configuration synchronously, according to the \emph{local rule} (1a).
\end{enumerate}
Then the following is proved.
\\ \\
\textbf{Corollary 2.4 of \cite{GK93}}.\label{cor:GK93}
Given any initial configuration [$c_0\in S(N)$], both the SPM sequential and parallel dynamics converge towards the same fixed point, i.e., equilibrium state.
\par\noindent
[We can add the information that both the sequential and the parallel \emph{transient time} to reach the equilibrium configuration cannot be greater than the upper bound $N!$ of the cardinality $|S(N)|$ of the space $S(N)$ (see equation \eqref{eq:ub-SO}).]
Furthermore, the following properties are verified (from \cite{Go92}, \cite{GK93}).
\begin{enumerate}[(SN1)]
\item
For any non-negative integer $N$ there exist two non-negative integers $k,k'\in\IN$
such that $N$ can be written as
$$
N=\frac{1}{2}\,k(k+1)+k',\; \text{with}\; k'\le k
$$
\item
Any sequential orbit (trajectory) whose initial sandpile state is concentrated at the origin, $c_0=(N,\overline{0})\in S(N)$, converges to the fixed point (equilibrium state)
$$
c^{(k,k')}_{eq}=(k,k-1,\ldots,k-j=k'+1,k',k',k'-1,\ldots,2,1,\overline{0}),\quad \text{for}\; j=(k-k')-1
$$
\item
The sequential \emph{transient time} to reach the fixed point $c^{(k,k')}_{eq}$ is exactly
$$
T(N)=\binom{k+1}{3}+k\,k'-\binom{k'}{2}
$$
\end{enumerate}
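Properties (SN1)--(SN3) translate directly into code. The following Python sketch (ours, for illustration) computes the decomposition $N=\frac{1}{2}k(k+1)+k'$, the fixed point $c^{(k,k')}_{eq}$ and the sequential transient time $T(N)$:

```python
from math import comb

def staircase(N):
    """(SN1): the unique k, k' with N = k(k+1)/2 + k' and 0 <= k' <= k."""
    k = 0
    while (k + 1) * (k + 2) // 2 <= N:
        k += 1
    return k, N - k * (k + 1) // 2

def equilibrium(N):
    """(SN2): the staircase k, k-1, ..., k'+1 with k' doubled,
    then k'-1, ..., 2, 1 (zero entries are dropped)."""
    k, kp = staircase(N)
    cfg = list(range(k, kp, -1)) + [kp, kp] + list(range(kp - 1, 0, -1))
    return [v for v in cfg if v > 0]

def transient_time(N):
    """(SN3): T(N) = C(k+1, 3) + k*k' - C(k', 2)."""
    k, kp = staircase(N)
    return comb(k + 1, 3) + k * kp - comb(kp, 2)
```

For $N=6$ this gives $k=3$, $k'=0$, the fixed point $(3,2,1,\overline{0})$ and $T(6)=4$.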
\begin{example}
In the following figure we draw on the left side the sequential (VR) dynamical evolution of the sandpile model and on the right side the parallel one (PT), both with initial state $(6,0,0,0,0,0)$.
As expected from the general discussion, both dynamics converge in a finite number of steps to the same fixed point (equilibrium configuration) $(3,2,1,0,0,0)$.
\begin{figure}[h!]
$$
\xymatrix{ & & 6 & &
6
\\
& & 5,1\ar@{<-}[u]^{(VR)} & &
5,1\ar@{<-}[u]_{(PT)}
\\
& & 4,2\ar@{<-}[u]^{(VR)} & &
4,2\ar@{<-}[u]_{(PT)}
\\
& 3,3 \ar@{<-}[ur]^{(VR)} & & 4,1,1 \ar@{<-}[ul]_{(VR)} &
{}
\\
& & 3,2,1 \ar@{<-}[ul]^{(VR)}\ar@{<-}[ur]_{(VR)} & &
3,2,1\ar@{<-}[uu]_{(PT)}
}
$$
\caption{Sequential (VR, left) and parallel (PT, right) dynamics from the initial state $(6,0,0,0,0,0)$.}
\label{fig:060}
\end{figure}
In agreement with the above point (SN3), in the sequential dynamics the equilibrium state is reached after four time steps: indeed, from $6=\frac{3\cdot 4}{2}$ ($k=3$ and $k'=0$), we get $T(6)=\binom{4}{3} +3\cdot 0 -\binom{0}{2}= 4$.
The parallel dynamics reaches the same equilibrium state after three time steps.
The sequential dynamics can be decomposed into the two orbits of the same initial state $c_0=(6,0,0,0,0,0)$, both converging to the same final equilibrium state $c_{eq}=(3,2,1,0,0,0)$:
\begin{align*}
\gamma_{c_0}^{(1)}=&6 \xrightarrow{(VR)}\; 5,1 \xrightarrow{(VR)}\; 4,2 \xrightarrow{(VR)}\; 3,3 \xrightarrow{(VR)}\; 3,2,1
\\
\gamma_{c_0}^{(2)}=&6 \xrightarrow{(VR)}\; 5,1 \xrightarrow{(VR)}\; 4,2 \xrightarrow{(VR)}\; 4,1,1 \xrightarrow{(VR)}\; 3,2,1
\end{align*}
\end{example}
\subsection{Failure of the one-dimensional generalization of the local rule (1a) to the case of a generic neighborhood}
\label{ss:failure}
\par\noindent
Let us now write the local rule (1a) for the parallel upgrade of a one-dimensional sandpile on the lattice $\IZ$ in a form which can lead to a \emph{possible} generalization to the case of any neighborhood not containing the cell 0.
\begin{align*}
c'(x) &= c(x) + \mathrm{H}(c(x-1) - c(x) -2) - \mathrm{H}(c(x)-c(x+1) -2)
\\
&= c(x) + \sum_{y\in\{-1,+1\}} y\, \mathrm{H}(y\,(c(x-y) - c(x))-2)
\end{align*}
From this formulation it follows that a possible one-dimensional generalization consists in introducing a \emph{neighborhood} $\mathcal{N}$ as a finite subset of $\IZ\setminus\{0\}$ and a \emph{generalized distribution function} $\mathcal{G}:\mathcal{N}\to \IZ$ w.r.t. the neighborhood $\mathcal{N}$, with associated \emph{stability threshold} $\theta=\sum_{y\in\mathcal{N}} |\mathcal{G}(y)|$. Note that this distribution function is \emph{generalized} in the sense that it can assume not only positive values but also negative ones.
The \emph{global transition function} of a generalized one-dimensional sandpile model on $\IZ$ \emph{could} therefore be formalized as a mapping $F$ assigning to any input configuration $c\in\IN^\IZ$ the output configuration $F(c)\in\IN^\IZ$ obtained by the parallel application to any cell $x\in\IZ$ of the following \emph{local rule}:
\\
$\oppA c\in\IN^\IZ$, $\oppA x\in\IZ$,
$$
(F(c))(x):= c(x) + \sum_{y\in\mathcal{N}} \mathcal{G}(y)\, \mathrm{H}(\mathcal{G}(y)\, (c(x-y)-c(x))-\theta)
\leqno{\text{(1g)}}
$$
The sandpile global rule (1a) seen above is the particular case of this generalized global rule (1g) under the choices of $\mathcal{N}=\{-1,+1\}$ and $\mathcal{G}(y)=y$ for any $y=\pm 1$, from which it follows that $\theta=2$.
We introduced the generalized local rule (1g) using the conditional \virg{could} since all this makes sense only if we can settle the following
\begin{description}
\item[Open Question]
Since a configuration of a sandpile must associate to each cell the number, greater than or equal to zero, of granules allocated in this cell, the configuration $F(c)$ must be a quantity greater than or equal to zero in each cell of the lattice $x\in\IZ$, and this is a property that must be proved to be satisfied by (1g) for any cell of the lattice. Formally,
given the local rule (1g), the following non-negativity condition must be demonstrated:
\\
$\oppA\mathcal{N}\incl \IZ\setminus\{0\}$ with $|\mathcal{N}|<\infty$ and $\oppA\mathcal{G}\in{\IZ}^\mathcal{N}$, let $c\in\IN^\IZ$; then
$$
\oppA x\in\IZ,\quad (F(c))(x)\in\IN.
\leqno{\text{(NN)}}
$$
\end{description}
This condition is problematic to prove, given the great arbitrariness in the choice of the distribution function $\mathcal{G}:\mathcal{N}\to \IZ$. For instance, given a generic neighborhood $\mathcal{N}$, a possibility is the distribution function $\oppA y\in\mathcal{N}$, $\mathcal{G}(y)=-\exp(\sqrt[3]{\arctan(y)})$.
\\
As a first approach we could consider the two quite simple cases $\oppA y\in\mathcal{N}$, $\mathcal{G}_{id}(y)=y$ and $\mathcal{G}_1(y)=1$, with respect to which a first result is the following.
\begin{lemma}
If the neighborhood $\mathcal{N}$ in $\IZ$ is \emph{symmetric}, i.e., $y\in\mathcal{N}$ implies $\,-y\in\mathcal{N}$, then the condition of non-negativity (NN) is verified for the constant generalized distribution $\mathcal{G}_{1}$, but in general the conservation of the total number of granules is not verified.
\end{lemma}
\begin{proof}
Under the symmetry condition of $\mathcal{N}$, in the sum of (1g) the following pairs of terms appear
$$
\mathrm{H}((c(x-y)-c(x))-\theta) + \mathrm{H}((c(x+y)-c(x))-\theta)
$$
whose contribution to the sum is one of the non-negative values 0, 1 and 2, which under the condition $c(x)\ge 0$ maintain the non-negativity of (1g).
Let us now see a simple example in which the conservation of the number of granules fails. Let us consider the neighborhood $\mathcal{N}=\{-1,+1\}$ and the constant generalized distribution on $\mathcal{N}$ equal to 1, with corresponding $\theta=2$. In this case, the local rule (1g) assumes the form:
$$
c'(x)=c(x) + \mathrm{H}(c(x-1)-2) + \mathrm{H}(c(x+1)-2)
$$
Given the configuration $c=(\bar{0},0,4,|0,4,0,\bar{0})$, whose number of granules is 8, then $c'=(\bar{0},1,4,|2,4,1,\bar{0})$ whose number of granules is 12.
\end{proof}
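The counterexample in the proof can be replayed numerically. Below is a minimal Python sketch (ours, for illustration) of rule (1g) for $\mathcal{N}=\{-1,+1\}$ and the constant distribution $\mathcal{G}_1=1$, $\theta=2$:

```python
def H(z):
    """Heaviside step: 1 if z >= 0, else 0."""
    return 1 if z >= 0 else 0

def step_const(c):
    """c'(x) = c(x) + H(c(x-1)-2) + H(c(x+1)-2); cells outside the
    window hold 0 granules, and the window grows by one cell per side."""
    p = [0, 0] + list(c) + [0, 0]
    return [p[x] + H(p[x - 1] - 2) + H(p[x + 1] - 2)
            for x in range(1, len(p) - 1)]
```

On the configuration of the proof the number of granules jumps from 8 to 12, so conservation fails even though non-negativity holds.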
\begin{proposition}\label{pr:-y+y}
In the case of a neighborhood $\mathcal{N}=\{-y,+y\}$, with the integer number $y>0$ fixed, and of the identity generalized distribution $\mathcal{G}_{id}(y)=y$ and $\mathcal{G}_{id}(-y)=-y$, the non-negativity condition (NN) is satisfied only in the two cases $y=1$ and $y=2$.
\end{proposition}
\begin{proof}
Under the hypothesis of the proposition, with respect to which $\theta=2y$, we get
\begin{align*}
c'(x)&=c(x) + y \mathrm{H}(y(c(x-y)-c(x))-2y) -y \mathrm{H}(-y(c(x+y)-c(x)) -2y)
\\
&=c(x) + y \mathrm{H}(y(c(x-y)-c(x))-2y) -y \mathrm{H}(y(c(x)-c(x+y)) -2y)
\end{align*}
Let us discuss all the possible cases with respect to the behaviour of the Heaviside function, whose possible values are 0 or 1.
\\
- If both the values of the two Heaviside functions are equal to zero ($0,0$) or equal to one ($1,1$) we have $c'(x)=c(x)\ge 0$, and so the positivity condition is verified.
\\
- If the first Heaviside function is equal to one and the second equal to zero (case 1,0) we have $c'(x)=c(x)+y\ge 0$ since $y>0$. So also in this case there is no problem with respect to the non-negativity.
\\
Let us note that this case corresponds to $y(c(x-y)-c(x))-2y\ge 0$ and $y(c(x)-c(x+y))-2y< 0$, i.e., when $c(x)\le c(x-y) -2$ and $c(x)<c(x+y)+2$, and this is a situation similar to the (SPZ4) when we put $x-y$ and $x+y$ in place of $x-1$ and $x+1$.
\\
- If the first Heaviside function is equal to zero and the second equal to one (case 0,1) we have that $c'(x)=c(x)-y$.
This happens when the Heaviside arguments are $c(x-y)-c(x)-2<0$ and $c(x)-c(x+y)-2 \ge 0$, respectively, that is
$$
c(x-y)-1\le c(x)\quad\text{and}\quad 2\le c(x+y)+2\le c(x)
$$
The case $y=1$ corresponds to the previously discussed point (SPZ3), which showed no problem with respect to the non-negativity of $c'(x)$.
Also the case $y=2$ does not present any problem with respect to the non-negativity of $c'(x)$. Indeed, in this case $c'(x)=c(x)-2$, with $c(x)\ge 2$.
All the cases $y\ge 3$ are problematic. Let us see only two cases, all the others are obtained accordingly.
\\
Let $y=3$ and $c(x)=2$, then $c'(x)=-1$.
\\
Let $y=4$. If $c(x)=2$, then $c'(x) =-2$; but if $c(x)=3$, then $c'(x)=-1$.
\end{proof}
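A brute-force search confirms the dichotomy of Proposition \ref{pr:-y+y}. The Python sketch below (ours, for illustration) implements rule (1g) for $\mathcal{N}=\{-y,+y\}$ with the identity distribution $\mathcal{G}_{id}$ and looks for a configuration whose update goes negative:

```python
from itertools import product

def H(z):
    """Heaviside step: 1 if z >= 0, else 0."""
    return 1 if z >= 0 else 0

def step_id(c, y):
    """(1g) with N = {-y,+y}, G(+y) = y, G(-y) = -y, theta = 2y;
    cells outside the window hold 0 granules."""
    n = len(c)
    g = lambda i: c[i] if 0 <= i < n else 0
    return [g(x) + y * H(y * (g(x - y) - g(x)) - 2 * y)
                 - y * H(y * (g(x) - g(x + y)) - 2 * y)
            for x in range(n)]

def nn_violation(y, height=4, width=7):
    """First small configuration (if any) violating non-negativity (NN)."""
    for c in product(range(height + 1), repeat=width):
        if any(v < 0 for v in step_id(list(c), y)):
            return c
    return None
```

As expected, the search finds no violation for $y=1$ and $y=2$, while already for $y=3$ a cell holding 2 granules can be driven to the impossible value $-1$.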
\begin{description}
\item[Conclusion 2]
The generalization (1g) of the local rule (1a), although at first sight interesting, cannot be taken into consideration, given the fact that already for very simple symmetric neighborhoods the coherence condition (NN), which requires the non-negativity of the transform $F(c)$ of each generic configuration $c$, is not satisfied.
This condition is needed in order to interpret the quantity $(F(c))(x)$ as the (non-negative) number of sand grains allocated in cell $x$ of the transformed configuration.
\end{description}
Let us note that the local rule (1g), in order to recover the two \virg{canonical} behaviors of Proposition \ref{pr:-y+y} for $y = 1$ and $y = 2$, requires a distribution function $\mathcal{G}_{id}$ which assumes negative values (i.e., it assumes values in $\IZ$ instead of the usual set $\IN_+$). This difficulty can be overcome by considering the following local rule:
\\
$\oppA c\in\IN^\IZ$, $\oppA x\in\IZ$,
$$
(F(c))(x):= c(x) + \sum_{y\in\mathcal{N}} \mathcal{D}(y)\,y\, \mathrm{H}(\mathcal{D}(y)\,y\, (c(x-y)-c(x))-\theta)
\leqno{\text{(1g')}}
$$
Of course, the canonical local rule (1a) can be obtained from this generalization (1g') for the neighborhood $\mathcal{N}=\{-1,+1\}$ and the distribution function $\mathcal{D}_1(-1)=\mathcal{D}_1(+1)=1$. But also in this case one can apply the proof of Proposition \ref{pr:-y+y} in order to obtain the same result: the non-negativity condition (NN) is satisfied only in the two cases $y=1$ and $y=2$.
\section{From the one-dimensional dynamics on the number of granules $c(x)$, to the dynamics of height difference $h(x)$}
\label{sec:da-c-a-h}
In \cite{GK93} and \cite{Go92}, besides the parallel sandpile dynamics of the number of granules expressed by the local transition $x\to c(x)$ formalized by equation (1a), reference is also made to the derived dynamics of the height difference between successive positions along the sandpile:
$$
h(x):=c(x)-c(x+1)
\leqno{\text{(ED)}}
$$
where, for the moment, we have deliberately not specified the domain of variability of the $x$ position in the lattice of cells, which can be $\IN$ or $\IZ$.
\subsection{The case when the space is the lattice $\IZ$}
\par\noindent
Given a configuration $c:\IZ\to\IN$ the local rule performs the following two changes:
\begin{align*}
c'(x) &= c(x)+\mathrm{H}(c(x-1) -c(x) -2) - \mathrm{H}(c(x) - c({x+1}) -2)
\\
c'(x+1) &= c(x+1)+\mathrm{H}(c(x) -c(x+1) -2) - \mathrm{H}(c(x+1) - c({x+2}) -2)
\end{align*}
Let $h(x) := c(x) - c(x+1)$ be the \emph{difference of heights}, relative to some initial configuration $c$, of two adjacent cells $x$ and $x+1$, as in (ED). We can infer that the difference of heights $h'(x)$ in configuration $c'$ is:
\begin{align*}
h'(x):=c'(x)-c'(x+1) = &[c(x)-c(x+1)] -2 \mathrm{H}([c(x)-c(x+1)]-2)
\\
&+\mathrm{H}([c(x-1)-c(x)]-2) + \mathrm{H}([c(x+1) - c({x+2})] -2)
\end{align*}
From the definition (ED), we can infer the \emph{local rule} for the parallel update of the difference of heights $\oppA x\in\IZ$:
$$
(\Phi(h))(x)=h'(x) := h(x) -2 \mathrm{H}(h(x)-2) +\mathrm{H}(h(x-1)-2) + \mathrm{H}(h(x+1)-2)
\leqno{\text{(2h)}}
$$
Notice that we moved from a rule updating the number of ``granules'' to a rule updating the difference in heights of consecutive cells. We can now analyse some important properties of this new local rule:
\begin{enumerate}[(D1)]
\item
The application of rule (2h) can result in negative values. For example with:
$$
c(x-1)=c(x)=0\qquad c(x+1) = 6,\; c(x+2)\ge 0
$$
we will have:
$$
c'(x)-c'(x+1) = -6 + \mathrm{H}(4-c(x+2)) =\begin{cases} -6 &\text{if}\; 5\le c(x+2)
\\
-5 &\text{if}\; c(x+2)\le 4
\end{cases}
$$
In both cases this difference has a negative value. If we interpret this value as a height difference, according to (ED), then the sequence of differences $\{h'(x)=c'(x)-c'(x+1):x\in\IZ\}$ can assume negative values for some positions in $\IZ$.
\item
We can consider only configurations over $\IZ$ with finite support, like $(\bar{0},0,c(x_0),c(x_0+1),\ldots,c(x_0+l-1),\bar{0})$, under the following non-increasing condition: $c(x_0)\ge c(x_0+1)\ge\ldots\ge c(x_0+l-1)> 0$. \\
Under these conditions, the configuration composed of the difference of heights assumes the form, $\oppA x\in\IZ$:
$$
h(x)=c(x)-c(x+1)=\begin{cases}
0&\text{for}\; x< x_0-1
\\
-c(x_0) &\text{for}\; x =x_0-1
\\
c(x_0+n)-c(x_0+n+1) &\text{for}\; x=x_0+n,\; 0\le n < l-1
\\
c(x_0+l-1) &\text{for}\; x=x_0+l-1
\\
0 &\text{for}\; x>x_0+l-1
\end{cases}
$$
thus resulting in:
\begin{gather*}
\oppA x\in\IZ\setminus\{x_0-1\},\quad h(x)\ge 0
\\
h(x_0-1)= -c(x_0)\le 0.
\end{gather*}
Under the condition that $c(x_0) \ge 2$ and from (2h), the value of $h'$ in the cell $x_0 - 1$ is:
$$
h'(x_0-1) = h(x_0-1) -2 \mathrm{H}(h(x_0-1)-2) +\mathrm{H}(h(x_0-2)-2) + \mathrm{H}(h(x_0)-2)
$$
that is,
$$
h'(x_0-1) = -c(x_0) +\mathrm{H}(c(x_0)-c(x_{0}+1)-2)\le 0
$$
\end{enumerate}
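The negative value appearing in (D1) can be double-checked by updating the granule configuration with rule (1a) and then taking the height difference. A minimal Python sketch (ours, for illustration), with the cell $x$ placed so that $c(x-1)=c(x)=0$ and $c(x+1)=6$:

```python
def H(z):
    """Heaviside step: 1 if z >= 0, else 0."""
    return 1 if z >= 0 else 0

def step_1a(c):
    """Parallel granule rule (1a) on a window of Z; the returned list
    has one extra cell on each side of the input window."""
    p = [0, 0] + list(c) + [0, 0]
    return [p[x] + H(p[x-1] - p[x] - 2) - H(p[x] - p[x+1] - 2)
            for x in range(1, len(p) - 1)]

def updated_height_difference(c_x2):
    """c'(x) - c'(x+1) for the (D1) data: c(x-1) = c(x) = 0, c(x+1) = 6,
    c(x+2) = c_x2 (a free parameter)."""
    window = [0, 0, 6, c_x2]          # positions x-1, x, x+1, x+2
    out = step_1a(window)             # positions x-2, ..., x+3
    return out[2] - out[3]            # c'(x) - c'(x+1)
```

The computed difference is negative for every value of $c(x+2)$, so the updated sequence $h'$ indeed leaves $\IN$.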
We can observe the behaviour just described in an example from~\cite{Go92}, applied to the lattice $\IZ$ instead of the lattice $\IN$ originally employed by Goles:
\begin{example}
Let $\IZ$ be the lattice used in this example. The initial configuration at time $t = 0$ of the ``number of granules'' is $c_0=(\bar{0},0,|6,0,\bar{0})$, and the corresponding difference of heights is $h_0=(\bar{0},-6,|6,0,\bar{0})$. Therefore, the value $-6$ in the cell in position $-1$ is due to $h_0(-1)=c_0(-1)-c_0(0)=-6$.
Let us see the first step in the dynamics of both configurations.
\begin{description}
\item[$t=1$] Since at time $t = 0$ between the cells in positions $0$ and $1$ there is a critical jump $c_0(0) - c_0(1) = 6 > 2$, the local vertical rule (VR) is applied and at time $t = 1$ we will have $c_1(0)=c_0(0)-1=5$ and $c_1(1)=c_0(1)+1=1$. That is, given that $\oppA x \in \IZ$, $h_1(x)=c_1(x)-c_1(x+1)$, we have:
$$
c_1=(\bar{0},0,|5,1,\bar{0})\quad\text{and}\quad h_1=(\bar{0},-5,|4,1,\bar{0})
$$
Notice how the difference in height of the cell $x = -1$ is negative: $h_1(-1)=c_1(-1) - c_1(0)=-5$.\\
Let us notice that we obtained these results via the ``direct'' definitions (VR) and (ED) of $c_1(x)$ and $h_1(x)$, with results identical to those obtained via the ``indirect'' formulae (1a) and (2h).
\end{description}
If we continue applying the same cell update rules at all successive time steps, we obtain the dynamics summarised in the following table, which is substantially identical to table (iii) of Fig. 3 of \cite{Go92}:
\begin{align*}
{}&{} && \quad c_t && \;\quad h_t
\\
t& =0
&&\bar{0},0|6
&&\bar{0},-6|6
\\
t&=1
&&\bar{0},0|5,1
&&\bar{0},-5|4,1
\\
t&=2 && \bar{0},0|4,2 && \bar{0},-4|2,2
\\
t&=3 && \bar{0},0|3,2,1 && \bar{0},-3|1,1,1
\end{align*}
Clearly, the two configurations $c_3=(\bar{0},0,|3,2,1,\bar{0})$ and $h_3=(\bar{0},-3,|1,1,1,\bar{0})$ are fixed points ($\oppA t>3$, $c_t=c_3$ and $h_t=h_3$) for the number of granules and the difference in heights, respectively.
\end{example}
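The agreement between the granule dynamics (1a) and the height-difference dynamics (2h) visible in the table can be verified step by step. A Python sketch (ours, for illustration), on a window of $\IZ$ wide enough that the pile never reaches its borders:

```python
def H(z):
    """Heaviside step: 1 if z >= 0, else 0."""
    return 1 if z >= 0 else 0

def step_c(c):
    """Granule rule (1a); cells outside the window hold 0."""
    g = lambda i: c[i] if 0 <= i < len(c) else 0
    return [g(x) + H(g(x-1) - g(x) - 2) - H(g(x) - g(x+1) - 2)
            for x in range(len(c))]

def step_h(h):
    """Height-difference rule (2h); values outside the window are 0."""
    g = lambda i: h[i] if 0 <= i < len(h) else 0
    return [g(x) - 2 * H(g(x) - 2) + H(g(x-1) - 2) + H(g(x+1) - 2)
            for x in range(len(h))]

def diffs(c):
    """(ED): h(x) = c(x) - c(x+1)."""
    return [c[i] - c[i + 1] for i in range(len(c) - 1)]
```

Starting from $c_0=(\bar{0},0,|6,0,\bar{0})$, the two evolutions stay linked by (ED) at every step, and both freeze at $t=3$ on the fixed points of the table.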
\section{A generalization from the lattice of cells $\IZ$ to its $d$-dimensional version $\IZ^d$, but with the different interpretation of number of chips}
\par\noindent
Let us continue with formula (2h), which expresses in a functional way, in the one-dimensional case of the lattice $\IZ$, the ``local'' dynamics of the difference in heights $\oppA x\in\IZ$, $h(x)=c(x)-c(x+1)$ of the number of granules in cells $x$ and $x+1$. By looking at the neighbourhood $\mathcal{N}=\{-1,+1\}$ not containing the cell $0 \in \IZ$, the local dynamics given by (2h) can be reformulated as follows:
$$
\oppA x\in\IZ,\;\;(\Phi(h))(x) = h(x) - 2 \mathrm{H}(h(x)-2) +\sum_{y\in\mathcal{N}} \mathrm{H}(h(x+y)-2)
\leqno{(\text{2h}')}
$$
If we want to generalize to the $d$-dimensional case, i.e., the lattice $\IZ^d$, it is possible to generalize the notion of configuration as \emph{number of granules} in each cell $\oppA x=(x_1,x_2,\ldots,x_d)\in\IZ^d$, $c(x_1,x_2,\ldots,x_d)\in\IN$ (i.e., $c\in\IN^{\IZ^d}$).
\\
However, it would be difficult to generalize (1a) to the $d$-dimensional case since, at first glance, it is difficult to imagine, for a generic finite neighbourhood $\mathcal{N}$ in $\IZ^d$ not containing the cell $(0, 0, \ldots, 0)$, a generalized version for $c(x_1, x_2, \ldots, x_d)$ of the quantities $c(x-1)$ and $c(x+1)$.
\\
Hence, it would also be difficult to generalize to the $d$-dimensional case the one-dimensional notion of difference in heights $\oppA x\in\IZ$, $h(x):=c(x)-c(x+1)$.
\subsection{The $d$--dimensional semantics of Goles (and of Goles-Kiwi) chip firing game and of Formenti-Perrot granules}
However, on the other hand, as done by Goles in \cite{Go92} and by Goles--Kiwi in \cite{GK93}, in the $d$-dimensional context of the lattice of cells $\IZ^d$ we must drop the semantics of height difference associated with a generic mapping $h:\IZ^d\to\IN$ (i.e., $h\in\IN^{\IZ^d}$), interpreting instead the quantity $h(x)\in\IN$ as the \emph{number of chips} located in the cell $x\in\IZ^d$ of a chip firing game.
Having introduced this semantics, which therefore has nothing to do with the height difference between two neighboring columns of granules of the one-dimensional case, and once fixed a finite neighborhood $\mathcal{N}$ not containing the origin $\vec{0}=(0,0,\ldots,0)$, whose cardinality will be denoted by $\theta=|\mathcal{N}|$, a generalization of the parallel dynamics induced by the local rule (2h') can be given as follows:
$$
\oppA x\in\IZ^d,\quad h'(x)=h(x) -\theta \mathrm{H}(h(x)-\theta) + \sum_{y\in\mathcal{N}}\mathrm{H}(h(x+y)-\theta)
\leqno{(2g)}
$$
Trivially, the local rule (2g) is a particular case of the Goles parallel dynamics specified by the local rule (1.2) in \cite{Go92}, rewritten below with respect to our notations,
$$
\oppA x\in\IZ^d,\quad h'(x)= h(x) -z(x) \mathrm{H}(h(x)-z(x)) + \sum_{r\in V(x)}\mathrm{H}(h(r) -z(r))
\leqno{(1.2)}
$$
Indeed, equation (2g) is obtained from Goles' equation (1.2) once one fixes in the latter all the thresholds $z(x)=z(r)=\theta$, for every $x$ and every $r=x+y$, and assumes that all the neighbourhoods $V(x)$ are translates of a fixed shape as $x$ varies, in such a way that $r\in V(x)$ (i.e., $x+y\in V(x)$) iff $y\in\mathcal N$.
Let us stress that for the dimension $d\ge 2$ we have two different semantical interpretations.
\begin{enumerate}[(S{I}1)]
\item
The Goles (and also Goles-Kiwi) \emph{firing game model} in which the non-negative integer $h(x)$ describes the number of chips located at the site $x$ and the equation (1.2) formalizes the local rule specifying the parallel dynamics.
\\
Quoting from \cite{Go92}: \virg{A site such that $h(x)\ge z(x)$ [i.e., $h(x) \ge\theta$ in our notation] will be called a firing site. [...] Equation (1.2) is interpreted as follows: a site $x$ loses $z(x)$ chips if its number of chips is at least $z(x)$ and receives one chip from each firing neighborhood}.
\\
As seen above, (2g) is a particular case of (1.2).
\item
In the multidimensional context of the lattice of cells $\IZ^d$, under the constant distribution function $\mathcal{D}(y) = 1$ for $y \in \mathcal{N}$, equation (2g) is formally identical to the local rule (2) of the Formenti--Perrot (FP) paper \cite{FP20}, except for the different semantic interpretation of the configuration $h\in\IN^{\IZ^d}$: Goles reads $h(x)$ as the number of chips located in the cell $x\in\IZ^d$, whereas FP interpret it as the number of granules located in the same cell.
\end{enumerate}
In the one-dimensional case $d=1$, (2h') is the particular case of (2g) corresponding to the neighborhood $\mathcal{N}=\{-1,+1\}$, with associated $\theta=2$. The interpretation (SI1) as a chip firing game can obviously be maintained also in this particular one-dimensional case. We now give the significant semantic interpretations of this particular one-dimensional case.
\begin{enumerate}[(fD1)]
\item
As widely discussed in section \ref{sec:da-c-a-h}, in the formulation of the parallel dynamics described by local rule (2h) the non-negative quantity $h(x)\in\IN$ is interpreted as the height difference of the number of sand grains located between the cells $x$ and $x+1$ of the one-dimensional lattice $\IZ$.
\item
The interpretation of $h(x)\in\IN$ as the number of grains located in the site $x$ can be carried over to the present very particular one-dimensional case, i.e., $x\in\IZ$, even if, as we will demonstrate in the next sections, it is absolutely not correct to read it as a number of \emph{sand} grains; rather, it is the number of \emph{ice grains} of a particular bilateral model.
\end{enumerate}
\section{The improper Formenti--Perrot (FP) interpretation of number of chips $h$ as number of sand granules $c$ and related theory}
\label{sc:FP-inter}
As pointed out in the previous semantical interpretation (SI2), equation (2) of the FP paper \cite{FP20} introduces a \emph{global transition function} $F:\IN^{\IZ^d}\to\IN^{\IZ^d}$ which, applied to configurations $c\in\IN^{\IZ^d}$, semantically interpreted as mappings associating to any cell $x\in\IZ^d$ of the lattice the \emph{number of sand grains} $c(x)\in\IN$, produces in a parallel update the successive configuration $F(c)\in\IN^{\IZ^d}$. Formally, such a global transition is defined by the following \emph{local rule}:
\\
$\oppA x\in\IZ^d$, (i.e., $x=(x_1,x_2,\ldots, x_d)$),
$$
(F(c))(x)= c(x) -\vartheta \mathrm{H}(c(x)-\vartheta) + \sum_{y\in\mathcal{N}}\mathrm{H}(c(x+y)-\vartheta)
\leqno{(2-\text{FP})}
$$
where, as explained in the Introduction, $\mathcal{N}$ is a suitable finite neighborhood not containing the origin ($0\notin \mathcal{N}$) and $\vartheta=|\mathcal{N}|$ is the cardinality of this neighborhood, called the \emph{threshold}.
As in the failed case of equation (1g) of subsection \ref{ss:failure}, the delicate point is to prove the non-negativity condition (NN); in this regard the following result holds.
\begin{proposition}\label{pr:NN-d}
The local rule (2-FP) is such that, whatever be the configuration $c\in\IN^{\IZ^d}$ and whatever be the \emph{finite} neighborhood $\mathcal{N}\incl\IN\setminus\{0\}$, the condition of non-negativity is satisfied, i.e.,
$$
\oppA x\in\IZ^d,\quad (F(c))(x)\in\IN.
$$
\end{proposition}
\begin{proof}
Since the Heaviside function assumes only the two values 0 and 1, the sum in equation (2-FP) always provides a non-negative contribution.
Therefore it remains to analyze the problematic part
$$
c_1(x):= c(x) -\vartheta \mathrm{H}(c(x)-\vartheta)
$$
The behavior of $c_1(x)$ depends on the behavior of the Heaviside function that appears in it. There are only two possible cases:
\\
(1) Case $c(x)-\vartheta < 0$. Since in this case $\mathrm{H}(c(x)-\vartheta)=0$, we get that $c_1(x)=c(x) \ge 0$, which does not cause problems on the non-negative sign of $(F (c))(x)$.
\\
(2) Case $c(x)-\vartheta \ge 0$. Since in this case $\mathrm{H}(c(x)-\vartheta)=1$, we get that $c_1(x)=c(x)-\vartheta$, which is non-negative by hypothesis.
So also this case does not involve problems on the non-negative sign of $(F(c))(x)$.
\end{proof}
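For the reader who wishes to double-check Proposition \ref{pr:NN-d} experimentally in the one-dimensional case, the following small Python sketch of ours (not part of \cite{FP20}; a finite window with zeros outside stands for a finite-support configuration) evaluates the local rule (2-FP) on random configurations and random finite neighborhoods, and verifies the condition (NN).

```python
import random

def H(v):                      # Heaviside function: H(v) = 1 iff v >= 0
    return 1 if v >= 0 else 0

def F(c, N):
    """Local rule (2-FP) on a finite window c (zeros outside the window),
    with neighborhood N not containing 0 and threshold theta = |N|."""
    theta = len(N)
    def val(x):                # configuration value, 0 outside the window
        return c[x] if 0 <= x < len(c) else 0
    return [val(x) - theta * H(val(x) - theta)
            + sum(H(val(x + y) - theta) for y in N)
            for x in range(len(c))]

random.seed(0)
for _ in range(1000):
    c = [random.randrange(10) for _ in range(12)]
    N = random.sample([-3, -2, -1, 1, 2, 3], random.randrange(1, 5))
    assert all(v >= 0 for v in F(c, N))     # condition (NN)
print("non-negativity verified on 1000 random instances")
```

Of course this experiment proves nothing by itself; it merely illustrates, on random instances, the mechanism of the proof above.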
In fact, in \cite{FP20} a further generalization of this local rule is formalized, once a \emph{distribution function} $\mathcal{D}:\mathcal{N} \to \IN_+$ and its \emph{stability threshold} $\vartheta=\sum_{y\in\mathcal{N}}\mathcal{D}(y)$ are introduced in addition to the neighborhood $\mathcal{N}$. This local rule is expressed by the law
\\
$\oppA x\in\IZ^d$, (i.e., $x=(x_1,x_2,\ldots, x_d)$),
$$
(F(c))(x)= c'(x)= c(x) -\vartheta \mathrm{H}(c(x)-\vartheta) + \sum_{y\in\mathcal{N}} \mathcal{D}(y) \mathrm{H}(c(x+y)-\vartheta)
\leqno{(2g-\text{FP})}
$$
Obviously, the proof of the non-negativity condition (NN) given in Proposition \ref{pr:NN-d} extends immediately to this case. Besides, in the particular case of the distribution function defined by the law $\oppA y\in\mathcal{N}$, $\mathcal{D}(y)=1$, we have that $\vartheta=|\mathcal{N}|$, and consequently (2g-FP) reduces to (2-FP).
\subsection{The one-dimensional case on the lattice $\IZ$ of the Formenti--Perrot model}
\par\noindent\\
Let us now consider the local rule (2-FP) in the one-dimensional case on the lattice $\IZ$, corresponding to the particular case $\mathcal{N}=\{-1,+1\}$, for which $\vartheta=2$, formally given by the law:
\\
$\oppA x\in\IZ$,
$$
(F(c))(x)= c'(x)= c(x) -2 \mathrm{H}(c(x)-2) + \mathrm{H}(c(x-1)-2) + \mathrm{H}(c(x+1)-2)
\leqno{(2a-\text{FP})}
$$
This local rule generates 8 cases of transitions $c(x-1),c(x),c(x+1)\to *,c'(x),*$ corresponding to the two possible values $0,1$ assumed by the Heaviside functions involved in its formal expression, which we divide into two groups depending on the behaviour of $c(x)$.
\\ \\
\framebox{\textbf{Group: $c(x)\in\parg{0,1}$}}
\begin{enumerate}[(SFP1)]
\item
If $\mathrm{H}(c(x)-2)=\mathrm{H}(c(x-1)-2)=\mathrm{H}(c(x+1)-2)=0$, i.e., $c(x-1)\in\parg{0,1}$, $c(x)\in\parg{0,1}$, $c(x+1)\in\parg{0,1}$, then the following transition occurs
$$c(x-1),c(x),c(x+1)\to *,c(x),*\quad\text{with}\;c(x)\in\parg{0,1}$$
\item
If $\mathrm{H}(c(x)-2)=\mathrm{H}(c(x-1)-2)=0$ and $\mathrm{H}(c(x+1)-2)=1$, i.e., $c(x-1)\in\parg{0,1}$, $c(x)\in\parg{0,1}$ and $2\le c(x+1)$, then the following transition occurs
$$c(x-1),c(x),c(x+1)\to *,c(x)+1,*\quad\text{with}\;c(x)\in\parg{0,1}$$
\item
If $\mathrm{H}(c(x)-2)=0$, $\mathrm{H}(c(x-1)-2)=1$, $\mathrm{H}(c(x+1)-2)=0$, i.e., $2\le c(x-1)$, $c(x)\in\parg{0,1}$, $c(x+1)\in\parg{0,1}$, then the following transition occurs
$$c(x-1),c(x),c(x+1)\to *,c(x)+1,*\quad\text{with}\;c(x)\in\parg{0,1}$$
\item
If $\mathrm{H}(c(x)-2)=0$ and $\mathrm{H}(c(x-1)-2)=\mathrm{H}(c(x+1)-2)=1$, i.e., $2\le c(x-1)$, $c(x)\in\parg{0,1}$, $2\le c(x+1)$, then the following transition occurs
$$c(x-1),c(x),c(x+1)\to *,c(x)+2,*\quad\text{with}\;c(x)\in\parg{0,1}$$
\end{enumerate}
\framebox{\textbf{Group: $2\le c(x)$}}
\begin{enumerate}
\item[(SFP5)]
If $\mathrm{H}(c(x)-2)=1$ and $\mathrm{H}(c(x-1)-2)=\mathrm{H}(c(x+1)-2)=0$, i.e., $c(x-1)\in\parg{0,1}$, $2\le c(x)$, $c(x+1)\in\parg{0,1}$, then the following transition occurs
$$c(x-1),c(x),c(x+1)\to *,c(x)-2,*\quad\text{with}\;2\le c(x)$$
\item[(SFP6)]
If $\mathrm{H}(c(x)-2)=1$, $\mathrm{H}(c(x-1)-2)=0$, $\mathrm{H}(c(x+1)-2)=1$, i.e., $c(x-1)\in\parg{0,1}$, $2\le c(x)$, $2\le c(x+1)$, then the following transition occurs
$$c(x-1),c(x),c(x+1)\to *,c(x)-1,*\quad\text{with}\;2\le c(x)$$
\item[(SFP7)]
If $\mathrm{H}(c(x)-2)=1$, $\mathrm{H}(c(x-1)-2)=1$, $\mathrm{H}(c(x+1)-2)=0$, i.e., $2\le c(x-1)$, $2\le c(x)$, $c(x+1)\in\parg{0,1}$, then the following transition occurs
$$c(x-1),c(x),c(x+1)\to *,c(x)-1,*\quad\text{with}\;2\le c(x)$$
\item[(SFP8)]
If $\mathrm{H}(c(x)-2)=\mathrm{H}(c(x-1)-2)=\mathrm{H}(c(x+1)-2)=1$, i.e., $2\le c(x-1)$, $2\le c(x)$, $2\le c(x+1)$, then the following transition occurs
$$c(x-1),c(x),c(x+1)\to *,c(x),*\quad\text{with}\;2\le c(x)$$
\end{enumerate}
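All eight cases above can be checked mechanically. The following Python sketch of ours implements one parallel step of the local rule (2a-FP) on a finite window of cells, under the convention (not part of \cite{FP20}) that the cells outside the window hold the value 0.

```python
def fp_step(c):
    """One parallel step of local rule (2a-FP):
    (F(c))(x) = c(x) - 2 H(c(x)-2) + H(c(x-1)-2) + H(c(x+1)-2),
    with H the Heaviside function (H(v) = 1 iff v >= 0).
    The finite list c is a window of the lattice Z; cells outside hold 0."""
    H = lambda v: 1 if v >= 0 else 0
    p = [0, 0] + list(c) + [0, 0]   # grow the window: mass spreads one cell per step
    return [p[i] - 2 * H(p[i] - 2) + H(p[i - 1] - 2) + H(p[i + 1] - 2)
            for i in range(1, len(p) - 1)]

# Case (SFP5): an isolated unstable cell loses two grains; by cases
# (SFP2)/(SFP3) its Boolean neighbours each receive one grain.
print(fp_step([2]))   # [1, 0, 1]
print(fp_step([6]))   # [1, 4, 1]
```

This one-step function will be reused (repeated verbatim, so that each fragment stays self-contained) in the numerical checks of the next subsections.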
Also in the present case of the FP parallel dynamics generated by the local rule (2a-FP) (similarly to the GK parallel dynamics of the standard sandpile model treated in subsection \ref{sec:GK-CA}; this is in fact a general feature of any discrete time dynamical system) we can introduce the notion of \emph{equilibrium configuration}: any configuration $c_{eq}\in\IN^\IZ$ such that $F(c_{eq})=c_{eq}$, i.e., a \emph{fixed point} of the global transition function $F:\IN^\IZ\to\IN^\IZ$.
The following result characterizes the form of configurations which are of equilibrium in the FP parallel dynamics.
\begin{lemma}\label{lm:FP-eq}
The configuration $c_{eq}\in\IN^\IZ$, assumed to contain at least one cell of height $\le 1$ (as is the case, in particular, for every configuration of finite support), is of \emph{equilibrium} with respect to the parallel dynamics governed by the local rule (2a-FP), i.e., $F(c_{eq})=c_{eq}$, iff $\oppA x\in\IZ$, $c_{eq}(x)\in\parg{0,1}$, i.e., it is a Boolean sequence.
\end{lemma}
\begin{proof}
Under the condition $F(c_{eq})=c_{eq}$, equation (2a-FP) leads to the condition $\oppA x\in\IZ$, $2\mathrm{H}(c_{eq}(x)-2) =\mathrm{H}(c_{eq}(x-1)-2) +\mathrm{H}(c_{eq}(x+1) -2)$. If, by contradiction, $\mathrm{H}(c_{eq}(x_0)-2)=1$ for some $x_0\in\IZ$, this condition forces $\mathrm{H}(c_{eq}(x_0-1)-2)=\mathrm{H}(c_{eq}(x_0+1)-2)=1$ and so, by induction, $c_{eq}(x)\ge 2$ for every $x\in\IZ$; this is impossible for the configurations considered here, which contain at least one cell of height $\le 1$ (in particular, for all configurations of finite support). Hence $\oppA x\in\IZ$, $\mathrm{H}(c_{eq}(x)-2) =0$, i.e., $\oppA x\in\IZ$, $c_{eq}(x)\in\parg{0,1}$.
Conversely, under the condition $\oppA x\in\IZ$, $c_{eq}(x)\in\parg{0,1}$, every triplet $c_{eq}(x-1),c_{eq}(x),c_{eq}(x+1)\in\parg{0,1}^3$ is Boolean, and so the corresponding transition (SFP1) is necessarily of the type $c_{eq}(x-1),c_{eq}(x),c_{eq}(x+1)\to *,c_{eq}'(x)=c_{eq}(x),*$, that is, $\oppA x\in\IZ$, $c_{eq}'(x)=c_{eq}(x)$; but since $c_{eq}'(x)$ is a simplified notation for $(F(c_{eq}))(x)$ we have proved that $\oppA x\in\IZ$, $(F(c_{eq}))(x)=c_{eq}(x)$.
\end{proof}
\begin{remark}
This result about the Boolean equilibrium configurations of the parallel (FP) model must be compared with the equilibrium configurations of the standard parallel (GK) sandpile model discussed in Lemma \ref{lm:eq-conf}, this latter characterized by the condition
\linebreak
$\oppA x\in\IZ$, $c_{eq}(x)-c_{eq}(x+1)\le 1$. Recall that in Example \ref{ex:bool-equil} it was shown that all the Boolean configurations are of (GK) equilibrium.
\end{remark}
\begin{example}\label{ex:FP-eq}
The following are examples of FP equilibrium configurations of the parallel dynamics: $(\bar{0},|0,1,1,0,0,0,0,1,0,1,\bar{0})$, $(\overline{0,1},|0,1,\overline{0,1})$, $(\overline{1},0,|1,0,\overline{1})$.
\end{example}
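The \virg{if} direction of Lemma \ref{lm:FP-eq} can be confirmed exhaustively, at least for configurations of finite support: every Boolean window padded with zeros is a fixed point of the parallel update. A self-contained Python sketch of ours (same finite zero-padded window convention as in the previous fragment):

```python
from itertools import product

def fp_step(c):
    """One parallel step of local rule (2a-FP) on a finite zero-padded window."""
    H = lambda v: 1 if v >= 0 else 0
    p = [0, 0] + list(c) + [0, 0]
    return [p[i] - 2*H(p[i]-2) + H(p[i-1]-2) + H(p[i+1]-2)
            for i in range(1, len(p)-1)]

def trim(c):
    """Drop the zero padding, so configurations are compared on their support."""
    c = list(c)
    while c and c[0] == 0:  c = c[1:]
    while c and c[-1] == 0: c = c[:-1]
    return c

# Every Boolean configuration of finite support is an FP parallel equilibrium.
for n in range(1, 7):
    for c in product([0, 1], repeat=n):
        assert trim(fp_step(c)) == trim(c)
print("all Boolean windows up to length 6 are fixed points of (2a-FP)")
```

The infinite periodic equilibria of Example \ref{ex:FP-eq} obviously escape this finite check, but they are covered by the Lemma itself.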
Making reference to the local rule (2g-FP) acting on configurations from the $d$-dimensional state space $c\in\IN^{\IZ^d}$, \virg{a cell $x\in\IZ^d$ is said to be \emph{stable} if $c(x)<\vartheta$ and \emph{unstable} otherwise. A configuration is \emph{stable} when all cells are stable, and is \emph{unstable} if at least one cell is unstable. Remark that stable configurations are fixed points of the global rule $F$} \cite{FP20}.
Applying these definitions to the particular one-dimensional case of the local rule (2a-FP), relative to which $\vartheta=2$, a cell $x\in\IZ$ of the configuration $c\in\IN^\IZ$ is \emph{stable} if $c(x)<2$, i.e., if $c(x)\in\parg{0,1}$ is Boolean, and \emph{unstable} if $c(x)\ge 2$. Therefore, in this case a configuration $c$ is \emph{stable} if $\oppA x\in\IZ$, $c(x)\in\parg{0,1}$, and is \emph{unstable} if $\oppE x_0\in\IZ$, $c(x_0)\ge 2$. So, according to Lemma \ref{lm:FP-eq}, a stable configuration is an \emph{equilibrium} state of the induced dynamics.
This means that if, starting from the initial configuration $c_0\in\IN^\IZ$, the dynamical evolution generated by the global rule $F:\IN^\IZ\to\IN^\IZ$ reaches at the time instant $t_0$ an equilibrium configuration $c_{eq}\in\IN^\IZ$, with $F(c_{eq})=c_{eq}$, then formally one has the finite sequence of transitions
$c_0\xrightarrow{F} c_1\xrightarrow{F} c_2\xrightarrow{F} \ldots\xrightarrow{F} c_{t_0}=c_{eq}$, where
$F^{t_0}(c_0)=c_{eq}$, and in the successive dynamical evolution $\oppA t\ge {t_0}$, $F^t(c_0)=c_{eq}$.
\subsection{The particular case of one-dimensional configurations concentrated in the origin}
In this subsection we center our attention on the particular case of configurations \emph{concentrated} in the origin of the one-dimensional lattice of cells $\IZ$, that is, of configurations $c:\IZ\to\IN$ such that $c(0)=k$ for a given integer $k\in\IN$ and $c(x)=0$ for any $x\neq 0$.
\begin{proposition}\label{pr:FPequil}
Starting from an initial configuration of the kind $(\bar{0},|k,\bar{0})$, with $k\in\IN$, the unique equilibrium configuration reached after a finite number of time steps of the FP parallel dynamics has one of the two forms:
\begin{enumerate}
\item[(Eq1)]
If $k$ is odd ($=2h+1$) then the final equilibrium configuration is of the symmetric form $(\bar{0},1,\ldots,1,|1,1,\ldots,1,\bar{0})$ centered in the cell $0\in\IZ$, consisting of a \virg{continuous} sequence of $2h+1$ cells, each containing a single granule.
\item[(Eq2)]
If $k$ is even ($=2h$) then the final equilibrium configuration is of the symmetric form $(\bar{0},1,\ldots,1,|0,1,\ldots,1,\bar{0})$ centered in the cell $0\in\IZ$, consisting of a sequence of $h$ cells with a single granule, followed by a cell with zero granules, in its turn followed by a sequence of $h$ cells with a single granule.
\end{enumerate}
\end{proposition}
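Proposition \ref{pr:FPequil} can be verified numerically for small values of $k$. In the following self-contained Python sketch of ours (finite zero-padded window standing for the lattice $\IZ$; configurations compared after trimming the surrounding zeros), the fixed point reached from $(\bar{0},|k,\bar{0})$ is matched against the predicted forms (Eq1) and (Eq2).

```python
def fp_step(c):
    """One parallel step of local rule (2a-FP) on a finite zero-padded window."""
    H = lambda v: 1 if v >= 0 else 0
    p = [0, 0] + list(c) + [0, 0]
    return [p[i] - 2*H(p[i]-2) + H(p[i-1]-2) + H(p[i+1]-2)
            for i in range(1, len(p)-1)]

def trim(c):
    while c and c[0] == 0:  c = c[1:]
    while c and c[-1] == 0: c = c[:-1]
    return c

def equilibrium(k):
    """Trimmed fixed point of the FP parallel dynamics started from (0...,k,...0)."""
    c = [k]
    while trim(fp_step(c)) != trim(c):
        c = fp_step(c)
    return trim(c)

for k in range(2, 8):
    h, odd = divmod(k, 2)
    expected = [1]*(2*h + 1) if odd else [1]*h + [0] + [1]*h   # (Eq1) / (Eq2)
    assert equilibrium(k) == expected
print("Proposition verified for k = 2, ..., 7")
```

The next two examples display the $k=2$ and $k=3$ instances of this computation explicitly.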
\begin{example}\label{ex:FP-020}
The parallel (FP) dynamics of initial state $(\overline{0},|2,\overline{0})$ reaches the equilibrium configuration in a single time step:
\begin{align*}
{}&{} && \qquad c_t
\\
t&=0 &&(\bar{0},0,0,|2,0,0,\bar{0})
\\
t&=1 &&(\bar{0},0,1,|0,1,0,\bar{0})
\end{align*}
\end{example}
\begin{example}\label{ex:FP-030}
Another very simple parallel (FP) dynamics is the one of initial state $(\overline{0},|3,\overline{0})$, whose equilibrium configuration is also reached in a single time step:
\begin{align*}
{}&{} && \qquad c_t
\\
t&=0 &&(\bar{0},0,0,|3,0,0,\bar{0})
\\
t&=1 &&(\bar{0},0,1,|1,1,0,\bar{0})
\end{align*}
This result is a consequence of the following triplet transitions: $0,0,3\xrightarrow{(SFP2)}*,1,*$, and $0,3,0\xrightarrow{(SFP5)}*,1,*$, and $3,0,0\xrightarrow{(SFP3)}*,1,*$.
\end{example}
We now discuss another interesting example of one-dimensional parallel dynamical evolution generated by the local rule (2a-FP), whose initial configuration at time $t=0$ is $c_0=(\overline{0},|6,\overline{0})$ and which, in agreement with the just stated general result, reaches after 8 iterations the equilibrium configuration $c_8=(\bar{0},1,1,1|0,1,1,1,\bar{0})$.
\begin{example}\label{ex:FP-060}
Let us examine the configuration $c_0=(\bar{0}|6,\bar{0})$ in the state space $\IN^\IZ$ and calculate the parallel dynamics generated by the local rule (2a-FP) starting from it as initial state.
\begin{align*}
{}&{} && \qquad\qquad c_t
\\
t&=0 &&(\bar{0},0,0,0|6,0,0,0,\bar{0})
\\
t&=1 &&(\bar{0},0,0,1|4,1,0,0,\bar{0})
\\
t&=2 &&(\bar{0},0,0,2|2,2,0,0,\bar{0})
\\
t&=3 &&(\bar{0},0,1,1|2,1,1,0,\bar{0})
\\
t&=4 &&(\bar{0},0,1,2|0,2,1,0,\bar{0})
\\
t&=5 &&(\bar{0},0,2,0|2,0,2,0,\bar{0})
\\
t&=6 &&(\bar{0},1,0,2|0,2,0,1,\bar{0})
\\
t&=7 &&(\bar{0},1,1,0|2,0,1,1,\bar{0})
\\
t&=8 &&(\bar{0},1,1,1|0,1,1,1,\bar{0})
\end{align*}
Each transition $t\to t+1$ is obtained by the application of some of the previously discussed 8 cases, where every such transition involves the sub-triplet transition $0,0,0\xrightarrow{(SFP1)} *,0,*$. Let us discuss some (not all) of these transitions.
\\
-- $t=0\to t=1$ transition. The first transition $(\bar{0},0,0,0|6,0,0,0,\bar{0})\to (\bar{0},0,0,1|4,1,0,0,\bar{0})$ is the result of the following transitions on sub-triplets: $0,0,6\xrightarrow{(SFP2)} *,1,*$, and $0,6,0\xrightarrow{(SFP5)} *,4,* $, and
$6,0,0\xrightarrow{(SFP3)} *,1,*$.
\\
-- $t=1\to t=2$ transition. Analogously the second transition $(\bar{0},0,0,1|4,1,0,0,\bar{0})\to (\bar{0},0,0,2|2,2,0,0,\bar{0})$ is obtained by the sub-triplets transitions: $0,0,1\xrightarrow{(SFP1)} *,0,*$, and $0,1,4\xrightarrow{(SFP2)} *,2,*$, and
$1,4,1\xrightarrow{(SFP5)} *,2,*$, and $4,1,0\xrightarrow{(SFP3)} *,2,*$, and $1,0,0\xrightarrow{(SFP1)} *,0,*$.
And so on for all the other transitions.
The configuration $c_8=(\bar{0},1,1,1|0,1,1,1,\bar{0})$ is an equilibrium, as expected from the general theory, since any of its sub-triplets trivially produces the local transition $c(x-1),c(x),c(x+1) \to *,c(x),*$, as a consequence of the fact that case (SFP1) is always involved.
Let us stress that this global parallel dynamics generated by the local rule (2a-FP) starting from the initial configuration $c=(\bar{0}|6,\bar{0})$ must not be confused with the global dynamics generated by the local rule of height differences (2-a) of section \ref{sec:da-c-a-h}, formally analogous to (2a-FP), since in this last case of height differences the initial configuration is $h=(\bar{0},-6|6,\bar{0})$.
\end{example}
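The whole table of Example \ref{ex:FP-060} can be reproduced mechanically with the one-step function for (2a-FP) already used above. The Python sketch below (ours, with the usual finite zero-padded window convention) checks every intermediate configuration up to the trimming of the surrounding zeros.

```python
def fp_step(c):
    """One parallel step of local rule (2a-FP) on a finite zero-padded window."""
    H = lambda v: 1 if v >= 0 else 0
    p = [0, 0] + list(c) + [0, 0]
    return [p[i] - 2*H(p[i]-2) + H(p[i-1]-2) + H(p[i+1]-2)
            for i in range(1, len(p)-1)]

def trim(c):
    while c and c[0] == 0:  c = c[1:]
    while c and c[-1] == 0: c = c[:-1]
    return c

expected = [                 # trimmed configurations c_t of Example ex:FP-060
    [6],
    [1, 4, 1],
    [2, 2, 2],
    [1, 1, 2, 1, 1],
    [1, 2, 0, 2, 1],
    [2, 0, 2, 0, 2],
    [1, 0, 2, 0, 2, 0, 1],
    [1, 1, 0, 2, 0, 1, 1],
    [1, 1, 1, 0, 1, 1, 1],
]
c = [6]
for t in range(9):
    assert trim(list(c)) == expected[t]
    c = fp_step(c)
assert trim(c) == expected[8]   # c_8 is indeed a fixed point
print("orbit of (0...,6,...0) reproduced, equilibrium reached at t = 8")
```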
From these examples we can infer the following general result, which can in any case be rigorously proved.
\begin{proposition}
Let us consider the symmetric configuration $c_0=(\overline{0},|k,\overline{0})$, concentrated in the origin $x=0$ of the one-dimensional lattice $\IZ$; then the dynamical evolution generated by the parallel application of the FP local rule (2a-FP), $\oppA t\in\IN$, $c_t=F^t(c_0)\in\IN^\IZ$, consists of configurations which are always symmetric and centered in the origin, i.e.,
$$
\oppA t\in\IN,\;\;\oppA x\in\IZ,\quad c_t(-x) =c_t(x).
$$
\end{proposition}
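Numerically, the proposition manifests itself in the fact that every trimmed iterate is a palindrome. A Python sketch under our usual finite-window convention (note that the window of $c_0=(\bar{0},|k,\bar{0})$ grows by exactly one cell per side at each step, so the origin stays at its center):

```python
def fp_step(c):
    """One parallel step of local rule (2a-FP) on a finite zero-padded window."""
    H = lambda v: 1 if v >= 0 else 0
    p = [0, 0] + list(c) + [0, 0]
    return [p[i] - 2*H(p[i]-2) + H(p[i-1]-2) + H(p[i+1]-2)
            for i in range(1, len(p)-1)]

def trim(c):
    while c and c[0] == 0:  c = c[1:]
    while c and c[-1] == 0: c = c[:-1]
    return c

for k in range(1, 15):
    c = [k]                       # c_0 concentrated in the origin
    for t in range(40):
        # c_t(-x) = c_t(x): the trimmed iterate is a palindrome
        assert trim(list(c)) == trim(list(c))[::-1]
        c = fp_step(c)
print("all iterates are symmetric around the origin for k = 1, ..., 14")
```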
Let us now analyse some of the previous parallel transitions from the point of view of the possible \emph{sequential} application of local rules of the following three different types:
\begin{enumerate}[(S{I}P1)]
\item
Vertical local rule from left to right, typical of sandpiles,
\\
(VR)$_d$ If $c(x)-c(x+1)\ge 2$, then $c'(x)=c(x)-1$ and $c'(x+1) = c(x+1)+1$.
\\
Vertical local rule from right to left, dual of the previous and typical of symmetric sandpiles of \cite{FMP07},
\\
(VR)$_s$ If $c(x)-c(x-1)\ge 2$, then $c'(x)=c(x)-1$ and $c'(x-1) = c(x-1)+1$.
\item
Icepile local horizontal rule, of granule flow from left to right,
\\
(HR)$_d$ If $c(x)=c(x+1)+1$, then $c'(x)=c(x)-1$, and $c'(x+1)=c(x+1)+1$, i.e., under this condition we have the transition $c(x),c(x)-1\to c(x)-1,c(x)$.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=7cm]{HR-d.eps}
\end{center}
\vspace{-1cm}
\caption{The three cases of horizontal rule HR$_d$ with the involvement of the possible state of the cell $x-1$.}
\label{fig:HR-d}
\end{figure}
In the following we will frequently deal with the condition $c(x) = 1$ and $c(x+1) = 0$, i.e., with the transition $1,0\to 0,1$. If under this condition we also consider the possible Boolean state $c(x-1)\in\parg{0,1}$ of the cell at site $x-1$, we have the two possible transitions $1,1,0\to 1,0,1$ and $0,1,0\to 0,0,1$.
Even if these are in principle possible transitions, we will conventionally consider them as forbidden, for reasons of comparison of the digraph of the sequential updating procedure with the parallel dynamics.
\\ \\
Icepile local horizontal rule, of granule flow from right to left,
\\
(HR)$_s$ If $c(x)=c(x-1)+1$, then $c'(x)=c(x)-1$ and $c'(x-1)=c(x-1)+1$, corresponding to the transition $c(x)-1,c(x)\to c(x),c(x)-1$.
Analogously to the previous case (HR)$_d$, of great interest will be the Boolean situation $c(x)=1$ and $c(x-1)=0$, corresponding to the transition $0,1\to 1,0$, with the involvement of the Boolean state of the cell at site $x+1$ and the two possible transitions $0,1,1\to 1,0,1$ and $0,1,0\to 1,0,0$. Also in this case we conventionally assume them to be forbidden.
\end{enumerate}
Summarizing, in the sequel we adopt the following
\begin{description}
\item[Convention (HR)]
In the description of the FP sequential dynamics the two transitions $0,1,0\to 0,0,1$ and $0,1,0\to 1,0,0$ are forbidden in order to compare it with the corresponding parallel dynamics.
\end{description}
\begin{enumerate}
\item[(SIP3)]
Bottom-up jump of a granule, by one height, from left to right,
\\
(BT)$_d$ If $c(x)\ge 2$ and $c(x)=c(x+1)$, then $c'(x)=c(x)-1$ and $c'(x+1)=c(x+1) +1$.
\\ \\
Bottom-up jump of a granule, by one height, from right to left,
\\
(BT)$_s$ If $c(x)\ge 2$ and $c(x)=c(x-1)$, then $c'(x)=c(x)-1$ and $c'(x-1)=c(x-1)+1$.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=7cm]{SIP3.eps}
\end{center}
\vspace{-1cm}
\caption{The figure at left describes a one-granule bottom-up jump from left to right, whereas the figure at right describes a bottom-up jump from right to left.}
\label{fig:SIP3}
\end{figure}
\end{enumerate}
\begin{description}
\item[Convention (SIP3)]
Since we will consider the sequential update of the pure icepile model, centered on the points (SIP1) and (SIP2), as a theoretical priority, we make the further convention of not using the two bottom-up jump updates (SIP3) as long as they are inessential to reproduce the parallel dynamics.
\\
In other words, we will not use them as long as we can work without them.
\end{description}
\begin{example}
Let us consider the simplest non-equilibrium configuration $(\overline{0}|2,\overline{0})$ of support centered in the origin of the one-dimensional lattice $\IZ$, and let us apply the local rule (2a-FP) in order to obtain the corresponding equilibrium configuration according to the (one time step) parallel transition
$$
(\overline{0},0,|2,0,\overline{0})\xrightarrow{(PT)}(\overline{0},1,|0,1,\overline{0})
$$
Let us now try to verify, in this case, the FP claim of \cite{FP20} that the one-dimensional local rule (2a-FP) is the parallel version of a \emph{pure} sandpile model, i.e., that it can be obtained by the sequential application of the symmetric (i.e., bidirectional) sandpile vertical local rules (VR)$_d$ and (VR)$_s$. The corresponding dynamical digraph is depicted below.
$$
\xymatrix{
{}&0,0,|2,0,0\ar[dr]^{(VR)_d}\ar[dl]_{(VR)_s}&{}
\\
0,1,|1,0,0&{}&0,0,|1,1,0
}
$$
We can therefore establish the following
\begin{description}
\item[Conclusion FP1]
Adopting the convention (HR), the two configurations $(\overline{0},1,|1,0,\overline{0})$ and $(\overline{0},0,|1,1,\overline{0})$ are equilibrium configurations of the pure sequential sandpile model, but neither of them coincides with the expected equilibrium configuration of the parallel transition (PT) seen above: $(\overline{0},1,|0,1,\overline{0})$.
\end{description}
$$
\xymatrix{
{}&0,0,|2,0,0\ar[dr]^{(VR)_d}\ar[dl]_{(VR)_s}\ar[d]^{(PT)}&{}
\\
0,1,|1,0,0&0,1|0,1,0&0,0,|1,1,0
}
$$
\end{example}
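Conclusion FP1 can also be confirmed exhaustively by a machine: the set of configurations reachable from $(\overline{0},|2,\overline{0})$ under the bilateral vertical rules alone never contains the parallel equilibrium. A Python sketch of ours (finite zero-padded windows; configurations are compared after trimming the zeros, hence up to translation):

```python
def trim(c):
    """Drop zero padding; configurations are compared up to translation."""
    c = list(c)
    while c and c[0] == 0:  c = c[1:]
    while c and c[-1] == 0: c = c[:-1]
    return tuple(c)

def vr_moves(c):
    """All results of one sequential vertical move (VR)_d or (VR)_s."""
    c = [0] + list(c) + [0]
    out = set()
    for x in range(len(c)):
        if x + 1 < len(c) and c[x] - c[x + 1] >= 2:     # (VR)_d
            d = list(c); d[x] -= 1; d[x + 1] += 1; out.add(trim(d))
        if x - 1 >= 0 and c[x] - c[x - 1] >= 2:         # (VR)_s
            d = list(c); d[x] -= 1; d[x - 1] += 1; out.add(trim(d))
    return out

def closure(c0):
    """All configurations reachable by sequential (VR) moves, up to translation."""
    seen, todo = {trim(c0)}, [trim(c0)]
    while todo:
        for d in vr_moves(todo.pop()):
            if d not in seen:
                seen.add(d); todo.append(d)
    return seen

reach = closure([2])
equilibria = {c for c in reach if not vr_moves(c)}
print(equilibria)            # {(1, 1)} -- the parallel result (1, 0, 1) is unreachable
assert (1, 0, 1) not in reach
```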
\begin{example}
In this example we consider the configuration $(\overline{0},0,|3,0,\overline{0})$ as initial state of the FP parallel dynamics induced by the local rule (2a-FP). One easily gets that the equilibrium configuration of this parallel dynamics is reached in one time step according to the following transformation:
$$
(\overline{0},0,|3,0,\overline{0})\xrightarrow{(PT)}(\overline{0},1,|1,1,\overline{0})
$$
In this particular example the parallel equilibrium configuration just obtained is also reached by the corresponding sequential dynamics based on the vertical rule of the pure sandpile model alone, as shown in the following digraph:
$$
\xymatrix{
{}&0,0,|3,0,0\ar[dr]^{(VR)_d}\ar[dl]_{(VR)_s}\ar[dd]^{(PT)}&{}
\\
0,1,|2,0,0\ar[dr]_{(VR)_d}&{}&0,0,|2,1,0\ar[dl]^{(VR)_s}
\\
{}&0,1,|1,1,0&{}
}
$$
Therefore, from this particular example the FP claim might appear correct, namely that the model based on the parallel application of the local rule (2) referred to in the Introduction and characterizing their article, or its peculiar one-dimensional version (2a), is about sandpiles.
Indeed, in this example the parallel equilibrium configuration is obtained through appropriate sequential applications of the vertical rules from point (SIP1) only, albeit bilateral, which characterize the sandpile dynamics.
In any case, we still want to describe below the sequential dynamics, starting from the same initial configuration, of the pure model of bilateral icepiles according to the vertical (SIP1) and horizontal (SIP2) rules introduced above, rather than of a bidirectional pure sandpile model centered on the unique vertical rule (SIP1). The corresponding sequential dynamics is drawn in the following digraph:
$$
\xymatrix{
{}&0,0,|3,0,0\ar[dr]^{(VR)_d}\ar[dl]_{(VR)_s}\ar[dd]^{(PT)}&{}
\\
0,1,|2,0,0\ar[dr]_{(VR)_d}\ar[d]_{(HR)_s}&{}&0,0,|2,1,0\ar[dl]^{(VR)_s}\ar[d]^{(HR)_d}
\\
0,2,|1,0,0\ar[d]_{(VR)_s}&0,1,|1,1,0&0,0,|1,2,0\ar[d]^{(VR)_d}
\\
1,1,|1,0,0&{}&0,0,|1,1,1
}
$$
In this case, and under the (HR) convention, we have three equilibrium configurations: one from the parallel update, $(\overline{0},0,1,|1,1,0,\overline{0})$, and two others as results of the sequential update, $(\overline{0},1,1,|1,0,0,\overline{0})$ and $(\overline{0},0,0,|1,1,1,\overline{0})$.
\end{example}
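In the same exhaustive spirit, the sequential vertical closure of $(\overline{0},|3,\overline{0})$ can be computed by a machine: up to translation (i.e., after trimming the zero padding), its unique equilibrium coincides with the parallel one, in agreement with the digraphs above. A Python sketch with the same conventions as the previous fragment:

```python
def trim(c):
    """Drop zero padding; configurations are compared up to translation."""
    c = list(c)
    while c and c[0] == 0:  c = c[1:]
    while c and c[-1] == 0: c = c[:-1]
    return tuple(c)

def vr_moves(c):
    """All results of one sequential vertical move (VR)_d or (VR)_s."""
    c = [0] + list(c) + [0]
    out = set()
    for x in range(len(c)):
        if x + 1 < len(c) and c[x] - c[x + 1] >= 2:     # (VR)_d
            d = list(c); d[x] -= 1; d[x + 1] += 1; out.add(trim(d))
        if x - 1 >= 0 and c[x] - c[x - 1] >= 2:         # (VR)_s
            d = list(c); d[x] -= 1; d[x - 1] += 1; out.add(trim(d))
    return out

def closure(c0):
    """All configurations reachable by sequential (VR) moves, up to translation."""
    seen, todo = {trim(c0)}, [trim(c0)]
    while todo:
        for d in vr_moves(todo.pop()):
            if d not in seen:
                seen.add(d); todo.append(d)
    return seen

reach = closure([3])
assert {c for c in reach if not vr_moves(c)} == {(1, 1, 1)}
print("unique (VR)-equilibrium from (0...,3,...0), up to translation: (1, 1, 1)")
```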
\begin{remark}
This particular result of the presence of three equilibrium configurations of the sequential FP dynamics suggests some interesting considerations about their \virg{physical} symmetry.
First of all, for any fixed integer $a\in\IZ$ let us introduce the so-called $a$--\emph{translation} (also, $a$--\emph{left shift})
operator on the configuration space $\IN^\IZ$, denoted as $T_a:\IN^\IZ\to\IN^\IZ$ and defined by the correspondence
$(\ldots,c(-1),|c(0),c(1),\ldots) \xrightarrow{\;T_a\;} (\ldots,c(a-1),|c(a),c(a+1),\ldots)$.
The collection of all such translations, $\mathcal{T}(\IN^\IZ)=\parg{T_a:a\in\IZ}$, has a structure of abelian group with respect to the operation of composition, $T_a\circ T_b =T_b\circ T_a=T_{a+b}$. In particular we have that
the neutral element is the \emph{identical} translation $T_0$ ($\oppA c\in\IN^\IZ$, $T_0(c)=c$) since $T_0\circ T_a=T_a\circ T_0= T_a$, for every $T_a$,
and the inverse of a generic translation $T_a$ is the translation $(T_a)^{-1}=T_{-a}$ since $T_a\circ T_{-a}=T_0=T_{-a}\circ T_a$.
Now, on the state space of all configurations $\IN^\IZ$, the following is an equivalence relation:
$$
\text{Let}\; c_1,c_2\in\IN^\IZ,\quad\text{then}\;\; c_1\sim c_2\;\;\text{iff}\;\; \oppE a\in\IZ\;s.t.\; c_1 = T_a c_2.
$$
That is, two configurations are mutually equivalent iff one is obtained from the other by a suitable translation, and the configuration space can be decomposed into the collection of all pairwise disjoint nonempty equivalence classes relative to translations, $[c]_\sim :=\parg{c':c'\sim c}$.
\\
From the physical point of view the abelian group $\mathcal{T}(\IN^\IZ)$ of all translations of the configuration space is a \emph{symmetry} of this space, and so two configurations $c_1\sim c_2$ are \emph{equivalent} by the symmetry of translation.
In particular $(\overline{0},1,1,|1,0,0,\overline{0}) =T_1(\overline{0},0,1,|1,1,0,\overline{0})$ and $(\overline{0},0,0,|1,1,1,\overline{0})=T_{-1}(\overline{0},0,1,|1,1,0,\overline{0})$, and so the three equilibrium configurations of the sequential dynamics of the previous example are mutually equivalent relative to translations; in other words, they belong to the same translation equivalence class.
\end{remark}
\begin{example}
The further example we now consider is based on the configuration consisting of four granules centered at the origin, $(\overline{0},0,|4,0,\overline{0})$, considered as the initial state of the following FP dynamics generated by the local rule (2a-FP), which reaches the equilibrium configuration $(\overline{0},1,1,|0,1,1,\overline{0})$ after four time steps:
\begin{align*}
{}&{} & c_t\phantom{400000}
\\
t&=0 &(\bar{0},0,0|4,0,0,\bar{0})
\\
t&=1 &(\bar{0},0,1|2,1,0,\bar{0})
\\
t&=2 &(\bar{0},0,2|0,2,0,\bar{0})
\\
t&=3 &(\bar{0},1,0|2,0,1,\bar{0})
\\
t&=4 &(\bar{0},1,1|0,1,1,\bar{0})
\end{align*}
The digraph of the sequential updating procedure obtained by the use of the pure vertical rules (VR)$_d$ and (VR)$_s$ characterizing the bilateral one-dimensional sandpiles would seem to be the one drawn below, where we neglect all the sequential transitions generating orbits which in any case lead to equilibrium configurations different from the required \virg{parallel} one $(\overline{0},1,1,|0,1,1,\overline{0})$.
$$
\xymatrix{
{}&0,0,|4,0,0\ar[dl]_{(VR)_s}\ar[dd]^{(PT)}\ar[dr]^{(VR)_d}&{}
\\
0,1,|3,0,0\ar[dr]_{(VR)_d}\ar@{-->}[d] &{}& 0,0,|3,1,0\ar[dl]^{(VR)_s}\ar@{-->}[d]
\\
0,2,|2,0,0&0,1,|2,1,0\ar[dd]^{(PT)}&0,0,|2,2,0
\\
{}&{}&{}
\\
{}&0,2,|0,2,0\ar[dl]_{(VR)_s}\ar[d]\ar[dr]^{(VR)_d}&{}
\\
1,1,|0,2,0\ar[dr]_{(VR)_d}&1,0|2,0,1\ar[d]&0,2,|0,1,1\ar[dl]^{(VR)_s}
\\
{}&1,1,|0,1,1&{}
}
$$
The only negative point of this digraph lies in the parallel transition $(\overline{0},0,1,|2,1,0,\overline{0})\xrightarrow{(PT)}(\overline{0},0,2,|0,2,0,\overline{0})$, which cannot be justified by the sequential application of the vertical rule (SIP1) since the input configuration $c_1=(\overline{0},0,1,|2,1,0,\overline{0})$ does not present any \virg{critical jump} of at least 2 in the height differences (indeed, for any $x\in\IZ$, $c_1(x)-c_1(x+1)\le 1$).
On the contrary, this result can necessarily be obtained by the appropriate application of both horizontal (SIP2) and bottom-up jump (SIP3) local rules, as shown in the partial diagram below which completes the overall dynamics seen above:
$$
\xymatrix{
{}&0,1,|2,1,0\ar[dl]_{(HR)_s}\ar[dd]^{(PT)}\ar[dr]^{(HR)_d}&{}
\\
0,2,|1,1,0\ar[dr]_{(BT)_d}&{}&0,1,|1,2,0\ar[dl]^{(BT)_s}
\\
{}& 0,2,|0,2,0&{}
}
$$
\end{example}
\begin{description}
\item[Conclusion FP2]
The present example shows that the expected equilibrium configuration of the parallel dynamics generated by the one-dimensional local rule (2a-FP) of the Formenti--Perrot (FP) model cannot be obtained through the sequential use of the vertical rule (SIP1) alone (this rule is not sufficient to reach the expected goal): the necessary intervention of the horizontal rule (SIP2), plus the relevant use of the bottom-up jump rule (SIP3), is required. And this is what we referred to in the Introduction as the \emph{spurious} icepile model.
\\
This means that the FP claim in \cite{FP20} that theirs is a model of sandpile dynamics is not correct, since in order to obtain the expected parallel equilibrium result, at least for this simple initial configuration of $N = 4$ granules centered in the origin, all the rules of a \virg{spurious} icepile model \emph{must} necessarily be applied sequentially.
\\
Therefore the title of their article, as well as the whole section 2.1, both of which explicitly refer to sandpiles, is incorrect: they should at least refer to a \emph{spurious icepile} model, as this counterexample shows.
\end{description}
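The central claim of Conclusion FP2, namely that the bilateral vertical rule alone cannot reproduce the parallel equilibrium of the $N=4$ counterexample, admits an exhaustive machine check. A Python sketch of ours (finite zero-padded windows, trimmed configurations, hence comparison up to translation):

```python
def trim(c):
    """Drop zero padding; configurations are compared up to translation."""
    c = list(c)
    while c and c[0] == 0:  c = c[1:]
    while c and c[-1] == 0: c = c[:-1]
    return tuple(c)

def vr_moves(c):
    """All results of one sequential vertical move (VR)_d or (VR)_s."""
    c = [0] + list(c) + [0]
    out = set()
    for x in range(len(c)):
        if x + 1 < len(c) and c[x] - c[x + 1] >= 2:     # (VR)_d
            d = list(c); d[x] -= 1; d[x + 1] += 1; out.add(trim(d))
        if x - 1 >= 0 and c[x] - c[x - 1] >= 2:         # (VR)_s
            d = list(c); d[x] -= 1; d[x - 1] += 1; out.add(trim(d))
    return out

def closure(c0):
    """All configurations reachable by sequential (VR) moves, up to translation."""
    seen, todo = {trim(c0)}, [trim(c0)]
    while todo:
        for d in vr_moves(todo.pop()):
            if d not in seen:
                seen.add(d); todo.append(d)
    return seen

reach = closure([4])
# the parallel equilibrium (0...,1,1,0,1,1,...0) is unreachable by (VR) moves alone
assert (1, 1, 0, 1, 1) not in reach
assert {c for c in reach if not vr_moves(c)} == {(1, 2, 1), (1, 1, 1, 1)}
print("(VR)-equilibria from (0...,4,...0):",
      sorted({c for c in reach if not vr_moves(c)}))
```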
\begin{example}\label{ex:FP-060-seq}
Let us analyse some transitions of the parallel dynamics generated by the one-dimensional local rule (2a-FP) of the Formenti--Perrot model discussed in example \ref{ex:FP-060}, starting from the initial state $(\bar{0}|6,\bar{0})$, as the results of the sequential application of the previously discussed three types of local rules.
\\
- The parallel transition (PT) from $t=0$ to $t=1$ can be decomposed into two sequential paths, each consisting of two steps. First of all, let us draw the corresponding two-step conventional sequential digraph
$$
\xymatrix{
{}&0,0|6,0,0\ar[dl]_{(VR)_s}\ar[dr]^{(VR_d)}\ar[dd]^{(PT)}&{}
\\
0,1|5,0,0\ar[dr]^{(VR_d)}\ar[d]_{(VR)_s} &{}&0,0|5,1,0\ar[dl]_{(VR)_s}\ar[d]^{(VR)_d}
\\
0,2|4,0,0&0,1|4,1,0&0,0|4,2,0
}
$$
Then, we have the following two paths towards the parallel configuration at time $t=1$:
\begin{align*}
&(\bar{0},0,|6,0,\bar{0})\xrightarrow{\text{(VR)$_s$}} (\bar{0},1,|5,0,\bar{0})\xrightarrow{\text{(VR)$_d$}} (\bar{0},1,|4,1,\bar{0})
\\
&(\bar{0},0,|6,0,\bar{0})\xrightarrow{\text{(VR)$_d$}} (\bar{0},0,|5,1,\bar{0})\xrightarrow{\text{(VR)$_s$}} (\bar{0},1,|4,1,\bar{0})
\end{align*}
- Neglecting the \virg{secondary} sequential transitions (dashed arrows), inessential with respect to obtaining the parallel ones, below we draw the transitions from $t=1$ to $t=2$, and from the latter to $t=3$, all involving the sequential vertical transitions (VR).
$$
\xymatrix{
{}&0,1|4,1,0\ar[dl]_{(VR)_s}\ar[dr]^{(VR_d)}\ar[dd]^{(PT)}&{}
\\
0,2|3,1,0\ar[dr]^{(VR)_d}\ar@{-->}[d] &{}&0,1|3,2,0\ar[dl]_{(VR)_s}\ar@{-->}[d]
\\
{}&0,2|2,2,0\ar[dr]_{(VR)_d}\ar[dl]^{(VR)_s}\ar[dd]^{(PT)}&{}
\\
1,1,|2,2,0\ar[dr]_{(VR)_d}\ar@{-->}[d]&{}&0,2,|2,1,1\ar[dl]^{(VR)_s}\ar@{-->}[d]
\\
{}&1,1,|2,1,1&{}
}
$$
- More interesting is the parallel transition (PT), drawn below, from time $t=3$ to time $t=4$, starting from the configuration at time $t=3$ as initial state, whose \virg{essential} sequential digraph (i.e., neglecting the \virg{secondary} transitions) necessarily involves, besides the (HR) horizontal transitions, also the (BT) transitions of bottom-to-top jumps of a granule.
$$
\xymatrix{
{}&1,1,|2,1,1\ar[dl]_{(HR)_s}\ar[dr]^{(HR)_d}\ar[dd]_{(PT)}&{}
\\
1,2,|1,1,1\ar[dr]_{(BT)_d}&{}&1,1,|1,2,1\ar[dl]^{(BT)_s}
\\
{}&1,2|0,2,1&{}
}
$$
The state at time $t=4$ of the parallel transition is reached by the following two sequential paths, in which a relevant role is played by the two bottom-to-top jumps (BT)$_d$ and (BT)$_s$.
\begin{align*}
&(\bar{0},1,1,|2,1,1,\bar{0})\xrightarrow{(HR)_s} (\bar{0},1,2,|1,1,1,\bar{0})\xrightarrow{(BT)_d} (\bar{0},1,2,|0,2,1,\bar{0})
\\
&(\bar{0},1,1,|2,1,1,\bar{0})\xrightarrow{(HR)_d} (\bar{0},1,1,|1,2,1,\bar{0})\xrightarrow{(BT)_s} (\bar{0},1,2,|0,2,1,\bar{0})
\end{align*}
Note that from this state onwards, all the essential sequential transitions justifying the parallel transitions of the chain $1,2,|0,2,1\xrightarrow{(PT)}\; 2,0,|2,0,2\xrightarrow{(PT)}\;1,0,2|0,2,0,1\xrightarrow{(PT)}\;1,1,0,|2,0,1,1$ necessarily involve the bottom-to-top (BT) transitions at some unavoidable point.
\end{example}
\section{Conclusions about the now discussed one-dimensional FP model, open questions and further developments}
As first conclusions, we summarize in the following points the main results obtained in the one-dimensional discussion of the FP model treated at length in section \ref{sc:FP-inter}.
\begin{enumerate}[(Co1)]
\item
The parallel application of the local rule (2a-FP) is not able to reproduce the \emph{canonical} sandpile dynamics generated by the local rule (VR) (or its equivalent version (VR-a)).
\item
On the contrary, the parallel dynamics generated by (2a-FP) is precisely that of a \emph{spurious symmetrical icepile}, obtained as suitable sequential applications of the following three rules:
\begin{enumerate}[(S{I}P1)]
\item
Vertical rules, both from left to right (VR)$_d$ and its dual from right to left (VR)$_s$, typical of the symmetric sandpiles of \cite{FMP07}.
\item
Icepile horizontal rules, of a single cell flowing both from left to right (HR)$_d$ and from right to left (HR)$_s$, in the presence of horizontal plateaus.
\item
Jump of a granule from the bottom to the top of a single height, both from left to right (BT)$_d$ and from right to left (BT)$_s$.
\end{enumerate}
\end{enumerate}
\begin{description}
\item[First important conclusion]
In our opinion, it is rather improper to entitle the Formenti-Perrot paper \cite{FP20} with an explicit reference to \virg{sandpiles} on a lattice, when what is really modelled is the situation of \emph{symmetrical icepiles}, with the further involvement of a \emph{spurious} law consisting of unusual (anti-gravitational) jumps of granules towards the top. It is certain that this approach will never be able to simulate the parallel version of classical one-dimensional sandpiles governed by the \emph{unique} standard vertical rule (VR), or its equivalent formulation (1a).
\\
This means that, in treating this subject, one must adopt the terminology of \virg{ice granules} instead of \virg{sand granules}.
\item[Second important conclusion]
The main focus of the paper consists in a comparison of the one-dimensional FP spurious symmetric icepile model, whose sequential version is based on the above three \virg{local rules} (SIP1)--(SIP3), with the standard GK sandpile model, which is not symmetric.
\\
This comparison is not at all correct, since GK is not symmetric, contrary to the FP model. In order to have a proper comparison it is necessary to investigate the following symmetric model, and this will be done in forthcoming papers, currently in draft form:
\end{description}
\begin{enumerate}[(SM1)]
\item
\emph{Symmetric sandpile model}. A symmetric model of sandpiles (SSPM), as a symmetric version of the sandpile model (SPM), is introduced and discussed in \cite{FMP07}; to the best of our knowledge, it is the unique contribution to this topic that can be found in the literature. Quoting from \cite{FMP07}: \virg{The new model follows the rules of SPM but it applies them in both directions}. Moreover, the dynamics is the one generated by the sequential updating of the sites, in which \virg{only one grain is allowed to move per time step [...] according to the following guidelines: (i) a grain can move either to the left or to the right, if the [height] difference is more than 2; (ii) when a grain can move only in one direction, it follows the SPM rule (right) or it symmetric (left). [...] The model is intrinsically sequential: only one grain moves at each time step}.
In \cite{CM21} we introduce the \emph{global transition function} as parallel application to any cell of the one dimensional lattice $\IZ$ of the following \emph{symmetric local rule}.
\\
$\oppA c\in\IN^\IZ,\;\oppA x\in\IZ$,
\begin{align*}
c'(x) = c(x)
&+\mathrm{H}\big(c(x)-c(x+1)\big)\Big(\mathrm{H}\big(c(x-1)-c(x)-2 \big) - \mathrm{H}\big( c(x)-c(x+1)-2\big)\Big)
\\
&+\mathrm{H}\big(c(x)-c(x-1)\big)\Big(-\mathrm{H}\big(c(x)-c(x-1)-2 \big) + \mathrm{H}\big(c(x+1)-c(x)-2 \big)\Big)
\end{align*}
This local rule is \emph{symmetric} in the sense that it formalizes the simultaneous action of the standard GK local rule in which \virg{a grain of sand tumbles from site $x$ to site $x+1$ [i.e., direction from left to right] if the height difference $c(x)-c(x+1)$ is at least 2} \cite{GK93}
$$
\oppA c\in\IN^\IZ,\;\oppA x\in\IZ,\quad c'(x)= c(x)+\mathrm{H}\Big(c(x-1)-c(x)-2\Big) -\mathrm{H} \Big(c(x)-c(x+1)-2\Big)
$$
and the dual right-to-left local rule in which a grain of sand tumbles from site $x$ to site $x-1$ [i.e., direction from right to left] if the height difference $c(x-1)-c(x)$ is at least 2
$$
\oppA c\in\IN^\IZ,\;\oppA x\in\IZ,\quad c'(x)= c(x)-\mathrm{H}\Big(c(x)-c(x-1)-2\Big) +\mathrm{H} \Big(c(x+1)-c(x)-2\Big)
$$
\end{enumerate}
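To make the symmetric local rule concrete, here is a small Python sketch (our own illustration, not part of the original treatment) that applies the rule in parallel to every cell of a finite window. Two assumptions are made only for this illustration: heights outside the window are taken equal to the nearest boundary value, and the Heaviside step is taken with $\mathrm{H}(y)=1$ for $y\ge 0$, so that $\mathrm{H}(a-b-2)=1$ exactly when the height difference $a-b$ is at least 2.

```python
# Sketch: parallel application of the symmetric local rule on a finite
# window.  Heights outside the window equal the nearest boundary value --
# an assumption made only for this illustration.

def H(y):
    # Heaviside step: H(y) = 1 iff y >= 0, so H(a - b - 2) = 1 iff a - b >= 2
    return 1 if y >= 0 else 0

def step(c):
    """One parallel update of every cell by the symmetric local rule."""
    n = len(c)
    ext = [c[0]] + list(c) + [c[-1]]          # pad with boundary heights
    out = []
    for x in range(1, n + 1):
        left, mid, right = ext[x - 1], ext[x], ext[x + 1]
        new = (mid
               + H(mid - right) * (H(left - mid - 2) - H(mid - right - 2))
               + H(mid - left) * (-H(mid - left - 2) + H(right - mid - 2)))
        out.append(new)
    return out

c = [0, 4, 0]                 # a single column of 4 grains
while True:
    nxt = step(c)
    if nxt == c:
        break
    c = nxt
print(c)                      # reaches the symmetric fixed point [1, 2, 1]
```

A column of four grains relaxes symmetrically, one grain tumbling to each side, and the total number of grains is conserved along the way.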
\section{Introduction and main results}
Let $A_n$ be an $n\times n$ matrix with real or complex entries. The linear statistics of
eigenvalues $\lambda_1,\lambda_2,\ldots, \lambda_n$ of $A_n$ is a
function of the form
$$
\frac{1}{n}\sum_{k=1}^{n}f(\lambda_k)
$$
where $f$ is some fixed function, known as the test function. One of the interesting objects to study in random matrix theory is the fluctuation of linear
statistics of eigenvalues of random matrices. The study of fluctuations of linear statistics of eigenvalues was initiated by Arharov \cite{arharov} in 1971 for sample covariance matrices. In 1975 Girko \cite{girko} studied the central limit theorem (CLT) for the traces of Wigner and sample covariance matrices using martingale techniques. In
1982, Jonsson \cite{jonsson} proved the
CLT of linear eigenvalue statistics for Wishart matrices using the method of moments. Since then, the fluctuations of
eigenvalues of various random matrices have been studied extensively by many authors.
For new results on fluctuations of linear eigenvalue statistics of Wigner and sample covariance matrices, see \cite{johansson1998}, \cite{soshnikov1998tracecentral}, \cite{bai2004clt}, \cite{lytova2009central}, \cite{shcherbina2011central}. For band and sparse random matrices, see \cite{anderson2006clt}, \cite{jana2014}, \cite{li2013central}, \cite{shcherbina2015} and for Toeplitz and band Toeplitz matrices, see \cite{chatterjee} and \cite{liu2012}.
In a recent article \cite{adhikari_saha2017}, the CLT for linear eigenvalue statistics has been established in total variation norm for the circulant matrices and of its variants with Gaussian entries. Here we consider the fluctuation problem for the circulant matrices with general entries which are independent and satisfy some moment condition.
A sequence is said to be an {\it input sequence} if the matrices under consideration are constructed from it. We consider an input sequence of the form $\{x_i: i\ge 0\}$
and the circulant matrix is defined as
$$
C_n=\left(\begin{array}{cccccc}
x_0 & x_1 & x_2 & \cdots & x_{n-2} & x_{n-1} \\
x_{n-1} & x_0 & x_1 & \cdots & x_{n-3} & x_{n-2}\\
\vdots & \vdots & {\vdots} & \ddots & {\vdots} & \vdots \\
x_1 & x_2 & x_3 & \cdots & x_{n-1} & x_0
\end{array}\right).
$$
For $j=1,2,\ldots, n-1$, its $(j+1)$-th row is obtained by giving its $j$-th row a right circular shift by one position, and the $(i,\;j)$-th element of the matrix is $x_{(j-i) \mbox{ \tiny{mod} } n}$.
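As a quick sanity check of this indexing (ours, not from the paper), the following Python snippet builds $C_n$ directly from the entry formula $x_{(j-i) \bmod n}$ and confirms the row-shift structure; the sample input sequence is arbitrary.

```python
import numpy as np

def circulant(x):
    """Circulant matrix whose (i, j) entry is x[(j - i) mod n]."""
    n = len(x)
    return np.array([[x[(j - i) % n] for j in range(n)] for i in range(n)])

C = circulant([10, 11, 12, 13])
# The first row is the input sequence itself; each subsequent row is a
# right circular shift of the previous one by one position.
print(C)
```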
In our first result we consider the fluctuation of linear eigenvalue statistics of circulant matrices with a polynomial test function. Let $P_d(x)=\sum_{k=2}^da_kx^k$ be a real polynomial of degree $d$ where $d\geq 2$.
\begin{theorem}\label{thm:cirpoly}
Suppose $C_n$ is the random circulant matrix with independent input sequence $\{\frac{X_i}{\sqrt n}\}_{i\geq 0}$ such that
\begin{equation}\label{eqn:condition}
\mbox{\bf E}(X_i)=0, \mbox{\bf E}(X_i^2)=1 \ \mbox{and}\ \sup_{i\geq 0}\mbox{\bf E}(|X_i|^k)=\alpha_k<\infty \ \mbox{for}\ k\geq 3.
\end{equation}
Then, as $n\to \infty$,
\begin{align*}
\frac{\Tr [P_d(C_n)]-\mbox{\bf E}\Tr [P_d(C_n)]}{\sqrt{n}}\stackrel{d}{\longrightarrow} N(0,\sigma_{p_d}^2),
\end{align*}
where $\sigma_{p_d}^2= \sum_{\ell=2}^d a_{\ell}^2{\ell}!\sum_{s=0}^{{\ell}-1}f_{{\ell}}(s)$ and $f_{\ell}(s)=\frac{1}{(\ell-1)!}\sum_{k=0}^{s}(-1)^k\binom{\ell}{k}(s-k)^{\ell-1}$.
\end{theorem}
We use the method of moments to prove Theorem \ref{thm:cirpoly}. Note that the constant term and the first degree term are not considered in the polynomial $P_d(x)$. The constant term of a matrix polynomial is a constant multiple of the identity matrix, and this term does not affect the fluctuation result for linear eigenvalue statistics of a random matrix, as we centre the linear eigenvalue statistics by its mean.
If we consider a degree one monomial of
the random circulant matrix $C_n$ with independent input sequence $\{\frac{X_i}{\sqrt n}\}_{i\geq 0}$ where $\mbox{\bf E}(X_i)=0$ then
$$\frac{\Tr(C_n)-\mbox{\bf E}\Tr(C_n)}{\sqrt n}=X_0.$$
Thus the limiting distribution depends on the distribution of $X_0$, and hence a CLT type result does not hold for the degree one monomial. For these reasons we have not considered the constant and first degree terms in $P_d(x)$.
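The limiting variance in Theorem \ref{thm:cirpoly} can be checked by a small Monte Carlo experiment (our own sanity check, not part of the argument). For $P(x)=x^2$ with standard Gaussian inputs, a direct computation (a special case of the trace formula derived in Section \ref{sec:poly}) gives $\Tr(C_n^2)=\sum_{k=0}^{n-1}X_kX_{(n-k) \bmod n}$, and the predicted limiting variance is $\sigma_{p_2}^2=2!\,(f_2(0)+f_2(1))=2$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 64, 3000

# Tr(C_n^2) = sum_k X_k X_{(n-k) mod n} when the input is {X_i / sqrt(n)};
# this avoids forming the n x n matrix in each trial.
idx = (n - np.arange(n)) % n
stats = np.empty(trials)
for t in range(trials):
    X = rng.standard_normal(n)
    stats[t] = X @ X[idx]
stats = (stats - stats.mean()) / np.sqrt(n)

print(stats.var())   # should be close to sigma^2 = 2 for P(x) = x^2
```

With $n=64$ and $3000$ trials the empirical variance lands well within sampling error of the predicted value $2$.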
Next we consider the fluctuation problem for circulant matrices in total variation norm. It has been shown in \cite{adhikari_saha2017} that
$$
\frac{\Tr(A_n^{p_n})-\mbox{\bf E}(\Tr(A_n^{p_n}))}{\sqrt{\mbox{Var}(\Tr(A_n^{p_n}))}} \;\mbox{converges in total variation norm to } N(0,1),
$$
as $n\to \infty$, where $p_n=o(\log n/\log\log n)$ and $A_n$ is one of the circulant, reverse circulant, symmetric circulant and Hankel matrices with Gaussian inputs. In this article, we show that the above result holds when the matrices are constructed from an input sequence which is subgaussian and whose law belongs to $\mathcal L(c_1,c_2)$ for some $c_1,c_2>0$.
\begin{definition}
For each $c_1,c_2>0$, let $\mathcal L(c_1,c_2)$ be the class of probability measures on $\mathbb R$ that arise as laws of random variables of the form $u(Z)$, where $Z$ is a standard Gaussian random variable and $u$ is a twice continuously differentiable function such that for all $x\in \mathbb R$
$$
|u'(x)|\le c_1 \;\;\mbox{ and } |u''(x)|\le c_2.
$$
\end{definition}
For example, the standard Gaussian random variable is in $\mathcal L(1,0)$. The uniformly distributed random variable on $[0,1]$ is in $\mathcal L((2\pi)^{-1/2}, (2\pi e)^{-1/2})$.
\begin{definition}
A random variable $X$ is said to be {\it $\sigma$-subgaussian} or subgaussian with parameter $\sigma$, $\sigma>0$, if $
\mbox{\bf E}[e^{tX}]\le e^{\sigma^2t^2/2}
$
for every $t\in \mathbb{R}$.
\end{definition}
For example, the Bernoulli random variable with mass at $+1$ and $-1$ with equal probability is $1$-subgaussian. More generally, if $X$ is a random variable with $\mbox{\bf E} [X]=0$ and $|X|\le \sigma$ for some $\sigma>0$, then $X$ is $\sigma$-subgaussian. The normal random variable with mean zero variance $\sigma^2$ is $\sigma$-subgaussian.
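The Rademacher example can be verified numerically; the short check below (ours, for illustration) uses the fact that for $X=\pm 1$ with equal probability, $\mbox{\bf E}[e^{tX}]=\cosh(t)$, and $1$-subgaussianity amounts to the classical inequality $\cosh(t)\le e^{t^2/2}$.

```python
import numpy as np

# Rademacher X (+1/-1 with probability 1/2): E[exp(tX)] = cosh(t).
# 1-subgaussianity means cosh(t) <= exp(t**2 / 2) for all real t;
# we check the inequality on a grid of t values.
t = np.linspace(-10, 10, 2001)
mgf = np.cosh(t)
bound = np.exp(t**2 / 2)
print(bool(np.all(mgf <= bound)))   # True
```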
Also note that if a random variable $X$ is $\sigma$-subgaussian, then its absolute moments
are bounded above by an expression involving $\sigma$ and
the gamma function (see e.g. \cite[p. 93]{stroock}). Therefore if $\{X_i\}_{i\geq 0}$ is a sequence of $\sigma$-subgaussian random variables, then $\sup_{i\geq 0}\mbox{\bf E}(|X_i|^k)<\infty$ for all $k\in \mathbb N$. We use this fact in the proof of Theorem \ref{thm:main1}. Now we have the following central limit theorem in total variation norm.
\begin{theorem}\label{thm:main1}
Suppose $C_n$ is the random circulant matrix with input sequence $\{\frac{X_i}{\sqrt n}\}_{i\geq 0}$ such that the $X_i$'s are independent symmetric $\sigma$-subgaussian random variables whose laws belong to $\mathcal L(c_1,c_2)$ for some finite $c_1$ and $c_2$. Then, as $n\to \infty$,
\begin{equation}\label{total variation result}
\frac{\Tr(P_d(C_n))-\mbox{\bf E}(\Tr(P_d(C_n)))}{\sqrt{\mbox{Var}(\Tr(P_d(C_n)))}} \;\mbox{converges in total variation to } N(0,1),
\end{equation}
where $P_d(x)=\sum_{k=2}^da_kx^k$, a real polynomial of degree $d\ge 2$.
\end{theorem}
\begin{remark}
As we are dealing with circulant matrices in this article, we have stated the total variation norm convergence result for circulant matrices only. But the result \eqref{total variation result} holds for other variants of circulant matrices also, namely, for reverse circulant and symmetric circulant matrices. For description of these matrices, see \cite{adhikari_saha2017}.
\end{remark}
Note that there is a large class of random variables which satisfy the assumptions on the input sequence in Theorem \ref{thm:main1}. For example, the standard Gaussian random variable, the symmetric uniform random variable and linear combinations of these two belong to $\mathcal L(c_1,c_2)$ for some $c_1,c_2\geq 0$ and are subgaussian. The proof of Theorem \ref{thm:main1} relies on Stein's method and the second order Poincar\'e inequality (see \cite{chatterjee}). In particular, we use Result \ref{re:sourav}, which is based on these tools.
The rest of the article is organized as follows. In Section \ref{sec:poly} we give a proof of Theorem \ref{thm:cirpoly} using moment method. In Section \ref{sec:totalvariation}, we prove Theorem \ref{thm:main1}.
\section{Proof of Theorem \ref{thm:cirpoly}}\label{sec:poly}
We first define some notation which will be used in the proof of Theorem \ref{thm:cirpoly}.
\begin{align}
A_p&=\{(i_1,\ldots,i_p)\in \mathbb{Z}^p\; : \; i_1+\cdots+i_p=0\;{(\mbox{mod $n$})},\; 0\le i_1,\ldots, i_p\le n-1\},\label{def:A_p}\\
A_{p}'&=\{(i_1,\ldots,i_p)\in \mathbb{Z}^p\; : \; i_1+\cdots+i_p=0\;{(\mbox{mod $n$})},\; 0\le i_1\neq i_2\neq \cdots\neq i_p\le n-1\},\nonumber\\
A_{p,s}&=\{(i_1,\ldots,i_p)\in \mathbb{Z}^p\; : \; i_1+\cdots+i_p=sn,\; 0\le i_1,\ldots, i_p\le n-1\},\nonumber \\
A_{p,s}'&=\{(i_1,\ldots,i_p)\in \mathbb{Z}^p\; : \; i_1+\cdots+i_p=sn,\; 0\le i_1\neq i_2\neq \cdots\neq i_p\le n-1\}.\nonumber
\end{align}
We prove Theorem \ref{thm:cirpoly} by the method of moments. To apply this method we need to calculate the higher order moments of the linear eigenvalue statistics of the circulant matrices, and these involve traces of higher powers of the circulant matrices. So we first calculate the trace of $(C_n)^p$ for a positive integer $p$.
Let $e_1,\ldots,e_n$ be the standard unit vectors in $\mathbb R^n$, i.e., $e_i=(0,\ldots,1,\ldots, 0)^t$ ($1$ in $i$-th place). Therefore we have
\begin{align*}
(C_n)e_i=\mbox{$i$-th column}=\sum_{i_1=0}^{n-1}x_{i_1}e_{i-i_1 \mbox{ mod $n$}},
\end{align*}
for $i=1,\ldots, n$. In the last equation $e_0$ stands for $e_n$. Repeating the procedure we get
\begin{align*}
(C_n)^2e_i=\sum_{i_1,i_2=0}^{n-1}x_{i_1}x_{i_2}e_{i-i_1-i_2 \mbox{ mod $n$}},
\end{align*}
for $i=1,\ldots, n$. Therefore in general we get
\begin{align*}
(C_n)^{p}e_i&=\sum_{i_1,\ldots,i_{p}=0}^{n-1}x_{i_1}\cdots x_{i_{p}}e_{i-i_1-i_2-\cdots -i_p \mbox{ mod $n$}},
\end{align*}
for $i=1,\ldots, n$. Therefore the trace of $C_n^{p}$ can be written as
\begin{align}\label{trace formula C_n}
\Tr(C_n^{p})=\sum_{i=1}^{n}e_i^t(C_n)^{p}e_i=n\sum_{A_{p}}x_{i_1}\cdots x_{i_{p}},
\end{align}
where $A_p$ is as defined in \eqref{def:A_p}. For a similar result on the trace of band Toeplitz matrix see \cite{liu_wang2011}.
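The trace formula \eqref{trace formula C_n} can be verified directly for small $n$ and $p$. The brute-force enumeration below (our check, not in the paper) compares it against a matrix power computed by numpy; unscaled integer inputs $x_i$ keep the comparison exact.

```python
import itertools
import numpy as np

def circulant(x):
    # (i, j) entry is x[(j - i) mod n]
    n = len(x)
    return np.array([[x[(j - i) % n] for j in range(n)] for i in range(n)],
                    dtype=np.int64)

x = [2, -1, 3, 5]            # unscaled integer inputs keep the check exact
n, p = len(x), 3

# Left side: trace of C_n^p computed from the matrix.
lhs = int(np.trace(np.linalg.matrix_power(circulant(x), p)))

# Right side: n * sum over A_p = {tuples whose sum is 0 mod n}.
rhs = n * sum(x[i1] * x[i2] * x[i3]
              for i1, i2, i3 in itertools.product(range(n), repeat=p)
              if (i1 + i2 + i3) % n == 0)
print(lhs == rhs)   # True
```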
The following result will be used in the proof of Theorem \ref{thm:cirpoly}.
\begin{result}\label{ft:variance}
Consider $A_{p}$ as defined above. Then
\begin{align*}
\lim_{n\to \infty}\frac{|A_p|}{n^{p-1}}=\sum_{s=0}^{p-1}\lim_{n\to \infty}\frac{|A_{p,s}|}{n^{p-1}}=\sum_{s=0}^{p-1}f_p(s),
\end{align*}
where $$f_p(s)=\frac{1}{(p-1)!}\sum_{k=0}^{s}(-1)^k\binom{p}{k}(s-k)^{p-1}.$$
\end{result}
For the proof of Result \ref{ft:variance}, we refer to \cite[Lemma 13]{adhikari_saha2017}.
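Result \ref{ft:variance} can be illustrated numerically. The snippet below (our illustration) counts $|A_{p,s}|$ by brute force for $p=3$, checks the exact identity $\sum_s |A_{p,s}| = n^{p-1}$, and compares the ratios $|A_{p,s}|/n^{p-1}$ with $f_3(0)=0$ and $f_3(1)=f_3(2)=1/2$; the choice $n=60$ is arbitrary.

```python
import itertools
from math import comb, factorial

def f(p, s):
    # f_p(s) = (1/(p-1)!) * sum_k (-1)^k C(p, k) (s - k)^(p-1)
    return sum((-1)**k * comb(p, k) * (s - k)**(p - 1)
               for k in range(s + 1)) / factorial(p - 1)

p, n = 3, 60
counts = {s: 0 for s in range(p)}
for tup in itertools.product(range(n), repeat=p):
    total = sum(tup)
    if total % n == 0:          # tuple lies in A_p; record which A_{p,s}
        counts[total // n] += 1

print(sum(counts.values()) == n**(p - 1))     # exact: |A_p| = n^(p-1)
for s in range(p):
    print(s, counts[s] / n**(p - 1), f(p, s))  # ratios approach f_p(s)
```

The convergence is of order $1/n$, so the ratios at $n=60$ already sit within a few percent of the limits.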
Assuming this result, we proceed to prove Theorem \ref{thm:cirpoly}.
\begin{proof}[Proof of Theorem \ref{thm:cirpoly}]
We first calculate expected value of $\Tr[P_d(C_n)]$.
Using the trace formula \eqref{trace formula C_n}, we get
\begin{align*}
\mbox{\bf E}(\Tr[P_d(C_n)])&=\sum_{k=2}^da_k\mbox{\bf E}\Tr[C_n^k]=\sum_{k=2}^d \frac{a_k}{n^{\frac{k}{2}-1}} \sum_{A_k}\mbox{\bf E}[X_{i_1}\cdots X_{i_k}].
\end{align*}
Note that, for $\mbox{\bf E}[X_{i_1}\cdots X_{i_k}]$ to be non-zero, each random variable has to appear at least twice, as the random variables have mean zero. Moreover, the index variables satisfy one constraint, since $(i_1,i_2,\ldots,i_k)$ belongs to $A_k$. Thus we have at most $(\frac{k}{2}-1)$ free choices in the index set. Due to this fact and \eqref{eqn:condition}, we have
\begin{align}\label{eqn:mean}
\mbox{\bf E}(\Tr[P_d(C_n)])=O(1).
\end{align}
Now we calculate the limit of the variance of $\frac{\Tr[P_d(C_n)]-\mbox{\bf E}(\Tr[P_d(C_n)])}{\sqrt n}$. This variance calculation will help us to understand the behaviour of higher order central moments of $\Tr[P_d(C_n)]$ as $n$ tends to infinity.
By \eqref{eqn:mean} we have
\begin{align*}
\lim_{n\to \infty}\mbox{Var}\left(\frac{\Tr[P_d(C_n)]-\mbox{\bf E}(\Tr[P_d(C_n)])}{\sqrt n}\right)=\lim_{n\to \infty}\frac{1}{n}\mbox{\bf E}(\Tr[P_d(C_n)])^2.
\end{align*}
Expanding the polynomial $P_d$ and using the trace formula \eqref{trace formula C_n}, we have
\begin{align}\label{eqn:var}
\frac{1}{n}\mbox{\bf E}(\Tr[P_d(C_n)])^2&=\sum_{i_1,i_2=2}^d a_{i_1}a_{i_2}\frac{1}{n^{\frac{i_1+i_2}{2}-1}}\sum_{A_{i_1}, A_{i_2}}\mbox{\bf E}[X_{j_1}\cdots X_{j_{i_1}}X_{k_1}\cdots X_{k_{i_2}}]\nonumber
\\&=\sum_{i_1,i_2=2}^da_{i_1}a_{i_2}\frac{1}{n^{\frac{i_1+i_2}{2}-1}}\sum_{s=0}^{i_1-1}\sum_{t=0}^{i_2-1}\sum_{A_{i_1,s}, A_{i_2,t}}\mbox{\bf E}[X_{j_1}\cdots X_{j_{i_1}}X_{k_1}\cdots X_{k_{i_2}}].
\end{align}
Note that, for a non-zero contribution, no random variable can appear only once, as the random variables are independent and have zero mean. Therefore each index in $\{j_1,\ldots,j_{i_1},k_1,\ldots, k_{i_2}\}$ has to appear at least twice. Observe that, if there is a self-matching in $\{j_1,\ldots, j_{i_1}\}$ or in $\{k_1,\ldots, k_{i_2}\}$, then the indices satisfy at least two equations. Therefore in such cases we have $|A_{i_1,s}||A_{i_2,t}|=O(n^{\frac{i_1+i_2}{2}-2})$. As all the moments of the input random variables are finite by \eqref{eqn:condition}, we have
$$
\sum_{A_{i_1,s}, A_{i_2,t}}\mbox{\bf E}[X_{j_1}\cdots X_{j_{i_1}}X_{k_1}\cdots X_{k_{i_2}}]=O(n^{\frac{i_1+i_2}{2}-2}),
$$
when $A_{i_1,s}, A_{i_2,t}$ satisfy the self-matching condition. Therefore the maximum contribution comes when $\{j_1,\ldots, j_{i_1}\}$ is matched with $\{k_1,\ldots, k_{i_2}\}$ completely. This is possible only when $i_1=i_2$ and $s=t$; otherwise there will be a self-matching either in $\{j_1,\ldots, j_{i_1}\}$ or in $\{k_1,\ldots, k_{i_2}\}$. Thus, from \eqref{eqn:var}, we get
\begin{align*}
\lim_{n\to \infty}\frac{1}{n}\mbox{\bf E}(\Tr[P_d(C_n)])^2=\lim_{n\to \infty}\sum_{i=2}^d a_{i}^2\ i! \frac{1}{n^{i-1}}\sum_{s=0}^{i-1}\sum_{A_{i,s}}\mbox{\bf E}[X_{j_1}^2\cdots X_{j_{i}}^2].
\end{align*}
The factor $i!$ appears because $\{k_1,\ldots, k_{i}\}$ can be matched with a given vector $(j_1,j_2,\ldots, j_{i})$ in $i!$ ways.
The maximum contribution comes when $(j_1,\ldots, j_{i})$ consists of distinct elements, and that contribution is $O(n^{i-1})$; otherwise the contribution is of order $O(n^{i-2})$. Therefore we have
\begin{align*}
\lim_{n\to \infty}\frac{1}{n}\mbox{\bf E}(\Tr[P_d(C_n)])^2&=\lim_{n\to \infty}\sum_{i=2}^da_{i}^2\ i! \frac{1}{n^{i-1}}\sum_{s=0}^{i-1}\sum_{A_{i,s}'}\mbox{\bf E}[X_{j_1}^2\cdots X_{j_{i}}^2]
\\&=\sum_{i=2}^da_{i}^2\ i! \sum_{s=0}^{i-1}\lim_{n\to \infty}\frac{|A_{i,s}'|}{n^{i-1}}
=\sum_{i=2}^da_{i}^2\ i! \sum_{s=0}^{i-1}\lim_{n\to \infty}\frac{|A_{i,s}|}{n^{i-1}},
\end{align*}
where $A_{i,s}'$ and $A_{i,s}$ are as defined in \eqref{def:A_p}. The last equality holds because if any two indices of $(j_1,\ldots, j_{i})$ are equal then $|A_{i,s}|=O(n^{i-2})$, which contribute zero in the limit. Therefore from Result \ref{ft:variance}, we get
\begin{align*}
\lim_{n\to \infty}\frac{1}{n}\mbox{\bf E}(\Tr[P_d(C_n)])^2=\sum_{i=2}^da_{i}^2\ i! \sum_{s=0}^{i-1}f_i(s).
\end{align*}
Thus the limiting variance $\sigma_{P_d}^2$ is given by
\begin{align}\label{liming variance}
\sigma_{p_d}^2=\lim_{n\to \infty}\mbox{Var}\left(\frac{\Tr[P_d(C_n)]-\mbox{\bf E}(\Tr[P_d(C_n)])}{\sqrt n}\right)=\sum_{i=2}^da_{i}^2\ i! \sum_{s=0}^{i-1}f_i(s).
\end{align}
Next we calculate the higher order moments of $\frac{\Tr[P_d(C_n)]-\mbox{\bf E}(\Tr[P_d(C_n)])}{\sqrt n}$. Using the binomial expansion we have
\begin{align}\label{eqn:expansion}
\left(\frac{\Tr [P_d(C_n)]-\mbox{\bf E}\Tr [P_d(C_n)]}{\sqrt{n}}\right)^k&=\frac{1}{n^{\frac{k}{2}}}\sum_{j=0}^{k}(-1)^{k-j}\binom{k}{j}(\Tr [P_d(C_n)])^j(\mbox{\bf E}\Tr [P_d(C_n)])^{k-j}.
\end{align}
Since $\mbox{\bf E}\Tr [P_d(C_n)]=O(1)$ (see \eqref{eqn:mean}), we focus on $(\Tr [P_d(C_n)])^j$. By expanding the polynomial we get
\begin{align}\label{eqn:exp2}
(\Tr [P_d(C_n)])^j=\sum_{I_j}a_{i_1}a_{i_2}\ldots a_{i_j}[\Tr C_n^{i_1}\cdots \Tr C_n^{i_j}],
\end{align}
where $I_j=\{(i_1,\ldots , i_{j})\; :\; 2\le i_1,\ldots, i_j\le d\}$. From the trace formula \eqref{trace formula C_n} of the circulant matrix, we have
\begin{equation}\label{eqn_referee}
\mbox{\bf E}[\Tr C_n^{i_1}\cdots \Tr C_n^{i_j}]=\frac{1}{n^{\frac{i_1+\cdots+i_j}{2}-j}}\sum_{A^{(i_1,\ldots, i_j)}}\mbox{\bf E}\left(\prod_{\ell=1}^{j}[X_{k_{\ell,1}}\cdots X_{k_{\ell,i_{\ell}}}]\right),
\end{equation}
where $A^{(i_1,\ldots, i_j)}=\{(A_{i_1},\ldots, A_{i_j}):\; 2\le i_1,\ldots, i_j\le d\}$ and $A_{i_1},\ldots,A_{i_j}$ are as defined in \eqref{def:A_p}. Also note that, in the sum on the right hand side of \eqref{eqn_referee}, for each $\ell$ we have $(k_{\ell,1},k_{\ell,2},\ldots,k_{\ell,i_\ell})\in A_{i_\ell}$. For a non-zero contribution, each random variable in $\{X_{k_{\ell,1}},\ldots ,X_{k_{\ell,i_{\ell}}}\; :\; \ell=1,\ldots, j\}$ must occur at least twice, as the random variables have mean zero. Observe that, following the arguments given in the variance calculation, we get the maximum contribution when for every $\ell$ there exists $\ell'$ such that $i_{\ell}=i_{\ell '}$ and the sets $\{{k_{\ell,1}},\ldots ,{k_{\ell,i_{\ell}}}\}$ and $\{{k_{\ell',1}},\ldots ,{k_{\ell',i_{\ell'}}}\}$ coincide and have distinct elements. Therefore we need a pair matching in $\{i_1,\ldots,i_j\}$ to get the maximum contribution. In the other cases we have lower order contributions, as all the moments of the random variables are finite. Thus we get
\begin{align}\label{eqn:lesscontribution}
\sum_{A^{(i_1,\ldots, i_j)}}\mbox{\bf E}\left(\prod_{\ell=1}^{j}[X_{k_{\ell,1}}\cdots X_{k_{\ell,i_{\ell}}}]\right)=O(n^{\frac{i_1+\cdots+i_{j}}{2}-\lceil \frac{j}{2}\rceil}).
\end{align}
Therefore using \eqref{eqn:lesscontribution}, from \eqref{eqn:exp2} we get
\begin{align}\label{eqn:moment}
\mbox{\bf E}(\Tr [P_d(C_n)])^j=O(n^{j-\lceil \frac{j}{2}\rceil}).
\end{align}
Therefore using \eqref{eqn:mean} and \eqref{eqn:moment}, from \eqref{eqn:expansion} we get
\begin{align*}
\lim_{n\to \infty}\mbox{\bf E}\left(\frac{\Tr [P_d(C_n)]-\mbox{\bf E}\Tr [P_d(C_n)]}{\sqrt{n}}\right)^k=0, \;\;\mbox{when $k$ is odd}.
\end{align*}
Next we calculate the even moments. We use $2k$ instead of $k$. Again due to \eqref{eqn:mean} and \eqref{eqn:moment}, from \eqref{eqn:expansion} we get
\begin{align*}
&\lim_{n\to \infty}\mbox{\bf E}\left(\frac{\Tr [P_d(C_n)]-\mbox{\bf E}\Tr [P_d(C_n)]}{\sqrt{n}}\right)^{2k}=\lim_{n\to \infty}\frac{1}{n^k}\mbox{\bf E}(\Tr [P_d(C_n)])^{2k}
\\=&\frac{(2k)!}{k!2^k}\sum_{I_k}a_{i_1}^2\cdots a_{i_k}^2 \lim_{n\to \infty}\frac{i_1!\cdots i_k!}{n^{i_1+\cdots+i_k-k}}\sum_{A^{(i_1,\ldots, i_k)}}\mbox{\bf E}\left[\prod_{\ell=1}^{k}[X_{k_{\ell,1}}^2\cdots X_{k_{\ell,i_{\ell}}}^2]\right].
\end{align*}
The factor $\frac{(2k)!}{k!2^k}$ appears because that is the number of pair matchings possible among the $2k$ variables $\{i_1,\ldots, i_{2k}\}$.
After the pair matching in $\{i_1,\ldots, i_{2k}\}$, we rename the indices as $\{i_1,\ldots, i_k\}$. The factor $i_1!\cdots i_k!$ appears because, for $\ell=1,\ldots, k$, each vector $(k_{\ell,1},\ldots,k_{\ell,i_{\ell}})$ can be pair matched with $\{k'_{\ell,1},\ldots,k'_{\ell,i_{\ell}}\}$ in $i_{\ell}!$ ways. Now we have
\begin{align*}
\lim_{n\to \infty}\frac{1}{n^{i_1+\cdots+i_k-k}}\sum_{A^{(i_1,\ldots, i_k)}}\mbox{\bf E}\left[\prod_{\ell=1}^{k}[X_{k_{\ell,1}}^2\cdots X_{k_{\ell,i_{\ell}}}^2]\right]=\lim_{n\to \infty}\frac{|A^{(i_1,\ldots, i_k)'}|}{n^{i_1+\cdots+i_k-k}},
\end{align*}
where $A^{(i_1,\ldots, i_k)'}=\{(A_{i_1}',\ldots, A_{i_k}')\;:\; \mbox{all coordinates are distinct throughout all } A_{i_l}'\}$ and $A_{i_l}', 1\leq l\leq k$ are as in \eqref{def:A_p}. Again we have
$$
\lim_{n\to \infty}\frac{|A^{(i_1,\ldots, i_k)'}|}{n^{i_1+\cdots+i_k-k}}=\lim_{n\to \infty}\frac{|A^{(i_1,\ldots, i_k)}|}{n^{i_1+\cdots+i_k-k}}=\prod_{\ell=1}^k\lim_{n\to \infty}\frac{|A_{i_{\ell}}|}{n^{i_{\ell}-1}}.
$$
Therefore by Result \ref{ft:variance}, we get
\begin{align}\label{2kth moment of normal}
\lim_{n\to \infty}\mbox{\bf E}\left(\frac{\Tr [P_d(C_n)]-\mbox{\bf E}\Tr [P_d(C_n)]}{\sqrt{n}}\right)^{2k}&=\frac{(2k)!}{k!2^k}\sum_{I_k}\prod_{\ell=1}^k a_{i_{\ell}}^2\ i_{\ell}! \sum_{s=0}^{i_{\ell}-1}f_{i_{\ell}}(s) \nonumber\\
&=\frac{(2k)!}{k!2^k} \left( \sum_{i=2}^d a_{i}^2\ i! \sum_{s=0}^{i-1}f_{i}(s)\right)^k,
\end{align}
where $I_k=\{(i_1,\ldots , i_{k})\; :\; 2\le i_1,\ldots, i_k\le d\}$. The final expression in \eqref{2kth moment of normal} is the $2k$-th moment of $N(0,\sigma_{p_d}^2)$ and this completes the proof.
\end{proof}
\section{Proof of Theorem \ref{thm:main1}}\label{sec:totalvariation}
In this section we give the proof of Theorem \ref{thm:main1}. The following result is the key ingredient for the proof.
\begin{result}\label{re:sourav}\cite[Theorem 2.2]{chatterjee}
{\it Let $X=(X_1,X_2,\ldots,X_n)$ be a vector of independent random variables in $\mathcal L(c_1,c_2)$ for some finite $c_1,c_2$. Take any $g\in C^2(\mathbb R^n)$ and let $\nabla g$ and $\nabla^2 g$ denote the gradient and Hessian of $g$. Let
\begin{align*}
\kappa_0= \left(\mbox{\bf E}\sum_{k=1}^n\left|\frac{\partial g}{\partial x_k}(X)\right|^4\right)^{\frac{1}{2}},\
\kappa_1= (\mbox{\bf E}\|\nabla g(X) \|^4)^{\frac{1}{4}} \mbox{ and }
\kappa_2= (\mbox{\bf E}\|\nabla^2 g(X) \|^4)^{\frac{1}{4}}.
\end{align*}
Suppose $W=g(X)$ has a finite fourth moment and $\sigma^2=\mbox{Var}(W)$. Let $Z$ be a normal random variable having the same mean and variance as $W$. Then
$$
d_{TV}(W,Z)\le \frac{2\sqrt{5}(c_1c_2\kappa_0+c_1^3\kappa_1\kappa_2)}{\sigma^2}.
$$}
\end{result}
We use Result \ref{re:sourav} to prove Theorem \ref{thm:main1}, and for that we need to estimate $\kappa_0, \kappa_1,\kappa_2$ and $\sigma^2$. The following lemma gives the estimates of these quantities.
\begin{lemma}\label{lem:kappa}
Let $g(X_0,X_1,\ldots,X_{n-1})=\Tr(P_d(C_n))$ and consider $\kappa_0,\kappa_1$ and $\kappa_2$ as defined in Result \ref{re:sourav}. Then
\begin{align*}
\kappa_0&= O(n^{\frac{1}{2}}),\;\;
\kappa_1= O(n^{\frac{1}{2}})\;\;\mbox{ and }\;\;
\kappa_2=O\left(\frac{1}{n}(\sqrt{ \log n})^{d-2}\right).
\end{align*}
\end{lemma}
\noindent Assuming Lemma \ref{lem:kappa} we proceed to prove Theorem \ref{thm:main1}.
\begin{proof}[Proof of Theorem \ref{thm:main1}] Let $W_n=\Tr(P_d(C_n))$. Using Lemma \ref{lem:kappa} in Result \ref{re:sourav}, we get
\begin{align}\label{eqn:TVnorm}
d_{TV}(W_n,Z_n)\le \frac{O(\sqrt n )}{\mbox{Var}(\Tr(P_d(C_n)))},
\end{align}
where $Z_n$ is a normal random variable having the same mean and variance as $W_n$.
Now from the variance calculation \eqref{liming variance} in the proof of Theorem \ref{thm:cirpoly}, we get
\begin{align*}
\lim_{n\to \infty}\frac{1}{n}\mbox{Var}(\Tr(P_d(C_n)))=\sigma_{P_d}^2.
\end{align*}
This implies that the right hand side of \eqref{eqn:TVnorm} goes to zero as $n\to \infty$, since $\sigma_{P_d}^2>0$. Hence the result follows.
\end{proof}
\noindent It remains to prove Lemma \ref{lem:kappa}. The following result will be used for estimating $\kappa_2$.
\begin{result}\label{res:norm1}
Let $C_n$ be a circulant matrix with input sequence $\{\frac{X_i}{\sqrt n}\}$, where $X_i$'s are symmetric $\sigma$-subgaussian. Then, for some $\alpha>0$,
$$
\| C_n \|\le \alpha\sqrt{\log n} \;\;\mbox{a.s.},
$$
where $\|C_n\|:=\sup \{\|C_n x\|_2: x\in \mathbb{R}^n\}$ and $\|x\|_2=\sqrt{\sum_{i=1}^nx_i^2}$ for $x=(x_1,\ldots,x_n)^t\in~\mathbb{R}^n$.
\end{result}
We skip the proof of Result \ref{res:norm1}. For a proof, see the proof of Theorem 8 and Remark 19 in \cite{adhikari_saha2017}; see also \cite{meckes}. The following result from \cite{chatterjee} will be used in the proof of Lemma \ref{lem:kappa}.
\begin{result}\label{re:norm}
{\it Let $A=(a_{ij})_{1\le i,j\le n}$ be an arbitrary square matrix with complex entries. Let $f(z)=\sum_{m=0}^{\infty}b_mz^m$ be an entire function. Define two associate entire functions $f_1=\sum_{m=1}^{\infty}m|b_m|z^{m-1}$ and $f_2=\sum_{m=2}^{\infty}m(m-1)|b_m|z^{m-2}$. Then, for each $i,j$, we have
$$\frac{\partial}{\partial a_{ij}}\Tr(f(A))=(f'(A))_{ji}.$$
Next, for each $1\le i,j,k,\ell\le n$, let
$$
h_{ij,k\ell}=\frac{\partial^2}{\partial a_{ij}\partial a_{k\ell}}\Tr(f(A)).
$$
Let $H$ be the $n^2\times n^2$ matrix $(h_{ij,k\ell})_{1\le i,j,k,\ell\le n}$. Then $\|H\|\le f_2(\|A\|)$.}
\end{result}
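The first identity of Result \ref{re:norm} is easy to test by finite differences. The snippet below (our illustration) does so for $f(z)=z^3$, where $f'(A)=3A^2$; since perturbing a single off-diagonal entry makes $\Tr((A+hE)^3)$ exactly linear plus a constant in $h$ (as $E^2=0$ for $E=e_ie_j^t$ with $i\neq j$), the central difference agrees with $(f'(A))_{ji}$ up to floating-point noise.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
i, j, h = 1, 3, 1e-6

def tr_f(M):                       # Tr(f(M)) with f(z) = z^3
    return np.trace(M @ M @ M)

E = np.zeros_like(A)
E[i, j] = 1.0
fd = (tr_f(A + h * E) - tr_f(A - h * E)) / (2 * h)   # central difference
exact = 3.0 * (A @ A)[j, i]                          # (f'(A))_{ji}
print(abs(fd - exact))   # floating-point noise level
```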
For the proof of Result \ref{re:norm}, we refer to \cite[Lemma 5.4]{chatterjee}. We use the following notation: For positive integers $p$ and $q$, define
\begin{align*}
N_{p}^{q}&=\{(i_1,i_2,\ldots,i_p)\; : \; i_1+i_2+\cdots + i_p=q, 0\le i_1,i_2,\ldots,i_p\le n-1\}.
\end{align*}
\begin{proof}[Proof of Lemma \ref{lem:kappa}]
Let $g(X_0,X_1,\ldots,X_{n-1})=\Tr(P_d(C_n))$. Then from the trace formula \eqref{trace formula C_n} of $C_n$, we have
$$
g(X)=\sum_{k=2}^{d}\frac{a_k}{n^{\frac{k}{2}-1}}\sum_{A_k}X_{i_1}X_{i_2}\cdots X_{i_k}=\sum_{k=2}^{d}\frac{a_k}{n^{\frac{k}{2}-1}}\sum_{s=0}^{k-1}\sum_{N_{k}^{sn}}X_{i_1}X_{i_2}\cdots X_{i_k},
$$
where $X=(X_0,X_1,\ldots,X_{n-1})$. Therefore, for $0\le j,\ell\le n-1$, we have
\begin{align*}
\frac{\partial g}{\partial x_j}(X)&=\sum_{k=2}^{d}\frac{a_k}{n^{\frac{k}{2}-1}}\sum_{s=0}^{k-1}k \sum_{N_{k-1}^{sn-j}}X_{i_1}X_{i_2}\cdots X_{i_{k-1}} \;\mbox{ and } \\\frac{\partial^2 g}{\partial x_{\ell}\partial x_j}(X)&=\sum_{k=2}^{d}\frac{a_k}{n^{\frac{k}{2}-1}}\sum_{s=0}^{k-1}k(k-1)\sum_{N_{k-2}^{sn-j-\ell}}X_{i_1}X_{i_2}\cdots X_{i_{k-2}}.
\end{align*}
Therefore we have
\begin{align}\label{derivative of g power four}
\mbox{\bf E}\left|\frac{\partial g}{\partial x_j}(X)\right|^4=\sum_{I_4}\frac{k_1k_2k_3k_4a_{k_1}a_{k_2}a_{k_3}a_{k_4}}{n^{\frac{k_1+k_2+k_3+k_4}{2}-4}}\sum_{S(k_1,k_2,k_3,k_4)}\sum_{N_{k_1,\ldots,k_4}^{s_1,\ldots,s_4}}\mbox{\bf E}\prod_{j=1}^4[X_{i_{j,1}}\cdots X_{i_{j,k_j-1}}],
\end{align}
where
\begin{align*}I_4&=\{(k_1,\ldots,k_4)\; :\; 2\le k_1,\ldots,k_4\le d\},\\
S(k_1,k_2,k_3,k_4)&=\{(s_1,\ldots,s_4): 0\le s_j\le k_j-1,j=1,\ldots,4\},\\
N_{k_1,\ldots,k_4}^{s_1,\ldots,s_4}&=(N_{k_1-1}^{ s_1n-j},N_{k_2-1}^{ s_2n-j},N_{k_3-1}^{ s_3n-j},N_{k_4-1}^{ s_4n-j}).
\end{align*}
The input random variables are independent and have mean zero, as they are symmetric and $\sigma$-subgaussian. Therefore each random variable has to appear at least twice for a non-zero contribution on the right hand side of \eqref{derivative of g power four}. Note that the total number of variables in the set $N_{k_1,\ldots,k_4}^{s_1,\ldots,s_4}$ is $k_1+k_2+k_3+k_4-4$. Following the arguments given to find the limiting variance in the proof of Theorem \ref{thm:cirpoly}, we get
\begin{align*}
\sum_{N_{k_1,\ldots,k_4}^{s_1,\ldots,s_4}}\mbox{\bf E}\prod_{j=1}^4[X_{i_{j,1}}\cdots X_{i_{j,k_j-1}}]=O(n^{\frac{k_1+\cdots+k_4-4}{2}-2})=O(n^{\frac{k_1+\cdots+k_4}{2}-4}),
\end{align*}
as the input random variables are $\sigma$-subgaussian. Since the degree $d$ of the polynomial is fixed, we have
\begin{align*}
\mbox{\bf E}\left|\frac{\partial g}{\partial x_j}(X)\right|^4=O(1) \;\; \mbox{and }\kappa_0=\left(\mbox{\bf E}\sum_{j=0}^{n-1}\left|\frac{\partial g}{\partial x_j}(X)\right|^4\right)^{\frac{1}{2}}=O(n^{\frac{1}{2}}).
\end{align*}
Using Cauchy-Schwarz inequality and the bound of $\kappa_0$, we have
\begin{align*}
\kappa_1=\left(\mbox{\bf E}\|\nabla g\|^4\right)^{\frac{1}{4}}=\left(\mbox{\bf E}\left(\sum_{k=0}^{n-1}\left|\frac{\partial g}{\partial x_k}(X)\right|^2\right)^2\right)^{\frac{1}{4}}=O(n^{\frac{1}{2}}).
\end{align*}
Now we use Result \ref{re:norm} to get an upper bound for $\kappa_2$. Let $f(z)=P_d(z)$ and $A=C_n$. Then $a_{ij}=\frac{1}{\sqrt n}X_{j-i (\mbox{ mod $n$})}$; in particular, $a_{1i}=\frac{1}{\sqrt n} X_{i-1}$ for $i=1,\ldots, n$. Considering the matrix $A$ as an $n^2\times 1$ vector $(a_{11},\ldots, a_{1n},a_{21},\ldots, a_{2n},a_{31}, \ldots, a_{nn})^t$, the matrix $H=(h_{ij,k\ell})$, where $h_{ij,k\ell}=\frac{\partial^2}{\partial a_{ij}\partial a_{k\ell}}\Tr(P_d(A)),$ has the following form
$$
H=\left(\begin{array}{cc}
n[\nabla^2g]_{n\times n} & *
\\ * & *
\end{array}\right)_{n^2\times n^2}.
$$
Note that the factor $n$ appears in the first $n\times n$ block of $H$ due to the change of variables from $\{a_{11}, \ldots, a_{1n}\}$ to $\{x_0,\ldots, x_{n-1}\}$. From Results \ref{re:norm} and \ref{res:norm1}, we have
$$
\|\nabla^2g\|\le \frac{1}{n}\|H\|\le \frac{1}{n} f_2(\|C_n\|)\le C\frac{1}{n}(\sqrt{\log n})^{d-2} \;\;\mbox{ a.s.}
$$
for some non-random constant $C$.
Therefore we have
$$
\kappa_2=(\mbox{\bf E}\|\nabla^2 g(X)\|^4)^{\frac{1}{4}}=O\left(\frac{1}{n}(\sqrt{\log n})^{d-2}\right).
$$
This completes the proof.
\end{proof}
\noindent{\bf Acknowledgement:} We would like to thank Prof. Arup Bose for his comments. We thank both the referees for their useful suggestions.
\bibliographystyle{amsplain}
\section{Introduction}
Renormalized perturbative quantum field theory describes large parts of physics, in particular particle physics, with good, and sometimes spectacular, precision. It is, however, a conceptually and technically complicated subject, and it required hard and ingenious work to put the original treatment of Tomonaga, Schwinger, Feynman and Dyson on solid grounds. This was achieved by Bogoliubov, Parasiuk, Hepp, Zimmermann, Epstein, Glaser, Steinmann and others in a twenty-year struggle, and the finally reached state of the art is nicely documented in the proceedings of the Erice school 1975 dedicated to renormalization \cite{VW76}. Main highlights are the Forest Formula of Zimmermann \cite{Zim69} which solves the recursion relations of the Bogoliubov-Parasiuk-Hepp (BPH) method \cite{BP57,BS59,Hep66}, the causal method of Epstein-Glaser (EG) \cite{EG73}, elaborating on older attempts of St\"uckelberg \cite{SR50} and Bogoliubov \cite{BP57,BS59}, and the method of retarded products by Steinmann \cite{Ste71}.
In spite of the fact that highly nontrivial mathematical methods were used (and to some extent, invented), the theory of perturbative renormalization had, for several decades, less impact on mathematics than it deserved%
\footnote{See however the work induced by Polchinski's version \cite{Pol84} of the Wilsonian renormalization group: \cite{KKS91,KK91,KK92,KK99}.}.
This changed recently, induced by the observation of Kreimer \cite{Kre98}
that the BPH recursion relations may be understood in terms of Hopf algebras. It culminated in the Connes-Kreimer theory of renormalization \cite{CK00,CK01} and initiated a broad interest of mathematicians in perturbative quantum field theory.
In the present formulation (see, e.g., the book of Connes and Marcolli \cite{CM07}) the theory is based on the method of dimensional regularization, and on the combinatorics of Zimmermann's Forest Formula.
Dimensional regularization was invented simultaneously by Bollini and Giambiagi \cite{BG72a} and by 't Hooft and Veltman \cite{tHV72}. It relies on the fact that after parametrizing Feynman integrals by Schwinger or by Feynman parameters the momentum space integrals can be performed, and there remains an integral over the parameters whose integrand depends on the spacetime dimension. Formally, one can replace the spacetime dimension by an arbitrary complex number $d$. The resulting integral exists on a certain domain of the complex plane; moreover, it can be extended to a meromorphic function on the whole complex plane. A finite value at the physical dimension is obtained by subtracting the principal part of the Laurent series. It is, however, not a priori clear that this procedure is physically meaningful. A similar situation is present in the so-called $\zeta$-function renormalization where the deeper reasons for the spectacular successes are not well understood (see, e.g. \cite{Elizalde:1994gf}). In the case of dimensional regularization the situation was clarified by the analysis of Breitenlohner and Maison \cite{BM77a,BM77b,BM77c} who showed how the combinatorics of Zimmermann's Forest Formula can be adapted to dimensional regularization.
Dimensional regularization turned out to be very effective for practical calculations, in particular due to the fact that gauge invariance is not broken during the renormalization process. Its conceptual basis is, however, not very transparent.
Quite the contrary is true for EG renormalization. This method is based on the observation that time-ordered products of local fields can, up to coinciding points, be performed as operator products. The latter are well defined in the sense of operator valued distributions on Fock space, as shown originally by G\aa{}rding and Wightman \cite{GW64}. Thus we know, from the very beginning, the time-ordered products everywhere up to coinciding points. Then, using an induction process, we can prove that the time-ordered product of $n$ local fields is, outside the thin diagonal (where all arguments coincide), uniquely determined in terms of lower order time-ordered products. The induction step amounts to an extension of a distribution in the relative coordinates which is defined for test functions which vanish in the neighborhood of the origin, to all test functions. The latter process is ambiguous, and the ambiguities correspond to the freedom of adding finite counter terms to the interaction Lagrangean.
The nice features of EG renormalization, which in particular allow renormalization on generic globally hyperbolic spacetimes \cite{BF00,HW01,HW02} are, unfortunately, connected with the difficulty of carrying through explicit calculations. Nevertheless, quite a number of computations have been performed within this framework (see, e.g. \cite{Sch89,GraciaBondiaLazzarini2003,DF04}). There remains, however, an impression that, essentially, one needs a new idea for every new calculation.
The main purpose of this paper is to develop a method for practical calculations which is always applicable.
A similar problem arises when one tries to analyze the combinatorics of EG renormalization. There are interesting attempts in this direction, see e.g. \cite{GBL00,Pin00b,BK05,BBK09}, but the obtained picture is not yet completely satisfactory.
The method we describe in this paper is based on the Main Theorem of Renormalization. This theorem, originally formulated in an unpublished preprint by Stora and Popineau \cite{SP82}, was later generalized and improved, in particular by Pinter \cite{Pin01}. Its final version, which relies heavily on a proof of Stora's ``Action Ward Identity'' \cite{DF04,DF07}, was obtained in \cite{HW03,DF04} and was then further analyzed in \cite{BDF09}. The main statement is that the ambiguity of associating a perturbative quantum field theory to an interaction Lagrangean is described in terms of a group of formal diffeomorphisms (tangent to the identity) on the space of interaction Lagrangeans. Such a group also appears in the work of Connes and Kreimer, and one of the aims of the present paper is to understand the relations between the two frameworks.
The first insight is that, due to the Main Theorem of Renormalization, the combinatorics of finite renormalizations derives from an iterated application of the chain rule. In fact this combinatorics was investigated long ago by Fa\`a di Bruno \cite{FdB1855}, and the relation to the combinatorics observed in perturbation theory was nicely described in \cite{FGB05}.
One may now, after performing the construction of the theory by the EG procedure, introduce a regularization and ask for a renormalization group element which subtracts the counter terms in such a way that the regularized theory converges to the given theory. If the regularized theory depends meromorphically on the regularization parameter, it is clear that the principal part of the Laurent series of the regularized theory must coincide with that of the counter terms. It is now tempting to identify the counter terms with the principal part and to define a new theory corresponding to minimal subtraction.
The arising method works in an inductive way by proceeding order by order and inserting the results of the lower orders into the calculations for the next order. An obvious question is whether the result at
$n$th order can be obtained directly, as in Zimmermann's Forest Formula. We will derive such a formula in the framework of Epstein-Glaser renormalization.
Dimensional regularization in position space amounts essentially to a change of the order of the Bessel functions defining the propagators.
Such a procedure was first proposed by Bollini and Giambiagi \cite{BG96} and was also tested in several examples by \cite{GKP07}.
It can be viewed as a particular 'analytic regularization', as introduced by Speer in the context of BPHZ-renormalization
long ago \cite{Speer1971}, and applied to EG renormalization by Hollands \cite{Hollands2007}.
A different approach had been taken by Rosen and Wright \cite{RoW}: they implement dimensional regularization in $x$-space
by making replacements on the level of the position space Feynman rules. In particular, the spacetime coordinate $x$ is replaced by $X=(x,\hat{x})$, where $\hat{x}$ is a formal parameter corresponding to the ``integration over the complex dimension''. This approach, similar to the one taken by Breitenlohner and Maison \cite{BM77a}, seems to be very formal, since it is not clear if the algebraic relations postulated for the formal symbol can be fulfilled in any concrete model.
With the procedure of \cite{BG96}, which we adopt in this paper,
the regularized theory can be uniquely defined as a meromorphic function of the
regularization parameter (which is called the ``dimension'' in the physics literature). Its analyticity
property directly follows from the analytic dependence of Bessel functions on their
order. The analytic continuation to a meromorphic function with a pole at the physical value of the regularization parameter can be performed by exploiting homogeneity properties. This
appears very clearly in
{\it massless} theories.
There, the inductive Epstein-Glaser construction of time-ordered products can
be traced back to the extension of homogeneously or, for terms with divergent subdiagrams, almost
homogeneously scaling distributions $t\in\mathcal{D}^\prime(\mathbb{R}^l\setminus\{0\})$ to almost homogeneously scaling
distributions $\dot t\in\mathcal{D}^\prime(\mathbb{R}^l)$. Existence and uniqueness of such extensions
are classified in Prop.~3.3 of \cite{DF04} (related theorems, which are precursors of this proposition,
can be found in \cite[Thm.~3.2.3]{Hoer03} and \cite{HW02}):
\begin{prop}\label{alm-hom-scal}
Let $t\in \mathcal{D}^\prime(\mathbb{R}^l\setminus\{0\})$ scale
almost homogeneously with degree $\kappa\in\mathbb{C}$ and power $N\in\mathbb{N}_0$, i.e.
\begin{equation}\label{almosthomscal}
\Bigl(\sum_{r=1}^l m_{x_r}\partial_{x_r}+\kappa\Bigr)^{N+1}t=0
\end{equation}
where $N$ is the minimal natural number with this property and $m_{x_r}$ denotes multiplication by the function $x\mapsto x_r$. (For $N=0$ the scaling is homogeneous.)
Then $t$ has an extension $\dot t\in \mathcal{D}^\prime(\mathbb{R}^l)$ which also scales almost homogeneously with
degree $\kappa$ and with power $\dot N\geq N$.
\begin{itemize}
\item For $\kappa\not\in\mathbb{N}_0+l$, $\dot t$ is uniquely determined and $\dot N=N$;
\item for $\kappa\in\mathbb{N}_0+l$, $\dot t$ is non-unique and $\dot N\in\{N,\,N+1\}$.
\end{itemize}
\end{prop}
In the {\it unregularized} theory we have $\kappa\in\mathbb{Z}$, and usually there are terms with $\kappa\in\mathbb{N}_0+l$.
Then, renormalization is non-unique and homogeneous scaling may be broken by logarithmic terms. In the regularized theory, however, $\kappa\not\in\mathbb{Z}$, hence the extension is unique and always homogeneous.
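For orientation we add a standard one-dimensional illustration of the critical case (a routine computation of ours, not taken from the cited classification): let $l=1$ and $t(x)=1/|x|$ on $\mathbb{R}\setminus\{0\}$, which scales homogeneously ($N=0$) with degree $\kappa=1\in\mathbb{N}_0+l$. A particular extension is

```latex
\begin{align*}
\langle \dot t,f\rangle \doteq \int \frac{f(x)-f(0)\,\theta(1-|x|)}{|x|}\,dx\ ,
\qquad f\in\mathcal{D}(\mathbb{R})\ .
\end{align*}
% Substituting y=x/\lambda yields the anomalous scaling
\begin{align*}
\langle \dot t(\lambda\,\cdot),f\rangle
=\lambda^{-1}\bigl(\langle \dot t,f\rangle+2\log\lambda\ f(0)\bigr)\ ,
\end{align*}
% and differentiating at \lambda=1 gives, using x\delta'(x)=-\delta(x),
\begin{align*}
(m_x\partial_x+1)\,\dot t=2\,\delta\ ,\qquad (m_x\partial_x+1)^2\,\dot t=0\ .
\end{align*}
```

Hence every extension scales only almost homogeneously, with degree $\kappa=1$ and power $\dot N=N+1=1$, and the general extension $\dot t+c\,\delta$, $c\in\mathbb{C}$, exhibits the claimed non-uniqueness; the logarithm is exactly the kind of scaling breaking referred to above.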
We will show that a similar method works also for the massive case.
The calculation of principal parts can be performed in terms of integrals in the
complex plane.
If one wants to iterate the subtraction procedure one has to do these integrations independently.
This requires the ability to vary the ``dimensions'' of propagators independently. This is possible in the position
space formulation. Moreover, also the regularized propagators are distributions on
Minkowski space. In the momentum space formulation, the dimension of propagators
has to be chosen for subgraphs, and, in the case of overlapping divergences, these dimensions
cannot independently be varied.
A nice feature of dimensional regularization is that many structural properties are respected by the regularization which then are automatically satisfied by the minimally subtracted theory. This holds also for our method and includes in particular Poincar\'{e} invariance, unitarity and the validity of field equations.
However, our version of dimensional regularization does not preserve gauge invariance, because the propagators are modified and not the integration measure.
A few thoughts on how one may possibly overcome this drawback are given in the 'Conclusions and Outlook'.
\section{Functional approach and Epstein-Glaser Renormalization}\label{EG}
We restrict ourselves to the theory of a real scalar field. Let $\Ecal(\MM)$ denote the space of smooth functions on the
$d$-dimensional Minkowski space, equipped with its standard Fr\'echet topology, and consider the space of smooth maps $F:\Ecal(\MM)\to\CC$. Let us recall the definition of smoothness used in infinite dimensional calculus (see, e.g., \cite{Neeb2005}). The derivative of $F$ at $\varphi\in\Ecal(\MM)$ in the direction of $\psi$ is defined as
\begin{equation}\label{de}
F^{(1)}(\varphi)[\psi] \doteq \lim_{t\rightarrow 0}\frac{1}{t}\left(F(\varphi + t\psi) - F(\varphi)\right)\,,
\end{equation}
whenever the limit exists. $F$ is called differentiable at $\varphi$ if $F^{(1)}(\varphi)[\psi]$ exists for all $\psi\in\Ecal(\MM)$. It is called continuously differentiable in an open neighborhood $U\subset \Ecal(\MM)$ if it is differentiable at all points of $U$ and
$F^{(1)}:U\times \Ecal(\MM)\rightarrow \CC, (\varphi,\psi)\mapsto F^{(1)}(\varphi)[\psi]$
is a continuous map. It is called a $\mathcal{C}^1$-map if it is continuous and continuously differentiable. Higher derivatives are defined for $\mathcal{C}^n$-maps by
\begin{equation}
F^{(n)} (\varphi)(\psi_1 , \ldots , \psi_n ) \doteq \lim_{t\rightarrow 0}\frac{1}{t}\big(F^{(n-1)} (\varphi + t\psi_n )(\psi_1 , \ldots, \psi_{n-1} ) -
F^{(n-1)}(\varphi)(\psi_1 , \ldots, \psi_{n-1}) \big)\,.
\end{equation}
In particular, it means that if $F$ is a smooth functional on $\Ecal(\MM)$, then its $n$-th derivative at the point $\varphi\in\Ecal(\MM)$ is a compactly supported distributional density $F^{(n)}(\varphi)\in\Ecal'(\MM^n)$. There is a distinguished volume form on $\MM$, so we can use it to construct densities from functions and to provide an embedding of $\mathcal{D}(\MM^n)$
into $\Ecal'(\MM^n)$. Using the distinguished volume form we can identify derivatives $F^{(n)}(\varphi)$ with distributions.
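As a simple illustration of these definitions (our example, using only the notions introduced above), take the local functional $F(\varphi)=\int \varphi(x)^4 f(x)\,d^dx$ with a test function $f$. Then \eqref{de} gives

```latex
\begin{align*}
F^{(1)}(\varphi)[\psi]
=\lim_{t\to 0}\frac1t\bigl(F(\varphi+t\psi)-F(\varphi)\bigr)
=\int 4\,\varphi(x)^3\,\psi(x)\,f(x)\,d^dx\ ,
\end{align*}
% and, after the identification via the distinguished volume form,
\begin{align*}
F^{(1)}(\varphi)(x)=4\,\varphi(x)^3 f(x)\ ,\qquad
F^{(2)}(\varphi)(x_1,x_2)=12\,\varphi(x_1)^2 f(x_1)\,\delta(x_1-x_2)\ ,
\end{align*}
% so F^{(2)}(\varphi) is a compactly supported distribution
% concentrated on the diagonal x_1=x_2.
```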
An important property of a functional is its spacetime support. It is defined as a generalization of the distributional support, namely as the set of
points $x\in \MM$ such that $F$ depends on the field configuration in any neighborhood of $x$.
\begin{align}\label{support}
\mathrm{supp}\, F\doteq\{ & x\in \MM|\forall \text{ neighborhoods }U\text{ of }x\ \exists \varphi,\psi\in\Ecal(\MM), \mathrm{supp}\,\psi\subset U
\\ & \text{ such that }F(\varphi+\psi)\not= F(\varphi)\}\ .\nonumber
\end{align}
Here we will discuss only compactly supported functionals.
Finally we assume that the wave front set of the distribution $F^{(n)}(\varphi)$, considered as a subset of the cotangent bundle
$T^*(\MM^n)=\MM^n\times\MM^n$, does not intersect the set $\MM^n\times(V_+^n\cup V_-^n)$
where $V_{\pm}$ denotes the closed forward and backward light cone, respectively. Functionals fulfilling all the conditions listed above are called microcausal, and the space of such functionals is denoted by $\Fcal$. It contains a subspace $\Floc$, the space of local functionals, characterized by the additivity condition
\begin{equation}\label{add}
F(\varphi+\psi+\chi)=F(\varphi+\psi)-F(\psi)+F(\psi+\chi) \ \text{ if }\mathrm{supp}\,\varphi\cap\mathrm{supp}\,\chi=\emptyset
\end{equation}
(as shown in \cite{BDF09} this implies that the derivatives $F^{(n)}(\varphi)$ have support on the thin
diagonal $\Delta=\{(x,\dots,x)|x\in\MM\}$). In addition, the wave front sets of derivatives of local functionals are required to be perpendicular to the tangent bundle of $\Delta$, considered as a subset of the tangent
space of $\MM^n$. $\Floc$ contains the functionals which occur as local interactions in the EG framework, e.g.
$F(\varphi)=\int\, \varphi(x)^4f(x)d^dx$ with a test function $f$ with compact support. Finite sums of pointwise products of local functionals form a subalgebra of $\mathcal{F}$, which is called the algebra of multilocal functionals, and we denote it by $\mathcal{F}_{\mathrm{ml}}$. It was shown in \cite{BFLR} that local functionals in the above sense are of the form:
\[
F(\varphi)=\int f(x,\varphi(x),\partial\varphi(x),\dots)d^dx
\]
where $f$ depends smoothly on $x$ and, for a fixed $\varphi$, on finitely many derivatives%
\footnote{In \cite{BFLR} it was shown, with the use of the fundamental theorem of calculus, that $F(\varphi)=\int f(j_x^\infty(\varphi)) d^dx$,
where $j_x^\infty(\varphi)=(x,\varphi(x),\partial\varphi(x),\dots)$ is the infinite jet of $\varphi$ at $x$. Moreover, the functional derivatives of $F$
have compact support on the thin diagonal and are therefore finite derivatives of the $\delta$-distribution in the relative variables
(i.e.~denoting the latter by $\delta_\Delta$ it holds
$\langle\delta_\Delta,h\rangle=\int d^dx\, h(x,\dots,x)\ ,\,\, h\in\mathcal D(\mathbb M^n)$),
with coefficients which are smooth functions of $x$.} of $\varphi$ at $x$.
Dynamics is introduced along the lines of \cite{BDF09} by means of a generalized Lagrangian. It is defined as a map $L:\mathcal{D}(\MM)\rightarrow \Fcal_\mathrm{loc}$ satisfying
\begin{equation}\label{L:supp}
\mathrm{supp}(L(f))\subseteq \mathrm{supp}(f)\,,\qquad \forall\, f\in\mathcal{D}(\MM)\,,
\end{equation}
and the additivity condition \eqref{add} with respect to test functions. The action $S(L)$ is defined as an equivalence class of Lagrangians, where two Lagrangians $L_1$, $L_2$ are called equivalent, $L_1\sim L_2$, if
\begin{equation}\label{equ}
\mathrm{supp} (L_{1}-L_{2})(f)\subset\mathrm{supp}\, df\,,
\end{equation}
in particular two Lagrangians are identified if their densities differ by a total derivative.
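An elementary example of this equivalence (ours): the generalized Lagrangians

```latex
\begin{align*}
L_1(f)(\varphi)=\int f\,\partial_\mu\varphi\,\partial^\mu\varphi\ d^dx\ ,\qquad
L_2(f)(\varphi)=-\int f\,\varphi\,\Box\varphi\ d^dx
\end{align*}
% differ by an integration by parts:
\begin{align*}
(L_1-L_2)(f)(\varphi)=\int f\,\partial_\mu\bigl(\varphi\,\partial^\mu\varphi\bigr)\ d^dx
=-\int \partial_\mu f\ \varphi\,\partial^\mu\varphi\ d^dx\ ,
\end{align*}
% hence supp((L_1-L_2)(f)) \subset supp(df), i.e. L_1 ~ L_2: the densities
% differ by the total derivative \partial_\mu(\varphi\,\partial^\mu\varphi).
```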
For the free Klein-Gordon field the generalized Lagrangian is given by:
\begin{equation}\label{free:action}
L_0(f)(\varphi)=\frac{1}{2}\int (\partial_\mu\varphi\partial^\mu\varphi-m^2\varphi^2)fd^dx\,.
\end{equation}
The second functional derivative of $L_0$ induces a
linear operator, which in our case is the Klein-Gordon operator $P=\Box+m^2$. The free quantized theory is defined by means of deformation quantization of the classical Poisson structure induced by $P$ (see \cite{DF01b,DF01a,BDF09} for details). On Minkowski spacetime one can perform this deformation using the Wightman 2-point function $\Delta_+$, to define a non-commutative product
\begin{equation}\label{star:prod}
(F\star G)(\varphi)\doteq\sum\limits_{n=0}^\infty \frac{\hbar^n}{n!}\left<F^{(n)}(\varphi),\left(\Delta_+\right)^{\otimes n}G^{(n)}(\varphi)\right>\,,
\end{equation}
which is interpreted as the operator product of the free theory.
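To make the combinatorial content of \eqref{star:prod} tangible, consider a zero-dimensional toy model (our simplification, for illustration only): field configurations are numbers, functionals are polynomials in a single variable $\varphi$, and $\hbar\Delta_+$ is replaced by a constant. The $\star$-product then reduces to Wick's theorem for polynomials; a minimal sketch:

```python
from fractions import Fraction
from math import factorial

# Polynomials in phi are coefficient lists: p[k] is the coefficient of phi^k.

def p_mul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def p_add(p, q):
    n = max(len(p), len(q))
    p = p + [Fraction(0)] * (n - len(p))
    q = q + [Fraction(0)] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def p_diff(p, n=1):
    for _ in range(n):
        p = [Fraction(k) * c for k, c in enumerate(p)][1:] or [Fraction(0)]
    return p

def star(F, G, hbar_Delta=Fraction(1)):
    """Zero-dimensional star product:
    (F * G) = sum_n (hbar*Delta_+)^n / n! * F^(n) * G^(n)."""
    out = [Fraction(0)]
    for n in range(max(len(F), len(G))):
        term = p_mul(p_diff(F, n), p_diff(G, n))
        out = p_add(out, [hbar_Delta ** n / factorial(n) * c for c in term])
    return out

# phi^2 * phi^2 = phi^4 + 4 (hbar Delta) phi^2 + 2 (hbar Delta)^2,
# i.e. the coefficient list [2, 0, 4, 0, 1] at hbar*Delta = 1:
phi2 = [Fraction(0), Fraction(0), Fraction(1)]
print(star(phi2, phi2))
```

Since the constant replacing $\Delta_+$ is symmetric, this toy product is commutative, unlike \eqref{star:prod}; it keeps only the contraction combinatorics. In the full theory $\Delta_+$ is a distribution, and the wave front set condition above guarantees that the products in \eqref{star:prod} exist.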
Interaction is introduced in terms of time-ordered products. Let us first consider regular functionals, i.e. such that $F^{(n)}(\varphi)\in\mathcal{D}(\MM^n)$ for all $\varphi\in\Ecal(\MM)$, $n\in\NN$. We denote the space of such functionals by $\Fcal_\mathrm{reg}$.
Time-ordered products $\Tcal_n$, defined on $\Fcal_\mathrm{reg}[[\hbar]]$, are intertwined with the pointwise product
\begin{equation}\label{pointwise-product}
m_n(F_1\otimes\dots\otimes F_n)(\varphi)=F_1(\varphi)\dots F_n(\varphi)
\end{equation}
by the ``heat kernel''
\begin{equation}
\Tcal=e^{\frac12 D}
\end{equation}
with $D=\langle\hbar \Delta_F,\frac{\delta^2}{\delta\varphi^2}\rangle$ ($\Delta_F$ denotes
the Feynman propagator), i.e.
\begin{equation}\label{TT-unren}
\Tcal_n=\Tcal\circ m_n\circ (\Tcal^{-1})^{\otimes n} \ .
\end{equation}
Using Leibniz' rule
\begin{equation}
\frac{\delta}{\delta\varphi}\circ m_n=m_n\circ(\sum_{i=1}^n\frac{\delta}{\delta\varphi_i})
\end{equation}
(here an element of the $n$th tensor power of $\Fcal_\mathrm{reg}$ is considered as a functional of $n$ independent field configurations $\varphi_1,\dots,\varphi_n$) and the notation
\begin{equation}\label{F1}
D_{ij}=\langle\hbar\Delta_F, \frac{\delta^2}{\delta\varphi_i\delta\varphi_j}\rangle
\end{equation}
we obtain $\Tcal_n=m_n\circ T_n$, where
\begin{equation}\label{F2}
T_n=e^{\sum_{i<j}D_{ij}}=\prod_{i<j}\sum_{l_{ij}=0}^{\infty}\frac{D_{ij}^{l_{ij}}}{l_{ij}!}
\end{equation}
Note that the time-ordered product is commutative and associative. Throughout this paper we will consistently use calligraphic letters (for example $\Tcal_n$) to denote objects involving the multiplication $m_n$, while roman letters (like $T_n$) are reserved for ``bare'' expressions, where $m_n$ is not applied.
The exponential in the formula \eqref{F2}
may be expanded into a formal power series. This yields the usual graphical description for time-ordered products, since the right-hand side of \eqref{F2} may be written as a sum over all graphs $\Gamma$ with vertices $V(\Gamma)=\{1,\dots,n\}$ and $l_{ij}$ lines $e\in E(\Gamma)$ connecting the vertices $i$ and $j$.
We set $l_{ij}=l_{ji}$ for $i>j$ and $l_{ii}=0$ (no tadpoles). If $e$ connects $i$ and $j$ we set $\partial e:=\{i,j\}$.
Then we obtain
\begin{equation}\label{time:ord}
T_n=\sum_{\Gamma\in \Gcal_n}T_{\Gamma}\,,
\end{equation}
with $\Gcal_n$ the set of all graphs with vertex set $V(\Gamma)=\{1,\dots n\}$ and
\begin{equation}\label{GraphDO}
T_{\Gamma}=\frac{1}{\textrm{Sym}(\Gamma)}\langle t_{\Gamma},\delta_{\Gamma}\rangle\,,
\end{equation}
where
\[\delta_{\Gamma}=\frac{\delta^{2\,|E(\Gamma)|}}{\prod_{i\in V(\Gamma)}\prod_{e:i\in\partial e}\delta\varphi_i(x_{e,i})}\]
is a functional differential operator on $\Fcal_\mathrm{reg}^{\otimes n}$,
\begin{equation}\label{SGamma}
t_{\Gamma}=\prod_{e\in E(\Gamma)}\hbar\Delta_F(x_{e,i},i\in\partial e)
\end{equation}
and the so-called symmetry factor $\textrm{Sym}(\Gamma)$ is the number of possible permutations of lines joining
the same two vertices, $\textrm{Sym}(\Gamma)=\prod_{i<j}l_{ij}!$. We point out that in our approach the Feynman
graphs are not fundamental objects of the theory, instead they are a bijective graphical description of the
terms appearing in the exponential function \eqref{F2}, from which one can read off
the analytic expression (including the numerical prefactor) of each term/graph.
{\small \begin{example}[Graph expansion]
Regard three functionals $F,G,H\in\Fcal_\mathrm{reg}$. Their time-ordered product is given by
\begin{equation}
\Tcal_3(F\otimes G\otimes H)(\varphi)=
\sum_{k=0}^\infty\frac1{k!}\left(D_{12}+D_{23}+D_{13}\right)^k F(\varphi_1) G(\varphi_2)H(\varphi_3)
\bigg|_{\varphi_1=\varphi_2=\varphi_3=\varphi}
\end{equation}
Applying the multinomial theorem and inserting the definition for $D_{ij}$ gives
\begin{align}
\Tcal_3(F\otimes G\otimes H)(\varphi)&\nonumber\\
&\hspace{-30mm}=\sum_{k=0}^\infty\sum_{k_1+k_2+k_3=k}\frac{\hbar^k}{k_1!\,k_2!\,k_3!}\label{gexpansion}\\
&\hspace{-30mm}\qquad\left\langle (\Delta_F^{12})^{\otimes k_1}(\Delta_F^{23})^{\otimes k_2}(\Delta_F^{13})^{\otimes k_3}, F^{(k_1+k_3)}(\varphi_1) G^{(k_1+k_2)}(\varphi_2)H^{(k_2+k_3)}(\varphi_3) \right\rangle\bigg|_{\varphi_1=\varphi_2=\varphi_3=\varphi}\nonumber
\end{align}
The terms in this expression are identified with the usual Feynman graphs in the following way: we assign Feynman propagators to lines and functional derivatives of the given functionals to vertices. Formula \eqref{gexpansion} can now be represented as:
\begin{align}
&=\FGH+\hbar\left(\FoneGHF+\FGoneHF+\FGHoneF\right)\\
&\phantom{=\FGH}+\hbar^2\left[\FoneGHoneF+\FoneGoneHF+\FGoneHoneF +\frac12\left(\FtwoGHF+\FGtwoHF+\FGHtwoF\right)\right]
+\mathcal{O}(\hbar^3)\nonumber
\end{align}
In case the functionals are polynomial in the field and its derivatives, only a finite number of functional derivatives are non-vanishing. Only those graphs remain where the valence at the vertices is bounded by the degree of the associated polynomial functionals.
\end{example}}
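The prefactors appearing here generalize as follows: expanding $e^{\sum_{i<j}D_{ij}}$ to order $\hbar^k$ produces one graph for each choice of line numbers $(l_{ij})$ with $\sum_{i<j}l_{ij}=k$, with weight $1/\mathrm{Sym}(\Gamma)=\prod_{i<j}1/l_{ij}!$. The following enumeration sketch (ours, pure combinatorics) reproduces this:

```python
from itertools import combinations
from math import factorial
from fractions import Fraction

def graph_weights(n, k):
    """All graphs on vertices {1,...,n} with k lines and no tadpoles,
    encoded as dicts {(i,j): l_ij}; each comes with the weight
    1/Sym(Gamma) = prod_{i<j} 1/l_ij! of the expansion of exp(sum D_ij)."""
    pairs = list(combinations(range(1, n + 1), 2))
    def fill(idx, remaining):
        if idx == len(pairs) - 1:
            yield {pairs[idx]: remaining}
            return
        for l in range(remaining + 1):
            for rest in fill(idx + 1, remaining - l):
                yield {pairs[idx]: l, **rest}
    for lines in fill(0, k):
        weight = Fraction(1)
        for l in lines.values():
            weight /= factorial(l)
        yield lines, weight

# order hbar^2 with three vertices: three graphs with a doubled line
# (weight 1/2) and three with two single lines (weight 1):
for lines, w in graph_weights(3, 2):
    print({p: l for p, l in lines.items() if l}, w)
```

For $n=3$, $k=2$ this returns the six graphs of the example above, three with weight $1$ and three with weight $\tfrac12$; the weights sum to $3^k/k!$, the coefficient obtained from $(D_{12}+D_{23}+D_{13})^k/k!$.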
For regular functionals $F\in\mathcal{F}_{\mathrm{reg}}$, the contraction $\langle t_{\Gamma},\delta_{\Gamma}\rangle$ is well-defined since $t_{\Gamma}$ is applied to a test function in the corresponding dual space. For local functionals, however, the functional derivatives are of the form
\begin{equation}\label{dF}
F^{(k)}(\varphi)(x_1,\dots,x_k)=\sum_\beta f^{(k)}_\beta(x_{\text{cms}})\partial^{\beta}\delta(x_{\text{rel}})
\end{equation}
where $\beta\in\NN_0^{d(k-1)}$, test functions
$f^{(k)}_\beta(x)\equiv f^{(k)}_\beta(\varphi)(x)\in\mathcal{D}(\MM)$ are functions of the center of
mass coordinate $x_{\text{cms}}=(x_1+\dots+x_k)/k$
which depend on $\varphi$, and $x_{\text{rel}}=(x_1-x_{\text{cms}},\dots, x_k-x_{\text{cms}})$ denotes the relative coordinates.
Hence, the functional differential operator $\delta_\Gamma$ applied to $\mathcal{F}_{\mathrm{loc}}^{\otimes n}$ yields, at any $n$-tuple of field configurations $(\varphi_1,\dots,\varphi_n)$, a compactly supported distribution in the variables $x_{e,i},i\in\partial e, e\in E(\Gamma)$ with support on the partial diagonal $\Delta_{\Gamma}=\{x_{e,i}=x_{f,i},i\in\partial e\cap\partial f, e,f\in E(\Gamma)\}$ with a wavefront set perpendicular to $T\Delta_{\Gamma}$. Such a distribution can uniquely be written as a finite sum
\begin{equation}
f=\sum f_\beta\otimes\partial^\beta\delta\label{factorisation}
\end{equation}
where $f_{\beta}\in\mathcal{D}(\Delta_\Gamma)$ and where $\partial^\beta\delta$ (with a multi-index $\beta$) is a partial derivative of the $\delta$-distribution on the orthogonal complement of $\Delta_\Gamma$. A concrete coordinatization of $\Delta_\Gamma$ can be given by the center of mass coordinates introduced above and the coordinates on the orthogonal complement can be chosen as the relative coordinates. To obtain \eqref{factorisation}, one has to write all partial derivatives $\partial_{x_{e,i}}$ in terms of partial derivatives in $x_{\text{cms}}$ and $x_{\text{rel}}$ coordinates. The former are applied on the $\varphi$-dependent test function and produce $f_\beta$ and the latter are applied on the Dirac $\delta$ distribution.
Let $Y_{\Gamma}$ denote the vector space spanned by the distributions $\partial^\beta\delta$. $Y_\Gamma$ is graded by the order of the derivatives. The space of $Y_\Gamma$-valued test functions on $\Delta_\Gamma$ is denoted by $\mathcal{D}(\Delta_\Gamma,Y_\Gamma)$. One now has to define the action of the distribution $t_\Gamma$ as a linear functional on $\mathcal{D}(\Delta_\Gamma,Y_\Gamma)$,
\begin{equation}\label{time ordered functions}
\langle t_\Gamma,f\rangle=\sum \langle t_{\Gamma}^{\beta},f_{\beta}\rangle
\end{equation}
with numerical distributions $t_{\Gamma}^{\beta}\in\mathcal{D}'(\Delta_{\Gamma})$.
{\small\begin{example} Let $F_1=\int dx \,g(x)\,(\varphi^2(\partial\varphi)^2)(x)\ ,\,\,
F_2=\int dx \,h(x)\,\varphi^3(x)\ ,\,\,g,h\in\mathcal{D}(\MM)$,
$t_\Gamma=\hbar^2\,\Delta_F(x_{11}-x_{12})\Delta_F(x_{21}-x_{22})$ and
$\delta_\Gamma=\tfrac{\delta^4}{\delta\varphi_1(x_{11})\delta\varphi_1(x_{21})\delta\varphi_2(x_{12})\delta\varphi_2(x_{22})}$.
Then,
\begin{align*}
f=&\delta_\Gamma(F_1(\varphi_1)F_2(\varphi_2))=\int dx_1 \,g(x_1)\int dx_2 \,h(x_2)\,6\,\varphi_2(x_2)\,\delta(x_{12}-x_2)\delta(x_{22}-x_2)\\
&\Bigl(2\,(\partial\varphi_1)^2(x_1)\,\delta(x_{11}-x_1)\delta(x_{21}-x_1)+2\,\varphi_1^2(x_1)\,
\partial\delta(x_{11}-x_1)\partial\delta(x_{21}-x_1)\\
&-4\,(\varphi_1\partial\varphi_1)(x_1)\bigl(\delta(x_{11}-x_1)\partial\delta(x_{21}-x_1)+(x_{11}\leftrightarrow x_{21})\bigr)\Bigr)
\end{align*}
and with that we obtain
\begin{align*}
\langle t_\Gamma,&f\rangle=\hbar^2\,\int dx_1 \,g(x_1)\int dx_2 \,h(x_2)\,\Bigl(
12\,[(\Delta_F(x_1-x_2))^2]^\mathbf{\cdot}\,(\partial\varphi_1)^2(x_1)\,\varphi_2(x_2)+\\
&12\,[(\partial\Delta_F(x_1-x_2))^2]^\mathbf{\cdot}\,\varphi_1^2(x_1)\,\varphi_2(x_2)+
48\,[\Delta_F(x_1-x_2)\partial\Delta_F(x_1-x_2)]^\mathbf{\cdot}\,\varphi_1(x_1)\partial\varphi_1(x_1)\,\varphi_2(x_2)\Bigr)\ .
\end{align*}
Hence, modulo constant prefactors,
the appearing numerical distributions $t^\beta_\Gamma$ are
the extensions (denoted by $[\cdots]^\mathbf{\cdot}$)
of $(\Delta_F(x_1-x_2))^2$, $(\partial\Delta_F(x_1-x_2))^2$
and $\Delta_F(x_1-x_2)\partial\Delta_F(x_1-x_2)$, respectively,
from $\mathcal{D}'(\MM^2\setminus\Delta_2)$ to $\mathcal{D}'(\MM^2)$ (where
$\Delta_2$ is the diagonal $x_1=x_2$).
\end{example}}
The method of Epstein and Glaser proceeds by induction. One proves that if
$t_{\Gamma'}$ is known for all graphs $\Gamma'$ with fewer vertices than $\Gamma$, then $t_\Gamma$ can be uniquely defined for all disconnected, all connected one particle reducible and all one particle irreducible one vertex reducible graphs. For the remaining graphs (which we call EG-irreducible) one can define it uniquely on all distributions $f\in\mathcal{D}(\Delta_{\Gamma},Y_\Gamma)$ of the form above where $f_\beta$ vanishes together with all its derivatives of order $\leq\omega_\Gamma+|\beta|$ on the thin diagonal of $\Delta_\Gamma$. Here
\[\omega_\Gamma=(d-2)|E(\Gamma)|-d(|V(\Gamma)|-1)\]
is the degree of divergence of the graph $\Gamma$.
We denote this subspace by $\mathcal{D}_{\omega_{\Gamma}}(\Delta_\Gamma,Y_\Gamma)$.
Renormalization then amounts to projecting a generic $f$ onto this subspace by a translation invariant projection $W_{\Gamma}:\mathcal{D}(\Delta_\Gamma,Y_\Gamma)\to\mathcal{D}_{\omega_\Gamma}(\Delta_\Gamma,Y_\Gamma)$. Different renormalizations differ by different choices of the projections $W_\Gamma$.
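To fix intuition about this power counting (a remark of ours, not part of the original construction): graphs with $\omega_\Gamma\geq 0$ are the superficially divergent ones, for which the projections $W_\Gamma$ act nontrivially. The formula is easy to tabulate:

```python
def omega(d, n_edges, n_vertices):
    """Degree of divergence omega_Gamma = (d-2)|E(Gamma)| - d(|V(Gamma)|-1)."""
    return (d - 2) * n_edges - d * (n_vertices - 1)

# At d = 4: two vertices joined by two propagator lines (the graph of the
# preceding example, the "fish") give a logarithmically divergent t_Gamma,
print(omega(4, 2, 2))   # 0
# two vertices joined by three lines (the "sunset") a quadratically divergent one,
print(omega(4, 3, 2))   # 2
# and a triangle (3 vertices, 3 lines) a convergent one.
print(omega(4, 3, 3))   # -2
```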
The ambiguity is best described in terms of the Main Theorem of Renormalization \cite{SP82,Pin01,DF04,BDF09}.
Let the formal S-matrix be defined as the generating functional of time-ordered products, formally given by
\begin{equation}
\mathcal{S}=\Tcal\circ\exp\circ \Tcal^{-1}\ ,
\end{equation}
i.e., its $n$th derivative at zero, $\Tcal_n\equiv \mathcal{S}^{(n)}(0)$, as a linear map
$\Tcal_n:\Floc[[\hbar]]^{\otimes n}\to\Fcal[[\hbar]]$ is the (renormalized) time-ordered $n$-fold product.
Given two $S$-matrices $\mathcal{S}$ and $\hat{\mathcal{S}}$ fulfilling the Epstein-Glaser axioms, there exists
a unique analytic map $\mathcal{Z}:\Floc[[\hbar]]\to\Floc[[\hbar]]$ with $\mathcal{Z}(0)=0$ such that
\begin{equation}
\hat{\mathcal{S}}=\mathcal{S}\circ \mathcal{Z}\ .
\end{equation}
To first order, this relation gives\footnote{Similarly to $\mathcal{S}^{(n)}$ we write $\mathcal{Z}^{(n)}$ for $\mathcal{Z}^{(n)}(0)$.}
$\mathcal{Z}^{(1)}=\mathrm{id}$.
The maps $\mathcal{Z}$ relating different $S$-matrices in this way form the renormalization group
$\mathscr{R}$ in the sense of St\"uckelberg and Petermann. It is a subset of the group of formal diffeomorphisms
(tangent to the identity) on the space of interactions. A direct definition
of $\mathscr{R}$ by the properties of the maps $\mathcal{Z}$
is given in \cite{DF04,BDF09}: $\mathscr{R}$ has the structure of an affine space,
\begin{equation}\label{R-affine}
\mathscr{R}=\mathrm{id}+\hbar\,\mathscr{V}[[\hbar]]\ ,
\end{equation}
where $\mathscr{V}[[\hbar]]$ is a {\it vector space} of formal power series $\mathcal{V}=\sum_{n=0}^\infty \mathcal{V}_n\,\hbar^n$, which are analytic maps
$\mathcal{V}:\Floc[[\hbar]]\to\Floc[[\hbar]]$. The main defining properties of elements of $\mathscr{V}$ are $\mathcal{V}(0)=0\ ,\,\,\mathcal{V}^{(1)}(0)=0$ and {\it locality},
\begin{equation}\label{R-locality}
\mathcal{V}(F+G+H)=\mathcal{V}(F+G)-\mathcal{V}(G)+\mathcal{V}(G+H)\quad\text{if}\quad \mathrm{supp}\,F\cap\mathrm{supp}\,H=\emptyset\ ,\quad\forall\ \mathcal{V}\in\mathscr{V}\ .
\end{equation}
To show that $\mathscr{R}$ is indeed a group, one needs additionally the property (proved in \cite{DF04}) that,
given an S-matrix $\mathcal{S}$ and $\mathcal{Z}\in \mathscr{R}$, the composition $\hat{\mathcal{S}}:=\mathcal{S}\circ\mathcal{Z}$ satisfies
also the Epstein-Glaser axioms.
One of the great virtues of the Epstein-Glaser approach is that it does not involve any divergences, and that it is explicitly independent of any regularization prescription. It can therefore be used for an \textit{a priori} definition of the problem of the perturbative construction of quantum field theory which then is solved by the method of renormalization. In other schemes usually only an \textit{a posteriori} definition is possible, and the independence of the construction from the chosen method relies on a comparison with other methods.
We have just outlined how to define the $n$-fold time-ordered products (i.e. multilinear maps $\Tcal_n$) by the procedure of Epstein and Glaser. An interesting question is whether the renormalized time-ordered product defined by such a sequence of multilinear maps can be understood as an iterated binary product on a suitable domain.
Recently it was proven in \cite{FRb} that this is indeed the case. The crucial observation is that
pointwise multiplication of local functionals is injective. More precisely, let $\Fcal_0$ be the set of local functionals vanishing at some distinguished field configuration (say $\varphi=0$). Iterated multiplication $m$ is then a linear map from the symmetric Fock space over $\Fcal_0$ onto the space $\Fcal_{\mathrm{ml}}$ of multilocal functionals. Then the following assertion holds:
\begin{prop}\label{beta}The multiplication $m:S^\bullet\Fcal_0\to\Fcal_{{\mathrm{ml}}}$ is bijective (where $S^k$ denotes the symmetrised tensor product of vector spaces).
\end{prop}
Let $\beta=m^{-1}$. We now define the renormalized time-ordering operator on the space of multilocal functionals $\Fcal_{{\mathrm{ml}}}$ by
\begin{equation}\label{T-ren}
\Tcal_\mathrm{ren}:=(\bigoplus_n \Tcal_n)\circ\beta\ .
\end{equation}
This operator is a formal power series in $\hbar$ starting with the identity, hence it is injective.
The corresponding binary product $\cdot_{{}^\Tcal}$ is now defined on the image of $\Tcal_\mathrm{ren}$ by
\begin{equation}\label{Tprod}
F\cdot_{{}^\Tcal} G\doteq \Tcal_\mathrm{ren}(\Tcal_\mathrm{ren}^{-1}F\cdot \Tcal_\mathrm{ren}^{-1}G)\ .
\end{equation}
This product is equivalent to the pointwise product and, hence, it
is in particular associative and commutative. Moreover, the $n$-fold iteration
of the binary product $\cdot_{{}^\Tcal}$ applied to local functionals
coincides with the linear
map $\Tcal_n$ defined by the Epstein-Glaser procedure.
We may now use the St\"uckelberg-Petermann group $\mathscr{R}$ in order to establish a relation between the renormalized and the
regularized S-matrix. Let $\Delta_F^{\Lambda}$ be a regularized Feynman propagator, and let the upper index $\Lambda$
indicate that in the formal construction the regularized propagator was used. A regularization should satisfy the
condition that all the expressions for time-ordered products become meaningful for local functionals
(still in the sense of formal power series in $\hbar$),
and that the regularized propagators converge in the sense of the H\"ormander topology on distributions on
$\RR^d\setminus\{0\}$ with the appropriate wave front sets and microlocal scaling degrees. The former condition is surely
satisfied if $\Delta_F^{\Lambda}$ is a smooth function of rapid decrease.
\section{Analytic regularization, Minimal Subtraction and Forest Formula}\label{sec:forest-MS}
Let $\mathcal{S}^{\Lambda}$ be the regularized S-matrix constructed from $\Delta_F^{\Lambda}$, more precisely $\mathcal{S}^{\Lambda}$ is the formal power series
$$
\mathcal{S}^{\Lambda}:=1+\mathrm{id}+\sum_{n=2}^\infty\frac{1}{n!}\,m_n\circ \exp\sum_{1\leq i<j\leq n}D^\Lambda_{ij}\ ,\quad
D^\Lambda_{ij}:=\langle\hbar\Delta^\Lambda_F, \frac{\delta^2}{\delta\varphi_i\delta\varphi_j}\rangle\ .
$$
To relate the construction of the S-matrix $\mathcal{S}$ of Epstein-Glaser to the method of divergent counter terms, we search for a
family of renormalization group elements $\mathcal{Z}^{\Lambda}\in\mathscr{R}$ such that
\begin{equation}\label{counter terms}
\mathcal{S}=\lim_{\Lambda\to\Lambda_0}\mathcal{S}^{\Lambda}\circ \mathcal{Z}^{\Lambda} \ .
\end{equation}
If $\mathcal{S}$ is given, then such a family $(\mathcal{Z}^{\Lambda})$ exists, and it
is uniquely determined up to a sequence which converges to the identity (see Appendix \ref{app:regularization} and \cite[Sect.~5.2]{BDF09}).
A special role is played by analytic regularization schemes where $\mathcal{S}^{\Lambda}$ is a meromorphic function of
$\Lambda\in\CC$ with a pole at the limit point $\Lambda_0$. In these cases there exists a distinguished
choice $\mathcal{S}_{\MS}$ of the S-matrix (\emph{minimal subtraction}) and the corresponding family of renormalization group elements $\mathcal{Z}_{\MS}^{\Lambda}\in\mathscr{R}$. To construct these objects we start with the family $(\mathcal{Z}^{\Lambda})$ of meromorphic functions and we prove the so called Birkhoff decomposition (see \cite{Connes1999,CK00}, where such notion was first introduced in the context of renormalization):
\[
\mathcal{Z}^{\Lambda}=\mathcal{Z}^{\Lambda}_{\MS}\circ \mathcal{Z}_f^{\Lambda}\,,
\]
where $\mathcal{Z}_f^{\Lambda}$ is regular and $\mathcal{Z}^{\Lambda}_{\MS}$ corresponds to subtracting the principal part. We prove this by induction. Consider the $n$-th functional derivative ${\mathcal{Z}^{\Lambda}}^{(n)}$ and assume that for $k<n$ we have already constructed ${\mathcal{Z}_f^{\Lambda}}^{(k)}$ and ${\mathcal{Z}^{\Lambda}_{\MS}}^{(k)}$ in such a way that, for $l<n$, the chain rule (Fa\`a di Bruno formula)
$${\mathcal{Z}^{\Lambda}}^{(l)}=\sum_{P\in\mathrm{Part}(\{1,\dots,l\})}(\mathcal{Z}^{\Lambda}_{\MS})^{(|P|)}\circ\bigotimes_{I\in P} (\mathcal{Z}^{\Lambda}_f)^{(|I|)}\quad\textrm{holds.}$$
The function
\[
{\mathcal{Z}^{\Lambda}}^{(n)}-\sum_{P\in\mathrm{Part}(\{1,\dots,n\})\atop 1<|P|<n
}(\mathcal{Z}^{\Lambda}_{\MS})^{(|P|)}\circ\bigotimes_{I\in P} (\mathcal{Z}^{\Lambda}_f)^{(|I|)}
\]
is meromorphic, so we can decompose it as a sum of the principal part, which we call ${\mathcal{Z}^{\Lambda}_{\MS}}^{(n)}$, and the rest term ${\mathcal{Z}_f^{\Lambda}}^{(n)}$. This way we construct the $n$-th order derivative of $\mathcal{Z}^{\Lambda}_{\MS}$ and we can proceed by induction. Using that $\mathcal{Z}^{\Lambda}\in\mathscr{R}$ one easily sees that $\mathcal{Z}^{\Lambda}_{\MS}$ satisfies \eqref{R-locality} for $G=0$. This implies the general case since all the quantities are formal power series (see Appendix B of \cite{BDF09}). It follows that $\mathcal{Z}^{\Lambda}_{\MS}$ and $\mathcal{Z}^\Lambda_f$ obtained by the above construction are elements of the St\"uckelberg-Petermann group $\mathscr{R}$.
By construction, $\mathcal{Z}_f^{\Lambda}$ has a well defined limit as $\Lambda$ approaches $\Lambda_0$ and, since it is invertible, we can define $\mathcal{S}_{\MS}^\Lambda:= \mathcal{S}^\Lambda\circ \mathcal{Z}^\Lambda\circ{\mathcal{Z}_f^{\Lambda}}^{-1}=\mathcal{S}^\Lambda\circ \mathcal{Z}_{\MS}^\Lambda$ and this expression also has a well defined limit,
\[
\mathcal{S}_{\MS}:= \lim\limits_{\Lambda\rightarrow \Lambda_0}\mathcal{S}_{\MS}^\Lambda\,.
\]
It can be expressed as $\mathcal{S}_{\MS}=\mathcal{S}\circ \mathcal{Z}_f^{-1}$, where $\mathcal{Z}_f:=\lim\limits_{\Lambda\rightarrow \Lambda_0}\mathcal{Z}^\Lambda_f$ and, because $\mathcal{Z}_f$ is an element of $\mathscr{R}$, $\mathcal{S}_{\MS}$ is an S-matrix fulfilling the Epstein-Glaser axioms.
It is the generating functional for minimally subtracted time-ordered products ($MS$ scheme).
We will now derive a useful recursive formula for $\mathcal{Z}^{\Lambda}_{\MS}$. Consider the functional derivative
\begin{equation}\label{FaadiBruno}
(\mathcal{S}_{\MS}^\Lambda)^{(n)}=(\mathcal{Z}_{\MS}^\Lambda)^{(n)}+\sum_{P\in\mathrm{Part}(\{1,\dots,n\})\atop 1<|P|}(\mathcal{S}^{\Lambda})^{(|P|)}\left(\bigotimes_{I\in P} (\mathcal{Z}_{\MS}^{\Lambda})^{(|I|)}\right)\,.
\end{equation}
Since $(\mathcal{S}_{\MS}^\Lambda)^{(n)}$ converges for $\Lambda\rightarrow \Lambda_0$, the principal parts of the summands above must add up to zero, so we obtain the recursion
\begin{equation}\label{recursion}
(\mathcal{Z}^{\Lambda}_{\MS})^{(n)}=-\pp
\sum_{|P|>1}(\mathcal{S}^{\Lambda})^{(|P|)}\left(\bigotimes_{I\in P} (\mathcal{Z}^{\Lambda}_{\MS})^{(|I|)}\right)
\end{equation}
together with $(\mathcal{Z}^{\Lambda}_{\MS})^{(1)}=\mathrm{id}$.
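The combinatorics behind the Fa\`a di Bruno sums in \eqref{FaadiBruno} and \eqref{recursion} can be made concrete with a small illustrative sketch (Python, not part of the formal development): it enumerates the set partitions of $\{1,\dots,n\}$; for $n=4$ there are $15$ of them (the Bell number $B_4$), of which $14$ satisfy $|P|>1$ and hence contribute to the recursion.

```python
def set_partitions(elements):
    """Enumerate all partitions of `elements` into non-empty blocks."""
    elements = list(elements)
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for partition in set_partitions(rest):
        # insert `first` into each existing block ...
        for i in range(len(partition)):
            yield partition[:i] + [[first] + partition[i]] + partition[i + 1:]
        # ... or open a new block {first}
        yield [[first]] + partition

parts = list(set_partitions([1, 2, 3, 4]))
bell_4 = len(parts)                        # Bell number B_4
coarse = [P for P in parts if len(P) > 1]  # partitions entering the |P|>1 sum
```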
One may now solve the recursive definition of the minimally subtracted S-matrix in terms of an
analogue of Zimmermann's Forest Formula.
We define an Epstein-Glaser forest $F=\{I_1,\dots,I_k\}\in \mathfrak{F}_{\bar n}$
to be a set of subsets $I_j\subset\bar n:=\{1,\dots,n\}$
which contain at least two elements, $|I_j|\geq 2$, and which satisfy
\begin{equation}
I_i\cap I_j=\emptyset\quad\text{or}\quad I_i\subset I_j \quad\text{or}\quad
I_j\subset I_i\quad\forall i<j\ .\nonumber
\end{equation}
The empty set of subsets is referred to as the empty forest. We assume that we can vary the regularization
parameters $\Lambda_{ij}$ independently for every pair of indices $1\le i<j \le n$ such that the regularized
time-ordered product is a well defined meromorphic function in all these variables\footnote{For the definition
of a meromorphic function of several variables, see for example \cite{Lang}.}.
More precisely, we
assume that in the graph expansion every distribution $t_{\Gamma}^{\beta}$ (see \eqref{time ordered functions})
is, after evaluation on a test function, an analytic function on a suitable domain which can be extended to a meromorphic function on a domain containing $\{\Lambda_{ij}=\Lambda_0, 1\le i<j\le n\}$.
Now, given a forest
$F\in \mathfrak{F}_{\bar n}$, we reduce the number of parameters $\Lambda_{ij}$ as follows:
for each $I\in F$ we set $\Lambda_{ij}=\Lambda_I$ for all $i,j\in I$.
Let $R_I$ be $(-1)$ times the projection onto the principal part with respect to the variable $\Lambda_I$.
We then obtain the EG Forest Formula
\begin{thm}\label{forest}
\begin{equation}\label{EGforest}
\mathcal{S}_{\MS}^{(n)}=\lim_{{\bf \Lambda}\to {\bf \Lambda}_0}
m_n\circ \Bigl(\sum_{F\in\mathfrak{F}_{\bar n}}\prod_{I\in F}R_I\Bigr)
\exp{\sum_{1\leq i<j\leq n}D^{\bf \Lambda}_{ij}}
\quad\quad({\bf \Lambda}\equiv(\Lambda_{ij})_{1\leq i<j\leq n})\ ,
\end{equation}
where as in Zimmermann's formula $R_I$ has to be applied before $R_J$ if $I\subset J$. The expression $\exp{\sum_{1\leq i<j\leq n}D^{\bf \Lambda}_{ij}}$ has to be understood as a meromorphic function obtained, term by term, by analytic continuation from the region where it exists due to sufficient regularity of the modified Feynman propagators.
\end{thm}
\begin{proof} We omit the index ${\bf \Lambda}$ belonging to each $\mathcal{Z}$ and each differential operator $D$,
since it is inessential to the proof. Let us define a full forest as a forest containing the set $\{1,\dots,n\}$,
and let $\mathfrak{F}_{\bar n}^{\text{full}}$ denote the set of full forests.
We set $\mathcal{Z}^{(1)}:=\mathrm{id}$ and
\begin{equation}\label{counter}
\mathcal{Z}^{(n)}:=m_n\circ \Bigl(\sum_{F\in \mathfrak{F}_{\bar n}^{\text{full}}}\prod_{I\in F}R_I\Bigr)\exp{\sum_{i<j}D_{ij}}\ ,
\end{equation}
and we verify that it satisfies the recursion relation \eqref{recursion},
i.e. $\mathcal{Z}^{(n)}=\mathcal{Z}_{\MS}^{(n)}$.
In order to include the case $n=1$ into the formula \eqref{counter} we define
$\mathfrak{F}_{\{1\}}^{\mathrm{full}}=\{\{1\}\}$ and we adopt the convention that $R_I=\mathrm{id}$ if $I$ contains only one element.
We proceed by induction. For $n=2$ the only full forest is $\{\{1,2\}\}$, hence
\begin{equation}
\mathcal{Z}^{(2)}=m_2\circ R_{\{1,2\}}\exp{D_{12}}
\end{equation}
in agreement with the definition of minimal subtraction \eqref{recursion} in second order.
For $k>2$ we now assume that $\mathcal{Z}^{(n)}=\mathcal{Z}_{\MS}^{(n)}$ for all $n<k$.
Let $F\in\mathfrak{F}_{\bar k}^{\mathrm{full}}$ be a full forest. Then there exists a partition $P$ of $\bar k$ such that
\begin{equation}\label{ff}
F=\{\{1,\dots,k\}\}\cup\bigcup_{L\in P}F_L
\end{equation}
with full forests $F_L\in\mathfrak{F}_{L}^{\mathrm{full}}$, and $P$ and $F_L$ are uniquely determined. Vice versa, given a partition $P$ and full forests $F_L$, $L\in P$,
equation \eqref{ff} defines a full forest.
Using Leibniz' rule and the associativity of the pointwise product, we find for a partition $P$ of ${\bar k}$
\begin{equation}\label{as}
m_{|P|}\circ\exp{\sum_{I<J\in P}D_{IJ}}
\left(\bigotimes_{L\in P}m_{|L|}\circ
\exp{\sum_{i<j\in L}D_{ij}}\right)=
m_k\circ\exp{\sum_{i<j}D_{ij}}
\end{equation}
where $I<J\in P$ means that $I,J\in P$ and the smallest element of $I$ is smaller than the smallest element of $J$.
This formula, applied to local functionals $F_1,\dots,F_k$, holds on a suitable domain in the deformation parameters and, by analytic continuation, everywhere as an identity for meromorphic functions.
We now insert the decomposition of a full forest \eqref{ff} into equation \eqref{counter} and use the identity \eqref{as}.
We find
\begin{equation}\label{Zkmin}
\mathcal{Z}^{(k)}=R_{\{1,\dots,k\}}\sum_{|P|>1}m_{|P|}\circ\exp{\sum_{I<J\in P}D_{IJ}}
\left(\bigotimes_{L\in P}m_{|L|}\circ\Bigl(\sum_{F\in \mathfrak{F}_L^{\text{full}}}\prod_{M\in F}R_M
\Bigr)\exp{\sum_{i<j\in L}D_{ij}}\right)
\end{equation}
where we used the fact that the operation $R_M$ of taking the principal part involves only the variables $\Lambda_{ij}$ with $i,j\in M$.
But \eqref{Zkmin} is just the recursion relation which defines the minimal subtraction.
In the last step we insert the formula \eqref{counter} into the Fa\`a di Bruno formula \eqref{FaadiBruno} and
repeat the calculation
with the modifications that $R_{\{1,\dots,k\}}$ and the restriction $|P|>1$
are omitted. As a result we obtain \eqref{EGforest}.
\end{proof}
{\small \begin{example}[Forest Formula for a particular graph]
The Forest Formula (\ref{EGforest}) can be broken down to the renormalization of individual graphs. As an example we consider the following overlapping divergence in $\varphi^4$-theory in 4 dimensions,
\begin{equation}
G=\Catseye.
\end{equation}
It is a contribution to the self-energy at fourth order of causal perturbation theory. Introducing an (irrelevant) numbering of the vertices, the corresponding differential operator for the graph is:
\begin{equation}
m_4\circ \left[D_{12}D_{13}D_{14}D_{23}^2D_{24}D_{34}\right]
\end{equation}
Next we write down the basic subsets, from which the EG forests for any four point graph are built:
\begin{equation}
\{1,2\},\{1,3\},\{1,4\},\{2,3\},\{2,4\},\{3,4\},
\{1,2,3\},\{1,3,4\},\{1,2,4\},\{2,3,4\},
\{1,2,3,4\}.
\end{equation}
Only some of these subsets correspond to divergent subgraphs of $G$:
\begin{equation}\label{div-graphs}
\{2,3\},
\{1,2,3\},\{2,3,4\},
\{1,2,3,4\}.
\end{equation}
Thus, the relevant forests are
\begin{equation}\nonumber
\begin{array}{cccccc}
\{\}, &\{23\}, &\{123\}, &\{234\}, &\{23,123\}, &\{23,234\},\\
\{1234\},&\{23,1234\},&\{123,1234\},&\{234,1234\},&\{23,123,1234\},&\{23,234,1234\},
\end{array}
\end{equation}
where we wrote the normal forests in the first, and the corresponding full forests in the second line. The Forest Formula thus yields
\begin{align*}
G_\MS
&= m_4\circ [1+R_{23}+R_{123}+R_{234}+R_{123}R_{23}+R_{234}R_{23}+R_{1234}+R_{1234}R_{23}\\
&\phantom{==m_4\circ}+R_{1234}R_{123}+R_{1234}R_{234}+R_{1234}R_{123}R_{23}+R_{1234}R_{234}R_{23}]\;G\\
&= m_4\circ(1+R_{1234})(1+R_{123}+R_{234})(1+R_{23})\left[D_{12}D_{13}D_{14}D_{23}^2D_{24}D_{34}\right].
\end{align*}
The second equality, i.e.~that $\sum_{F}\prod_{I\in F}R_I$ can be written as $\prod(1+\sum R_I)$, is a peculiarity of this example,
which is due to the fact that each divergent subgraph \eqref{div-graphs} is a subgraph of all divergent (sub)graphs
of higher orders.
\end{example}}
Simpler examples, for which we will compute the projections $R_I$, are \ref{exp:triangle1}
and \ref{thm:doubletriangle}.
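The counting in the preceding example can be checked mechanically. The following sketch (Python; the helper names are ours) enumerates all subsets of the divergent vertex sets \eqref{div-graphs} in which any two members are nested or disjoint, and recovers the $12$ forests listed above, $6$ of them full.

```python
from itertools import combinations

def is_forest(sets):
    """A family of vertex sets is an EG forest iff any two members
    are disjoint or nested."""
    return all(a.isdisjoint(b) or a <= b or b <= a
               for a, b in combinations(sets, 2))

# divergent subgraphs of the cat's-eye graph, cf. the example above
divergent = [frozenset(s) for s in ({2, 3}, {1, 2, 3}, {2, 3, 4}, {1, 2, 3, 4})]

forests = [F for r in range(len(divergent) + 1)
           for F in combinations(divergent, r) if is_forest(F)]
full_forests = [F for F in forests if frozenset({1, 2, 3, 4}) in F]
```

The only excluded combinations are those containing both $\{1,2,3\}$ and $\{2,3,4\}$, which overlap without being nested.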
\section{Dimensional regularization in position space}\label{sec:dimreg}
To perform the analytic regularization required for minimal subtraction and the EG Forest Formula we have to find a distribution-valued analytic function $\zeta\mapsto \Delta_F^{\zeta}$ with the following properties: for $\zeta =0$ the distribution should coincide with the Feynman propagator, the wave front set of $\Delta_F^{\zeta}$ should always be contained in the wave front set of $\Delta_F$,
and the scaling degree \eqref{sd} of $\Delta_F^{\zeta}$, modulo smooth functions, should tend to $-\infty$ as the real part of $\zeta$ approaches infinity. Under these conditions each term in the graph expansion of the unrenormalized S-matrix is a well defined analytic function for suitable values of $\zeta$. One then has to perform an analytic extension to a meromorphic function with a pole at $\zeta =0$. For the Forest Formula we also require that this extension can be done individually for every propagator associated to a pair of vertices. Under these conditions minimal subtraction is well defined and the EG Forest Formula yields a closed expression. Depending on the choice of the analytic regularization the minimally subtracted S-matrix may automatically satisfy further conditions (e.g. Lorentz invariance, unitarity etc.).
In the following we concentrate on dimensional regularization. According to Bollini and Giambiaggi \cite{BG96} dimensional regularization in position space essentially amounts, in the massive case, to a change in the index of the Bessel function appearing in the formula for the Feynman propagator. We have in $d$ dimensions
\[\Delta_F(x)=\lim_{\epsilon\searrow 0}\mathpzc{w}^d(x^2-i\epsilon)\]
where
\[\mathpzc{w}^d(z^2)=(2\pi)^{-\frac{d}{2}}m^{\frac{d}{2}-1}\left(\sqrt{-z^2}\right)^{1-\frac{d}{2}}K_{\frac{d}{2}-1}(m \sqrt{-z^2})\]
with the modified Bessel function of the second kind $K_\nu$ with index $\nu=d/2-1$. The massless case is obtained as the limit $m\downarrow 0$. The dimensionally regularized Feynman propagator is obtained by replacing $d$ by $d-2\zeta$ and, in order to keep the mass dimension constant, by multiplication with a factor $\mu^{2\zeta}$ with a mass parameter $\mu> 0$,
\begin{equation}\label{regFeynprop}
\Delta^{\zeta}_F(x)=\mu^{2\zeta}\lim_{\epsilon\searrow 0}\mathpzc{w}^{d-2\zeta}(x^2-i\epsilon)\ .
\end{equation}
Since $x^{-\nu} K_\nu(x)$ is analytic in the cut plane $\CC\setminus\RR^0_-$
irrespective of the value of $\nu\in\CC$, the function $\mathpzc{w}^{d-2\zeta}$ is
analytic on the complement of the positive real axis.
Therefore $\Delta^{\zeta}_F(x)$ is, for all values of $\zeta$, the boundary value of an
analytic function on the complexified Minkowski space with domain
$\{x+iy|(x+iy)^2\not\in\mathbb{R}_-\}$, as for timelike $x$
the imaginary part $y$ approaches zero from the backward light cone if $x^0>0$ and
from the forward light cone if $x^0<0$.
By \cite[Thm.~8.1.6]{Hoer03} the analyticity domain of a function determines the wave front
set of its boundary values, hence $\WF(\Delta_F^\zeta)\subset\WF(\Delta_F)$.
Moreover, the scaling degree of $\Delta_F^\zeta$ is $\mathrm{max}\{d-2-2\Re\,\zeta,\,0\}$,
as may be seen from the behavior of the Bessel function at the origin.
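The stated scaling degree can also be checked numerically (an illustrative sketch, not part of the argument; $K_\nu$ is evaluated through its standard integral representation $K_\nu(x)=\int_0^\infty e^{-x\cosh t}\cosh(\nu t)\,dt$, and all constant prefactors are dropped): the log-log slope of the radial profile of $\Delta_F^\zeta$ near the origin is $-(d-2-2\Re\,\zeta)$.

```python
import math

def besselk(nu, x, tmax=30.0, n=6000):
    """K_nu(x) via K_nu(x) = int_0^inf exp(-x cosh t) cosh(nu t) dt
    (composite Simpson; adequate for the moderate accuracy needed here)."""
    h = tmax / n
    g = lambda t: math.exp(-x * math.cosh(t)) * math.cosh(nu * t)
    s = g(0.0) + g(tmax)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(i * h)
    return s * h / 3.0

d, zeta, m = 4, 0.3, 1.0
D = d - 2 * zeta                 # shifted dimension d - 2*zeta

def w(r):
    # radial profile of the regularized propagator, constants dropped
    return m ** (D / 2 - 1) * r ** (1 - D / 2) * besselk(D / 2 - 1, m * r)

# near r = 0:  w(r) ~ const * r^(2 - D), i.e. scaling degree d - 2 - 2*zeta
r1, r2 = 1e-2, 1e-3
slope = math.log(w(r1) / w(r2)) / math.log(r1 / r2)
```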
We now define the terms of the regularized S-matrix for suitable values of $\zeta$. Using the information on the scaling degrees as well as on the wave front sets we can proceed in the usual way for the Epstein-Glaser induction and find, at every step, a unique expression which has the correct wave front sets and analyticity properties. It remains to construct the analytic continuations.
For this purpose we use the fact that $\Delta_F^\zeta$ can be written as a sum of homogeneous distributions and a rest term with sufficiently small scaling degree, where every term is analytic in $\zeta$. Every contribution in the expansion is then a product of homogeneous distributions and sufficiently well behaved functions. But for non-integer degree of homogeneity, homogeneous distributions have unique homogeneous extensions. This extension is analytic in $\zeta$, hence the whole expression is analytic for non-integer values of $\zeta$. In case one uses different values $\zeta_{ij}$ for the individual propagators, the degree of homogeneity differs from its value at $\zeta_{ij}=0$ by
a certain linear combination $\sum_{i<j}l_{ij}\zeta_{ij}$ of the $\zeta$-variables, and one may find poles at points where these degrees are integers.
The choice of dimensional regularization has further nice properties. First of all, since the regularized propagators are Lorentz invariant, one automatically obtains a Lorentz invariant S-matrix. Moreover, due to $\overline{\Delta_F^\zeta(x)}=\Delta_{AF}^{\overline{\zeta}}(x)$ (where $\Delta_{AF}^{\zeta}$ is the regularized anti-Feynman
propagator which is obtained from \eqref{regFeynprop} by reversing the sign of $\epsilon$),
the S-matrix is unitary. Finally, the condition that the field equation holds is due to the fact that $\Delta_F^\zeta$ is analytic at $\zeta=0$.
Before we enter in the details of the regularization of the two point functions
(sect.~\ref{sec:DimRegHadamard}) and of the construction of a pertinent sequence $(\Tcal^{\boldsymbol{\zeta}}_n)$
of regularized time-ordered products
(sect.~\ref{sec:dimregS}), we study regularization and minimal subtraction on the
level of numerical distributions in the Epstein-Glaser framework.
\subsection{Regularization of numerical distributions}\label{MSnumerical}
In a translation invariant framework, perturbative renormalization can be understood in $x$-space as the extension
of distributions $t\in\mathcal{D}'(\RR^n\setminus\{0\})$ to distributions $\dot t\in\mathcal{D}'(\RR^n)$ \cite{Stora1993}.
This mathematical problem is
treated, for example, in \cite{BF00,DF04}.
The existence and uniqueness of extensions $\dot t$ can be answered in terms of
Steinmann's scaling degree \cite{Ste71},
\begin{equation}\label{sd}
\mathrm{sd}(t):=\inf\{\omega\in\RR\,|\,\lim_{\rho\downarrow 0}\rho^\omega\,t(\rho x)=0\}\ ,\quad
t\in\mathcal{D}'(\RR^n)\quad\text{or}\quad t\in\mathcal{D}'(\RR^n\setminus\{0\})\ .
\end{equation}
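To illustrate the definition (outside the formal development): for the homogeneous function $t(x)=x^{-2}$ on $(0,\infty)$, smeared with a bump function supported away from the origin, the pairing $\langle t(\rho\,\cdot),f\rangle$ scales as $\rho^{-2}$, so $\rho^\omega t(\rho x)\to 0$ precisely for $\omega>2$ and $\mathrm{sd}(t)=2$. A numerical sketch:

```python
import math

def t(x):                     # the distribution, here a function on R \ {0}
    return 1.0 / (x * x)

def f(x):                     # smooth bump supported in [1, 3]
    if x <= 1.0 or x >= 3.0:
        return 0.0
    u = x - 2.0
    return math.exp(-1.0 / (1.0 - u * u))

def pair_scaled(rho, a=1.0, b=3.0, n=2000):
    """<t(rho .), f> by composite Simpson."""
    h = (b - a) / n
    g = lambda x: t(rho * x) * f(x)
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3.0

# log-log slope of rho -> <t(rho .), f> is -sd(t) for homogeneous t
slope = math.log(pair_scaled(1e-2) / pair_scaled(1e-4)) / math.log(1e-2 / 1e-4)
```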
For the complete existence and uniqueness theorem we refer to \cite{BF00}; here we only mention the following partial result.
\begin{thm}[{\cite{Ste71,BF00}}]\label{thm:Extension-sd}
For $\lambda\in\RR$ let
\begin{equation}
\mathcal{D}_\lambda(\RR^n):=\{f\in\mathcal{D}(\RR^n)\,|\,(\partial^\alpha f)(0)=0\,\,\,\forall |\alpha|\leq\lambda\}
\end{equation}
(in particular $\mathcal{D}_\lambda(\RR^n)=\mathcal{D}(\RR^n)$ if $\lambda <0$)
and let $\mathcal{D}'_\lambda(\RR^n)$ be the corresponding space of distributions.
A distribution $t\in\mathcal{D}'(\RR^n\setminus\{0\})$ with scaling degree
$\mathrm{sd}(t)$ has a unique extension $\bar{t}\in\mathcal{D}_\lambda'(\RR^n)$, $\lambda=\mathrm{sd}(t)-n$, which satisfies the condition
$\mathrm{sd}(\bar{t})=\mathrm{sd}(t)$.
\end{thm}
We will call $\bar{t}$ the \textit{direct extension}. With the requirement $\mathrm{sd}(\dot t)=\mathrm{sd}(t)$, the extension is unique for $\mathrm{sd}(t)< n$ and given by the direct extension\footnote{Note that the axioms for the (regularized) time-ordered products used in \cite{DF04,BDF09}
or in this paper (sect.~\ref{sec:dimregS}), imply the condition $\mathrm{sd}(\dot t)=\mathrm{sd}(t)$.}. For $\mathrm{sd}(t)\geq n$, the condition does not fix the extension. To treat this case
we introduce a regularization.
\begin{df}[Regularization]\label{df:regularisation} Let $t\in\mathcal{D}'(\RR^n\setminus\{0\})$ be a distribution with degree of
divergence $\lambda:=\mathrm{sd}(t)-n\geq 0$, and let $\bar{t}\in\mathcal{D}_\lambda'(\RR^n)$ be the direct extension of $t$. A family of distributions $\{t^\zeta\}_{\zeta\in\Omega\setminus\{0\}}$, $t^\zeta\in\mathcal{D}'(\RR^n)$, with $\Omega\subset\CC$ a neighborhood of the origin, is called a regularization of $t$, if
\begin{equation}\label{eq:regularization}
\forall g\in\mathcal{D}_\lambda(\RR^n):\quad\lim_{\zeta\rightarrow0}\langle t^\zeta,g\rangle=\langle \bar{t},g\rangle\,.
\end{equation}
The regularization $\{t^\zeta\}$ is called analytic, if for all functions $f\in\mathcal{D}(\RR^n)$ the map
\begin{equation}
\Omega\setminus\{0\}\ni\zeta\mapsto \langle t^\zeta,f \rangle
\end{equation}
is analytic with a pole of finite order at the origin. The regularization $\{t^\zeta\}$ is called finite, if
the limit $\lim_{\zeta\rightarrow 0}\langle t^\zeta,f\rangle\in\CC$ exists $\forall f\in\mathcal{D}(\RR^n)$.
\end{df}
Note that for a finite regularization, the limit
$\lim_{\zeta\rightarrow0}t^\zeta$ is indeed a solution $\dot t$ of the
extension problem, that is $\lim_{\zeta\rightarrow0}\langle
t^\zeta,h\rangle=\langle t,h\rangle\quad\forall h\in\mathcal{D}(\RR^n\setminus\{0\})$
and $\mathrm{sd}(\lim_{\zeta\rightarrow0}t^\zeta)=\mathrm{sd}(t)\ $.\footnote{To verify the latter
let $\dot t$ be an extension with $\mathrm{sd}(\dot t)=\mathrm{sd}(t)\ $. Writing $\Delta t:=\dot t-
\lim_{\zeta\rightarrow0}t^\zeta=\sum_\gamma C_\gamma\partial^\gamma\delta$, it follows from
$\Delta t\vert_{\mathcal{D}_\lambda(\RR^n)}=0$ that $C_\gamma=0\,\,\,\forall|\gamma|>\lambda$ and,
hence, $\mathrm{sd}(\lim_{\zeta\rightarrow0}t^\zeta)=\mathrm{sd}(t)\ $.}
Any extension $\dot t\in\mathcal{D}'(\RR^n)$ of $t$ with the same scaling degree is of
the form $\langle\dot t,f\rangle=\langle \bar t,Wf\rangle$ with some projection,
\begin{equation}\label{W-projection}
\begin{array}{rccl}
W:&\mathcal{D}&\rightarrow & \mathcal{D}_{\lambda} \\
& f &\mapsto &Wf:=f-\displaystyle{\sum_{|\gamma|\leq\mathrm{sd}(t)-n}w_\gamma\;\partial^\gamma f(0)}\ ,
\end{array}
\end{equation}
given in terms of functions $w_\beta\in\mathcal{D}(\RR^n)\ ,\,\,\,|\beta|\leq\mathrm{sd}(t)-n$ , fulfilling
\begin{equation}\label{w-derivatives}
\partial^\gamma w_\beta(0)=\delta^\gamma_\beta\>\>\quad\quad\forall \gamma\in\NN_0^n
\end{equation}
\cite[Lem.~B.2]{DF04}. Since $t^\zeta\in\mathcal{D}'(\RR^n)$, we can write \eqref{eq:regularization} in the form
\begin{equation}\label{regW-2}
\langle \bar{t},Wf\rangle=\lim_{\zeta\rightarrow0}\left[\langle t^\zeta,f\rangle - \sum_{|\gamma|\leq\mathrm{sd}(t)-n}\langle t^\zeta,w_\gamma\rangle\;\partial^\gamma f(0)\right].
\end{equation}
In general, the limits of the individual terms on the right hand side might not exist. However, if the regularization $\{t^\zeta,\zeta\in\Omega\setminus\{0\}\}$ is analytic, each term can be expanded in a Laurent series around $\zeta=0$, and since the overall limit is finite, the principal parts ($\pp$) of these Laurent series must coincide,
\begin{equation}\label{pp=pp}
\forall f\in\mathcal{D}(\RR^n):\quad
\pp(\langle t^\zeta,f\rangle) = \sum_{|\gamma|\leq\mathrm{sd}(t)-n}\pp(\langle t^\zeta,w_\gamma\rangle)\;\partial^\gamma f(0)\,.
\end{equation}
Note that $\pp(\langle t^\zeta,w_\gamma\rangle)$ is independent of the choice of $w_\gamma$, because
$\pp(t^\zeta)$ is a linear combination of derivatives of $\delta(x)$ and
all information about $w_\gamma$ that is used is \eqref{w-derivatives}.
We thus have proven
\begin{lemma}\label{lem:pp-local}
The principal part of any analytic regularization $\{t^\zeta\}$ of a distribution $t\in\mathcal{D}'(\RR^n\setminus\{0\})$ is a local distribution of order $\mathrm{sd}(t)-n$, i.e.,
\begin{equation}\label{pp=local}
\pp(t^\zeta) = \sum_{|\gamma|\leq\mathrm{sd}(t)-n}C_\gamma(\zeta)\;\delta^{(\gamma)}\,.
\end{equation}
In the derivation above we have $C_\gamma(\zeta)=(-1)^{|\gamma|}\pp(\langle t^\zeta,w_\gamma\rangle)$.
\end{lemma}
Alternatively, the latter formula for $C_\gamma(\zeta)$ can be obtained directly from \eqref{pp=local}
by applying it to $w_\gamma$ and using \eqref{w-derivatives}.
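In one dimension, with the standard choice $w_\gamma(x)=x^\gamma\chi(x)/\gamma!$ for a cutoff $\chi\equiv 1$ near the origin, the projection \eqref{W-projection} simply subtracts the Taylor polynomial of $f$ at $0$ up to order $\lambda$. A sketch in exact arithmetic (the coefficients represent $f$ near $0$, where $\chi\equiv 1$; away from the origin the cutoff matters and is not modeled here):

```python
from fractions import Fraction

def W(coeffs, lam):
    """Taylor-subtraction form of the projection W in one dimension:
    (Wf)(x) = f(x) - sum_{gamma <= lam} x^gamma f^(gamma)(0) / gamma!.
    `coeffs[g]` is the coefficient of x^g, which equals f^(g)(0)/g!."""
    return [Fraction(0) if g <= lam else c for g, c in enumerate(coeffs)]

f = [Fraction(3), Fraction(2), Fraction(5), Fraction(1)]  # 3 + 2x + 5x^2 + x^3
Wf = W(f, lam=2)   # derivatives at 0 vanish up to order lambda: Wf in D_lambda
```

For $\lambda<0$ no subtraction takes place, matching $\mathcal{D}_\lambda(\RR^n)=\mathcal{D}(\RR^n)$.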
\begin{cor}[Minimal Subtraction]\label{cor:MS-same-sd}
The regular part ($\rp=1-\pp$) of any analytic regularization $\{t^\zeta\}$ of a distribution $t\in\mathcal{D}'(\RR^n\setminus\{0\})$ defines by
\begin{equation}\label{def:MS}
\langle t^\MS,f\rangle :=\lim_{\zeta\rightarrow0} \rp(\langle t^\zeta,f\rangle)
\end{equation}
an extension of $t$ with the same scaling degree, $\mathrm{sd}(t^\MS)=\mathrm{sd}(t)$.
The extension $t^\MS$ defined by (\ref{def:MS}) is called 'minimal subtraction'.
\end{cor}
In traditional terminology $(-1)\pp(t^\zeta)$ is a 'local counter term'.
\begin{proof}
It follows directly from (\ref{regW-2})-(\ref{pp=pp}) that any extension $\dot t$ of $t$ with the same scaling degree
can be written as
\begin{align*}
\langle \dot t,f\rangle=
\langle \bar{t},Wf\rangle&=\lim_{\zeta\rightarrow0}\left[\langle t^\zeta,f\rangle - \sum_{|\gamma|\leq\mathrm{sd}(t)-n}
\left[\pp\langle t^\zeta,w_\gamma\rangle+\rp\langle t^\zeta,w_\gamma\rangle\right]\;\partial^\gamma f(0)\right].\\
&=\langle t^\MS,f\rangle
-\lim_{\zeta\rightarrow0}\sum_{|\gamma|\leq\mathrm{sd}(t)-n} \rp(\langle t^\zeta,w_\gamma\rangle)\;\partial^\gamma f(0)\,.
\end{align*}
Obviously $t^\MS$ differs from $\dot t$ by a local distribution of lower or equal scaling degree.
\end{proof}
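A one-dimensional model illustrates the corollary (an illustrative sketch; we take $f(x)=e^{-x^2}$, which is not compactly supported but smooth and rapidly decreasing): for $t(x)=\theta(x)/x$ on $\RR$ one has $n=1$, $\mathrm{sd}(t)=1$, $\lambda=0$, and $t^\zeta(x)=\theta(x)\,x^{\zeta-1}$ is an analytic regularization. For this $f$ one finds exactly $\langle t^\zeta,f\rangle=\tfrac12\Gamma(\zeta/2)=f(0)/\zeta-\gamma/2+\mathcal{O}(\zeta)$, so $\pp(t^\zeta)=\delta/\zeta$ is local and minimal subtraction yields the finite value $-\gamma/2$.

```python
import math

def simpson(g, a, b, n=4000):
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3.0

f0 = 1.0   # f(0) for the test function f(x) = exp(-x^2)

def pairing(zeta):
    """<t^zeta, f> = int_0^inf x^(zeta-1) exp(-x^2) dx; the pole is split
    off analytically using int_0^1 x^(zeta-1) dx = 1/zeta."""
    def reg(x):   # x^(zeta-1) * (f(x) - f(0)), extended by 0 to x = 0
        return 0.0 if x == 0.0 else x ** (zeta - 1) * (math.exp(-x * x) - f0)
    tail = simpson(lambda x: x ** (zeta - 1) * math.exp(-x * x), 1.0, 8.0)
    return f0 / zeta + simpson(reg, 0.0, 1.0) + tail

zeta = 0.01
val = pairing(zeta)        # Laurent value at this zeta: f0/zeta - gamma/2 + ...
ms = val - f0 / zeta       # regular part: the minimally subtracted value
```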
\begin{cor}
For finite $\zeta$ the projection to the regular part of any analytic regularization $\{t^\zeta\}$ can be realized as a $W$-projection up to a term of order $\zeta$, i.e., there exists a projection $W^\MS:\mathcal{D}(\RR^n)\rightarrow\mathcal{D}_\lambda(\RR^n)$, $\lambda=\mathrm{sd}(t)-n$, such that
\begin{equation}
\forall f\in\mathcal{D}(\RR^n):\quad \rp\langle t^\zeta, f\rangle=\langle t^\zeta,W^\MS f\rangle+\mathcal{O}(\zeta)\,.
\end{equation}
\end{cor}
\begin{proof}
According to Corollary~\ref{cor:MS-same-sd} there is a projection $W^\MS:\mathcal{D}(\RR^n)\rightarrow\mathcal{D}_\lambda(\RR^n)$ such that
\begin{equation}\label{eq:W-MS}
\langle t^\MS,f \rangle=\langle \bar t,W^\MS f\rangle\ .
\end{equation}
For the regular part it follows, for all $f\in\mathcal{D}(\RR^n)$,
\begin{align*}
\rp\langle t^\zeta, f\rangle
&=\bigg\langle t^\zeta-\pp(t^\zeta), W^\MS f+\sum_{|\gamma|\leq\mathrm{sd}(t)-n} w_\gamma^\MS f^{(\gamma)}(0) \bigg\rangle\\
&=\langle t^\zeta, W^\MS f\rangle + \sum_{|\gamma|\leq\mathrm{sd}(t)-n} \rp\langle t^\zeta,w_\gamma^\MS\rangle f^{(\gamma)}(0)\,,
\end{align*}
since $\pp\langle t^\zeta,W^\MS f\rangle=0$ by \eqref{pp=local}. The left hand side as well as the first term on the right
hand side tend to $\langle t^\MS,f\rangle$ as $\zeta\rightarrow0$, cf.~(\ref{def:MS}) and \eqref{eq:regularization}, (\ref{eq:W-MS}).
Hence the remaining sum on the right hand side needs to vanish in this limit, and since it is the regular part of a Laurent series it is at least of order $\zeta$.
\end{proof}
By means of the results above, the statements made at the beginning of section \ref{sec:forest-MS}
can be illustrated on the level of the numerical distributions $t=t_{\Gamma}^{\beta}$ introduced
in \eqref{time ordered functions}.
We first note that $t^\zeta\mapsto -\pp(t^\zeta)$ is
just the action in terms of the numerical distributions of the projectors
$R_I$, $I=V(\Gamma)$ in the Forest Formula \eqref{EGforest},
because $\mathcal{S}^{\zeta\,(n)}$ depends on $\zeta$ only through
$t^\zeta$ (see sect.~\ref{sec:dimregS}). The fact that $t^\MS$ is a
renormalization of $t$ admitted by the Epstein-Glaser
axioms (Corollary \ref{cor:MS-same-sd}), reflects that $\mathcal{S}_{\MS}$ is a solution of these axioms.
Moreover, since in $(\mathcal{S}^{\zeta}\circ \mathcal{Z}^{\zeta}_{\MS})^{(n)}$ the action of $\mathcal{Z}^{\zeta}_{\MS}$
is the addition of the divergent counter terms for all contributing diagrams and for all their
subdiagrams, and since, in terms of the pertinent numerical distributions $t^\zeta$,
these counter terms are given by $(-1)\pp(t^\zeta)$, and since $\pp(t^\zeta)$ is a
{\it local} distribution with $\mathrm{sd}(\pp(t^\zeta))\leq\mathrm{sd}(t)$
(Lemma \ref{lem:pp-local}), we understand on the level of numerical distributions why
$\mathcal{Z}^{\zeta}_{\MS}\in\mathscr{R}$ (see \eqref{R-locality}).
\subsection{Dimensionally Regularized Two Point Function}\label{sec:DimRegHadamard}
As in the unregularized case, the regularized Wightman two point function $\Delta_+^\zeta$
differs from $\Delta_F^\zeta$ \eqref{regFeynprop} only by a change of the boundary value prescription:
\begin{equation}\label{regWightman}
\Delta^{\zeta}_+(x):=\mu^{2\zeta}\lim_{\epsilon\searrow 0}\mathpzc{w}^{d-2\zeta}(x^2-i\epsilon x^0)\ .
\end{equation}
As for $\Delta_F^\zeta$, we conclude that $\WF(\Delta_+^\zeta)\subset \WF(\Delta_+)$. Due to
\begin{equation}\label{eq:RegularizedCausalityDistributions}
\Delta_F^\zeta(x)=
\begin{cases}
\Delta_+^\zeta(x) \quad & \text{if}\quad x\notin \bar V_-\\
\Delta_+^\zeta(-x) \quad & \text{if}\quad x\notin \bar V_+
\end{cases}\ ,
\end{equation}
causality of the regularized time-ordered products $\Tcal^{{\boldsymbol{\zeta}}}_n$ can be postulated in the
Bogoliubov-Shirkov way \cite{BS59} and with that the family $(\Tcal^{{\boldsymbol{\zeta}}}_n)$ can be
constructed inductively by a version of the Epstein-Glaser method; this is done in the next subsection.
Under a simultaneous scaling of $x$ and $m$, $\Delta_+^\zeta$ and $\Delta_F^\zeta$ are homogeneous,
\begin{equation}\label{hom:scaling}
\rho^{d-2-2\zeta}\,\Delta_{j,\rho^{-1}m}^\zeta(\rho x)=\Delta_{j,m}^\zeta(x)\ ,\quad j=+,F\ ,
\end{equation}
since $\mathpzc{w}^{d-2\zeta}(z^2)$ has this property.
We now use the relation between Bessel functions
\begin{equation}
K_\nu=\frac{\pi}{2\sin(\nu\pi)}\,[I_{-\nu}-I_\nu]\ ,\quad \nu\in\CC\setminus\ZZ\ ,
\end{equation}
and the fact that $I_{\nu}$ is of the form
\begin{equation}
I_{\nu}(z)=z^{\nu}F_{\nu}(z^2)
\end{equation}
with the entire function
\begin{equation}
F_{\nu}(z^2)=2^{-\nu}\sum_{k=0}^\infty\frac1{k!\Gamma(\nu+k+1)}\left(\frac{z^2}{4}\right)^{k} \ .
\end{equation}
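Combining the last two relations makes the mechanism behind the decomposition explicit; the following display is our own intermediate step, not spelled out in the text:

```latex
K_\nu(z)=\frac{\pi}{2\sin(\nu\pi)}
\Bigl[z^{-\nu}\,F_{-\nu}(z^2)-z^{\nu}\,F_{\nu}(z^2)\Bigr]\ ,
\quad \nu\in\CC\setminus\ZZ\ .
```

For $\nu=\frac{d}{2}-1-\zeta$ and $z=m\sqrt{-x^2}$, the $I_{-\nu}$-term yields, after multiplication with the prefactor of the two-point function, the non-integer power $(-x^2)^{1-\frac{d}{2}+\zeta}$ appearing in \eqref{Hreg}, while the $I_\nu$-term yields the part \eqref{Creg} that is entire in $x^2$.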
Inserting these relations into the formula for the 2-point function we obtain the decomposition
\begin{equation}\label{D=H+C}
\Delta_{j,m}^\zeta=H_j^{m,\zeta}+C_m^\zeta
\end{equation}
where
\begin{equation}\label{Creg}
C_m^\zeta(x)=-c(d-2\zeta)m^{d-2}\left(\frac{\mu}{m}\right)^{2\zeta}F_{\frac{d}{2}-1-\zeta}(-m^2x^2)
\end{equation}
\begin{equation}\label{Hreg}
H_j^{m,\zeta}(x)=c(d-2\zeta)\mu^{2\zeta}\left((-x^2)^{1-\frac{d}{2}+\zeta}\right)_jF_{\zeta-\frac{d}{2}+1}(-m^2x^2)
\end{equation}
with $c(d-2\zeta):=(2\pi)^{\zeta-\frac{d}{2}}\frac{\pi}{2\sin((\frac{d}{2}-1-\zeta)\pi)}$. The index $j=+,F$ denotes as above the appropriate boundary values. Note that
the zeroes of the sine function at multiples of $\pi$ produce poles at $\zeta=\frac{d}{2}+n, n\in\ZZ$ in the above decomposition which cancel in the sum \eqref{D=H+C}.
We observe that $H_j^{m,\zeta}$ is a smooth function of the mass $m$ and that $C_m^\zeta$ is a smooth function of the position $x$. Both terms satisfy the homogeneous scaling \eqref{hom:scaling}. $H_F^{m,\zeta}$ is the Feynman-type propagator corresponding to the Hadamard function which was already used in \cite{BDF09} (see also \cite{Kel10a}).
The interpretation of $\Delta_+^\zeta$ as the dimensionally regularized 2-point function (in spite of the fact that it is a distribution in $d$ dimensions) may be justified by the fact that it solves an appropriately deformed
version of the Klein-Gordon equation. This may be useful for the discussion of symmetries (such as current conservation or gauge invariance, cf.~\cite{BD08,FRb}) for the
dimensionally regularized amplitudes.
\begin{lemma}\label{mod-KG}
Let $d_\zeta:=d-2\zeta\ $, $t:=x^0\ $, $r:=\sqrt{\sum_{i=1}^{d-1} (x^i)^2}$
and let the $\zeta$-dependent functions $f_\zeta$ and $G_\zeta$ be related by
\begin{equation}
f_\zeta(z)=z^{-d_\zeta/2+1}\,G_\zeta(mz)\ .
\end{equation}
For $r\not=0$ we introduce the 'wave operator
in $d_\zeta$-dimensions':
\begin{equation}\label{reg-wellenop}
\square_d^\zeta:=\partial_t^2-\partial_r^2
-\frac{d_\zeta-2}{r}\partial_r-\frac{1}{r^2}\Delta_{S^{d-2}}\ ,
\end{equation}
with the Laplacian $\Delta_{S^{d-2}}$ on the $(d-2)$-sphere.
\begin{itemize}
\item[(a)] For $x^2=t^2-r^2<0$ the function $F_\zeta(x):=f_\zeta(\sqrt{-x^2})$ solves
the 'Klein-Gordon equation in $d_\zeta$-dimensions', i.e.
\begin{equation}\label{eq:mod-KG}
(\square_d^\zeta+m^2)\, F_\zeta(x)=0\ ,
\end{equation}
if and only if $G_\zeta(u)$ is a solution of the modified Bessel equation of order
$d_\zeta/2-1$,
\begin{equation}\label{mod-Bessel}
G''_\zeta(u)+\frac{G'_\zeta(u)}{u}-G_\zeta(u)\Bigl(1+\frac{(d_\zeta/2-1)^2}{u^2}\Bigr)=0\ .
\end{equation}
\item[(b)] $\Delta_+^\zeta(x)$ solves the 'Klein-Gordon equation in $d_\zeta$-dimensions'
\eqref{eq:mod-KG} for all $x$ with $r\not=0$.
\end{itemize}
\end{lemma}
\begin{proof} {\it (a) and (b):} The statement (a) is obtained straightforwardly
by inserting the definitions and computing the derivatives. Since
\begin{equation}
\Delta^\zeta_+(x)=\lim_{\epsilon\downarrow 0}f_\zeta(\sqrt{r^2-t^2+it\epsilon})
\end{equation}
with a pertinent function $G_\zeta$ solving \eqref{mod-Bessel}, part (a) immediately yields
$(\square_d^\zeta+m^2)\,\Delta^\zeta_+(x)=0$ for $x^2<0$. For $x^2\geq 0$ the calculation in the proof of (a)
has to be supplemented by the $i\epsilon$-terms; the final limit $\epsilon\downarrow 0$ is harmless.
\end{proof}
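For the reader's convenience, here is the radial computation behind part (a), under the stated assumption that $F_\zeta$ depends on $x$ only through $s:=\sqrt{-x^2}$ (our own sketch; the angular term drops out for such functions):

```latex
\square_d^\zeta\,F_\zeta(x)
=-f_\zeta''(s)-\frac{d_\zeta-1}{s}\,f_\zeta'(s)\ ,
\qquad s=\sqrt{r^2-t^2}\ ,
```

so that \eqref{eq:mod-KG} becomes $f_\zeta''+\frac{d_\zeta-1}{s}\,f_\zeta'-m^2f_\zeta=0$, and the substitution $f_\zeta(s)=s^{1-d_\zeta/2}\,G_\zeta(ms)$ turns this into the modified Bessel equation of order $d_\zeta/2-1$.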
\subsection{Dimensionally Regularized Time-ordered products}\label{sec:dimregS}
In contrast to the situation described in \cite[Sect.~5.2]{BDF09} and \cite[Sect.~4]{DWilson},
the regularized Feynman propagator $\Delta^\zeta_F\in\mathcal{D}^\prime(\MM)$ is {\it not} a smooth function.
Actually, we are not aware of any analytic regularization which yields smooth propagators.
Hence, the construction of the regularized time-ordered products
involves non-direct extensions of distributions.
The aim of this section is to construct a unique family of linear maps
\begin{equation}\label{Sreg}
\mathcal{T}_n^{\boldsymbol{\zeta}}:\Floc^{\otimes n}\rightarrow\Fcal
\end{equation}
perturbatively by Epstein-Glaser induction (to simplify the notation we write $\Floc$ and $\Fcal$ for $\Floc[[\hbar]]$ and $\Fcal[[\hbar]]$, resp.). The construction has to be done in such a way
that each $\Tcal_n^{\,{\boldsymbol{\zeta}}}$ in the perturbative expansion is a meromorphic function of
$N:= {n\choose 2}$ complex parameters $\zeta_{ij}$, i.e. each order in $\hbar$ is meromorphic.
We choose different parameters $\zeta_{ij}$
for each bidifferential operator in the formal expression
\begin{equation}\label{Sreg-unren}
T^{{\boldsymbol{\zeta}},\,\mathrm{unren}}_{n}:=\exp\sum_{1\leq i<j\leq n} D_{ij}^{\zeta_{ij}}\ ,\quad
D_{ij}^{\zeta_{ij}}:=\langle \hbar \Delta_F^{\zeta_{ij}},\tfrac{\delta^2}{\delta\varphi_i\delta\varphi_j}\rangle\ ,
\end{equation}
which we now want to make precise with the use of homogeneous extensions of distributions ('$\mathrm{unren}$' stands for 'unrenormalized'). We can expand the exponential in \eqref{Sreg-unren} in terms of graphs by means of \eqref{time:ord}, so we can construct $T^{{\boldsymbol{\zeta}}}_n$ as a sum of $T_\Gamma^{{\boldsymbol{\zeta}}}$, with $\Gamma\in\Gcal_n$ (the set of all graphs with vertices $\{1,\dots,n\}$). Each expression $T^{{\boldsymbol{\zeta}}}_\Gamma$ can be obtained by recursively extending $t_{\Gamma}$ given by the formula \eqref{SGamma} to a distribution defined on the space $\mathcal{D}(\Delta_{\Gamma},Y_{\Gamma})$ introduced in Section \ref{EG}. The family ${\boldsymbol{\zeta}}$ contains one regularization parameter for each pair of vertices.
We write ${\boldsymbol{\zeta}}:=(\zeta_{ij})_{1\leq i<j\leq n}\in\CC^N$.
The regularized time-ordered products $\mathcal{T}_n^{\boldsymbol{\zeta}}$ are given by $m_n\circ T^{{\boldsymbol{\zeta}}}_{n}$. We will show in this section that the latter can be constructed in such a way that certain properties similar to the Epstein-Glaser axioms are satisfied. These properties can be specified equivalently on the level of $T^{{\boldsymbol{\zeta}}}_{n}$ or $T^{{\boldsymbol{\zeta}}}_{\Gamma}$, and we will make use of both possibilities, depending on notational convenience.
The axioms which we assume are the following (compare with \cite{DF04} and \cite{BDF09}):
\begin{itemize}
\item{\bf Starting element}. $T_1^{{\boldsymbol{\zeta}}}=\id\ $,
\item{\bf Causality}. Let $F_1\,\dots,F_n$ be local functionals such that $F_1,\dots,F_k$ have supports later than the supports of $F_{k+1},\dots,F_n$. Let us denote by $I$ the index set $\{1,\dots,k\}$ and by ${\boldsymbol{\zeta}}_I$ the family of parameters
$\zeta_{ij}$, where $i,j\in I$. Similarly, elements of ${\boldsymbol{\zeta}}_{I^c}$ will have $i,j\in I^c$ and elements of ${\boldsymbol{\zeta}}_{II^c}$ satisfy: $i\in I$, $j\in I^c$. Together they form the set of parameters ${\boldsymbol{\zeta}}=({\boldsymbol{\zeta}}_I,{\boldsymbol{\zeta}}_{I^c},{\boldsymbol{\zeta}}_{II^c})$.
The condition of causality is the requirement that
\begin{equation}\label{caus}
T_n^{{\boldsymbol{\zeta}}}(F_1,\dots,F_n)=\exp(\sum\limits_{i\leq k\atop j> k}D_{ij}^{{\boldsymbol{\zeta}}_{II^c}})\ T_k^{{\boldsymbol{\zeta}}_I}(F_1,\dots,F_k)\otimes T_{n-k}^{{\boldsymbol{\zeta}}_{I^c}}(F_{k+1},\dots,F_n)\,.
\end{equation}
\item{\bf $\varphi$-Locality}. We require that $T_n^{\boldsymbol{\zeta}}$ is, at $L$-th order in $\hbar$, a functional differential operator on $\Ecal(\M)$ of order $2L$ (see \cite{BDF09} for details).
\item{\bf Field Independence}. For every $k=1,\ldots,n$ we require that
$\langle\frac{\delta}{\delta \varphi_k} T_n^{\boldsymbol{\zeta}}(F_1,\dots,F_n),\psi\rangle=T_n^{\boldsymbol{\zeta}}(F_1,\dots,\langle\frac{\delta F_k}{\delta \varphi_k},\psi\rangle,\dots,F_n),\ $ with $F_1,\dots,F_n\in\Floc,\,\psi\in\Ecal(\MM)\ $.
As explained in \cite{BDF09}, $\varphi$-Locality and Field Independence imply that
$T^{\,{\boldsymbol{\zeta}}}_n(F^{\otimes n})$ can be expanded in the fields as follows:
\begin{equation}\label{causWick}
T^{\,{\boldsymbol{\zeta}}}_n(F^{\otimes n})(\varphi_1,\ldots,\varphi_n)=
\sum_{\alpha,\beta}\langle t_{\alpha}^{{\boldsymbol{\zeta}},\beta},f^{\alpha_1}_{\beta_1}(\varphi_1)\otimes\cdots\otimes f^{\alpha_n}_{\beta_n}(\varphi_n)\rangle \,
\end{equation}
where the test functions $f^{\alpha_i}_{\beta_i}(\varphi)$ are defined in \eqref{dF} and the numerical distribution
$t^{{\boldsymbol{\zeta}},\beta}_{\alpha}=\sum_{\Gamma}t^{{\boldsymbol{\zeta}},\beta}_{\Gamma}$ is a time-ordered product of balanced fields\footnote{Balanced fields are local field polynomials
$A(x) = P(\partial_1,\dots,\partial_n)\varphi(x_1)\cdots\varphi(x_n)\big|_{x_1=\dots=x_n=x}$.
Here $P$ is a polynomial, with the peculiarity that $P$
depends only on the differences of variables $(\partial_i - \partial_j)$ (``relative derivatives'').
Balanced fields, originally introduced in \cite{BOR02}, were used in \cite{DF04}
in order to fulfill Stora's Action Ward Identity (AWI). The latter guarantees that the time-ordered product
depends only on local interaction {\it functionals} $F$, and not on
the choice of a corresponding Lagrangian.} at $\varphi=0$,
where the sum runs over all graphs with vertex set $V(\Gamma)=\{1,\dots,n\}$
and $\alpha_i$ lines at the vertex $i$, $i=1,\dots,n$.
This formula is a generalization of the causal Wick expansion given in \cite{EG73}.
We point out that the r.h.s.~of \eqref{causWick} depends
on ${\boldsymbol{\zeta}}$ only through the numerical distributions $t^{{\boldsymbol{\zeta}},\beta}_{\alpha}$.
\item{\bf Translation Invariance}. This axiom can be expressed by the requirement that the numerical
distributions $t^{{\boldsymbol{\zeta}},\beta}_{\alpha}$ appearing in the field expansion
\eqref{causWick} depend only on the relative coordinates $(x_1-x_n,...,x_{n-1}-x_n)$.
\item{\bf Smoothness in $m^2$}. To formulate the requirement of smoothness in the mass we make use of the decomposition of the Feynman propagator $\Delta_{F}^{m,\zeta}$ into $H_{F}^{m,\zeta}$ and $C^{m,\zeta}$. Let $D_{ij}^{C}:=\langle \hbar C^{m,\zeta_{ij}},\tfrac{\delta^2}{\delta\varphi_i\delta\varphi_j}\rangle$. We can ``factor out'' the powers of $C^{m,\zeta}$ from the regularized time-ordered products by means of the following transformation:
\begin{equation}\label{TH:TF}
T^{{\boldsymbol{\zeta}}}_{H,n}(F_1,\dots, F_n):=\exp\Big(-\sum_{i<j}D_{ij}^{C}\Big)\circ T^{{\boldsymbol{\zeta}}}_n (F_1,\dots, F_n)\,.
\end{equation}
Let us now explain what this operation means in terms of Feynman graphs. First we decompose a given graph into a sum of graphs
that have $H_{F}^{m,\zeta}$ or $C^{m,\zeta}$ assigned to lines. For example, the setting sun graph can be written as
\[
\begin{tikzpicture}[thick,scale=1.5]
\useasboundingbox (0,0) rectangle (1,0.6);
\filldraw (0,0) circle (1pt);
\filldraw (0.8,0) circle (1pt);
\draw (0,0) edge [out=80,in=100] node[above] {} (0.8,0);
\draw (0,0) -- node[above] {} (0.8,0);
\draw (0,0) edge [out=-80,in=-100] node[above] {} (0.8,0);
\end{tikzpicture}
\,=
\begin{tikzpicture}[thick,scale=1.5]
\useasboundingbox (0,0) rectangle (1,0.6);
\filldraw (0,0) circle (1pt);
\filldraw (0.8,0) circle (1pt);
\draw (0,0) edge [out=80,in=100] node[above=-1.5pt] {\footnotesize$C$} (0.8,0);
\draw (0,0) -- node[above=-1.5pt] {\footnotesize$C$} (0.8,0);
\draw (0,0) edge [out=-80,in=-100] node[above=-1.5pt] {\footnotesize$C$} (0.8,0);
\end{tikzpicture}
\
+
3\ \begin{tikzpicture}[thick,scale=1.5]
\useasboundingbox (0,0) rectangle (1,0.6);
\filldraw (0,0) circle (1pt);
\filldraw (0.8,0) circle (1pt);
\draw (0,0) edge [out=80,in=100] node[above=-1.5pt] {\footnotesize$C$} (0.8,0);
\draw (0,0) -- node[above=-1.5pt] {\footnotesize$H$} (0.8,0);
\draw (0,0) edge [out=-80,in=-100] node[above=-1.5pt] {\footnotesize$C$} (0.8,0);
\end{tikzpicture}
\
+
3\ \begin{tikzpicture}[thick,scale=1.5]
\useasboundingbox (0,0) rectangle (1,0.6);
\filldraw (0,0) circle (1pt);
\filldraw (0.8,0) circle (1pt);
\draw (0,0) edge [out=80,in=100] node[above=-1.5pt] {\footnotesize$H$} (0.8,0);
\draw (0,0) -- node[above=-1.5pt] {\footnotesize$H$} (0.8,0);
\draw (0,0) edge [out=-80,in=-100] node[above=-1.5pt] {\footnotesize$C$} (0.8,0);
\end{tikzpicture}
\ +
\ \begin{tikzpicture}[thick,scale=1.5]
\useasboundingbox (0,0) rectangle (1,0.6);
\filldraw (0,0) circle (1pt);
\filldraw (0.8,0) circle (1pt);
\draw (0,0) edge [out=80,in=100] node[above=-1.5pt] {\footnotesize$H$} (0.8,0);
\draw (0,0) -- node[above=-1.5pt] {\footnotesize$H$} (0.8,0);
\draw (0,0) edge [out=-80,in=-100] node[above=-1.5pt] {\footnotesize$H$} (0.8,0);
\end{tikzpicture}
\]
Next, we write each such graph as a product of two graphs with only one kind of line, for example:
\[
\begin{tikzpicture}[thick,scale=1.5]
\useasboundingbox (0,-0.1) rectangle (1,0.6);
\filldraw (0,0) circle (1pt);
\filldraw (0.8,0) circle (1pt);
\draw (0,0) edge [out=80,in=100] node[above=-1.5pt] {\footnotesize$C$} (0.8,0);
\draw (0,0) -- node[above=-1.5pt] {\footnotesize$H$} (0.8,0);
\draw (0,0) edge [out=-80,in=-100] node[above=-1.5pt] {\footnotesize$C$} (0.8,0);
\end{tikzpicture}\ =\ \begin{tikzpicture}[thick,scale=1.5]
\useasboundingbox (0,-0.1) rectangle (1,0.6);
\filldraw (0,0) circle (1pt);
\filldraw (0.8,0) circle (1pt);
\draw (0,0) edge [out=80,in=100] node[above=-1.5pt] {\footnotesize$C$} (0.8,0);
\draw (0,0) edge [out=-80,in=-100] node[above=-1.5pt] {\footnotesize$C$} (0.8,0);
\end{tikzpicture}\cdot\ \begin{tikzpicture}[thick,scale=1.5]
\useasboundingbox (0,-0.1) rectangle (1,0.6);
\filldraw (0,0) circle (1pt);
\filldraw (.8,0) circle (1pt);
\draw (0,0) -- node[above=-1.5pt] {\footnotesize$H$} (0.8,0);
\end{tikzpicture}
\]
Similarly to \cite{BDF09}, we require the maps $T^{{\boldsymbol{\zeta}}}_{H,n}$ to be smooth in $m^2\in\RR$.
Since one can switch between
$T^{{\boldsymbol{\zeta}}}_{H,n}$ and $T^{{\boldsymbol{\zeta}}}_n$ using the map $\exp\,{\sum_{i<j}D_{ij}^{C}}$, this requirement also constrains $T^{{\boldsymbol{\zeta}}}_n$. The contribution to $T^{{\boldsymbol{\zeta}}}_{H,n}$ coming from a graph $\Gamma$ will be denoted by $T^{{\boldsymbol{\zeta}}}_{H,\Gamma}$.
\item{\bf Scaling}. Both the regularized Feynman propagator $\Delta^{m,\zeta}_F$ and $H_{F}^{m,\zeta}$ satisfy the scaling property \eqref{hom:scaling}, so it is natural to require a corresponding scaling behavior from $T^{{\boldsymbol{\zeta}}}_{H,n}$ and $T^{{\boldsymbol{\zeta}}}_n$. Following \cite{DF04,BDF09}, we define a map $\sigma_\rho:\Fcal\rightarrow \Fcal$, which acts as the scaling transformation:
\[
\sigma_\rho (F)(\varphi):=F(\rho^{\frac{2-d}{2}}\varphi_\rho)\,,\qquad \varphi_\rho(x)=\varphi(\rho^{-1}x)\,.
\]
For $F_1,\dots,F_n\in \Fcal$ with disjoint supports
\begin{equation}\label{nonren:scaling}
\sigma_\rho\circ T^{m,{\boldsymbol{\zeta}}}_n\circ \sigma_\rho^{-1}(F_1,\dots,F_n)=\exp\Big(\sum_{1\leq i< j\leq n}\rho^{2\zeta_{ij}}D^{\rho m}_{ij}\Big)(F_1,\dots,F_n)
\end{equation}
holds, where we exhibited the dependence on the mass $m$. To formulate the scaling condition, it is convenient to work on the level of graphs. In the expression \eqref{nonren:scaling} we get a factor $\rho^{2\zeta_{ij}}$ for each line joining vertices $i$ and $j$. We want the extended $ T^{m,{\boldsymbol{\zeta}}}_n$ and $ T^{m,{\boldsymbol{\zeta}}}_{H,n}$ to behave in the same way, so we require that
\begin{equation}\label{scalingH}
\sigma_\rho\circ T^{m,{\boldsymbol{\zeta}}}_{H,\Gamma}\circ \sigma_\rho^{-1}=\rho^{2 \mathbf{l}{\boldsymbol{\zeta}}}\,T^{\rho m,{\boldsymbol{\zeta}}}_{H,\Gamma}\,,
\end{equation}
and the same for $T^{m,{\boldsymbol{\zeta}}}_{\Gamma}$. In the formula above, $l_{ij}$ is the number of lines connecting vertices $i$ and $j$ and $\mathbf{l}{\boldsymbol{\zeta}}$ is the scalar product $\mathbf{l}{\boldsymbol{\zeta}}:=\sum_{i<j}l_{ij}\zeta_{ij}$.
The formula above may be illustrated by the following example:
{\small \begin{example}
\begin{align}
&\sigma_\rho\circ
D_{12}^{m,\zeta_{12}}\,D_{23}^{m,\zeta_{23}}\circ\sigma_\rho^{-1}
(\varphi_1^2(x), \varphi_2^2(y),\varphi_3^2(z))\notag\\
&\quad\quad= 8\hbar^2\,\rho^{2(d-2)}\,\Delta_{F}^{m,\zeta_{12}}
(\rho(x-y))\,\Delta_{F}^{m,\zeta_{23}}(\rho(y-z))\,
\sigma_\rho(\rho^{d-2}\varphi_1(\rho x)\varphi_3(\rho z))\notag\\
&\quad\quad= \rho^{2(\zeta_{12}+\zeta_{23})}\, D_{12}^{\rho m,\zeta_{12}}\,
D_{23}^{\rho m,\zeta_{23}}(\varphi_1^2(x), \varphi_2^2(y),\varphi_3^2(z)) \end{align}
\end{example}}
\end{itemize}
We will now show that the given axioms determine the family
$(T_n^{\boldsymbol{\zeta}})$ {\it uniquely}, for an appropriate choice of the parameters ${\boldsymbol{\zeta}}$. First we construct the family $(T_{H,n}^{\boldsymbol{\zeta}})$ by the Epstein-Glaser induction.
Using the causal factorization and the field expansion, in each order $n$, we reduce the problem to the extension of a numerical distribution defined everywhere outside of the thin diagonal. The crucial property that allows us to do this is the fact that the propagators $H^{m,{\boldsymbol{\zeta}}}_F$ are symmetric for spacelike points and therefore the definition of $(T_{H,n}^{\boldsymbol{\zeta}})$ doesn't depend on the way in which we split $F_1,\dots,F_n$ into an earlier and later supported set on the r.h.s. of \eqref{caus}. The scaling behavior of the numerical distributions is obtained from the formula \eqref{scalingH}, after inserting the field expansions \eqref{causWick} of functionals. See \cite{DF04} for details of this construction. For a given graph $\Gamma$, the scaling degree of a numerical coefficient $t_{H,\Gamma}^{{\boldsymbol{\zeta}},\beta}$ is given by
\begin{equation}\label{kappa:zeta}
\kappa^{{\boldsymbol{\zeta}},\beta}=\sum\limits_{i<j} l_{ij}(d-2-2\zeta_{ij})+|\beta|\,.
\end{equation}
We choose the parameters $\zeta_{ij}$ in such a way that $\kappa^{{\boldsymbol{\zeta}},\beta}\notin\NN_0+d(|V(\Gamma)|-1)$ and $\Re(\kappa^{{\boldsymbol{\zeta}},\beta})<\kappa^{\mathbf{0},\beta}+1$, where $\kappa^{\mathbf{0},\beta}:=\sum_{i<j} l_{ij}(d-2)+|\beta|$. The reason for the former condition will be seen in \eqref{diffrenhom0}. The latter condition guarantees that the regularization does not make ${t_{H,\Gamma}^{{\boldsymbol{\zeta}},\beta}}$ too singular.
If these conditions are fulfilled, we obtain a unique homogeneous extension with the same degree $\kappa^{{\boldsymbol{\zeta}},\beta}$, which is smooth in $m^2$. The uniqueness follows immediately from the fact that two homogeneous extensions would differ by a sum of derivatives of the $\delta$-function multiplied by non-integer powers of $m$, thus violating the smoothness condition. The explicit construction of such an extension will be given in section \ref{extension-hom}. In section \ref{examples} we illustrate the inductive procedure presented here on the level of single diagrams.
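As a simple illustration of this power counting (our own example, not taken from the text): for the 'fish' diagram, i.e.~two vertices joined by $l_{12}=2$ lines, in $d=4$ and with $\beta=0$, formula \eqref{kappa:zeta} gives

```latex
\kappa^{{\boldsymbol{\zeta}},\beta}=2\,(4-2-2\zeta_{12})=4-4\zeta_{12}\ ,
\qquad
\kappa^{\mathbf{0},\beta}=4=d\,(n-1)\ .
```

Hence a single multiplication by coordinates followed by one derivative suffices in the extension formula \eqref{diffrenhom0} below, and the resulting extension exhibits a simple pole at $\zeta_{12}=0$ coming from the prefactor $1/(2\,\mathbf{l}{\boldsymbol{\zeta}})=1/(4\zeta_{12})$.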
Having constructed the family $(T_{H,n}^{\boldsymbol{\zeta}})$ as a unique solution to our extension problem, we can obtain $(T_n^{\boldsymbol{\zeta}})$ by applying \eqref{TH:TF}.
The maps $T_n^{\boldsymbol{\zeta}}$ constructed here have some additional useful
{\bf properties}:\footnote{In the exact theory (i.e.~for ${\boldsymbol{\zeta}} ={\bf 0}$),
these properties play the role of renormalization
conditions; they are part of the axioms \cite{EG73,DF04,BDF09}.}
\begin{itemize}
\item{\bf Lorentz Covariance}. Since $\Delta^\zeta_+$ and $\Delta^\zeta_F$ are of the form
$\Delta^\zeta_+(x)=\mu^{2\zeta}\lim_{\epsilon\searrow 0}\mathpzc{w}^{d-2\zeta}(x^2-i\epsilon x^0)$ and $\Delta^\zeta_F(x)=\mu^{2\zeta}\lim_{\epsilon\searrow 0}\mathpzc{w}^{d-2\zeta}(x^2-i\epsilon)$ resp.,
they are Lorentz invariant. This is the origin of Lorentz Covariance of $T_n^{\boldsymbol{\zeta}}$:
\begin{equation}
\beta_L(T_n^{\boldsymbol{\zeta}}(F_1,\dots,F_n))=T_n^{\boldsymbol{\zeta}}(\beta_L(F_1),\dots,\beta_L(F_n))\quad\quad
\forall L\in {\cal L}_+^\uparrow\ ,
\end{equation}
where $\beta$ is the natural automorphic action of the Lorentz group ${\cal L}_+^\uparrow$
on $\Fcal$.
\item{\bf Unitarity}.
In the exact theory (${\boldsymbol{\zeta}} ={\bf 0}$) one wants the relation $\bar S(-F)=(S(F))^{-1}$
to hold true, where
$\bar S(F):=\overline{S(\bar F)}$. In our formalism for the regularized
time-ordered products, the corresponding property can be formulated as
\begin{equation}\label{eq:unitarity}
(-1)^n\,\bar{T}_n^{\boldsymbol{\zeta}}=\hat{T}_n^{\boldsymbol{\zeta}}\ ,
\end{equation}
where $\bar{T}_n^{\boldsymbol{\zeta}}(F^{\otimes n}):=\overline{T_n^{\bar{\boldsymbol{\zeta}}}(\bar F^{\otimes n})}$ and
\begin{equation}\label{hatT}
\hat{T}_n^{\boldsymbol{\zeta}}:=
\sum_{P=(I_1,...,I_r)\in\mathrm{Part}(\{1,\dots,n\})}(-1)^r \exp\Bigl(
\sum_{i\in I_k,\,j\in I_l\,\mathrm{with}\,k<l}D_{ij}^+\Bigr)\,T^{{\boldsymbol{\zeta}}_{I_1}}_{|I_1|}\otimes\cdots\otimes
T^{{\boldsymbol{\zeta}}_{I_r}}_{|I_r|}
\end{equation}
with $D_{ij}^+:=\langle \hbar \Delta_+^{\zeta_{ij}},\tfrac{\delta^2}{\delta\varphi_i\delta\varphi_j}\rangle$.
Similarly to the exact theory (cf. \cite{EG73,Sch89})\footnote{The fact that we work with different
$\zeta$'s does not complicate the calculations, because the propagators depend on $(i,j)$ already
via their argument $(x_i-x_j)$.},
$\hat{T}_n^{\boldsymbol{\zeta}}$ satisfies anti-causal factorization, i.e.~the
$T^{\boldsymbol{\zeta}}$-factors on the r.h.s.~of \eqref{caus} appear in reversed order.
This property holds also for $\bar{T}^{\boldsymbol{\zeta}}_n$, because the underlying propagator is the
anti-Feynman propagator, that is
\begin{equation}\label{barT}
\bar{T}^{{\boldsymbol{\zeta}},\mathrm{unren}}_n:=\exp\sum_{1\leq i<j\leq n} D^{AF}_{ij}\ ,\quad
D^{AF}_{ij}:=\langle \hbar \Delta_{AF}^{\zeta_{ij}},\tfrac{\delta^2}{\delta\varphi_i\delta\varphi_j}\rangle\ ,
\end{equation}
where
\begin{align}
\Delta_{AF}^\zeta(x)&:=\Theta(x^0)\Delta_+^\zeta(-x)+\Theta(-x^0)\Delta_+^\zeta(x)\notag\\
&=\mathpzc{w}^\zeta(x^2+i\epsilon)=\overline{\mathpzc{w}^{\bar\zeta}(x^2-i\epsilon)}=
\overline{\Delta_{F}^{\bar\zeta}(x)}\label{AFprop}\,.
\end{align}
From the anti-causal factorization of both, $\bar{T}_n^{\boldsymbol{\zeta}}$ and $\hat{T}_n^{\boldsymbol{\zeta}}$, we
conclude that in the inductive Epstein-Glaser construction unitarity \eqref{eq:unitarity}
can possibly be violated only in the extension to the thin diagonal. However, since both sides of
\eqref{eq:unitarity} are {\it uniquely} extended by homogeneity, also the extensions must agree.
\item{\bf Field Equation}. Let $G=\int dx\,\varphi(x)\,h(x)$ (where $h\in\mathcal{D}(\MM)$).
By the field equation we mean the relation
\begin{align}\label{FE}
&T^{\,{\boldsymbol{\zeta}}}_n(G,F_1,\dots,F_{n-1})=G\otimes T^{\,{\boldsymbol{\zeta}}}_{n-1}(F_1,\dots,F_{n-1})\notag\\
&\quad\quad\quad+\sum_{i=1}^{n-1}\int dx\,h(x)\int dy\, \Delta_F^{\zeta_{0i}}(x-y)\,T^{\,{\boldsymbol{\zeta}}}_{n-1}
\Bigl(F_1,\dots,\tfrac{\delta F_i}{\delta\varphi(y)},\dots,F_{n-1}\Bigr)\ .
\end{align}
The validity of the Field Equation is most easily shown by using the uniqueness of $T^{{\boldsymbol{\zeta}}}_n$.
The right hand side of \eqref{FE} gives an alternative inductive definition of $T^{{\boldsymbol{\zeta}}}_n$ on the restricted domain
$\{G\otimes F_1\otimes\dots\otimes F_{n-1}\,|\,G=\int\varphi\,h\ ,\,\,F_i\in \Floc\}$, which fulfills all the axioms.
Therefore, the alternative definition \eqref{FE} of
$T^{{\boldsymbol{\zeta}}}_n(G\otimes F_1\otimes\dots\otimes F_{n-1})$ must agree with the original one.
\item{\bf Meromorphicity}. The maps $T^{\,{\boldsymbol{\zeta}}}_n$ are meromorphic in ${\boldsymbol{\zeta}}$.
\end{itemize}
\subsection{Extension of homogeneously scaling distributions
with non-integer degree}\label{extension-hom}
In this section we derive a general formula for differential renormalization of homogeneous
distributions with a non-integer degree. In order to include the case of nonzero masses, we consider smooth distribution-valued functions of $m^2$,
\[t:\RR\to \mathcal{D}^\prime(\RR^l\setminus\{0\})\]
(i.e. for every test function $f\in\mathcal{D}(\RR^l\setminus\{0\})$, the function $m^2\mapsto\langle t(m^2),f\rangle$ is smooth).
We assume that $t$ is homogeneous under simultaneous scaling of $m^2$ and the underlying coordinates $x_r,r=1,\dots,l$,
i.e.
\begin{equation}\label{scaling}
(\sum_{r=1}^l Q_{r}\,\partial_r -m\partial_m)t=-\kappa\,t \,
\end{equation}
with $\kappa\not\in l+\NN_0$,
where $Q_r$ is the operator of multiplication with the function $x\mapsto x_r$.
Due to smoothness in $m^2$, the scaling degree of $t(m^2)$ is equal to $\mathrm{Re}\,\kappa$.
It is convenient to introduce a uniform notation and write $Q_0$ for the operator $\partial_m$. Furthermore, we denote $P_r:= \partial_r$, $r=1,\dots,l$ and $P_0$ is the multiplication by $-m$. Using this notation we define a sequence of operators $E_n$ by
\begin{align*}
E_0&:=1\\
E_{n+1}&:=\sum_{r=0}^l P_r E_n Q_r\,.
\end{align*}
We can think of $E_n$ as generalized Euler operators, hence the notation.
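Spelled out for the first two members (a direct consequence of the definition), one finds

```latex
E_1=\sum_{r=1}^{l}\partial_r\circ Q_r-m\,\partial_m\ ,
\qquad
E_2=\sum_{r,s=0}^{l}P_s\,P_r\,Q_r\,Q_s\ ,
```

and on a distribution satisfying \eqref{scaling} one obtains $E_1t=(l-\kappa)\,t$, which is the case $n=1$ of Lemma \ref{lemma:scaling}.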
\begin{lemma}\label{lemma:scaling}
The scaling relation
\eqref{scaling} implies the formula
\begin{equation}\label{diffrenhom}
t=\frac{1}{\prod_{j=0}^{n-1}(l+j-\kappa)} E_n t\,,\ \forall n\in\NN\,.
\end{equation}
\end{lemma}
\begin{proof}We prove this by induction on $n$. The case $n=1$ is the scaling relation
\eqref{scaling} written in the form
\begin{equation}\label{scaling1}
\sum_{r=0}^lP_r\Bigl(Q_r\, t\Bigr)=(l-\kappa)\,t\ .
\end{equation}
Assuming \eqref{diffrenhom} to hold true for $n\leq k$, we take into account that
$Q_r t$ is homogeneous with degree $(\kappa-1)$
(i.e.~it satisfies \eqref{scaling1} correspondingly modified) and obtain
\begin{multline}
E_{k+1}t=\sum_{r_1...r_{k+1}}P_{r_{k+1}}\dots P_{r_{1}}\Bigl(Q_{r_1}\dots Q_{r_{k+1}}\,t\Bigr)=\\
=\sum_{r_1...}P_{r_{k+1}}\dots P_{r_2}\Bigl(\sum_rP_{r}\circ Q_r\circ
\Bigl(Q_{r_2}\dots Q_{r_{k+1}}\,t\Bigr)\Bigr)\notag=\\
=(l-(\kappa-k))\,\sum_{r_1...}P_{r_{k+1}}\dots P_{r_2}
\Bigl(Q_{r_2}\dots Q_{r_{k+1}}\,t\Bigr)=\notag\\
=(l+k-\kappa)\,\Bigl(\prod_{j=0}^{k-1}(l+j-\kappa)\Bigr)\,t\ ,
\end{multline}
which is \eqref{diffrenhom} for $n=k+1$.
\end{proof}
We will now use this lemma for defining extensions of distributions. Obviously, multiplication by $x_r$ reduces the scaling degree by 1. Since $t$ is a smooth function of mass, fulfilling \eqref{scaling}, the scaling degree is also reduced by 1 if we apply $\partial_m$.
Let $\omega\in\ZZ$ be the minimal integer fulfilling
\begin{equation}\label{omega}
\omega>\Re(\kappa)-l-1\ .
\end{equation}
Now, choosing $n=\omega+1$ in \eqref{diffrenhom} we have
\begin{equation}
\mathrm{sd}\bigl(Q_{r_1}\dots Q_{r_{\omega+1}}\,t\bigr)=
\Re(\kappa)-(\omega +1)< l\ .
\end{equation}
Hence, $Q_{r_1}\dots Q_{r_{\omega+1}}\,t$ can be uniquely extended (by the direct extension,
see Thm.~\ref{thm:Extension-sd}) to a homogeneous distribution
\[
\overline{Q_{r_1}\dots Q_{r_{\omega+1}}\,t}\in\mathcal{D}'(\RR^l)\,.
\]
Using differential renormalization, the unique homogeneous extension $\dot t\in\mathcal{D}'(\RR^l)$ of $t$ is given by
\begin{equation}\label{diffrenhom0}
\dot t=\frac{1}{\prod_{j=0}^\omega (l+j-\kappa)}\,
\sum_{r_1...r_{\omega+1}}P_{r_{\omega+1}}\dots P_{r_1}
\Bigl(\overline{Q_{r_1}\dots Q_{r_{\omega+1}}\,t}\Bigr)\ .
\end{equation}
It is now clear why the assumption $\kappa\not\in l+\NN_0$ is needed. The massless case is easily obtained by setting $Q_0$ and $P_0$ to 0.
\begin{rem}[Almost homogeneous scaling distributions]\footnote{This remark
is not relevant for our construction, but it may be useful in other instances.}
For {\it almost homogeneous scaling distributions $t\in\mathcal{D}^\prime(\RR^l\setminus\{0\})$
with $\omega=0$}, the scaling relation
(which is now \eqref{almosthomscal}) can also be used to derive a formula for differential renormalization.
More precisely we assume that $t$ fulfills \eqref{almosthomscal} with degree $\kappa=l+z$,
where $0\not= z\in\CC$ and $\Re(z)<1$. Now we write \eqref{almosthomscal} in the form
\begin{align}
0&=(P\cdot Q+z)^{N+1}\,t\notag\\
&=z^{N+1}\,t+\sum_{k=1}^{N+1}\,{N+1\choose k} \,z^{N+1-k}
\sum_sP_s\Bigl(Q_s\circ(P\cdot Q)^{k-1}\,t\Bigr)\ ,
\end{align}
where $P\cdot Q:=\sum_{r=1}^lP_r\circ Q_r$. Since
$\mathrm{sd}\bigl(Q_s\circ(P \cdot Q)^{k-1}\,t\bigr)=\Re(\kappa)-1<l$
the unique, almost homogeneous extension can be written as
\begin{equation}\label{almosthomscal1}
\dot t=-\sum_{k=1}^{N+1}\,{N+1\choose k}\,\frac{1}{z^k}\,
\sum_{s=1}^lP_s\Bigl(\overline{Q_s\,(P \cdot Q)^{k-1}\,t}\Bigr)
\in \mathcal{D}^\prime(\RR^l)\ .
\end{equation}
\end{rem}
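As a consistency check (our own observation): for $N=0$ the distribution scales exactly homogeneously with degree $\kappa=l+z$, and \eqref{almosthomscal1} reduces to

```latex
\dot t=-\frac{1}{z}\,\sum_{s=1}^{l}P_s\Bigl(\overline{Q_s\,t}\Bigr)\ ,
```

which agrees with the massless case of \eqref{diffrenhom0} for $\omega=0$, since there $l-\kappa=-z$.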
\subsection{Minimal subtraction and the Forest Formula}\label{sec:MSandFF}
We start with a $1$-dimensional toy example, which is taken from
\cite[sect.~III.3.2]{Hoer03}, but treated here in the somewhat different light of extension of
(homogeneous) distributions from $\mathcal{D}'(\RR\setminus\{0\})$ to $\mathcal{D}'(\RR)$,
cf.~\cite{NST11} and \cite{GraciaBondiaLazzarini2003}.
{\small\begin{example}[Toy model]\label{thm:toy} The distribution
\begin{equation}
t^\zeta(x)=\Theta(x)\,x^{-k+\zeta}\in\mathcal{D}'(\RR\setminus\{0\})\ ,\quad
k\in\NN\,,\quad\zeta\in\CC\,,\,\,|\zeta|<1\ ,
\end{equation}
($\Theta(x)$ denotes the Heaviside function) scales homogeneously with degree $\kappa=k-\zeta$.
We look for almost homogeneous extensions to $\mathcal{D}'(\RR)$, in particular for $\zeta =0$.
For $\zeta\not= 0$ the unique homogeneous extension $\dot t^\zeta\in\mathcal{D}'(\RR)$ can
be obtained by our formula \eqref{diffrenhom0}: the definition \eqref{omega}
gives $\omega=k-1$ and with that we obtain
\begin{equation}\label{diffren-toy}
\dot t^\zeta(x)=\frac{1}{\zeta(\zeta-1)...(\zeta-k+1)}\,\frac{d^k}{dx^k}(\Theta(x)\,x^{\zeta})=:
\sum_{l=-1}^\infty \dot t_l(x)\,\zeta^l\ .
\end{equation}
This is an analytic regularization of $t^0=\Theta(x)\,x^{-k}$ in the sense of definition
\ref{df:regularisation}, since one verifies straightforwardly that
$$
\lim_{\zeta\to 0}\langle \dot t^\zeta , g\rangle=\int dx\,\Theta(x)\,x^{-k}\,g(x)\ ,\quad\quad\forall
g\in \mathcal{D}_{k-1}(\RR)\ ,
$$
by using that such a function $g$ is of the form $g(x)=x^k\,\tilde g(x)$ with $\tilde g\in \mathcal{D}(\RR)$.
For $\zeta=0$ the extension is non-unique: almost homogeneous scaling is
compatible with the addition of a term $C\,\delta^{(k-1)}(x)$, where $C\in\CC$ is arbitrary.
However, the $MS$-prescription \eqref{def:MS} yields a unique result: $t^{\rm MS}(x)=\dot t_0(x)$
(the coefficient $l=0$ in the expansion \eqref{diffren-toy}). Using
\begin{equation}
\Theta(x)\,x^{\zeta}=\Theta(x)+\zeta\,\Theta(x)\,\ln\,x+{\cal O}(\zeta^2)
\end{equation}
and
\begin{equation}
\frac{1}{(\zeta-1)...(\zeta-k+1)}=\frac{(-1)^{k-1}}{(k-1)!}\,
\Bigl(1+\zeta\,\sum_{j=1}^{k-1}1/j+{\cal O}(\zeta^2)\Bigr)
\end{equation}
we obtain
\begin{equation}
t^{\rm MS}(x)=\frac{(-1)^{k-1}}{(k-1)!}\,
\Bigl(\frac{d^k}{dx^k}(\Theta(x)\,\ln x)+(\sum_{j=1}^{k-1}1/j)
\,\,\delta^{(k-1)}(x)\Bigr)\ .
\end{equation}
Note that $t^{\rm MS}$ scales almost homogeneously with degree $k$ and power $1$.
\end{example}}
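As a quick consistency check of the toy model, consider the lowest case $k=1$: the finite renormalization term in $t^{\rm MS}$ is absent and $t^{\rm MS}(x)=\tfrac{d}{dx}(\Theta(x)\,\ln x)$. A straightforward distributional computation gives $x\,t^{\rm MS}(x)=\Theta(x)$, hence
\begin{equation}
\Bigl(x\,\frac{d}{dx}+1\Bigr)\,t^{\rm MS}(x)=\frac{d}{dx}\bigl(x\,t^{\rm MS}(x)\bigr)=\delta(x)\ ,
\end{equation}
and a second application of $(x\,\tfrac{d}{dx}+1)$ gives zero, due to $(x\,\tfrac{d}{dx}+1)\,\delta=0$. This exhibits explicitly the almost homogeneous scaling of $t^{\rm MS}$ with degree $k=1$ and power $1$; the $\delta$-term on the right-hand side is the scaling anomaly.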
We now apply the formula \eqref{diffrenhom0} to the
distributions ${t_{H,\Gamma}^{{\boldsymbol{\zeta}},\beta}}\in\mathcal{D}'(\RR^{d(n-1)}\setminus\{ 0\})$,
arising as numerical coefficients in the expansions \eqref{causWick} of $T_{H,\Gamma}^{{\boldsymbol{\zeta}}}$. For such objects $\kappa\equiv\kappa^{{\boldsymbol{\zeta}},\beta}$ is given by equation \eqref{kappa:zeta} and the domain for ${\boldsymbol{\zeta}}\in\CC^N$
is restricted by $\Re(\kappa^{{\boldsymbol{\zeta}},\beta})<\kappa^{{\bf 0},\beta}+1$ and
$\kappa^{{\boldsymbol{\zeta}},\beta}\not\in d(n-1)+\NN_0$ to the region
\begin{equation}\label{Omega-graph}
\Omega_{\Gamma}^{\beta}:=\bigl\{{\boldsymbol{\zeta}}=(\zeta_{ij})_{1\leq i<j\leq n}\,\vert\,2\,{\bf l}{\boldsymbol{\zeta}}\not\in
\{0,1,...,\kappa^{{\bf 0},\beta}-d(n-1)\}\,\, \wedge\,\,\Re\, 2{\bf l}{\boldsymbol{\zeta}}> -1\bigr\}\,.
\end{equation}
The minimal $\omega\in\ZZ$ satisfying \eqref{omega} for all ${\boldsymbol{\zeta}}$
fulfilling these restrictions is
\begin{equation}\label{sing-order}
\omega=\kappa^{{\bf 0},\beta}-d(n-1)\ .
\end{equation}
Since for $\omega <0$ the direct extension (Thm.~\ref{thm:Extension-sd})
is applicable, we only study the case $\omega\geq 0$. The unique homogeneous
extension \eqref{diffrenhom0} can be written as
\begin{equation}\label{diffrenhom1}
\dot{t}_{H,\Gamma}^{{\boldsymbol{\zeta}},\beta}=\frac{1}{\prod_{k=0}^\omega (2{\bf l}{\boldsymbol{\zeta}}-k)}\,
\sum_{r_1...r_{\omega+1}}P_{r_{\omega+1}}...P_{r_{1}}
\Bigl(\overline{Q_{r_1}\dots Q_{r_{\omega+1}}\,{t_{H,\Gamma}^{{\boldsymbol{\zeta}},\beta}}}\Bigr)\ .
\end{equation}
We explicitly see that $\dot{t}_{H,\Gamma}^{{\boldsymbol{\zeta}},\beta}$ has possible poles at $2\,{\bf l}{\boldsymbol{\zeta}}\in\{0,1,...,\omega\}$. Before we can perform the minimal subtraction, we have to pass from $T_{H,n}^{\boldsymbol{\zeta}}$ to $T_{n}^{\boldsymbol{\zeta}}$. On the level of graphs, this corresponds to multiplying extended regularized expressions constructed above with powers of $C^\zeta$. Since $C^\zeta$ is regular in $x$, these powers and multiplications
are well defined and no extension is needed. Let us fix a graph $\Gamma$ and consider subgraphs $\gamma,\gamma^c$ with the same vertex set such that the edges of $\Gamma$ are either edges of $\gamma$ or of $\gamma^c$, i.e.~$E(\gamma)\subset E(\Gamma)$ and $E(\gamma^c)=E(\Gamma)\setminus E(\gamma)$. According to \eqref{TH:TF}, $T_{\Gamma}^{{\boldsymbol{\zeta}}}$ can be constructed as
\[
T_\Gamma^{{\boldsymbol{\zeta}}}=\sum\limits_{\gamma}T_{H,\gamma}^{{\boldsymbol{\zeta}}_\gamma}\circ T_{C,\gamma^c}^{{\boldsymbol{\zeta}}_{\gamma^c}}\,,
\]
where ${\boldsymbol{\zeta}}=({\boldsymbol{\zeta}}_{\gamma}, {\boldsymbol{\zeta}}_{\gamma^c})$ and
$T_{C,\gamma^c}^{{\boldsymbol{\zeta}}_{\gamma^c}}:=\prod_{i,j\in V(\gamma)}(C^{\zeta_{ij}})^{l_{ij}^{\gamma^c}}$
($l_{ij}^{\gamma^c}$ denotes the number of $(ij)$-lines in $\gamma^c$).
Using the field expansion \eqref{causWick} we can write the corresponding formula also on the level of numerical distributions:
\begin{equation}\label{zeta:expansion}
\dot{t}_\Gamma^{{\boldsymbol{\zeta}},\beta}=\sum\limits_{\gamma,\beta_1\le\beta}\dot{t}_{H,\gamma}^{{\boldsymbol{\zeta}}_\gamma,\beta_1}\,t_{C,\gamma^c}^{{\boldsymbol{\zeta}}_{\gamma^c},\beta-\beta_1}\ .
\end{equation}
Note that $\beta_1$, contrary to $\beta$, is not restricted by the condition that it involves only the relative coordinates at each vertex.
To perform the minimal subtraction scheme we set all $\zeta_{ij}$ to be equal to a fixed value $\zeta$ and
determine the coefficient of $\zeta^0$ in the Laurent series
\eqref{zeta:expansion}. In the massless case the minimal subtraction scheme simplifies significantly, since we do not have to separate the regularized time-ordered products into $T^{{\boldsymbol{\zeta}}}_{H,\Gamma}$ and the powers of $C$.
We illustrate the massless case for a graph $\Gamma$ with no subdivergences.
Then the non-extended distribution $t_\Gamma^{\zeta,\beta}\equiv t^\zeta$ is analytic in $\zeta$,
\begin{equation}\label{t-analyt}
t^\zeta=t_0+\zeta\,t_1+\mathcal{O}(\zeta^2)\ .
\end{equation}
Thus the
extension $\dot t^\zeta$ \eqref{diffrenhom1}
has a pole of order $1$ at $\zeta =0$, i.e. $\dot t^\zeta=\sum_{j=-1}^\infty \dot t_j\,
\zeta^j\ $; the coefficient $\dot t_0$ is the $MS$-solution $t^{\rm MS}$.
By expanding also the prefactor $\tfrac{1}{\prod_k(2c\zeta-k)}$ with $c=\sum_{i<j}l_{ij}=|E(\Gamma)|$, we obtain
\begin{equation}\label{diffrenhomMS}
t^{\rm MS}=\frac{(-1)^\omega}{\omega!}
\sum_{r_1...}\partial_{r_1}...\partial_{r_{\omega+1}}
\Bigl(\frac{1}{2c}\,\overline{Q_{r_1} \dots Q_{r_{\omega+1}}\,t_1}
+(\sum_{k=1}^\omega\frac{1}{k})\,\overline{Q_{r_1} \dots Q_{r_{\omega+1}}\,t_0}\Bigr)\ .
\end{equation}
The second term on the right hand side is a finite renormalization term, i.e.~it is of the form $\sum_{|a|=\omega} C_a\,
\partial^a\delta$ with $C_a\in\CC$, since it vanishes for
$x\not= 0$ due to $\partial_{r_{\omega+1}}\circ Q_{r_{\omega+1}}(Q_{r_1}\dots Q_{r_\omega}\,t_0)=0$
(cf.~\eqref{scaling1}).
The first term in \eqref{diffrenhomMS} contains generically logarithmic terms
which come from the expansion of a product of massless Feynman propagators:
\begin{equation}\label{expansion-Feyprop}
\prod_j\frac{\mu^{2\zeta}}{(-(x_j^2-i\epsilon))^{1-\zeta}}=
\frac{1+\zeta\,\sum_j\ln(-\mu^2(x_j^2-i\epsilon))+\mathcal{O}(\zeta^2)}{\prod_j(-(x_j^2-i\epsilon))}\ .
\end{equation}
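A short way to obtain \eqref{expansion-Feyprop} is to write the product as an exponential: abbreviating $X_j:=-(x_j^2-i\epsilon)$,
\begin{equation}
\prod_j\frac{\mu^{2\zeta}}{X_j^{\>1-\zeta}}=\frac{\prod_j(\mu^2\,X_j)^{\zeta}}{\prod_j X_j}
=\frac{\exp\bigl(\zeta\,\sum_j\ln(\mu^2\,X_j)\bigr)}{\prod_j X_j}\ ,
\end{equation}
and to expand the exponential to first order in $\zeta$.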
The dimensional regularization which we introduce does not render
${T^{{\boldsymbol{\zeta}},\mathrm{unren}}_n}$ \eqref{Sreg-unren} (i.e.~the Feynman rules)
finite: the formal expressions for a graph $\Gamma$ and a multi-index $\beta$ characterizing the derivatives are a priori meaningful only for values of the regularization parameters ${\boldsymbol{\zeta}}$ with $\mathrm{Re}\,\zeta_{ij}$ sufficiently large.\footnote{\label{dim-reg-usual} This corresponds
to the fact that in the dimensional regularization in Euclidean momentum space the ``Feynman
integrals in $d-2\zeta$ dimensions'' are only defined for $\mathrm{Re}\,\zeta$ sufficiently large and have to be extended by analytic continuation.}
The analytic extension to $\Omega_{\Gamma}^{\beta}$ can be constructed by the use of the homogeneous scaling with non-integer degree in terms of
formula \eqref{diffrenhom1}.
In the presence of divergent subdiagrams, one first has to perform the analytic extension for the subdiagrams.
This amounts to solving the EG induction scheme for the deformed theory.
The result is unique. Then, the limit ${\boldsymbol{\zeta}}\to {\bf 0}$ is performed by
applying the EG Forest Formula \eqref{EGforest}. A disadvantage is that in intermediate steps of the construction of the analytic extension partitions of unity are used (see Example \ref{thm:doubletriangle}) which make the method less explicit.
An alternative is the so-called splitting method originally used by Epstein and Glaser, which avoids partitions of unity at the price of a more complicated combinatorics.
\begin{rem}[Quick computation in the massive case]\label{m-quick}
If $t^\zeta_m:=t^{\zeta,\beta}_{\Gamma}$ is a product of differentiated regularized Feynman propagators
(i.e.~all preceding inductive steps of the EG-construction of $t^\zeta_m$ are done by
the direct extension), the unique $\dot t^\zeta_m$ and the unique $t^\MS_m$ can be computed by the
following procedure, which is usually much faster than the method explained above. In this case
it suffices to work with one $\zeta$.
\begin{itemize}
\item[(1)] Insert the expansion
\begin{equation}\label{reg-Feyn-prop}
\Delta^\zeta_{F,\,m}(x)=\sum_{l=0}^\infty h_l^\zeta\,\mu^{2\zeta}\,m^{2l}\,(-(x^2-i\epsilon))^{l+1-\tfrac{d}2+\zeta}
+\sum_{l=0}^\infty c_l^\zeta\,\mu^{2\zeta}\, m^{d-2+2l-2\zeta}\,(-x^2)^l
\end{equation}
(with coefficients $h_l^\zeta,\,c_l^\zeta$ which do not depend on $(x,m)$, see \eqref{Creg}-\eqref{Hreg})
into $t^\zeta_m=\prod\partial^a\Delta^\zeta_{F,\,m}$.
\item[(2)] Let $\zeta\in\Omega^\beta_\Gamma$. Write $t^\zeta_m$ as the sum of the terms with scaling degree $> d(|V(\Gamma)|-1)-1$
plus a remainder $r^\zeta_m$:
\begin{equation}
t^\zeta_m(x)=\sum_{p=0}^P\sum_c m^{p-2c\zeta}\,\tau_{p,c}^\zeta(x)+r^\zeta_m(x)\ ,
\end{equation}
where $c$ is the total number of $c$-lines (i.e.~lines whose propagator is a $c_l^\zeta$-term).
\item[(3)] Apply the direct extension to $r^\zeta_m$. Since $\tau_{p,c}^\zeta(x)$ is homogeneous in $x$
with a non-integer degree, it can be extended by the differential renormalization formula \eqref{diffrenhom1}
with $Q_0\equiv 0\equiv P_0$. Summing up we obtain
\begin{equation}
\dot t^\zeta_m(x)=\sum_{p=0}^P\sum_c m^{p-2c\zeta}\,\dot\tau_{p,c}^\zeta(x)
+\overline{r^\zeta_m}(x)\ .
\end{equation}
Obviously, the distribution $\dot t^\zeta_m$ constructed in this way is the unique solution of our axioms.
\item[(4)] Minimal subtraction acts only on the expressions $(m^{-2c\zeta}\,\dot\tau_{p,c}^\zeta)$, i.e.
\begin{equation}
t^\MS_m(x)=\sum_{p=0}^P m^p \sum_c \,\lim_{\zeta\to 0}\rp\bigl(m^{-2c\zeta}\,\dot\tau_{p,c}^\zeta(x)\bigr)
+\overline{r_m}(x)\ ,\quad \overline{r_m}:=\lim_{\zeta\to 0}\overline{r^\zeta_m}\ ,
\end{equation}
because $\overline{r^\zeta_m}$ is analytic in $\zeta$. The latter can be seen as follows: for $x\not= 0$
we conclude from the analyticity of $\Delta^\zeta_{F,\,m}$ that $t^\zeta_m$ and the sums
$\sum_c m^{-2c\zeta}\,\tau_{p,c}^\zeta$ are analytic, hence, $r^\zeta_m$ is analytic and this property is maintained
in the direct extension.
\end{itemize}
\end{rem}
\subsection{Examples}\label{examples}
In this section we use the shorthand notations
\begin{equation}
x_{kl}:=x_k-x_l\ ,\quad X:=-(x^2-i\epsilon)\ ,\quad X_{kl}:=-(x_{kl}^2-i\epsilon)\ ,
\quad X-Y:=-((x-y)^2-i\epsilon)\ .
\end{equation}
To simplify the formulas we work with a slight modification of the
regularized Feynman propagator: writing the prefactor $c(d-2\zeta)$
(used in \eqref{Creg}-\eqref{Hreg}) as
$$
c(d-2\zeta)=\tfrac{(2\pi)^{\zeta-\frac{d}{2}}}2\,\Gamma(\tfrac{d}{2}-1-\zeta)\Gamma(2-\tfrac{d}{2}+\zeta)
$$
(by means of Euler's reflection formula),
we replace $\pi^\zeta\,\Gamma(\tfrac{d}2-1-\zeta)$ by $\Gamma(\tfrac{d}2-1)$. This amounts to a finite renormalization of
$t^\MS$, which is analogous to the step from the $MS$- to the
$\overline{MS}$-prescription in conventional dimensional regularization.
\begin{example}[Second order of a massless model in $d=4$ dimensions]\footnote{In
\cite{GraciaBondiaLazzarini2003} this example is treated by essentially the same method under the
name ``analytical regularization''.}
The $k$-th power of the dimensionally regularized massless Feynman propagator,
\begin{equation}\label{DF4}
t^\zeta(x):=(D_F^\zeta(x))^k\ ,\quad k\in\{2,3,4,...\}\,,\quad\text{with}\quad
D_F^\zeta(x)=\frac{\mu^{2\zeta}}{4\pi^2\,X^{1-\zeta}}\ ,
\end{equation}
exists in $\mathcal{D}'(\RR^4)$ (by the direct
extension, Thm.~\ref{thm:Extension-sd}) for
\begin{equation}
\mathrm{sd}(t^\zeta)<4\quad\text{that is, for}\quad \Re(\zeta)>1-\tfrac{2}{k}\ .
\end{equation}
Analytic continuation to a function meromorphic in
$\Omega:=\{\zeta\in\CC\,|\,\Re(\zeta)>-\tfrac{1}{k}\}$
can be done by differential renormalization. Instead of using \eqref{diffrenhom1}
we proceed in the
following way:\footnote{For $k=2$ (fish diagram)
the two procedures give essentially the same formula, due to
$\square X^\alpha=-2\alpha\,\partial_\mu(x^\mu\,
X^{\alpha-1})$; but for higher powers of $D_F^\zeta$, the method \eqref{DFk}
yields shorter formulas.}
on $\mathcal{D}(\RR^4\setminus\{0\})$ the distribution $X^\alpha$ is
well-defined for $\alpha\in\CC$ and one easily verifies
\begin{equation}
\square X^\alpha=-4\alpha(\alpha+1)\,X^{\alpha-1}
\end{equation}
(cf.~\cite{GraciaBondiaLazzarini2003}). Hence, in $\mathcal{D}'(\RR^4\setminus\{0\})$
$t^\zeta$ agrees with
\begin{equation}
\dot t^\zeta(x)=\frac{(-1)^{k-1}\,\mu^{2k\zeta}}
{(4\pi^2)^k\,4^{k-1}\,k\zeta\,(k\zeta-k+1)\,\prod_{j=1}^{k-2}(k\zeta-j)^2}
\,\square^{(k-1)} X^{-1+k\zeta}\ ,\label{DFk}
\end{equation}
for almost all $\zeta\in\CC$, where $\prod_{j=1}^{k-2}(k\zeta-j)^2:=1$ for $k=2$.
As distributions on $\mathcal{D}(\RR^4)$ we have $t^\zeta=\dot t^\zeta$ for
$\Re(\zeta)>1-\tfrac{2}{k}$ only; however, $\dot t^\zeta$ is well-defined as meromorphic function
on $\Omega$ by direct extension of $X^{-1+k\zeta}$,
since $\mathrm{sd}(X^{-1+k\zeta})=2(1-\Re(k\zeta))<4=d$ for $\zeta\in\Omega$.
Hence, $\dot t^\zeta$ is the unique analytic continuation of $t^\zeta$.
The $MS$-solution can be computed as explained in \eqref{t-analyt}-\eqref{expansion-Feyprop}:
\begin{align}\label{DFk-MS}
t^{\rm MS}(x)&=\frac{(-1)^{k-1}}{(4\pi^2)^k\,4^{k-1}\,(1-k)\,\prod_{j=1}^{k-2}j^2}\,\square^{(k-1)}
\Bigl(\frac{\ln(\mu^2\,X)+c}{X}\Bigr)\notag\\
&=\frac{(-1)^{k-1}}{(4\pi^2)^k\,4^{k-1}\,(1-k)\,\prod_{j=1}^{k-2}j^2}\,
\Bigl(\square^{(k-1)}\Bigl(\frac{\ln(\mu^2\,X)}
{X}\Bigr)+c\,i4\pi^2\,\square^{(k-2)}\delta(x)\Bigr)\ ,
\end{align}
where $c:=(k-1)^{-1}+2\sum_{j=1}^{k-2}j^{-1}$ and $\sum_{j=1}^{k-2}j^{-1}:=0$ for $k=2$.
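Specializing \eqref{DFk-MS} to the fish diagram, i.e.~to $k=2$, the constants simplify to $c=1$ and $\prod_{j=1}^{0}j^2=1$, and the formula reduces to
\begin{equation}
t^{\rm MS}(x)=\frac{1}{2^6\,\pi^4}\,
\Bigl(\square\Bigl(\frac{\ln(\mu^2\,X)}{X}\Bigr)+i\,4\pi^2\,\delta(x)\Bigr)\ .
\end{equation}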
\end{example}
\begin{example}[Massive setting sun diagram in $d=4$ dimensions]\label{exp:settingsun}
We use the quick computation of Remark \ref{m-quick}: we write $t^\zeta_m:=(\Delta_{F,m}^\zeta)^3$ as
\begin{equation}\label{sm-settingsun}
t^\zeta_m(x)=\tau_{0,0}^\zeta(x)+(\tfrac{m}{\mu})^2\Bigl(\tau_{2,0}^\zeta(x)+
(\tfrac{m}{\mu})^{-2\zeta}\,\tau_{2,1}^\zeta(x)\Bigr)+r^\zeta_m(x)
\end{equation}
with
\begin{align}
\tau_{0,0}^\zeta(x)=&(h_0)^3\,\mu^{6\zeta}\,X^{-3+3\zeta}\ ,\quad
\tau_{2,0}^\zeta(x)=3\,h_1^\zeta\,(h_0)^2\,\mu^{2+6\zeta}\,X^{-2+3\zeta}\ ,\notag\\
\tau_{2,1}^\zeta(x)=&3\,c_0^\zeta\,(h_0)^2\,\mu^{2+4\zeta}\,X^{-2+2\zeta}\ ,
\end{align}
where
\begin{align}\label{coefficients}
h_0 &=\frac{1}{4\,\pi^2}\,,\quad
h_1^\zeta=\frac{\Gamma(\zeta)}{16\,\pi^2\,\Gamma(1+\zeta)}
=\frac{1}{16\,\pi^2}\,\Bigl(\frac{1}{\zeta}+\mathcal{O}(\zeta^0)\Bigr)\ ,\notag\\
c_0^\zeta&=-\frac{4^\zeta\,\Gamma(\zeta)}{16\,\pi^2\,\Gamma(2-\zeta)}
=\frac{-1}{16\,\pi^2}\,\Bigl(\frac{1}{\zeta}+\mathcal{O}(\zeta^0)\Bigr)\ .
\end{align}
We point out that the singularities of $h_1^\zeta,\, c_0^\zeta$ for $\zeta\to 0$
cancel out in the combination $t^\zeta_{2}:=(\tau_{2,0}^\zeta+(\tfrac{m}{\mu})^{-2\zeta}\,\tau_{2,1}^\zeta)$:
\begin{equation}\label{cancellation}
\lim_{\zeta\to 0}t^\zeta_{2}(x)=3\,(h_0)^2\,\mu^2\,X^{-2}\,
\lim_{\zeta\to 0}(h_1^\zeta\,(\mu^2X)^{\zeta}+c_0^\zeta\,(\tfrac{m}{\mu})^{-2\zeta})=
\tfrac{3}{2^8\,\pi^6}\,\mu^2\,X^{-2}
\,\Bigl(\ln(\tfrac{m^2\,X}{4})-2\,\Gamma '(1)-1\Bigr)
\end{equation}
in $\mathcal{D}'(\RR^4\setminus\{0\})$, as it must be since $(\Delta_{F,\,m}^\zeta)^3$ is analytic in $\zeta$.
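In more detail, the cancellation can be traced as follows: due to $\Gamma(\zeta)=\Gamma(1+\zeta)/\zeta$ we have exactly $h_1^\zeta=\tfrac{1}{16\,\pi^2\,\zeta}$, and expanding the Gamma functions in $c_0^\zeta$ yields
\begin{equation}
c_0^\zeta=\frac{-1}{16\,\pi^2}\,\Bigl(\frac{1}{\zeta}+\ln 4+2\,\Gamma'(1)+1+\mathcal{O}(\zeta)\Bigr)\ .
\end{equation}
With that,
\begin{equation}
h_1^\zeta\,(\mu^2X)^{\zeta}+c_0^\zeta\,(\tfrac{m}{\mu})^{-2\zeta}
=\frac{1}{16\,\pi^2}\,\Bigl(\ln(\mu^2X)+2\ln\tfrac{m}{\mu}-\ln 4-2\,\Gamma'(1)-1\Bigr)+\mathcal{O}(\zeta)\ ,
\end{equation}
which, multiplied by $3\,(h_0)^2\,\mu^2\,X^{-2}=\tfrac{3\,\mu^2}{2^4\,\pi^4}\,X^{-2}$, reproduces the right-hand side of \eqref{cancellation}.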
We have to compute the coefficient
$t^\MS=\dot t_0$ of $\dot t^\zeta=\sum_{k=-l}^\infty \dot t_k\,\zeta^k$ for $t^\zeta=\tau^\zeta_{0,0}\ $,
$t^\zeta=\tau^\zeta_{2,0}$ and $t^\zeta=m^{-2\zeta}\,\tau^\zeta_{2,1}\ $.
For the first of these, the result is the particular case $k=3$ of \eqref{DFk-MS}:
\begin{equation}
\tau^\MS_{0,0}(x)=\frac{-1}{2^{11}\,\pi^6}\,
\Bigl(\square^2\Bigl(\frac{\ln(\mu^2\,X)}
{X}\Bigr)+\tfrac{5}2\,i\,4\,\pi^2\,\square\delta(x)\Bigr)\ .
\end{equation}
The extensions of $\tau^\zeta_{2,0}$ and $\tau^\zeta_{2,1}$ are obtained analogously to \eqref{DFk}:
\begin{equation}
\dot \tau^\zeta_{2,0}(x)=\frac{\mu^{2+6\zeta}\,3\,h_1^\zeta\,(h_0)^2}{4\,(1-3\zeta)\,3\zeta}\,
\square X^{-1+3\zeta}\ ,\quad
\dot \tau^\zeta_{2,1}(x)=\frac{\mu^{2+4\zeta}\,3\,c_0^\zeta\,(h_0)^2}{4\,(1-2\zeta)\,2\zeta}\,
\square X^{-1+2\zeta}\ .
\end{equation}
Note that the cancellation of the leading negative power of $\zeta$ in the sum
$t^\zeta_{2}$ \eqref{cancellation}
does not work for the pertinent extensions:
\begin{equation}
\dot t^\zeta_{2}:=(\dot\tau_{2,0}^\zeta+(\tfrac{m}{\mu})^{-2\zeta}\,\dot \tau_{2,1}^\zeta)
=\frac{3\,(h_0)^2}{4\,\zeta}\,\square \Bigl(X^{-1+2\zeta}\,\mu^{2+4\zeta}\Bigl[
\frac{h_1^\zeta\,(\mu^2 X)^\zeta}{(1-3\zeta)\,3}+\frac{c_0^\zeta\,(\tfrac{m}{\mu})^{-2\zeta}}{(1-2\zeta)\,2}\Bigr]\Bigr)\ ,
\end{equation}
since $[...]=\tfrac{1}{16\,\pi^2\,\,\zeta}\,(\tfrac{1}3-\tfrac{1}2)+\mathcal{O}(\zeta^0)$. Expanding
\begin{align}
\frac{3\,h_1^\zeta\,(h_0)^2}{4\,(1-3\zeta)\,3\zeta}&=\Bigl(\frac{r_{-2}}{\zeta^2}+
\frac{r_{-1}}{\zeta}+r_0+\mathcal{O}(\zeta)\Bigr)\ ,\notag\\
\frac{3\,c_0^\zeta\,(h_0)^2}{4\,(1-2\zeta)\,2\zeta}&=\Bigl(\frac{s_{-2}}{\zeta^2}+
\frac{s_{-1}}{\zeta}+s_0+\mathcal{O}(\zeta)\Bigr)\ ,
\end{align}
the $MS$-solutions read
\begin{align}
\tau^\MS_{2,0}(x):=\mu^{-2}\,\lim_{\zeta\to 0}\rp (\dot\tau_{2,0}^\zeta)(x)&=
\square\Bigl(\frac{r_0+r_{-1}\,3\,\ln(\mu^2X)+r_{-2}\,\tfrac{9}2\,
(\ln(\mu^2X))^2}{X}\Bigr)\ ,\notag\\
\tilde\tau_{2,1}^\MS(x):=\mu^{-2}\,\lim_{\zeta\to 0}\rp ((\tfrac{m}{\mu})^{-2\zeta}\,\dot\tau_{2,1}^\zeta)(x)&=
\square\Bigl(\frac{s_0+s_{-1}\,2\,\ln(\tfrac{\mu^3X}{m})+s_{-2}\,2\,
(\ln(\tfrac{\mu^3X}{m}))^2}{X}\Bigr)\ .
\end{align}
Joining together the various terms we end up with
\begin{equation}
t^\MS_m(x)=\tau_{0,0}^\MS(x)+m^2\Bigl(\tau_{2,0}^\MS(x)+\tilde\tau_{2,1}^\MS(x)\Bigr)
+\overline{r_m}(x)\ .
\end{equation}
Notice that $\overline{r_m}$ can be computed directly, without using any regularization: inserting
the $m^2$-expansion of the Feynman propagator,
\begin{equation}
\Delta_F(x)=\frac{h_0}{X}+m^2\sum_{l=0}^\infty\Bigl(q_l\,(m^2X)^l\,\ln(m^2X)+Q_l\,(m^2X)^l\Bigr)\ ,
\quad q_l,Q_l\in\RR\ ,
\end{equation}
into $(\Delta_F)^3$, $\overline{r_m}$ is the direct extension (Thm.~\ref{thm:Extension-sd}) of the sum of all
terms which are $\sim m^{4+2l}\,(\ln(m^2X))^k$ with $l\in\NN_0$ (and $k=0,1,2,3$).
\end{example}
\begin{example}[Massless triangle diagram in $d=6$ dimensions]\label{exp:triangle}
The dimensionally regularized massless Feynman propagator
in $d=6$ dimensions reads
\begin{equation}\label{Feynman6}
D_F^\zeta(x)=\frac{\mu^{2\zeta}}{4\pi^3\,X^{2-\zeta}}\ .
\end{equation}
The triangle diagram
\begin{equation}
t^\zeta(x,y)=D_F^\zeta(x)\,D_F^\zeta(y)\,D_F^\zeta(x-y)\,\in\mathcal{D}'(\RR^{12}\setminus\{0\})\ ,\quad
\zeta\in\CC\setminus\{0\}\ ,
\end{equation}
is homogeneous with degree $\kappa^\zeta=12-6\zeta$, i.e.~we have $\omega =0$
\eqref{omega}. With that \eqref{diffrenhom1} yields
\begin{equation}\label{triangle6}
\dot t^\zeta(x,y)=\frac{\mu^{6\zeta}}{4^3\pi^9\,6\,\zeta}\,
\partial_{(x,y)\mu} \Bigl(\overline{\frac{(x,y)^\mu}{X^{2-\zeta}\,
Y^{2-\zeta}\,(X-Y)^{2-\zeta}}}\Bigr)\ ,
\end{equation}
where we write $\partial_{(x,y)\mu} \bigl(\overline{(x,y)^\mu\,t(x,y)}\bigr)$ for $\partial_{x,\mu}
(\overline{x^\mu t(x,y)})+\partial_{y,\mu} (\overline{y^\mu t(x,y)})$ to simplify the notation.
For $\zeta\to 0$ the $MS$-prescription \eqref{diffrenhomMS} yields
\begin{equation}\label{triangle6MS}
t^{\rm MS}(x,y)=\frac{1}{3\cdot 2^7\,\pi^9}\,\partial_{(x,y)\mu}
\Bigl(\overline{(x,y)^\mu\,\frac{\ln(\mu^6\,X\,Y\,(X-Y))}
{X^2\,Y^2\,(X-Y)^2}}\Bigr)\ .
\end{equation}
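The numerical factor in \eqref{triangle6MS} can be understood directly: the diagram has $|E(\Gamma)|=3$ lines, so with all regularization parameters set equal to $\zeta$ the pole factor in \eqref{triangle6} is $6\zeta$, and expanding
\begin{equation}
\frac{\mu^{6\zeta}}{X^{2-\zeta}\,Y^{2-\zeta}\,(X-Y)^{2-\zeta}}
=\frac{1+\zeta\,\ln\bigl(\mu^6\,X\,Y\,(X-Y)\bigr)+\mathcal{O}(\zeta^2)}{X^2\,Y^2\,(X-Y)^2}
\end{equation}
shows that the coefficient of $\zeta^0$ in \eqref{triangle6} contains the logarithmic term of \eqref{triangle6MS} with prefactor $\tfrac{1}{4^3\,\pi^9\cdot 6}=\tfrac{1}{3\cdot 2^7\,\pi^9}$.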
\end{example}
\begin{example}[Massless triangle diagram with subdivergences in $d=4$ dimensions]\label{exp:triangle1}
We want to renormalize
\begin{equation}
t^{\boldsymbol{\zeta}}(x,y)=\Bigl((D_F^{\zeta_1}(x))^2\Bigr)_\mathrm{ren}\,\Bigl((D_F^{\zeta_2}(y))^2\Bigr)_\mathrm{ren}
D_F^{\zeta_3}(x-y)\,\in\mathcal{D}'(\RR^8\setminus\{0\})\ ,
\end{equation}
with ${\boldsymbol{\zeta}}\equiv (\zeta_1,\zeta_2,\zeta_3)\in\CC^3\ ,\,\,|{\boldsymbol{\zeta}}|$ small enough,
$\zeta_1\not= 0$, $\zeta_2\not= 0$, $(2\zeta_1+2\zeta_2+\zeta_3)\not= 0$
and where $D_F^\zeta$ is given by \eqref{DF4}.
By ``ren'' we mean that the pertinent (divergent) subdiagram is renormalized. Doing this
by using \eqref{DFk} we have
\begin{equation}
t^{\boldsymbol{\zeta}}(x,y)=c(\zeta_1)\,c(\zeta_2)\,\tau^{\boldsymbol{\zeta}}(x,y)\ ,
\end{equation}
where
\begin{equation}
c(\zeta):=\frac{\mu^{4\zeta}}{(4\pi^2)^2\,8\,\zeta\,(1-2\zeta)}
=:\sum_{k=-1}^\infty c_k\,\zeta^k
\end{equation}
and
\begin{equation}
\tau^{\boldsymbol{\zeta}}(x,y):=\frac{\mu^{2\zeta_3}}{4\pi^2}
\Bigl(\square_x X^{-1+2\zeta_1}\Bigr)\,\Bigl(\square_y Y^{-1+2\zeta_2}\Bigr)\,
(X-Y)^{-1+\zeta_3}\ .
\end{equation}
We explicitly see that $t^{\boldsymbol{\zeta}}$ scales homogeneously with
degree $\kappa^{\boldsymbol{\zeta}}=10-2(2\zeta_1+2\zeta_2+\zeta_3)$. Hence,
the differential renormalization formula \eqref{diffrenhom1}
can be applied with $\omega=2$:
\begin{equation}
\dot t^{\boldsymbol{\zeta}}(x,y)=\frac{c(\zeta_1)\,c(\zeta_2)}{2\zeta_1+2\zeta_2+\zeta_3}\,
\tilde t^{\boldsymbol{\zeta}}(x,y)\in\mathcal{D}^\prime(\RR^8)\ ,
\end{equation}
where
\begin{align}\label{trianglesubsub}
\tilde t^{\boldsymbol{\zeta}}(x,y)&:=\frac{1}{2\,(1-4\zeta_1-4\zeta_2-2\zeta_3)\,(2-4\zeta_1-4\zeta_2-2\zeta_3)}\Bigl(
\partial_{x,\mu}\partial_{x,\nu}\partial_{x,\lambda}\bigl(\overline{x^\mu x^\nu x^\lambda \tau^{\boldsymbol{\zeta}}(x,y)}\bigr)
\notag\\
&+3\,\partial_{x,\mu}\partial_{x,\nu}\partial_{y,\lambda}\bigl(\overline{x^\mu x^\nu y^\lambda \tau^{\boldsymbol{\zeta}}(x,y)}\bigr)
+3\,\partial_{x,\mu}\partial_{y,\nu}\partial_{y,\lambda}\bigl(\overline{x^\mu y^\nu y^\lambda \tau^{\boldsymbol{\zeta}}(x,y)}\bigr)
\notag\\
&+\partial_{y,\mu}\partial_{y,\nu}\partial_{y,\lambda}\bigl(\overline{y^\mu y^\nu y^\lambda \tau^{\boldsymbol{\zeta}}(x,y)}\bigr)\Bigr)
\end{align}
is analytic in ${\boldsymbol{\zeta}}$ for $|{\boldsymbol{\zeta}}|$ sufficiently small.
Turning to the limit ${\boldsymbol{\zeta}}\to {\bf 0}$ we apply the EG Forest Formula \eqref{EGforest} (Thm.~\ref{forest}):
first we subtract the principal parts of the divergent subdiagrams
\begin{align}\label{ssd}
(1+R_{\zeta_1}+R_{\zeta_2})\,\dot t^{\boldsymbol{\zeta}}(x,y)&=\frac{c(\zeta_1)\,c(\zeta_2)}{2\zeta_1+2\zeta_2+\zeta_3}\,
\tilde t^{(\zeta_1,\zeta_2,\zeta_3)}(x,y)\notag\\
&-\frac{c_{-1}\,c(\zeta_2)}{\zeta_1\,(2\zeta_2+\zeta_3)}\tilde t^{(0,\zeta_2,\zeta_3)}(x,y)
-\frac{c_{-1}\,c(\zeta_1)}{\zeta_2\,(2\zeta_1+\zeta_3)}\tilde t^{(\zeta_1,0,\zeta_3)}(x,y)
\end{align}
(where we write $R_{\Lambda_I}$ instead of $R_I$). Using $\square_x X^{-1}=-i 4 \pi^2\,\delta(x)$
we explicitly see that $\tilde t^{(0,\zeta_2,\zeta_3)}(x,y)= \tilde t^{(\zeta_2,0,\zeta_3)}(y,x)$
has support on the partial diagonal $x=0$:
\begin{align}
\tilde t^{(0,\zeta_2,\zeta_3)}(x,y)=& \frac{-i\,\mu^{2\zeta_3}}
{2\,(1-4\zeta_2-2\zeta_3)\,(2-4\zeta_2-2\zeta_3)}\,\delta(x)\notag\\
&\quad\cdot \partial_{y,\mu}\partial_{y,\nu}\partial_{y,\lambda}\Bigl(\overline{y^\mu y^\nu y^\lambda
\bigl(\square_y Y^{-1+2\zeta_2}\bigr)\,
Y^{-1+\zeta_3}}\Bigr)\ .
\end{align}
To obtain
\begin{equation}
t^\mathrm{MS}(x,y)=\lim_{\zeta\to 0}\,(1+R_\zeta)\vert_{\zeta:=\zeta_1=\zeta_2=\zeta_3}
(1+R_{\zeta_1}+R_{\zeta_2})\,\dot t^{\boldsymbol{\zeta}}(x,y)
\end{equation}
($(1+R_\zeta)$ removes the remaining ``overall divergence'') we set $\zeta:=\zeta_1=\zeta_2=\zeta_3$
in $(1+R_{\zeta_1}+R_{\zeta_2})\,\dot t^{\boldsymbol{\zeta}}(x,y)$ \eqref{ssd} and compute from the resulting
Laurent series in $\zeta$ (which has a pole of order $3$) the term $\sim\zeta^0$. Using
the expansions
\begin{equation}
\tilde t^{(\zeta,\zeta,\zeta)}(x,y)=\sum_{k=0}^\infty t_k(x,y)\,\zeta^k\ ,\quad\quad
\tilde t^{(0,\zeta,\zeta)}(x,y)=\sum_{k=0}^\infty t^1_k(x,y)\,\zeta^k\ ,
\end{equation}
we end up with
\begin{equation}
t^\mathrm{MS}(x,y)=\frac{1}5\,\sum_{q+r+s=1}c_q\,c_r\,t_s(x,y)-
\frac{c_{-1}}3\,\sum_{r+s=2}c_r\,(t^1_s(x,y)+t^1_s(y,x))\ ,
\end{equation}
where $q,r\geq -1$ and $s\geq 0$.
\end{example}
\begin{example}[Massless double triangle diagram with overlapping divergences in $d=6$
dimensions]\label{thm:doubletriangle}
We introduce the notation $D_F^\zeta(x)=: d(\zeta)\, X^{-2+\zeta}$
for the Feynman propagator \eqref{Feynman6}; note that $\zeta\mapsto d(\zeta)$ is analytic.
Compared with the preceding examples, the additional complication
of the double triangle diagram,
\begin{equation}
t_\mathrm{unren}(x_{14},x_{24},x_{34})=D_F^{\zeta_{12}}(x_{12})\,D_F^{\zeta_{13}}(x_{13})\,
D_F^{\zeta_{23}}(x_{23})\,D_F^{\zeta_{24}}(x_{24})\,D_F^{\zeta_{34}}(x_{34})\ ,
\end{equation}
is an ``overlapping divergence''. The subdiagram 123 (i.e.~with vertices $x_1,x_2,x_3$) is computed in
Example \ref{exp:triangle}; with different $\zeta_{kl}$'s the regularized amplitude \eqref{triangle6}
reads
\begin{equation}\label{subtriangle}
\dot t_3^{(\zeta_{12},\zeta_{13},\zeta_{23})}(x_{12},x_{13})=
\frac{d(\zeta_{12})d(\zeta_{13})d(\zeta_{23})}{2(\zeta_{12}+\zeta_{13}+\zeta_{23})}\,\,
\tilde t^{(\zeta_{12},\zeta_{13},\zeta_{23})}(x_{12},x_{13})\in\mathcal{D}^\prime(\RR^{12})\ ,
\end{equation}
where
\begin{equation}
\tilde t^{(\zeta_{12},\zeta_{13},\zeta_{23})}(x_{12},x_{13}):=
\partial_{(x_{12},x_{13})\mu} \Bigl(\overline{\frac{(x_{12},x_{13})^\mu}{
X_{12}^{\>\>2-\zeta_{12}}\,X_{13}^{\>\>2-\zeta_{13}}\,
X_{23}^{\>\>2-\zeta_{23}}}}\Bigr)
\end{equation}
is analytic in $(\zeta_{12},\zeta_{13},\zeta_{23})$.
The second divergent subdiagram, which is 234, is obtained from 123 by replacing 1 by 4.
The whole diagram 1234 has $\omega =2$; we use the notations ${\bf x}:=(x_{12},x_{13},x_{14})$
and ${\boldsymbol{\zeta}}:=(\zeta_{12},\zeta_{13},\zeta_{23},\zeta_{24},\zeta_{34})$. To write down $t^{\boldsymbol{\zeta}}({\bf x})
\in\mathcal{D}^\prime(\RR^{18}\setminus\{0\})$ we need to introduce a partition of unity: for ${\bf x}
\in\RR^{18}\setminus\{0\}$ let
\begin{equation}\label{partofunity1}
1=f_1({\bf x})+ f_2({\bf x}) \quad\text{with}\quad f_1,f_2\in {\cal C}^\infty (\RR^{18}\setminus\{0\})
\end{equation}
and
\begin{equation}\label{partofunity2}
\mathrm{supp}\,f_1\subset \RR^{18}\setminus\{{\bf x}\,\vert\,x_{24}=0=x_{34}\}\ ,\quad
\mathrm{supp}\,f_2\subset \RR^{18}\setminus\{{\bf x}\,\vert\,x_{12}=0=x_{13}\}\ .
\end{equation}
With that we can write
\begin{equation}\label{dt1}
t^{\boldsymbol{\zeta}}({\bf x})=\frac{\prod d(\zeta_{kl})}{2}\,\Bigl(f_1({\bf x})\,
\frac{t^{\boldsymbol{\zeta}}_{123|4}({\bf x})}{\zeta_{12}+\zeta_{13}+\zeta_{23}}+
f_2({\bf x})\,\frac{(1\leftrightarrow 4)}{\zeta_{24}+\zeta_{34}+\zeta_{23}}\Bigr)
\in\mathcal{D}^\prime(\RR^{18}\setminus\{0\})\ ,
\end{equation}
where
\begin{equation}
t^{\boldsymbol{\zeta}}_{123|4}({\bf x}):=\tilde t^{(\zeta_{12},\zeta_{13},\zeta_{23})}(x_{12},x_{13})\,
X_{24}^{\>\>-2+\zeta_{24}}\,X_{34}^{\>\>-2+\zeta_{34}}
\end{equation}
is analytic in ${\boldsymbol{\zeta}}$. Here and in the following we mean by $\prod$ and $\sum$ the
product or sum over $(k,l)=(1,2),\,(1,3),\,(2,3),\,(2,4),\,(3,4)$.
We point out that
$t^{\boldsymbol{\zeta}}$ \eqref{dt1} is {\it independent of the choice of $f_1,\,f_2$}, because on
\begin{equation}
\Bigl(\mathrm{supp}\,f_1\cap\mathrm{supp}\,f_2\Bigr)\subset\Bigl(\{{\bf x}\,|\,x_{23}\not= 0\}\cup
\{{\bf x}\,|\,x_{23}= 0\,\,\wedge\,\,x_{12}\not= 0\not= x_{34}\}\Bigr)
\end{equation}
the distribution $\dot t_3^{(\zeta_{12},\zeta_{13},\zeta_{23})}(x_{12},x_{13})$ \eqref{subtriangle}
is equal to its non-extended version and, hence,
\begin{equation}\label{independent}
\frac{t^{\boldsymbol{\zeta}}_{123|4}({\bf x})}{2(\zeta_{12}+\zeta_{13}+\zeta_{23})}=
\prod X_{kl}^{\>\>-2+\zeta_{kl}}
\end{equation}
is invariant under $(1\leftrightarrow 4)$.
The differential renormalization formula \eqref{diffrenhom1}
(with $\omega =2$) yields the extension
\begin{align}\label{dt2}
\dot t^{\boldsymbol{\zeta}}({\bf x})=&\frac{\prod d(\zeta_{kl})}
{(\sum\zeta_{kl})\,4\,(1-2\sum\zeta_{kl})(2-2\sum\zeta_{kl})}\,\,\partial_{{\bf x}\mu}
\partial_{{\bf x}\nu}\partial_{{\bf x}\lambda}\notag\\
&\quad\quad\Bigl(\overline{{\bf x}^\mu {\bf x}^\nu {\bf x}^\lambda
\Bigl(f_1({\bf x})\,\frac{t^{\boldsymbol{\zeta}}_{123|4}({\bf x})}{\zeta_{12}+\zeta_{13}+\zeta_{23}}+
f_2({\bf x})\,\frac{(1\leftrightarrow 4)}{\zeta_{24}+\zeta_{34}+\zeta_{23}}\Bigr)}\Bigr)
\in\mathcal{D}^\prime(\RR^{18})\ ,
\end{align}
where similarly to \eqref{triangle6} a shorthand notation is used (the detailed version is
analogous to \eqref{trianglesubsub}).
We use the EG Forest Formula \eqref{EGforest} (Thm.~\ref{forest}) to compute $t^\mathrm{MS}$:
\begin{equation}
t^\mathrm{MS}({\bf x})=\lim_{{\boldsymbol{\zeta}}\to {\bf 0}}\,(1+R_{1234})
(1+R_{123}+R_{234})\,\dot t^{\boldsymbol{\zeta}}({\bf x})\ .
\end{equation}
We point out that $R_{123}$ gives a non-vanishing contribution {\it only} on the $f_1$-term,
because, setting $\zeta:=\zeta_{12}=\zeta_{13}=\zeta_{23}$, the $f_2$-term is analytic in a
neighbourhood of $\zeta =0$.
Taking this into account the counter term of the 123-subdiagram reads
\begin{align}
R_{123}\,\dot t^{\boldsymbol{\zeta}}({\bf x})&=R_\zeta\,\frac{d(\zeta)^3\,d(\zeta_{24})\,d(\zeta_{34})}
{\zeta\cdot 12\cdot (3\zeta+\zeta_{24}+\zeta_{34})(1-2(3\zeta+\zeta_{24}+\zeta_{34}))
(2-2(3\zeta+\zeta_{24}+\zeta_{34}))}\notag\\
&\quad\quad\quad\quad\cdot \partial_{{\bf x}\mu}\partial_{{\bf x}\nu}\partial_{{\bf x}\lambda}
\Bigl(\overline{{\bf x}^\mu {\bf x}^\nu {\bf x}^\lambda\,f_1({\bf x})\,
t^{(\zeta,\zeta,\zeta,\zeta_{24},\zeta_{34})}_{123|4}({\bf x})}\Bigr)\notag\\
&=-\frac{d(0)^3\,d(\zeta_{24})\,d(\zeta_{34})\,\pi^6}
{\zeta\cdot 12\cdot (\zeta_{24}+\zeta_{34})(1-2(\zeta_{24}+\zeta_{34}))
(2-2(\zeta_{24}+\zeta_{34}))}\notag\\
&\quad\quad\quad\quad \cdot\partial_{{\bf x}\mu}\partial_{{\bf x}\nu}\partial_{{\bf x}\lambda}
\Bigl(\overline{{\bf x}^\mu {\bf x}^\nu {\bf x}^\lambda\,\delta(x_{12},x_{13})\,
X_{24}^{\>\>-2+\zeta_{24}}\,X_{34}^{\>\>-2+\zeta_{34}}}\Bigr)\ ,
\end{align}
where we use the result for the scaling anomaly of the 123-subdiagram,
$\tilde t^{(0,0,0)}(x_{12},x_{13})=\pi^6\,\delta(x_{12},x_{13})$ (derived e.g.~in \cite[sect.~7.1]{BDF09}),
and that we may replace $f_1({\bf x})$ by 1 (due to $0=f_2({\bf x})\,\delta(x_{12},x_{13})
=(1-f_1({\bf x}))\,\delta(x_{12},x_{13})$). With that we explicitly see that the {\it 123-counter term
is independent of the choice of $f_1,f_2$}.
Finally $t^\mathrm{MS}({\bf x})$ is the coefficient
$\sim\zeta^0$ of the Laurent series $(1+R_{123}+R_{234})\,
\dot t^{\boldsymbol{\zeta}}({\bf x})\vert_{{\boldsymbol{\zeta}}=(\zeta,\zeta,\zeta,\zeta,\zeta)}$,
i.e.~it is the coefficient $\sim\zeta^2$ of the power series
\begin{align}\label{t-MS}
&\frac{d(\zeta)^5}
{60\,(1-10\zeta)(2-10\zeta)}\,\partial_{{\bf x}\mu}
\partial_{{\bf x}\nu}\partial_{{\bf x}\lambda}\Bigl(\overline{{\bf x}^\mu {\bf x}^\nu {\bf x}^\lambda
\Bigl(f_1({\bf x})\,t^{(\zeta,\zeta,\zeta,\zeta,\zeta)}_{123|4}({\bf x})+
f_2({\bf x})\,(1\leftrightarrow 4)\Bigr)}\Bigr)\notag\\
&-\frac{d(0)^3\,d(\zeta)^2\,\pi^6}
{24\,(1-4\zeta)(2-4\zeta)}\partial_{{\bf x}\mu}\partial_{{\bf x}\nu}\partial_{{\bf x}\lambda}
\Bigl(\overline{{\bf x}^\mu {\bf x}^\nu {\bf x}^\lambda\,\bigl(\delta(x_{12},x_{13})\,
X_{24}^{\>\>-2+\zeta}\,X_{34}^{\>\>-2+\zeta}+(1\leftrightarrow 4)\bigr)}\Bigr)\ .
\end{align}
Due to \eqref{independent}, this result is independent of the choice of $f_1,f_2$.
However, to compute $t^\mathrm{MS}$, an explicit choice of $f_1$ and $f_2$ is needed.
This can be done as follows: let $\chi$ be a smooth approximation of the Heaviside function
$\Theta (x)$ with $\mathrm{supp}\, \chi'\subset [-\epsilon,\epsilon]$ for a sufficiently small
$\epsilon >0$. For $x\in\RR^6$ we mean by $|x|$ the Euclidean norm. We set
\begin{align}
g_2({\bf x}):=
\begin{cases}
\chi\Bigl(\frac{|x_{12}|^2+|x_{13}|^2}{|x_{14}|^2}-a\Bigr)\quad & \text{if}\quad x_{14}\not= 0\\
1 \quad & \text{if}\quad x_{14}= 0\>\>\wedge\>\>{\bf x}\not={\bf 0}
\end{cases}\ ,
\end{align}
with a sufficiently small $a >0$. Note that $g_2\in {\cal C}^\infty (\RR^{18}\setminus\{0\})$.
Let
\begin{equation}
g_1({\bf x}):=g_2(x_{24},x_{34},x_{14})\ .
\end{equation}
The sets
\begin{equation}
K_j:=\{{\bf x}\not={\bf 0}\,|\,g_j({\bf x})=0\}\ ,\quad j=1,2\ ,
\end{equation}
are narrow cones around the diagonal $x_{12}=x_{13}=x_{14}$ (for $g_1$) and the $x_{14}$-axis (for $g_2$).
Since $K_1\cap K_2 =\emptyset$, we have $g_1({\bf x})+g_2({\bf x})>0$ for all ${\bf x}\not={\bf 0}$ and,
hence, we may set
\begin{equation}
f_j({\bf x}):=\frac{g_j({\bf x})}{g_1({\bf x})+g_2({\bf x})}\ ,\quad j=1,2\ .
\end{equation}
Obviously the pair $(f_1,f_2)$ satisfies the required properties \eqref{partofunity1} and
\eqref{partofunity2}, and it also fulfills the $(1\leftrightarrow 4)$-symmetry
and scales homogeneously with degree $0$:
\begin{equation}
f_1({\bf x})=f_2(-x_{24},-x_{34},-x_{14})\ ,\quad f_j(\lambda{\bf x})=f_j({\bf x})\quad\forall\lambda\not= 0\ .
\end{equation}
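Since smoothness of $f_1,f_2$ is not needed here ($\chi$ may be replaced by $\Theta$), the construction can also be checked numerically. The following Python sketch -- our own encoding, with ${\bf x}=(x_{12},x_{13},x_{14})\in(\RR^6)^3$ and $x_{24}=x_{14}-x_{12}$, $x_{34}=x_{14}-x_{13}$ -- verifies the partition-of-unity property, the homogeneity of degree $0$, and the $(1\leftrightarrow 4)$-symmetry at random points:

```python
import random

A = 0.01  # the small parameter a > 0

def heaviside(t):
    # Theta: the sharp (non-smooth) version of the cutoff chi,
    # admissible here since smoothness is not needed for t^MS
    return 1.0 if t >= 0.0 else 0.0

def norm2(v):
    return sum(c * c for c in v)

def g2(x12, x13, x14):
    n14 = norm2(x14)
    if n14 == 0.0:          # the x14 = 0 branch of the definition
        return 1.0
    return heaviside((norm2(x12) + norm2(x13)) / n14 - A)

def g1(x12, x13, x14):
    # g1(x) := g2(x24, x34, x14), with x_jk = x_j - x_k, so that
    # x24 = x14 - x12 and x34 = x14 - x13
    x24 = [p - q for p, q in zip(x14, x12)]
    x34 = [p - q for p, q in zip(x14, x13)]
    return g2(x24, x34, x14)

def f(j, x12, x13, x14):
    G1, G2 = g1(x12, x13, x14), g2(x12, x13, x14)
    return (G1 if j == 1 else G2) / (G1 + G2)

random.seed(0)
for _ in range(100):
    x12, x13, x14 = ([random.uniform(-1.0, 1.0) for _ in range(6)]
                     for _ in range(3))
    # partition of unity: f1 + f2 = 1
    assert abs(f(1, x12, x13, x14) + f(2, x12, x13, x14) - 1.0) < 1e-12
    # homogeneity of degree 0: f_j(lambda x) = f_j(x)
    lam = 2.7
    assert f(1, [lam * c for c in x12], [lam * c for c in x13],
             [lam * c for c in x14]) == f(1, x12, x13, x14)
    # (1 <-> 4)-symmetry: f1(x) = f2(-x24, -x34, -x14)
    x24 = [p - q for p, q in zip(x14, x12)]
    x34 = [p - q for p, q in zip(x14, x13)]
    assert f(1, x12, x13, x14) == f(2, [-c for c in x24],
                                    [-c for c in x34],
                                    [-c for c in x14])
```

With the sharp cutoff $\Theta$, a generic random point has $g_1=g_2=1$ and hence $f_1=f_2=\tfrac12$; the asserted identities, however, hold in all branches of the definition.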
The computation of $t^\mathrm{MS}$ needs explicit formulas for $f_1,f_2$ only in the first line of
\eqref{t-MS}; to compute the latter, smoothness of $f_1,f_2$ is not necessary -- $\chi$ can be replaced by $\Theta$.
\end{example}
The computational difficulty that, in the case of overlapping divergences, our method needs explicit
formulas for a partition of unity can be avoided by using the distribution-splitting method of
Epstein-Glaser \cite{EG73} or Steinmann's direct construction of retarded products \cite{Ste71,DF04}
(instead of Stora's extension of distributions \cite{Stora1993}).
In the splitting method, a map $\mathcal{D}_n^{\boldsymbol{\zeta}}:\Floc^{\otimes n}\rightarrow\Fcal$ corresponds to
$\Tcal^{{\boldsymbol{\zeta}}}_n$ restricted to the complement of the thin diagonal
$\Delta_n:=\{(x_1,\ldots ,x_n)\,|\,x_1=\cdots =x_n\}$.
This $\mathcal{D}_n^{\boldsymbol{\zeta}}$ is a sum of products of
time-ordered products $\Tcal^{{\boldsymbol{\zeta}}}_k$ of lower orders $k<n$;
its construction does not need any partition of unity.
$\mathcal{D}_n^{\boldsymbol{\zeta}}$ has causal support, that is, the pertinent numerical distributions
$d(x_1-x_n,...):=d^{{\boldsymbol{\zeta}}\,\beta}_{\alpha}(x_1,...,x_n)$
(defined analogously to $t^{{\boldsymbol{\zeta}}\,\beta}_{\alpha}$ in \eqref{causWick})
have support in $(\bar V_+)^{\times (n-1)}\,\cup\,(\bar V_-)^{\times (n-1)}$. The distribution splitting,
\begin{align}\label{splitting0}
d=a-r\ ,\quad & \text{with}\quad \mathrm{supp}\, a\subset (\bar V_+)^{\times (n-1)}\quad\wedge\quad
\mathrm{supp}\, r\subset (\bar V_-)^{\times (n-1)}\notag\\
& \text{and}\quad \mathrm{sd}(a)\leq\mathrm{sd}(d)\quad\wedge\quad
\mathrm{sd}(r)\leq \mathrm{sd}(d)\ ,
\end{align}
corresponds to the extension of $t^{{\boldsymbol{\zeta}}\,\beta}_{\alpha}(x_1-x_n,...)$ from
$\mathcal{D}^\prime(\RR^l\setminus\{0\})$ to $\mathcal{D}^\prime(\RR^l)$ (where $l=d(n-1)$), i.e.~it
can be understood as renormalization. Our results about minimal subtraction (Sect.~\ref{MSnumerical})
and the differential renormalization formula \eqref{diffrenhom1} hold, suitably reformulated, also for the
splitting problem \eqref{splitting0}. This is worked out in Appendix \ref{app:EG-splitting}.
A main disadvantage of the
splitting method is that the computation of $\mathcal{D}_n^{\boldsymbol{\zeta}}$ usually requires quite a lot of work
(see the examples in \cite{Sch89}); in the absence of overlapping divergences the extension method
is therefore mostly much more efficient.
\section{Hopf Algebra and Renormalization}
In pioneering work \cite{CK00,CK01}, Connes and Kreimer uncovered interesting algebraic structures underlying the combinatorics of perturbative renormalization. In particular, one obtains the group of diffeomorphisms of the space of coupling constants which are tangent to the identity; this is nothing else than the
renormalization group in the sense of St{\"u}ckelberg and Petermann and was independently derived in the form of the Main Theorem of Renormalization in \cite{SP82,Pin01,DF04,BDF09}. Other structures
refer to computational methods and can best be formulated in terms of Hopf algebras of graphs.
In this section we will describe the main combinatorial structure arising in our framework and relate it to a certain Hopf algebra. We also argue how this structure can be related to the one used by Connes and Kreimer. Let $\mathscr{R}$ denote the St\"uckelberg-Petermann renormalization group. The Main Theorem of renormalization describes the set of scattering matrices $\mathscr{S}$ as a right group module (right action),
\begin{equation}
\begin{array}{rccl}
\rho: & \mathscr{S}\otimes \mathscr{R} & \rightarrow &\mathscr{S} \\
& \mathcal{S} \otimes \mathcal{Z} & \mapsto & \mathcal{S}':=\mathcal{S}\circ \mathcal{Z}
\end{array}
\end{equation}
Consider the formal symbols $\delta^n$, $n\in\NN$. Using the prescription
\begin{equation}
\delta^n(\mathcal{S}):=\mathcal{S}^{(n)}(0):\Floc^{\otimes n}\rightarrow\Fcal \quad\text{and}\quad
\delta^n (\mathcal{Z}):=\mathcal{Z}^{(n)}(0):\Floc^{\otimes n}\rightarrow\Floc
\end{equation}
we associate $\delta^n$'s with maps from $\mathscr{S}$ and $\mathscr{R}$ to the $\CC$-linear space of linear maps $\Lin(\Floc^{\otimes n},\Fcal)$ and $\Lin(\Floc^{\otimes n},\Floc)$, respectively. Using this identification we can define on symbols $\delta^n$
two distinguished products. We start with the tensor product $\delta^n\otimes \delta^m$ of linear mappings, which is defined to be a map from $\mathscr{R}$ to $\Lin(\Floc^{\otimes n+m},\Floc^{\otimes2})$. Apart from $\otimes$, it is also natural to consider another product, the non-commutative composition product $\oc$ defined for symbols $\delta^{n}$, $\delta^{k_1}\otimes\dots\otimes\delta^{k_m}$ by
\[
(\delta^{n}\oc\delta^{k_1}\otimes\dots\otimes\delta^{k_m})(\mathcal{S},\mathcal{Z})\doteq \left\{\begin{array}{lcl}
\mathcal{S}^{(n)}\circ (\mathcal{Z}^{(k_1)}\otimes\dots\otimes \mathcal{Z}^{(k_m)})
&,&\textrm{if}\quad m=n\,,\\
0&,&\textrm{else}\,,\end{array}\right.
\]
where $\mathcal{S}\in\mathscr{S}$, $\mathcal{Z}\in\mathscr{R}$. Note that $\mathcal{S}^{(1)}(0)=\mathcal{Z}^{(1)}(0)=\id$, i.e.~$\delta^1=1$ is the unit with respect to $\oc$. We can now write the termwise version of the main theorem (i.e.~the Fa\`a di Bruno formula
for $(\mathcal{S}\circ \mathcal{Z})^{(n)}$, cf.~\eqref{FaadiBruno}) and the group law of $\mathscr{R}$ as,
\begin{align}
\delta^n(\mathcal{S}\circ \mathcal{Z})=\sum_P(\delta^{|P|}\oc\bigotimes_{I\in P}\delta^{|I|})(\mathcal{S},\mathcal{Z})\label{main-Hopf}\,,\\
\delta^n(\mathcal{Z}_1\circ \mathcal{Z}_2)=\sum_P(\delta^{|P|}\oc\bigotimes_{I\in P}\delta^{|I|})(\mathcal{Z}_1,\mathcal{Z}_2)\label{main-Hopf2}\,.
\end{align}
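In the simplest situation of a single scalar coupling, where $\delta^n(\mathcal{S})=\mathcal{S}^{(n)}(0)\in\CC$, \eqref{main-Hopf} reduces to the classical Fa\`a di Bruno formula and can be tested numerically. The following Python sketch (an illustration of ours, not part of the construction) compares the partition sum with the Taylor coefficients of a composition of truncated power series, using exact rational arithmetic:

```python
from fractions import Fraction
from itertools import combinations
from math import factorial, prod

def set_partitions(s):
    """All partitions of the tuple s into non-empty blocks."""
    if not s:
        yield []
        return
    first, rest = s[0], s[1:]
    for k in range(len(rest) + 1):
        for tail in combinations(rest, k):
            remaining = tuple(e for e in rest if e not in tail)
            for p in set_partitions(remaining):
                yield [(first,) + tail] + p

def poly_mul(p, q, N):
    r = [Fraction(0)] * (N + 1)
    for i, a in enumerate(p[:N + 1]):
        if a:
            for j, b in enumerate(q[:N + 1 - i]):
                r[i + j] += a * b
    return r

def compose(cS, cZ, N):
    """Taylor coefficients of S(Z(x)) mod x^{N+1} (constant terms vanish)."""
    out = [Fraction(0)] * (N + 1)
    zpow = [Fraction(1)] + [Fraction(0)] * N   # Z^0
    for k in range(1, N + 1):
        zpow = poly_mul(zpow, cZ, N)           # Z^k
        out = [o + cS[k] * c for o, c in zip(out, zpow)]
    return out

N = 5
# derivatives at 0: dS[n] = S^{(n)}(0), dZ[n] = Z^{(n)}(0), with
# S(0) = Z(0) = 0 and S'(0) = Z'(0) = 1; the other values are arbitrary
dS = {1: Fraction(1), 2: Fraction(3), 3: Fraction(-2),
      4: Fraction(5), 5: Fraction(7)}
dZ = {1: Fraction(1), 2: Fraction(1, 2), 3: Fraction(4),
      4: Fraction(-1), 5: Fraction(2)}
cS = [Fraction(0)] + [dS[n] / factorial(n) for n in range(1, N + 1)]
cZ = [Fraction(0)] + [dZ[n] / factorial(n) for n in range(1, N + 1)]
cSZ = compose(cS, cZ, N)

for n in range(1, N + 1):
    lhs = cSZ[n] * factorial(n)        # delta^n(S o Z), computed directly
    rhs = sum(dS[len(P)] * prod(dZ[len(I)] for I in P)
              for P in set_partitions(tuple(range(n))))
    assert lhs == rhs                  # the partition sum of the text
```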
Now we want to reinterpret these formulas in the Hopf algebraic language. To construct the Hopf algebra dual to $\mathscr{R}$ we consider first the algebra $\mathcal{O}$ of functions on $\mathscr{R}$ with values in $\RR$. We want to encode the group law in the coproduct structure, i.e. we want to define $\tilde{\Delta}:\mathcal{O}\rightarrow\mathcal{O}\otimes\mathcal{O}$ such that
\begin{equation}\label{coprod}
\tilde{\Delta}f(\mathcal{Z}_1,\mathcal{Z}_2)=f(\mathcal{Z}_1\circ \mathcal{Z}_2)\,.
\end{equation}
This is in general not possible, since $\mathscr{R}$ is not finite. One can fix the problem by replacing the algebraic tensor product with some completed tensor product (see for example \cite{Far00} for the case of Hopf algebras of smooth functions) or by restricting oneself to the algebra of representative functions\footnote{We recall that a function is representative if its orbit under the left translation is a finite dimensional subspace of $\mathcal{O}$. In particular, matrix elements of finite dimensional representations are such functions. For more details see the review paper \cite{FGB05} and lecture notes \cite{Frabetti2007}.}. Fortunately, the situation simplifies significantly if we take into account the fact that, as shown in \cite{DF04,BDF09}, $\mathscr{R}$ acts on the space of actions\footnote{By actions we mean equivalence classes of generalized Lagrangians, in the sense of \eqref{equ} and the discussion above it.}. Moreover, in a renormalizable theory the orbit of the interaction can be described by a finite set of parameters (coupling constants), so there exists a group morphism from $\mathscr{R}$ to a subspace $\tilde{\mathscr{R}}$ of $\mathrm{Diff}(\RR^N)$, the group of formal diffeomorphisms of $\RR^N$, for some $N\in\NN$. From the physical point of view, all the relevant information about the theory is contained in $\tilde{\mathscr{R}}$, so we can now focus our attention on this group. We now consider the algebra $\Halg$ spanned by symbols $\delta^{\alpha,i}$, where $\delta^{\alpha,i}(\mathcal{Z})\doteq \partial_\alpha \mathcal{Z}^i(0)$, $\mathcal{Z}\in\tilde{\mathscr{R}}$, $\mathcal{Z}^i$ is the $i$-th component of $\mathcal{Z}$ and $\alpha\in \NN_0^N$ is a multiindex. The group law in $\mathrm{Diff}(\RR^N)$ is the composition of diffeomorphisms, and it is easy to check that the functions $\delta^{\alpha,i}$ are representative. 
We assumed that $\mathcal{Z}(0)=0$ and $\mathcal{Z}^{(1)}(0)=\id$ for $\mathcal{Z}\in\mathscr{R}$, so $\delta^{0,i}$ is trivial and $\delta^{j,i}:\mathcal{Z}\mapsto \partial_j\mathcal{Z}^i(0)$ is identically $1$ for $j=i$ and $0$ otherwise. Therefore we identify all $\delta^{i,i}$, $i=1,\dots,N$, with the unit element $1$ and define the counit by setting $\epsilon(\delta^{\alpha,i}):=0$ for $\alpha\neq i$ and $\epsilon(1):=1$. The coproduct of $\Halg$ is defined by \eqref{coprod}, and the explicit formula is just the Fa\`a di Bruno formula for maps $\RR^N\rightarrow\RR^N$.
Next we introduce on $\Halg$ the grading $\deg(\delta^{\alpha,i})=|\alpha|-1$. With this definition, $\Halg$ is an $\NN_0$-graded connected bialgebra, and from the result of \cite{Kastler2000} it follows that $\Halg$ has an antipode
and, hence, is a Hopf algebra. In this way we have constructed the Hopf algebra induced by the action of the renormalization group $\mathscr{R}$ on the space of coupling constants of a given renormalizable theory.
{\small\begin{example}[Fa\`a di Bruno Hopf algebra]
Let us consider the case where the space of coupling constants is one dimensional. This happens for example in the case of $\varphi^3$ in 6 dimensions, after performing the wave function and mass renormalization (see Example 7.1 in \cite{BDF09} for details). $\mathscr{R}$ is mapped to
$\tilde{\mathscr{R}}\subset \mathrm{Diff}(\RR)$ and we consider the algebra $\Falg$ generated by functions $\delta^n$, where $\delta^n(\mathcal{Z})=\mathcal{Z}^{(n)}(0)$, $\mathcal{Z}\in \tilde{\mathscr{R}}$. The product is just the pointwise product of functions. The action of the St\"uckelberg-Petermann renormalization group on itself (formula \eqref{main-Hopf2}) induces a coproduct $\Delta:\Falg\rightarrow\Falg\otimes\Falg$ by
\begin{equation}
\Delta\delta^n\doteq \sum_P\delta^{|P|}\otimes\prod_{I\in P}\delta^{|I|}\,,
\label{coproduct}
\end{equation}
and one can write \eqref{main-Hopf2} in the form
\[
\delta^n(\mathcal{Z}_1\circ \mathcal{Z}_2)=m\circ\Delta\delta^n (\mathcal{Z}_1,\mathcal{Z}_2)\,.
\]
$\Falg$ is $\NN_0$-graded
by the order of the derivatives,
\begin{equation}
\deg(\delta^n):=n-1\,,\quad\Falg=\bigoplus_{n=0}^\infty\Falg^n\ .
\end{equation}
The unit of $\Falg$ is $1=\delta^1$, and from $\mathcal{Z}^{(1)}(0)=1$, $\mathcal{Z}\in\tilde{\mathscr{R}}$, it follows that $\Falg$ is connected. $\Falg$ also has a counit $\epsilon:\Falg\to \CC$, given by $\epsilon(\delta^{n_1}
\cdot...\cdot\delta^{n_l}):=\delta_{1n_1}\cdot...\cdot\delta_{1n_l}$
($\delta_{ij}$ denoting the Kronecker delta), and an antipode $\Ap:\Falg\to\Falg$, which is obtained by recursion from its definition as
\begin{equation}
\Ap(1):=1\quad\text{and}\quad 0=({\rm id}*\Ap)(\delta^n):=m\circ({\rm id}\otimes\Ap)\circ
\Delta(\delta^n)\quad\text{for}\quad n>1
\end{equation}
(where $\Ap(\bigodot_{I\in P}\delta^{|I|}):=\bigodot_{I\in P}\Ap(\delta^{|I|})$), which gives
the recursive relation
\begin{equation}\label{Ap}
\Ap(\delta^n):=-\sum_{|P|>1}\delta^{|P|}\cdot\prod_{I\in P}\Ap(\delta^{|I|})
\quad\text{for}\quad n>1\ .
\end{equation}
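For instance, the lowest instances of this recursion are easily computed: the only partition of $\{1,2\}$ with $|P|>1$ consists of two singletons, and $\{1,2,3\}$ admits four such partitions (three with $|P|=2$ and one with $|P|=3$), which yields
\begin{equation}
\Ap(\delta^2)=-\delta^2\ ,\qquad
\Ap(\delta^3)=-\bigl(3\,\delta^2\cdot\Ap(\delta^2)+\delta^3\bigr)
=3\,(\delta^2)^2-\delta^3\ .
\end{equation}
Evaluated on $\mathcal{Z}\in\tilde{\mathscr{R}}$, these are precisely the second and third derivatives at $0$ of the inverse diffeomorphism $\mathcal{Z}^{-1}$, in agreement with the Lagrange inversion formula.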
The resulting Hopf algebra $(\Falg,\cdot,\Delta,1,\epsilon,\Ap)$ is a well-known structure called the Fa\`a di Bruno Hopf algebra \cite{JoniRota1982} (see also \cite{FGB05} in the context of renormalization).
\end{example}}
To relate our Hopf-algebraic approach to that of Connes and Kreimer, one has to use the expansion of $S^{(n)}$ into graphs, given by relations \eqref{time:ord} and \eqref{GraphDO}. Let us start on the abstract level of multilinear maps between spaces of functionals. According to \eqref{GraphDO}, with a graph $\Gamma$ we associate a functional differential operator $T_\Gamma$ from $\Fcal_\mathrm{loc}^{\otimes V(\Gamma)}$ to $(\Fcal^{\otimes V(\Gamma)})_\mathrm{loc}$. The notation $\mathcal{F}_{\mathrm{loc}}^{\otimes V(\Gamma)}$ means that the factors of the tensor product are numbered by the vertices of $\Gamma$, i.e.~for each vertex $i$ we have a variable $\varphi_i$. At the end we set all the $\varphi_i$ to be equal (by applying $m_n$), but for now it is important to keep track of which functional derivatives are applied at which vertex. The space $(\mathcal{F}^{\otimes V(\Gamma)})_{\mathrm{loc}}$ contains functionals which are local as functions of the multiplet $(\varphi_i;\, i\in V(\Gamma))$, i.e.~depending on field configurations only through the jet $(x,\varphi_i(x),\partial\varphi_i(x),\dots;\, i\in V(\Gamma))$. The main theorem of renormalization theory can now be formulated on the level of graphs:
\begin{equation}\label{Hopf:graph}
(S\circ Z)_\Gamma=\sum_{P\in \mathrm{Part}(V(\Gamma))}T_{\Gamma_P}\oc \bigotimes_{I\in P} Z_{\Gamma_I}
\end{equation}
where $Z_{\Gamma_I}:\mathcal{F}_{\mathrm{loc}}^{\otimes V(\Gamma_I)}\to (\mathcal{F}^{\otimes V(\Gamma_I)})_{\mathrm{loc}}$ and $\Gamma_P$ is the graph with vertex set $V(\Gamma_P)=V(\Gamma)$, with all lines connecting different index sets of the partition $P$, and $\Gamma_I$ is the graph with vertex set $V(\Gamma_I)=I$ and all lines of $\Gamma$ which connect two vertices in $I$. To find the Hopf algebra structure underlying \eqref{Hopf:graph}, we have to go to the concrete renormalizable theory, where $\mathscr{R}$ is mapped to $\tilde\mathscr{R}$.
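The combinatorics of the decomposition $\Gamma\mapsto\bigl(\Gamma_P,\{\Gamma_I\}_{I\in P}\bigr)$ is easy to make concrete: for every partition $P\in\mathrm{Part}(V(\Gamma))$, each line of $\Gamma$ lies either in $\Gamma_P$ or in exactly one $\Gamma_I$. The following Python sketch (the encoding of a graph as a set of lines is ours) checks this for all partitions of a small sample graph:

```python
from itertools import combinations

def set_partitions(s):
    """All partitions of the tuple s into non-empty blocks (frozensets)."""
    if not s:
        yield []
        return
    first, rest = s[0], s[1:]
    for k in range(len(rest) + 1):
        for tail in combinations(rest, k):
            remaining = tuple(e for e in rest if e not in tail)
            for p in set_partitions(remaining):
                yield [frozenset((first,) + tail)] + p

def decompose(lines, P):
    """Lines of Gamma_P (between blocks) and of each Gamma_I (within I)."""
    gamma_P = {l for l in lines if not any(set(l) <= I for I in P)}
    gamma_I = {I: {l for l in lines if set(l) <= I} for I in P}
    return gamma_P, gamma_I

# a sample graph on V = {1,2,3,4}, encoded by its set of lines (no tadpoles)
V = (1, 2, 3, 4)
lines = {(1, 2), (1, 3), (2, 3), (2, 4), (3, 4)}

for P in set_partitions(V):
    gamma_P, gamma_I = decompose(lines, P)
    # each line of Gamma occurs exactly once: in Gamma_P, or in the
    # unique Gamma_I whose block I contains both of its endpoints
    recovered = set(gamma_P)
    for ls in gamma_I.values():
        assert recovered.isdisjoint(ls)
        recovered |= ls
    assert recovered == lines
    # finest partition (all blocks singletons): Gamma_P = Gamma
    if all(len(I) == 1 for I in P):
        assert gamma_P == lines
    # coarsest partition (a single block): Gamma_P has no lines
    if len(P) == 1:
        assert gamma_P == set()
```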
More details can be found in \cite{Pin00b,BrouderFrabettiMenous2009,GBL00}. In non-renormalizable
theories, one has to use a generalization of Hopf algebras, which uses completed tensor products.
\section*{Conclusions and Outlook}
Causal perturbation theory is known to provide rigorous results on structural properties of renormalized
perturbative quantum field theory in a transparent and elegant way. However, for models containing massless fields, the central
solution%
\footnote{For a purely massive model the infrared behaviour is harmless and, hence,
one may choose $w_\gamma=\frac{x^\gamma}{\gamma!}$ in the $W$-projection
\eqref{W-projection}. This is the central solution of Epstein and Glaser \cite{EG73}, which maintains
several symmetries (in particular Lorentz covariance), and is explicitly computable.
For the distribution splitting method \eqref{splitting0} in Minkowski space, it can easily be computed by a
dispersion integral in momentum space \cite{EG73,Sch89}.}
does not exist and a generally applicable method for explicit calculations is missing so far.
In this paper we develop such a method, by using dimensional regularization in position space, as proposed
by Bollini and Giambiagi \cite{BG96} some years ago. More precisely, the regularization parameter is the
index $\tfrac{d}2-1$ of the Bessel function
appearing in the Feynman propagator ($d$ denotes the spacetime dimension). Since, in the limit $\zeta\to 0$
(which removes the regularization) of the regularized time-ordered product $\Tcal^\zeta_n$,
there appears not only the overall divergence (localized on the thin diagonal $\Delta_n$),
but also subdivergences localized on partial diagonals, our method needs a
position space version of Zimmermann's
Forest Formula, which adds suitable local counter terms in correct succession, such that the limit $\zeta\to 0$
exists. We prove such a formula (``Epstein-Glaser Forest Formula'', Thm.~\ref{forest}). It
is based on families of subsets of the set of vertices and not, as in Zimmermann's formula, on families of subgraphs.
Generally, Epstein-Glaser renormalization is non-unique. However, our regularized time-ordered products
$\Tcal^\zeta_n$ are unique and, using the minimal subtraction prescription for the limit $\zeta\to 0$,
we get a unique result for the renormalized time-ordered products.
A main reason for the usefulness of conventional dimensional regularization is that the
regularized time-ordered products are gauge invariant (in particular, this holds true for the term
$\sim\zeta^0$, which is the minimally subtracted time-ordered product). To obtain
gauge invariance of our $\Tcal^\zeta_n$, a crucial necessary condition is
that for all kinds of fields
the regularized Feynman propagator $\Delta_F^\zeta(x)$ is, for $x\not= 0$, a solution
of the pertinent free field equation. But, as we see from Lemma \ref{mod-KG}, in the case of a
real scalar field this holds only if we deform the Klein-Gordon operator into $d_\zeta=d-2\zeta$
dimensions. Hence, it seems that a $\zeta$-dependent deformation of the free Lagrangian is needed.
In \cite{FRb} gauge theories are incorporated into the Epstein-Glaser framework with the use of the so-called Batalin-Vilkovisky formalism. This allows one to keep track of gauge symmetry also in the regularized theory by means of the regularized quantum master equation (QME). We hope to apply these ideas also in the case of dimensional regularization.
The combinatorial structure we found can be described in Hopf algebraic terms. The structure is similar to the structure found in the approach by Connes and Kreimer. There are, however, also differences. In particular, it turned out to be appropriate to distinguish carefully between the tensor product appearing in the decomposition of disconnected graphs and the composition of linear maps arising from finite renormalizations. In the one-dimensional case these two products coincide, but in order to exploit this fact one has to choose a basis and work with matrix elements.
\begin{appendix}
\section{Regularization in the Epstein-Glaser framework}\label{app:regularization}
The Epstein-Glaser method involves neither a regularization nor divergent counter terms. Nevertheless, one may introduce a regularization and determine the necessary counter terms. This was already discussed in the original paper of Epstein and Glaser, where Pauli-Villars regularization was used. With the help of the concept of the St\"uckelberg-Petermann renormalization group, this fact was formulated in \cite{BDF09} as follows: given a solution $S$ of the EG-axioms and a smooth approximation of the Feynman propagator $\Delta_F^\Lambda\to\Delta_F$ such that the formal S-matrices $S_\Lambda$ can be directly defined, there exists a sequence of renormalization group elements $\mathcal{Z}_\Lambda$ such that $S_\Lambda\circ \mathcal{Z}_\Lambda\to S$. The proof proceeds in the same way as the proof of the main theorem of renormalization \cite{DF04} and was not included in \cite{BDF09}. We therefore present it here in a slightly stronger form.
\begin{thm}
Let $\mathcal{S}$ be a solution of the EG-axioms and let the Feynman propagator be approximated by a sequence of symmetric distributions
$\Delta_F^\Lambda$ which converges in the H\"ormander topology with a scaling degree bounded by that of the Feynman propagator. Let
$\mathcal{S}_\Lambda$ be a formal S-matrix associated to $\Delta_F^\Lambda$. Then there exists a sequence $\mathcal{Z}_\Lambda\in\mathscr{R}$ such that
\[\mathcal{S}_\Lambda\circ \mathcal{Z}_\Lambda\to \mathcal{S} \ .\]
\end{thm}
\begin{proof}
It is convenient to expand the formal S-matrix as a sum over graphs as explained in Section \ref{EG}.
We are going to show that
for each graph $\Gamma$ there exists a sequence of linear maps $Z_{\Gamma,\Lambda}:\mathcal{F}_{\mathrm{loc}}^{\otimes V(\Gamma)}\to (\mathcal{F}^{\otimes V(\Gamma)})_{\mathrm{loc}}$ such that
\[{T}_{\Gamma}=\lim_\Lambda \sum_{P\in\mathrm{Part}(V(\Gamma))}T_{\Gamma_P,\Lambda}\circ\bigotimes_{I\in P}Z_{\Gamma_I,\Lambda}\]
Here $\Gamma_P$ is the graph with vertex set $V(\Gamma_P)=V(\Gamma)$, with all lines connecting different index sets of the partition $P$, and $\Gamma_I$ is the graph with vertex set $V(\Gamma_I)=I$ and all lines of $\Gamma$ which connect two vertices in $I$.
$Z_{\Gamma,\Lambda}$ is recursively defined by $Z_{\Gamma,\Lambda}=\mathrm{id}$ for graphs with one vertex (and no lines, since only graphs without tadpoles are admitted), $Z_{\Gamma,\Lambda}=0$ for EG-reducible graphs and by
\[Z_{\Gamma,\Lambda}=\langle t_{\Gamma,\Lambda},(\mathrm{id}-W_{\Gamma})\delta_{\Gamma}\rangle\]
for EG-irreducible graphs. Here $t_{\Gamma,\Lambda}$ already contains the contributions from subgraphs, i.e.
\[\langle t_{\Gamma,\Lambda},\delta_\Gamma\rangle=\sum_{|P|>1}T_{\Gamma_P,\Lambda}\circ\bigotimes_{I\in P}Z_{\Gamma_I,\Lambda}\ .\]
Due to the fact that $W_\Gamma$ coincides with the identity on elements of $\mathcal{D}(\Delta_\Gamma,Y_\Gamma)$ which vanish at the thin diagonal, $Z_{\Gamma,\Lambda}$
satisfies the locality condition
\[Z_{\Gamma,\Lambda}((F+G)^{\otimes |V(\Gamma)|})=Z_{\Gamma,\Lambda}(F^{\otimes |V(\Gamma)|})+Z_{\Gamma,\Lambda}(G^{\otimes |V(\Gamma)|}) \]
for local functionals $F,G$ with disjoint support.
$\mathcal{Z}_{\Lambda}$ is then defined by
\[\mathcal{Z}_{\Lambda}(F)=\sum_\Gamma \frac{1}{\mathrm{Sym}(\Gamma)}m_{|V(\Gamma)|}\circ Z_{\Gamma,\Lambda}(F^{\otimes |V(\Gamma)|})\]
where the sum extends over all graphs without tadpoles and with vertex sets $V(\Gamma)=\{1,\dots,n\}$ for some $n\in\mathbb{N}$.
$\mathcal{Z}_\Lambda$ then satisfies the locality condition and is thus an element of $\mathscr{R}$.
\end{proof}
\section{Minimal subtraction for the distribution-splitting method}\label{app:EG-splitting}
We assume that the reader is familiar with the distribution-splitting method
(see \cite{EG73,Sch89}). We recall:
\begin{thm}\label{thm:splitting}
A distribution $d\in \mathcal{D}^\prime(\RR^l)\ ,\,\,\,l:=d(n-1)$, with causal support,
$\mathrm{supp}\, d\subset (\bar V_+)^{\times (n-1)}\,\cup\,(\bar V_-)^{\times (n-1)}$,
has a unique solution $\bar{a}\in\mathcal{D}_\lambda'(\RR^l)$, $\lambda:=\mathrm{sd}(d)-l$,
of the splitting problem \eqref{splitting0}; that is, the pointwise product
\begin{equation}\label{splitting1}
\bar a(x):=\Theta(v\cdot x)\,d(x)
\end{equation}
exists in $\mathcal{D}_\lambda'(\RR^l)$. Here, $\Theta$ has to be understood as the weak limit
$\Theta:=\lim_{\epsilon\downarrow 0}\,\chi_\epsilon$, where $(\chi_\epsilon)_{\epsilon>0}$ is
a family of smooth approximations of the Heaviside function with
$\mathrm{supp}\,\chi_\epsilon'\subset[0,\epsilon]$, and
\begin{equation}
v\cdot x:=\sum_{j=1}^{n-1}v_j\cdot x_j\ , \,\quad v_j\in V_+\,\quad\forall j\ .
\end{equation}
\end{thm}
Due to the causal support of $d$, the definition \eqref{splitting1} of
$\bar a$ is independent of the choice of $v_1,...,v_{n-1}\in V_+$.
With this Theorem, a solution $a\equiv a_W\in\mathcal{D}'(\RR^l)$ of the splitting problem \eqref{splitting0}
can be obtained by means of a $W$-projection \eqref{W-projection}:
\begin{equation}\label{splitting2}
\langle a_W,f\rangle :=\langle \bar a,Wf\rangle\ ,\quad
\quad \forall f\in \mathcal{D}(\RR^l)\ .
\end{equation}
Given a splitting solution $a\in\mathcal{D}'(\RR^l)$, a solution
$\dot t\in\mathcal{D}'(\RR^l)$ of the corresponding\footnote{``Corresponding'' means that comparing the field
expansions \eqref{causWick} of $\mathcal{D}^{\boldsymbol{\zeta}}_n$ and $\Tcal^{\boldsymbol{\zeta}}_n\vert_{\text{outside} \Delta_n}$,
the numerical distributions $d$ and $t$ are the coefficients of the same field combination
$f^{\alpha_1}_{\beta_1}(\varphi)(x_1)...f^{\alpha_n}_{\beta_n}(\varphi)(x_n)$.} extension problem
$\mathcal{D}^\prime(\RR^l\setminus\{0\})\ni t\rightarrow \dot t\in \mathcal{D}^\prime(\RR^l)$
is obtained by $\dot t:=a-a'$. Note that $\mathrm{sd}(t)=\mathrm{sd}(d)$.
The distribution $a'\in\mathcal{D}'(\RR^l)$ is inductively given by the time-ordered products
of lower orders, see \cite{EG73,Sch89}. Restricting to $\mathcal{D}_\lambda(\RR^l)$,
we conclude that the unique
splitting solution $\bar a$ (Theorem~\ref{thm:splitting}) and the unique
extension solution $\bar t$ (Theorem~\ref{thm:Extension-sd}) are related by
$\bar t:=\bar a-a'$.
Given functions $(w_b)_{|b|\leq\lambda}$ (where still
$\lambda:=\mathrm{sd}(d)-l\,$) determining a projection $W$ \eqref{W-projection}
and the pertinent splitting and extension solutions
$a_W$ \eqref{splitting2} and $\dot t_W:=\bar t\circ W$, respectively, we define
a map $F$ from the set $S$ of solutions of the splitting problem,
\begin{equation}\label{solutions-splitting}
S=\{a=a_W+\sum_{|b|\leq\lambda}C_b\,\partial^b\delta\,|\,C_b\in\CC\}\ ,
\end{equation}
to the set $E$ of solutions of the extension problem,
\begin{equation}\label{solutions-extension}
E=\{\dot t=\dot t_W+\sum_{|b|\leq\lambda}C_b\,\partial^b\delta\,|\,C_b\in\CC\}\ ,
\end{equation}
by
\begin{equation}\label{bijection}
F(a):=a-a'+\sum_{|b|\leq\lambda}\langle a',w_b\rangle\,(-1)^{|b|}\,\partial^b\delta\ .
\end{equation}
Since
\begin{equation}\label{S->E}
F(a_W+\sum_{b}C_b\,\partial^b\delta)=\dot t_W+\sum_{b}C_b\,\partial^b\delta\ ,
\end{equation}
the map $F$ is a {\it bijection}. To verify the latter equation, we use that
$Ww_b=0\quad\forall |b|\leq\lambda$, hence $\langle a_W,w_b\rangle =0$ and
$\langle \dot t_W,w_b\rangle =0$. Since $F(a_W)$ \eqref{bijection} is an extension solution, it
can be written as $F(a_W)=\dot t_W+\sum_{|b|\leq\lambda}K_b\,\partial^b\delta$ and
with that we obtain
\begin{equation}
(-1)^{|c|}K_c=\langle F(a_W),w_c\rangle=-\langle a',w_c\rangle
+\sum_{|b|\leq\lambda}\langle a',w_b\rangle\,(-1)^{|b|}\,\langle\partial^b\delta,w_c\rangle =0\ ,
\end{equation}
hence $F(a_W)=\dot t_W$, and this implies \eqref{S->E}.
Since for any $\dot t\in E$ there is a projection $W$ \eqref{W-projection}
with $\dot t=\dot t_W$, we conclude that any $a\in S$ is of the form
$a=a_W$ for some projection $W$ \eqref{W-projection}.
Using these facts, our results about minimal subtraction (Sect.~\ref{MSnumerical})
and the differential renormalization formula \eqref{diffrenhom1} can be transformed to the splitting
problem as follows:
\begin{df}[Regularization] With the above notations, a family $\{a^\zeta\}_{\zeta
\in\Omega\setminus\{0\}}$, $a^\zeta\in\mathcal{D}'(\RR^l)$, is a(n) (analytic / finite)
regularization of $d$, if $(a^\zeta-a')_{\zeta\in\Omega\setminus\{0\}}$ is
a(n) (analytic / finite) regularization of the corresponding $t\in\mathcal{D}'(\RR^l\setminus\{0\})$.
\end{df}
More explicitly, in Definition \ref{df:regularisation} the condition
\eqref{eq:regularization} is replaced by
\begin{equation}
\lim_{\zeta\rightarrow0}\langle a^\zeta,g\rangle=\langle \bar{a},g\rangle\ ,\quad
\quad\forall g\in\mathcal{D}_\lambda(\RR^l)\ .
\end{equation}
Analogously to \eqref{regW-2}, we can write $\langle a_W,f\rangle$ as
\begin{equation}
\langle a_W,f\rangle=\langle \bar{a},Wf\rangle=\lim_{\zeta\rightarrow0}
\left[\langle a^\zeta,f\rangle - \sum_{|b|\leq\lambda}
\langle a^\zeta,w_b\rangle\;\partial^b f(0)\right]\ ,\quad f\in\mathcal{D}(\RR^l)\ .
\end{equation}
Again, for an analytic regularization the principal parts of the two terms on the r.h.s.~must
cancel. Therefore, $\pp(a^\zeta)$ is a local distribution,
\begin{equation}
\pp(a^\zeta) = \sum_{|b|\leq\lambda}C_b(\zeta)\;\partial^b\delta\,,\quad\text{with}
\quad C_b(\zeta)=(-1)^{|b|}\pp(\langle a^\zeta,w_b\rangle)\ ,
\end{equation}
and
\begin{equation}
\langle a^\MS,f\rangle :=\lim_{\zeta\rightarrow0} \rp(\langle a^\zeta,f\rangle)
\end{equation}
is a distinguished solution of the splitting problem \eqref{splitting0}
(the '$\MS$-solution').
{\it Differential renormalization} works also for the splitting problem \cite[Sect.~2.2]{Duet96}: let
\begin{equation}
d(x)=\partial_{r_1}...\partial_{r_{\omega+1}}d_1(x)\ ,\quad\quad\omega\in\NN_0\ ,
\end{equation}
where $d_1$ also has causal support and $\mathrm{sd}(d_1)=\mathrm{sd}(d)-(\omega+1)<l$.
Then $d_1$ can be split directly
(Theorem~\ref{thm:splitting}):
\begin{equation}
a_1(x):=\Theta(v\cdot x)\,d_1(x)\in\mathcal{D}'(\RR^l)\ .
\end{equation}
With that a splitting solution $a$ of $d$ is obtained by
\begin{equation}\label{splitting-diffren}
a(x)=\partial_{r_1}...\partial_{r_{\omega+1}}\bigl(\Theta(v\cdot x)\,d_1(x)\bigr)\ .
\end{equation}
Assuming that $d^{\boldsymbol{\zeta}}$ scales homogeneously in $x$
with a non-integer degree $\kappa^{\boldsymbol{\zeta}}$, it follows that $d^{\boldsymbol{\zeta}}$
satisfies \eqref{diffrenhom} (with $P_0\equiv 0\equiv Q_0$). Hence, we can apply \eqref{splitting-diffren}
to split $d^{\boldsymbol{\zeta}}$:
\begin{equation}
a^{\boldsymbol{\zeta}}(x)=\frac{1}{\prod_{k=0}^\omega (2{\bf l}{\boldsymbol{\zeta}}-k)}\,
\sum_{r_1...r_{\omega+1}}\partial_{r_1}...\partial_{r_{\omega+1}}
\Bigl(\Theta(v\cdot x)\,m_{x_{r_1}}...m_{x_{r_{\omega+1}}}\,d^{\boldsymbol{\zeta}}(x)\Bigr)\ .
\end{equation}
Obviously, $a^{\boldsymbol{\zeta}}$ also scales homogeneously with degree $\kappa^{\boldsymbol{\zeta}}$; it
is the only splitting solution with this property. (The latter follows from the fact
that $E$ \eqref{solutions-extension} contains precisely one homogeneous element and that
$S=E+a'$, taking into account that $a'$ is homogeneous with the same degree.)
\\
\\
\end{appendix}
{\bf Acknowledgments.} We profited a lot from stimulating discussions with Jos{\'e} M. Gracia-Bond{\'i}a.
While working on this paper, M.~D.~was mainly at the Max Planck Institute
for Mathematics in the Sciences, Leipzig; he thanks Eberhard Zeidler for the invitations to
Leipzig and for enlightening discussions.
{\small
\bibliographystyle{amsalpha}
\section{Introduction}
Calorimetric studies have long been valuable tools for rigorous tests of physical law, ranging from Joseph Black's early work on latent heat, to measurements of the metallic specific heat showing the inadequacy of Drude's classical model. More recently, the high degree of control available in cold atomic gases has opened up exciting avenues for experimental verification of finite temperature theories of Bose and Fermi gases \cite{Blakie2007}. To date, however, there have been few works experimentally investigating the energy-temperature relationship of a harmonically trapped Bose gas. Pioneering work performed by Ensher~\emph{et al.} relied on extracting both release energy and temperature information from time-of-flight images at different evaporation points \cite{Ensher1996}. This work was extended by Gerbier~\emph{et al.}, whose measurements of the release energy were found to be in good agreement with Hartree-Fock theory for an interacting gas \cite{Gerbier2004}. Gati~\emph{et al.} have measured temperature dependent phase fluctuations of an ideal Bose gas and revealed qualitatively that the system deviates from a classical gas \cite{Gati2006}.
To further study the energy dependence of Bose-Einstein condensate (BEC) thermodynamics, two options present themselves. Firstly, an improvement in the ability to extract thermodynamic information from time-of-flight images could extend the results presented in \cite{Ensher1996,Gerbier2004}. Secondly, an alternative method to study the energy-temperature relationship could be envisioned, allowing the total internal energy of the system to be measured, rather than just the release energy. New methods to study this relationship are emerging, motivated by the recent characterization of the heat capacity of a strongly interacting Fermi gas \cite{Kinast2005,Ku2012}. An attempt to measure the specific heat of an ultracold Bose gas using a time-dependent trapping potential, as well as by heating using laser pulses, has been performed \cite{deJong2013}, although obtaining accurate data was found to be impractical in their system. A recent experiment has extracted information regarding the heat capacity of a Bose gas using global variables \cite{Shiozaki2014}.
In this paper we follow the theoretical proposal of Blakie \emph{et al.} to transfer a known quantity of irreversible work to a BEC and measure the resulting temperature~\cite{Blakie2007}. By utilizing two independent methods, we perform known, precise amounts of work on a~$^{87}$Rb condensate. The resulting temperature is measured after a period of thermalization, giving the transferred energy as a function of temperature. This provides a rigorous test of the energy dependence of the thermodynamics of our system, as energy and temperature measurements are performed independently. Our approach contrasts with that of Ensher \emph{et al.}, who measure their system at differing evaporation points, extracting both energy and temperature information from time-of-flight images. Furthermore, our approach is not sensitive to the ground state energy of the system present at $T=0$, allowing a more direct comparison with the specific heat. Our results from the two methods compare well, both with each other and with Hartree-Fock numerical calculations for an interacting gas.
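As a point of reference for the energy-temperature relationship studied here (this is our own ideal-gas baseline, not the Hartree-Fock calculation used for the actual comparison), the textbook result for a non-interacting Bose gas in a three-dimensional harmonic trap below $T_c$ gives a condensate fraction $N_0/N=1-(T/T_c)^3$ and a thermal energy $E=3\,[\zeta(4)/\zeta(3)]\,N k_B T\,(T/T_c)^3$:

```python
import math

def zeta(s, terms=200000):
    # crude partial-sum approximation of the Riemann zeta function
    return sum(n ** -s for n in range(1, terms))

KB = 1.380649e-23  # Boltzmann constant, J/K

def ideal_bose_harmonic(T, Tc, N):
    """Condensate fraction N0/N and thermal energy E of an ideal Bose
    gas in a 3D harmonic trap below Tc (standard textbook result)."""
    frac = 1.0 - (T / Tc) ** 3
    E = 3.0 * zeta(4) / zeta(3) * N * KB * T * (T / Tc) ** 3
    return frac, E

N, Tc = 2e4, 300e-9   # atom number as in the experiment; Tc of 300 nK is
                      # a hypothetical value chosen only for illustration
for T in (0.5 * Tc, 0.8 * Tc, Tc):
    frac, E = ideal_bose_harmonic(T, Tc, N)
    print(f"T/Tc = {T / Tc:.1f}:  N0/N = {frac:.3f},  "
          f"E/(N kB Tc) = {E / (N * KB * Tc):.3f}")
```

The experiment operates in a regime where interactions matter, which is why the quantitative comparison below is made with Hartree-Fock theory rather than with this ideal-gas curve.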
The paper is organized as follows: Section \ref{sec:exp} reviews the experimental parameters and methods. Sec.~\ref{sec:temp} details the temperature measurements.
The two methods of energy transfer are discussed in Sec.~\ref{sec:grav} and Sec.~\ref{sec:kick}. We close with a discussion and conclusion in Sec.~\ref{sec:dis} and Sec.~\ref{sec:con} respectively.
\section{Experimental parameters}~\label{sec:exp} Our experiment involves a BEC of \mbox{$\sim2\times 10^4~^{87}$Rb} atoms prepared in the \mbox{$\left|F=1;m_F=-1\right\rangle$} ground state, and held in an optical dipole trap~\cite{Wenas2008}. The trap is formed at the intersection of two focused CO$_2$ laser beams, with wavelength 10.6~$\mu$m, and each with a $1/e^2$ radius of 33~$\mu$m. The CO$_2$ laser power is stabilized using a closed-loop feedback system to ensure long-term reproducibility of the trap depth, BEC atom number, and temperature. After loading atoms into the dipole trap from a magneto-optical trap operating on the 780.2~nm $(5s)^2S_{1/2} \rightarrow (5p)^2P_{3/2}$ transition, a 6~second evaporative cooling sequence is used to produce a BEC. We then execute the experimental sequence shown in Figure \ref{fig:1}. The laser power is adiabatically ramped to a higher value over 100~ms using an exponential profile. The deeper potential resulting from this ramp prevents atom loss during the heating process. The adiabaticity of this ramp has been confirmed by ensuring that a negligible non-condensed fraction exists following the ramp and a 100~ms hold time at the final laser power.
We then transfer a precise amount of energy to the system, using one of two methods, before allowing the system to rethermalize for 100~ms.
\begin{figure}[t]
\centering
\includegraphics[width=83mm]{one.eps}
\caption{(Color online) The trap laser power sequence in the experiment. Following the production of the BEC, the laser power is adiabatically ramped up with an exponential profile over 100 ms, increasing the trap depth to $3.3$--$5~\mu\mathrm{K}$. Work is then done on the condensate in one of two ways: (a) the release of the atoms for a time $t_{heat} = 0$--$1000~\mu$s leads to the fall and expansion of the cloud, resulting in increased kinetic and potential energy when the trap is subsequently reinstated; (b) a 300 ns pulse of an off-resonant standing wave leads to diffraction of a fraction of the atoms. Following either of these is a 100 ms period of thermalization, before the condensate is left to expand for 10 ms to allow the momentum distribution to be imaged via time-of-flight.}
\label{fig:1}
\end{figure}
We approximate our optical dipole trap as a harmonic potential characterized by a set of frequencies $\omega_j$ that define the potential in three dimensions. These frequencies are measured through a parametric heating process \cite{Friebel1998}, where the trap depth is modulated sinusoidally for a period of 200~ms with an amplitude of $\sim10$\%
of the total trap depth. Parametric excitation along dimension $j$ occurs for \mbox{$\omega_{mod} = 2\omega_j/n$}, for integer $n$. Measurements of the excitation frequencies allow us to characterize our trap and calculate the critical condensation temperature for an ideal Bose gas $T_c^0$, given by
\begin{equation}
T_c^0 = \frac{\hbar\bar\omega}{k_B}\left[\frac{N}{\zeta(3)}\right]^{1/3},
\end{equation}
where $\bar\omega=(\omega_x\omega_y\omega_z)^{1/3}$ is the geometric mean of the trapping frequencies, $k_B$ is the Boltzmann constant, $N$ is the number of atoms, and \mbox{$\zeta(\alpha) = \sum_{n=1}^{\infty}n^{-\alpha}$} is the Riemann zeta function (e.g. see \cite{Giorgini1996}).
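As a numerical illustration (not part of the original analysis), the sketch below evaluates this expression in Python for parameters of the order of those quoted later in the paper ($N \sim 2.2\times10^4$, $\bar\omega/2\pi \sim 220$~Hz); the specific values are illustrative, and the resulting $T_c^0$ of a few hundred nK sets the temperature scale for what follows.

```python
# Hedged sanity check of the ideal-gas critical temperature
# T_c^0 = (hbar * wbar / k_B) * (N / zeta(3))^(1/3).
import math

hbar = 1.054571817e-34   # J s
kB = 1.380649e-23        # J / K
zeta3 = 1.2020569        # Riemann zeta(3)

def Tc0(N, wbar):
    """Ideal-gas BEC critical temperature for a 3D harmonic trap."""
    return hbar * wbar / kB * (N / zeta3) ** (1.0 / 3.0)

# Illustrative parameters of the order quoted in the paper.
T = Tc0(2.2e4, 2 * math.pi * 220.0)
print(f"T_c^0 ~ {T * 1e9:.0f} nK")
```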
We ensure that the initial system is at zero temperature by evaporating to a point where the thermal fraction can no longer be observed. We find that any further lowering of the trapping potential only leads to strong depletion of the condensate, and conclude that the zero initial temperature condition is satisfied.
In our system, the interaction energy of the initial BEC far outweighs the kinetic energy, as \mbox{$Na_s/a_{ho}\gg1$}, where $a_s$ is the s-wave scattering length, \mbox{$a_{ho} = \sqrt{\hbar/m\bar\omega}$} is the characteristic harmonic oscillator length, and $m$ is the mass of an atom~\cite{Giorgini1997}. Typically we find that \mbox{$Na_s/a_{ho}> 100$}, and can assume that the Thomas-Fermi approximation applies.
\section{Temperature measurement}
\label{sec:temp} The temperature and number of atoms are measured using time-of-flight imaging with resonant absorption, and the properties of the atomic clouds are inferred from these images. Following an experimental sequence, the dipole trap containing the atoms is rapidly switched off using an acousto-optic modulator, and the atoms are allowed to freely expand for 10~ms. After a repumping pulse, the atoms are probed with a $100~\mu \mathrm{s}$ pulse on resonance with the \mbox{$\left|F=2\right\rangle \rightarrow\left|F^\prime=3\right\rangle$} transition. The probe light has an intensity of \mbox{$1~\mathrm{mW/cm^2}$}, which is less than the saturation intensity of \mbox{$1.6~\mathrm{mW/cm^2}$}.
When processing time-of-flight images we first subtract the average background of all images and apply a fringe-removal algorithm \cite{Ockeloen2010}, improving our signal-to-noise ratio and ability to detect low density components of the expanded atomic cloud. The images are then integrated along the $x$- and $y$-dimensions to obtain two one-dimensional density profiles, from which we extract both the temperature and the number of atoms of our sample.
Above the critical temperature, the expansion of an ideal Bose gas evolves according to a simple scaling relation, where we can define the effective temperature after an expansion time $t$ in dimension $i = x,y$ as
\begin{equation}
T_i = \frac{m}{k_B}\frac{\omega_i^2\sigma_i^2(t)}{1+(\omega_it)^2}.
\end{equation}
Here, $\sigma_i^2(t)$ is the variance of the resulting distribution as a function of the expansion time. Far above the experimentally observed critical temperature $T_c$, this can be determined by a fit to a Gaussian function. Close to $T_c$ and below, the density distribution of the thermal cloud is predominantly given by the Bose distribution, and by setting the chemical potential to zero, it can be described by a Bose-enhanced Gaussian \cite{Ketterle1999}. In the hydrodynamic regime, there is the possibility of anisotropic expansion for a very elongated trap, which occurs when the mean free path of the atoms is less than the dimensions of the trap (e.g. see \cite{Gerbier2004}). For the experiments presented in this paper our trap is nearly isotropic, as we have $\omega_x \approx 1.4\omega_y$, with $\bar\omega/(2\pi) = (220\pm5)~\mathrm{Hz}$ and $(271\pm5)~\mathrm{Hz}$ for the two experiments. In addition, the 10~ms expansion time used sets $(\omega_it)^2\gg1$, and hence we can take the expansion of the ideal thermal component to be isotropic, as observed experimentally. We are therefore able to assume that $T_x = T_y = T$.
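To illustrate this thermometry relation, the Python sketch below evaluates the expression for a hypothetical Gaussian width after the 10~ms expansion; the trap frequency and cloud width are representative values, not measured data. It also confirms that $(\omega_i t)^2 \gg 1$ for these parameters.

```python
# Hedged illustration of the ballistic-expansion relation
# T_i = (m / k_B) * w_i^2 * sigma_i^2(t) / (1 + (w_i t)^2)  for 87Rb.
import math

m = 86.909 * 1.66053906660e-27  # 87Rb atomic mass, kg
kB = 1.380649e-23               # J / K

def temperature(sigma, omega, t):
    """Temperature inferred from a Gaussian width sigma after expansion time t."""
    return (m / kB) * omega**2 * sigma**2 / (1 + (omega * t) ** 2)

omega = 2 * math.pi * 220.0  # representative trap frequency, rad/s
t = 10e-3                    # 10 ms time of flight, as in the paper
sigma = 44e-6                # m, an illustrative fitted width (hypothetical)
T_est = temperature(sigma, omega, t)
print(f"T ~ {T_est * 1e9:.0f} nK, (omega*t)^2 = {(omega * t) ** 2:.0f}")
```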
Below the critical temperature there exists a non-negligible condensed fraction, and as such the assumption of a ballistic expansion for the entire cloud is no longer valid. We therefore extract the temperature using the method presented in \cite{Szczepkowski2009}.
The central interacting region is systematically excluded from our measurements by performing multiple fits of a Bose-enhanced Gaussian to the wings of the profile, with a varying cut-off width for the excluded central region. For each fit, the width of this region is determined by a scaling factor $S$, such that the region $|x_i|\leq SR_i$ is excluded from the fit, as shown in Figure \ref{fig:splot}(a). Here $R_i$ is the Thomas-Fermi radius of the condensed fraction in dimension $x_i$, with $i=x,y$. $S$ is chosen such that the region excluded for the fit is larger than the width of the condensed fraction, as sampling this region would cause us to systematically underestimate the temperature. On the other hand, if $S$ is too large, we are limited by the signal-to-noise ratio of our images. There exists an intermediate region where the measured temperature depends only weakly on the width of the excluded region, which we typically find to be $1.1\leq S \leq 1.4$. We infer the temperature from this region, as illustrated in Figure \ref{fig:splot}(b).
\begin{figure}[t]
\centering
\includegraphics[width=83mm]{two.eps}
\caption{An example temperature measurement below $T_c$. (a) The bimodal fit applied to a sample along the $y$-dimension, where we have inferred the temperature from a fit to the grey shaded region, here with $S=1.2$. Here the position $y$ has been scaled by the Thomas-Fermi radius $R_y$. (b) The effect of scanning $S$ on the temperature measurement for a sample image. The grey shaded region indicates the typical values of $S$ used to infer the temperature.}
\label{fig:splot}
\end{figure}
This technique is only valid when the extent of the thermal profile is much larger than that of the condensed fraction, and thus fails at very low temperatures. In our experiment this is more pronounced in one direction, due to the higher oscillator frequency in that dimension ($\omega_x \approx 1.4\omega_y$). We therefore choose to extract temperature information from the $y$-dimension, where the resulting momentum distribution of the condensate fraction is narrower, and apply a temperature cut-off at $T\leq 0.3T_c$. An investigation of the very low temperature region ($T<0.3T_c$) would require a new thermometry technique, such as that recently presented by Olf~\emph{et al.}, which analyzes the decoherence of a quantum superposition of spin states and allows temperatures as low as $0.02T_c$ to be measured \cite{Olf2015}.
To check the accuracy of our temperature measurement well below $T_c$, we have simulated the mean-field effect of the condensate on the thermal cloud as the gas expands. We model the expansion of the condensate using hydrodynamic scaling \cite{Castin1996}. For the thermal cloud we use two approaches, Monte Carlo with $10^7$ test particles \cite{Jackson2002}, and a scaling approach after \cite{Hu2003}. In both cases we find that the effect on the temperature measurement is less than 10\%.
For higher temperatures, still below $T_c$, the magnitude of this effect is decreased due to the smaller condensed fraction. We estimate the uncertainty on $T$ here to be 5\%, accounting for the uncertainty in length calibration, possible collisional effects during expansion, and any variance of the temperature measurement depending on the specific choice of $S$.
The atom number is obtained from a bimodal fit to the absorption profile \cite{Ketterle1999}. For temperatures below $T_c$, we perform a bimodal fit of a Bose-enhanced Gaussian to the thermal fraction, and a Thomas-Fermi profile to the condensed fraction using the method presented in \cite{Szczepkowski2009}. Integration over the entire bimodal profile gives the total optical density $\sum n_{OD}$, which is directly proportional to the number of atoms $N$, given by $N =\sum n_{OD}A_p/\sigma$. Here $A_p$ is the area of a pixel, and $\sigma = \alpha_s\sigma_0$ is the experimental absorption cross-section, where $\sigma_0=3\lambda_P^2/(2\pi)$ is the resonant cross-section, with $\lambda_P$ the wavelength of the probe laser beam used in the imaging process, and $\alpha_s$ is a scaling factor that accounts for the Clebsch-Gordan coefficients combined with the experimental distribution of magnetic substates following repumping. For an even distribution across the magnetic substates, as would be the case after repumping, $\alpha_s=0.47$. Experimentally, we independently determine $\alpha_s$ from images of clouds with a temperature close to $T_c$. We then scale our measured number of atoms such that the measured temperature agrees with the theoretical critical temperature $T_{int}$, which we have determined to be \mbox{$T_{int} = 0.94T_c^0$} from Hartree-Fock numerical simulations. Due to the nature of the interacting transition, the condensed fraction does not go abruptly to zero as the temperature crosses $T_{int}$, as in the ideal case. Using this method we determine that $\alpha_s = 0.45\pm 0.07$.
\section{Energy transfer via gravity and expansion.}\label{sec:grav}~We utilize two separate methods for transferring energy to our system. The first method was proposed by \mbox{Blakie {\it et al.} \cite{Blakie2007}}. Here we consider an irreversible work process: a Bose-Einstein condensate is released from the trapping potential and allowed to expand under the influence of gravity. After a time $t_{heat}$ (typically $0-1000~\mathrm{\mu s}$), the atoms are recaptured, and allowed to rethermalize.
There are three contributions to the amount of work done on the atoms during this process. One, the atoms fall under gravity (acceleration $g$) and gain kinetic energy; two, the displacement $h=\frac{1}{2}gt_{heat}^2$ from the fall leads to a potential energy gain when the trap is reinstated; three, the larger cloud size after the expansion results in greater potential energy when the trap potential is restored. Energy from the first two contributions will be coupled to a center-of-mass oscillation in the potential in the $z$-direction, known as the ``Kohn'' mode \cite{Dobson1994}. Although this mode will theoretically persist in a harmonic trap, we observe that these oscillations are damped in the thermalization process after $\sim50$~ms. We attribute this to anharmonicities in the Gaussian laser trap profile \cite{Pantel2012}. Due to the observed damping of the Kohn mode, we consider this energy to be completely available for rethermalization, and to have the form
\begin{equation}
\label{eq:Edrop}
\mathcal{E}_{drop} = N\left(\frac{1}{2}m\omega_z^2h^2 + mgh\right),
\end{equation}
\noindent where $\omega_z$ is the trap frequency parallel to the direction of gravity.
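For orientation, the per-atom energy of Eq.~(\ref{eq:Edrop}) can be evaluated directly. The Python sketch below uses illustrative values ($t_{heat} = 500~\mu$s, and $\omega_z$ set equal to the quoted mean frequency) rather than the exact experimental parameters.

```python
# Hedged estimate of the per-atom drop energy of Eq. (3):
# E_drop/N = (1/2) m wz^2 h^2 + m g h,  with h = (1/2) g t_heat^2.
import math

m = 86.909 * 1.66053906660e-27  # 87Rb atomic mass, kg
kB = 1.380649e-23               # J / K
g = 9.81                        # m/s^2 (nominal local value, assumed)

def E_drop_per_atom(t_heat, omega_z):
    h = 0.5 * g * t_heat**2  # fall distance during t_heat
    # m*g*h equals the kinetic energy (1/2) m (g t_heat)^2 gained in the fall;
    # the first term is the potential energy of the displacement h in the trap.
    return 0.5 * m * omega_z**2 * h**2 + m * g * h

E = E_drop_per_atom(500e-6, 2 * math.pi * 220.0)  # illustrative parameters
print(f"E_drop/N ~ {E / kB * 1e9:.0f} nK per atom")
```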
\begin{figure}[t]
\centering
\includegraphics[width = 83mm]{three.eps}
\caption{(Color online) Simulation results showing the energy per atom for a mean-field expansion of the ground state of our harmonic trap as a function of $t_{heat}$. (a) The energy contributions due to a free expansion without gravity about the trap minimum, including the ground-state energy. $\mathcal{E}_{exp}$ is the sum of the potential, interaction, and kinetic terms. (b) The contributions to the total energy transferred to the system, excluding any initial ground state energy. Here $\mathcal{E}_{exp}$ is the same curve shown in (a), minus the initial ground state energy, and $\mathcal{E}_{drop}$ has been split into its potential and kinetic components.}
\label{fig:simTest}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width = 83mm]{four.eps}
\caption{(Color online) Experimental data for the gravity experiment plotted with theoretical curves. Experimental parameters are $N = (2.2\pm0.3)\times 10^4$ and $\bar\omega/2\pi = 220\pm5$~Hz. The interacting gas curve is a Hartree-Fock numerical simulation, with the $T=0$ ground state energy subtracted, to represent the transferred energy. The shaded region represents our uncertainty in determining $T_c^0$ for the finite-size effects theoretical curve.}
\label{fig:2}
\end{figure}
To calculate the energy acquired from expansion of the cloud, we assume that the atoms will undergo a self-similar expansion, with widths evolving according to \mbox{$R_j(t) = \lambda_j(t)R_j(0)$} \cite{Castin1996}. Here $R_j(t)$ are the Thomas-Fermi profile condensate widths in directions $j = x,y,z$, and the evolution of $\lambda_j$ is given by
\begin{equation}
\ddot\lambda_j = \frac{\omega^2_j(0)}{\lambda_j\lambda_x\lambda_y\lambda_z}
\end{equation}
\noindent with $\lambda_j(0) = 1$. Blakie {\it et al.}~\cite{Blakie2007} have shown that the energy transferred to the system due to a symmetric expansion about the trap minimum is given by
\begin{equation}\label{eq:Blakie1}
\mathcal{E}_{exp}=\frac{N\mu_{TF}}{7}\left(2-5\bar\gamma^{6/5}+\sum_{j=1}^3\gamma_j^2 \lambda_j^2(t_{heat})\right),
\end{equation}
\noindent where $\mu_{TF}$ is the Thomas-Fermi chemical potential, \mbox{$\gamma_j = \omega^\prime_j/\omega_j$} is the ratio of trapping frequencies before and after $t_{heat}$, and \mbox{$\bar{\gamma} = \left(\gamma_x\gamma_y\gamma_z\right)^{1/3}$}. In our experiment we constrain the trapping frequencies both before and after $t_{heat}$ to be identical, such that $\gamma_j=1$, reducing this expression to
\begin{equation}\label{eq:Blakie2}
\mathcal{E}_{exp}=\frac{N\mu_{TF}}{7}\left(-3+\sum_{j=1}^3 \lambda_j^2(t_{heat})\right).
\end{equation} The various contributions to $\mathcal{E}_{exp}$ are shown in Figure \ref{fig:simTest}(a).
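The scaling equations and Eq.~(\ref{eq:Blakie2}) can be evaluated with a few lines of numerical integration. The sketch below is illustrative only: the trap frequencies are representative values chosen to be roughly consistent with $\bar\omega/2\pi \approx 220$~Hz, not the measured ones, and a simple semi-implicit Euler scheme stands in for a proper ODE solver.

```python
# Hedged numerical sketch: integrate the self-similar scaling equations
# lambda_j'' = w_j^2 / (lambda_j * lambda_x * lambda_y * lambda_z)
# from lambda_j(0) = 1, lambda_j'(0) = 0 (symmetric expansion, no gravity),
# then evaluate E_exp/(N mu_TF) = (1/7) * (-3 + sum_j lambda_j^2), Eq. (6).
import math

def expansion_energy(omegas, t_heat, steps=20000):
    lam = [1.0, 1.0, 1.0]
    vel = [0.0, 0.0, 0.0]
    dt = t_heat / steps
    for _ in range(steps):  # semi-implicit Euler: update velocity, then position
        prod = lam[0] * lam[1] * lam[2]
        acc = [w * w / (l * prod) for w, l in zip(omegas, lam)]
        vel = [v + a * dt for v, a in zip(vel, acc)]
        lam = [l + v * dt for l, v in zip(lam, vel)]
    return (-3.0 + sum(l * l for l in lam)) / 7.0

# Representative (hypothetical) frequencies with geometric mean near 2*pi*220 Hz.
omegas = [2 * math.pi * f for f in (280.0, 200.0, 190.0)]
e = expansion_energy(omegas, 1000e-6)
print(f"E_exp/(N mu_TF) ~ {e:.2f} after 1000 us")
```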
The total amount of energy transferred to the system and available for rethermalization is then the sum of the contributions $\mathcal{E}_{drop}$ and $\mathcal{E}_{exp}$.
We wish to reduce the dependence of our energy calculation on the absolute number of atoms $N$, which has an uncertainty of 15\% due to the error in $\alpha_s$. We therefore choose to calculate the energy per particle, rather than the total energy transferred to the atoms. In this scenario, only $\mathcal{E}_{exp}/N$ maintains a dependence on $N$, with $\mu_{TF}$ proportional to $N^{2/5}$. This term accounts for less than 20\% of the total energy in our experiment, as shown by a numerical simulation in Figure \ref{fig:simTest}. Here we have calculated the ground state of our harmonic potential in three dimensions by solving for the ground state of the Gross-Pitaevskii equation for our trap parameters. This initial condition is then allowed to expand under gravity for up to 1000~$\mu$s using a split-step Fourier method.
\begin{figure}[t]
\centering
\includegraphics[width = 83mm]{five.eps}
\caption{(Color online) Experimental determination of the energy of a kick. The top image represents a time-of-flight image. The bottom image shows the integral of the time-of-flight image along the $x$-dimension (vertical in image), allowing us to determine the fraction of atoms at each momenta by fitting a Gaussian profile to each peak.}
\label{fig:kickTest}
\end{figure}
The uncertainty in $\mathcal{E}_{drop}/N$ is mainly due to the uncertainty in the measurement of the trap frequency $\omega_z$, used for determining the potential energy contribution in equation \ref{eq:Edrop}, giving an uncertainty of 4\% for this component. The accuracy of the kinetic component of $\mathcal{E}_{drop}/N$ is set by our knowledge of the local value of $g$, as well as by the timing accuracy of our experiment. This component accounts for more than half of the total energy transferred to the system, and has an uncertainty \mbox{of $<1$\%}.
Experimental results are shown in Figure \ref{fig:2}, and have been plotted with a Hartree-Fock calculation for this system. As we are only interested in the energy transferred to the system, the ground state energy present at $T=0$ is subtracted from the theoretical curve. This allows us to make close comparisons with the specific heat of the system, which is defined as the temperature derivative of the energy per particle. The theoretical curve for an ideal gas with finite-size effects includes a shaded region representing the uncertainty in our determination of the critical temperature $T_c^0$, which is mainly due to the uncertainty in the absolute measurement of $N$. Notwithstanding the 15\% error in $N$, the Hartree-Fock numerical simulation gives a better description of the behavior of our data than an ideal gas having finite-size effects.
\section{Energy transfer via an optical phase grating}\label{sec:kick}~To support our previous evidence, which relies on a calculation of the work done on the BEC, we utilize a separate method which allows for a direct measurement of the transferred energy. Here, we transfer energy to the system using a single pulse of an optical standing wave, as indicated in Figure \ref{fig:1}(b), before allowing the system to rethermalize. Using a setup analogous to the atom-optics kicked rotor pioneered by Raizen and co-workers \cite{Moore1995,Raizen1999}, we apply a short 300 ns pulse to the atoms, diffracting the system into quantized momentum orders. An additional advantage of this method is that the Kohn mode is naturally not present, as the diffraction is symmetric about zero momentum. We use a pair of counter-propagating laser beams to form our standing wave, with the beams red-detuned by $120$~GHz from the \mbox{$\left|F=1\right\rangle \rightarrow\left|F^\prime=2\right\rangle$} resonant transition, such that the effects of spontaneous emission are negligible. The atoms are diffracted into quantized momentum states, with the $n$th momentum state having momentum $2n\hbar k_L$, where $k_L$ is the wavenumber of the laser and $n$ is an integer.
\begin{figure}[t]
\centering
\includegraphics[width = 83mm]{six.eps}
\caption{(Color online) Logarithmic plot of experimental data and theoretical curves. The theoretical curves represent energy transferred to the system only, and any ground state energy at $T=0$ has been neglected. Parameters for the gravity experiment are $N = (2.2\pm0.3)\times 10^4$, $\bar\omega/2\pi = 220\pm5~\mathrm{Hz}$ and $t_{heat} = 0$--$1000~\mu$s. Parameters for the kicking experiment are $N = (2.1\pm0.3)\times 10^4$, $\bar\omega/2\pi = 271\pm5~\mathrm{Hz}$ and $k = 0\rightarrow1.9$.}
\label{fig:3}
\end{figure}
In a separate calibration experiment, we quantitatively measure the amount of energy transferred as a function of kick-strength, given by $k = \tau\Omega^2/\delta$, where $\tau$ is the pulse length, $\Omega$ is the Rabi frequency of a single beam and $\delta$ is the detuning. Experimentally, $k$ is varied by controlling the intensity of the laser for each pulse using an acousto-optic modulator, and we utilize $k = 0\rightarrow1.9$. We measure the resulting momentum distribution by turning off the trap immediately after the kick and observing the atoms after a 10~ms expansion time, as shown in Figure \ref{fig:kickTest}. The total energy transferred to the system is computed as
\begin{equation}
\mathcal{E}_{kick} = \frac{2N\hbar^2 k_L^2}{m}\sum_n f_n n^2,
\end{equation}
\noindent where $f_n$ is the fraction of atoms in the $n$th momentum state, for integer $n$. We again scale this energy measurement by $N$ to remove the dependence of our energy measurement on our determination of the absolute number of atoms. The uncertainty in $\mathcal{E}_{kick}/N$ is then due to shot-to-shot variation in intensity of the kicking laser and the temporal width of the pulse. We perform multiple calibration runs to obtain an average transferred energy, and allow the variation to be experimentally represented as a variation in the resulting temperature after thermalization. We find that the shot-to-shot variation in $\mathcal{E}_{kick}/N$ can be up to 10\%.
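For orientation, the kick-energy sum above can be evaluated for an illustrative diffraction pattern; the momentum-state fractions $f_n$ used in the Python sketch below are hypothetical, not measured values.

```python
# Hedged example of the per-atom kick energy,
# E_kick/N = (2 hbar^2 k_L^2 / m) * sum_n f_n * n^2, for 87Rb at 780.2 nm.
import math

hbar = 1.054571817e-34           # J s
kB = 1.380649e-23                # J / K
m = 86.909 * 1.66053906660e-27   # 87Rb atomic mass, kg
kL = 2 * math.pi / 780.2e-9      # laser wavenumber, 1/m

def kick_energy_per_atom(fractions):
    """fractions: {n: f_n} over diffraction orders n (momenta 2*n*hbar*kL)."""
    return (2 * hbar**2 * kL**2 / m) * sum(f * n * n for n, f in fractions.items())

f = {-1: 0.15, 0: 0.70, 1: 0.15}  # illustrative symmetric diffraction pattern
E = kick_energy_per_atom(f)
print(f"E_kick/N ~ {E / kB * 1e9:.0f} nK per atom")
```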
Once $\mathcal{E}_{kick}/N$ has been calibrated we repeat the experiment, but allow the atoms to rethermalize in the trap for 100~ms following the kick, before imaging the system via time-of-flight. The experimental data are shown in Figure \ref{fig:3}.
\begin{figure}[t]
\centering
\includegraphics[width = 83mm]{seven.eps}
\caption{(Color online) Theoretical curves showing the specific heat, defined as the temperature derivative of the energy per particle, for various theoretical treatments of a harmonically trapped Bose gas.}
\label{fig:hc}
\end{figure}
\section{Discussion}
\label{sec:dis}
Figure \ref{fig:3} shows both sets of experimental data on a logarithmic axis. Here we scale the temperature by the ideal gas critical temperature $T_c^0$, and the energy per particle by the characteristic energy of the transition $k_BT_c^0$. The data clearly deviates from the classical prediction of $E = 3Nk_BT$, where we would expect $E = k_BT/2$ in each of the three potential and three kinetic degrees of freedom from the equipartition theorem. There is also a deviation from the prediction for an ideal Bose gas. This deviation is beyond finite-size corrections \cite{Grossmann1995,Ketterle1996}, and is consistent with Hartree-Fock simulations of an interacting gas. Corrections to the Hartree-Fock approximation, such as in Hartree-Fock-Bogoliubov-Popov theory \cite{Giorgini1997}, are very small for our system, and impractical to measure. Theoretical studies have shown that the Hartree-Fock approximation can accurately reproduce the thermodynamic properties of a trapped Bose gas \cite{Krauth1996,Holzmann1999}, and we have confirmed through simulation that these two theories give very similar results for our system.
The specific heat is defined as the temperature derivative of the energy per particle, in our case with the external potential held constant. Taking numerical derivatives of our experimental data is impractical. We can instead make comparisons with the specific heat extracted from derivatives of the theoretical curves, shown in Figure \ref{fig:hc}. We find that our experiments support the notion that the presence of interactions will tend to increase the specific heat at low temperatures when compared to an ideal Bose gas. This can be understood as a consequence of the repulsion of the thermal atoms by a large condensate fraction. The effective potential seen by the thermal atoms is modified to a ``Mexican hat'' type potential \cite{Tammuz2011}, increasing the volume occupied by the thermal atoms, thereby increasing the density of states. Consequently, this allows the otherwise ``saturated'' thermal cloud to hold more atoms, and hence more energy.
\section{Conclusion}
\label{sec:con}
We have directly measured the energy-temperature relationship of an interacting, harmonically trapped, ultracold Bose gas. Two separate calorimetric techniques have produced similar results; namely, that interactions increase the specific heat relative to the ideal gas prediction, which is proportional to $T^3$ below $T_c^0$. We have performed quantitative measurements, utilizing independent determinations of the energy and temperature, that are well described by Hartree-Fock theory. Future research could involve a thorough investigation of the effect of interactions on the specific heat by employing Feshbach resonances, an investigation into ways to reduce error in the experiment, and a detailed investigation of the thermalization process.
We are grateful to P.~B.~Blakie for useful discussions. This research was supported by the Marsden Fund, administered by the Royal Society of New Zealand on behalf of the New Zealand Government.
2,877,628,089,799 | arxiv | \section{Introduction}
When a plasma comprised of different ionic species crystallizes, the newly-formed solid generally has a different composition than the coexisting liquid. This composition change is characterized by a phase diagram. Accurate phase diagrams of multicomponent plasmas are essential for the study of dense astrophysical objects, and in particular for white dwarf stars. The crystallization of their C/O interiors leads to the formation of a solid core enriched in O \citep{stevenson1980,mochkovitch1983,tremblay2019}. The separation of the C and O ions releases gravitational energy that delays the cooling of white dwarf stars by $\simeq 1\,{\rm Gyr}$ \citep{garcia1988b,segretain1994,isern1997,isern2000,althaus2012,blouin2020}, with important implications for the use of white dwarfs as cosmic clocks \citep{garcia1988,fontaine2001}. A similar phase separation process is expected to occur for other minor chemical species in white dwarf interiors (chiefly $^{22}$Ne and $^{56}$Fe) \citep{isern1991,xu1992,segretain1996}. Phase diagrams of dense multicomponent plasmas are also needed for the study of accreting neutron stars \citep{horowitz2007}.
Over the last few decades, several methods have been proposed and used to map the phase diagrams of dense astrophysical plasmas. They can roughly be classified into three categories: density-functional methods \citep{barrat1988,segretain1993}, Monte Carlo-based (MC) techniques \citep{ichimaru1988,ogata1993,dewitt1996,dewitt2003,medin2010}, and molecular dynamics (MD) approaches \citep{horowitz2007,horowitz2010,hughto2012,schneider2012}. Density-functional techniques, which rely on analytical models of the free energies of the relevant phases, are inherently more approximate than MC and MD simulation techniques. MC-based methods generally consist of constructing analytic fits to results from MC simulations in the canonical (NVT) ensemble in order to obtain an analytical model for the Helmholtz free energies of the coexisting phases. The free energies of the liquid and solid are then compared to identify the location of the coexistence curve. A major limitation of this approach is that the resulting phase diagram is sensitive to the (somewhat arbitrary) choices of analytic functions used to interpolate the MC data. This can even affect the qualitative shape of the phase diagram. For example, due to minute differences in their interpolation functions for the internal energies of binary ionic mixtures (BIMs), ref.~\cite{ogata1993} concludes that the C/O phase diagram has an azeotropic shape, while ref.~\cite{dewitt1996} finds that it is spindle shaped. This extreme sensitivity is due to the very small differences between the free energies of the liquid and solid phases near the coexistence conditions, and highlights the need for accurate ``at-parameter'' calculations.
Two-phase MD methods such as those used by the Horowitz \textit{et al.} group \citep{horowitz2007,horowitz2010,hughto2012,schneider2012} have the advantage of not requiring any interpolation. Typically, a liquid and a solid phase are initially put in contact and then evolved in time in the microcanonical or canonical ensemble. The particles diffuse through the liquid--solid interface and eventually a state of equilibrium is reached, which makes it possible to pinpoint the coexistence conditions that characterize the liquid--solid transition. However, a practical drawback of MD approaches is their steep computational cost. Because of this, the phase diagram can only be partially sampled, leading to rather coarse coexistence curves (e.g., see Fig.~2 of ref.~\cite{horowitz2010}). This can be a problem for astrophysical applications. For instance, the coarse sampling of the melting curves of the C/O phase diagram of ref.~\cite{horowitz2010} leads to sizeable uncertainties on the gravitational energy released by the O sedimentation process in white dwarfs. Similarly, a fine sampling of phase diagrams is required to precisely identify the location of interesting features such as an azeotropic or eutectic point. In addition, MD simulations with liquid--solid interfaces are subject to detrimental finite-size effects. While these artifacts can in principle be mitigated using a large enough number of particles, the required number of particles and the cost of the corresponding simulations often prohibit detailed studies of this type \cite{horowitz2010}.
Previous approaches (both MC- and MD-based) commonly assumed a constant volume during the phase transition. This simplifies the problem: the electronic background does not need to be treated explicitly as the electronic density remains constant. Only the screening effect of the electrons on the bare ion--ion interactions was usually included (using a Yukawa potential instead of a Coulomb potential). But this simplification is not strictly correct. Phase transitions occur at constant pressure and are accompanied by volume changes. That being said, in the particular case of dense astrophysical plasmas, where the total pressure is dominated by the degenerate electron gas, the constant volume approximation is well justified (we demonstrate this point explicitly in Appendix~\ref{sec:validation}, see also refs.~\cite{ogata1993,medin2010}).
An alternative technique to calculate phase diagrams is the so-called Gibbs--Duhem integration method (we prefer the term ``Clapeyron integration method''), where the coexistence curve is obtained by direct integration of the appropriate Clapeyron relation \cite{kofke1993a,kofke1993b}. As discussed below, this new approach to calculate the phase diagrams of dense plasmas is largely free of the limitations that characterize the competing methods outlined above. While the method has so far been applied with great success to simple models of neutral mixtures \citep{agrawal1995a,agrawal1995b,agrawal1995c,hitchcock1999,
lamm2001,lamm2001b,lamm2002,lamm2004}, it has never been used for electron--ion plasmas. Adapting this method to charged systems is not as simple as substituting one interaction potential for another. In particular, electrons must be explicitly included in the calculations, since the method involves volume changes and ionic identity changes that affect the electronic background. This added complexity has its advantages (it is physically more satisfying to perform all calculations at constant pressure rather than at constant volume), but requires additional care.
The central goal of this paper is to explain how the Clapeyron integration technique can be adapted to map the phase diagrams of dense plasmas. This work is a companion paper to ref.~\cite{blouin2020}, where we presented the astrophysical implications of our new C/O phase diagram; here we provide a detailed account of the method. Because the Clapeyron integration approach is not commonly used, we begin in Section~\ref{sec:gd} with a self-contained and pedagogical introduction to this technique instead of simply referring the reader to the original papers; for clarity and completeness, a number of technical details are given in the appendices. The application to electron--ion plasma mixtures requires some care and the needed adaptations are highlighted. Section~\ref{sec:gd} also includes an illustration of the method and its inner workings using a simple analytic model of plasma mixtures. After this general discussion, we delve into the specifics of the plasma model that we use to compute the phase diagrams of dense plasmas (Section~\ref{sec:theory}, with additional details in Appendices~\ref{sec:partfunc} and \ref{sec:yukawa}). We then describe the MC method that we have implemented for this purpose (Section~\ref{sec:mc_code}). Extensive tests of our code are presented in Appendix~\ref{sec:validation}. As an example application of this new simulation capability, we present the calculation of the phase diagram of the C/O interior of white dwarf stars in Section~\ref{sec:CO}, where we also provide useful analytic fits for implementation in white dwarf models. Finally, a short summary is given in Section~\ref{sec:conclusion}.
\section{Clapeyron Equation Integration Method}
\label{sec:gd}
\subsection{Qualitative Overview of the Method}
A Clapeyron equation is a relation between the intensive thermodynamic variables that characterize the conditions of coexistence between two or more thermodynamic phases of a physical system. The present method consists in numerically integrating this Clapeyron equation along the coexistence curve. At each equilibrium point along the coexistence curve, the thermodynamic properties of each coexisting phase are calculated simultaneously, but separately (we perform this step using MC simulations, see Section~\ref{sec:mc_code}). This makes it possible to numerically evaluate and integrate the Clapeyron equation from one state point on the coexistence curve to a neighboring point on the curve. Pairs of MC simulations for the liquid and solid phases are computed in succession until the coexistence line is fully mapped.
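To make the stepping procedure concrete before turning to the formalism, the following purely illustrative Python sketch integrates the familiar one-component relation $dP/dT = L_m/(T\,\Delta v)$ assuming a constant latent heat and volume change (a toy assumption chosen so that the exact answer is known analytically; in the actual method the right-hand side is re-evaluated at each state point from a pair of single-phase MC simulations):

```python
# Toy Clapeyron integration with constant latent heat L_m and volume
# change dv (illustrative values only).  For this toy the analytic
# solution is P(T) = P0 + (L_m/dv) * ln(T/T0).
import math

L_m = 1.0e4          # latent heat per particle (arbitrary units)
dv = 2.0e-2          # volume change per particle (arbitrary units)
T0, P0 = 100.0, 1.0  # starting coexistence point

def clapeyron_rhs(T, P):
    # In the real method, L_m and dv would be measured at (T, P) from
    # one MC simulation of the liquid and one of the solid.
    return L_m / (T * dv)

# Step along the coexistence curve with a simple Euler scheme.
n_steps = 10_000
T, P = T0, P0
dT = 50.0 / n_steps
for _ in range(n_steps):
    P += clapeyron_rhs(T, P) * dT
    T += dT

P_exact = P0 + (L_m / dv) * math.log(T / T0)
print(P, P_exact)
```

The stepping loop is the essence of the method; everything else in this paper concerns how the right-hand side is evaluated for a dense plasma.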
The thermodynamic properties of the system only have to be evaluated at the coexistence conditions, meaning that no uninteresting state points are calculated. This has three major advantages compared to the above-mentioned standard methods where free energy models are built by interpolating between many intermediate state points: (1) it is more efficient from a computational point of view (fewer states to simulate), (2) no arbitrary interpolation is required, thereby increasing the numerical accuracy of the calculation, and (3) all thermodynamic properties of the system at the phase transition are readily available at no additional cost.
The first two advantages given above are also shared with the two-phase MD approach. However, the Clapeyron integration approach is also free of what is probably the greatest limitation of the two-phase MD technique. Since each phase is treated independently (at each coexistence point, one MC simulation is performed for the liquid phase and another one for the solid phase), there is no liquid--solid interface to simulate. This eliminates a major contributor to detrimental finite-size effects. Finite-size effects can be easily mitigated in (isotropic) single-phase simulations. Note also that the MC calculations needed to integrate the Clapeyron equation are relatively cheap, which allows a finer sampling of the phase diagram than interfacial MD simulations and a better resolution of its interesting features (e.g., an azeotropic point).
The MC simulations needed to integrate the Clapeyron equation are performed in an isomolar ensemble: no particle insertions or deletions are needed. This constitutes another important advantage of this approach, as methods that require transfers of particles are not practical for strongly interacting systems such as those in which we are interested \cite{panagiotopoulos1994}.
Finally, all calculations in the Clapeyron integration approach are performed at constant pressure. This is to be contrasted with most phase transition calculations where a constant volume is assumed, while in reality phase transitions virtually always occur at constant pressure and imply volume changes. Even if the constant volume approximation is often very accurate, it is inherently more satisfying to perform all calculations in the correct thermodynamic ensemble and it makes the method applicable to a broader range of systems.
\subsection{Thermodynamics of electron--ion plasmas} \label{Sec_II_B}
As we shall see, the Clapeyron integration method is formulated in an isobaric semi-grand statistical ensemble (NPT$\Delta \mu$), i.e., at constant pressure $P$, constant temperature $T$, constant total number of particles $N$, and constant relative chemical potentials $\Delta\mu_a$. In this ensemble, the volume of the system $V$ and the number of particles $N_a$ of the different particle species fluctuate. Therefore, the application of the method to an electron--ion plasma raises questions regarding the inclusion of electrons in the calculation. Both the allowed variations in volume and in particle numbers imply variations in the electronic density. One consequence of those variations is that the screening length used to screen the ionic interactions can no longer be assumed to be constant as in standard methods. While this is obvious in the case of volume fluctuations, the effect of fluctuations of particle numbers is more subtle. For the finite-size calculations to be physically meaningful and have a well-defined thermodynamic limit (i.e., $\{N_a\},V\to \infty$ at constant density $\{N_a\}/V$), it is necessary to enforce the global neutrality of the system. In other words, the thermodynamic limit should be taken at constant $N_Z=\sum_a{Z_aN_a}=0$, where the sum includes the plasma electrons and $Z_a e$ is the charge of species $a$. For our purpose, we found it useful to constrain the number of electrons. If $\left( \{ N_i \} , \{ Z_i \} \right)_{i=1,\dots,c}$ denotes a given ionic composition of the plasma, with $c$ the number of ionic species, $N_i$ the fluctuating number of ions of species $i$ and $Z_i e$ the charge of each species, then we enforce the number of electrons $N_e = \sum_{i=1}^c Z_i N_i$ to ensure neutrality.
With this choice, the independent extensive variables are the entropy $S$, the volume $V$, and the number of ions of each species $\{ N_i \}$. The internal energy is given by
\begin{equation}
U \left( S,V, \{ N_i \} \right) = TS - PV + \sum_{i=1}^c N_i \mu_i,
\end{equation}
where $\mu_i$ is the electrochemical potential of species $i$, defined as the sum of the ionic chemical potential and an electronic contribution,
\begin{equation}
\mu_i = \mu_{{\rm ion},i} + Z_i \mu_e.
\end{equation}
With these variables, the equilibrium conditions between a liquid $(\ell)$ and a solid $(s)$ phase are $P^{\ell} = P^s$, $T^{\ell} = T^s$, and $\mu^{\ell}_i = \mu^s_i$ for all $i=1,\dots,c$. Here, we restrict the discussion to the coexistence line between a liquid and a solid phase, although what follows applies to other phase boundaries as well. In addition, the case of systems of neutral particles \citep{kofke1993a,kofke1993b,agrawal1995a,agrawal1995b,agrawal1995c,hitchcock1999,
lamm2001,lamm2001b,lamm2002,lamm2004} is recovered by setting $\mu_e=0$ in the previous and following equations.
\subsection{Clapeyron Equation}
We now turn to the derivation of the Clapeyron equations that form the backbone of our integration technique. The latter are conveniently derived from the Gibbs--Duhem relation among the temperature $T$, pressure $P$, and chemical potentials $\mu_i$ \citep{denbigh1981}, and below we limit ourselves to examples relevant to our purpose. For a $c$-component mixture, the Gibbs--Duhem relation can be expressed as
\begin{equation}
d \mu_1 = -s dT + v dP - \sum_{i=2}^c x_i d \left(\mu_i - \mu_1 \right),
\label{eq:gd}
\end{equation}
where $s = S/N$ is the entropy per ion (with $N = \sum_{i=1}^c N_i$), $v = V/N = 1/n$ is the volume per ion, and $x_i = N_i/N$ is the number concentration of species $i$. For a one-component system, this relation directly leads to the usual form of the Clapeyron equation. For the solid and liquid phases to coexist, a change in temperature must cause a change in pressure such that the chemical potentials $\mu^{\ell}$ and $\mu^{s}$ of the liquid and solid phases remain equal. From Eq.~\eqref{eq:gd}, this implies $s^{\ell} dT - v^{\ell} dP = s^{s} dT - v^s dP$ along the coexistence line, and in turn
\begin{equation}
\frac{d P}{d T} = \frac{s^{\ell} - s^s }{v^{\ell}-v^s} = \frac{L_m}{T \left( v^{\ell}-v^s \right)},
\end{equation}
where $L_m$ is the latent heat released per particle.
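For intuition, evaluating this slope with standard textbook values for the ice--water transition (numbers not from this work) recovers the well-known steep, negative melting line of ordinary ice:

```python
# Clapeyron slope dP/dT = L / (T * (v_l - v_s)) for melting ice,
# using standard textbook values (per unit mass rather than per particle).
T = 273.15        # K
L = 3.34e5        # latent heat of fusion of ice, J/kg
v_l = 1.000e-3    # specific volume of liquid water, m^3/kg
v_s = 1.091e-3    # specific volume of ice Ih, m^3/kg

dPdT = L / (T * (v_l - v_s))   # Pa/K
print(dPdT / 1e6, "MPa/K")     # about -13 MPa/K: ice melts under pressure
```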
For multicomponent systems, different Clapeyron equations between two field variables can be similarly derived by fixing the other field variables to their phase equilibrium values. For a two-component mixture, fixing $P$ leads to the Clapeyron relation
\begin{equation}
\left. \frac{dT}{d\left( \mu_2 - \mu_1 \right)} \right|_P = - \frac{x_2^{\ell} - x_2^s}{s^\ell - s^s},
\label{eq:clapeyron2}
\end{equation}
which describes the relation between $T$ and the difference in the chemical potentials of the two species, $\mu_2 - \mu_1$, along the coexistence line. Similarly, other Clapeyron relations can be derived for $c>2$ component mixtures. We detail the case of a three-component mixture in Appendix~\ref{sec:3component}.
In order to exploit Eq.~\eqref{eq:clapeyron2}, it will be beneficial to work with a thermodynamic potential that explicitly depends on $\mu_i - \mu_1$. This can be achieved using the isobaric semi-grand canonical potential,
\begin{multline}
{\cal{A}}(T,P,N,\{\tilde{\mu}_i-\tilde{\mu}_1\}_{i=2,\dots,c}) =\\ U-TS+PV-\sum_{i=2}^{c}{(\tilde{\mu}_i-\tilde{\mu}_1)N_i} = N \tilde{\mu}_1,
\label{eq:muref_first}
\end{multline}
where we have defined $\tilde{\mu}_i = \mu_i + \mu_i^{\rm ref}$. Here, $\mu_i^{\rm ref}$ is a given reference chemical potential that we have added to the formalism to deal with plasma systems. $\mu_i^{\rm ref}$ is innocuous at the level of the theory, but, as we shall see in Section~\ref{sec:mc_code}, it plays an important role in the numerical applications. In addition, it will be more convenient in practice to work in terms of fugacity fractions $\xi_i$ instead of chemical potentials. Let the fugacity of species $i$ be
\begin{equation}
f_i = e^{\beta \left( \tilde{\mu}_i - \mu_i^0 \right)},
\label{eq:fugacity_def}
\end{equation}
where $\beta = 1/\left( k_B T \right)$, $e^{- \beta \mu_i^0} = V/ \Lambda_i^3$, $\Lambda_i$ is the thermal de Broglie wavelength, $\Lambda_i = h/\left(2 \pi M_i k_B T\right)^{1/2}$,
$k_B$ is Boltzmann's constant, $h$ is Planck's constant, and $M_i$ is the mass of species $i$. Then, in the general case of a $c$-component mixture, the fugacity fraction is defined as
\begin{equation}
\xi_i = \frac{f_i}{\sum_{j=1}^c f_j}.
\label{eq:fugacityfrac_def}
\end{equation}
Unlike the chemical potentials that generally can take any real values, the fugacity fractions are constrained to vary between $0$ and $1$ (i.e., $0 \leq \xi_i \leq 1$), which is useful numerically. Moreover, one can show that $\xi_i=0$ ($\xi_i=1$) when the number concentration $x_i=0$ ($x_i=1$).
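These definitions are straightforward to implement. The short Python sketch below (with arbitrary $\beta\tilde{\mu}$ test values) computes fugacity fractions from the exponents $\beta(\tilde{\mu}_i-\mu_i^0)$; subtracting the maximum exponent before exponentiating is a standard numerical-stability device, not part of the formalism:

```python
import math

def fugacity_fractions(beta_mu):
    """Fugacity fractions xi_i = f_i / sum_j f_j with f_i = exp(beta_mu[i]).

    Only differences between the exponents matter, so the maximum is
    subtracted before exponentiating to avoid overflow.
    """
    m = max(beta_mu)
    f = [math.exp(b - m) for b in beta_mu]
    s = sum(f)
    return [fi / s for fi in f]

xi = fugacity_fractions([2.0, 5.0, 3.5])   # arbitrary test values
print(xi)
```

By construction the fractions sum to 1 and each lies strictly between 0 and 1, as stated above.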
Using these new definitions, the $c$-component Gibbs--Duhem equation [Eq.~\eqref{eq:gd}] can be written as
\begin{equation}
d\ln\left(\sum_{i=1}^c{f_i}\right)=h_r d\beta+\frac{\beta P}{n}d\ln P-\sum_{i=1}^{c}{x_i\frac{d\xi_i}{\xi_i}},
\label{eq:gd2}
\end{equation}
where $h_r=h-\sum_{i=1}^c{x_i\frac{d}{d\beta} \beta(\mu_i^0+\mu_i^{\rm ref})}$, with $h=(U+PV)/N$ the enthalpy per ion. When expressed in terms of the fugacity fractions, the two-component Clapeyron relation of Eq.~\eqref{eq:clapeyron2} reads \cite{hitchcock1999}
\begin{equation}
\left. \frac{d \beta}{d \xi_2} \right|_{P}=\frac{x_2^{\ell}-x_2^{s}}{ \xi_2(1-\xi_2)(h_r^{\ell}-h_r^{s})}.
\label{eq:clapeyron}
\end{equation}
For a given pressure, this form of the Clapeyron equation describes how the temperature changes with the fugacity fraction along the liquid--solid coexistence line. This is the equation that we will integrate to map the phase diagrams of two-component plasmas. The properties of the fugacity fractions imply that in order to map the phase diagram of a given two-component plasma, Eq.~\eqref{eq:clapeyron} simply needs to be integrated from $\xi_2=0$ to $\xi_2=1$.
To carry out the Clapeyron integration, the concentrations and enthalpies that appear in the right-hand side of Eq.~\eqref{eq:clapeyron} have to be evaluated for the fixed $P$, $T$, and $\xi_i$'s that characterize each state point along the coexistence curve. It is for this reason that it makes sense to work in the isobaric semi-grand canonical ensemble (NPT$\Delta \mu$). To link the microphysics of our system to the thermodynamic relations given above, we have \cite{briano1984}
\begin{equation}
{\cal{A}}=-k_BT\ln {\cal{Q}},
\end{equation}
with the partition function,
\begin{align}
&{\cal{Q}} \left(T,P,N,\{\tilde{\mu}_i-\tilde{\mu}_1\}_{i=2,\dots,c}\right) = \nonumber\\
&\quad \quad \int_0^\infty \Bigg[ \frac{dV}{V_0} \sum_{i_1=1}^c \dots \sum_{i_N=1}^{c} \frac{\prod_{i=1}^c{N_i!}}{N!} \label{eq:calQ} \\
&\quad \quad \quad \times e^{-\beta{\cal{F}}\left[T,V,\{N_i\}_{i=1,\dots,c}\right]-\beta PV+\beta\sum\limits_{j=1}^{N}(\tilde{\mu}_{i_j}-\tilde{\mu}_1)} \Bigg] , \nonumber
\end{align}
where
\begin{equation}
{\cal{F}} = - k_B T \ln \cal{Z}
\label{eq:calF}
\end{equation}
is the Helmholtz free energy and $\cal{Z}$ is the usual canonical partition function. The statistical average of a thermodynamic quantity $B$ is given by the following equation,
\begin{align}
&\langle B \rangle
=\frac{1}{\cal{Q}} \int_0^\infty \Bigg[ \frac{dV}{V_0}
\sum_{i_1=1}^c \dots\sum_{i_N=1}^{c} \frac{\prod_{i=1}^c{N_i!}}{N!} \nonumber\\
&\quad \quad \times e^{-\beta{\cal{F}}\left[T,V,\{N_i\}_{i=1,\dots,c}\right]-\beta PV+\beta\sum\limits_{j=1}^{N}(\tilde{\mu}_{i_j}-\tilde{\mu}_1)}B \Bigg],
\label{eq:Bmoy}
\end{align}
which we will evaluate using a MC sampler (Section~\ref{sec:mc_code}). All the microphysics of the system is contained in the partition function $\cal{Z}$. Our model for $\cal{Z}$ is detailed in Section~\ref{sec:theory}.
\subsection{A Simple Application of the Clapeyron Integration Method}
\label{sec:gd_validation}
As a simple example of the Clapeyron integration method, we now use an analytic plasma model for a binary ionic mixture (BIM) to evaluate the right-hand side of Eq.~\eqref{eq:clapeyron} and map the phase diagram of a two-component plasma. The purpose of this application is to illustrate the integration procedure and to show that the Clapeyron integration technique can reproduce exactly the same results as those obtained using more conventional techniques when the same input physics is assumed. An application of the full-fledged Clapeyron integration technique (using isobaric semi-grand canonical MC simulations) is presented in Section~\ref{sec:CO}.
We use the BIM model described by Ogata \textit{et al.} \citep{ogata1993} to compute the phase diagram of a C/O plasma. We choose this particular BIM model for this exercise as Ogata \textit{et al.} have published a C/O phase diagram based on this model to which our results can be compared. To allow a direct comparison with their results, we assume that the volume change during the phase transition is negligible, meaning that Eq.~\eqref{eq:clapeyron} simplifies to
\begin{equation}
\frac{d \beta}{d \xi_2} = \frac{\left( x_2^{\ell} - x_2^{s} \right)}{\xi_2 \left( 1 - \xi_2 \right) \left(u^{\ell} - u^s \right)},
\label{eq:clapeyron_vfix}
\end{equation}
where $u=U/N$ and where we have also fixed $\mu_1^{\rm ref}=\mu_2^{\rm ref}=0$.
To evaluate the right-hand side of Eq.~\eqref{eq:clapeyron_vfix} and integrate along the melting line, we need to extract $x_2$ and $u$ in the liquid and solid phases from the BIM model. The energy is obtained using a linear mixing rule of the one-component plasma (OCP) energies for which accurate fits to MC calculations already exist,
\begin{equation}
u = x_1 u^{\rm OCP} (\Gamma_1) + x_2 u^{\rm OCP} (\Gamma_2) + \Delta u^{\rm BIM} (R_Z, x_2, \Gamma_1),
\end{equation}
where $\Gamma_i = \frac{(Z_i e)^2}{a_i k_B T}$, $a_i = \left( \frac{3 Z_i}{4 \pi n_e} \right)^{1/3}$, and $R_Z = Z_2/Z_1$. The OCP terms are evaluated using Eq.~(11) of Ogata \textit{et al.} for the liquid (see also ref.~\cite{ogata1987}) and using their Eq.~(21) for the solid (see also ref.~\cite{dubin1990}). As for the correction term $\Delta u^{\rm BIM}$ to the linear mixing rule, we use the fits provided by Eqs.~(12) and (20) of Ogata \textit{et al.}
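The structure of the linear mixing rule can be sketched as follows. Note that the $u^{\rm OCP}$ fit below ($\approx -0.9\,\Gamma$ in units of $k_BT$ per ion, a crude strong-coupling stand-in) and the neglect of $\Delta u^{\rm BIM}$ are placeholder assumptions for illustration only; they are not the fits of Ogata \textit{et al.} used in the actual calculation:

```python
# Linear mixing rule for the internal energy of a two-component plasma.
def u_ocp(gamma):
    # Placeholder Madelung-like fit (in units of k_B*T per ion); the
    # actual calculation uses the accurate OCP fits of Ogata et al.
    return -0.9 * gamma

def gamma_of(Z, gamma1, Z1):
    # Gamma_i = (Z_i e)^2 / (a_i k_B T) with a_i ~ Z_i^(1/3), hence
    # Gamma_i / Gamma_1 = (Z_i / Z_1)^(5/3) at fixed n_e and T.
    return gamma1 * (Z / Z1) ** (5.0 / 3.0)

def u_mix(x2, Z1, Z2, gamma1, du_bim=0.0):
    # du_bim is the correction to linear mixing (set to zero in this sketch).
    return ((1.0 - x2) * u_ocp(gamma1)
            + x2 * u_ocp(gamma_of(Z2, gamma1, Z1))
            + du_bim)

# C/O mixture: Z1 = 6 (C), Z2 = 8 (O), at the C melting coupling.
print(u_mix(0.0, 6, 8, 175.0))   # pure-C limit reduces to u_ocp(175)
print(u_mix(0.5, 6, 8, 175.0))
```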
The calculation of the concentrations requires a relation between $\xi_2$ and $x_2$. From the definition of the fugacity and fugacity fraction [Eqs.~\eqref{eq:fugacity_def} and \eqref{eq:fugacityfrac_def}], we find
\begin{equation}
\frac{\xi_2}{1-\xi_2} = e^{\beta ( \mu_2 - \mu_1)}.
\end{equation}
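This relation is a logistic map between $\beta(\mu_2-\mu_1)$ and $\xi_2$; a short Python sketch (with arbitrary test values) makes the one-to-one correspondence explicit:

```python
import math

def xi2_from_dmu(beta_dmu):
    # xi_2 / (1 - xi_2) = exp(beta*(mu_2 - mu_1))  =>  logistic form
    return 1.0 / (1.0 + math.exp(-beta_dmu))

def dmu_from_xi2(xi2):
    # Inverse mapping (the "logit"), defined for 0 < xi_2 < 1.
    return math.log(xi2 / (1.0 - xi2))

for bdm in (-5.0, 0.0, 3.2):
    print(bdm, xi2_from_dmu(bdm))
```

As $\beta(\mu_2-\mu_1) \to \pm\infty$, $\xi_2 \to 1$ or $0$, consistent with the bounds on the fugacity fractions noted above.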
From $\mu_2 - \mu_1$, $x_2$ can then be obtained by numerically solving
\begin{equation}
\frac{\partial {\cal F}(N,N_2,V,T)}{\partial N_2} = \mu_2 - \mu_1.
\end{equation}
The Helmholtz free energy $\cal F$ is evaluated using the analytic fits provided by Ogata \textit{et al.} (their Eqs. 16, 25, 27, and 28),
\begin{equation}
{\cal F} = x_1 F^{\rm OCP} (\Gamma_1) + x_2 F^{\rm OCP} (\Gamma_2) + \Delta F^{\rm BIM} (R_Z, x_2, \Gamma_1).
\end{equation}
With those equations in hand, we have everything we need to determine $x_2$ and $u$ in the liquid and solid phases and evaluate Eq.~\eqref{eq:clapeyron_vfix}. In what follows, we define species $i=1$ as C and $i=2$ as O. To start the integration of Eq.~\eqref{eq:clapeyron_vfix}, we must first specify an initial coexistence point $(\xi_{\rm O}^0,\beta^0)$. We choose to start the integration at $\xi_{\rm O}=0$, where only C ions are present in the plasma. The temperature $\beta^0$ at this coexistence point is then given by the melting temperature of the OCP, $\Gamma_m \simeq 175$ \citep{potekhin2000}.
As $\frac{d \beta}{d \xi_2}$ is undefined in Eq.~\eqref{eq:clapeyron_vfix} for $\xi_2 = 0$, the first derivative has to be computed by other means. In applications of the Clapeyron integration technique to neutral systems (e.g., Lennard--Jones fluids), this initial derivative can be obtained through the infinite dilution limit and Henry's law \cite{mehta1994,hitchcock1999}. Here, this method demands the evaluation of $\langle \exp \left( - \beta \Delta u_{{\rm C} \rightarrow {\rm O}} \right) \rangle_{\rm NPT}$, where $\Delta u_{{\rm C} \rightarrow {\rm O}}$ represents the energy change that would result from the transformation of a C ion into an O ion. In practice, for dense plasmas, evaluating this term using MC simulations is challenging due to the strong fluctuations of the energy change $\Delta u_{{\rm C} \rightarrow {\rm O}}$. This issue is reminiscent of the limitations that affect particle insertion methods when applied to strongly coupled systems. Although this problem does not apply to the present section (as we are modeling the plasma using an analytic model), we still use the workaround that we have developed for our full MC-based Clapeyron integration (Section~\ref{sec:CO}). We initially assume that the temperature at the first $\xi_{\rm O}>0$ integration step ($\xi_{\rm O}^1$) is the same as that at $\xi_{\rm O}=\xi_{\rm O}^0=0$, i.e., $ \left. d \beta / d \xi_{\rm O} \right|_{\xi_{\rm O}^0}=0$. Using Eq.~\eqref{eq:clapeyron_vfix} and the BIM model, this allows us to get a first estimate of $ \left. d \beta / d \xi_{\rm O} \right|_{\xi_{\rm O}^1}$ and we then approximate the initial derivative as $ \left. d \beta / d \xi_{\rm O} \right|_{\xi_{\rm O}^0} = \left. d \beta / d \xi_{\rm O} \right|_{\xi_{\rm O}^1}$. We then use this improved estimate of the initial derivative to obtain a refined estimate of the temperature at $\xi_{\rm O}^1$ and repeat this procedure until the derivative $\left. 
d \beta / d \xi_{\rm O} \right|_{\xi_{\rm O}^1}$ converges to a stable value.
Now that we have specified the initial coexistence condition and its initial derivative, the integration of Eq.~\eqref{eq:clapeyron_vfix} can begin. We define a grid of $\xi_{\rm O}$ values and use it to step from one $\xi_{\rm O}$ to the next. As the temperature at the next $\xi_{\rm O}$ value is initially unknown, we use a predictor--corrector algorithm to gradually refine its value until it stops varying by more than a fraction $\gamma$ of its value at the previous iteration. We refer the reader to ref.~\citep{hitchcock1999} for a detailed description of this algorithm. For each step, the $T$, $x_{\rm O}^{\ell}$, and $x_{\rm O}^s$ values are saved. After integrating all the way to $\xi_{\rm O}=1$, the phase diagram is directly given by the relation between those temperatures and concentrations. More specifically, $T(x_{\rm O}^{\ell})$ corresponds to the liquidus and $T(x_{\rm O}^{s})$ to the solidus.
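The stepping logic can be sketched in a few lines of Python. The right-hand side below is an analytic toy (chosen so that the exact solution is known), standing in for the pairs of MC simulations used in the actual calculation; the predictor--corrector structure is the same:

```python
import math

def rhs(xi, beta):
    # Toy right-hand side with known solution beta(xi) = beta0*exp(xi**2 - xi);
    # in the real method this requires one liquid and one solid MC run.
    return -beta * (1.0 - 2.0 * xi)

def integrate(beta0, n_steps=200, gamma=1e-10):
    beta = beta0
    path = [(0.0, beta)]
    h = 1.0 / n_steps
    for k in range(n_steps):
        xi0, xi1 = k * h, (k + 1) * h
        f0 = rhs(xi0, beta)
        b1 = beta + h * f0                # predictor (Euler step)
        while True:                       # corrector (trapezoid) iterations
            b_new = beta + 0.5 * h * (f0 + rhs(xi1, b1))
            converged = abs(b_new - b1) <= gamma * abs(b1)
            b1 = b_new
            if converged:
                break
        beta = b1
        path.append((xi1, beta))
    return path

path = integrate(1.0)
print(path[-1])   # exact value of beta at xi = 1 is 1.0 for this toy
```

The corrector loop plays the role of the iterative refinement described above: the temperature at the new grid point is updated until it varies by less than a fraction $\gamma$ between successive iterations.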
Fig.~\ref{fig:gb_ogata} displays the resulting C/O phase diagram, which is almost identical to Fig.~5a of Ogata \textit{et al.} \citep{ogata1993}. The general shape as well as the position of the azeotropic point are the same. The only slight difference concerns the spurious behavior of our liquidus at very small O concentrations (see the inset of Fig.~\ref{fig:gb_ogata}). We attribute this difference to Ogata \textit{et al.}'s fit of $\Delta u^{\rm BIM}$ for the liquid, which is known to lead to unphysical results at small O concentrations \citep{dewitt1996,dewitt2003}.
\begin{figure}
\includegraphics[width=\columnwidth]{ogata.pdf}
\caption{C/O phase diagram computed using the Clapeyron integration technique and the BIM model of Ogata \textit{et al.} \citep{ogata1993}. The horizontal axis is the O number concentration and the vertical axis is the ratio between the temperature and the melting temperature of a pure C OCP ($\Gamma=175$). The upper curve corresponds to the liquidus, and the lower one is the solidus.}
\label{fig:gb_ogata}
\end{figure}
We also tested what happens if we fix $\Delta u^{\rm BIM}=0$ for the liquid phase, which corresponds to the prescription adopted by Medin \& Cumming \citep{medin2010}. This simplification can be justified by the fact that $\Delta u^{\rm BIM}$ is very small in the liquid phase compared to the other energy terms. It also eliminates the spurious behavior of Ogata \textit{et al.}'s fit. With this approximation, we are able to reproduce the azeotropic phase diagram of Medin \& Cumming (compare Fig.~\ref{fig:gb_medin} to their Fig.~5a). We also replicated their finding that the phase diagram transitions from an azeotrope shape to a spindle shape when the charge ratio $R_Z$ goes below $\simeq 1.2$. This result is to be contrasted with the findings of ref.~\citep{dewitt1996}, where different analytic fits are used and this transition is found to occur near $R_Z = 1.4$. This difference is very important in the context of white dwarf interiors ($R_Z = 1.33$ for a C/O plasma), where the shape of the phase diagram determines the composition profile of the frozen core. This comparison stresses the sensitivity of the final results to the fits used to derive the phase diagram. It clearly highlights the advantage of the Clapeyron integration method, which, once used in conjunction with isobaric semi-grand canonical MC simulations (Section~\ref{sec:CO}), requires no interpolation between simulation results.
\begin{figure}
\includegraphics[width=\columnwidth]{medin.pdf}
\caption{Same as Fig.~\ref{fig:gb_ogata}, but assuming $\Delta u^{\rm BIM}=0$ for the liquid phase.}
\label{fig:gb_medin}
\end{figure}
\section{Plasma Model}
\label{sec:theory}
So far, our discussion has been general in the sense that no model for our plasma has been assumed. The microphysics is all contained in the canonical partition function $\cal{Z}$ [see Eqs.~\eqref{eq:calQ} and \eqref{eq:calF}]. We now specify a model for the electron--ion plasma appropriate for the conditions in white dwarf cores. Note, however, that the Clapeyron integration method is by no means limited to this particular model.
We consider a fully ionized plasma mixture of $c$ ionic species defined as in Section~\ref{Sec_II_B}. The system is contained in a volume $V=L^3$ with periodic boundary conditions in all three spatial directions. For notational simplicity, the charge (mass) of ion $J$ is denoted $Z_J$ ($M_J$), with $J=1,\dots,N$. We assume classical ions and quantum electrons. Approximating the ions as classical particles is well justified for the conditions in which we are interested. At the onset of crystallization in white dwarf cores, the interparticle distance is larger than the thermal de Broglie wavelength $\Lambda$. More specifically, $\Lambda/a \simeq 0.1 - 0.3$ where the liquid and solid phases coexist in white dwarf interiors.
Ion--ion, electron--ion, and electron--electron interactions need to be taken into account. We include electron--ion interactions to the lowest order, which yields (Appendix~\ref{sec:partfunc})
\begin{equation}
{\cal{Z}}\left(T,V,\{N_i\}_{i=1,\dots,c}\right)=\frac{1}{\prod_{i=1}^{c}{N_i! \Lambda_i^{3N_i}}}\int{dR^{3N} e^{-\beta{\cal{U}}(R^{3N})}}.
\label{eq:calZ_model}
\end{equation}
Eq.~\eqref{eq:calZ_model} is the classical partition function of the ions interacting through the effective interaction energy
\begin{equation}
{\cal{U}}(R^{3N}) = {\cal{U}}_{\kappa}(R^{3N})+ F_{\rm jel}[n_e,T] \label{eq:calU}.
\end{equation}
The first term here is the potential of the system of ions interacting through the Yukawa (or screened Coulomb) interaction (which we justify in Appendix~\ref{sec:yukawa}), and the second term is the Helmholtz free energy of a relativistic homogeneous electron gas modeled at density $n_e=N_e/V$ and temperature $T$ (see Appendix~\ref{sec:partfunc}). For a pair of ions with charges $Z_i$ and $Z_j$, the Yukawa interaction potential takes the form
\begin{equation}
v_{\kappa} (r) = \frac{Z_i Z_j e^2}{4 \pi \epsilon_0} \frac{e^{-\kappa r}}{r},
\end{equation}
where $\epsilon_0$ is the vacuum permittivity and $1/\kappa$ is the relativistic Thomas--Fermi screening length. Fig.~\ref{fig:kappa} illustrates how this screening parameter varies as a function of the electronic density. Assuming this interaction potential, it follows that the potential of the system of interacting ions is given by
\begin{align}
{\cal{U}}_{\kappa}(R^{3N})&=\frac{e^2}{2\epsilon_0 V}\sum_{{\bf
k},{\bf k}\neq 0}{{\vphantom{\sum}} \frac{1}{{\bf k}^2+\kappa^2}\left\{n_i({\bf
k})n_i(-{\bf
k})-\sum_{I=1}^N{Z_I^2}\right\}} \nonumber \\
&+\frac{e^2}{8\pi\epsilon_0}\left(\sum_{I=1}^N{Z_I^2}\right)
\left(E_\kappa-\kappa \right) \label{U_kappa},
\end{align}
where $n_i({\bf k})=\sum_{J=1}^{N}{Z_J e^{i{\bf k}\cdot{\bf R}_{J}}}$
is the Fourier transform of the ion charge density and $E_\kappa$ is the Madelung energy for the Yukawa interaction (see Appendix~\ref{sec:partfunc}).
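For reference, the Yukawa pair interaction itself is trivial to code (here in units where $e^2/4\pi\epsilon_0 = 1$); the $\kappa \to 0$ limit recovers the bare Coulomb interaction:

```python
import math

def yukawa(r, Zi, Zj, kappa):
    # Screened Coulomb (Yukawa) pair interaction in units where
    # e^2/(4*pi*epsilon_0) = 1.
    return Zi * Zj * math.exp(-kappa * r) / r

print(yukawa(2.0, 6, 8, 0.0))   # bare Coulomb: 6*8/2 = 24.0
print(yukawa(2.0, 6, 8, 0.5))   # screening reduces the interaction
```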
\begin{figure}
\includegraphics[width=\columnwidth]{kappa.pdf}
\caption{Ratio of the average interparticle distance, $a=\left( 3Z/4\pi n \right)^{1/3}$, to the screening length, $1/\kappa$, as a function of the electronic density for a fully ionized C plasma [Eq.~\eqref{eq:kappa}]. The electrons are assumed to form a completely degenerate electron gas. $x_r=\frac{\hbar k_F}{m_e c}$ (with $k_F= (3 \pi^2 n_e)^{1/3}$ the Fermi momentum) is the relativistic parameter, and typical white dwarf (WD) core and neutron star (NS) crust conditions are highlighted. Note that the long-wavelength approximation (Appendix~\ref{sec:yukawa}) used here to evaluate $\kappa$ is no longer valid for the lower densities shown in this figure. It is nevertheless an excellent approximation for the dense astrophysical plasmas that are the focus of this work.}
\label{fig:kappa}
\end{figure}
In contrast with this approach, most previous studies of plasma phase diagrams do not explicitly include the electronic background. This is a natural choice when working in the canonical (NVT) ensemble, but it is not appropriate when working in an isobaric ensemble. Indeed, when the volume and number of electrons are not fixed (as is the case in the isobaric semi-grand canonical ensemble on which our Clapeyron integration approach is based), the background electronic energy is not a fixed quantity and it becomes necessary to include it explicitly. In any case, it is more rigorous to include the complete system of ions and electrons.
\section{Monte Carlo Implementation}
\label{sec:mc_code}
\subsection{General Overview of the MC Sampler}
We now describe how, given this plasma model, Eq.~\eqref{eq:Bmoy} can be evaluated using a MC sampler. Using Eqs.~\eqref{eq:fugacityfrac_def} and \eqref{eq:calZ_model}, the partition function [Eq.~\eqref{eq:calQ}] reads as
\begin{align}
&{\cal{Q}}\left(T,P,N, \{\xi_i\}_{i=2,\dots,c}\right)=\frac{1}{N!}\int_0^\infty \Bigg[ \frac{dV}{V_0} \sum_{i_1=1}^c \dots\sum_{i_N=1}^{c} \nonumber \\
&\quad \int \frac{dR^{3N}}{V^N} e^{-\beta{\cal{U}}(R^{3N}) -\beta PV+N\ln \frac{V}{\Lambda_1^3}+\sum\limits_{j=1}^{N}\left(\ln\frac{\xi_{i_j}}{\xi_1}+\beta\mu_{i_j}^{\rm ref}\right)} \Bigg].
\label{eq:Qcal3}
\end{align}
To reach the targeted $P$ and $\{\xi_i\}_{i=2,\dots,c}$ conditions at a given $T$, three types of moves are required in isobaric semi-grand canonical MC simulations: (1) particle displacements, (2) volume changes, and (3) identity changes. From Eq.~\eqref{eq:Qcal3}, it follows that a particle displacement $R\to R'$, volume change $V\to V'$, or ionic identity change $i\to i'$ is accepted with probability
\begin{align}
{\rm min}(1,e^\chi) \label{minoneexplambda},
\end{align}
where
\begin{align}
\chi=&-\beta ({\cal{U}}^\prime-{\cal{U}})-\beta P(V^\prime-V)+N\ln\frac{V'}{V} \nonumber\\
&+\ln\frac{\xi_{i^\prime}}{\xi_i}+\beta (\mu_{i^\prime}^{\rm ref}-\mu_{i}^{\rm ref}).
\label{eq:lambda}
\end{align}
The role of the reference chemical potential $\mu_i^{\rm ref}$ that we have introduced earlier [Eq.~\eqref{eq:muref_first}] now becomes apparent. If we assume that $\mu_i^{\rm ref}=0$ for all $i=1,...,c$, then $\chi$ is dominated by the change in the electron free energy, i.e., $\chi \simeq -\beta(F[n_e^\prime,T]-F[n_e,T])$. We have $n_e^\prime-n_e=(Z_{i^\prime}-Z_i)\frac{1}{V}$ and typically $\chi \simeq -\beta (Z_{i^\prime}-Z_i)\mu_e$ with $\mu_e$ the electron chemical potential. Under degenerate conditions, $\mu_e \propto n_e^{2/3}$ and therefore $\left| \chi \right| \gg 1$ in dense plasmas (under white dwarf conditions, $n_e \sim 10^{30}\,{\rm cm}^{-3}$). Then, the acceptance probability ${\rm min}(1,e^\chi)$ is essentially either 1 or 0 and eventually all but one ionic species disappear from the MC simulation. In other words, the mapping $x_i \leftrightarrow \xi_i$ is not practical: all the variation in $x_i$ occurs over a small range of $\xi_i$ values very close to 0 or 1. The reference chemical potentials were introduced to overcome this difficulty. We define them as
\begin{equation}
\mu_i^{\rm ref} = Z_i \mu_e^{\rm ref},
\end{equation}
where $\mu_e^{\rm ref}$ is fixed to a value close to $\mu_e [n_e,T]$. With this choice, the dominant $\beta (Z_{i^\prime}-Z_i)\mu_e$ term in Eq.~\eqref{eq:lambda} is largely cancelled by the $\beta (\mu_{i^\prime}^{\rm ref}-\mu_{i}^{\rm ref})$ term and the $x_i \leftrightarrow \xi_i$ mapping becomes more practical (i.e., $x_i \sim \xi_i$). Note that the electronic density needed to define $\mu_e^{\rm ref}$ is a priori unknown, as the electronic density fluctuates during the MC simulation. An iterative process is therefore needed in order to find the value of $\mu_e^{\rm ref}$ that leads to a well-behaved $x_i \leftrightarrow \xi_i$ mapping (once selected, $\mu_e^{\rm ref}$ is kept fixed during the integration of the phase diagram).
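The magnitude of this cancellation can be illustrated with a few numbers. The values below are hypothetical (chosen only to be representative of the keV-scale electron chemical potentials quoted above, not taken from a simulation):

```python
# Hypothetical magnitudes in keV units, chosen only to illustrate the
# cancellation produced by the reference chemical potentials.
kT = 0.5                      # k_B T ~ 0.5 keV
beta = 1.0 / kT
mu_e = 525.0                  # instantaneous electron chemical potential (keV)
mu_e_ref = 523.7              # fixed reference value close to mu_e[n_e, T]
dZ = 8 - 6                    # C -> O identity change

chi_no_ref = -beta * dZ * mu_e                  # dominant term if mu_i^ref = 0
chi_with_ref = -beta * dZ * (mu_e - mu_e_ref)   # after beta*(mu_i'^ref - mu_i^ref)
```

Without the reference term, $|\chi|$ is of order $10^3$ and the acceptance probability is effectively 0 or 1; with it, $|\chi|$ is of order unity and identity changes are sampled usefully.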
At each iteration during the MC simulation, the algorithm decides randomly whether a particle displacement, volume change, or identity change is attempted (and Eq.~\ref{minoneexplambda} is then evaluated to decide whether the move is actually accepted). A standard prescription is to assign probabilities of $N/(2N+1)$, $1/(2N+1)$ and $N/(2N+1)$ to attempting a particle displacement, volume change, and identity change, respectively (with this prescription, $2N+1$ attempts represent one MC cycle) \cite{hitchcock1999}. Empirically, we found that assigning a probability of $90\%$ to ion displacements, $5\%$ to volume changes, and $5\%$ to identity changes leads to a much quicker convergence of the MC simulation. While this choice affects the particular trajectory of the MC simulations, we have verified that it does not influence the average energies, concentrations, and densities extracted from the simulations.
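The empirical 90/5/5 split between move types can be drawn from a single uniform variate, for example as below (a sketch; how the production code draws move types is not specified in the text):

```python
import random

def draw_move(rng):
    """Select the next trial move type with the empirical 90/5/5 split
    (90% ion displacement, 5% volume change, 5% identity change)."""
    u = rng.random()
    if u < 0.90:
        return "displacement"
    if u < 0.95:
        return "volume"
    return "identity"
```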
Typically, a few thousand MC cycles are needed before the targeted pressure and fugacity fractions are reached. After that, a few more thousand cycles are performed in order to accurately evaluate the average concentrations and enthalpies needed for the integration of the Clapeyron equation (uncertainties are estimated using the block-averaging technique \citep{flyvbjerg1989,grossfield2009}). A series of tests designed to validate our MC code are presented in Appendix~\ref{sec:validation}.
\subsection{Numerical Implementation of the MC Sampler}
\label{sec:mc_details}
Consider a given ionic configuration ${\bf R}=R^{3N}$ of the $N$ ions in a cubic simulation box of volume $V=L^3$ where periodic conditions are imposed on all boundaries. The MC algorithm necessitates the calculation of the energy ${\cal{U}}_{\kappa}({\bf R})$ (Eq.~\ref{U_kappa}) and of all the interparticle forces $-\frac{\partial{\cal{U}}_{\kappa}({\bf R})}{\partial {\bf R}}$, which are used to calculate the instantaneous contribution to the total pressure. More specifically, we have
\begin{equation}
P=-\frac{\partial {\cal{F}}}{\partial V}=P_i+P_e,
\end{equation}
where the electronic pressure is given by $P_e=-\frac{\partial F_{\rm jel}}{\partial V}$ and, using Eqs.~\eqref{eq:calF} and \eqref{eq:calZ_model}, the ionic pressure is
\begin{equation}
P_i=nk_BT-\frac{1}{3V}\left\langle {\bf R}\cdot \frac{\partial{\cal{U}}_{\kappa}}{\partial {\bf R}}\right\rangle -\frac{1}{3V}\left\langle L\frac{\partial {\cal{U}}_{\kappa}}{\partial L}\right\rangle.
\end{equation}
Because the bare Coulomb interactions between ions are only weakly shielded by the electrons ($\kappa a\sim 0.35$, see Fig.~\ref{fig:kappa}), the range of the Yukawa potential is large in the sense that one cannot safely truncate the potential at the distance $r=L/2$, use the usual minimum image convention, and neglect the interaction of a particle with the particles in the periodically replicated cells.
To overcome this problem, we use the Ewald summation technique to evaluate the sums over ${\bf n}\in\mathbb{Z}^3$ in Eq.~\eqref{U_kappa} (e.g., see ref.~\cite{frenkel2002}).
For a Yukawa potential (e.g., ref.~\cite{salin2000}), the interaction energy $v_\kappa(r)=\frac{e^2}{4\pi\epsilon_0}\frac{e^{-\kappa r}}{r}$ between two particles at distance $r$ is represented by the sum of a short-range (sr) and a long-range (lr) component,
\begin{equation}
v_\kappa(r)=\phi_{\rm sr}(r)+\phi_{\rm lr}(r),
\end{equation}
with
\begin{align}
\phi_{\rm sr}(r)&=\frac{e^2}{8\pi\epsilon_0 r}\left[\erfc\left(\alpha r+\frac{\kappa}{2\alpha}\right)e^{\kappa r}\right. \nonumber \\
&\quad \left.+\erfc\left(\alpha r-\frac{\kappa}{2\alpha}\right)e^{-\kappa r}\right]
\end{align}
and
\begin{equation}
\phi_{\rm lr}(r)\!=\!\!\frac{e^2}{\epsilon_0 V}\sum_{{\bf n}\in\mathbb{Z}^3}{\frac{e^{-(k^2+\kappa^2)/(4\alpha^2)}}{k^2+\kappa^2} e^{i{\bf k}\cdot{\bf r}}},
\end{equation}
where ${\bf k}=\frac{2\pi}{L}{\bf n}$, $\alpha>0$ is the Ewald parameter (a numerical parameter conveniently chosen to optimize the evaluation of the previous expressions \cite{frenkel2002}), and $\erfc$ is the complementary error function. In our simulation code, the Ewald sum is numerically evaluated with the particle--particle--particle--mesh ($\rm P^3M$) method, which combines high resolution of close encounters (the sr term is calculated using nearest-neighbor techniques) and rapid long-range force calculations (the lr forces are computed on a mesh using three-dimensional fast Fourier transforms) \cite{frenkel2002}. The code is fully parallelized using the Message Passing Interface (MPI). Compared to the standard implementation of the $\rm P^3M$ algorithm, here, MC simulations in the isobaric semi-grand canonical ensemble require carefully reinitializing the algorithm to account for the changes in the simulation box size and screening length $1/\kappa$ that occur whenever a volume change or a particle identity change is performed.
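The real-space part of the split is cheap to evaluate with the standard library error function. Below is a sketch of $\phi_{\rm sr}(r)$ in units where $e^2/4\pi\epsilon_0=1$ (the function name and unit convention are ours):

```python
import math

def phi_sr(r, kappa, alpha, e2=1.0):
    """Short-range (real-space) Ewald term for the Yukawa potential.
    `e2` stands for e^2/(4*pi*epsilon_0); the default gives reduced units."""
    return (e2 / (2.0 * r)) * (
        math.erfc(alpha * r + kappa / (2.0 * alpha)) * math.exp(kappa * r)
        + math.erfc(alpha * r - kappa / (2.0 * alpha)) * math.exp(-kappa * r)
    )
```

Two limits serve as sanity checks: for $\kappa\to 0$ the expression reduces to the familiar Coulomb Ewald split $\erfc(\alpha r)/r$, and for $\alpha\to 0$ the full Yukawa potential $e^{-\kappa r}/r$ is carried entirely by the real-space term.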
\section{Application: the C/O interior of white dwarfs}
\label{sec:CO}
\subsection{C/O Phase Diagram}
We now combine the MC code presented in the previous Section with the Clapeyron integration algorithm described in Section~\ref{sec:gd} to obtain the phase diagram of a dense C/O plasma under white dwarf conditions. As in Section~\ref{sec:gd_validation}, we start our integration from a pure C plasma and we define species $i=1$ as C and $i=2$ as O. We perform the integration of Eq.~\eqref{eq:clapeyron} on a grid defined by $\xi_{\rm O} = \{0,0.02,0.04,\dots,1\}$ and we fix the pressure to $10^{18}$~bar, a typical pressure for white dwarf cores. Other numerical parameters are listed in Table~\ref{tab:num_params}.
\begin{table}
\caption{\label{tab:num_params} Numerical parameters used for our integration of the C/O phase diagram.}
\begin{ruledtabular}
\begin{tabular}{lr}
Parameter & Value\\
\hline
Total number of ions in each MC simulation & 686\\
Number of MC cycles per simulation & 6000\\
$\mu_{e}^{\rm ref}$ & 523.7\,keV\\
Predictor--corrector convergence criterion ($\gamma$) & 0.001 \\
\end{tabular}
\end{ruledtabular}
\end{table}
For the initial coexistence condition $\left( \xi_{\rm O}^0, \beta^0 \right)$, we use the melting temperature given in ref.~\cite{hamaguchi1996} for Yukawa systems (here, $\kappa a \simeq 0.35$, which implies $\Gamma_m \simeq 178$). The initial derivative $\frac{d \beta}{d \xi_{\rm O}}$ is computed as in Section~\ref{sec:gd_validation} and the numerical integration of Eq.~\eqref{eq:clapeyron} is performed using the predictor--corrector algorithm described in ref.~\cite{hitchcock1999}. For each $\xi_{\rm O}$ value, the liquid and solid phases are simulated simultaneously (and independently), yielding a pair of concentrations $(x_{\rm O}^{\ell},x_{\rm O}^{s})$ for each $T(\xi_{\rm O})$ point along the coexistence line (see Fig.~\ref{fig:CO_explain}). The evolution of one of those MC simulations is shown in Fig.~\ref{fig:CO_MC_demo}. For the solid phase, the C and O ions are randomly positioned on a bcc lattice and their displacements are limited in order to prevent the solid from melting. The bcc phase is the only solid phase accessible to Yukawa systems near the OCP limit (large screening length $1/\kappa$) such as those found in white dwarf interiors \cite{hamaguchi1996}.
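The predictor--corrector march along the coexistence line can be sketched generically as follows. This is not the exact scheme of ref.~\cite{hitchcock1999}: we show an Euler predictor with an iterated trapezoidal corrector and reuse the convergence parameter $\gamma$ of Table~\ref{tab:num_params} as a relative tolerance, purely for illustration. In the actual calculation, the slope $f$ is built from MC averages of the enthalpies and concentrations of the two phases; here it is an arbitrary callable:

```python
def integrate_coexistence(f, beta0, xi_grid, gamma=1e-3, max_iter=50):
    """Predictor-corrector march of d(beta)/d(xi) = f(xi, beta) along xi_grid."""
    betas = [beta0]
    for k in range(len(xi_grid) - 1):
        h = xi_grid[k + 1] - xi_grid[k]
        slope = f(xi_grid[k], betas[-1])
        beta_new = betas[-1] + h * slope                 # Euler predictor
        for _ in range(max_iter):                        # trapezoidal corrector
            beta_corr = betas[-1] + 0.5 * h * (slope + f(xi_grid[k + 1], beta_new))
            converged = abs(beta_corr - beta_new) < gamma * abs(beta_corr)
            beta_new = beta_corr
            if converged:
                break
        betas.append(beta_new)
    return betas
```

On the paper's $\xi_{\rm O}$ grid (step 0.02), this second-order scheme integrates a smooth slope to sub-percent accuracy, consistent with the smooth coexistence lines of Fig.~\ref{fig:CO_explain}.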
\begin{figure}
\includegraphics[width=\columnwidth]{CO_PRE2.pdf}
\caption{Clapeyron integration of the C/O phase diagram. The black line shows the $T(\xi_{\rm O})$ coexistence line that we integrate from $\xi_{\rm O}=0$ to 1 using Eq.~\eqref{eq:clapeyron}. Each point along the $T(\xi_{\rm O})$ coexistence line yields an O concentration in the liquid ($x_{\rm O}^{\ell}$) and in the solid ($x_{\rm O}^{s}$) phases, tracing the liquidus and the solidus, respectively. The temperature is shown in units of the melting temperature of a pure C Yukawa plasma (here, $\Gamma=178$). Note that $T(\xi_{\rm O})$ depends on our choice of $\mu_e^{\rm ref}$ ($523.7\,{\rm keV}$ here), but the $T(x_{\rm O})$ lines do not.}
\label{fig:CO_explain}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{CO_MC_demo.pdf}
\caption{Evolution of the ion excess energy (${\cal U}_{\kappa}/N k_B T$), O number fraction ($x_{\rm O}$), and ion density ($N/V$) during one of the MC simulations used to map the C/O phase diagram. For this simulation, the plasma is in the liquid phase, $\xi_{\rm O}=0.7$, $T=470.48\,{\rm eV}$, and $P=10^{18}$\,bar. Other numerical parameters are specified in Table~\ref{tab:num_params}. The region in gray corresponds to the equilibration phase that we ignore when we compute the average enthalpies and concentrations needed to evaluate the Clapeyron equation.}
\label{fig:CO_MC_demo}
\end{figure}
Fig.~\ref{fig:CO} shows the resulting C/O phase diagram. The smoothness of our phase diagram illustrates the high level of accuracy achieved by our MC simulations. Moreover, at the end of our integration ($\xi_{\rm O}=x_{\rm O}=1$), we recover the melting temperature of a pure O plasma, $T_{m,{\rm O}} = \left[Z({\rm O}) / Z({\rm C}) \right]^{5/3} T_{m,{\rm C}} \simeq 1.62\, T_{m,{\rm C}}$. This result is not explicitly enforced by the Clapeyron integration method. It can only be achieved if the integration from $\xi_{\rm O}=0$ to $\xi_{\rm O}=1$ is accurate enough to recover this known limit.
\begin{figure}
\includegraphics[width=\columnwidth]{CO_PRE.pdf}
\caption{C/O phase diagram obtained via integration of the Clapeyron relation [Eq.~\eqref{eq:clapeyron}] using our plasma model described in Section~\ref{sec:theory} (in red). For comparison, we also show the C/O phase diagrams of refs.~\cite{medin2010,horowitz2010}. The temperature is shown in units of the melting temperature of a pure C Yukawa plasma (here, $\Gamma=178$). A similar figure was presented in ref.~\cite{blouin2020}.}
\label{fig:CO}
\end{figure}
As pointed out in the companion paper \cite{blouin2020}, our C/O phase diagram is close to that of Medin \& Cumming \cite{medin2010} (dashed lines in Fig.~\ref{fig:CO}). Both have a similar azeotrope shape, with azeotropic points at about the same concentrations. This is a remarkable result given the approximations underlying their phase diagram. Namely, the ion--ion interactions are not screened and the excess energy of the liquid phase is assumed to simply be the sum of the OCP energies of each ionic component (but note that Medin \& Cumming explore the impact of deviations from this linear mixing rule in their Appendix~B). Our C/O phase diagram is also similar to that of Horowitz \textit{et al.} \cite{horowitz2010}. Apart from the superior sampling made possible by our relatively inexpensive method, the main difference is that the separation between the liquidus and the solidus, $\Delta x_{\rm O}$, is slightly larger in our case (at concentrations higher than the azeotrope). As briefly discussed by Horowitz \textit{et al.}, $\Delta x_{\rm O}$ could be underestimated in their simulations due to finite-size effects that cause an artificial composition gradient across the liquid--solid interface. We can expect that their phase diagram would converge to something closer to ours if they used larger MD simulations.
The results shown in Fig.~\ref{fig:CO} were obtained at $P=10^{18}\,{\rm bar}$. We also computed additional versions of this phase diagram assuming different pressures. Consistent with our findings for the OCP (Appendix~\ref{sec:validation}), we found that the resulting phase diagram remains practically unchanged for the range of pressures that characterize white dwarf interiors.
\subsection{Analytic Fits to our C/O Phase Diagram}
\label{sec:fits}
To facilitate the implementation of our phase diagram in white dwarf codes, we provide analytic fits to the Coulomb coupling parameter at the melting temperature, $\Gamma_m (x_{\rm O}^{\ell})$, and to the separation between the liquidus and the solidus, $\Delta x_{\rm O} (x_{\rm O}^{\ell})$. The coupling parameter of the mixture is computed as
\begin{equation}
\Gamma=\frac{\langle Z ^{5/3} \rangle e^2}{a_e k_B T},
\end{equation}
with $\langle Z^{\alpha} \rangle = \sum_i Z_i^{\alpha} n_i /n$ and $a_e = (3/ 4\pi n_e)^{1/3}$.
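This definition of $\Gamma$ is straightforward to evaluate; the sketch below works in Gaussian (CGS) units with temperature in eV (the function name and unit choices are ours). It also makes explicit the $\langle Z^{5/3}\rangle$ scaling behind the pure-O melting temperature quoted above, $T_{m,{\rm O}}/T_{m,{\rm C}}=(8/6)^{5/3}$ at fixed $\Gamma_m$ and $n_e$:

```python
import math

def coupling_parameter(x, Z, kT_eV, n_e_cm3):
    """Gamma = <Z^{5/3}> e^2 / (a_e k_B T), with number fractions x_i = n_i/n,
    charges Z_i, k_B T in eV, and electron density in cm^-3 (CGS units)."""
    e2_erg_cm = 4.80320425e-10 ** 2                           # e^2 in erg*cm
    erg_per_eV = 1.602176634e-12
    z53 = sum(xi * Zi ** (5.0 / 3.0) for xi, Zi in zip(x, Z))
    a_e = (3.0 / (4.0 * math.pi * n_e_cm3)) ** (1.0 / 3.0)    # electron-sphere radius, cm
    return z53 * e2_erg_cm / (a_e * kT_eV * erg_per_eV)
```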
Both $\Gamma_m (x_{\rm O}^{\ell})$ and $\Delta x_{\rm O} (x_{\rm O}^{\ell})$ can be accurately fitted with a fifth-order polynomial,
\begin{equation}
\sum_{i=0}^5 a_i (x_{\rm O}^{\ell})^i,
\label{eq:fit}
\end{equation}
where the coefficients $a_i$ are given in Table~\ref{tab:fit}. Our fit to $\Gamma_m (x_{\rm O}^{\ell})$ is shown in Fig.~\ref{fig:gammam_fit}. Note that the fit recovers the known limits $\Gamma_m (x_{\rm O}^{\ell}=0) = \Gamma_m (x_{\rm O}^{\ell}=1) = 178$ for a one-component Yukawa system with a screening parameter typical of white dwarf interiors ($\kappa a \simeq 0.35$) \cite{horowitz2010,hamaguchi1996,vaulina2002}. Fig.~\ref{fig:deltax_fit} shows our fit to $\Delta x_{\rm O} (x_{\rm O}^{\ell})$. The fit is such that $\Delta x_{\rm O} (x_{\rm O}^{\ell})=0$ in the pure C and pure O limits, as well as at the azeotropic point ($x_{\rm O}^{\ell} \approx 0.18$). Both fits reproduce the simulations accurately within the statistical noise.
\begin{table}
\caption{\label{tab:fit} Fit parameters for $\Gamma_m (x_{\rm O}^{\ell})$ and $\Delta x_{\rm O} (x_{\rm O}^{\ell})$ [Eq.~\eqref{eq:fit}].}
\begin{ruledtabular}
\begin{tabular}{lrr}
& $\Gamma_m (x_{\rm O}^{\ell})$ & $\Delta x_{\rm O} (x_{\rm O}^{\ell})$\\
\hline
$a_0$ & 178.000000 & 0.000000 \\
$a_1$ & 167.178104 & $-$0.311540\\
$a_2$ & $-$3.973461 & 2.114743\\
$a_3$ & $-$741.863826 & $-$1.661095\\
$a_4$ & 876.516929 & $-$1.406005 \\
$a_5$ & $-$297.857813 & 1.263897 \\
\end{tabular}
\end{ruledtabular}
\end{table}
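With the coefficients of Table~\ref{tab:fit}, the fits of Eq.~\eqref{eq:fit} are evaluated as below (the function and variable names are ours). The known limits quoted in the text follow directly: $\Gamma_m=178$ at $x_{\rm O}^{\ell}=0$ and 1, and $\Delta x_{\rm O}=0$ in both pure limits:

```python
def fit_value(x, a):
    """Evaluate the fifth-order polynomial fit sum_i a_i x^i at liquid concentration x."""
    return sum(ai * x ** i for i, ai in enumerate(a))

# Coefficients from the fit-parameter table
A_GAMMA_M = [178.000000, 167.178104, -3.973461, -741.863826, 876.516929, -297.857813]
A_DELTA_X = [0.000000, -0.311540, 2.114743, -1.661095, -1.406005, 1.263897]
```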
\begin{figure}
\includegraphics[width=\columnwidth]{gammam_fit.pdf}
\caption{Coulomb coupling parameter at which the C/O liquid with a concentration $x_{\rm O}^{\ell}$ crystallizes. Results from our Clapeyron integration are shown in black and the analytic fit given by Eq.~\eqref{eq:fit} is in red.}
\label{fig:gammam_fit}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{deltax_fit.pdf}
\caption{Difference between the O concentration of the coexisting solid and liquid phases as a function of the O concentration of the liquid at the phase transition. In black, we show the results taken directly from Fig.~\ref{fig:CO} and, in red, we show our analytic fit [Eq.~\eqref{eq:fit}].}
\label{fig:deltax_fit}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
We have presented how the Clapeyron integration method, in conjunction with isobaric semi-grand canonical MC simulations, can be used to map the phase diagrams of dense multicomponent plasmas. This technique has many advantages compared to competing approaches: (1) all calculations are performed directly at the coexistence conditions (no analytic fits required and no uninteresting state points to simulate); (2) no phase transition interface needs to be modeled (thereby greatly simplifying the mitigation of finite-size effects); (3) no particle insertions/deletions are required; (4) all thermodynamic properties of the system at the phase transition are available at no additional cost; (5) the underlying MC simulations allow a fine sampling of the coexistence line at a reasonable cost; (6) the electronic background is explicitly included; and (7) all calculations are performed at constant pressure, as is appropriate for phase equilibrium. As an example application, we have computed the phase diagram of a dense C/O plasma under conditions relevant for white dwarf interiors. Our results are in good agreement with previous calculations and we have provided analytic fits to facilitate the implementation of this new, accurate C/O phase diagram in existing white dwarf evolution codes.
This paper has focused on applications to dense, two-component electron--ion plasmas. However, the Clapeyron integration method can in principle be used for a much wider range of systems. In planetary science for instance, it could prove useful to tackle problems such as the demixing of H/He mixtures in the interiors of giant planets \citep{morales2009,lorenzen2009} or the melting temperature of iron in Earth's core \citep{laio2000,alfe2009}. In the near future, we plan to apply this technique to $c>2$ component dense plasmas and to generalize it to metallic alloys.
\acknowledgements
The authors are indebted to Didier Saumon for insightful discussions from which both the conception and execution of this project have benefited. Research presented in this article was supported
by the Laboratory Directed Research and Development program of Los
Alamos National Laboratory under project number 20190624PRD2.
This work was performed under the auspices of the U.S. Department of Energy
under Contract No. 89233218CNA000001.
\section{Introduction}
\label{sec:intro}
In cellular wireless systems, mobility is achieved through the handover (HO) mechanism. This enables UEs to move seamlessly within the coverage area of the network. The HO mechanism involves transferring an ongoing session from one cell to another. A UE in the network is either in idle or connected mode. In idle mode, the UE simply camps on a cell and does not have any active signaling or data-bearers to the base stations (BSs). However, in connected mode, the BS allocates resources to the UE and there is active signaling on the data and control channels. In this paper, we describe a novel technique for connected-state intra-frequency HOs in the 5G context. In typical cellular networks, UEs continuously monitor the signal strengths of the serving and neighbor cells and report them to the serving base station. To illustrate this, consider a UE moving away from the serving cell near the cell edge. As shown in Fig.~\ref{fig:handover}, when the serving cell reference signal received power (RSRP) decreases below the acceptable level, and the neighbor cell RSRP is higher than the serving cell by a threshold (hysteresis value), the serving BS initiates a HO. The RSRP measurements are typically done on the downlink reference signals. This algorithm is discussed in more detail in \cite{TS36331,handover-book}. The hysteresis value ($\Delta$) along with the time-to-trigger\footnote{The duration for which the target cell RSRP is above the serving cell by $\Delta$ (refer to Fig.~\ref{fig:handover}).} ($\Gamma$) is used to overcome the ping-pong effect.
\subsection{Related Work}
\label{ss:rw}
Several algorithms exist that compute HO parameters such as the time-to-trigger ($\Gamma$) and the hysteresis value ($\Delta$) optimally. The algorithms in \cite{Leu-1} and \cite{Leu-2} discuss methods to overcome the ping-pong effect during HO. Optimization of HO between macro and femto BSs by exploiting UE information such as velocity, RSSI, etc. is discussed in \cite{Wu}.
Machine learning has been proposed for several HO optimization problems. In hybrid cellular networks consisting of drone and terrestrial UEs, the main-lobe of a BS antenna is down-tilted to serve terrestrial UEs. This results in drone UEs being frequently served by the side-lobes of the BS antennas \cite{Drone1}. This creates a fragmented coverage area served by different BSs, thus increasing the radio link failures (RLFs) and ping-pongs \cite{Drone2}. In \cite{DroneMobility}, the authors propose an RL based mobility model for drones. The proposed model learns the fragmented 3D-coverage described in \cite{Drone2}, while trading off throughput, ping-pong, and RLFs. In \cite{mmWaveMobility}, the authors address reliability and latency in terrestrial millimeter-wave (mmWave) mobile systems based on channel blockage prediction. In \cite{igorHandover}, the authors propose a reinforcement learning based approach for HO optimization. There, the selection of thresholds for handover parameters such as $\Delta$ and $\Gamma$ forms the ``action'', with the reward derived from throughput values aggregated over some duration.
\subsection{Contribution}
While previous works on HO optimization using machine learning have focused on specific use-cases such as drones, railways, etc., or considered specific channel aspects such as mmWave blocking, there is, to the best of our knowledge, no work that considers handover optimization exploiting the deployment aspects of 5G. In typical 5G stand-alone (SA) deployments, control and synchronization are carried on much wider (large beam-width) beams called access-beams, while data for a connected UE is carried on much narrower beams (due to beamforming) called link-beams. In typical 5G systems, the coverage of the access-beams differs from that of the link-beams. The link-beams are typically on mmWave with much narrower beam-widths and sometimes penetrate deep into the neighbor cells, while the access-beams are in mid-band with wide beam-widths. In all the state-of-the-art HO algorithms discussed above, the HO decisions are based on measurements done on the access-beams. In dense 5G deployments, a connected-state UE can receive access-beams from multiple BSs with sufficient power to perform the initial entry procedure. In contrast to prior works, we propose a methodology to formulate the HO procedure as a sub-class of reinforcement learning problems called the contextual multi-arm bandit (CMAB) problem. The CMAB agent hands over UEs opportunistically such that their average link-beam gain (and hence throughput) is maximized. In the proposed system, the serving BS collects measurement reports containing beam RSRP measurements from UEs as before; however, it does not decide on the HO itself but forwards the UE measurement reports to a centralized CMAB agent, which chooses the HO action. We demonstrate the utility of such a formulation through simulations.
\begin{figure}[t]
\centering
\fbox{\includegraphics[width=3.2 in]{handover}}
\caption{Illustration of the HO mechanism in cellular networks}
\label{fig:handover}
\end{figure}
\section{Reinforcement Learning}
\label{sec:rl}
In this Section, we briefly describe the reinforcement learning (RL) framework. RL is an evaluative feedback based learning paradigm, as shown in Fig.~\ref{fig:rl}. Here the agent learns the optimal action in a given situation by trial and error. This is achieved through exploration and exploitation. During exploitation, the agent takes the actions that yield maximum reward, while during exploration the agent takes actions that may not yield maximum reward instantaneously but help it discover newer actions that are profitable in the longer run.
Markov decision process (MDP) is often employed to describe RL. It is characterized by a tuple $\left\{ \mathcal{S},\mathcal{A},\mathcal{P},\mathcal{R}\right\}$, where $\mathcal{S}$ and $\mathcal{A}$ denote the sets of possible states and actions respectively, $\mathcal{P}$ denotes the transition probabilities of the states when a particular action is taken, and $\mathcal{R}$ denotes the reward function. An MDP can be solved to obtain an optimal policy, where a policy is defined as the action to be taken at each state to maximize the reward.
\begin{figure}
\centering
\fbox{ \includegraphics[width = 3.2 in]{rl}}
\caption{Reinforcement learning framework}
\label{fig:rl}
\end{figure}
A multi-arm bandit (MAB) problem is a variant of the RL problem where the actions taken by the agent do not alter the operating environment. The problem involves identifying, by trial and error, the best arm among the several arms of a multi-arm bandit whose reward distributions are unknown. The contextual multi-arm bandit (CMAB) problem is an extension of the MAB problem in which the agent associates a context or state with the MAB. Depending on the context, a particular arm yields the maximum average reward. The task of the agent is to learn the relationship between context, arm, and reward, so that it can predict the best arm to play for a given context vector. This is further illustrated in the $3$-context CMAB example shown in Fig.~\ref{fig:banditMarkov}, where for each context $s_i, i\in\{1,2,3\}$, a particular action $a_i,i\in\{1,\ldots,4\}$ yields a better average reward.
In many practical RL problems, the models defining $\mathcal{P}$ and $\mathcal{R}$ are not available. In these problems, an optimal policy can still be derived using the Q-learning algorithm. In the Q-learning method, the Q-value for a policy, $\pi(a|s)$, is defined as the expected long-term reward when the agent takes an action $a$ at state $s$ and follows the policy $\pi$ thereafter. With an iterative process of exploration and exploitation, the agent can learn the optimal Q-values, $Q^*(s,a)$. The optimal policy is to take the action, $a$, which maximizes the Q-values for each state. The $\epsilon$-greedy algorithm for Q-learning is described in Algorithm~\ref{alg:cmab}. Here, the agent explores by taking a random action with probability $\epsilon$ and exploits by following the optimal policy with probability $(1-\epsilon)$ \cite{Q-Learning}.
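Algorithm~\ref{alg:cmab} can be sketched in a few lines for a finite context set. The sketch below (our own naming and packaging, with the Q-table keyed by context--arm pairs and updated as an incremental average) trains on a toy environment supplied by the caller:

```python
import random

def train_cmab(contexts, n_arms, reward, steps, eps, rng):
    """Tabular epsilon-greedy CMAB training loop in the spirit of Algorithm 1:
    q holds the running average reward for every (context, arm) pair."""
    q = {(s, a): 0.0 for s in contexts for a in range(n_arms)}
    n = {(s, a): 0 for s in contexts for a in range(n_arms)}
    for _ in range(steps):
        s = rng.choice(contexts)                        # environment supplies a context
        if rng.random() < eps:                          # explore: random arm
            a = rng.randrange(n_arms)
        else:                                           # exploit: greedy arm
            a = max(range(n_arms), key=lambda arm: q[(s, arm)])
        r = reward(s, a)
        n[(s, a)] += 1
        q[(s, a)] += (r - q[(s, a)]) / n[(s, a)]        # incremental average
    return q
```

With $\epsilon=1$ (pure exploration, as in an offline training phase) and a deterministic reward, the greedy arm recovered from the learned Q-table matches the best arm for every context.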
\begin{algorithm}[t]
\label{alg:Qlearning}
\DontPrintSemicolon
\caption{Q-Learning with $\epsilon$-greedy algorithm for CMAB}
\label{alg:cmab}
\KwData{$Q(s,a)\gets \mb{Initialized with random rewards}$}
\KwResult{$Q^*(s,a) \gets \mb{Optimized Q-Table}$}
$t\gets 0, s_0=s_{\mb{\scriptsize init}}$ //At time-step 0\;
\While{$t<\mb{MAX\_STEP}$}{
$g=\mb{rand()}$ //Random number in range [0,1] \;
\eIf{$g<\epsilon$} {
$a_t= \mb{rand(}\mathcal{A}\mb{)}$ //Take random action, $a_t\in\mathcal{A}$
} {
$a_t = \underset{a}{\mb{argmax}} \{Q(s_{t},a)\}$
}
//Q-Table updated with average reward for $(s_t,a_t)$ combination\;
$Q(s_t,a_t)\gets \frac{1}{T} \sum_{0}^{t}r(s_t,a_t) $// $T$ indicates number of times action, $a_t$ is taken at state, $s_t$ up to time-step $t$\;
$t \gets t+1$ \;
}
\end{algorithm}
\begin{figure}
\centering
\fbox{ \includegraphics[width = 2.5 in]{CMAB_markov}}
\caption{CMAB as an extension of MAB. In CMAB the best arm depends on the state of the bandit.}
\label{fig:banditMarkov}
\end{figure}
\begin{figure}
\centering
\fbox{ \includegraphics[width = 2.5 in]{beams}}
\caption{An illustration of a UE moving from position $x_1$ to $x_2$ in a $2$-Node network with $1$ access-beam and $4$ link-beams.}
\label{fig:beams}
\end{figure}
\begin{figure}[t]
\fbox{ \includegraphics[width = 3 in]{HOSignaling}}
\caption{HO process using centralized CMAB agent. }
\label{fig:HOSignaling}
\end{figure}
\section{System Aspects}
\label{sec:cmab}
In 5G, there exist two possible deployments, stand-alone (SA) and non-standalone (NSA). In the NSA deployment, long-term evolution (LTE) is used for cell acquisition and control, while the data is transferred using the new-radio (NR). In NR SA deployment, there are no cell specific reference signals; instead, cell acquisition is performed via synchronization signal block (SSB) beams. In this paper, we call the beams used for cell acquisition access-beams. In 5G NR SA deployments in mmWave bands, the data is typically carried on narrow beams called link-beams. An example with $2$ BSs, each with $1$ access-beam and $4$ link-beams, is shown in Fig.~\ref{fig:beams}. Note that in Fig.~\ref{fig:beams}, the access and link-beams are indicated by $a_{ij}$ and $l_{ij}$ respectively, with $i$ and $j$ indicating the BS-id and beam numbers. In the state-of-the-art HO algorithms, the wide access-beams are used in the HO inference. However, due to the dense deployment, the access-beams of many BSs can be strong enough to support a HO. Instead of using only the access-beam RSRP for HO inference, performance can be improved by opportunistically choosing, among the candidate BSs with sufficient access-beam power to perform initial entry, the BS with the higher link-beam gain. This can be accomplished using a CMAB with the link-beam RSRP after the HO action as the reward, as further explained below.
In this paper, we propose an architecture where a centralized CMAB agent performs the HO inference. The signaling involved in the process is shown in Fig.~\ref{fig:HOSignaling}. The measurements from the UEs are forwarded to the centralized CMAB agent by the serving BS. The context for the CMAB agent consists of the access-beam measurements for the serving and neighbor BSs together with the serving BS-id. Each BS can be considered an arm. The CMAB action, i.e., pulling an arm of the bandit, is analogous to choosing an appropriate BS to HO to or staying in the current BS. The goal is to select the action in a given context that maximizes the expected reward. We consider the RSRP of the link-beam after HO as the reward. Since the link-beam RSRP is proportional to the throughput of the UE after HO, the HO inference from the CMAB tries to maximize the throughput.
No special measurements or signaling are needed for this method; the traditional 3GPP signaling for HO between the BS and UE, as shown in Fig.~\ref{fig:HOSignaling}, can be reused. Apart from the RSRP measurements of the access-beams of the serving and neighbor cells, the context for the CMAB agent could also include location, speed, antenna setup, etc. Different reward configurations, such as the downlink throughput or the SINR of the link-beam after HO, can also be considered. Though we have implemented the CMAB using Q-learning in this work, it can also be implemented using algorithms such as neural networks, random forests, etc.
\section{Algorithms}
\label{sec:alg}
The most common 5G HO method is based on the RSRP measurements of access-beams. The essence of this algorithm is that a HO is triggered by the serving BS when the access-beam RSRP of the target BS is higher than the RSRP of the serving BS by a hysteresis value for a duration greater than the time-to-trigger parameter. This algorithm runs in every BS to infer whether a particular UE needs a HO. The method is briefly described in Algorithm~\ref{alg:acc}.
\begin{algorithm}[t]
\DontPrintSemicolon
\caption{HO Algorithm using Access-Beams}
\label{alg:acc}
\small
\KwIn{Measurement Report, $\RPT$ }
\KwOut{Base station to Handover, $n$}
$\ACCNBR \gets \getNbr(\RPT)$ \;
$\ACCSER \gets \getSer(\RPT)$ \;
$\BMAX \gets \argmax(\ACCNBR)$ \;
$\TTTVAL \gets \getTTTValue()$ \;
\uIf{$\BMAX < \ACCSER + \Delta$}{
$n \gets \getServingNode()$\;
}
\uElseIf{$\BMAX > \ACCSER + \Delta$ and $\TTTVAL < \Gamma$}{
$\startTTTtimer()$\;
$n \gets \getServingNode()$\;
}
\Else{
$n \gets \getNodeId(\BMAX)$ \;
}
\Return{n}
\end{algorithm}
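The hysteresis-plus-TTT rule of Algorithm~\ref{alg:acc} can be replayed over a sequence of measurement reports as sketched below. The function name and the report format are ours, and for simplicity the TTT is counted in reporting periods rather than in absolute time:

```python
def replay_ho(reports, serving, delta, ttt):
    """Replay reports of the form (serving_rsrp, {nbr_id: rsrp}) through the
    hysteresis (delta, dB) + time-to-trigger (ttt, in reports) rule."""
    timer = 0
    for ser_rsrp, nbrs in reports:
        best_id, best_rsrp = max(nbrs.items(), key=lambda kv: kv[1])
        if best_rsrp > ser_rsrp + delta:
            timer += 1
            if timer >= ttt:
                return best_id          # HO to the strongest neighbor
        else:
            timer = 0                   # condition broken: reset the TTT timer
    return serving                      # stay on the serving BS
```

With a hysteresis of 3\,dB and a TTT of 3 reports, a neighbor that exceeds the threshold in only 2 consecutive reports does not trigger a HO, illustrating how the two parameters jointly suppress ping-pongs.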
\begin{figure}[t]
\centering
\fbox{\includegraphics[width=2.8 in]{env_new}}
\caption{The block diagram to illustrate the performance evaluation setup}
\label{fig:env}
\end{figure}
\begin{figure*}[t!]
\centering
\fbox{\includegraphics[width=5.5 in]{RF_environment}}
\caption{The beam power density in a 2D area for different environments used in the performance evaluation. (A) shows the access-beam setup ($P_a=\max(p_{a_{ij}}), i\in[1,\ldots,7], j\in[1,2,3], \mbox{since there are 7 BS and each having 3 access-beams} $), while (B) and (C) shows link-beam energy distributions for Environment-$1$ and Environment-$2$. (D) shows the link-beam setup for the Environment-$3$}
\label{fig:rf}
\end{figure*}
In this paper, we employ Q-learning method discussed in Algorithm \ref{alg:Qlearning}. The access-beam RSRP together with serving cell-ID forms the ``context/state'' (refer to Fig.~\ref{fig:beams}), target BS to HO forms ``action'', and received RSRP of the link-beam after the HO forms ``value/reward''\footnote{Value and reward are interchangeably used}.
During the training phase, we set $\epsilon=1$ in Algorithm \ref{alg:Qlearning} and the UEs are made to take random walks in a 2D radio environment. They report the access-beam RSRP measurements of the serving and neighbor cells, which form the context. A Q-Table is built by trying random actions for the received states (contexts)\footnote{State and context are interchangeably used} and recording the observed reward (link-beam RSRP after HO). During the active or online phase\footnote{Active phase is used to denote the exploitation mode of the RL based HO algorithm}, we exploit the built Q-Table for the optimal policy by
\begin{equation}
\label{eq:optpolicy}
a_t = \underset{a}{\mb{argmax}} \{Q(s_{t},a)\}.
\end{equation}
This separation of the offline training phase ($\epsilon=1$) from the online exploitation phase ($\epsilon=0$) makes the proposed solution practical from the 5G operational perspective by preventing the CMAB agent from taking catastrophic HO actions during the active or live phase.
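A compact Python sketch of this two-phase scheme (the Q-Table layout, the sample interface, and the reward callback are illustrative assumptions, not the exact emulator interface):

```python
import random

def train_q_table(samples, actions, epsilon=1.0):
    """Offline phase: for each (context, reward_fn) sample, pick an action
    (uniformly at random when epsilon=1) and record the observed reward,
    i.e. the link-beam RSRP after the HO; keep the best value seen."""
    q = {}
    for context, reward_fn in samples:
        if random.random() < epsilon:
            action = random.choice(actions)                      # explore
        else:
            action = max(actions, key=lambda a: q.get((context, a), float("-inf")))
        reward = reward_fn(action)
        q[(context, action)] = max(q.get((context, action), float("-inf")), reward)
    return q

def exploit(q, context, actions):
    """Online phase (epsilon=0): greedy policy a = argmax_a Q(s, a)."""
    return max(actions, key=lambda a: q.get((context, a), float("-inf")))
```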
The access-beam RSRP values from the serving and neighbor cells, which form the context, are continuous variables, and it is not possible to store all possible states/contexts in the Q-Table. Only those states observed during the random walk are stored in the Q-Table. As long as the random walks of the UEs during the training phase are sufficiently long, a good representation of the possible contexts is observed and the corresponding state-action values are captured in the Q-Table. During the active phase, a similarity function based on the minimum Euclidean distance between the Q-Table contexts and the newly received context from the measurement report can be used to choose actions from the Q-Table. This is shown in \eqref{eq:euclid}.
\begin{equation}
\label{eq:euclid}
\mbf{c}^{\prime} = \underset{\mbf{c}\in\mathcal{Q}}{\mb{argmin}}\norm{\mbf{c}-\mbf{p}^{\prime}},
\end{equation}
where $\mbf{p}^{\prime}$ denotes the received context comprising the access-beam RSRP measurements and the serving-cell ID during the active phase, and $\mbf{c}^{\prime}$ denotes the context in the Q-Table ($\mathcal{Q}$) with minimum Euclidean distance to $\mbf{p}^{\prime}$.
The choice of the BS during the active phase is given by
\begin{equation}
\label{eq:val}
i^{*}=\underset{i}{\argmax}\{V_Q \left (\mbf{c}^{\prime},C_i \right )\},
\end{equation}
where $V_Q (\mbf{c}^{\prime},C_i )$ denotes the value/reward\footnote{Value in the Q-Table is derived from the link-beam power obtained after HO} for the ``action'' of choosing the BS $C_i$ for HO from the Q-Table for the context $\mbf{c}^{\prime}$, and $i^{*}$ denotes the index of the BS with maximum reward.
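The two-step inference above, nearest stored context by Euclidean distance followed by the action with maximal recorded value, can be sketched as follows (the Q-Table layout is an illustrative assumption):

```python
import math

def infer_handover(q_table, report):
    """Nearest stored context by Euclidean distance, then the BS whose
    recorded value for that context is largest; `q_table` maps context
    tuples to {bs_id: value} dicts (an illustrative layout)."""
    nearest = min(q_table, key=lambda c: math.dist(c, report))
    values = q_table[nearest]
    return max(values, key=values.get)
```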
Below, we illustrate the difference between the 3GPP method based on access-beam RSRP discussed in Algorithm~\ref{alg:acc} and the proposed CMAB method using a simple example. Consider a network with two BSs in which a UE served by BS-2 moves from $x_1$ to $x_2$, as shown in Fig.~\ref{fig:beams}. At $x_2$, the method described in Algorithm~\ref{alg:acc} will choose to stay in BS-2, as the access-beam RSRP of BS-2 is stronger than that of BS-1. However, the proposed CMAB agent would choose to HO to BS-1 because of the larger reward, since $l_{12}>\underset{j}{\mb{max}}(l_{2j}),\ j\in\{1,\ldots,4\}$.
\section{Results}
\label{sec:sim}
In this Section, we compare the proposed method against the 3GPP access-beam based method discussed in Algorithm~\ref{alg:acc} for three distinct deployment setups. Environment-$1$ and Environment-$2$ are based on synthetic data generated from a system emulator with different configurations of access and link-beams; these configurations are shown in Fig.~\ref{fig:env}. For propagation, we used a simple path-loss model with a path-loss exponent of $3.1$. Environment-$3$ consists of $7$ roof-top sites with $21$ BSs, each BS having $1$ access-beam and $8$ link-beams. The propagation model for Environment-$3$ is ITU standard based, using the WINNER urban-macro (UMA) propagation model, and is inspired by the city environments of Tokyo and Seoul \cite{winner}. The resulting RF beam patterns for the three environments are shown in Fig.~\ref{fig:rf}.
As explained in the previous section, we build a Q-Table during an offline training phase by setting $\epsilon=1$ in the Q-learning method discussed in Algorithm~\ref{alg:Qlearning}. During this phase, the CMAB agent takes random actions (HO to a random BS) for the measurement reports (contexts) reported by the UE while it performs a random walk in the 2-D coverage area of the network. The actions that yielded the maximum reward (link-beam RSRP after HO) in a given state are retained in the Q-Table after the training phase and are used for inference during the active phase.
To assess the performance in all three environments, we employ a semi-deterministic mobility model in which UEs take steps in the vertical direction; when a UE hits the edge of the network, it relocates randomly to a different X-position, and the whole process repeats. In each step, the measurement is sent to the CMAB agent, which exploits the Q-Table using \eqref{eq:optpolicy} to make an inference on the HO decision\footnote{The 5G periodic measurement reporting strategy from UE to BS is employed here}. Every $10000$ steps forms an episode in our CMAB formulation. We assess the performance using the following:
\begin{itemize}
\item Average received link-beam power, $E\left(P_l\right)$ per episode
\item Probability density function (PDF) of the received link-beam power, $p(P_l)$.
\end{itemize}
We compare the above metrics with the access-beam based method having $\Delta$ and $\beta$ set to $0$. This ensures a fair comparison of the 3GPP access-beam based algorithm with the proposed RL algorithm, since the RL algorithm does not penalize ping-pong during HO. We define the gain, $\mathcal{G}$, as the increase in the average link-beam power obtained by using CMAB, defined as
\begin{equation}
\label{eq:G}
\mathcal{G}=\{E\left(P_l\right)\}_{\mbox{\scriptsize Algorithm-}\ref{alg:Qlearning}}-\{E\left(P_l\right)\}_{\mbox{\scriptsize Algorithm-}\ref{alg:acc}},
\end{equation}
where Algorithm-\ref{alg:acc} is the 3GPP HO algorithm based on access-beam RSRP and Algorithm-\ref{alg:Qlearning} is the proposed CMAB based HO algorithm. The gain, $\mathcal{G}$, for different episodes with different initialization points is shown in Fig.~\ref{fig:perfA}.
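The gain $\mathcal{G}$ is simply a difference of per-episode averages of the received link-beam power; as a minimal sketch:

```python
def episode_gain(p_l_cmab, p_l_3gpp):
    """Gain G: average link-beam RSRP with the CMAB policy minus the
    average under the access-beam baseline, over one episode."""
    return sum(p_l_cmab) / len(p_l_cmab) - sum(p_l_3gpp) / len(p_l_3gpp)
```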
\begin{figure}[t]
\centering
\fbox{\includegraphics[width=3 in]{performanceA}}
\caption{The gain, $\mathcal{G}$, for 10 different episodes. Each episode constitutes $10000$ UE steps.}
\label{fig:perfA}
\end{figure}
\begin{figure}[t]
\centering
\fbox{\includegraphics[width=3 in]{performanceB}}
\caption{The PDF of gain, $\mathcal{G}$ for different environments.}
\label{fig:perfB}
\end{figure}
\begin{figure}[t]
\centering
\fbox{\includegraphics[width=3 in]{performanceC}}
\caption{The PDF of link-beam RSRP, $P_l$, for all three environments.}
\label{fig:perfC}
\end{figure}
Notice from Fig.~\ref{fig:perfA} that the gain, $\mathcal{G}$, is positive for all the episodes in all three environments.
The PDF of $\mathcal{G}$ is shown in Fig.~\ref{fig:perfB}. Notice that the gain in Environment-$2$ is larger than in Environment-$1$; this is due to the higher opportunity for the RL based handover to pick better BSs, as the link-beams in this environment are narrow and penetrate deep into the neighbor cells (refer to Fig.~\ref{fig:rf}). For Environment-$3$, which is based on the WINNER UMA propagation model, the results indicate a gain between $0.3$ and $0.5~\mb{dB}$. The PDFs of the link-beam RSRP, $P_l$, experienced by the UE under both algorithms are shown in Fig.~\ref{fig:perfC}. Notice that the distribution $p(P_l)$ is shifted to the right for the proposed CMAB method, indicating an improvement in link-beam performance. Since the link-beams carry physical downlink shared channel (PDSCH) data, the improvement in link-beam RSRP will increase the downlink throughput for the UE. The magnitude of the improvement depends on, among other things, the channel conditions and the interference perceived by the UE.
\section{Conclusion and Discussion}
\label{sec:conclusion}
In this paper, we proposed a HO algorithm for 5G systems using RL. We showed that the HO problem can be posed as a sub-class of RL problems called CMAB, and how such a system can be developed using a Q-learning method. We also discussed mitigation strategies for some design challenges, such as state-space explosion, by building a Q-Table with representative states during the training phase and by choosing a suitable similarity function to pick the closest state in Euclidean space during the active phase for HO inference.
We assessed the performance for different deployment and propagation environments including an ITU standard based one. We demonstrated the utility of the method through average link-beam performance using a semi-deterministic mobility model in three distinct environments. In all the considered environments, the proposed method of this paper performs better than the existing methods. The results also indicate that when the link-beams are narrow and penetrate deep into the neighbor cells, which will be common in dense 5G cellular deployments in mmWave band, the RL based HO algorithm performs better due to the increased opportunity to optimize long-term link gains.
\section*{Acknowledgements}
We would like to thank Ankit Jauhari and Mirsad Cirkic of Ericsson Research for sharing their insights. The discussions with them on Q-Learning and 5G deployments were helpful in improving the paper.
Bounded cohomology of discrete groups was introduced into geometry by Gromov \cite{Gromov/Volume-and-bounded-cohomology}. The theory was subsequently extended to locally compact second countable groups by Burger and Monod \cite{Burger/Bounded-cohomology-of-lattices-in-higher-rank-Lie-groups,Burger/Continuous-bounded-cohomology-and-applications-to-rigidity-theory,Monod/Continuous-bounded-cohomology-of-locally-compact-groups}, who coined the term \emph{continuous bounded cohomology}. Bounded cohomology has by now proved itself an indispensable tool in geometry, topology and group theory, see for example the references surveyed in \cite{Hartnick/Bounded-cohomology-via-partial-differential-equations-I}. Nevertheless, the structure of the bounded cohomology ring of a given group is in general not very well understood. Existing results are chiefly concerned with bounded cohomology in low degrees, most notably bounded cohomology in degree $2$, which is intimately linked with quasi-morphisms (e.g.~\cite{Brooks/Some-remarks-on-bounded-cohomology,Grigorchuk/Some-results-on-bounded-cohomology,Epstein/The-second-bounded-cohomology-of-word-hyperbolic-groups,Bestvina/Bounded-cohomology-of-subgroups-of-mapping-class-groups,Burger/Bounded-cohomology-of-lattices-in-higher-rank-Lie-groups,Burger/Continuous-bounded-cohomology-and-applications-to-rigidity-theory,BestvinaBrombergFujiwara/Bounded-cohomology-with-coefficients-in-uniformly-convex-Banach-spaces,HullOsin/Induced-quasicocycles-on-groups-with-hyperbolically-embedded-subgroups,AntolinMjSistoTaylor/Intersection-properties-of-stable-subgroups-and-bounded-cohomology,HartnickSisto/Bounded-cohomology-and-virtually-free-hyperbolically-embedded-subgroups}), and bounded cohomology in degree $3$, which has close ties with the geometry of $3$-manifolds 
(e.g.~\cite{Brooks/Some-remarks-on-bounded-cohomology,Soma/Bounded-cohomology-and-topologically-tame-Kleinian-groups,Soma/Bounded-cohomology-of-closed-surfaces,Soma/The-zero-norm-subspace-of-bounded-cohomology,Farre/Bounded-cohomology-of-finitely-generated-Kleinian-groups,Farre/Relations-in-bounded-cohomology,Burger/On-and-around-the-bounded-cohomology-of-SL2,Pieters/Continuous-cohomology-of-the-isometry-group-of-hyperbolic-space-realizable-on-the-boundary,FranceschiniFrigerioPozzettiSisto/The-zero-norm-subspace-of-bounded-cohomology-of-acylindrically-hyperbolic-groups}). Bounded cohomology in higher degrees, on the contrary, is still largely unexplored. There is a number of known bounded cohomology classes in higher degree, often emerging from explicit geometric constructions (e.g.~ \cite{Dupont/Bounds-for-characteristic-numbers-of-flat-bundles,Thurston/Three-dimensional-geometry-and-topology.-Vol.-1,Gromov/Volume-and-bounded-cohomology,Goncharov/Geometry-of-configurations-polylogarithms-and-motivic-cohomology,Mineyev/Bounded-cohomology-characterizes-hyperbolic-groups,Bucher-Karlsson/Finiteness-properties-of-characteristic-classes-of-flat-bundles,Lafont/Simplicial-volume-of-closed-locally-symmetric-spaces-of-non-compact-type,Bucher-Karlsson/Simplicial-volume-of-locally-symmetric-spaces-covered-by-rm-SLsb-3Bbb-R/rm-SO3,Bucher/The-norm-of-the-Euler-class,Hartnick/Surjectivity-of-the-comparison-map-in-bounded-cohomology-for-Hermitian-Lie-groups,Hartnick/Bounded-cohomology-via-partial-differential-equations-I,BucherMonod/The-cup-product-of-Brooks-quasimorphisms,Heuer/Cup-Product-in-Bounded-Cohomology-of-the-Free-Group}). On the other hand, a classical result due to Johnson \cite{Johnson/Cohomology-in-Banach-algebras} asserts that the bounded cohomology of an amenable group vanishes in all positive degrees. 
Moreover, L\"oh \cite{Loh/A-note-on-bounded-cohomological-dimension-of-discrete-groups} recently found non-amenable groups whose bounded cohomology with trivial real coefficients vanishes in all positive degrees, and Bucher and Monod \cite{BucherMonod/The-bounded-cohomology-of-SL2-over-local-fields-and-S-integers} proved a similar statement for $\mathrm{SL}_{2}$ over non-Archimedian local fields. These latter results have in common that the bounded cohomological dimension of the respective group is zero. In fact, it is presently not known if there exists any group with non-zero finite bounded cohomological dimension. In a different direction, Monod \cite{Monod/On-the-bounded-cohomology-of-semi-simple-groups-S-arithmetic-groups-and-products,Monod/Vanishing-up-to-the-rank-in-bounded-cohomology} proved vanishing in degree below twice the rank for the bounded cohomology of non-amenable semisimple groups with non-trivial coefficients.
\medskip
Our goal in this article is to initiate a systematic study of bounded cohomology in large degree. In view of the following conjecture of Monod
it is natural to focus attention, for the time being, on the continuous bounded cohomology of Lie groups with trivial real coefficients. The conjecture also suggests what the precise meaning of large degree should be in this case, as we will readily see.
\begin{conjecture*}[Monod \cite{Monod/An-invitation-to-bounded-cohomology}]
Let $G$ be a connected semisimple Lie group with finite center. Then the natural comparison map $H^\bullet_{\mathrm{cb}}(G;\mathbb{R}) \to H^\bullet_{\c}(G;\mathbb{R})$ from the continuous bounded cohomology to the continuous cohomology of $G$ is an isomorphism in all degrees.
\end{conjecture*}
Surjectivity of the comparison map was already studied by Dupont \cite{Dupont/Bounds-for-characteristic-numbers-of-flat-bundles} and has since been established in many cases \cite{Dupont/Bounds-for-characteristic-numbers-of-flat-bundles,Thurston/Three-dimensional-geometry-and-topology.-Vol.-1,Gromov/Volume-and-bounded-cohomology,Goncharov/Geometry-of-configurations-polylogarithms-and-motivic-cohomology,Bucher-Karlsson/Finiteness-properties-of-characteristic-classes-of-flat-bundles,Lafont/Simplicial-volume-of-closed-locally-symmetric-spaces-of-non-compact-type,Bucher-Karlsson/Simplicial-volume-of-locally-symmetric-spaces-covered-by-rm-SLsb-3Bbb-R/rm-SO3,Hartnick/Surjectivity-of-the-comparison-map-in-bounded-cohomology-for-Hermitian-Lie-groups}, while still almost nothing is known about injectivity. In fact, injectivity of the comparison map has so far only been proved in degree $2$ by Burger and Monod \cite{Burger/Bounded-cohomology-of-lattices-in-higher-rank-Lie-groups}, in degree $3$ for certain groups of rank $1$ by Burger and Monod \cite{Burger/On-and-around-the-bounded-cohomology-of-SL2}, Bloch \cite{Bloch/Higher-regulators-algebraic-K-theory-and-zeta-functions-of-elliptic-curves} and Pieters \cite{Pieters/Continuous-cohomology-of-the-isometry-group-of-hyperbolic-space-realizable-on-the-boundary}, and in degree $4$ for $\mathrm{SL}_{2}(\mathbb{R})$ by Hartnick and the author \cite{Hartnick/Bounded-cohomology-via-partial-differential-equations-I}. The conjecture predicts that the bounded cohomological dimension of a connected semisimple Lie group $G$ with finite center equals the dimension of the symmetric space associated to $G$, and is hence positive and finite. In particular, we expect that $H^{n}_{\mathrm{cb}}(G;\mathbb{R}) = 0$ whenever the degree $n$ is sufficiently large in the sense that it exceeds the dimension of the symmetric space of $G$.
\medskip
The present article is devoted to the examination of this sort of conjectural vanishing of continuous bounded cohomology in large degree. We will always assume that $G$ is a connected real Lie group that is locally isomorphic to $\mathrm{PSL}_{2}(\mathbb{R})$. Note that in this case, Monod's conjecture predicts that $H^{n}_{\mathrm{cb}}(G;\mathbb{R}) = 0$ for all $n>2$. Theorem~\ref{thm:VanishingForStronglyReducibleClasses} below shows that the conjecture holds for all classes in degree $n>2$ that are strongly reducible in the following sense. A bounded cohomology class $\alpha \in H^{n}_{\mathrm{cb}}(G;\mathbb{R})$ is called \emph{strongly reducible} if it admits a product decomposition
\[
\alpha = \alpha^{\prime} \smallsmile \alpha^{\prime\prime}
\]
with factors $\alpha^{\prime} \in H^{2}_{\mathrm{cb}}(G;\mathbb{R})$ and $\alpha^{\prime\prime} \in H^{n-2}_{\mathrm{cb}}(G;\mathbb{R})$. Here we denote by $\smallsmile$ the natural cup product on the continuous bounded cohomology of $G$ (see Section \ref{subsec:ContinuousBoundedCohomology}). We are going to prove the following vanishing theorem for strongly reducible classes.
\begin{theorem} \label{thm:VanishingForStronglyReducibleClasses}
Let $G$ be a connected real Lie group that is locally isomorphic to $\mathrm{PSL}_{2}(\mathbb{R})$, and consider a class $\alpha \in H^{n}_{\mathrm{cb}}(G;\mathbb{R})$ of degree $n>2$ in the continuous bounded cohomology of $G$ with trivial real coefficients. If $\alpha$ is strongly reducible, then $\alpha = 0$.
\end{theorem}
Thinking of $G$ as the Hermitian Lie group $\mathrm{PU}(1,1)$, we may also regard Theorem \ref{thm:VanishingForStronglyReducibleClasses} from the following different perspective. Recall that by a result of Burger and Monod, in this case the second continuous bounded cohomology $H^{2}_{\mathrm{cb}}(G;\mathbb{R})$ is generated by the bounded K\"ahler class $\kappa$ (see Section \ref{subsec:ContinuousBoundedCohomology}). We then consider the \emph{bounded Lefschetz map}
\begin{equation} \label{map:BoundedLefschetzMap} \tag{1}
\map{L_{\kappa}^{\bullet}}{H^{\bullet}_{\mathrm{cb}}(G;\mathbb{R})}{H^{\bullet+2}_{\mathrm{cb}}(G;\mathbb{R})}
\end{equation}
defined by $L_{\kappa}(\alpha) = \kappa \smallsmile \alpha$.
\begin{corollary*}
The bounded Lefschetz map in \eqref{map:BoundedLefschetzMap} is zero in all positive degrees.
\end{corollary*}
Returning to Theorem \ref{thm:VanishingForStronglyReducibleClasses}, we note that in small degrees $n=3,4$, much stronger vanishing theorems apply: Burger and Monod \cite{Burger/On-and-around-the-bounded-cohomology-of-SL2} proved that $H^{3}_{\mathrm{cb}}(G;\mathbb{R}) = 0$, while Hartnick and the author \cite{Hartnick/Bounded-cohomology-via-partial-differential-equations-I} showed that $H^{4}_{\mathrm{cb}}(G;\mathbb{R}) = 0$. In large degree, on the other hand, our Theorem \ref{thm:VanishingForStronglyReducibleClasses} establishes the first non-trivial vanishing result for classes in $H^{n}_{\mathrm{cb}}(G;\mathbb{R})$ in arbitrary degree $n>4$. The proofs of all these vanishing results crucially rely on the boundary resolution for continuous bounded cohomology due to Ivanov \cite{Ivanov/Foundations-of-the-theory-of-bounded-cohomology} and Burger and Monod \cite{Burger/Bounded-cohomology-of-lattices-in-higher-rank-Lie-groups}. In fact, in this particular resolution all cocycles vanish in degree $n=3$. In degree $n>3$, this is no longer the case and one faces the problem of constructing bounded primitives. This was accomplished in degree $n=4$ by Hartnick and the author \cite{Hartnick/Bounded-cohomology-via-partial-differential-equations-I} by means of a new technique that employs differential equations in order to explicitly construct bounded primitives; the arguments, however, crucially rely on the assumption that $n$ be sufficiently small.
\medskip
In this article, we will take the ideas from \cite{Hartnick/Bounded-cohomology-via-partial-differential-equations-I} further and develop an algebro-analytic framework that allows to overcome any upper bounds on the degree in constructing bounded primitives by means of differential equations. At the heart of this approach lies the \emph{transgression map}
\begin{equation} \label{eqn:IntroductionTransgressionMap} \tag{2}
\map{\Lambda^{n}}{H^{n-2}(\mathcal{A}^{\infty})}{H^{n}_{\mathrm{cb}}(G;\mathbb{R})} \quad\quad (n>2)
\end{equation}
from the shifted cohomology of a certain cochain complex $\mathcal{A}^{\infty}$ to the continuous bounded cohomology of $G$ (see Section \ref{subsec:TransgressionMap}). Its construction is the main theme of this work. Notice that the transgression map is defined in every degree $n>2$. Theorem~\ref{thm:VanishingForStronglyReducibleClasses} is then a consequence of the next theorem, which clarifies how transgression gives rise to the vanishing of strongly reducible bounded cohomology classes.
\begin{theorem} \label{thm:ImageOfTransgressionMap}
For every $n>2$, the transgression map in \eqref{eqn:IntroductionTransgressionMap} has the following properties.
\begin{enumerate}[leftmargin=1cm,topsep=0.5ex,itemsep=0.5ex]
\item The cochain complex $\mathcal{A}^{\infty}$ is acyclic, and hence all elements in the image of $\Lambda^{n}$ necessarily vanish.
\item Strongly reducible classes in $H^{n}_{\mathrm{cb}}(G;\mathbb{R})$ are contained in the image of $\Lambda^{n}$.
\end{enumerate}
\end{theorem}
We will refer to elements in the image of the transgression map as \emph{transgressive} classes. The main ingredient of our proof of Theorem \ref{thm:ImageOfTransgressionMap} is then a cohomological characterization of transgressive classes, see Proposition \ref{prop:CriterionForTransgressive} in Section \ref{subsec:TransgressiveClasses}. Let us note in passing that in view of Monod's conjecture, it appears natural to speculate that the transgression map $\Lambda^{n}$ in \eqref{eqn:IntroductionTransgressionMap} is in fact surjective for every $n>2$.
\medskip
A particular feature of our approach is that it yields explicit formulas for primitives of bounded cocycles. To make this precise, let us assume that $G = \mathrm{PU}(1,1)$ and recall that for all $n\ge0$ the boundary model of Burger and Monod gives rise to an isomorphism
\[
H^{n}_{\mathrm{cb}}(G;\mathbb{R}) \cong H^{n}( L^{\infty}(\mathbb{T}^{\bullet+1},\mathbb{R})^{G},\de^{\bullet} )
\]
between the continuous bounded cohomology of $G$ and the cohomology of the homogeneous cochain complex
\[
\begin{tikzcd}[cramped]
0 \arrow[r,rightarrow]
& L^{\infty}(\mathbb{T}^{1},\mathbb{R})^{G} \arrow[r,"\de^{0}",rightarrow]
& L^{\infty}(\mathbb{T}^{2},\mathbb{R})^{G} \arrow[r,"\de^{1}",rightarrow]
& L^{\infty}(\mathbb{T}^{3},\mathbb{R})^{G} \arrow[r,"\de^{2}",rightarrow]
& \cdots
\end{tikzcd}
\]
of $G$-invariant bounded functions defined on the Furstenberg boundary of the Lie group $G$ (see Section \ref{subsec:BoundaryModel}). In this way, any class $\alpha \in H^{n}_{\mathrm{cb}}(G;\mathbb{R})$ is identified with the cohomology class $[c]$ of some $G$-invariant bounded cocycle $c \in L^{\infty}(\mathbb{T}^{n+1},\mathbb{R})^{G}$. We see that $\alpha$ vanishes if and only if the cocycle $c$ admits a $G$-invariant bounded primitive $p \in L^{\infty}(\mathbb{T}^{n},\mathbb{R})^{G}$ that satisfies the cohomological equation
\begin{equation} \label{eqn:IntroductionPrimitive} \tag{3}
\de p = c.
\end{equation}
Explicit solutions of this equation, as well as the equation itself, are often closely related with classical transcendental functions and their functional equations. For example, in degree $n=4$ there is an intricate connection with Euler's dilogarithm function and the Spence-Abel functional equation \cite{Bloch/Higher-regulators-algebraic-K-theory-and-zeta-functions-of-elliptic-curves,Burger/On-and-around-the-bounded-cohomology-of-SL2,HartnickOtt/Perturbations-of-the-Spence-Abel-equation-and-deformations-of-the-dilogarithm-function}.
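For readers who want to experiment with the cohomological equation, the homogeneous coboundary is just an alternating sum over omitted arguments; the following Python sketch (illustrative only, not part of the proofs) implements it on $n$-argument cochains and checks pointwise that $\de \circ \de = 0$:

```python
def coboundary(f, n):
    """Homogeneous coboundary of an n-argument cochain f:
    (delta f)(z_0, ..., z_n) = sum_i (-1)^i f(z_0, ..., omit z_i, ..., z_n)."""
    def df(*z):
        return sum((-1) ** i * f(*(z[:i] + z[i + 1:])) for i in range(n + 1))
    return df
```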
Our next result, which is Theorem \ref{thm:ExplicitFormulasForPrimitives} below, systematically constructs measurable solutions of \eqref{eqn:IntroductionPrimitive} in all degrees $n>2$ by means of certain explicit line integrals, and provides a sufficient criterion for their boundedness. We denote by $L^{0}(\mathbb{T}^{n},\mathbb{R})^{G}$ the space of $G$-invariant measurable functions (see Section \ref{subsec:BoundedMeasurableFunctions}), by $\mathrm{or} \in L^{\infty}(\mathbb{T}^{3},\mathbb{R})^{G}$ the orientation cocycle, and by $\cup$ the natural cup product for cochains on the boundary of $G$ (see Section \ref{subsec:ReducibleClasses}).
\begin{theorem} \label{thm:ExplicitFormulasForPrimitives}
There exists a linear operator
\[
\map{\P^{n}}{L^{\infty}(\mathbb{T}^{n+1},\mathbb{R})^{G} \supset \ker \de^{n}}{L^{0}(\mathbb{T}^{n},\mathbb{R})^{G}} \quad\quad (n>2)
\]
with the following properties.
\begin{enumerate}[leftmargin=1cm,topsep=0.5ex,itemsep=0.5ex]
\item The operator $\P^{n}$ is defined by Formula \eqref{eqn:DefinitionOfOperatorP} in Section \ref{subsec:TheOperatorP}.
\item For every $n>2$, and for every $G$-invariant bounded function $c \in L^{\infty}(\mathbb{T}^{n+1},\mathbb{R})^{G}$ satisfying the cocycle relation $\de c = 0$, the function $\P c \in L^{0}(\mathbb{T}^{n},\mathbb{R})^{G}$ is a $G$-invariant primitive for $c$ that solves the cohomological equation
\[
\de \P c = c.
\]
\item Assume in addition that the cocycle $c$ admits a product decomposition
\begin{equation} \label{eqn:ProductDecompositionForCocycle} \tag{4}
c = \mathrm{or} \cup c^{\prime}
\end{equation}
for some cocycle $c^{\prime} \in L^{\infty}(\mathbb{T}^{n-1},\mathbb{R})^{G}$, where $\mathrm{or} \in L^{\infty}(\mathbb{T}^{3},\mathbb{R})^{G}$ denotes the orientation cocycle (see Section \ref{subsec:ReducibleClasses}). Then the primitive $\P c$ is bounded.
\end{enumerate}
\end{theorem}
We remark that in those cases in which the cocycle $c$ does not admit a product decomposition as in \eqref{eqn:ProductDecompositionForCocycle}, it is presently not known whether the solution $p = \P c$ of \eqref{eqn:IntroductionPrimitive} provided by Theorem~\ref{thm:ExplicitFormulasForPrimitives} is bounded or not.
\medskip
The article is organized as follows. In Section \ref{sec:FunctionSpaces}, we fix some notation and terminology, and define several function spaces that will later be used when working with differential equations in the context of bounded cohomology. Section \ref{sec:Cohomology} collects basic facts about the continuous bounded cohomology of Lie groups, and studies the cohomological properties of the function spaces defined in the previous section. In Section \ref{sec:CauchyFrobeniusComplex}, we introduce the Cauchy-Frobenius differential complex. We construct solutions of the corresponding differential equations and study their boundedness properties. In Section \ref{sec:Transgression}, we combine the cohomological results from Section \ref{sec:Cohomology} with the analytic results from Section \ref{sec:CauchyFrobeniusComplex} in order to define the transgression map in \eqref{eqn:IntroductionTransgressionMap}. We investigate strongly reducible bounded cohomology classes and prove Theorem \ref{thm:VanishingForStronglyReducibleClasses} and Theorem \ref{thm:ImageOfTransgressionMap}. The final Section \ref{sec:ConstructionOfPrimitives} is concerned with the explicit construction of solutions for \eqref{eqn:IntroductionPrimitive}, leading to a proof of Theorem \ref{thm:ExplicitFormulasForPrimitives}.
\bigskip
\noindent \textbf{Acknowledgements.} \!The author wishes to thank Y.\,Benoist, M.\,Burger, O.\,Forster, T.\,Hartnick, G.\,Kings, R.\,Mazzeo, M.\,Puschnigg, J.\,Schmidt, J.\,Swoboda, R.\,Weissauer, A.\,Wienhard, F.\,Ziltener, and M.\,Zworski for helpful discussions and suggestions, as well as I.\,Agol and the UC Berkeley Mathematics Department for their hospitality and excellent working conditions. He was supported by the European Research Council under ERC-Consolidator Grant 614733 ``Deformation Spaces of Geometric Structures'', and by the Priority Program 2026 ``Geometry at Infinity'' of the German Research Foundation under DFG grant 340014991. The author further acknowledges support from U.S.\,National Science Foundation grants DMS 1107452, 1107263, 1107367 ``RNMS:\,GEometric structures And Representation varieties'' (the GEAR Network).
\section{Function spaces}
\label{sec:FunctionSpaces}
\subsection{The Lie group $G$}
\label{subsec:LieGroupG}
Let us fix the Lie group $G \mathrel{\mathop:}= \mathrm{PU}(1,1)$. Elements of this Lie group are represented by matrices of the form
\[
g_{a,b} \mathrel{\mathop:}= \begin{pmatrix} a & b \\ \overline{b} & \overline{a} \end{pmatrix},
\]
with complex numbers $a,b \in \mathbb{C}$ satisfying $|a|^{2}-|b|^{2} = 1$. Denote by $[g_{a,b}]$ the equivalence class of the matrix $g_{a,b}$ in $G$. In particular, for $t \in \mathbb{R}$ we fix the notation
\[
k_{t} \mathrel{\mathop:}= \big[ g_{e^{i t/2}, \, 0} \big], \quad a_{t} \mathrel{\mathop:}= \big[ g_{\cosh(-t/2), \, \sinh(-t/2)} \big], \quad n_{t} \mathrel{\mathop:}= \big[ g_{1+\frac{1}{2}i t, \, -\frac{1}{2}i t} \big].
\]
Note that these elements are elliptic, hyperbolic and parabolic, respectively. They give rise to Lie subgroups
\[
K \mathrel{\mathop:}= \{ k_{t} \,|\, t \in \mathbb{R} \}, \quad A \mathrel{\mathop:}= \{ a_{t} \,|\, t \in \mathbb{R} \}, \quad N \mathrel{\mathop:}= \{ n_{t} \,|\, t \in \mathbb{R} \},
\]
which are $1$-parameter subgroups in the sense that the maps $t \mapsto k_{t}$, $t \mapsto a_{t}$ and $t \mapsto n_{t}$ are smooth homomorphisms $\mathbb{R} \to G$. The group $K$ is a maximal compact subgroup of $G$. It is isomorphic with the unit circle $S^{1}$ via the identification $k_{t} \mapsto e^{i\,t}$. For later reference, we note that $A$ normalizes $N$, and in particular, that there is a relation
\begin{equation} \label{eqn:ANormalizesN}
a_{s}.n_{t}.a_{s}^{-1} = n_{e^{-s} \cdot t}
\end{equation}
for $s,t \in \mathbb{R}$. The product $P \mathrel{\mathop:}= AN$ is a parabolic subgroup of $G$. Moreover, every elliptic, hyperbolic or parabolic element in $G$ is conjugate to an element in the subgroup $K$, $A$ or $N$, respectively. In this way, we obtain the Iwasawa decomposition $G = KAN$ and a Cartan decomposition $G = KAK$. Note that the Iwasawa decomposition is unique, while the Cartan decomposition is not. We will write a Cartan decomposition for any $g \in G$ in the form
\begin{equation} \label{eqn:CartanDecomposition}
g = k^{\prime} \, a_{t} \, k
\end{equation}
with elements $k, k^{\prime} \in K$ and $a_{t} \in A$ for some $t \in \mathbb{R}$. For more details see \cite[Ch.\,VI]{Knapp/Lie-groups-beyond-an-introduction} and \cite[Ch.\,V]{Sugiura/Unitary-representations-and-harmonic-analysis}.
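As a sanity check, the relation \eqref{eqn:ANormalizesN} can be verified numerically on the matrix representatives $g_{a,b}$; the following Python sketch (purely illustrative) multiplies out $a_{s}\,n_{t}\,a_{s}^{-1}$ and compares it with $n_{e^{-s}t}$:

```python
import math

def g(a, b):
    """Matrix representative g_{a,b} with rows (a, b) and (conj(b), conj(a))."""
    return ((a, b), (b.conjugate(), a.conjugate()))

def mul(m, n):
    """Product of two 2x2 complex matrices."""
    return tuple(tuple(sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

def a_mat(t):
    return g(complex(math.cosh(-t / 2)), complex(math.sinh(-t / 2)))

def n_mat(t):
    return g(1 + 0.5j * t, -0.5j * t)

def a_inv(t):
    # a_t has real entries and determinant 1, so its inverse is explicit
    return g(complex(math.cosh(-t / 2)), complex(-math.sinh(-t / 2)))

s, t = 0.7, 1.3
lhs = mul(mul(a_mat(s), n_mat(t)), a_inv(s))
rhs = n_mat(math.exp(-s) * t)
```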
\subsection{Boundary action}
\label{subsec:BoundaryAction}
The group $G$ acts smoothly on the closed unit disk $\overline{\mathbb{D}} = \{ z \in \mathbb{C} \,|\, |z| \le 1 \}$ by fractional linear transformations. This action restricts to a smooth $G$-action $G \times S^{1} \to S^{1}$ on the unit circle $S^{1} \subset \mathbb{C}$, denoted by $(g,z) \mapsto g.z$. Thinking of $S^{1}$ as the Furstenberg boundary of $G$, we will refer to this action as the \emph{boundary action} of $G$. Recall that the boundary action is strictly $3$-transitive \cite[Thm.\,11.1]{Kerby/On-infinite-sharply-multiply-transitive-groups} and amenable \cite[Prop.\,4.3.2]{Zimmer/Ergodic-theory-and-semisimple-groups}. The induced action of the maximal compact subgroup $K$ is by counter-clockwise rotation, given by $k_{t}.z = e^{i t} \cdot z$, while the actions of the subgroups $A$ and $N$ have fixed point sets $\{\pm 1\}$ and $\{1\}$, respectively.
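These rotation and fixed-point properties are straightforward to confirm numerically from the fractional linear action $(a z + b)/(\overline{b} z + \overline{a})$; a small Python check (illustrative only):

```python
import cmath
import math

def act(a, b, z):
    """Boundary action of [g_{a,b}] on z in S^1 by fractional linear maps."""
    return (a * z + b) / (b.conjugate() * z + a.conjugate())

t = 0.9
z = cmath.exp(0.3j)
rotated = act(cmath.exp(1j * t / 2), 0, z)               # k_t acts by rotation e^{it}
c, h = complex(math.cosh(-t / 2)), complex(math.sinh(-t / 2))
fixed_plus, fixed_minus = act(c, h, 1), act(c, h, -1)    # a_t fixes +1 and -1
fixed_one = act(1 + 0.5j * t, -0.5j * t, 1)              # n_t fixes 1
```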
Consider the $n$-torus $\mathbb{T}^{n} \mathrel{\mathop:}= (S^{1})^{n}$ for $n \ge 1$. Its points are denoted by $\mathbf{z} = (z_{0},\ldots,z_{n-1})$ with $z_{j} \in S^{1}$ for all $0 \le j \le n-1$. Let us further denote by $\mathbb{T}^{(n)} \subset \mathbb{T}^{n}$ the \emph{configuration space} of $n$ pairwise distinct points on $S^{1}$, that is, the open subset of $\mathbb{T}^{n}$ consisting of all points $\mathbf{z} = (z_{0},\ldots,z_{n-1})$ satisfying $z_{i} \neq z_{j}$ for all $0 \le i < j \le n-1$. We further introduce the open subset $\mathring{\mathbb{T}}^{(n)} \subset \mathbb{T}^{(n)}$ of all configurations $\mathbf{z} = (z_{0},\ldots,z_{n-1})$ with the additional property that $z_{j} \neq 1$ for all $0 \le j \le n-1$.
The boundary action of $G$ gives rise to a smooth diagonal action of $G$ on the torus $\mathbb{T}^{n}$. We will always consider $\mathbb{T}^{n}$ as a $G$-space in this sense, denoting the action on points by $g.\mathbf{z} = (g.z_{0},\ldots,g.z_{n-1})$. Observe that the $G$-action on $\mathbb{T}^{n}$ restricts to a $G$-action on the configuration space $\mathbb{T}^{(n)}$. Since the boundary action of $G$ is strictly $3$-transitive, it follows that $G$ acts freely on $\mathbb{T}^{(n)}$ as long as $n \ge 3$. Observe moreover that, since $1$ is a fixed point for the action of $P = AN$, the $P$-action on $\mathbb{T}^{(n)}$ preserves the subset $\mathring{\mathbb{T}}^{(n)}$.
Let us write $\mu_{K}$ for the unique $K$-invariant probability measure on the unit circle $S^{1}$. It induces a $K$-invariant probability measure $\mu_{K}^{\otimes n}$ on the torus $\mathbb{T}^{n} = (S^{1})^{n}$. We fix this measure on $\mathbb{T}^{n}$, and will not usually indicate it in the notation. Notice that both the configuration space $\mathbb{T}^{(n)}$ and its open subset $\mathring{\mathbb{T}}^{(n)}$ are subspaces of full measure in $\mathbb{T}^{n}$.
\subsection{Coefficient modules}
\label{subsec:CoefficientModules}
Fix an integer $\mu \in \mathbb{Z}$. We will write $\mathbb{C}_{\mu}$ for the $K$-module $K \times \mathbb{C} \to \mathbb{C}$ defined by the standard linear action with weight $\mu$ of the maximal compact subgroup $K \cong S^{1}$ on $\mathbb{C}$ given by
\[
k_{t}.z \mathrel{\mathop:}= e^{i \mu t} \cdot z
\]
for $t \in \mathbb{R}$ and $z \in \mathbb{C}$. Note that $\mathbb{C}_{0} = \mathbb{C}$ is a trivial $K$-module; we further regard $\mathbb{C}_{0}$ and its subspace $\mathbb{R}$ as trivial $G$-modules.
\subsection{Bounded measurable functions}
\label{subsec:BoundedMeasurableFunctions}
We denote by $\mathscr{L}^{0}(\mathbb{T}^{n},\C)$ the space of complex measurable functions on $\mathbb{T}^{n}$ and by $\mathscr{L}^{\infty}(\mathbb{T}^{n},\C) \subset \mathscr{L}^{0}(\mathbb{T}^{n},\C)$ the subspace of bounded functions. Throughout this article, we will adhere to the convention from \cite{Monod/Equivariant-measurable-liftings} that $\mathscr{L}^{\infty}(\mathbb{T}^{n},\C)$ consists of actual bounded functions, excluding all essentially bounded functions that are not bounded.
For $p\in\{0,\infty\}$, the quotient of the space $\mathscr{L}^{p}(\mathbb{T}^{n},\C)$ defined by identifying functions that take the same values almost everywhere in $\mathbb{T}^{n}$ is denoted by $L^{p}(\mathbb{T}^{n},\C)$. We remind the reader that elements of this space are function classes rather than actual functions, and identities for such function classes correspond to identities for the representing functions that hold pointwise only on the complement of a subset of measure zero. Throughout we will follow the standard convention not to distinguish between functions and function classes in the notation.
We denote by $\mathscr{L}^{p}(\mathbb{T}^{n},\mathbb{C})^{G}$ and $L^{p}(\mathbb{T}^{n},\mathbb{C})^{G}$ the corresponding subspaces of $G$-invariant functions. Recall that functions $f$ contained in the former space satisfy $f(g.\mathbf{z}) = f(\mathbf{z})$ for all $\mathbf{z} \in \mathbb{T}^{n}$ and $g \in G$, while functions $f$ in the latter space satisfy this identity only for almost every $\mathbf{z} \in \mathbb{T}^{n}$, for each $g \in G$.
For later reference, we observe that the canonical projection $\mathscr{L}^{\infty}(\mathbb{T}^{n},\C) \to L^{\infty}(\mathbb{T}^{n},\C)$ is $G$-equivariant and hence gives rise to a canonical map
\[
\mathscr{L}^{\infty}(\mathbb{T}^{n},\C)^{G} \to L^{\infty}(\mathbb{T}^{n},\C)^{G}.
\]
The properties of this map will be further discussed in Section \ref{subsec:EquivariantMeasurableLiftings}.
\subsection{Orbitwise smooth functions}
\label{subsec:OrbitwiseSmoothFunctions}
We recall from \cite{Hartnick/Bounded-cohomology-via-partial-differential-equations-I} the following definition.
\begin{definition}
Let $H$ be any Lie subgroup of $G$. A measurable function $f \in \mathscr{L}^{0}(\mathbb{T}^{n},\C)$ is called \emph{$H$-orbitwise smooth} if for every point $\mathbf{z} \in \mathbb{T}^{n}$ the map
\begin{equation} \label{map:DefinitionSmoothness}
H \to \C, \quad h \mapsto f(h.\mathbf{z})
\end{equation}
is smooth.
\end{definition}
The space of complex $H$-orbitwise smooth measurable functions is denoted by $\mathscr{S}_{H}(\mathbb{T}^{n},\C)$. We will henceforth apply this concept in the cases where the subgroup $H$ is either the group $G$ itself or the parabolic subgroup $P = AN$. Notice that there is a natural inclusion $\mathscr{S}_{G}(\mathbb{T}^{n},\C) \subset \mathscr{S}_{P}(\mathbb{T}^{n},\C)$.
We denote by $\L_{K}$, $\L_{A}$ and $\L_{N}$ the fundamental vector fields for the action of the $1$-parameter subgroups $K$, $A$ and $N$ on $\mathbb{T}^{n}$, given pointwise by $\L_{K}(\mathbf{z}) = \left. \dd{}{t} \right|_{t=0} k_{t}.\mathbf{z}$, and likewise for $\L_{A}$ and $\L_{N}$. In order to obtain a more concrete description of these vector fields, we think of the unit circle $S^{1}$ as the quotient $S^{1} = \mathbb{R}/2\pi\mathbb{Z}$, covered by the real line $\mathbb{R}$. We choose a coordinate $\th \in \mathbb{R}$ defined by the exponential mapping $z = e^{i \th}$ for $z\in S^{1}$. This coordinate will be called the \emph{angular coordinate} for $S^{1}$. Note that it is unique only up to multiples of $2\pi$. In this way, the torus $\mathbb{T}^{n}$ is endowed with angular coordinates $\bm{\uptheta} = (\th_{0},\ldots,\th_{n-1}) \in \mathbb{R}^{n}$, where each coordinate $\th_{j} \in \mathbb{R}$ is unique up to multiples of $2\pi$. We may therefore consider functions on $\mathbb{T}^{n}$ as functions on $\mathbb{R}^{n}$ that are $2\pi$-periodic in each variable $\th_{j}$.
The boundary action of $G$ induces a smooth $G$-action on the angular coordinate $\th$, defined by the relation $g.z = e^{i (g.\th)}$. Note in particular that $K$ acts on $\th$ by translation, and that the $K$-invariant measure $\mu_{K}$ on $S^{1}$ corresponds to the usual Lebesgue measure on $\mathbb{R}$, normalized by a factor of $1/2\pi$. We obtain a corresponding diagonal action of $G$ on angular coordinates for $\mathbb{T}^{n}$, denoted by $g.\bm{\uptheta} = (g.\th_{0},\ldots,g.\th_{n-1})$. A short calculation (cf.~\cite[Sec.\,3.2]{Hartnick/Bounded-cohomology-via-partial-differential-equations-I}) shows that in angular coordinates the vector fields $\L_{K}$, $\L_{A}$ and $\L_{N}$ are given by
\begin{equation} \label{eqn:FundamentalVectorFields}
\L_{K} = \sum_{j=0}^{n-1} \pd{}{\th_{j}}, \quad \L_{A} = \sum_{j=0}^{n-1} \sin(\th_{j}) \pd{}{\th_{j}}, \quad \L_{N} = \sum_{j=0}^{n-1} \bigl( 1-\cos(\th_{j}) \bigr) \pd{}{\th_{j}}
\end{equation}
and satisfy the commutator relations
\begin{equation} \label{eqn:CommutatorRelationsKAN}
\left[ \L_{K},\L_{A} \right] = \L_{K}-\L_{N}, \quad \left[ \L_{K},\L_{N} \right] = \L_{A}, \quad \left[ \L_{A},\L_{N} \right] = \L_{N}.
\end{equation}
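Since all three vector fields in \eqref{eqn:FundamentalVectorFields} act diagonally, with each summand involving only the single variable $\th_{j}$, the relations \eqref{eqn:CommutatorRelationsKAN} can be checked coordinatewise. For instance,
\[
\left[ \L_{K},\L_{A} \right]
= \sum_{j=0}^{n-1} \left[ \pd{}{\th_{j}}, \sin(\th_{j}) \pd{}{\th_{j}} \right]
= \sum_{j=0}^{n-1} \cos(\th_{j}) \pd{}{\th_{j}}
= \sum_{j=0}^{n-1} \bigl( 1 - (1-\cos(\th_{j})) \bigr) \pd{}{\th_{j}}
= \L_{K}-\L_{N},
\]
where the mixed terms with distinct indices vanish; the remaining two relations follow in the same way.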
The fundamental vector fields give rise to first order linear partial differential operators
\[
\map{\L_{K},\L_{A},\L_{N}}{\mathscr{S}_{G}(\mathbb{T}^{n},\C)}{\mathscr{S}_{G}(\mathbb{T}^{n},\C)}
\]
and
\[
\map{\L_{A},\L_{N}}{\mathscr{S}_{P}(\mathbb{T}^{n},\C)}{\mathscr{S}_{P}(\mathbb{T}^{n},\C)}
\]
acting on orbitwise smooth functions. For example, the action of the operator $\L_{K}$ is given by $(\L_{K} f)(\mathbf{z}) = \left. \dd{}{t} \right|_{t=0} f(k_{t}.\mathbf{z})$, and likewise for $\L_{A}$ and $\L_{N}$.
For later reference, we record the following useful formula. Consider $f \in \mathscr{S}_{P}(\mathbb{T}^{n},\C)$ and let $\mathbf{z} \in \mathbb{T}^{n}$. Since $t \mapsto a_{t}$ is a $1$-parameter group, we have
\[
(\L_{A} f)(a_{t}.\mathbf{z}) = \dd{}{t} f(a_{t}.\mathbf{z}).
\]
Integrating this identity then yields
\begin{equation} \label{eqn:IntegrateAlongA}
f(a_{T}.\mathbf{z}) = f(\mathbf{z}) + \int_{0}^{T} (\L_{A} f)(a_{t}.\mathbf{z}) \, \mathrm{d} t
\end{equation}
for every $T \in \mathbb{R}$.
For any positive integer $\ell > 0$, and for any collection of indices $(i_{1},\ldots,i_{\ell}) \in \{K,A,N\}^{\ell}$ we define the $\ell$-th order linear partial differential operators
\[
\L_{i_{1},\ldots,i_{\ell}} \mathrel{\mathop:}= \L_{i_{1}} \circ \cdots \circ \L_{i_{\ell}}.
\]
Here we think of the letters $K$, $A$ and $N$ as formal indices.
\begin{definition}
A $G$-orbitwise smooth function $f \in \mathscr{S}_{G}(\mathbb{T}^{n},\C)$ is said to have \emph{bounded $G$-derivatives} if all of its directional derivatives $\L_{i_{1},\ldots,i_{\ell}} f \in \mathscr{S}_{G}(\mathbb{T}^{n},\C)$ are bounded for all $\ell > 0$ and all $(i_{1},\ldots,i_{\ell}) \in \{K,A,N\}^{\ell}$.
Likewise, a $P$-orbitwise smooth function $f \in \mathscr{S}_{P}(\mathbb{T}^{n},\C)$ is said to have \emph{bounded $P$-derivatives} if its directional derivatives $\L_{i_{1},\ldots,i_{\ell}} f \in \mathscr{S}_{P}(\mathbb{T}^{n},\C)$ are bounded for all $\ell > 0$ and all $(i_{1},\ldots,i_{\ell}) \in \{A,N\}^{\ell}$.
\end{definition}
We denote by $\mathscr{S}^{\b}_{G}(\mathbb{T}^{n},\C)$ the space of all $G$-orbitwise smooth functions with bounded $G$-derivatives, and by $\mathscr{S}^{\b}_{P}(\mathbb{T}^{n},\C)$ the space of all $P$-orbitwise smooth functions with bounded $P$-derivatives. Notice the canonical inclusion $\mathscr{S}^{\b}_{G}(\mathbb{T}^{n},\C) \subset \mathscr{S}^{\b}_{P}(\mathbb{T}^{n},\C)$.
\subsection{$K$-equivariant functions}
\label{subsec:KEquivariantFunctions}
Recall from Section \ref{subsec:CoefficientModules} the definition of the coefficient module $\mathbb{C}_{\mu}$ for $\mu \in \mathbb{Z}$. We denote by $\mathscr{L}^{0}(\mathbb{T}^{n},\mathbb{C}_{\mu})^{K}$ the space of \emph{$K$-equivariant} measurable functions with values in the $K$-module $\mathbb{C}_{\mu}$, i.e., functions $f$ satisfying
\begin{equation} \label{eqn:KEquivariance}
f(k_{t}.\mathbf{z}) = e^{i \mu t} \cdot f(\mathbf{z})
\end{equation}
for all $\mathbf{z} \in \mathbb{T}^{n}$ and $t\in\mathbb{R}$. Note that in the case $\mu=0$, such functions are precisely the $K$-invariant functions. We moreover denote by $\mathscr{S}_{G}(\mathbb{T}^{n},\mathbb{C}_{\mu})^{K} \subset \mathscr{L}^{0}(\mathbb{T}^{n},\mathbb{C}_{\mu})^{K}$ the subspace of $G$-orbitwise smooth functions, and by $\mathscr{S}^{\b}_{G}(\mathbb{T}^{n},\mathbb{C}_{\mu})^{K} \subset \mathscr{S}_{G}(\mathbb{T}^{n},\mathbb{C}_{\mu})^{K}$ the subspace of functions with bounded $G$-derivatives. For later reference, we provide the following infinitesimal characterization of $K$-equivariance.
\begin{lemma} \label{lemma:InvarianceAndEquivarianceUnderK}
Fix $\mu \in \mathbb{Z}$. A $G$-orbitwise smooth function $f \in \mathscr{S}_{G}(\mathbb{T}^{n},\mathbb{C}_{\mu})$ is $K$-equivariant if and only if it satisfies the differential equation
\begin{equation} \label{eqn:InfinitesimalKEquivariance}
\L_{K} f - i \mu \cdot f = 0.
\end{equation}
\end{lemma}
\begin{proof}
Let $\mu \in \mathbb{Z}$ and $f \in \mathscr{S}_{G}(\mathbb{T}^{n},\mathbb{C}_{\mu})$. Assume first that $f$ is $K$-equivariant. It follows from \eqref{eqn:KEquivariance} that
\[
(\L_{K} f) (\mathbf{z}) = \left. \dd{}{t} \right|_{t=0} f(k_{t}.\mathbf{z}) = \left. \dd{}{t} \right|_{t=0} e^{i \mu t} \cdot f(\mathbf{z}) = i \mu \cdot f(\mathbf{z})
\]
for all $\mathbf{z} \in \mathbb{T}^{n}$ and $t \in \mathbb{R}$. For the converse, assume that $f$ satisfies \eqref{eqn:InfinitesimalKEquivariance}. We may then define an orbitwise smooth function $\tilde{f} \in \mathscr{S}_{G}(\mathbb{T}^{n},\mathbb{C})$ by the relation
\[
f(z_{0},\ldots,z_{n-1}) = z_{0}^{\mu} \cdot \tilde{f}(z_{0},\ldots,z_{n-1})
\]
for all $(z_{0},\ldots,z_{n-1}) \in \mathbb{T}^{n}$. Then
\[
0 = (\L_{K} f) (z_{0},\ldots,z_{n-1}) - i \mu \cdot f(z_{0},\ldots,z_{n-1}) = z_{0}^{\mu} \cdot (\L_{K} \tilde{f}) (z_{0},\ldots,z_{n-1}).
\]
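Here the last equality uses the product rule together with the fact that $k_{t}.z_{0} = e^{it} \cdot z_{0}$, namely
\[
(\L_{K} f)(z_{0},\ldots,z_{n-1}) = \left. \dd{}{t} \right|_{t=0} \Bigl( (e^{it} z_{0})^{\mu} \cdot \tilde{f}(k_{t}.z_{0},\ldots,k_{t}.z_{n-1}) \Bigr) = i \mu \cdot f(z_{0},\ldots,z_{n-1}) + z_{0}^{\mu} \cdot (\L_{K} \tilde{f})(z_{0},\ldots,z_{n-1}).
\]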
Hence $\L_{K} \tilde{f} = 0$. Since $\tilde{f}$ is $G$-orbitwise smooth and $K$-orbits are connected, this implies that $\tilde{f}$ is $K$-invariant. Thus
\[
f(k_{t}.z_{0},\ldots,k_{t}.z_{n-1}) = e^{i \mu t} \cdot z_{0}^{\mu} \cdot \tilde{f}(z_{0},\ldots,z_{n-1}) = e^{i \mu t} \cdot f(z_{0},\ldots,z_{n-1})
\]
for all $(z_{0},\ldots,z_{n-1}) \in \mathbb{T}^{n}$ and $t\in\mathbb{R}$.
\end{proof}
To simplify notation, let us write
\[
\mathscr{A}(\mathbb{T}^{n},\mathbb{C}_{\mu}) \mathrel{\mathop:}= \mathscr{S}^{\b}_{G}(\mathbb{T}^{n},\mathbb{C}_{\mu})^{K}
\]
for the space of $G$-orbitwise smooth $K$-equivariant $\mathbb{C}_{\mu}$-valued functions with bounded derivatives, and denote by $\mathscr{A}^{\infty}(\mathbb{T}^{n},\mathbb{C}_{\mu}) \subset \mathscr{A}(\mathbb{T}^{n},\mathbb{C}_{\mu})$ the subspace of bounded functions. We further denote by $A(\mathbb{T}^{n},\mathbb{C}_{\mu})$ and $A^{\infty}(\mathbb{T}^{n},\mathbb{C}_{\mu})$ the quotients of the spaces $\mathscr{A}(\mathbb{T}^{n},\mathbb{C}_{\mu})$ and $\mathscr{A}^{\infty}(\mathbb{T}^{n},\mathbb{C}_{\mu})$ defined by identifying functions that take the same values pointwise on the configuration space $\mathbb{T}^{(n)} \subset \mathbb{T}^{n}$. This means that representatives of a function class in $A(\mathbb{T}^{n},\mathbb{C}_{\mu})$ may differ only on the complement $\mathbb{T}^{n} \setminus \mathbb{T}^{(n)}$ of the configuration space. In particular, identities for such function classes correspond to identities for the representing functions that hold pointwise on $\mathbb{T}^{(n)}$. Notice that this definition makes sense since the subset $\mathbb{T}^{(n)} \subset \mathbb{T}^{n}$ is invariant under the action of $G$.
We denote by $A(\mathbb{T}^{n},\mathbb{R})^{P}$ and $A^{\infty}(\mathbb{T}^{n},\mathbb{R})^{P}$ the spaces of $P$-invariants in the spaces $A(\mathbb{T}^{n},\mathbb{R})$ and $A^{\infty}(\mathbb{T}^{n},\mathbb{R})$. Function classes in these spaces are represented by functions $f$ contained in $\mathscr{A}(\mathbb{T}^{n},\mathbb{R})$ and $\mathscr{A}^{\infty}(\mathbb{T}^{n},\mathbb{R})$, respectively, that are $P$-invariant on the configuration space $\mathbb{T}^{(n)}$, i.e., they satisfy
\begin{equation} \label{eqn:GInvarianceInA}
f(p.\mathbf{z}) = f(\mathbf{z})
\end{equation}
for all $\mathbf{z} \in \mathbb{T}^{(n)}$ and $p \in P$. Notice that this identity is not required to hold for points $\mathbf{z} \in \mathbb{T}^{n} \setminus \mathbb{T}^{(n)}$ in the complement of $\mathbb{T}^{(n)}$. Moreover, since $G = KAN$ by the Iwasawa decomposition, it also follows that $f$ is $G$-invariant on $\mathbb{T}^{(n)}$, i.e., it satisfies $f(g.\mathbf{z}) = f(\mathbf{z})$ for all $\mathbf{z} \in \mathbb{T}^{(n)}$ and $g \in G$.
\begin{definition}
Let $\mu \in \mathbb{Z}$. A function $f \in \mathscr{A}^{\infty}(\mathbb{T}^{n},\mathbb{C}_{\mu})$ is called \emph{tame} if its real part $\Re f$ satisfies
\[
\sup \left \{ \left\lvert \int_{0}^{T} (\Re f)(a_{t}.\mathbf{z}) \, \mathrm{d} t \right\rvert \,:\, T \in \mathbb{R}, \, \mathbf{z} \in \mathbb{T}^{n} \right\} < \infty.
\]
\end{definition}
We introduce the notation $\mathscr{A}^{\infty}_{\t}(\mathbb{T}^{n},\mathbb{C}_{\mu}) \subset \mathscr{A}^{\infty}(\mathbb{T}^{n},\mathbb{C}_{\mu})$ for the subspace of tame functions. The image of this space under the quotient map $\mathscr{A}^{\infty}(\mathbb{T}^{n},\mathbb{C}_{\mu}) \to A^{\infty}(\mathbb{T}^{n},\mathbb{C}_{\mu})$ will be denoted by $A^{\infty}_{\t}(\mathbb{T}^{n},\mathbb{C}_{\mu})$.
\section{Cohomology}
\label{sec:Cohomology}
\subsection{Continuous bounded cohomology}
\label{subsec:ContinuousBoundedCohomology}
We briefly review some basic facts about the continuous bounded cohomology of $G$. Let us denote for all $n\ge0$ by $C_{\b}(G^{n+1},\mathbb{R})$ the space of bounded continuous functions $G^{n+1} \to \mathbb{R}$, and let $C_{\b}(G^{n+1},\mathbb{R})^{G} \subset C_{\b}(G^{n+1},\mathbb{R})$ be the subspace of functions that are invariant under the diagonal action of $G$ on the product $G^{n+1}$. Then the \emph{continuous bounded cohomology} of $G$ with trivial real coefficients is defined as the cohomology of the cochain complex
\[
\begin{tikzcd}[cramped]
0 \arrow[r,rightarrow]
& C_{\b}(G,\mathbb{R})^{G} \arrow[r,"\mathfrak{d}^{0}",rightarrow]
& C_{\b}(G^{2},\mathbb{R})^{G} \arrow[r,"\mathfrak{d}^{1}",rightarrow]
& C_{\b}(G^{3},\mathbb{R})^{G} \arrow[r,"\mathfrak{d}^{2}",rightarrow]
& \cdots
\end{tikzcd}
\]
where $\map{\mathfrak{d}^{n}}{C_{\b}(G^{n+1},\mathbb{R})^{G}}{C_{\b}(G^{n+2},\mathbb{R})^{G}}$ denotes the homogeneous coboundary operator \cite{Burger/Bounded-cohomology-of-lattices-in-higher-rank-Lie-groups,Burger/Continuous-bounded-cohomology-and-applications-to-rigidity-theory,Monod/Continuous-bounded-cohomology-of-locally-compact-groups}, given by
\[
(\mathfrak{d}^{n} f)(g_{0},\ldots,g_{n+1}) \mathrel{\mathop:}= \sum_{j=0}^{n+1} (-1)^{j} \cdot f(g_{0},\ldots,\widehat{g_{j}},\ldots,g_{n+1}).
\]
The continuous bounded cohomology of $G$ is endowed with a natural ring structure determined by the \emph{cup product}
\[
\map{\smallsmile}{H_{\mathrm{cb}}^{n}(G;\mathbb{R}) \otimes H_{\mathrm{cb}}^{m}(G;\mathbb{R})}{H_{\mathrm{cb}}^{n+m}(G;\mathbb{R})},
\]
see for example \cite[Sec.\,1.8]{Burger/Continuous-bounded-cohomology-and-applications-to-rigidity-theory}. This cup product is induced by a corresponding cup product
\[
\map{\smallsmile}{C_{\b}(G^{n+1},\mathbb{R}) \otimes C_{\b}(G^{m+1},\mathbb{R})}{C_{\b}(G^{n+m+1},\mathbb{R})}
\]
on the level of cochains, which is defined by
\[
(c \smallsmile e)(g_{0},\ldots,g_{n+m}) \mathrel{\mathop:}= c(g_{0},\ldots,g_{n}) \cdot e(g_{n},g_{n+1},\ldots,g_{n+m})
\]
for any two cochains $c \in C_{\b}(G^{n+1},\mathbb{R})$ and $e \in C_{\b}(G^{m+1},\mathbb{R})$.
As was already mentioned in the introduction, Burger and Monod \cite[Thm.\,2.30]{BurgerIozzi/A-useful-formula-from-bounded-cohomology} proved that the comparison map is an isomorphism $H^{2}_{\mathrm{cb}}(G;\mathbb{R}) \cong H^{2}_{\c}(G;\mathbb{R})$ in degree $2$. Since in our case the Lie group $G$ is Hermitian, this amounts to an isomorphism $H^{2}_{\mathrm{cb}}(G;\mathbb{R}) \cong \mathbb{R}$ with an explicit generator given by the \emph{bounded K\"ahler class} $\kappa \in H^{2}_{\mathrm{cb}}(G;\mathbb{R})$. The bounded K\"ahler class is determined by a certain geometric bounded cocycle known as the \emph{Dupont cocycle} \cite[Sec.\,2.3]{BurgerIozzi/A-useful-formula-from-bounded-cohomology}.
For more background on the continuous bounded cohomology of locally compact groups we refer the reader to \cite{Monod/Continuous-bounded-cohomology-of-locally-compact-groups, Burger/Continuous-bounded-cohomology-and-applications-to-rigidity-theory, BurgerIozzi/A-useful-formula-from-bounded-cohomology}.
\subsection{The boundary model}
\label{subsec:BoundaryModel}
The approach taken in this article relies on the boundary model for the continuous bounded cohomology of $G$ due to Ivanov \cite{Ivanov/Foundations-of-the-theory-of-bounded-cohomology} and Burger and Monod \cite{Burger/Bounded-cohomology-of-lattices-in-higher-rank-Lie-groups}. Let us first consider the cochain complex
\begin{equation} \label{map:MeasurableCochainComplex}
%
\begin{tikzcd}[cramped]
0 \arrow[r,rightarrow]
&\mathscr{L}^{0}(\mathbb{T}^{1},\C) \arrow[r,"\de^{0}",rightarrow]
& \mathscr{L}^{0}(\mathbb{T}^{2},\C) \arrow[r,"\de^{1}",rightarrow]
& \mathscr{L}^{0}(\mathbb{T}^{3},\C) \arrow[r,"\de^{2}",rightarrow]
& \cdots
\end{tikzcd}
%
\end{equation}
of complex measurable functions on the Furstenberg boundary of $G$, where
\[
\map{\de^{n}}{\mathscr{L}^{0}(\mathbb{T}^{n+1},\C)}{\mathscr{L}^{0}(\mathbb{T}^{n+2},\C)} \quad\quad (n\ge0)
\]
denotes the homogeneous coboundary operator acting by
\begin{equation} \label{eqn:CoboundaryOperator}
(\de^{n} f)(z_{0},\ldots,z_{n+1}) \mathrel{\mathop:}= \sum_{j=0}^{n+1} (-1)^{j} \cdot f(z_{0},\ldots,\widehat{z_{j}},\ldots,z_{n+1}).
\end{equation}
It follows from the definitions that this coboundary operator induces coboundary operators, all denoted by the same symbol $\de^{\bullet}$, acting on the function space $L^{\infty}(\mathbb{T}^{\bullet+1},\mathbb{R})^{G}$, as well as on the function spaces $A(\mathbb{T}^{\bullet+1},\mathbb{R})$, $A^{\infty}(\mathbb{T}^{\bullet+1},\mathbb{R})$, $A(\mathbb{T}^{\bullet+1},\mathbb{R})^{P}$, $A^{\infty}(\mathbb{T}^{\bullet+1},\mathbb{R})^{P}$, $A^{\infty}(\mathbb{T}^{\bullet+1},\mathbb{C}_{1})$ and $A_{\t}^{\infty}(\mathbb{T}^{\bullet+1},\mathbb{C}_{1})$ which we are going to work with. In this manner, we obtain corresponding cochain complexes that are subcomplexes of quotients of subcomplexes of \eqref{map:MeasurableCochainComplex}.
Of particular interest in this section is the first of these induced cochain complexes, which is the complex
\[
\begin{tikzcd}[cramped]
0 \arrow[r,rightarrow]
& L^{\infty}(\mathbb{T}^{1},\mathbb{R})^{G} \arrow[r,"\de^{0}",rightarrow]
& L^{\infty}(\mathbb{T}^{2},\mathbb{R})^{G} \arrow[r,"\de^{1}",rightarrow]
& L^{\infty}(\mathbb{T}^{3},\mathbb{R})^{G} \arrow[r,"\de^{2}",rightarrow]
& \cdots
\end{tikzcd}
\]
of $G$-invariant bounded measurable functions on the Furstenberg boundary of $G$. The boundary model realizes the continuous bounded cohomology of $G$ in terms of this cochain complex. More specifically, since the boundary action of $G$ is amenable, by \cite[Thm.\,7.5.3]{Monod/Continuous-bounded-cohomology-of-locally-compact-groups} there is an isomorphism
\begin{equation} \label{map:BoundaryModel}
H^{n}_{\mathrm{cb}}(G;\mathbb{R}) \cong H^{n}\left( L^{\infty}(\mathbb{T}^{\bullet+1},\mathbb{R})^{G},\de^{\bullet} \right)
\end{equation}
in every degree $n \ge 0$. Notice that this collection of isomorphisms is compatible with the cup product introduced in Section \ref{subsec:ContinuousBoundedCohomology} and hence gives rise to an isomorphism of the respective bounded cohomology rings.
\subsection{Equivariant measurable liftings}
\label{subsec:EquivariantMeasurableLiftings}
First of all, we observe that the canonical map
\begin{equation} \label{map:EquivariantLifting}
\mathscr{L}^{\infty}(\mathbb{T}^{n+1},\mathbb{R})^{G} \to L^{\infty}(\mathbb{T}^{n+1},\mathbb{R})^{G} \quad\quad (n \ge 0)
\end{equation}
considered in Section \ref{subsec:BoundedMeasurableFunctions} is in fact a cochain map that intertwines with the action of the coboundary operator $\de$. As Monod \cite{Monod/Equivariant-measurable-liftings} points out, there is a priori no reason for this map to be surjective. However, since the boundary action of $G$ is amenable, by a result of Monod \cite[Thm.\,A, Rem.\,1 and Cor.\,6]{Monod/Equivariant-measurable-liftings} the cochain map in \eqref{map:EquivariantLifting} does in fact admit a section that intertwines with $\de$. An immediate consequence of this is the following proposition, which paves the way for applying differential geometric methods in the study of the boundary model for the continuous bounded cohomology of $G$.
\begin{proposition}
\label{prop:EquivariantLifting}
There is a surjective homomorphism
\begin{equation} \label{map:EquivariantLiftingOnCohomology}
H^{n}\left( A^{\infty}(\mathbb{T}^{\bullet+1},\mathbb{R})^{P},\de^{\bullet} \right) \to H^{n}\left( L^{\infty}(\mathbb{T}^{\bullet+1},\mathbb{R})^{G},\de^{\bullet} \right)
\end{equation}
for every $n \ge 0$.
\end{proposition}
\begin{proof}
Recall from Section \ref{subsec:KEquivariantFunctions} that every function $f \in A^{\infty}(\mathbb{T}^{n+1},\mathbb{R})^{P}$ has a representative $f \in \mathscr{A}^{\infty}(\mathbb{T}^{n+1},\mathbb{R})$ that is bounded and $G$-invariant on the configuration space $\mathbb{T}^{(n+1)}$. Since the subspace $\mathbb{T}^{(n+1)}$ is of full measure in $\mathbb{T}^{n+1}$, this function $f$ therefore defines an element of $L^{\infty}(\mathbb{T}^{n+1},\mathbb{R})^{G}$. Hence we obtain a natural cochain map
\begin{equation} \label{map:EquivariantLiftingFromA^P}
A^{\infty}(\mathbb{T}^{n+1},\mathbb{R})^{P} \to L^{\infty}(\mathbb{T}^{n+1},\mathbb{R})^{G} \quad\quad (n \ge 0)
\end{equation}
that intertwines with $\de$. This map admits a section that is induced by the section of the cochain map in \eqref{map:EquivariantLifting} \cite[Thm.\,A, Rem.\,1 and Cor.\,6]{Monod/Equivariant-measurable-liftings}. To see this, we note that every $G$-invariant bounded measurable function $f \in \mathscr{L}^{\infty}(\mathbb{T}^{n+1},\mathbb{R})^{G}$ is constant along all $G$-orbits in $\mathbb{T}^{n+1}$, hence it is in particular $G$-orbitwise smooth and bounded with bounded $G$-derivatives. Thus it determines a function in the space $A^{\infty}(\mathbb{T}^{n+1},\mathbb{R})^{P}$.
\end{proof}
\subsection{Cochain contractions}
\label{subsec:CochainContractions}
For every $\mu \in \mathbb{Z}$, we define a linear integral operator
\[
\map{\I^{n}}{A^{\infty}(\mathbb{T}^{n+1},\mathbb{C}_{\mu})}{A^{\infty}(\mathbb{T}^{n},\mathbb{C}_{\mu})} \quad\quad (n \ge 1)
\]
by
\begin{equation} \label{eqn:DefinitionOfOperatorI}
(\I^{n} f)(z_{0},\ldots,z_{n-1}) \mathrel{\mathop:}= \int_{S^{1}} f(z,z_{0},\ldots,z_{n-1}) \, \mathrm{d}\mu_{K}(z).
\end{equation}
We are now going to prove that this operator is well-defined and gives rise to a cochain contraction for the complex $(A^{\infty}(\mathbb{T}^{\bullet+1},\mathbb{C}_{\mu}),\de^{\bullet})$. Recall that this means that the operator $\I$ satisfies the identity
\begin{equation} \label{eqn:CochainContraction}
\I^{n+1} \circ \de^{n} + \de^{n-1} \circ \I^{n} = \Id
\end{equation}
for all $n>0$.
\begin{proposition} \label{prop:ComplexAIsAcyclic}
For every $\mu \in \mathbb{Z}$, the operator $\I$ is a well-defined cochain contraction for the complex $(A^{\infty}(\mathbb{T}^{\bullet+1},\mathbb{C}_{\mu}),\de^{\bullet})$.
\end{proposition}
\begin{proof}
Fix $\mu \in \mathbb{Z}$. First of all, we check that the operator $\I$ is well-defined. Consider a function $f \in \mathscr{A}^{\infty}(\mathbb{T}^{n+1},\mathbb{C}_{\mu})$ representing a class in $A^{\infty}(\mathbb{T}^{n+1},\mathbb{C}_{\mu})$.
Since $f$ is bounded, the integral in \eqref{eqn:DefinitionOfOperatorI} exists for every point $(z_{0},\ldots,z_{n-1}) \in \mathbb{T}^{n}$ and defines a bounded measurable function $\I f$ on $\mathbb{T}^{n}$. Then $K$-invariance of the measure $\mu_{K}$ together with $K$-equivariance of $f$ imply that the function $\I f$ is $K$-equivariant.
Next we prove that $\I f$ is $G$-orbitwise smooth and has bounded derivatives. Let us begin by considering the first order derivatives of $\I f$. Since $\I f$ is $K$-equivariant, it follows from Lemma~\ref{lemma:InvarianceAndEquivarianceUnderK} that the derivative $\L_{K} \I f$ exists and is bounded on $\mathbb{T}^{n}$. We now inspect the derivatives $\L_{A} \I f$ and $\L_{N} \I f$. This requires some computations, which are best carried out in angular coordinates $(\th_{0},\ldots,\th_{n-1}) \in \mathbb{R}^{n}$. Formally, we have
\begin{equation} \label{eqn:ADerivativeOfIf1}
\begin{aligned}
(\L_{A} \I f)(\th_{0},\ldots,\th_{n-1}) &= \left. \dd{}{t}\right|_{t=0} (\I f)(a_{t}.\th_{0},\ldots,a_{t}.\th_{n-1}) \\
&= \frac{1}{2\pi} \left. \dd{}{t}\right|_{t=0} \, \int_{0}^{2\pi} f(\eta,a_{t}.\th_{0},\ldots,a_{t}.\th_{n-1}) \, \mathrm{d}\eta \\
&= \frac{1}{2\pi} \int_{0}^{2\pi} \, \left. \dd{}{t}\right|_{t=0} \left( \dd{(a_{t}.\eta)}{\eta} \cdot f(a_{t}.\eta,a_{t}.\th_{0},\ldots,a_{t}.\th_{n-1}) \right) \mathrm{d}\eta.
\end{aligned}
\end{equation}
Since $f$ is $G$-orbitwise smooth, the derivative appearing under the integral sign is given by
\begin{equation} \label{eqn:ADerivativeOfIf2}
\begin{aligned}
& \left. \dd{}{t}\right|_{t=0} \left( \dd{(a_{t}.\eta)}{\eta} \cdot f(a_{t}.\eta,a_{t}.\th_{0},\ldots,a_{t}.\th_{n-1}) \right) \\
&= \left. \dd{}{t}\right|_{t=0} f(a_{t}.\eta,a_{t}.\th_{0},\ldots,a_{t}.\th_{n-1}) + \left. \dd{}{t}\right|_{t=0} \dd{(a_{t}.\eta)}{\eta} \cdot f(\eta,\th_{0},\ldots,\th_{n-1}) \\
&= (\L_{A} f)(\eta,\th_{0},\ldots,\th_{n-1}) + \cos(\eta) \cdot f(\eta,\th_{0},\ldots,\th_{n-1}),
\end{aligned}
\end{equation}
where in the last step we used the identity
\[
\left. \dd{}{t}\right|_{t=0} \dd{(a_{t}.\eta)}{\eta} = \dd{}{\eta} \left. \dd{(a_{t}.\eta)}{t}\right|_{t=0} = \dd{}{\eta} \sin(\eta) = \cos(\eta)
\]
which follows from \eqref{eqn:FundamentalVectorFields}. Since the function $f$ is bounded with bounded derivatives, we see that the derivative in \eqref{eqn:ADerivativeOfIf2} is bounded on $\mathbb{T}^{n}$. Hence the Lebesgue dominated convergence theorem justifies the computation in \eqref{eqn:ADerivativeOfIf1} and therefore the derivative $\L_{A} \I f$ exists. Further, combining \eqref{eqn:ADerivativeOfIf1} and \eqref{eqn:ADerivativeOfIf2} we obtain the formula
\begin{multline} \label{eqn:FormulaL_AI}
(\L_{A} \I f)(\th_{0},\ldots,\th_{n-1}) = (\I \L_{A} f)(\th_{0},\ldots,\th_{n-1}) \\ + \frac{1}{2\pi} \, \int_{0}^{2\pi} \cos(\eta) \cdot f(\eta,\th_{0},\ldots,\th_{n-1}) \, \mathrm{d}\eta.
\end{multline}
Likewise we have
\begin{multline} \label{eqn:FormulaL_NI}
(\L_{N} \I f)(\th_{0},\ldots,\th_{n-1}) = (\I \L_{N} f)(\th_{0},\ldots,\th_{n-1}) \\ + \frac{1}{2\pi} \, \int_{0}^{2\pi} \sin(\eta) \cdot f(\eta,\th_{0},\ldots,\th_{n-1}) \, \mathrm{d}\eta.
\end{multline}
Since $f$ is bounded with bounded derivatives it follows that the derivatives $\L_{A} \I f$ and $\L_{N} \I f$ are bounded functions on $\mathbb{T}^{n}$. In the general case, a similar argument shows that the directional derivatives $\L_{i_{1},\ldots,i_{\ell}} \I f$ are bounded functions on $\mathbb{T}^{n}$ for all $\ell > 0$ and all $(i_{1},\ldots,i_{\ell}) \in \{K,A,N\}^{\ell}$. Since $G = KAN$ by the Iwasawa decomposition, this proves that $\I f$ is $G$-orbitwise smooth and bounded with bounded derivatives.
We see from \eqref{eqn:DefinitionOfOperatorI} that the restriction of $\I f$ to the configuration space $\mathbb{T}^{(n)}$ does not depend on the choice of $f$ since $f$ is uniquely determined on the configuration space $\mathbb{T}^{(n+1)}$ and $(z,z_{0},\ldots,z_{n-1}) \in \mathbb{T}^{(n+1)}$ for almost every $z \in S^{1}$. Hence the function $\I f$ defines a class in $A^{\infty}(\mathbb{T}^{n},\mathbb{C}_{\mu})$.
A straightforward calculation shows that $\I^{n+1} \circ \de^{n} f + \de^{n-1} \circ \I^{n} f = f$ holds pointwise on the configuration space $\mathbb{T}^{(n+1)}$ for every function $f \in \mathscr{A}^{\infty}(\mathbb{T}^{n+1},\mathbb{C}_{\mu})$ and for all $n > 0$, which establishes \eqref{eqn:CochainContraction}.
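Explicitly, for $n > 0$ and $(z_{0},\ldots,z_{n}) \in \mathbb{T}^{(n+1)}$, splitting off the $j=0$ term in \eqref{eqn:CoboundaryOperator} and using that $\mu_{K}$ is a probability measure, we find
\begin{align*}
(\I^{n+1} \de^{n} f)(z_{0},\ldots,z_{n})
&= \int_{S^{1}} \biggl( f(z_{0},\ldots,z_{n}) - \sum_{j=0}^{n} (-1)^{j} \cdot f(z,z_{0},\ldots,\widehat{z_{j}},\ldots,z_{n}) \biggr) \, \mathrm{d}\mu_{K}(z) \\
&= f(z_{0},\ldots,z_{n}) - (\de^{n-1} \I^{n} f)(z_{0},\ldots,z_{n}),
\end{align*}
which is the contraction identity applied to $f$.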
\end{proof}
An immediate consequence of the proposition is the following vanishing theorem for the cohomology of the complex $(A^{\infty}(\mathbb{T}^{\bullet+1},\mathbb{C}_{\mu}),\de^{\bullet})$.
\begin{corollary} \label{cor:VanishingForA}
For every $\mu \in \mathbb{Z}$, the cochain complex $(A^{\infty}(\mathbb{T}^{\bullet+1},\mathbb{C}_{\mu}),\de^{\bullet})$ is acyclic and hence $H^{n}(A^{\infty}(\mathbb{T}^{\bullet+1},\mathbb{C}_{\mu}),\de^{\bullet}) = 0$ for all $n>0$.
\end{corollary}
\section{The Cauchy-Frobenius complex}
\label{sec:CauchyFrobeniusComplex}
\subsection{The differential operators $\L$ and $\Q$}
\label{subsec:OperatorsLAndQ}
We introduce two basic first order linear partial differential operators acting on $P$-orbitwise smooth functions. The first operator is defined by combining the real operators $\L_{A}$ and $\L_{N}$, introduced in Section \ref{subsec:OrbitwiseSmoothFunctions}, into a single complex operator.
\begin{definition} \label{def:OperatorL}
The \emph{Cauchy operator} is the complex operator
\begin{equation} \label{map:OperatorLUpstairs}
\map{\L \mathrel{\mathop:}= \L_{A} + \,i \L_{N}}{\mathscr{S}_{P}(\mathbb{T}^{n+1},\mathbb{R})}{\mathscr{S}_{P}(\mathbb{T}^{n+1},\mathbb{C})}.
\end{equation}
\end{definition}
The Cauchy operator naturally acts on $P$-orbitwise smooth functions. Its complex conjugate will be denoted by $\Lbar \mathrel{\mathop:}= \L_{A} - \,i \L_{N}$. For later reference, we note that as an immediate consequence of the real commutator relations in \eqref{eqn:CommutatorRelationsKAN}, the operators $\L_{K}$, $\L$ and $\Lbar$ satisfy the complex commutator relations
\begin{equation} \label{eqn:CommutatorRelationsL}
\left[ \L_{K},\L \right] - \L_{K} - \,i\L = 0, \quad \left[ \L_{K},\Lbar \right] - \L_{K} + \,i\Lbar = 0, \quad \left[ \L,\Lbar \right] + \L - \,\Lbar = 0.
\end{equation}
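We remark that the second relation is the complex conjugate of the first, since the operator $\L_{K}$ is real and conjugation interchanges $\L$ and $\Lbar$. Moreover, comparing real and imaginary parts, the complex relations above encode the real commutator relations
\[
\left[ \L_{K},\L_{A} \right] = \L_{K} - \L_{N}, \qquad \left[ \L_{K},\L_{N} \right] = \L_{A}, \qquad \left[ \L_{A},\L_{N} \right] = \L_{N}.
\]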
The second operator is defined in terms of the conjugated Cauchy operator $\Lbar$.
\begin{definition} \label{def:OperatorQ}
The \emph{Frobenius operator} is the real operator
\begin{equation} \label{map:OperatorQUpstairs}
\map{\Q \mathrel{\mathop:}= \Im\bigl( \Id - \Lbar \bigr)}{\mathscr{S}_{P}(\mathbb{T}^{n+1},\mathbb{C})}{\mathscr{S}_{P}(\mathbb{T}^{n+1},\mathbb{R})}
\end{equation}
defined as the imaginary part of the operator $\Id - \Lbar$.
\end{definition}
We reserve the notation $u = u^{\sharp} + i u^{\flat}$ for the decomposition of a complex function $u$ into its real and imaginary parts. For later reference, we note that the action of the Frobenius operator on some function $u \in \mathscr{S}_{P}(\mathbb{T}^{n+1},\mathbb{C})$ then takes the form
\begin{equation} \label{eqn:RealVersionActionOfQ}
\Q u = u^{\flat} - \L_{A} u^{\flat} + \L_{N} u^{\sharp}.
\end{equation}
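Indeed, since the operators $\L_{A}$ and $\L_{N}$ are real, expanding $\Lbar u = (\L_{A} - i\L_{N})(u^{\sharp} + i u^{\flat})$ and taking imaginary parts gives
\[
\Q u = \Im\bigl( u - \Lbar u \bigr) = \Im\bigl( u^{\sharp} + i u^{\flat} - \L_{A} u^{\sharp} - i\,\L_{A} u^{\flat} + i\,\L_{N} u^{\sharp} - \L_{N} u^{\flat} \bigr) = u^{\flat} - \L_{A} u^{\flat} + \L_{N} u^{\sharp}.
\]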
\subsection{The Cauchy-Frobenius complex}
The goal of this section is to investigate the interaction between the differential operators $\L$ and $\Q$. We denote by
\[
\map{\iota^{n}}{A^{\infty}(\mathbb{T}^{n+1},\mathbb{R})^{P}}{A^{\infty}(\mathbb{T}^{n+1},\mathbb{R})} \quad\quad (n \ge 0)
\]
the canonical inclusion. We begin with the following basic observation.
\begin{proposition} \label{prop:CauchyFrobeniusSequence}
The differential operators $\L$ and $\Q$ in \eqref{map:OperatorLUpstairs} and \eqref{map:OperatorQUpstairs} induce linear operators
\begin{equation} \label{map:OperatorLDownstairs}
\map{\L^{n}}{A(\mathbb{T}^{n+1},\mathbb{R})}{A^{\infty}(\mathbb{T}^{n+1},\mathbb{C}_{1})} \quad\quad (n \ge 0)
\end{equation}
and
\begin{equation} \label{map:OperatorQDownstairs}
\map{\Q^{n}}{A^{\infty}(\mathbb{T}^{n+1},\mathbb{C}_{1})}{A^{\infty}(\mathbb{T}^{n+1},\mathbb{R})} \quad\quad (n \ge 0)
\end{equation}
which give rise to a differential complex
\begin{equation} \label{map:CauchyFrobeniusComplex}
%
\begin{tikzcd}[column sep = small]
0 \arrow[r,rightarrow] & A^{\infty}(\mathbb{T}^{n+1},\mathbb{R})^{P} \arrow[r,"\iota^{n}",rightarrow]
& A^{\infty}(\mathbb{T}^{n+1},\mathbb{R}) \arrow[r,"\L^{n}",rightarrow]
& A^{\infty}_{\t}(\mathbb{T}^{n+1},\mathbb{C}_{1}) \arrow[r,"\Q^{n}",rightarrow]
& A^{\infty}(\mathbb{T}^{n+1},\mathbb{R}) \arrow[r,rightarrow] & 0
\end{tikzcd}
%
\end{equation}
for every $n \ge 0$.
\end{proposition}
The differential complex in \eqref{map:CauchyFrobeniusComplex} will be called the \emph{Cauchy-Frobenius complex}.
\begin{proof}
\noindent{\textbf{Step 1.}} We prove that the Cauchy operator in \eqref{map:OperatorLUpstairs} induces a linear operator
\[
\map{\L^{n}}{A(\mathbb{T}^{n+1},\mathbb{R})}{A^{\infty}(\mathbb{T}^{n+1},\mathbb{C}_{1})}.
\]
\medskip
Consider a function $p \in A(\mathbb{T}^{n+1},\mathbb{R})$ that is represented by some function $p \in \mathscr{A}(\mathbb{T}^{n+1},\mathbb{R})$. The function $\L p$ is bounded with bounded derivatives since $p$ has bounded derivatives. By Lemma \ref{lemma:InvarianceAndEquivarianceUnderK} we have $\L_{K} p = 0$. Hence it follows with the commutator relations from \eqref{eqn:CommutatorRelationsL} that
\[
\L_{K} (\L p) - i \, \L p = \left[ \L_{K},\L \right] p - \L_{K} p - \,i\L p = 0,
\]
which by Lemma \ref{lemma:InvarianceAndEquivarianceUnderK} implies that $\L p$ is $K$-equivariant as a function taking values in the $K$-module $\mathbb{C}_{1}$. Thus the function $\L p$ determines a well-defined class in $A^{\infty}(\mathbb{T}^{n+1},\mathbb{C}_{1})$ since the configuration space $\mathbb{T}^{(n+1)}$ is invariant under the action of $P$.
\medskip
\noindent{\textbf{Step 2.}} We prove that the operator $\L^{n}$ from Step 1 restricts to an operator
\[
\map{\L^{n}}{A^{\infty}(\mathbb{T}^{n+1},\mathbb{R})}{A^{\infty}_{\t}(\mathbb{T}^{n+1},\mathbb{C}_{1})}.
\]
\medskip
Continuing with the argument from Step 1, it remains to check that $\L p$ is tame. To this end, we observe that $\Re(\L p) = \L_{A} p$. Hence by \eqref{eqn:IntegrateAlongA} we obtain for $\mathbf{z} \in \mathbb{T}^{(n+1)}$ the identity
\[
\int_{0}^{T} (\Re(\L p))(a_{t}.\mathbf{z}) \, \mathrm{d} t = p(a_{T}.\mathbf{z}) - p(\mathbf{z})
\]
for all $T \in \mathbb{R}$. Since $p$ is bounded we conclude that $\L p$ is tame.
\medskip
\noindent{\textbf{Step 3.}} We prove that $\im \, \iota^{n} \subset \ker \L^{n}$.
\medskip
Consider a function $p \in A^{\infty}(\mathbb{T}^{n+1},\mathbb{R})^{P}$. It is $G$-orbitwise smooth and $P$-invariant when restricted to the configuration space $\mathbb{T}^{(n+1)}$, hence invariant under the actions of $A$ and $N$ thereon. Thus $\L_{A} p = 0 = \L_{N} p$ on $\mathbb{T}^{(n+1)}$, which implies that $\L \iota p = 0$ in $A(\mathbb{T}^{n+1},\mathbb{C}_{1})$ because $p$ is real valued.
\medskip
\noindent{\textbf{Step 4.}} We prove that the Frobenius operator in \eqref{map:OperatorQUpstairs} induces a linear operator
\[
\map{\Q^{n}}{A^{\infty}(\mathbb{T}^{n+1},\mathbb{C}_{1})}{A^{\infty}(\mathbb{T}^{n+1},\mathbb{R})}.
\]
\medskip
Consider a function $u \in A^{\infty}(\mathbb{T}^{n+1},\mathbb{C}_{1})$ represented by some function $u \in \mathscr{A}^{\infty}(\mathbb{T}^{n+1},\mathbb{C}_{1})$. The function $\Q u$ is bounded with bounded derivatives since $u$ has bounded derivatives. Since $u$ is $K$-equivariant with values in $\mathbb{C}_{1}$, by Lemma \ref{lemma:InvarianceAndEquivarianceUnderK} we have $\L_{K} u - i\, u = 0$. Thus it follows with \eqref{eqn:CommutatorRelationsL} that
\[
\L_{K} ( u - \,\Lbar u ) = \L_{K} u - \left[ \L_{K},\Lbar \right] u - \Lbar \L_{K} u = i\, u - \L_{K} u + i\,\Lbar u - i\,\Lbar u = 0,
\]
and hence $\L_{K} \Q u = \Im\bigl( \L_{K} ( u - \,\Lbar u ) \bigr) = 0$, since the real operator $\L_{K}$ commutes with taking imaginary parts.
By Lemma \ref{lemma:InvarianceAndEquivarianceUnderK} this implies that $\Q u$ is $K$-equivariant as a $\mathbb{C}_{0}$-valued function, hence $K$-invariant. As in Step~1 we see that $\Q u$ defines a class in $A^{\infty}(\mathbb{T}^{n+1},\mathbb{R})$.
\medskip
\noindent{\textbf{Step 5.}} We prove that $\im \L^{n} \subset \ker \Q^{n}$.
\medskip
Let $p \in A^{\infty}(\mathbb{T}^{n+1},\mathbb{R})$. Since $p$ is real valued we have $\overline{\L p} = \Lbar p$, so using the commutator relations from \eqref{eqn:CommutatorRelationsL} we compute
\begin{equation*} \label{eqn:PLq}
\Q \L p = \Im \left( \L p - \Lbar \L p \right) = \frac{1}{2\,i} \left( \left[ \L,\Lbar \right] p + \L p - \,\Lbar p \right) = 0. \qedhere
\end{equation*}
\end{proof}
The next proposition, which is the main result of this section, characterizes the interaction between the differential operators $\L$ and $\Q$. Its proof will occupy the remainder of this section.
\begin{proposition} \label{prop:ExactnessOfCauchyFrobeniusComplex}
The Cauchy-Frobenius complex in \eqref{map:CauchyFrobeniusComplex} is exact for every $n \ge 2$. Moreover, for $n=1$ it is exact at the last term, i.e., the map $\Q^{1}$ is surjective.
\end{proposition}
\begin{proof}
Exactness of the Cauchy-Frobenius complex at the first term is clear. Exactness at the other terms holds by Proposition \ref{prop:InfinitesimalPInvariance}, Proposition \ref{prop:CauchyProblem} and Proposition \ref{prop:FrobeniusProblem} below.
\end{proof}
\subsection{Infinitesimal $P$-invariance}
\label{subsec:InfinitesimalPInvariance}
\begin{proposition} \label{prop:InfinitesimalPInvariance}
In \eqref{map:CauchyFrobeniusComplex} we have $\im \, \iota^{n} = \ker \L^{n}$ for every $n \ge 1$.
\end{proposition}
\begin{proof}
By Proposition \ref{prop:CauchyFrobeniusSequence}, it remains to show that $\ker \L^{n} \subset \im \, \iota^{n}$. So consider a function $p \in A^{\infty}(\mathbb{T}^{n+1},\mathbb{R})$. Since $p$ is real valued, $\L^{n} p = 0$ implies that $\L_{A} p = 0 = \L_{N} p$ on the configuration space $\mathbb{T}^{(n+1)}$. Since $p$ is smooth along $P$-orbits and $P$-orbits are connected, it follows that $p$ is invariant under the actions of $A$ and $N$ on $\mathbb{T}^{(n+1)}$, hence $P$-invariant thereon. We conclude that $p \in A^{\infty}(\mathbb{T}^{n+1},\mathbb{R})^{P}$.
\end{proof}
\subsection{$K$-reduction and $K$-extension}
\label{subsec:KReductionAndExtension}
We introduce the concepts of $K$-reduction and $K$-extension, which will be useful when dealing with differential equations for $K$-equivariant functions. Let $n \ge 0$. Given a measurable function $f \in \mathscr{L}^{0}(\mathbb{T}^{n+1},\mathbb{C})$, the \emph{$K$-reduction} of $f$ is the function $f_{K} \in \mathscr{L}^{0}(\mathbb{T}^{n},\mathbb{C})$ defined by
\begin{equation} \label{eqn:KReduction}
f_{K}(z_{1},\ldots,z_{n}) \mathrel{\mathop:}= f(1,z_{1},\ldots,z_{n}).
\end{equation}
Conversely, given a weight $\mu \in \mathbb{Z}$ and a function $f \in \mathscr{L}^{0}(\mathbb{T}^{n},\C)$, the \emph{$K$-extension of $f$ with weight $\mu$} is the function $f^{K}_{\mu} \in \mathscr{L}^{0}(\mathbb{T}^{n+1},\mathbb{C}_{\mu})^{K}$ defined by
\begin{equation} \label{eqn:KExtension}
f^{K}_{\mu}(z_{0},\ldots,z_{n}) \mathrel{\mathop:}= z_{0}^{\mu} \cdot f(z_{1}/z_{0},\ldots,z_{n}/z_{0}).
\end{equation}
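Under the identification $K \cong S^{1}$, with $k \in K$ acting diagonally by multiplication, $k.(z_{0},\ldots,z_{n}) = (kz_{0},\ldots,kz_{n})$, a direct computation shows that $f^{K}_{\mu}$ is indeed $K$-equivariant of weight $\mu$:
\[
f^{K}_{\mu}(kz_{0},\ldots,kz_{n}) = (kz_{0})^{\mu} \cdot f(z_{1}/z_{0},\ldots,z_{n}/z_{0}) = k^{\mu} \cdot f^{K}_{\mu}(z_{0},\ldots,z_{n}).
\]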
The next lemma collects some basic properties of $K$-reduction and $K$-extension.
\begin{lemma} \label{lemma:ReductionAndExtensionByK}
Let $n \ge 0$, and fix an integer $\mu \in \mathbb{Z}$.
\begin{enumerate}[leftmargin=1cm,topsep=0.5ex,itemsep=0.5ex]
\item
For all $f \in \mathscr{L}^{0}(\mathbb{T}^{n+1},\mathbb{C}_{\mu})^{K}$ we have $(f_{K})^{K}_{\mu} = f$, and for all $f \in \mathscr{L}^{0}(\mathbb{T}^{n},\mathbb{C})$ we have $(f^{K}_{\mu})_{K} = f$.
\item Let $f, f^{\prime} \in \mathscr{L}^{0}(\mathbb{T}^{n+1},\mathbb{C}_{\mu})^{K}$. If $f_{K} = f^{\prime}_{K}$, then $f=f^{\prime}$.
\item Let $f \in \mathscr{L}^{0}(\mathbb{T}^{n+1},\mathbb{C})$. Then $f$ is bounded if and only if $f_{K}$ is bounded if and only if $f^{K}_{\mu}$ is bounded.
\item If $f \in \mathscr{S}_{G}(\mathbb{T}^{n+1},\mathbb{C})$, then $f_{K} \in \mathscr{S}_{P}(\mathbb{T}^{n},\mathbb{C})$. Moreover, we have
\[
\L_{A} f_{K} = (\L_{A} f)_{K}, \quad \L_{N} f_{K} = (\L_{N} f)_{K}.
\]
\item If $f \in \mathscr{S}^{\b}_{G}(\mathbb{T}^{n+1},\mathbb{C})$, then $f_{K} \in \mathscr{S}^{\b}_{P}(\mathbb{T}^{n},\mathbb{C})$.
\item If $f \in \mathscr{S}_{P}(\mathbb{T}^{n},\mathbb{C})$, then $f^{K}_{\mu} \in \mathscr{S}_{G}(\mathbb{T}^{n+1},\mathbb{C}_{\mu})^{K}$.
\item If $f \in \mathscr{S}^{\b}_{P}(\mathbb{T}^{n},\mathbb{C})$, then $f^{K}_{\mu} \in \mathscr{S}^{\b}_{G}(\mathbb{T}^{n+1},\mathbb{C}_{\mu})^{K}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Claims (i) and (iii) are immediate from \eqref{eqn:KReduction} and \eqref{eqn:KExtension}, and (ii) follows from (i).
\medskip
To prove (iv), recall that for a $G$-orbitwise smooth function $f \in \mathscr{S}_{G}(\mathbb{T}^{n+1},\mathbb{C})$ the map
\[
G \to \C, \quad g \mapsto f(g.z_{0},\ldots,g.z_{n})
\]
is smooth for every $(z_{0},\ldots,z_{n}) \in \mathbb{T}^{n+1}$. Recall moreover that $1 \in S^{1}$ is a fixed point for the action of the parabolic subgroup $P=AN$. Firstly, this implies that the map
\[
P \to \C, \quad p \mapsto f(p.1,p.z_{1},\ldots,p.z_{n}) = f_{K}(p.z_{1},\ldots,p.z_{n})
\]
is smooth, which shows that $f_{K} \in \mathscr{S}_{P}(\mathbb{T}^{n},\C)$. Secondly, it implies that $K$-reduction commutes with the action of the operators $\L_{A}$ and $\L_{N}$. In fact, for $(z_{1},\ldots,z_{n}) \in \mathbb{T}^{n}$ we have
\[
\begin{aligned}
(\L_{A} f_{K})(z_{1},\ldots,z_{n}) &= \left. \dd{}{t} \right|_{t=0} f_{K}(a_{t}.z_{1},\ldots,a_{t}.z_{n}) \\
&= \left. \dd{}{t} \right|_{t=0} f(a_{t}.1,a_{t}.z_{1},\ldots,a_{t}.z_{n}) \\
&= (\L_{A} f)(1,z_{1},\ldots,z_{n}) \,=\, (\L_{A} f)_{K}(z_{1},\ldots,z_{n}),
\end{aligned}
\]
and likewise for $\L_{N}$.
\medskip
Let us prove (v). Assume that $f \in \mathscr{S}^{\b}_{G}(\mathbb{T}^{n+1},\mathbb{C})$. By (iv) above it remains to show that $f_{K}$ has bounded $P$-derivatives. By (iv) we have
\[
\L_{i_{1},\ldots,i_{\ell}} f_{K} = ( \L_{i_{1},\ldots,i_{\ell}} f )_{K}
\]
for all $\ell > 0$ and all $(i_{1},\ldots,i_{\ell}) \in \{A,N\}^{\ell}$. The claim now follows with (iii) above since $f$ has bounded $G$-derivatives.
\medskip
To prove claim (vi) we consider $f \in \mathscr{S}_{P}(\mathbb{T}^{n},\C)$ and let $(z_{0},\ldots,z_{n}) \in \mathbb{T}^{n+1}$. We are going to show that the map
\begin{equation} \label{map:SmoothnessOfKExtension}
G \to \C, \quad g \mapsto f^{K}_{\mu}(g.z_{0},\ldots,g.z_{n})
\end{equation}
is smooth. Let us fix $t \in \mathbb{R}$ such that $k_{t} = z_{0} \in S^{1} \cong K$. Then $k_{t}^{-1}.z_{0}=1$. From the Iwasawa decomposition $G = KAN = KP$ we obtain the decomposition $G = K P^{\prime}$ with the parabolic subgroup $P^{\prime} \mathrel{\mathop:}= k_{t}\,P\,k_{t}^{-1}$. Any $g \in G$ may therefore be written in the form
\begin{equation} \label{eqn:ConjugateIwasawa}
g = k\,k_{t}\,p\,k_{t}^{-1},
\end{equation}
with $k \in K$ and $p \in P$ smoothly depending on $g$. Let us write $k \, k_{t} = e^{t^{\prime}i}$ with $t^{\prime} \in \mathbb{R}$ smoothly depending on $g$. Then it follows from \eqref{eqn:KExtension} and the fact that $1 \in S^{1}$ is a fixed point for the action of $P$ that
\[
\begin{aligned}
f^{K}_{\mu}(g.z_{0},\ldots,g.z_{n}) &= e^{i \mu t^{\prime}} \cdot f^{K}_{\mu}\left( p\,k_{t}^{-1}.z_{0},p\,k_{t}^{-1}.z_{1},\ldots,p\,k_{t}^{-1}.z_{n} \right) \\
&= e^{i \mu t^{\prime}} \cdot f^{K}_{\mu}\left( 1,p\,k_{t}^{-1}.z_{1},\ldots,p\,k_{t}^{-1}.z_{n} \right) \\
&= e^{i \mu t^{\prime}} \cdot f\left( p\,k_{t}^{-1}.z_{1},\ldots,p\,k_{t}^{-1}.z_{n} \right).
\end{aligned}
\]
Since the function $f$ is $P$-orbitwise smooth and $t^{\prime}$ and $p$ depend smoothly on $g$, we conclude that the map \eqref{map:SmoothnessOfKExtension} is in fact smooth.
\medskip
Lastly, we prove (vii). Let $f \in \mathscr{S}^{\b}_{P}(\mathbb{T}^{n},\C)$. By (vi) above it remains to show that the function $f^{K}_{\mu}$ has bounded $G$-derivatives. To this end, let us first introduce some notation. We abbreviate $\E_{0} \mathrel{\mathop:}= \L_{K}$, $\E_{1} \mathrel{\mathop:}= \L$ and $\E_{2} \mathrel{\mathop:}= \Lbar$. Given an integer $\ell > 0$, for any collection of indices $(j_{1},\ldots,j_{\ell}) \in \{0,1,2\}^{\ell}$ we then consider the $\ell$-th order linear partial differential operators
\begin{equation*}
\E_{j_{1},\ldots,j_{\ell}} \mathrel{\mathop:}= \E_{j_{1}} \circ \cdots \circ \E_{j_{\ell}}.
\end{equation*}
For $\ell = 0$ we set $\E_{j_{1},\ldots,j_{\ell}} \mathrel{\mathop:}= \Id$. Observe that any of the differential operators $\L_{i_{1},\ldots,i_{\ell}}$ defined in Section \ref{subsec:OrbitwiseSmoothFunctions} may be expressed as a complex linear combination of the differential operators $\E_{j_{1},\ldots,j_{\ell}}$. Hence in order to prove that $f^{K}_{\mu}$ has bounded derivatives it will be sufficient to show that the derivatives $\E_{j_{1},\ldots,j_{\ell}} f^{K}_{\mu}$ are bounded for all $\ell > 0$ and all $(j_{1},\ldots,j_{\ell}) \in \{0,1,2\}^{\ell}$.
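In fact, inverting the definitions of $\L$ and $\Lbar$ yields the explicit expressions
\[
\L_{K} = \E_{0}, \qquad \L_{A} = \tfrac{1}{2} \left( \E_{1} + \E_{2} \right), \qquad \L_{N} = \tfrac{1}{2i} \left( \E_{1} - \E_{2} \right),
\]
so that expanding the composition $\L_{i_{1}} \circ \cdots \circ \L_{i_{\ell}}$ multilinearly expresses it as a linear combination of the operators $\E_{j_{1},\ldots,j_{\ell}}$.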
Let us consider the first order derivatives of the function $f^{K}_{\mu}$. Since by (vi) above $f^{K}_{\mu}$ is $K$-equivariant, by Lemma \ref{lemma:InvarianceAndEquivarianceUnderK} we have
\[
\E_{0} f^{K}_{\mu} = \L_{K} f^{K}_{\mu} = i \mu \cdot f^{K}_{\mu},
\]
which is bounded since $f^{K}_{\mu}$ is bounded. Let now $j_{1} \in \{1,2\}$. Using the commutator relations from \eqref{eqn:CommutatorRelationsL} we arrive at the differential equation
\begin{equation} \label{eqn:EqnBoundednessOfDerivativesFirstOrder}
\L_{K} \left( \E_{j_{1}} f^{K}_{\mu} \right) = i \nu \cdot \left( \E_{j_{1}} f^{K}_{\mu} \right) + R_{0}(f^{K}_{\mu})
\end{equation}
for the derivative $\E_{j_{1}} f^{K}_{\mu}$, with $\nu \in \mathbb{Z}$ and the lower order perturbation term
\[
R_{0}(f^{K}_{\mu}) = i \beta \cdot f^{K}_{\mu}
\]
for some $\beta \in \mathbb{Z}$. Notice that \eqref{eqn:EqnBoundednessOfDerivativesFirstOrder} is a first order linear ordinary differential equation along each $K$-orbit in $\mathbb{T}^{n+1}$. By Lemma \ref{lemma:InvarianceAndEquivarianceUnderK} and (iii) above, any solution of the unperturbed equation in \eqref{eqn:EqnBoundednessOfDerivativesFirstOrder} is bounded if and only if its $K$-reduction is bounded. Observe moreover that the perturbation term in \eqref{eqn:EqnBoundednessOfDerivativesFirstOrder} is bounded. Since $K \cong S^{1}$ is compact, we therefore conclude that the solution $\E_{j_{1}} f^{K}_{\mu}$ of the perturbed equation in \eqref{eqn:EqnBoundednessOfDerivativesFirstOrder} is bounded if and only if its $K$-reduction is bounded (cf.\,\cite[Sec.\,3.3]{Arnold/Ordinary-differential-equations}). Now by (iv) and (i) above this $K$-reduction is given by
\[
(\E_{j_{1}} f^{K}_{\mu})_{K} = \E_{j_{1}} (f^{K}_{\mu})_{K} = \E_{j_{1}} f,
\]
which is bounded since $f$ has bounded $P$-derivatives. Hence the derivatives $\E_{j_{1}} f^{K}_{\mu}$ are bounded for $j_{1} \in \{0,1,2\}$.
We may now consider derivatives of the function $f^{K}_{\mu}$ of any order $\ell > 1$. To this end, we let $(j_{1},\ldots,j_{\ell}) \in \{0,1,2\}^{\ell}$ and inductively apply the commutator relations from \eqref{eqn:CommutatorRelationsL} to obtain the differential equation
\begin{equation*} \label{eqn:EqnBoundednessOfDerivatives}
\L_{K} \left( \E_{j_{1},\ldots,j_{\ell}} f^{K}_{\mu} \right) = i \gamma \cdot \E_{j_{1},\ldots,j_{\ell}} f^{K}_{\mu} + R_{\ell-1}(f^{K}_{\mu})
\end{equation*}
for the derivative $\E_{j_{1},\ldots,j_{\ell}} f^{K}_{\mu}$, with $\gamma \in \mathbb{Z}$ and the lower order perturbation term
\[
R_{\ell-1}(f^{K}_{\mu}) = \sum_{0 \le \kappa < \ell} \,\, \sum_{(l_{1},\ldots,l_{\kappa}) \in \{0,1,2\}^{\kappa}} i \alpha_{l_{1},\ldots,l_{\kappa}} \cdot \E_{l_{1},\ldots,l_{\kappa}} f^{K}_{\mu}
\]
with $\alpha_{l_{1},\ldots,l_{\kappa}} \in \mathbb{Z}$. It follows by induction that the function $R_{\ell-1}(f^{K}_{\mu})$ is bounded. Hence a similar argument as in the case $\ell=1$ above shows that the derivative $\E_{j_{1},\ldots,j_{\ell}} f^{K}_{\mu}$ is in fact bounded.
\end{proof}
We will also need the following useful criterion for tameness.
\begin{lemma} \label{lemma:CriterionForTameness}
If the real part of a bounded function $f \in \mathscr{S}^{\b}_{P}(\mathbb{T}^{n},\mathbb{C})$ satisfies $\Re f = 0$, then the $K$-extension $f^{K}_{1} \in \mathscr{S}^{\b}_{G}(\mathbb{T}^{n+1},\mathbb{C}_{1})^{K}$ of $f$ with weight $1$ is tame.
\end{lemma}
\begin{proof}
Let $f \in \mathscr{S}^{\b}_{P}(\mathbb{T}^{n},\mathbb{C})$, and assume that $\Re f = 0$. By Lemma \ref{lemma:ReductionAndExtensionByK}\,(iii) we know that $f^{K}_{1}$ is bounded since $f$ is bounded by assumption. It will be convenient to work with angular coordinates $(\th_{0},\ldots,\th_{n}) \in \mathbb{T}^{n+1}$. Recall from \eqref{eqn:KExtension} that the $K$-extension $f^{K}_{1} \in \mathscr{S}^{\b}_{G}(\mathbb{T}^{n+1},\mathbb{C}_{1})^{K}$ is given by
\[
f^{K}_{1}(\th_{0},\ldots,\th_{n}) = e^{i\th_{0}} \cdot f( \th_{1}-\th_{0},\ldots,\th_{n}-\th_{0} ).
\]
Since $\Re f = 0$ by assumption, it follows that
\[
\bigl( \Re f^{K}_{1} \bigr)(\th_{0},\ldots,\th_{n}) = - \sin(\th_{0}) \cdot (\Im f)(\th_{1}-\th_{0},\ldots,\th_{n}-\th_{0}).
\]
Hence boundedness of $f$ yields an estimate
\begin{equation} \label{Eqn:TamenessOfu1}
\left\lvert \int_{0}^{T} \bigl( \Re f^{K}_{1} \bigr)(a_{t}.\th_{0},\ldots,a_{t}.\th_{n}) \, \mathrm{d} t \right\rvert \le \lVert f \rVert_{\infty} \cdot \int_{0}^{T} \lvert \sin(a_{t}.\th_{0}) \rvert \, \mathrm{d} t
\end{equation}
for every $T \in \mathbb{R}$. Recall that the fixed points for the boundary action of $a_{t}$ on $S^{1}$ are $\pm 1$, which in angular coordinates correspond to the multiples of $\pi$. Hence we have
\begin{equation} \label{Eqn:TamenessOfu2}
\lvert \sin(a_{t}.\th_{0}) \rvert \le \pm \sin(a_{t}.\th_{0})
\end{equation}
for all $t \in \mathbb{R}$, depending on whether $\sin(\th_{0}) \gtreqless 0$. Now with the explicit formula for the operator $\L_{A}$ from \eqref{eqn:FundamentalVectorFields} we compute
\begin{equation} \label{Eqn:TamenessOfu3}
\int_{0}^{T} \sin(a_{t}.\th_{0}) \, \mathrm{d} t = \int_{0}^{T} \dd{}{t}(a_{t}.\th_{0}) \, \mathrm{d} t = a_{T}.\th_{0} - \th_{0}
\end{equation}
for every $T \in \mathbb{R}$. Combining \eqref{Eqn:TamenessOfu1}, \eqref{Eqn:TamenessOfu2} and \eqref{Eqn:TamenessOfu3} we finally arrive at
\[
\left\lvert \int_{0}^{T} \bigl( \Re f^{K}_{1} \bigr)(a_{t}.\th_{0},\ldots,a_{t}.\th_{n}) \, \mathrm{d} t \right\rvert \le \lVert f \rVert_{\infty} \cdot \lvert a_{T}.\th_{0} - \th_{0} \rvert \le \lVert f \rVert_{\infty} \cdot \pi
\]
for all $T \in \mathbb{R}$ and every point $(\th_{0},\ldots,\th_{n}) \in \mathbb{T}^{n+1}$, which implies that $f^{K}_{1}$ is tame.
\end{proof}
\subsection{The Cauchy problem}
\label{SubSec:CauchyProblem}
Consider the partial differential equation
\begin{equation} \label{eqn:CauchyProblem}
\L p = u
\end{equation}
with right-hand side $u \in A^{\infty}(\mathbb{T}^{n+1},\mathbb{C}_{1})$. Our goal in this section is to explicitly construct solutions $p \in A(\mathbb{T}^{n+1},\mathbb{R})$ of this equation, and to study their boundedness properties. As it turns out, solutions of \eqref{eqn:CauchyProblem} are uniquely determined by a suitable choice of initial condition. To formalize this, we make the following definition.
\pagebreak
\begin{definition} \label{def:MeasurableSetOfBasepoints}
A subset $B_{n} \subset \mathbb{T}^{n+1}$ is called a \emph{measurable set of basepoints} for the boundary action of $G$ on $\mathbb{T}^{n+1}$ if the following two conditions are satisfied.
\begin{enumerate}[leftmargin=1cm,topsep=0.5ex,itemsep=0.5ex]
\item The set $B_{n}$ is a measurable subset of $\mathbb{T}^{n+1}$.
\item The map
\[
B_{n} \to \mathbb{T}^{n+1}/G, \quad (b_{0},\ldots,b_{n}) \mapsto G.(b_{0},\ldots,b_{n})
\]
taking each basepoint to its corresponding $G$-orbit in $\mathbb{T}^{n+1}$ is bijective.
\end{enumerate}
\end{definition}
We remark that measurable sets of basepoints $B_{n} \subset \mathbb{T}^{n+1}$ for the boundary action of $G$ on $\mathbb{T}^{n+1}$ as in Definition \ref{def:MeasurableSetOfBasepoints} above exist for every $n \ge 0$ (cf.\,\cite[App.\,B]{Zimmer/Ergodic-theory-and-semisimple-groups}). For any fixed such measurable set of basepoints we may then impose the initial condition
\begin{equation} \label{eqn:InitialCondition}
p|_{B_{n}} = 0
\end{equation}
upon the solutions of \eqref{eqn:CauchyProblem}. Note that this condition involves pointwise evaluation of the function class $p \in A(\mathbb{T}^{n+1},\mathbb{R})$ on the set $B_{n} \subset \mathbb{T}^{n+1}$. This is well-defined only on the configuration space $\mathbb{T}^{(n+1)}$, but vacuous on its complement $\mathbb{T}^{n+1} \setminus \mathbb{T}^{(n+1)}$. Nevertheless, as we will see, the initial condition in \eqref{eqn:InitialCondition} uniquely determines the solution $p$. We will refer to \eqref{eqn:CauchyProblem}--\eqref{eqn:InitialCondition} as the \emph{Cauchy problem}. The next proposition characterizes its solutions.
\begin{proposition} \label{prop:CauchyProblem}
Fix a collection $\mathcal{B} = \{B_{n}\}_{n \ge 2}$ of measurable sets of basepoints $B_{n} \subset \mathbb{T}^{n+1}$ for the boundary action of $G$ on $\mathbb{T}^{n+1}$. Then there exists a linear operator
\begin{equation} \label{map:SolutionOperatorCauchyProblem}
\map{\Rop^{n}_{\mathcal{B}}}{\im \L^{n}}{A(\mathbb{T}^{n+1},\mathbb{R})} \quad\quad (n \ge 2)
\end{equation}
which is a right inverse of the Cauchy operator $\L^{n}$ in \eqref{map:OperatorLDownstairs}. More precisely, for every $n \ge 2$ and for every function $u \in A^{\infty}(\mathbb{T}^{n+1},\mathbb{C}_{1})$ satisfying the integrability condition
\begin{equation} \label{eqn:IntegrabilityConditionCauchyProblem}
\Q u = 0
\end{equation}
the following hold.
\begin{enumerate}[leftmargin=1cm,topsep=0.5ex,itemsep=0.5ex]
\item Fix a basepoint $\mathbf{b} \in B_{n} \cap \mathbb{T}^{(n+1)}$, an element $g \in G$, and a Cartan decomposition $g = k^{\prime}\,a_{T}\,k$ with $k, k^{\prime} \in K$, $a_{T} \in A$ and $T \in \mathbb{R}$ as in \eqref{eqn:CartanDecomposition}. Then the value of the function $\Rop_{\mathcal{B}} u$ at the point $g.\mathbf{b}$ is given by the integral
\begin{equation} \label{eqn:FormulaForRu}
\bigl( \Rop_{\mathcal{B}} u \bigr)( g.\mathbf{b} ) = \int_{0}^{T} (\Re u)(a_{t}\,k.\mathbf{b}) \, \mathrm{d} t.
\end{equation}
\item The function $p \mathrel{\mathop:}= \Rop_{\mathcal{B}} u$ is a solution of the Cauchy problem \eqref{eqn:CauchyProblem}--\eqref{eqn:InitialCondition}.
\item If the function $u$ is tame, then the solution $p = \Rop_{\mathcal{B}} u$ is bounded. In particular, the Cauchy-Frobenius complex in \eqref{map:CauchyFrobeniusComplex} is exact at the third term $A_{\t}^{\infty}(\mathbb{T}^{n+1},\mathbb{C}_{1})$.
\end{enumerate}
\end{proposition}
We note that the pointwise evaluation of the function $\Rop_{\mathcal{B}} u$ in \eqref{eqn:FormulaForRu} is only defined for points in the configuration space $\mathbb{T}^{(n+1)}$. This is not a loss, however, since we are working with function classes in the sense of Section \ref{subsec:KEquivariantFunctions}.
\begin{proof}
Fix $n \ge 2$, let $B_{n} \subset \mathbb{T}^{n+1}$ be a measurable set of basepoints, and let $u \in A^{\infty}(\mathbb{T}^{n+1},\mathbb{C}_{1})$ be such that \eqref{eqn:IntegrabilityConditionCauchyProblem} holds. Since
\[
\im \L^{n} \subset \ker \Q^{n}
\]
by Proposition \ref{prop:CauchyFrobeniusSequence}, it will be sufficient to explicitly construct the solution $p \in A(\mathbb{T}^{n+1},\mathbb{R})$ of the Cauchy problem \eqref{eqn:CauchyProblem}--\eqref{eqn:InitialCondition} and to show that it is bounded if $u$ is tame.
\medskip
\noindent{\textbf{Step 1.}} Since the configuration space $\mathbb{T}^{(n+1)}$ is invariant under the action of $G$, we may pick a representative $u \in \mathscr{A}^{\infty}(\mathbb{T}^{n+1},\mathbb{C}_{1})$ such that $u(\mathbf{z}) = 0$ for all $\mathbf{z} \in \mathbb{T}^{n+1} \setminus \mathbb{T}^{(n+1)}$.
\medskip
\noindent{\textbf{Step 2.}} The measurable subset $B_{n} \subset \mathbb{T}^{n+1}$ of basepoints gives rise to a measurable subset $(B_{n})_{K} \subset \mathbb{T}^{n}$ defined by
\[
(B_{n})_{K} \mathrel{\mathop:}= \left\{ (b_{1}/b_{0},\ldots,b_{n}/b_{0}) \,\middle|\, (b_{0},\ldots,b_{n}) \in B_{n} \right\}.
\]
In this way, we obtain a bijective parametrization $(B_{n})_{K} \to \mathbb{T}^{n}/P$ of the $P$-orbits in $\mathbb{T}^{n}$.
\medskip
\noindent{\textbf{Step 3.}} We construct a function $q \in \mathscr{S}_{P}(\mathbb{T}^{n},\mathbb{R})$ that solves the Cauchy initial value problem
\begin{equation} \label{eqn:ReducedCauchyProblem}
\begin{cases} \L q = u_{K}, \\ q|_{(B_{n})_{K}} = 0. \end{cases}
\end{equation}
Here $u_{K} \in \mathscr{S}_{P}(\mathbb{T}^{n},\mathbb{C})$ by Lemma \ref{lemma:ReductionAndExtensionByK}\,(iv), and $(B_{n})_{K} \subset \mathbb{T}^{n}$ is the measurable subset constructed in Step 2. Note that the first equation is obtained from \eqref{eqn:CauchyProblem} by means of $K$-reduction.
\medskip
We will proceed in two stages. First, we solve the initial value problem \eqref{eqn:ReducedCauchyProblem} on the open subset $\mathring{\mathbb{T}}^{(n)} \subset \mathbb{T}^{n}$, which was defined in Section \ref{subsec:BoundaryAction}. We will later extend the solution to all of $\mathbb{T}^{n}$. Writing $u_{K} = u_{K}^{\sharp} + i u_{K}^{\flat}$ for the decomposition of $u_{K}$ into its real and imaginary parts, we observe that the complex differential equation $\L q = u_{K}$ in \eqref{eqn:ReducedCauchyProblem} is equivalent to the system of real differential equations
\begin{equation} \label{eqn:RealReducedCauchyProblem}
\L_{A} q = u_{K}^{\sharp}, \quad \L_{N} q = u_{K}^{\flat}.
\end{equation}
Applying Frobenius' theorem (cf.\,\cite[Sec.\,1.3 and Thm.\,1.3.8]{CandelConlon/Foliations.-I}) simultaneously on each $P$-orbit in $\mathring{\mathbb{T}}^{(n)}$, it follows that the system in \eqref{eqn:RealReducedCauchyProblem} admits a $P$-orbitwise smooth solution $q$ on $\mathring{\mathbb{T}}^{(n)}$ if and only if it is involutive (cf.\,\cite{BerhanuCordaroHounie/An-introduction-to-involutive-structures} and \cite[App.\,B]{Hartnick/Bounded-cohomology-via-partial-differential-equations-I}). Note that this argument crucially relies on the facts that $P$ acts freely on $\mathring{\mathbb{T}}^{(n)}$ since $n \ge 2$ by assumption, and that $P$-orbits in $\mathring{\mathbb{T}}^{(n)}$ are connected and simply connected submanifolds of $\mathring{\mathbb{T}}^{(n)}$. The system of differential equations in \eqref{eqn:RealReducedCauchyProblem} is involutive if and only if
\[
[ \L_{A}, \L_{N} ] \, q = \L_{A} u_{K}^{\flat} - \L_{N} u_{K}^{\sharp}.
\]
By the commutator relations from \eqref{eqn:CommutatorRelationsKAN} this amounts to the integrability condition
\[
u_{K}^{\flat} - \L_{A} u_{K}^{\flat} + \L_{N} u_{K}^{\sharp} = 0.
\]
By \eqref{eqn:RealVersionActionOfQ} this is equivalent to
\[
\Q u_{K} = 0.
\]
But this equation is satisfied on $\mathring{\mathbb{T}}^{(n)}$ because $\Q u_{K} = (\Q u)_{K}$ by Lemma \ref{lemma:ReductionAndExtensionByK}\,(iv), and because $\Q u (\mathbf{z}) = 0$ for all $\mathbf{z} \in \mathbb{T}^{(n+1)}$ by \eqref{eqn:IntegrabilityConditionCauchyProblem}. Thus by Frobenius' theorem it follows that the system in \eqref{eqn:RealReducedCauchyProblem} admits a smooth solution $q$ on each $P$-orbit in the open subset $\mathring{\mathbb{T}}^{(n)}$. We may adjust this solution $q$ in such a way that it satisfies the initial condition in \eqref{eqn:ReducedCauchyProblem} on each $P$-orbit in $\mathring{\mathbb{T}}^{(n)}$. Since the subset $(B_{n})_{K} \subset \mathbb{T}^{n}$ is measurable and the right-hand side $u_{K}$ in \eqref{eqn:ReducedCauchyProblem} is a measurable function, it follows that the solution $q$ is a measurable function on $\mathring{\mathbb{T}}^{(n)}$.
It remains to extend the solution $q$ to the whole torus $\mathbb{T}^{n}$. This will be done by setting $q(\mathbf{z}) \mathrel{\mathop:}= 0$ for all $\mathbf{z} \in \mathbb{T}^{n} \setminus \mathring{\mathbb{T}}^{(n)}$. Since the complement $\mathbb{T}^{n} \setminus \mathring{\mathbb{T}}^{(n)}$ is of measure zero in $\mathbb{T}^{n}$, since $u_{K}$ vanishes on $\mathbb{T}^{n} \setminus \mathring{\mathbb{T}}^{(n)}$ by Step 1, and since $\mathring{\mathbb{T}}^{(n)}$ is $P$-invariant, this finally yields the desired solution $q \in \mathscr{S}_{P}(\mathbb{T}^{n},\mathbb{R})$ of the Cauchy initial value problem in \eqref{eqn:ReducedCauchyProblem}.
\medskip
\noindent{\textbf{Step 4.}} We prove that the $K$-extension $p \mathrel{\mathop:}= q^{K}_{0} \in \mathscr{S}_{G}(\mathbb{T}^{n+1},\mathbb{R})^{K}$ of the function $q$ with weight $0$ is a solution of \eqref{eqn:CauchyProblem}.
\medskip
Applying Lemma \ref{lemma:ReductionAndExtensionByK}\,(iv, i) we deduce from \eqref{eqn:ReducedCauchyProblem} that
\[
(\L p)_{K} = \L p_{K} = \L q = u_{K}.
\]
By Proposition \ref{prop:CauchyFrobeniusSequence} we know that $\L p, u \in \mathscr{S}_{G}(\mathbb{T}^{n+1},\mathbb{C}_{1})^{K}$. Hence by Lemma \ref{lemma:ReductionAndExtensionByK}\,(ii) it follows that $\L p = u$.
\medskip
\noindent{\textbf{Step 5.}} We observe that the solution $p \in \mathscr{S}_{G}(\mathbb{T}^{n+1},\mathbb{R})^{K}$ of \eqref{eqn:CauchyProblem} constructed in Step 4 satisfies the initial condition in \eqref{eqn:InitialCondition}.
\medskip
In fact, the solution $q$ of \eqref{eqn:ReducedCauchyProblem} in Step 3 was constructed in such a way that
\[
q( b_{1}/b_{0},\ldots,b_{n}/b_{0} ) = 0
\]
for all basepoints $(b_{0},\ldots,b_{n}) \in B_{n}$. Hence \eqref{eqn:InitialCondition} follows from \eqref{eqn:KExtension} since $p = q^{K}_{0}$ by Step~4.
\medskip
\noindent{\textbf{Step 6.}} We show that $p \in \mathscr{S}^{\b}_{G}(\mathbb{T}^{n+1},\mathbb{R})^{K}$. This proves part (ii) of the proposition.
\medskip
Write $u = u^{\sharp} + i u^{\flat}$ for the decomposition of $u$ into its real and imaginary parts. By assumption, $u^{\sharp}$ and $u^{\flat}$ are bounded functions with bounded $G$-derivatives. By Lemma \ref{lemma:InvarianceAndEquivarianceUnderK} we have $\L_{K} p = 0$. Moreover, $\L p = u$ by Step 4 implies that $\L_{A} p = u^{\sharp}$ and $\L_{N} p = u^{\flat}$. It follows that $p$ has bounded $G$-derivatives.
\medskip
\noindent{\textbf{Step 7.}} We derive an explicit formula for the function $p$. This proves part (i) of the proposition.
\medskip
Fix a basepoint $\mathbf{b} \in B_{n}$ and an element $g \in G$. We are going to compute the value of the function $p$ at the point $\mathbf{z} \mathrel{\mathop:}= g.\mathbf{b} \in \mathbb{T}^{n+1}$. Choose a Cartan decomposition $g = k^{\prime}\,a_{T}\,k$ with $k, k^{\prime} \in K$ and $a_{T} \in A$ for some $T \in \mathbb{R}$ as in \eqref{eqn:CartanDecomposition}. Then
\begin{equation} \label{Eqn:Computep1}
p(\mathbf{z}) = p(g.\mathbf{b}) = p(k^{\prime}.(a_{T}\,k).\mathbf{b}) = p(a_{T}.(k.\mathbf{b}))
\end{equation}
since $p$ is $K$-invariant. Taking the real part of the equation $\L p = u$ we obtain $\L_{A} p = \Re u$. Hence by \eqref{eqn:IntegrateAlongA} we have
\begin{equation} \label{Eqn:Computep2}
p(a_{T}.(k.\mathbf{b})) = p(k.\mathbf{b}) + \int_{0}^{T} (\Re u)(a_{t} \, k.\mathbf{b}) \, \mathrm{d} t.
\end{equation}
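For the reader's convenience, let us recall the reason behind \eqref{eqn:IntegrateAlongA}: by the flow property $a_{t+s} = a_{t}\,a_{s}$ of the one-parameter group $A$, for every $\mathbf{w} \in \mathbb{T}^{n+1}$ we have
\[
\dd{}{t} \, p(a_{t}.\mathbf{w}) \,=\, \left. \dd{}{s} \right|_{s=0} p(a_{s}.(a_{t}.\mathbf{w})) \,=\, (\L_{A} p)(a_{t}.\mathbf{w}),
\]
so that \eqref{Eqn:Computep2} follows from the identity $\L_{A} p = \Re u$ by integrating from $0$ to $T$ along the $A$-orbit of $k.\mathbf{b}$.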
Observe that $p(k.\mathbf{b}) = p(\mathbf{b}) = 0$, which follows from $K$-invariance of $p$ and the initial condition in \eqref{eqn:InitialCondition}. Hence combining \eqref{Eqn:Computep1} and \eqref{Eqn:Computep2} we obtain
\begin{equation} \label{Eqn:Computep3}
p(\mathbf{z}) = \int_{0}^{T} (\Re u)(a_{t} \, k.\mathbf{b}) \, \mathrm{d} t,
\end{equation}
which is the formula in \eqref{eqn:FormulaForRu}. Note that because of the assumption on $u$ in Step 1, the formula in \eqref{Eqn:Computep3} holds for all basepoints in $B_{n}$ including those in the complement of the configuration space.
\medskip
\noindent{\textbf{Step 8.}} Assume that $u$ is tame. Then there exists a constant $C = C(u)$ such that
\[
\left\lvert \int_{0}^{T} (\Re u)(a_{t}.\mathbf{z}) \, \mathrm{d} t \right\rvert < C
\]
for all $\mathbf{z} \in \mathbb{T}^{n+1}$ and $T \in \mathbb{R}$. Hence we conclude from \eqref{Eqn:Computep3} that the solution $p$ is bounded. This proves part (iii) of the proposition.
\end{proof}
\subsection{The Frobenius problem}
\label{SubSec:FrobeniusProblem}
Consider the partial differential equation
\begin{equation} \label{eqn:FrobeniusProblem}
\Q u = \psi
\end{equation}
with right-hand side $\psi \in A^{\infty}(\mathbb{T}^{n+1},\mathbb{R})$. Our aim in this section is to explicitly construct a solution $u \in A^{\infty}_{\t}(\mathbb{T}^{n+1},\mathbb{C}_{1})$ of this equation. We will refer to \eqref{eqn:FrobeniusProblem} as the \emph{Frobenius problem}.
\begin{proposition} \label{prop:FrobeniusProblem}
There exists a linear operator
\begin{equation} \label{map:SolutionOperatorFrobeniusProblem}
\map{\Sop^{n}}{A^{\infty}(\mathbb{T}^{n+1},\mathbb{R})}{A^{\infty}_{\t}(\mathbb{T}^{n+1},\mathbb{C}_{1})} \quad\quad (n \ge 1)
\end{equation}
which is a right inverse of the Frobenius operator $\Q^{n}$ in \eqref{map:OperatorQDownstairs}. More precisely, for every $n \ge 1$ and for every function $\psi \in A^{\infty}(\mathbb{T}^{n+1},\mathbb{R})$ the following hold.
\begin{enumerate}[leftmargin=1cm,topsep=0.5ex,itemsep=0.5ex]
\item The value of the function $\Sop \psi \in A^{\infty}_{\t}(\mathbb{T}^{n+1},\mathbb{C}_{1})$ at any point $(z_{0},\ldots,z_{n}) \in \mathbb{T}^{n+1}$ is given by the integral
\begin{equation} \label{eqn:FormulaForSpsi}
(\Sop \psi)(z_{0},\ldots,z_{n}) = i \cdot z_{0} \cdot \int_{0}^{\infty} \psi\bigl( 1,a_{t}.(z_{1}/z_{0}),\ldots,a_{t}.(z_{n}/z_{0}) \bigr) \cdot e^{-t} \, \mathrm{d} t.
\end{equation}
\item The function $u \mathrel{\mathop:}= \Sop \psi$ is a solution of the Frobenius problem \eqref{eqn:FrobeniusProblem}. In particular, the Cauchy-Frobenius complex in \eqref{map:CauchyFrobeniusComplex} is exact at the fourth term $A^{\infty}(\mathbb{T}^{n+1},\mathbb{R})$.
\end{enumerate}
\end{proposition}
\begin{proof}
Fix $n \ge 1$ and let $\psi \in A^{\infty}(\mathbb{T}^{n+1},\mathbb{R})$. To prove the proposition, it will be sufficient to construct an explicit solution $u \in A^{\infty}_{\t}(\mathbb{T}^{n+1},\mathbb{C}_{1})$ of the Frobenius problem \eqref{eqn:FrobeniusProblem}.
\medskip
\noindent{\textbf{Step 1.}} Pick a representative $\psi \in \mathscr{A}^{\infty}(\mathbb{T}^{n+1},\mathbb{R})$.
\medskip
\noindent{\textbf{Step 2.}} Observe that $\psi_{K} \in \mathscr{L}^{\infty}(\mathbb{T}^{n},\mathbb{R})$ by Lemma \ref{lemma:ReductionAndExtensionByK}\,(iii). We define a measurable function $v \in \mathscr{L}^{0}(\mathbb{T}^{n},\mathbb{C})$ by
\begin{equation} \label{eqn:DefinitionOfv}
v(\mathbf{z}) \mathrel{\mathop:}= i \cdot \int_{0}^{\infty} \psi_{K}(a_{s}.\mathbf{z}) \cdot e^{-s} \, \mathrm{d} s
\end{equation}
for all $\mathbf{z} \in \mathbb{T}^{n}$.
\medskip
\noindent{\textbf{Step 3.}} We prove that $v$ is a bounded function contained in $\mathscr{S}^{\b}_{P}(\mathbb{T}^{n},\mathbb{C})$.
\medskip
We have seen in Step 2 that $\psi_{K}$ is bounded. It follows that
\begin{equation} \label{eqn:BoundednessOfv}
\lvert v(\mathbf{z}) \rvert \le \lVert \psi_{K} \rVert_{\infty} \cdot \int_{0}^{\infty} e^{-s} \, \mathrm{d} s = \lVert \psi_{K} \rVert_{\infty}
\end{equation}
for every $\mathbf{z} \in \mathbb{T}^{n}$, which implies that $v$ is bounded. Next we observe that $\psi_{K} \in \mathscr{S}^{\b}_{P}(\mathbb{T}^{n},\mathbb{R})$ by Lemma \ref{lemma:ReductionAndExtensionByK}\,(v). We are going to show that $v$ is $P$-orbitwise smooth with bounded $P$-derivatives. For $s \ge 0$ consider the function $f_{s} \in \mathscr{L}^{\infty}(\mathbb{T}^{n},\mathbb{R})$ defined by
\[
f_{s}(\mathbf{z}) \mathrel{\mathop:}= \psi_{K}(a_{s}.\mathbf{z}).
\]
It is $P$-orbitwise smooth since the map
\begin{equation} \label{map:SmoothnessAndBoundednessOfv}
P \to \mathbb{R}, \quad p \mapsto f_{s}(p.\mathbf{z}) = \psi_{K}(a_{s}\,p.\mathbf{z})
\end{equation}
is smooth for every $\mathbf{z} \in \mathbb{T}^{n}$ because $a_{s}\,p \in P$ and $\psi_{K}$ is $P$-orbitwise smooth. Now for every $\mathbf{z} \in \mathbb{T}^{n}$ we compute
\[
\begin{aligned}
(\L_{A} f_{s})(\mathbf{z}) &= \left. \dd{}{t} \right|_{t=0} f_{s}(a_{t}.\mathbf{z}) \, = \left. \dd{}{t} \right|_{t=0} \psi_{K}(a_{s}.(a_{t}.\mathbf{z})) \\
&= \left. \dd{}{t} \right|_{t=0} \psi_{K}(a_{t}.(a_{s}.\mathbf{z})) \, = (\L_{A} \psi_{K})(a_{s}.\mathbf{z})
\end{aligned}
\]
and, using the relation $a_{s}.n_{t} = n_{e^{-s} \cdot t}.a_{s}$ from \eqref{eqn:ANormalizesN},
\[
\begin{aligned}
(\L_{N} f_{s})(\mathbf{z}) &= \left. \dd{}{t} \right|_{t=0} f_{s}(n_{t}.\mathbf{z}) \, = \left. \dd{}{t} \right|_{t=0} \psi_{K}(a_{s}.(n_{t}.\mathbf{z})) \\
&= \left. \dd{}{t} \right|_{t=0} \psi_{K}(n_{e^{-s} \cdot t}.(a_{s}.\mathbf{z})) \, = e^{-s} \cdot (\L_{N} \psi_{K})(a_{s}.\mathbf{z}).
\end{aligned}
\]
Since $\psi_{K}$ has bounded $P$-derivatives and $s \ge 0$, we conclude that $\L_{A}f_{s}$ and $\L_{N}f_{s}$ are both bounded, uniformly in $s$. Hence, by an estimate as in \eqref{eqn:BoundednessOfv}, the Lebesgue dominated convergence theorem applied to \eqref{eqn:DefinitionOfv} implies that the derivatives $\L_{A}v$ and $\L_{N}v$ exist and are bounded. A similar argument involving the derivatives $\L_{i_{1},\ldots,i_{\ell}} f_{s}$ for all integers $\ell > 0$ and all $(i_{1},\ldots,i_{\ell}) \in \{A,N\}^{\ell}$ shows that the function $v$ has bounded $P$-derivatives.
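Explicitly, iterating the two formulas above shows that
\[
(\L_{i_{1},\ldots,i_{\ell}} f_{s})(\mathbf{z}) \,=\, e^{-ms} \cdot (\L_{i_{1},\ldots,i_{\ell}} \psi_{K})(a_{s}.\mathbf{z}), \quad\quad m \mathrel{\mathop:}= \#\{\, j \,:\, i_{j} = N \,\},
\]
so that each of these derivatives is bounded by $\lVert \L_{i_{1},\ldots,i_{\ell}} \psi_{K} \rVert_{\infty}$, uniformly in $s \ge 0$.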
\medskip
\noindent{\textbf{Step 4.}} We show that the function $v$ is a solution of the differential equation
\begin{equation} \label{eqn:ReducedFrobeniusProblem1}
\Q v = \psi_{K},
\end{equation}
which is obtained from \eqref{eqn:FrobeniusProblem} by means of $K$-reduction.
\medskip
Recall that $v = v^{\sharp} + i \, v^{\flat}$ denotes the decomposition of the complex function $v$ into its real and imaginary parts. We see from \eqref{eqn:DefinitionOfv} that $v^{\sharp} = 0$, hence we obtain
\[
\Q v = v^{\flat} - \L_{A} v^{\flat} + \L_{N} v^{\sharp} = v^{\flat} - \L_{A} v^{\flat}
\]
by \eqref{eqn:RealVersionActionOfQ}. Thus \eqref{eqn:ReducedFrobeniusProblem1} turns out to be equivalent to
\begin{equation} \label{eqn:ReducedFrobeniusProblem2}
v^{\flat} - \L_{A} v^{\flat} = \psi_{K}.
\end{equation}
We know from Step 3 that the derivative $\L_{A} v^{\flat}$ exists. Hence by the Lebesgue dominated convergence theorem, for every $\mathbf{z} \in \mathbb{T}^{n}$ we compute
\[
\begin{aligned}
(\L_{A} v^{\flat}) (\mathbf{z}) &= \left. \dd{}{t}\right|_{t=0} v^{\flat}(a_{t}.\mathbf{z}) \,=\, \left. \dd{}{t}\right|_{t=0} \int_{0}^{\infty} \psi_{K}(a_{t}.(a_{s}.\mathbf{z})) \cdot e^{-s} \, \mathrm{d} s \\
&= \int_{0}^{\infty} \left( \left. \dd{}{t}\right|_{t=0} \psi_{K}(a_{t+s}.\mathbf{z}) \right) \cdot e^{-s} \, \mathrm{d} s \, = \, \int_{0}^{\infty} \left( \dd{}{s} \psi_{K}(a_{s}.\mathbf{z}) \right) \cdot e^{-s} \, \mathrm{d} s \\
&= \left[ \psi_{K}(a_{s}.\mathbf{z}) \cdot e^{-s} \right]_{0}^{\infty} + \int_{0}^{\infty} \psi_{K}(a_{s}.\mathbf{z}) \cdot e^{-s} \, \mathrm{d} s \,=\, - \psi_{K}(\mathbf{z}) + v^{\flat}(\mathbf{z}).
\end{aligned}
\]
Here the second-to-last identity holds by integration by parts; the boundary term at infinity vanishes since $\psi_{K}$ is bounded. Hence $v^{\flat}$ is a solution of \eqref{eqn:ReducedFrobeniusProblem2}.
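Let us also spell out the dominated convergence argument used in this computation: for $0 < \lvert t \rvert \le 1$ the mean value theorem gives
\[
\left\lvert \frac{\psi_{K}(a_{t+s}.\mathbf{z}) - \psi_{K}(a_{s}.\mathbf{z})}{t} \right\rvert \,\le\, \lVert \L_{A} \psi_{K} \rVert_{\infty}
\]
for all $s \ge 0$, so the integrands arising from the difference quotients are dominated by the integrable function $s \mapsto \lVert \L_{A} \psi_{K} \rVert_{\infty} \cdot e^{-s}$.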
\medskip
\noindent{\textbf{Step 5.}} We prove that the $K$-extension $u = v^{K}_{1} \in \mathscr{A}^{\infty}(\mathbb{T}^{n+1},\mathbb{C}_{1})$ of the function $v$ with weight $1$ is a solution of \eqref{eqn:FrobeniusProblem}. Together with \eqref{eqn:DefinitionOfv} and \eqref{eqn:KExtension} this implies part (i) of the proposition.
\medskip
Applying Lemma \ref{lemma:ReductionAndExtensionByK}\,(iv, i) we deduce from \eqref{eqn:ReducedFrobeniusProblem1} that
\[
(\Q u)_{K} = \Q u_{K} = \Q v = \psi_{K}.
\]
By Proposition \ref{prop:CauchyFrobeniusSequence} we know that $\Q u, \psi \in \mathscr{S}_{G}(\mathbb{T}^{n+1},\mathbb{R})^{K}$. Hence by Lemma \ref{lemma:ReductionAndExtensionByK}\,(ii) it follows that $\Q u = \psi$.
\medskip
\noindent{\textbf{Step 6.}} We see from \eqref{eqn:DefinitionOfv} that $\Re v = 0$. Hence by Lemma \ref{lemma:CriterionForTameness} the $K$-extension $u = v^{K}_{1}$ is tame and therefore defines a function $u \in A^{\infty}_{\t}(\mathbb{T}^{n+1},\mathbb{C}_{1})$. This proves part (ii) of the proposition.
\end{proof}
\section{Transgression}
\label{sec:Transgression}
\subsection{The transgression map}
\label{subsec:TransgressionMap}
Let us begin with the following basic observation.
\begin{lemma} \label{lemma:OperatorsLAndQIntertwine}
The differential operators $\L^{n}$ and $\Q^{n}$ in \eqref{map:OperatorLDownstairs} and \eqref{map:OperatorQDownstairs} satisfy the relations
\[
\L^{n+1} \circ \de^{n} = \de^{n} \circ \L^{n} \quad \text{and} \quad \Q^{n+1} \circ \de^{n} = \de^{n} \circ \Q^{n}
\]
for every $n \ge 0$. They therefore define cochain maps
\[
\map{\L^{n}}{A^{\infty}(\mathbb{T}^{n+1},\mathbb{R})}{A^{\infty}_{\t}(\mathbb{T}^{n+1},\mathbb{C}_{1})} \quad\quad (n \ge 0)
\]
and
\[
\map{\Q^{n}}{A^{\infty}_{\t}(\mathbb{T}^{n+1},\mathbb{C}_{1})}{A^{\infty}(\mathbb{T}^{n+1},\mathbb{R})} \quad\quad (n \ge 0).
\]
\end{lemma}
\begin{proof}
By \cite[Lemma 3.3]{Hartnick/Bounded-cohomology-via-partial-differential-equations-I} the action of the differential operators $\L_{K}$, $\L_{A}$ and $\L_{N}$ on orbitwise smooth functions intertwines with the action of the homogeneous coboundary operator $\de$. Hence the claim follows from Definitions \ref{def:OperatorL} and \ref{def:OperatorQ}.
\end{proof}
By the lemma, the Cauchy-Frobenius complex in \eqref{map:CauchyFrobeniusComplex} gives rise to a double complex
\[
\begin{tikzcd}[column sep = scriptsize]
& \vdots \arrow[d,rightarrow]
& \vdots \arrow[d,rightarrow]
& \vdots \arrow[d,rightarrow]
& \vdots \arrow[d,rightarrow] \\
0 \arrow[r,rightarrow]
& A^{\infty}(\mathbb{T}^{n-1},\mathbb{R})^{P} \arrow[r,"\iota^{n-2}",rightarrow] \arrow[d,"\de^{n-2}",rightarrow]
& A^{\infty}(\mathbb{T}^{n-1},\mathbb{R}) \arrow[r,"\L^{n-2}",rightarrow] \arrow[d,"\de^{n-2}",rightarrow]
& A^{\infty}_{\t}(\mathbb{T}^{n-1},\mathbb{C}_{1}) \arrow[r,"\Q^{n-2}",rightarrow] \arrow[d,"\de^{n-2}",rightarrow]
& A^{\infty}(\mathbb{T}^{n-1},\mathbb{R}) \arrow[r,rightarrow] \arrow[d,"\de^{n-2}",rightarrow]
& 0 \\
0 \arrow[r,rightarrow]
& A^{\infty}(\mathbb{T}^{n},\mathbb{R})^{P} \arrow[r,"\iota^{n-1}",rightarrow] \arrow[d,"\de^{n-1}",rightarrow]
& A^{\infty}(\mathbb{T}^{n},\mathbb{R}) \arrow[r,"\L^{n-1}",rightarrow] \arrow[d,"\de^{n-1}",rightarrow]
& A^{\infty}_{\t}(\mathbb{T}^{n},\mathbb{C}_{1}) \arrow[r,"\Q^{n-1}",rightarrow] \arrow[d,"\de^{n-1}",rightarrow]
& A^{\infty}(\mathbb{T}^{n},\mathbb{R}) \arrow[r,rightarrow] \arrow[d,"\de^{n-1}",rightarrow]
& 0 \\
0 \arrow[r,rightarrow]
& A^{\infty}(\mathbb{T}^{n+1},\mathbb{R})^{P} \arrow[r,"\iota^{n}",rightarrow] \arrow[d,"\de^{n}",rightarrow]
& A^{\infty}(\mathbb{T}^{n+1},\mathbb{R}) \arrow[r,"\L^{n}",rightarrow] \arrow[d,"\de^{n}",rightarrow]
& A^{\infty}_{\t}(\mathbb{T}^{n+1},\mathbb{C}_{1}) \arrow[r,"\Q^{n}",rightarrow] \arrow[d,"\de^{n}",rightarrow]
& A^{\infty}(\mathbb{T}^{n+1},\mathbb{R}) \arrow[r,rightarrow] \arrow[d,"\de^{n}",rightarrow]
& 0 \\
& \vdots & \vdots & \vdots & \vdots
\end{tikzcd}
\]
with commuting differentials. Abbreviating the vertical complexes by $\mathcal{A}^{\infty}_{P}$, $\mathcal{A}^{\infty}$ and $\mathcal{A}^{\infty}_{\t}$, where
\[
\begin{aligned}
(\mathcal{A}^{\infty}_{P})^{n} &\mathrel{\mathop:}= ( A^{\infty}(\mathbb{T}^{n+1},\mathbb{R})^{P},\de^{n} ), \\
(\mathcal{A}^{\infty})^{n} &\mathrel{\mathop:}= ( A^{\infty}(\mathbb{T}^{n+1},\mathbb{R}),\de^{n} ), \\
(\mathcal{A}^{\infty}_{\t})^{n} &\mathrel{\mathop:}= ( A^{\infty}_{\t}(\mathbb{T}^{n+1},\mathbb{C}_{1}),\de^{n} )
\end{aligned}
\]
for all $n \ge 0$, we may write this double complex more conveniently as
\begin{equation} \label{map:DoubleComplex}
%
\begin{tikzcd}
0 \arrow[r,rightarrow] & \mathcal{A}^{\infty}_{P} \arrow[r,"\iota",rightarrow]
& \mathcal{A}^{\infty} \arrow[r,"\L",rightarrow]
& \mathcal{A}^{\infty}_{\t} \arrow[r,"\Q",rightarrow]
& \mathcal{A}^{\infty} \arrow[r,rightarrow] & 0.
\end{tikzcd}
%
\end{equation}
Define a subcomplex $\mathcal{E} \subset \mathcal{A}^{\infty}_{\t}$ by
\[
\mathcal{E}^{n} \mathrel{\mathop:}= \ker \Q^{n} = \im \L^{n}
\]
for all $n \ge 0$, and denote by $\map{i^{n}}{\mathcal{E}^{n}}{(\mathcal{A}^{\infty}_{\t})^{n}}$ the canonical inclusion. The sequence in \eqref{map:DoubleComplex} then splits into the short sequences
\begin{subequations}
\begin{equation} \label{map:ShortExactSequenceL}
%
\begin{tikzcd}
0 \arrow[r,rightarrow] & (\mathcal{A}^{\infty}_{P})^{n} \arrow[r,"\iota^{n}",rightarrow]
& (\mathcal{A}^{\infty})^{n} \arrow[r,"\L^{n}",rightarrow]
& \mathcal{E}^{n} \arrow[r,rightarrow] & 0 \quad\quad (n \ge 0)
\end{tikzcd}
%
\end{equation}
and
\begin{equation} \label{map:ShortExactSequenceQ}
%
\begin{tikzcd}
0 \arrow[r,rightarrow] & \mathcal{E}^{n} \arrow[r,"i^{n}",rightarrow]
& (\mathcal{A}^{\infty}_{\t})^{n} \arrow[r,"\Q^{n}",rightarrow]
& (\mathcal{A}^{\infty})^{n} \arrow[r,rightarrow] & 0 \quad\quad (n \ge 0).
\end{tikzcd}
%
\end{equation}
\end{subequations}
From the exactness properties of the sequence in \eqref{map:DoubleComplex} we then obtain long exact sequences in cohomology, as follows.
\begin{lemma} \label{lemma:LongExactSequences}
There are long exact sequences
\begin{subequations}
\begin{equation} \label{map:LongExactSequenceL}
%
\begin{tikzcd}[column sep = scriptsize]
H^{2}(\mathcal{A}_{P}^{\infty}) \arrow[r,"\iota^{\ast}"] & H^{2}(\mathcal{A}^{\infty}) \arrow[r,"\L^{\ast}"] & H^{2}(\mathcal{E}) \arrow[r,"\Phi_{\L}^{2}"] & H^{3}(\mathcal{A}_{P}^{\infty}) \arrow[r,"\iota^{\ast}"] & \cdots \hspace{1.4cm} \\
\hspace{5mm} \cdots \arrow[r,"\iota^{\ast}"] & H^{n}(\mathcal{A}^{\infty}) \arrow[r,"\L^{\ast}"] & H^{n}(\mathcal{E}) \arrow[r,"\Phi_{\L}^{n}"] & H^{n+1}(\mathcal{A}_{P}^{\infty}) \arrow[r,"\iota^{\ast}"] & H^{n+1}(\mathcal{A}^{\infty}) \arrow[r] & \cdots
\end{tikzcd}
%
\end{equation}
and
\begin{equation} \label{map:LongExactSequenceQ}
%
\begin{tikzcd}[column sep = scriptsize]
H^{1}(\mathcal{E}) \arrow[r,"i^{\ast}"] & H^{1}(\mathcal{A}_{\t}^{\infty}) \arrow[r,"\Q^{\ast}"] & H^{1}(\mathcal{A}^{\infty}) \arrow[r,"\Phi_{\Q}^{1}"] & H^{2}(\mathcal{E}) \arrow[r,"i^{\ast}"] & \cdots \hspace{1.5cm} \\
\hspace{5mm} \cdots \arrow[r,"i^{\ast}"] & H^{n}(\mathcal{A}_{\t}^{\infty}) \arrow[r,"\Q^{\ast}"] & H^{n}(\mathcal{A}^{\infty}) \arrow[r,"\Phi_{\Q}^{n}"] & H^{n+1}(\mathcal{E}) \arrow[r,"i^{\ast}"] & H^{n+1}(\mathcal{A}_{\t}^{\infty}) \arrow[r] & \cdots
\end{tikzcd}
%
\end{equation}
\end{subequations}
with connecting homomorphisms
\begin{subequations}
\begin{equation} \label{map:ConnectingHomomorphismL}
\map{\Phi_{\L}^{n}}{H^{n}(\mathcal{E})}{H^{n+1}(\mathcal{A}_{P}^{\infty})} \quad\quad (n \ge 2)
\end{equation}
and
\begin{equation} \label{map:ConnectingHomomorphismQ}
\map{\Phi_{\Q}^{n}}{H^{n}(\mathcal{A}^{\infty})}{H^{n+1}(\mathcal{E})} \quad\quad (n \ge 1).
\end{equation}
\end{subequations}
\end{lemma}
\begin{proof}
By Proposition \ref{prop:ExactnessOfCauchyFrobeniusComplex} the sequence in \eqref{map:ShortExactSequenceL} is exact for every $n \ge 2$, and hence gives rise to the long exact sequence in \eqref{map:LongExactSequenceL}. Likewise, by Proposition \ref{prop:ExactnessOfCauchyFrobeniusComplex} the sequence in \eqref{map:ShortExactSequenceQ} is exact for every $n \ge 1$ and thus gives rise to the long exact sequence in \eqref{map:LongExactSequenceQ}.
\end{proof}
We denote the composition of the homomorphism in \eqref{map:EquivariantLiftingOnCohomology} with the isomorphism in \eqref{map:BoundaryModel} by
\begin{equation} \label{map:HomomorphismEquivariantLift}
\map{\Pi^{n}}{H^{n}(\mathcal{A}_{P}^{\infty})}{H^{n}_{\mathrm{cb}}(G;\mathbb{R})} \quad\quad (n \ge 0).
\end{equation}
This map is surjective by Proposition \ref{prop:EquivariantLifting}. It will be called the \emph{lifting homomorphism}.
\begin{definition}
The concatenation of the lifting homomorphism in \eqref{map:HomomorphismEquivariantLift} with the connecting homomorphisms in \eqref{map:ConnectingHomomorphismL} and \eqref{map:ConnectingHomomorphismQ} defines the \emph{transgression map}
\begin{equation} \label{map:TransgressionMap}
\map{\Lambda^{n} \mathrel{\mathop:}= \Pi^{n} \circ \Phi_{\L}^{n-1} \circ \Phi_{\Q}^{n-2}}{H^{n-2}(\mathcal{A}^{\infty})}{H^{n}_{\mathrm{cb}}(G;\mathbb{R})} \quad\quad (n > 2).
\end{equation}
\end{definition}
Notice that the transgression map is defined for all degrees $n>2$, and that it shifts the degree by $2$.
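Explicitly, $\Lambda^{n}$ is the composition
\[
\begin{tikzcd}
H^{n-2}(\mathcal{A}^{\infty}) \arrow[r,"\Phi_{\Q}^{n-2}"] & H^{n-1}(\mathcal{E}) \arrow[r,"\Phi_{\L}^{n-1}"] & H^{n}(\mathcal{A}_{P}^{\infty}) \arrow[r,"\Pi^{n}"] & H^{n}_{\mathrm{cb}}(G;\mathbb{R}).
\end{tikzcd}
\]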
\subsection{Transgressive classes}
\label{subsec:TransgressiveClasses}
We characterize classes in the image of the transgression map.
\begin{definition}
A class $\alpha \in H^{n}_{\mathrm{cb}}(G;\mathbb{R})$ of degree $n>2$ is called \emph{transgressive} if it is contained in the image of the transgression map $\Lambda^{n}$ in \eqref{map:TransgressionMap}.
\end{definition}
\begin{proposition} \label{prop:VanishingForTransgressiveClasses}
Let $\alpha \in H^{n}_{\mathrm{cb}}(G;\mathbb{R})$ with $n>2$. If $\alpha$ is transgressive, then $\alpha = 0$.
\end{proposition}
\begin{proof}
This is immediate since $H^{n-2}(\mathcal{A}^{\infty}) = 0$ by Corollary \ref{cor:VanishingForA} for every $n>2$.
\end{proof}
We next derive a useful criterion that helps to decide whether a given bounded cohomology class is transgressive. To this end, we first recall the vanishing $H^{n}(\mathcal{A}^{\infty}) = 0$ which holds for every $n>0$ by Corollary \ref{cor:VanishingForA}. Exactness of the long sequence in \eqref{map:LongExactSequenceL} therefore implies that the connecting homomorphism $\Phi_{\L}^{n-1}$ in \eqref{map:ConnectingHomomorphismL} is in fact an isomorphism for every $n>2$. Consider now the diagram
\[
\begin{tikzcd}[column sep = large, row sep = large]
H^{n-1}(\mathcal{E}) \arrow[d,"i^{\ast}",rightarrow] \arrow[r,"\Phi_{\L}^{n-1}",rightarrow] & H^{n}(\mathcal{A}_{P}^{\infty}) \arrow[r,"\Pi^{n}",rightarrow] & H^{n}_{\mathrm{cb}}(G;\mathbb{R}) \\
H^{n-1}(\mathcal{A}_{\t}^{\infty})
\end{tikzcd}
\]
It gives rise to the following cohomological characterization of transgressive classes.
\begin{proposition} \label{prop:CriterionForTransgressive}
A class $\alpha \in H^{n}_{\mathrm{cb}}(G;\mathbb{R})$ with $n>2$ is transgressive if and only if there exists a class $\beta \in H^{n}(\mathcal{A}_{P}^{\infty})$ such that
\begin{equation} \label{eqn:CriterionForTransgressive}
i^{\ast} \circ (\Phi_{\L}^{n-1})^{-1} \, \beta = 0 \quad \text{and} \quad \Pi^{n} \, \beta = \alpha.
\end{equation}
\end{proposition}
\begin{proof}
Fix $n>2$, and consider a class $\alpha \in H^{n}_{\mathrm{cb}}(G;\mathbb{R})$. If $\alpha$ is transgressive, then $\alpha=0$ by Proposition \ref{prop:VanishingForTransgressiveClasses} above and hence the class $\beta=0$ satisfies the conditions in \eqref{eqn:CriterionForTransgressive}. For the converse, assume that there exists $\beta \in H^{n}(\mathcal{A}_{P}^{\infty})$ such that \eqref{eqn:CriterionForTransgressive} holds. Since $\Phi_{\L}^{n-1}$ is an isomorphism, there is $\nu \in H^{n-1}(\mathcal{E})$ such that $\Phi_{\L}^{n-1} \nu = \beta$ and $i^{\ast} \nu = 0$. Hence exactness of the long sequence in \eqref{map:LongExactSequenceQ} implies that there exists $\omega \in H^{n-2}(\mathcal{A}^{\infty})$ such that $\Phi_{\Q}^{n-2} \, \omega = \nu$. It follows that $\Lambda^{n} \, \omega = \Pi^{n} \, \Phi_{\L}^{n-1} \, \Phi_{\Q}^{n-2} \, \omega = \Pi^{n} \, \beta = \alpha$, and hence $\alpha$ is transgressive.
\end{proof}
\subsection{Reducible classes}
\label{subsec:ReducibleClasses}
Recall from Section \ref{subsec:ContinuousBoundedCohomology} that the continuous bounded cohomology of $G$ is endowed with a natural cup product
\begin{equation} \label{map:CupProductOnCohomOfG}
\map{\smallsmile}{H^{n}_{\mathrm{cb}}(G;\mathbb{R}) \otimes H^{m}_{\mathrm{cb}}(G;\mathbb{R})}{H^{n+m}_{\mathrm{cb}}(G;\mathbb{R})}.
\end{equation}
We may define a similar cup product on the cohomology of the complex $\mathcal{A}_{P}^{\infty}$, as follows. Consider first the cup product
\[
\map{\cup}{\mathscr{L}^{0}(\mathbb{T}^{n+1},\mathbb{C}) \otimes \mathscr{L}^{0}(\mathbb{T}^{m+1},\mathbb{C})}{\mathscr{L}^{0}(\mathbb{T}^{n+m+1},\mathbb{C})} \quad\quad (n,m \ge 0)
\]
on the homogeneous bar complex \eqref{map:MeasurableCochainComplex} of measurable cochains, which for $f \in \mathscr{L}^{0}(\mathbb{T}^{n+1},\mathbb{C})$ and $g \in \mathscr{L}^{0}(\mathbb{T}^{m+1},\mathbb{C})$ is defined by
\begin{equation} \label{eqn:DefinitionOfCupProduct}
(f \cup g)(z_{0},\ldots,z_{n+m}) \mathrel{\mathop:}= f(z_{0},\ldots,z_{n}) \cdot g(z_{n},z_{n+1},\ldots,z_{n+m}).
\end{equation}
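One checks directly from \eqref{eqn:DefinitionOfCupProduct} that this cup product satisfies the Leibniz rule
\[
\de^{n+m} (f \cup g) \,=\, (\de^{n} f) \cup g + (-1)^{n} \cdot f \cup (\de^{m} g)
\]
with respect to the homogeneous coboundary operator, so that it descends to cohomology.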
It gives rise to cup products
\begin{equation} \label{map:CupProductOnAinftyReal}
\map{\cup}{A^{\infty}(\mathbb{T}^{n+1},\mathbb{R})^{P} \otimes A^{\infty}(\mathbb{T}^{m+1},\mathbb{R})^{P}}{A^{\infty}(\mathbb{T}^{n+m+1},\mathbb{R})^{P}} \quad\quad (n,m \ge 0)
\end{equation}
and
\begin{equation} \label{map:CupProductOnAinftyComplex}
\map{\cup}{A^{\infty}(\mathbb{T}^{n+1},\mathbb{C}_{\mu}) \otimes A^{\infty}(\mathbb{T}^{m+1},\mathbb{R})}{A^{\infty}(\mathbb{T}^{n+m+1},\mathbb{C}_{\mu})} \quad\quad (n,m \ge 0)
\end{equation}
for every $\mu \in \mathbb{Z}$. The former product in \eqref{map:CupProductOnAinftyReal} then induces a corresponding cup product
\begin{equation} \label{map:CupProductOnCohomOfA^P}
\map{\cup}{H^{n}(\mathcal{A}_{P}^{\infty}) \otimes H^{m}(\mathcal{A}_{P}^{\infty})}{H^{n+m}(\mathcal{A}_{P}^{\infty})}
\end{equation}
on the cohomology of the complex $\mathcal{A}_{P}^{\infty}$ \cite[Sec.\,2]{Bucher-Karlsson/The-simplicial-volume-of-closed-manifolds-covered-by-Bbb-Hsp-2timesBbb-Hsp-2}.
\begin{lemma} \label{lemma:CupProductIntertwinesWithLifting}
The cup products $\cup$ and $\smallsmile$ in \eqref{map:CupProductOnCohomOfA^P} and \eqref{map:CupProductOnCohomOfG} intertwine with the lifting homomorphism $\Pi^{n}$ in \eqref{map:HomomorphismEquivariantLift}.
\end{lemma}
\begin{proof}
This follows from the naturality of the cochain map in \eqref{map:EquivariantLiftingFromA^P}, in combination with the fact that the isomorphism in \eqref{map:BoundaryModel} is compatible with the ring structure on cohomology determined by the cup products $\smallsmile$ and $\cup$ \cite[Thm.\,7.5.3]{Monod/Continuous-bounded-cohomology-of-locally-compact-groups}.
\end{proof}
\begin{lemma} \label{lemma:OperatorsActingOnCupProduct}
Fix an integer $\mu \in \mathbb{Z}$, let $n \ge 1 $ and $m \ge 0$, and let $f \in A^{\infty}(\mathbb{T}^{n+1},\mathbb{C}_{\mu})$ and $g \in A^{\infty}(\mathbb{T}^{m+1},\mathbb{R})$. Then the cup product in \eqref{map:CupProductOnAinftyComplex} has the following properties.
\begin{enumerate}[leftmargin=1cm,topsep=0.5ex,itemsep=0.5ex]
\item $\L^{n+m} (f \cup g) = (\L^{n} f) \cup g + f \cup (\L^{m} g)$;
\item $\I^{n+m} (f \cup g) = (\I^{n} f) \cup g$;
\item $(f \cup g)_{K} = f_{K} \cup g$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $f \in A^{\infty}(\mathbb{T}^{n+1},\mathbb{C}_{\mu})$ and $g \in A^{\infty}(\mathbb{T}^{m+1},\mathbb{R})$ with $n \ge 1 $ and $m \ge 0$, and consider the formula for the product $f \cup g$ in \eqref{eqn:DefinitionOfCupProduct}. We see from Definition \ref{def:OperatorL} that (i) is a consequence of the product rule for differentiable functions, while (ii) is immediate from \eqref{eqn:DefinitionOfOperatorI}. The formula in (iii) follows from \eqref{eqn:KReduction}.
\end{proof}
\begin{definition}
A bounded cohomology class $\alpha \in H^{n}_{\mathrm{cb}}(G;\mathbb{R})$ of degree $n>2$ is called \emph{strongly reducible} if it admits a product decomposition
\begin{equation} \label{eqn:ProductDecomposition}
\alpha = \alpha^{\prime} \smallsmile \alpha^{\prime\prime}
\end{equation}
with factors $\alpha^{\prime} \in H^{2}_{\mathrm{cb}}(G;\mathbb{R})$ and $\alpha^{\prime\prime} \in H^{n-2}_{\mathrm{cb}}(G;\mathbb{R})$.
\end{definition}
Let us make this definition more concrete. To this end, we recall from Section \ref{subsec:ContinuousBoundedCohomology} that the second bounded cohomology $H^{2}_{\mathrm{cb}}(G;\mathbb{R}) \cong \mathbb{R}$ is generated by the bounded K\"ahler class $\kappa \in H^{2}_{\mathrm{cb}}(G;\mathbb{R})$. Hence the first factor $\alpha^{\prime}$ in the product decomposition in \eqref{eqn:ProductDecomposition} is in fact a real multiple of $\kappa$. For our purposes in this section, we will further need to know that under the isomorphism in \eqref{map:BoundaryModel}, the bounded K\"ahler class $\kappa \in H^{2}_{\mathrm{cb}}(G;\mathbb{R})$ is identified with the cohomology class of the \emph{orientation cocycle} $\mathrm{or} \in L^{\infty}(\mathbb{T}^{3},\mathbb{R})^{G}$ \cite[Sec.\,2.3]{BurgerIozzi/A-useful-formula-from-bounded-cohomology}. This latter cocycle is defined by
\begin{equation} \label{eqn:OrientationCocycle}
\mathrm{or}(z_{0},z_{1},z_{2}) \mathrel{\mathop:}=
%
\begin{cases}
1 & \text{if $(z_{0},z_{1},z_{2})$ is positively oriented;} \\
-1 & \text{if $(z_{0},z_{1},z_{2})$ is negatively oriented;} \\
0 & \text{otherwise}
\end{cases}
%
\end{equation}
for all triples $(z_{0},z_{1},z_{2}) \in \mathbb{T}^{3}$ of points on $S^{1}$. The orientation cocycle naturally defines a cocycle in $A^{\infty}(\mathbb{T}^{3},\mathbb{R})^{P}$ as well. This cocycle is given by the same formula as in \eqref{eqn:OrientationCocycle} and will be denoted by the same symbol $\mathrm{or}$. We are now ready to prove the following sufficient criterion for a class to be transgressive.
\begin{proposition} \label{prop:StronlgyReducibleImpliesTransgressive}
Let $\alpha \in H^{n}_{\mathrm{cb}}(G;\mathbb{R})$ with $n>2$. If $\alpha$ is strongly reducible, then $\alpha$ is transgressive.
\end{proposition}
The proof of the proposition relies on the following lemma.
\begin{lemma} \label{lemma:FunctionILIOr}
The function $\I \L \I \mathrm{or} \in A^{\infty}(\mathbb{T}^{1},\mathbb{C}_{1})$ is given by
\[
(\I \L \I \mathrm{or})(z) = \frac{i}{\pi} \cdot z.
\]
\end{lemma}
\begin{proof}
Borrowing \eqref{eqn:FormulaL_AI} and \eqref{eqn:FormulaL_NI} from the proof of Proposition \ref{prop:ComplexAIsAcyclic}, we infer that in angular coordinates, the function $\I \L \I \mathrm{or} \in A^{\infty}(\mathbb{T}^{1},\mathbb{C}_{1})$ is expressed by the integral
\[
( \I \L \I \mathrm{or} )(\th) = \frac{1}{4\pi^{2}} \, \int_{0}^{2\pi} \int_{0}^{2\pi} e^{i \eta} \cdot \mathrm{or}(\eta,\varphi,\th) \, \mathrm{d}\eta \, \mathrm{d}\varphi.
\]
Here we used that the orientation cocycle is $G$-invariant. Then it is an exercise to compute from the explicit formula in \eqref{eqn:OrientationCocycle} that this integral equals $(i/\pi) \cdot e^{i \th}$.
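For the reader's convenience, we carry out this exercise, with the convention that a counterclockwise triple is positively oriented. Fix $\th$ and integrate first over $\varphi$: for $\eta \neq \th$, the triple $(\eta,\varphi,\th)$ is positively oriented precisely when $\varphi$ lies on the counterclockwise arc from $\eta$ to $\th$, whence
\[
\int_{0}^{2\pi} \mathrm{or}(\eta,\varphi,\th) \, \mathrm{d}\varphi \,=\, 2 \bigl( (\th - \eta) \bmod 2\pi \bigr) - 2\pi.
\]
Since $\int_{0}^{2\pi} e^{i\eta} \, \mathrm{d}\eta = 0$, the substitution $s \mathrel{\mathop:}= (\th - \eta) \bmod 2\pi$ yields
\[
( \I \L \I \mathrm{or} )(\th) \,=\, \frac{1}{2\pi^{2}} \cdot e^{i\th} \int_{0}^{2\pi} s \cdot e^{-is} \, \mathrm{d} s \,=\, \frac{1}{2\pi^{2}} \cdot e^{i\th} \cdot 2\pi i \,=\, \frac{i}{\pi} \cdot e^{i\th},
\]
where the last integral is evaluated by integration by parts.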
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:StronlgyReducibleImpliesTransgressive}]
Fix $n>2$, and consider a strongly reducible class
\[
\alpha = \alpha^{\prime} \smallsmile \alpha^{\prime\prime}
\]
with $\alpha^{\prime} \in H^{2}_{\mathrm{cb}}(G;\mathbb{R})$ and $\alpha^{\prime\prime} \in H^{n-2}_{\mathrm{cb}}(G;\mathbb{R})$. We are going to show that $\alpha$ satisfies the criterion in Proposition \ref{prop:CriterionForTransgressive}. This is trivially true if $\alpha^{\prime} = 0$, hence we will assume that $\alpha^{\prime} \neq 0$. For ease of notation, we will mostly suppress the canonical inclusions $\iota$ and $i$ throughout this proof.
\medskip
\noindent{\textbf{Step 1.}} Since the lifting homomorphism in \eqref{map:HomomorphismEquivariantLift} is surjective, there exist classes $\beta^{\prime} \in H^{2}(\mathcal{A}_{P}^{\infty})$ and $\beta^{\prime\prime} \in H^{n-2}(\mathcal{A}_{P}^{\infty})$ such that
\[
\Pi^{2} \, \beta^{\prime} = \alpha^{\prime} \quad \text{and} \quad \Pi^{n-2} \, \beta^{\prime\prime} = \alpha^{\prime\prime}.
\]
\medskip
\noindent{\textbf{Step 2.}} Recall from the above that $H^{2}_{\mathrm{cb}}(G;\mathbb{R}) \cong \mathbb{R}$, with an explicit generator determined by the orientation cocycle $\mathrm{or} \in L^{\infty}(\mathbb{T}^{3},\mathbb{R})^{G}$ via the isomorphism in \eqref{map:BoundaryModel}. We think of the orientation cocycle as an element of $A^{\infty}(\mathbb{T}^{3},\mathbb{R})^{P}$. Since $\alpha^{\prime} \neq 0$ by assumption, it follows that $\alpha^{\prime}$ is in fact a real multiple of $\Pi^{2} \, [\mathrm{or}]$. Hence, rescaling $\alpha^{\prime\prime}$ and $\beta^{\prime\prime}$ with the same factor if necessary, we may without loss of generality assume that
\begin{equation} \label{eqn:BetaPrimeIsOr}
\beta^{\prime} = [\mathrm{or}].
\end{equation}
\medskip
\noindent{\textbf{Step 3.}} Consider now the product $\beta \mathrel{\mathop:}= \beta^{\prime} \cup \beta^{\prime\prime} \in H^{n}(\mathcal{A}_{P}^{\infty})$. It follows with Lemma \ref{lemma:CupProductIntertwinesWithLifting} and Step 1 that
\[
\Pi^{n} \, \beta = \Pi^{2} \, \beta^{\prime} \smallsmile \Pi^{n-2} \, \beta^{\prime\prime} = \alpha^{\prime} \smallsmile \alpha^{\prime\prime} = \alpha.
\]
\medskip
\noindent{\textbf{Step 4.}} We pick a cocycle $b^{\prime\prime} \in (\mathcal{A}_{P}^{\infty})^{n-2}$ representing the class $\beta^{\prime\prime}$. Then \eqref{eqn:BetaPrimeIsOr} implies that the cocycle
\begin{equation} \label{eqn:ProductCocycleb}
b \mathrel{\mathop:}= \mathrm{or} \cup b^{\prime\prime} \in (\mathcal{A}_{P}^{\infty})^{n}
\end{equation}
is a representative for the class $\beta$.
\medskip
\noindent{\textbf{Step 5.}} We claim that the cocycle $u \in (\mathcal{A}_{\t}^{\infty})^{n-1}$ defined by
\[
u \mathrel{\mathop:}= \L \, \I \, b
\]
represents the class $i^{\ast} \circ (\Phi_{\L}^{n-1})^{-1} \, \beta \in H^{n-1}(\mathcal{A}_{\t}^{\infty})$. Here $\map{\I}{(\mathcal{A}^{\infty})^{n}}{(\mathcal{A}^{\infty})^{n-1}}$ is the cochain contraction defined in Section \ref{subsec:CochainContractions}.
\medskip
In fact, with the definition of the connecting homomorphism $\Phi_{\L}^{n-1}$ in \eqref{map:ConnectingHomomorphismL} understood, this follows from the diagram
\[
\begin{tikzcd}[column sep = large, row sep = large]
&(\mathcal{A}^{\infty})^{n-1} \arrow[r,"\L^{n-1}",twoheadrightarrow] \arrow[d,"\de^{n-1}",rightarrow]
& \mathcal{E}^{n-1} \arrow[r,"i^{n-1}",hookrightarrow]
& (\mathcal{A}^{\infty}_{\t})^{n-1}
&
& \\
(\mathcal{A}_{P}^{\infty})^{n} \arrow[r,"\iota^{n}",hookrightarrow]
& (\mathcal{A}^{\infty})^{n} \arrow[bend left, u,"\I^{n}",rightarrow]
&
&
&
\end{tikzcd}
\]
together with the fact that $\I^{n}$ is a cochain contraction by Proposition \ref{prop:ComplexAIsAcyclic}.
\medskip
\noindent{\textbf{Step 6.}} We claim that the cocycle $u$ is given by the formula
\begin{equation} \label{eqn:FormulaForuTransgression}
u = ( \L \, \I \, \mathrm{or}) \cup b^{\prime\prime}.
\end{equation}
\medskip
To see this, we apply Lemma \ref{lemma:OperatorsActingOnCupProduct}\,(i, ii) to the defining formula for $u$ from Step 5. By \eqref{eqn:ProductCocycleb} we obtain
\[
u = \L \, \I \, b = \L \, \I \, ( \mathrm{or} \cup b^{\prime\prime} ) = \L \, \bigl( (\I \, \mathrm{or}) \cup b^{\prime\prime} \bigr) = ( \L \, \I \, \mathrm{or}) \cup b^{\prime\prime} + ( \I \, \mathrm{or}) \cup ( \L \, b^{\prime\prime} ).
\]
Since $b^{\prime\prime}$ is $P$-invariant, we have $\L b^{\prime\prime} = 0$. The claimed formula follows.
\medskip
\noindent{\textbf{Step 7.}} Define a cochain $v \in (\mathcal{A}^{\infty})^{n-1}$ by
\[
v \mathrel{\mathop:}= \I u.
\]
We claim that $v$ is a tame function. Since $\I$ is a cochain contraction by Proposition \ref{prop:ComplexAIsAcyclic}, this will then imply that $[u] = 0$ in $H^{n-1}(\mathcal{A}_{\t}^{\infty})$.
\medskip
First of all, by Lemma \ref{lemma:OperatorsActingOnCupProduct}\,(ii) we obtain from \eqref{eqn:FormulaForuTransgression} the expression
\[
v = ( \I \, \L \, \I \, \mathrm{or}) \cup b^{\prime\prime}.
\]
By Lemma \ref{lemma:OperatorsActingOnCupProduct}\,(iii), the $K$-reduction of this cochain is the function
\[
v_{K} = ( \I \, \L \, \I \, \mathrm{or})_{K} \cup b^{\prime\prime},
\]
which by Lemma \ref{lemma:FunctionILIOr} and \eqref{eqn:KReduction} equals
\[
v_{K} = \frac{i}{\pi} \cdot b^{\prime\prime}.
\]
Since $b^{\prime\prime}$ is real valued, it follows that $\Re v_{K} = 0$. Moreover, by Lemma \ref{lemma:ReductionAndExtensionByK}\,(i) we may write the function $v$ as the $K$-extension $v = (v_{K})^{K}_{1}$. Hence it follows from Lemma \ref{lemma:CriterionForTameness} that $v$ is tame.
\medskip
\noindent{\textbf{Step 8.}} Combining the results from Step 3, Step 5 and Step 7, we have proved that
\[
i^{\ast} \circ (\Phi_{\L}^{n-1})^{-1} \, \beta = 0 \quad \text{and} \quad \Pi^{n} \, \beta = \alpha,
\]
which by Proposition \ref{prop:CriterionForTransgressive} implies that $\alpha$ is transgressive.
\end{proof}
\subsection{Proof of Theorem \ref{thm:VanishingForStronglyReducibleClasses} and Theorem \ref{thm:ImageOfTransgressionMap}}
By invariance of continuous bounded cohomology of connected Lie groups under local isomorphisms \cite[Cor.\,7.5.10]{Monod/Continuous-bounded-cohomology-of-locally-compact-groups} it will be enough to prove the theorem for the Lie group $G = \mathrm{PU}(1,1)$. Fix a degree $n>2$, and consider a strongly reducible class $\alpha \in H^{n}_{\mathrm{cb}}(G;\mathbb{R})$. Then by Proposition \ref{prop:StronlgyReducibleImpliesTransgressive} the class $\alpha$ is transgressive, hence $\alpha = 0$ by Proposition \ref{prop:VanishingForTransgressiveClasses}.
\section{Construction of primitives}
\label{sec:ConstructionOfPrimitives}
\subsection{Explicit formulas for primitives}
\label{subsec:ExplicitFormulasForPrimitives}
Fix an integer $n>2$, and consider a $G$-invariant bounded cocycle $c \in A^{\infty}(\mathbb{T}^{n+1},\mathbb{R})^{P}$ satisfying $\de c = 0$. By a \emph{primitive} of the cocycle $c$ we mean a $G$-invariant function $p \in A(\mathbb{T}^{n},\mathbb{R})^{P}$ that solves the cohomological equation
\[
\de p = c.
\]
Note that we do not require the function $p$ to be bounded. The aim of this section is to provide a systematic way of constructing such primitives in explicit terms for any given $G$-invariant bounded cocycle $c$. We will moreover see that the primitives obtained in this way are bounded under suitable additional assumptions on the cocycle $c$.
\begin{proposition} \label{prop:ExplicitFormulasForPrimitives}
Fix an integer $n>2$ and a measurable set of basepoints $B_{n} \subset \mathbb{T}^{n+1}$ for the boundary action of $G$ on $\mathbb{T}^{n+1}$. Let $c \in A^{\infty}(\mathbb{T}^{n+1},\mathbb{R})^{P}$ be a $G$-invariant bounded cocycle satisfying $\de c = 0$. Define a function $p \in A(\mathbb{T}^{n},\mathbb{R})$ by
\begin{equation} \label{eqn:PrimitivesDefinitionOfp}
p \mathrel{\mathop:}= \I c - \de \Rop_{\mathcal{B}} u,
\end{equation}
where $u \in A^{\infty}(\mathbb{T}^{n-1},\mathbb{C}_{1})$ is the function
\begin{equation} \label{eqn:PrimitivesDefinitionOfu}
u \mathrel{\mathop:}= \bigl( \Id - \de \Sop \I \Q \bigr) \I \L \I c
\end{equation}
(see Figure \ref{fig:Primitivep}). Here $\I$ is the cochain contraction in \eqref{eqn:DefinitionOfOperatorI}, $\L$ and $\Q$ are the differential operators in \eqref{map:OperatorLDownstairs} and \eqref{map:OperatorQDownstairs}, and $\Rop_{\mathcal{B}}$ and $\Sop$ are the integral operators in \eqref{map:SolutionOperatorCauchyProblem} and \eqref{map:SolutionOperatorFrobeniusProblem}.
\begin{enumerate}[leftmargin=1cm,topsep=0.5ex,itemsep=0.5ex]
\item The function $p$ is a well-defined primitive for the cocycle $c$, i.e., $p \in A(\mathbb{T}^{n},\mathbb{R})^{P}$ and
\[
\de p = c.
\]
\item Assume in addition that the cocycle $c$ admits a product decomposition
\[
c = \mathrm{or} \cup c^{\prime}
\]
for some cocycle $c^{\prime} \in A^{\infty}(\mathbb{T}^{n-1},\mathbb{R})^{P}$, where $\mathrm{or} \in A^{\infty}(\mathbb{T}^{3},\mathbb{R})^{P}$ is the orientation cocycle defined in \eqref{eqn:OrientationCocycle} and $\cup$ denotes the cup product in \eqref{map:CupProductOnAinftyReal}. Then the primitive $p$ is bounded.
\end{enumerate}
\end{proposition}
Motivated by the schematic diagram in Figure \ref{fig:Primitivep}, we will refer to the formulas in \eqref{eqn:PrimitivesDefinitionOfp} and \eqref{eqn:PrimitivesDefinitionOfu} as the \emph{staircase construction} of the primitive $p$ for the cocycle $c$.
\begin{figure}[ht]
\begin{tikzcd}[column sep = large, row sep = large]
&&
A^{\infty}_{\t}(\mathbb{T}^{n-2},\mathbb{C}_{1}) \arrow[r,"\Q^{n-3}",twoheadrightarrow] \arrow[d,"\de^{n-3}",rightarrow]
& A^{\infty}(\mathbb{T}^{n-2},\mathbb{R}) \arrow[d,"\de^{n-3}",rightarrow] \arrow[bend right, l,"\Sop^{n-3}"',rightarrow] \\
& A(\mathbb{T}^{n-1},\mathbb{R}) \arrow[r,"\L^{n-2}",rightarrow] \arrow[d,"\de^{n-2}",rightarrow]
& A^{\infty}(\mathbb{T}^{n-1},\mathbb{C}_{1}) \arrow[r,"\Q^{n-2}",twoheadrightarrow] \arrow[d,"\de^{n-2}",rightarrow] \arrow[bend right, l,"\Rop_{\mathcal{B}}^{n-2}"',dashrightarrow]
& A^{\infty}(\mathbb{T}^{n-1},\mathbb{R}) \arrow[bend left, u,"\I^{n-2}",rightarrow] \\
A(\mathbb{T}^{n},\mathbb{R})^{P} \arrow[r,hookrightarrow] \arrow[d,"\de^{n-1}",rightarrow]
& A(\mathbb{T}^{n},\mathbb{R}) \arrow[r,"\L^{n-1}",rightarrow] \arrow[d,"\de^{n-1}",rightarrow]
& A^{\infty}(\mathbb{T}^{n},\mathbb{C}_{1}) \arrow[bend left, u,"\I^{n-1}",rightarrow]
& \\
A(\mathbb{T}^{n+1},\mathbb{R})^{P} \arrow[r,hookrightarrow]
& A(\mathbb{T}^{n+1},\mathbb{R}) \arrow[bend left, u,"\I^{n}",dashrightarrow]
&&
\end{tikzcd}
\caption{The staircase construction of primitives. The dashed arrows indicate that the respective map is defined on a smaller domain.}
\label{fig:Primitivep}
\end{figure}
\begin{proof}[Proof of Proposition \ref{prop:ExplicitFormulasForPrimitives}]
Fix $n>2$ and a measurable set of basepoints $B_{n} \subset \mathbb{T}^{n+1}$ for the boundary action of $G$ on $\mathbb{T}^{n+1}$. Let $c \in A^{\infty}(\mathbb{T}^{n+1},\mathbb{R})^{P}$ be such that $\de c = 0$.
\medskip
\noindent{\textbf{Step 1.}} We claim that the function $\I \L \I c \in A^{\infty}(\mathbb{T}^{n-1},\mathbb{C}_{1})$ satisfies
\[
\de \I \L \I c = \L \I c.
\]
\medskip
First of all, we note that by Proposition \ref{prop:ComplexAIsAcyclic} and Proposition \ref{prop:CauchyFrobeniusSequence} the function $\I \L \I c$ is well-defined. Observe that
\[
\de \L \I c = \L \de \I c = \L c = 0.
\]
Here in the first equality we used that $\L$ is a cochain map by Lemma \ref{lemma:OperatorsLAndQIntertwine}, while the second equality follows from Proposition \ref{prop:ComplexAIsAcyclic} since $c$ is a cocycle, and the third equality follows from Proposition \ref{prop:CauchyFrobeniusSequence} since $c$ is $P$-invariant. The claim is then a consequence of Proposition \ref{prop:ComplexAIsAcyclic}.
\medskip
\noindent{\textbf{Step 2.}} Define a function $u \in A^{\infty}(\mathbb{T}^{n-1},\mathbb{C}_{1})$ by
\begin{equation} \label{eqn:DefinitionOfu}
u \mathrel{\mathop:}= \I \L \I c - \de \Sop \I \Q \I \L \I c = \bigl( \Id - \de \Sop \I \Q \bigr) \I \L \I c.
\end{equation}
It follows from Proposition \ref{prop:ComplexAIsAcyclic}, Proposition \ref{prop:CauchyFrobeniusSequence} and Proposition \ref{prop:FrobeniusProblem} that this function is well-defined.
\medskip
\noindent{\textbf{Step 3.}} We claim that $\de u = \L \I c$.
\medskip
This is immediate from \eqref{eqn:DefinitionOfu} using Step 1 and the fact that $\de ^{2} = 0$.
\medskip
\noindent{\textbf{Step 4.}} We claim that $\Q u = 0$.
\medskip
To prove this, we first observe that
\[
\de \Q \I \L \I c = \Q \de \I \L \I c = \Q \L \I c = 0
\]
by Step 1 and Proposition \ref{prop:CauchyFrobeniusSequence}. Hence it follows with Proposition \ref{prop:FrobeniusProblem} that
\[
\Q \de \Sop \I \Q \I \L \I c = \de \Q \Sop \I \Q \I \L \I c = \de \I \Q \I \L \I c = \Q \I \L \I c.
\]
The claim is now immediate from \eqref{eqn:DefinitionOfu}.
\medskip
\noindent{\textbf{Step 5.}} Define a function $p \in A(\mathbb{T}^{n},\mathbb{R})$ by
\begin{equation} \label{eqn:DefinitionOfp}
p \mathrel{\mathop:}= \I c - \de \Rop_{\mathcal{B}} u.
\end{equation}
It follows from Proposition \ref{prop:ComplexAIsAcyclic} and Proposition \ref{prop:CauchyProblem} in combination with Step~4 that this function is well-defined.
\medskip
\noindent{\textbf{Step 6.}} We claim that $\de p = c$.
\medskip
This follows from Proposition \ref{prop:ComplexAIsAcyclic} since $c$ is a cocycle, together with the fact that $\de ^{2} = 0$.
\medskip
\noindent{\textbf{Step 7.}} We claim that $\L p = 0$ and hence $p \in A(\mathbb{T}^{n},\mathbb{R})^{P}$. Together with Step 6 this proves part (i) of the proposition.
\medskip
Using Proposition \ref{prop:CauchyProblem} and Step 3, we compute that
\[
\L \de \Rop_{\mathcal{B}} u = \de \L \Rop_{\mathcal{B}} u = \de u = \L \I c.
\]
The claim now follows from \eqref{eqn:DefinitionOfp}.
\medskip
\noindent{\textbf{Step 8.}} Assume from now on that $c = \mathrm{or} \cup c^{\prime}$ for some cocycle $c^{\prime} \in A^{\infty}(\mathbb{T}^{n-1},\mathbb{R})^{P}$. We claim that the function $\I \L \I c \in A^{\infty}(\mathbb{T}^{n-1},\mathbb{C}_{1})$ is tame.
\medskip
Since $c^{\prime}$ is $P$-invariant and hence $\L c^{\prime} = 0$, we compute with Lemma \ref{lemma:OperatorsActingOnCupProduct}\,(i, ii) that
\[
\I \L \I c = \I \bigl( (\L \I \mathrm{or}) \cup c^{\prime} + (\I \mathrm{or}) \cup (\L c^{\prime}) \bigr) = (\I \L \I \mathrm{or}) \cup c^{\prime}.
\]
By Lemma \ref{lemma:OperatorsActingOnCupProduct}\,(iii), the $K$-reduction of this function is
\[
(\I \L \I c)_{K} = ( \I \L \I \mathrm{or})_{K} \cup c^{\prime},
\]
which by Lemma \ref{lemma:FunctionILIOr} and \eqref{eqn:KReduction} equals
\[
(\I \L \I c)_{K} = \frac{i}{\pi} \cdot c^{\prime}.
\]
Since $c^{\prime}$ is real valued, it follows that $\Re (\I \L \I c)_{K} = 0$. Moreover, by Lemma \ref{lemma:ReductionAndExtensionByK}\,(i) we may write the function $\I \L \I c$ as the $K$-extension $\I \L \I c = ((\I \L \I c)_{K})^{K}_{1}$. Hence the claim follows from Lemma \ref{lemma:CriterionForTameness}.
\medskip
\noindent{\textbf{Step 9.}} We know from Proposition \ref{prop:FrobeniusProblem} that the function $\de \Sop \I \Q \I \L \I c \in A^{\infty}(\mathbb{T}^{n-1},\mathbb{C}_{1})$ is tame. Combining this with Step 8, it follows from \eqref{eqn:DefinitionOfu} that the function $u$ is tame.
\medskip
\noindent{\textbf{Step 10.}} Since $u$ is tame by Step 9, Proposition \ref{prop:CauchyProblem}\,(iii) implies that the function $\Rop_{\mathcal{B}} u$ in Step 5 is bounded. Since $c$ is bounded, the function $\I c$ is bounded by Proposition \ref{prop:ComplexAIsAcyclic}. Hence we conclude from \eqref{eqn:DefinitionOfp} that the primitive $p$ is bounded as well. This proves part (ii) of the proposition.
\end{proof}
\subsection{The operator $\P$}
\label{subsec:TheOperatorP}
Our goal in this section is to define the linear operator
\[
\map{\P^{n}}{L^{\infty}(\mathbb{T}^{n+1},\mathbb{R})^{G} \supset \ker \de^{n}}{L^{0}(\mathbb{T}^{n},\mathbb{R})^{G}} \quad\quad (n>2)
\]
that appears in Theorem \ref{thm:ExplicitFormulasForPrimitives}. To begin with, let us denote by
\[
\map{\piop^{n}}{A^{\infty}(\mathbb{T}^{n+1},\mathbb{R})^{P}}{L^{\infty}(\mathbb{T}^{n+1},\mathbb{R})^{G}} \quad\quad (n \ge 0)
\]
the natural cochain map \eqref{map:EquivariantLiftingFromA^P}. We have seen in the proof of Proposition \ref{prop:EquivariantLifting} that by a result of Monod, this map admits a section which we denote by
\[
\map{\sigmaop^{n}}{L^{\infty}(\mathbb{T}^{n+1},\mathbb{R})^{G}}{A^{\infty}(\mathbb{T}^{n+1},\mathbb{R})^{P}} \quad\quad (n \ge 0).
\]
Let us further fix a collection $\mathcal{B} = \{B_{n}\}_{n \ge 2}$ of measurable sets of basepoints $B_{n} \subset \mathbb{T}^{n+1}$ for the boundary action of $G$ on $\mathbb{T}^{n+1}$ for all $n \ge 2$ (cf.\,\cite[App.\,B]{Zimmer/Ergodic-theory-and-semisimple-groups}). We are now in a position to define the operator $\P$.
Let $n>2$, and let $c \in L^{\infty}(\mathbb{T}^{n+1},\mathbb{R})^{G}$ be a $G$-invariant bounded function satisfying the cocycle relation $\de^{n} c = 0$. We then define
\begin{multline} \label{eqn:DefinitionOfOperatorP}
\P^{n} c \mathrel{\mathop:}= \piop^{n-1} \I^{n} \sigmaop^{n} c \,\,- \\ \piop^{n-1} \de^{n-2} \Rop_{\mathcal{B}}^{n-2} \Big( \Id \,-\, \de^{n-3} \Sop^{n-3} \I^{n-2} \Q^{n-2} \Big) \I^{n-1} \L^{n-1} \I^{n} \sigmaop^{n} c. \hspace{15mm}
\end{multline}
Here $\I$ is the cochain contraction in \eqref{eqn:DefinitionOfOperatorI}, $\L$ and $\Q$ are the differential operators in \eqref{map:OperatorLDownstairs} and \eqref{map:OperatorQDownstairs}, and $\Rop_{\mathcal{B}}$ and $\Sop$ are the integral operators in \eqref{map:SolutionOperatorCauchyProblem} and \eqref{map:SolutionOperatorFrobeniusProblem}. Comparing with the formulas in \eqref{eqn:PrimitivesDefinitionOfp} and \eqref{eqn:PrimitivesDefinitionOfu}, it follows from Proposition \ref{prop:ExplicitFormulasForPrimitives} that the function $\P^{n} c$ is in fact well-defined.
The formula in \eqref{eqn:DefinitionOfOperatorP} is illustrated schematically in Figure \ref{fig:Primitivep}. By abuse of notation, we will usually suppress the maps $\piop$ and $\sigmaop$, writing
\[
\P c = \I c - \de \Rop_{\mathcal{B}} \bigl( \Id - \de \Sop \I \Q \bigr) \I \L \I c
\]
for short. One should, however, keep in mind that the right-hand side of this formula will only be defined for representatives of the cocycle $c$ that are contained in the space $A^{\infty}(\mathbb{T}^{n+1},\mathbb{R})^{P}$.
\subsection{Proof of Theorem \ref{thm:ExplicitFormulasForPrimitives}}
Let $\P$ be the linear operator defined by \eqref{eqn:DefinitionOfOperatorP} in Section \ref{subsec:TheOperatorP}. It is well-defined by Proposition \ref{prop:ExplicitFormulasForPrimitives}. This proves (i). Comparing with \eqref{eqn:PrimitivesDefinitionOfp} and \eqref{eqn:PrimitivesDefinitionOfu}, we see that (ii) and (iii) follow from the corresponding statements in Proposition \ref{prop:ExplicitFormulasForPrimitives}\,(i, ii).
\section{Introduction}
\label{sec:intro}
\input{intro.tex}
\section{The Graph Neural Network}
\label{sec:statement}
\input{problem_statement}
\section{Algorithm and Main Results}
\label{sec:analysis}
\input{analysis}
\bibliographystyle{aaai21}
\subsection{Algorithm Design}
We present the algorithm in Algorithm~\ref{algo} for NGNN and GGNN with two auxiliary matrices defined as
\begin{align}\label{mat_1}
\Xi_N = \E\left[ \mathbf{H}^\top \mathbf{A}^\top (\mathbf{D}^{-1})^\top \sigma (\mathbf{D}^{-1}\mathbf{A}\mathbf{H} ) \right],
\end{align}
and
\begin{align}\label{mat_2}
\Xi_G = \frac{1}{n} \sum_{j=1}^{n} \E\left[ \mathbf{H}_{j}^\top \mathbf{A}_{j}^\top (\mathbf{D}_{j}^{-1})^\top \mathbf{a}_j^\top \mathbf{a}_j \sigma (\mathbf{D}_j^{-1}\mathbf{A}_j\mathbf{H}_j ) \right].
\end{align}
It is easy to see that the above two matrices are invertible under Assumption~\ref{assump:1}.
For learning NGNN (similar for GGNN), if we replace $ \Xi_N^{-1} $ by the gradient $\sigma^\prime \left(\mathbf{D}^{-1}\mathbf{A}\mathbf{H} \right) $, we have exact gradients for $\mathbf{W}$ and $\mathbf{v}$ with a square loss $\frac{1}{n}\norm{\mathbf{y}-\sigma (\mathbf{D}^{-1}\mathbf{A}\mathbf{H} \mathbf{W}) \mathbf{v}}^2$. However, calculating $\sigma^\prime \left(\mathbf{D}^{-1}\mathbf{A}\mathbf{H} \right) $ for some activation functions could be computationally expensive. Note that to compute these matrices of \eqref{mat_1} and \eqref{mat_2}, one only needs to make an epoch pass on the dataset. Compared with a regular training by DNNs with tons of epoch passes on the entire dataset to acquire exact gradients, obtaining the two matrices adds very limited burden to the training computation. Hence, the gradient of $\mathbf{W}$ is inexact in the algorithm and a similar idea is also adopted in analysis of CNNs~\cite{cao2019tight}.
\begin{algorithm}[ht]
\SetAlgoLined
\KwIn{Training data $\{\mathbf{H},\mathbf{y}\}$, number of iterations $T$, step size $\alpha$, initialization $\mathbf{W}_0, \mathbf{v}_0, t=1$.}
\While{$t < T$}{
$\bar c = \begin{cases}
\mathbf{D}^{-1}\mathbf{A}\mathbf{H}, & \text{NGNN}, \\
\mathbf{D}_j^{-1}\mathbf{A}_j\mathbf{H}_j, & \text{GGNN}.
\end{cases} $
$\bar y = \begin{cases}
\sigma (\bar c \mathbf{W}_t) \mathbf{v}_t, & \text{NGNN}, \\
\mathbf{a}_j \sigma (\bar c \mathbf{W}_t) \mathbf{v}_t, & \text{GGNN}.
\end{cases} $
$\mathbf{G}^W_t = \begin{cases}
-\frac{1}{n} \Xi_N^{-1} \bar c^\top \left(\mathbf{y} - \bar y \right)\mathbf{v}_t^\top, & \text{NGNN}, \\
-\frac{1}{n} \Xi_G^{-1} \sum_{j=1}^{n} \bar c^\top \mathbf{a}_j^\top \left(y_j -\bar y \right)\mathbf{v}_t^\top, & \text{GGNN}.
\end{cases} $
$\mathbf{g}^v_t = \begin{cases}
-\frac{1}{n} \sigma(\mathbf{W}_t^\top\bar c^\top) \left(\mathbf{y}- \bar{y}\right), & \text{NGNN},\\
-\frac{1}{n}\sum_{j=1}^n \sigma(\mathbf{W}_t^\top\bar{c}^\top) \mathbf{a}_j^\top\left(y_j - \bar{y}\right), &\text{GGNN}.
\end{cases} $
$\mathbf{U}_{t+1} = \mathbf{W}_t - \alpha \mathbf{G}^W_t$
$\mathbf{W}_{t+1} = \mathbf{U}_{t+1}/\|\mathbf{U}_{t+1}\|_2$
$\mathbf{v}_{t+1} = \mathbf{v}_t -\alpha \mathbf{g}^v_t$
}
\KwResult{$\mathbf{W}_{T}, \mathbf{v}_T$}
\caption{Approximate Gradient Descent for Learning GNNs}
\label{algo}
\end{algorithm}
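For illustration, one iteration of the NGNN branch of Algorithm~\ref{algo} can be sketched in NumPy as follows; this is a minimal sketch rather than the authors' implementation, and the precomputed inverse \texttt{Xi\_inv}, the activation, and all shapes are assumptions.

```python
import numpy as np

def ngnn_step(W, v, A, H, y, Xi_inv, sigma, alpha):
    """One iteration of Algorithm 1 (NGNN branch).

    W: (d, d_out) hidden weights, v: (d_out,) output weights,
    A: (m, m) adjacency, H: (m, d) node features, y: (m,) labels,
    Xi_inv: (d, d) inverse of the auxiliary matrix Xi_N,
    sigma: elementwise activation, alpha: step size.
    """
    n = H.shape[0]
    Dinv = np.diag(1.0 / A.sum(axis=1))
    c = Dinv @ A @ H                           # \bar c for NGNN
    y_hat = sigma(c @ W) @ v                   # \bar y
    G_W = -(1.0 / n) * Xi_inv @ c.T @ np.outer(y - y_hat, v)
    g_v = -(1.0 / n) * sigma(W.T @ c.T) @ (y - y_hat)
    U = W - alpha * G_W
    W_new = U / np.linalg.norm(U, 2)           # spectral-norm normalization
    v_new = v - alpha * g_v
    return W_new, v_new
```

Note that the normalization step keeps $\norm{\mathbf{W}_{t+1}}_2 = 1$ after every update, matching the constraint in the algorithm.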
\subsection{Convergence Analysis}
For the ease of presentation, we first introduce some quantities that we use in this paper.
\begin{align}
&\gamma_1 =\begin{cases}
\sqrt{ \frac{\alpha L_\sigma^2 d_{out}}{2} - 3\alpha^2(d+4d\bar{d})^2L_\sigma^4}, & \text{NGNN},\\
\sqrt{ \frac{\alpha L_\sigma^2 d_{out}}{2} - 12\alpha^2 d^2L_\sigma^4n_{\max}^4}, & \text{GGNN};
\end{cases}\\
&\gamma_2 =\begin{cases}
\sqrt{\frac{2\alpha (d+4d\bar{d})^2}{d_{out}}+6\alpha^2 (d+4d\bar{d})^2L_\sigma^2}, & \text{NGNN},\\
\sqrt{\frac{8\alpha d^2 n_{\max}^4}{d_{out}}+24\alpha^2 d^2L_\sigma^4n_{\max}^2}, & \text{GGNN};
\end{cases} \\
&\gamma_3 =
\sqrt{\frac{2\alpha + 3\alpha ^2 L_\sigma^2 d_{out}}{\gamma^2_1 L_\sigma^2 d_{out}}},
\end{align}
where $\bar{d} = \frac{1}{n}\sum_i d_i$ is the average node degree in the graph for NGNN, and $n_{\max} = \max_j n_j$ is the maximum number of nodes in a single graph for GGNN.
Note that the above quantities only depend on the sample and feature properties, model architecture, and learning rate, and are independent of the number of samples $n$.
Next, we define
\begin{align}D=\max \left\lbrace \norm{\mathbf{v}_0- \mathbf{v}_\ast}, \sqrt{\frac{2\gamma_2\norm{\mathbf{v}_\ast}^2}{\gamma_1}+{\gamma_3}}\right\rbrace,
\end{align}
and
\begin{align}
\rho = \begin{cases}
\min\left\lbrace \mathbf{v}_0^\top \mathbf{v}_\ast
, \frac{L_\sigma^2d_{out}\norm{\mathbf{v}_\ast}^2}{(d+4d\bar{d})L_\sigma^2+1} \right\rbrace , & \text{NGNN}, \\
\min\left\lbrace \mathbf{v}_0^\top \mathbf{v}_\ast
, \frac{L_\sigma^2d_{out}\norm{\mathbf{v}_\ast}^2}{2dL_\sigma^2n_{\max}^2+1} \right\rbrace , & \text{GGNN}.
\end{cases}
\end{align}
and let $D_0 = D + \norm{\mathbf{v}_\ast}$. Similarly, the above quantities do not depend on the number of samples $n$.
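As a quick numerical aid (ours, not from the paper), the NGNN constants $\gamma_1,\gamma_2,\gamma_3$ above can be evaluated directly from the displayed formulas. Note that $\gamma_1$ is real only when the step size $\alpha$ is small enough, consistent with the step-size condition in Theorem~\ref{convergence_thm}.

```python
import math

def ngnn_constants(alpha, L, d, d_bar, d_out):
    """Convergence constants gamma_1, gamma_2, gamma_3 for the NGNN case,
    following the displayed formulas (L = L_sigma, d_bar = average degree)."""
    k = d + 4 * d * d_bar
    g1_sq = alpha * L**2 * d_out / 2 - 3 * alpha**2 * k**2 * L**4
    gamma1 = math.sqrt(g1_sq)  # requires alpha small enough that g1_sq > 0
    gamma2 = math.sqrt(2 * alpha * k**2 / d_out + 6 * alpha**2 * k**2 * L**2)
    gamma3 = math.sqrt((2 * alpha + 3 * alpha**2 * L**2 * d_out)
                       / (g1_sq * L**2 * d_out))
    return gamma1, gamma2, gamma3
```

For example, with $\alpha = 10^{-4}$, $L_\sigma = 1$, $d = 2$, $\bar d = 3$ and $d_{out} = 1$, the step-size bound $\alpha \le d_{out}/(6(d+4d\bar d)^2 L_\sigma^2)$ holds and $\gamma_1$ is real and positive.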
\subsection{Useful Lemmas}
We provide in this section some useful lemmas for deriving our main result. They also serve as a sketch of the proof of the main results and help improve the presentation. First, we present two lemmas that hold for both NGNN and GGNN.
\begin{lemma}\label{graph_level_W_init}
Suppose the assumptions in Theorem~\ref{convergence_thm} hold. Then there exists $\eta_W$ such that
\begin{align}
& \|\mathbf{W}_{t+1} - \mathbf{W}_\ast\|_2 - \frac{4\eta_W\left(1+\alpha\rho+\sqrt{1+\alpha\rho}\right)}{\rho}\\
& \le \frac{1}{\sqrt{1+\alpha\rho}}\left[ \|\mathbf{W}_{t} - \mathbf{W}_\ast\|_2-\frac{4\eta_W\left(1+\alpha\rho+\sqrt{1+\alpha\rho}\right)}{\rho}\right].
\end{align}
\end{lemma}
\begin{lemma}\label{graph_level_v_init}
Suppose the assumptions in Theorem~\ref{convergence_thm} hold. Then there exist $\sigma_m$, $\sigma_M$, $L$, and $\eta_v$ such that
\begin{align}
&\norm{\mathbf{v}_{t+1}-\mathbf{v}_\ast}^2 \le \left(1-\frac{\alpha\sigma_m}{2}+3\sigma^2_M\alpha^2\right)\norm{\mathbf{v}_t-\mathbf{v}_\ast}^2\\ &+\left(\frac{\alpha L^2}{\sigma_m}+3L^2\alpha^2\right)\norm{\mathbf{W}_t-\mathbf{W}_\ast}^2\norm{\mathbf{v}_\ast}^2+\left(\frac{2\alpha}{\sigma_m}+3\alpha^2\right)\eta_v^2.
\end{align}
\end{lemma}
Next, we provide results specifically for NGNN and GGNN, respectively. We start by defining the following
\begin{align}
\bar{\mathbf{G}}^W(\mathbf{W},\mathbf{v}) &= \E_{\mathbf{H}}\left[{\mathbf{G}}^W(\mathbf{W},\mathbf{v}) \right] \\
\bar{\mathbf{g}}^v(\mathbf{W},\mathbf{v}) &= \E_{\mathbf{H}}\left[ \mathbf{g}^v(\mathbf{W},\mathbf{v}) \right]
\end{align}
\subsubsection{Analysis for NGNN}
\begin{lemma}\label{node_level_lips}
If $n\ge \frac{8\ln \frac{2}{\delta}}{d d_{\min}}$, with probability at least $1-\delta$
\begin{align}
&\frac{\norm{\mathbf{G}^W(\mathbf{W},\mathbf{v}) -\mathbf{G}^W(\mathbf{W}^\prime,\mathbf{v}) }}{\norm{\mathbf{W}-\mathbf{W}^\prime}} \le D_0^2 \norm{\Xi^{-1}} (d+4d\bar{d}) ,\\
&\frac{\norm{\mathbf{G}^W(\mathbf{W},\mathbf{v}) -\mathbf{G}^W(\mathbf{W},\mathbf{v}^\prime) }}{\norm{\mathbf{v}-\mathbf{v}^\prime}} \le 4D_0\norm{\Xi^{-1}}(d+4d\bar{d})L_\sigma,\\
&\frac{\norm{\mathbf{g}^v(\mathbf{W},\mathbf{v})-\mathbf{g}^v(\mathbf{W}^\prime,\mathbf{v})}}{\norm{\mathbf{W}-\mathbf{W}^\prime}} \le 5(d+4d\bar{d})D_0L_\sigma\nonumber \\
&\ \ \ \ \ \ \ \ \ \ + \frac{d+4d\bar{d}}{2}+ \nu^2,\\
&\frac{\norm{\mathbf{g}^v(\mathbf{W},\mathbf{v}) -\mathbf{g}^v(\mathbf{W},\mathbf{v}^\prime) }}{\norm{\mathbf{v}-\mathbf{v}^\prime}} \le {L_\sigma^2}(d+4d\bar{d}).
\end{align}
\end{lemma}
\begin{lemma}\label{node_level_eta}
If $\sqrt{n\log n}\ge \frac{3}{2}\log \frac{2}{\delta}$, with probability at least $1-\delta$
\begin{align}
&\norm{\mathbf{G}^W-\bar\mathbf{G}^W } \\
&\le \frac{4}{c}\sqrt{{\frac{\log n}{n}}} \norm{\Xi^{-1}}D_0 \left(D_0\frac{d}{d_{\min}}(1+|\sigma(0)|)+\sqrt{\frac{d}{d_{\min}}}\nu\right) ,\\
&\norm{\mathbf{g}^v-\bar\mathbf{g}^v }\\
& \le \frac{2}{c}\sqrt{{\frac{\log n}{n}}} \left(D_0\frac{d}{d_{\min}}(1+|\sigma(0)|)^2+\sqrt{\frac{d}{d_{\min}}}(1+|\sigma(0)|)\nu\right) ,
\end{align}
for some absolute constant $c$.
\end{lemma}
\subsubsection{Analysis for GGNN}
\begin{lemma}\label{graph_level_lips}
If $\sum_{j=1}^{n}n_j \ge\frac{8\ln\frac{1}{\delta}}{d} $, with probability at least $1-\delta$
\begin{align}
&\frac{\norm{\mathbf{G}^W(\mathbf{W},\mathbf{v}) -\mathbf{G}^W(\mathbf{W}^\prime,\mathbf{v}) }}{\norm{\mathbf{W}-\mathbf{W}^\prime}} \le 2dD_0^2 \norm{\Xi^{-1}} n^2_{\max}, \\
&\frac{\norm{\mathbf{G}^W(\mathbf{W},\mathbf{v}) -\mathbf{G}^W(\mathbf{W},\mathbf{v}^\prime) }}{\norm{\mathbf{v}-\mathbf{v}^\prime}} \le 8D_0d\norm{\Xi^{-1}}n^2_{\max}L_\sigma,\\
&\frac{\norm{\mathbf{g}^v(\mathbf{W},\mathbf{v})-\mathbf{g}^v(\mathbf{W}^\prime,\mathbf{v})}}{\norm{\mathbf{W}-\mathbf{W}^\prime}} \le 10dD_0L_\sigma n_{\max}^2\nonumber\\
&+{d\sqrt{n^3_{\max}}}+{\sqrt{n_{\max}}}\nu^2,\\
&\frac{\norm{\mathbf{g}^v(\mathbf{W},\mathbf{v}) -\mathbf{g}^v(\mathbf{W},\mathbf{v}^\prime) }}{\norm{\mathbf{v}-\mathbf{v}^\prime}} \le 2d{L_\sigma^2}n_{\max}^2.
\end{align}
\end{lemma}
\begin{lemma}\label{graph_level_eta}
If $\sqrt{n\log n}\ge \frac{3}{2}\log \frac{2}{\delta}$, with probability at least $1-\delta$
\begin{align}
&\norm{\mathbf{G}^W-\bar\mathbf{G}^W }\\
& \le \frac{4}{c}\sqrt{\frac{\log n}{n}} \norm{\Xi^{-1}} \sqrt{n_{\max}} \left(\sqrt{n_{\max}}2(1+|\sigma(0)|)D_0+\nu\right),\\
&\norm{\mathbf{g}^v-\bar\mathbf{g}^v } \\
&\le \frac{2}{c}\sqrt{\frac{\log n}{n}} \sqrt{n_{\max}}2(1+|\sigma(0)|) \left( \sqrt{n_{\max}}2(1+|\sigma(0)|)D_0+\nu\right),
\end{align}
for some absolute constant $c$.
\end{lemma}
Next, we provide the main results for NGNN and GGNN with the proposed algorithms.
\begin{theorem}\label{convergence_thm}
Suppose that the initialization $(\mathbf{W}_0,\mathbf{v}_0)$ satisfies
\begin{align}\label{initialization}
\Tr\left( \mathbf{W}_{\ast}^\top \mathbf{W}_0\right) \ge 0, \mathbf{v}_\ast^\top\mathbf{v}_0 \ge 0,
\end{align}
and the learning rate $\alpha$ is chosen such that
\begin{align}
\alpha \le \begin{cases}
\frac{1}{2(\norm{\mathbf{v}_0-\mathbf{v}_\ast}^2+\norm{\mathbf{v}_\ast}^2)} \wedge \frac{d_{out}}{6(d+4d\bar{d})^2L_\sigma^2}, & \text{NGNN},\\
\frac{1}{2(\norm{\mathbf{v}_0-\mathbf{v}_\ast}^2+\norm{\mathbf{v}_\ast}^2)} \wedge \frac{d_{out}}{24d^2L_\sigma^2n_{\max}^4}, & \text{GGNN}.
\end{cases}
\end{align}
If the number of samples $n$ is large enough such that for NGNN
\begin{align}\label{sample_node}
&\frac{2}{c}\sqrt{\frac{\log n}{n}} \le \frac{\frac{\rho}{16(1+\alpha \rho )\norm{\Xi^{-1}}D_0}\Tr(\mathbf{W}_\ast^\top\mathbf{W}_0)\wedge \frac{\rho}{\norm{\mathbf{v}_\ast}(1+|\sigma(0)|)}}{D_0\frac{d}{d_{\min}^2}(1+|\sigma(0)|)+\frac{d}{d_{\min}}\nu},
\end{align}
and for GGNN
\begin{align}\label{sample_graph}
&\frac{2}{c}\sqrt{\frac{\log n}{n}} \le \frac{\frac{\rho}{16(1+\alpha \rho )\norm{\Xi^{-1}}D_0}\Tr(\mathbf{W}_\ast^\top\mathbf{W}_0)\wedge \frac{\rho}{\norm{\mathbf{v}_\ast}(1+|\sigma(0)|)}}{ {n_{\max}}(1+|\sigma(0)|)D_0+\sqrt{n_{\max}}\nu},
\end{align}
for some large enough absolute constant $c$, then with probability at least $1-\delta$ and for large $n$ such that $\sqrt{n\log n}\ge \frac{3}{2} \log \frac{2}{\delta}$ and $n\ge \frac{8\ln \frac{2}{\delta}}{d d_{\min}}$ for NGNN or $\sum_{j=1}^{n}n_j \ge\frac{8\ln\frac{1}{\delta}}{d} $ for GGNN, we have that
\begin{align}
\norm{\mathbf{W}_{t} - \mathbf{W}_\ast} \le &\left(\frac{1}{\sqrt{1+\alpha\rho}}\right)^t\norm{\mathbf{W}_0-\mathbf{W}_\ast}\\
&+\eta_W(6\alpha +\frac{8}{\rho})
\end{align}
with
\begin{align}
&\bar a_W = \frac{4}{c}\sqrt{{\frac{\log n}{n}}}\norm{\Xi^{-1}} D_0,
\end{align}
and
\begin{align}
&\eta_W = \begin{cases}
\bar a_W \left(D_0\frac{d}{d_{\min}}(1+|\sigma(0)|)+\sqrt{\frac{d}{d_{\min}}}\nu\right), & \text{NGNN},\\
\bar a_W \sqrt{n_{\max}} \left(\sqrt{n_{\max}}(1+|\sigma(0)|)D_0+\nu\right), & \text{GGNN};
\end{cases}
\end{align}
and
\begin{align}
& \norm{\mathbf{v}_{t}-\mathbf{v}_\ast} \le \left(\sqrt{1-\gamma^2_1}\right)^t\norm{\mathbf{v}_0-\mathbf{v}_\ast}\nonumber\\ &+\gamma_2\norm{\mathbf{W}_0-\mathbf{W}_{\ast}}\norm{\mathbf{v}_\ast} \sqrt{t}\left(\sqrt{1-\gamma^2_1}\vee \sqrt{\frac{1}{{1+\alpha\rho}}}\right)^{t-1}\nonumber\\
&+\gamma_3\eta_v+\frac{\gamma_2}{\gamma_1}(6\alpha +\frac{8}{\rho})\norm{\mathbf{v}_\ast}\eta_W
\end{align}
with
\begin{align}
&\bar{a}_v = \frac{2}{c}\sqrt{{\frac{\log n}{n}}},
\end{align}
and
\begin{align}
&\eta_v = \nonumber\\
&\begin{cases}
\bar{a}_v \left(D_0\frac{d}{d_{\min}}(1+|\sigma(0)|)^2+\sqrt{\frac{d}{d_{\min}}}(1+|\sigma(0)|)\nu\right), & \text{NGNN},\\
\bar{a}_v \sqrt{n_{\max}}(1+|\sigma(0)|) \left( \sqrt{n_{\max}}(1+|\sigma(0)|)D_0+\nu\right), & \text{GGNN}.
\end{cases}
\end{align}
\end{theorem}
\begin{remark}
Essentially, Theorem~\ref{convergence_thm} shows that the proposed algorithm provably learns GNNs, with linear convergence to the true parameters up to a statistical error. The error is governed by the number of training samples $n$ and vanishes as $n \to \infty$. Therefore, exact convergence of the algorithm is guaranteed in the large-sample limit, which is intuitively desirable and expected.
\end{remark}
\begin{remark}
Theorem~\ref{convergence_thm} requires that the initialization satisfies \eqref{initialization}, which may not necessarily be satisfied by a random initialization. One solution is to try out all 4 sign combinations of
\[(\mathbf{W}_0,\mathbf{v}_0) \in \{(\mathbf{W},\mathbf{v}), (-\mathbf{W},\mathbf{v}),(\mathbf{W},-\mathbf{v}),(-\mathbf{W},-\mathbf{v})\} \] as suggested in \cite{du2017gradient}. This
guarantees that the initialization condition~\eqref{initialization} is satisfied and, in turn, that the algorithm converges.
\end{remark}
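The sign-flipping trick from the remark above can be sketched as follows. Since $(\mathbf{W}_\ast,\mathbf{v}_\ast)$ is unknown in practice, one would run the algorithm from all four candidates and keep the best; the sketch below assumes oracle access to the true parameters purely to illustrate that one of the four sign combinations always satisfies \eqref{initialization}.

```python
import itertools
import numpy as np

def sign_corrected_init(W, v, W_star, v_star):
    """Return the sign combination (+-W, +-v) satisfying the initialization
    condition Tr(W_*^T W_0) >= 0 and v_*^T v_0 >= 0.

    Illustration only: W_star and v_star are unknown in practice, so one
    would instead run the algorithm from all four candidates.
    """
    for sW, sv in itertools.product([1, -1], repeat=2):
        # One of the four combinations always works, since the two sign
        # choices can be made independently.
        if np.trace(W_star.T @ (sW * W)) >= 0 and v_star @ (sv * v) >= 0:
            return sW * W, sv * v
```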
\begin{remark}
The conditions in~\eqref{sample_node} and~\eqref{sample_graph} give sufficient conditions on the number of samples $n$ for learning NGNN and GGNN, respectively. Intuitively, an efficient learning algorithm needs sufficiently many training samples, and the two conditions make this requirement precise.
\end{remark}
It would also be interesting to explore the impact of the graph structure on the convergence; for example, the convergence could potentially be optimized with respect to the spectral properties of the graphs. Although the derived convergence bounds involve node degrees and the number of nodes, which also reflect the graph structure, we leave such a study to the interested community and to future work due to the length limit of this paper.
While the general differences between GNNs and CNNs are discussed in Related Work, we emphasize that the resulting difference in the technical analysis is even more significant, due to the unordered and overlapping neighborhoods in GNNs. There has been limited work on CNNs with overlapping structures (see \cite{cao2019tight} and the references therein). For non-overlapping CNNs, the data variables in each convolution patch are independent, which yields tractable concentration bounds such as Lemma 5.2 in \cite{cao2019tight} and leads to a simpler analysis than the one in this paper. In contrast, the analysis over graphs for GNNs is considerably more difficult (the analogous structure for a CNN is a regular grid). We develop new techniques in the theoretical analysis to tackle the challenges arising from dependent variables (due to overlapping neighborhoods) in graphs, and design a provably efficient algorithm on top of them; see Lemmas 3--10 in the supplementary material for details. In this sense, the analyses for CNNs are special cases of ours.
\section{Training Dynamics}
In the previous section, we showed that the outputs of the proposed algorithm provably converge to the underlying true parameters. At this point, however, we have not fully characterized the training dynamics of the estimated parameters $ \mathbf{W}_{t}$ and $ \mathbf{v}_{t}$. A missing building block in the analysis is a compact subspace within which the training sequences of $ \mathbf{W}_{t}$ and $ \mathbf{v}_{t}$ lie.
Toward this goal, we first define the space
\begin{align}
&\mathcal{W}=\left\lbrace \mathbf{W}: \norm{\mathbf{W}}=1, \Tr(\mathbf{W}_\ast^\top\mathbf{W}) \ge {\Tr\left(\mathbf{W}^\top_\ast \mathbf{W}_0\right)}/2 \right\rbrace,
\end{align}
and
\begin{align}
&\mathcal{V} = \left\lbrace \mathbf{v}: \norm{\mathbf{v}-\mathbf{v}_\ast} \le D, \mathbf{v}^\top_\ast\mathbf{v} \ge \rho \right\rbrace .
\end{align}
We then have the following theorem on the stability of the training procedure.
\begin{theorem}\label{training_dynamics}
Suppose the assumptions in Theorem~\ref{convergence_thm} hold. Then the training dynamics are confined to
\begin{align}
\left\lbrace \mathbf{W}_{t}, \mathbf{v}_{t}\right\rbrace \in \mathcal{W} \times \mathcal{V}.
\end{align}
\end{theorem}
Theorem~\ref{training_dynamics} states that the training process of the proposed learning algorithm is provably stable, complementing Theorem~\ref{convergence_thm}, which guarantees convergence of the training outputs.
\section{Experimental Results}
We provide numerical experiments to support and validate our theoretical analysis. We test Algorithm~\ref{algo} on NGNN and GGNN tasks, respectively. Different activation functions, namely ReLU, Leaky ReLU, Sigmoid, Tanh, and Swish, are used in the networks, and we report the distances between the estimated and the true parameters, i.e., $\norm{\mathbf{W}_\ast-\mathbf{W}_t}$ and $\norm{\mathbf{v}_\ast-\mathbf{v}_t}$, versus the number of training epochs. Specifically, for the networks with Leaky ReLU, we show results for two different slope parameters, 0.2 and 0.05. We choose $d=2$ and $d_{out} = 1$, and set the variance $\nu$ to 0.04. We generate $\mathbf{W}_{\ast}$ from the unit sphere by normalizing a Gaussian matrix, and generate $\mathbf{v}_\ast$ as a standard Gaussian vector.
The nodes in the graphs are connected independently, with each edge drawn from a Bernoulli(0.5) distribution.
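The setup above can be sketched in code as follows. This is an illustrative reconstruction, not the authors' actual experiment script; in particular, the self-loops added to each node are our assumption, made only so that every degree is positive when inverting $\mathbf{D}$.

```python
import numpy as np

rng = np.random.default_rng(0)
d, d_out = 2, 1   # feature and output dimensions from the text
nu = 0.04         # noise variance from the text

# Teacher parameters: W_* drawn from the unit sphere, v_* standard Gaussian.
W_star = rng.standard_normal((d, d_out))
W_star /= np.linalg.norm(W_star)
v_star = rng.standard_normal(d_out)

def random_graph(n, p=0.5, rng=rng):
    """Symmetric adjacency matrix with i.i.d. Bernoulli(p) edges.

    Self-loops are added only so that every degree is positive when
    inverting D (an assumption; the paper does not state this detail)."""
    upper = np.triu(rng.binomial(1, p, size=(n, n)), 1)
    return upper + upper.T + np.eye(n)

A = random_graph(5)                    # a 5-node graph as in the GGNN tasks
D_inv = np.diag(1.0 / A.sum(axis=1))   # inverse degree matrix D^{-1}
```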
\subsection{NGNN Tasks}
For node-level tasks, we use one graph with 1000 nodes.
\begin{figure}[!h]
\centering
\caption{Training dynamics for NGNN with different activation functions.}
\subfloat[ReLU]{\label{nfigur:1}\includegraphics[width=0.5\linewidth]{node_relu_521dot04.png}}
\subfloat[Leaky ReLU (0.2)]{\label{nfigur:2}\includegraphics[width=0.5\linewidth]{node_relu_521dot04.png}}
\\
\subfloat[Leaky ReLU (0.05)]{\label{nfigur:3}\includegraphics[width=0.5\linewidth]{node_relu_521dot04.png}}
\subfloat[Sigmoid]{\label{nfigur:4}\includegraphics[width=0.5\linewidth]{node_sigmoid_521dot04.png}}\\
\subfloat[Tanh]{\label{nfigur:5}\includegraphics[width=0.5\linewidth]{node_tanh_521dot04.png}}
\subfloat[Swish]{\label{nfigur:6}\includegraphics[width=0.5\linewidth]{node_swish_521dot04.png}} \label{fig-node}
\end{figure}
The learning rate $\alpha$ is chosen as 0.1. As presented in Figure~\ref{fig-node}, we can clearly see the stable training of the GNN and the efficiency of the algorithm. We observe that $\mathbf{W}$ converges more slowly than $\mathbf{v}$. Another interesting finding is that when Sigmoid is used as the activation function, the training of $\mathbf{v}$ converges to the optimum very quickly and then lingers in a neighborhood of $\mathbf{v}_\ast$. This observation validates our theoretical result that the convergence is up to a statistical error. For the same reason, the actual convergence may be slightly slower than a linear rate, which, however, is inevitable because of the statistical model that we consider.
\subsection{GGNN Tasks}
\begin{figure}[!h]
\centering
\caption{Training dynamics for GGNN with different activation functions.}
\subfloat[ReLU]{\label{gfigur:1}\includegraphics[width=0.5\linewidth]{graph_relu_521dot04.png}}
\subfloat[Leaky ReLU (0.2)]{\label{gfigur:2}\includegraphics[width=0.5\linewidth]{graph_relu_521dot04.png}}
\\
\subfloat[Leaky ReLU (0.05)]{\label{gfigur:3}\includegraphics[width=0.5\linewidth]{graph_relu_521dot04.png}}
\subfloat[Sigmoid]{\label{gfigur:4}\includegraphics[width=0.5\linewidth]{graph_sig_521dot04.png}}\\
\subfloat[Tanh]{\label{gfigur:5}\includegraphics[width=0.5\linewidth]{graph_tanh_521dot04.png}}
\subfloat[Swish]{\label{gfigur:6}\includegraphics[width=0.5\linewidth]{graph_swish_521dot04.png}} \label{fig-graph}
\end{figure}
For GGNN tasks, we have 1000 graphs and each graph has 5 nodes. The learning rate $\alpha$ is chosen as 0.005. As presented in Figure~\ref{fig-graph}, we can clearly see the training dynamics of the GNN and the efficiency of the algorithm. The curves are smooth, indicating that the training is very stable. By looking at the slopes of the curves, we find that for GGNN, $\mathbf{W}$ converges faster than $\mathbf{v}$, which is the opposite of what we observe in the NGNN task.
\section{Conclusion}
In this paper, we developed the first provably efficient algorithm for learning GNNs with one hidden layer for node information convolution. We investigated two types of GNNs, namely, node-level and graph-level GNNs. Our results provide a comprehensive framework and a set of tools for designing and analyzing GNNs. More importantly, our results only rely on mild conditions on the activation functions, which are satisfied by a wide range of activation functions used in practice. We constructed our training algorithm using the idea of approximate gradient descent, and proved that it recovers the true parameters of the teacher network at a linear convergence rate. How fast the algorithm converges depends on various parameters of the algorithm, which we also characterized analytically.
\newpage
\section*{Broader Impact}
The success of GNNs has been demonstrated empirically in a wide range of applications, including social networks, recommender systems, knowledge graphs, protein interface networks, and generative models. Our work is the first to theoretically understand why GNNs work well in practice, and our algorithm converges linearly to the underlying true parameters of the teacher network. Our results and the tools developed in this paper provide a general framework for developing further theoretical understanding of GNNs.
There are many benefits to using the proposed algorithm, such as guaranteeing convergence and learnability in decision-critical applications. This can help mitigate fairness, privacy, and safety risks. The potential risks of improving convergence and learnability include: (i) the risk of undue trust in models; (ii) since guaranteed learnability may encourage use of the algorithm by those with lower levels of domain or ML expertise, the risk of the model or its outputs being used incorrectly could increase.
\section{Proofs of the Main Results}
\subsection{General Results for NGNN and GGNN}
\begin{lemma}\label{graph_level_W_init}
Suppose the assumptions in Theorem~\ref{convergence_thm} hold. Then there exists $\eta_W$ such that
\begin{align}
\|\mathbf{W}_{t+1} - \mathbf{W}_\ast\|_2 - \frac{4\eta_W\left(1+\alpha\rho+\sqrt{1+\alpha\rho}\right)}{\rho}\le \frac{1}{\sqrt{1+\alpha\rho}}\left[ \|\mathbf{W}_{t} - \mathbf{W}_\ast\|_2-\frac{4\eta_W\left(1+\alpha\rho+\sqrt{1+\alpha\rho}\right)}{\rho}\right].
\end{align}
\end{lemma}
\begin{lemma}\label{graph_level_v_init}
Suppose the assumptions in Theorem~\ref{convergence_thm} hold. Then there exist $\sigma_m$, $\sigma_M$, $L$, and $\eta_v$ such that
\begin{align}
\norm{\mathbf{v}_{t+1}-\mathbf{v}_\ast}^2 \le \left(1-\frac{\alpha\sigma_m}{2}+3\sigma^2_M\alpha^2\right)\norm{\mathbf{v}_t-\mathbf{v}_\ast}^2 +\left(\frac{\alpha L^2}{\sigma_m}+3L^2\alpha^2\right)\norm{\mathbf{W}_t-\mathbf{W}_\ast}^2\norm{\mathbf{v}_\ast}^2+\left(\frac{2\alpha}{\sigma_m}+3\alpha^2\right)\eta_v^2.
\end{align}
\end{lemma}
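Lemmas~\ref{graph_level_W_init} and~\ref{graph_level_v_init} describe two coupled one-step recursions. As a numerical illustration, iterating these recursions with placeholder constants (all values below are illustrative, not from the paper) shows the errors contracting geometrically until they settle in a neighborhood whose size is set by $\eta_W$ and $\eta_v$:

```python
import math

# All constants below are illustrative placeholders, not values from the paper.
alpha, rho = 0.01, 0.5            # step size and inner-product lower bound
sigma_m, sigma_M, L = 0.5, 2.0, 1.0
eta_W, eta_v = 1e-3, 1e-3         # statistical error levels
v_star_norm = 1.0

w_err = 1.0        # tracks ||W_t - W_*||
v_err_sq = 1.0     # tracks ||v_t - v_*||^2

floor = 4 * eta_W * (1 + alpha * rho + math.sqrt(1 + alpha * rho)) / rho
for _ in range(5000):
    # W-recursion of Lemma graph_level_W_init: contraction toward `floor`.
    w_err = floor + (w_err - floor) / math.sqrt(1 + alpha * rho)
    # v-recursion of Lemma graph_level_v_init: contraction plus coupling to W.
    v_err_sq = ((1 - alpha * sigma_m / 2 + 3 * sigma_M**2 * alpha**2) * v_err_sq
                + (alpha * L**2 / sigma_m + 3 * L**2 * alpha**2) * w_err**2 * v_star_norm**2
                + (2 * alpha / sigma_m + 3 * alpha**2) * eta_v**2)
```

With these placeholder constants, both error sequences decay linearly and then linger near a small statistical floor, mirroring the "convergence up to a statistical error" behavior observed in the experiments.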
\subsection{Analysis for NGNN}
We first give a few definitions as follows
\begin{align}
\bar{\mathbf{G}}^W(\mathbf{W},\mathbf{v}) &= \E\left[{\mathbf{G}}^W(\mathbf{W},\mathbf{v}) \right] =\frac{1}{n}\left( \mathbf{W}\mathbf{v}\vb^\top-\mathbf{W}^\ast\mathbf{v}^\ast\mathbf{v}^\top\right), \\
\phi(\mathbf{W},\mathbf{W}^\prime) &= \E_{\mathbf{H}\sim \mathcal{N}(0,1)}\left[\frac{1}{n} \left( \sigma (\mathbf{D}^{-1}\mathbf{A}\mathbf{H}\mathbf{W})\right) ^\top\sigma (\mathbf{D}^{-1}\mathbf{A}\mathbf{H}\mathbf{W}^\prime)\right],\\
\bar{\mathbf{g}}^v(\mathbf{W},\mathbf{v}) &= \E\left[ \mathbf{g}^v(\mathbf{W},\mathbf{v}) \right]=\phi(\mathbf{W},\mathbf{W})\mathbf{v}-\phi(\mathbf{W},\mathbf{W}_\ast)\mathbf{v}_\ast \\
&=\phi(\mathbf{W},\mathbf{W})(\mathbf{v}-\mathbf{v}_\ast)+\left(\phi(\mathbf{W},\mathbf{W})-\phi(\mathbf{W},\mathbf{W}_\ast)\right)\mathbf{v}_\ast,
\\
\phi_{t,t} &= \phi(\mathbf{W}_t,\mathbf{W}_t), \phi_{t,\ast} = \phi(\mathbf{W}_t,\mathbf{W}_\ast),
\end{align}
\begin{align}
\bar{\mathbf{v}}_{t+1} = \mathbf{v}_t - \alpha\bar{\mathbf{g}}^v_t.
\end{align}
For ease of presentation, we replace the argument $(\mathbf{W},\mathbf{v})$ in the above definitions with the subscript $t$ where appropriate to denote evaluation at $(\mathbf{W}_t,\mathbf{v}_t)$.
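As a quick sanity check on the definitions above, the closed form of $\bar{\mathbf{G}}^W$ vanishes exactly at the teacher parameters $(\mathbf{W}_\ast, \mathbf{v}_\ast)$. A minimal sketch (the dimensions below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, d_out = 100, 2, 1   # n nodes; dimensions are illustrative

W_star = rng.standard_normal((d, d_out))
W_star /= np.linalg.norm(W_star)
v_star = rng.standard_normal(d_out)

def G_W_bar(W, v):
    """Population gradient w.r.t. W from the definition above:
    (1/n) * (W v v^T - W_* v_* v^T)."""
    return (W @ np.outer(v, v) - W_star @ np.outer(v_star, v)) / n

# The population gradient vanishes exactly at the teacher parameters.
print(np.linalg.norm(G_W_bar(W_star, v_star)))  # 0.0
```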
\begin{lemma}\label{node_level_lips}
If $n\ge \frac{8\ln \frac{2}{\delta}}{d d_{\min}}$, with probability at least $1-\delta$
\begin{align}
&\frac{\norm{\mathbf{G}^W(\mathbf{W},\mathbf{v}) -\mathbf{G}^W(\mathbf{W}^\prime,\mathbf{v}) }}{\norm{\mathbf{W}-\mathbf{W}^\prime}} \le D_0^2 \norm{\Xi^{-1}} (d+4d\bar{d}) ,\\
&\frac{\norm{\mathbf{G}^W(\mathbf{W},\mathbf{v}) -\mathbf{G}^W(\mathbf{W},\mathbf{v}^\prime) }}{\norm{\mathbf{v}-\mathbf{v}^\prime}} \le 4D_0\norm{\Xi^{-1}}(d+4d\bar{d})L_\sigma,\\
&\frac{\norm{\mathbf{g}^v(\mathbf{W},\mathbf{v})-\mathbf{g}^v(\mathbf{W}^\prime,\mathbf{v})}}{\norm{\mathbf{W}-\mathbf{W}^\prime}} \le 5(d+4d\bar{d})D_0L_\sigma + \frac{d+4d\bar{d}}{2}+ \nu^2,\\
&\frac{\norm{\mathbf{g}^v(\mathbf{W},\mathbf{v}) -\mathbf{g}^v(\mathbf{W},\mathbf{v}^\prime) }}{\norm{\mathbf{v}-\mathbf{v}^\prime}} \le {L_\sigma^2}(d+4d\bar{d}).
\end{align}
\end{lemma}
\begin{lemma}\label{node_level_eta}
If $\sqrt{n\log n}\ge \frac{3}{2}\log \frac{2}{\delta}$, with probability at least $1-\delta$
\begin{align}
&\norm{\mathbf{G}^W-\bar\mathbf{G}^W } \le \frac{4}{c}\sqrt{{\frac{\log n}{n}}} \norm{\Xi^{-1}}D_0 \left(D_0\frac{d}{d_{\min}}(1+|\sigma(0)|)+\sqrt{\frac{d}{d_{\min}}}\nu\right) ,\\
&\norm{\mathbf{g}^v-\bar\mathbf{g}^v } \le \frac{2}{c}\sqrt{{\frac{\log n}{n}}} \left(D_0\frac{d}{d_{\min}}(1+|\sigma(0)|)^2+\sqrt{\frac{d}{d_{\min}}}(1+|\sigma(0)|)\nu\right) ,
\end{align}
for some absolute constant $c$.
\end{lemma}
\subsection{Analysis for GGNN}
We first give a few definitions as follows
\begin{align}
\bar{\mathbf{G}}^W(\mathbf{W},\mathbf{v}) &=\E\left[ {\mathbf{G}}^W(\mathbf{W},\mathbf{v})\right] = \mathbf{W}\mathbf{v}\vb^\top-\mathbf{W}^\ast\mathbf{v}^\ast\mathbf{v}^\top,\\
\phi_j(\mathbf{W},\mathbf{W}^\prime) &= \E_{\mathbf{H}_{j}\sim \mathcal{N}(0,1)}\left[\left(\mathbf{a}_j \sigma (\mathbf{D}_j^{-1}\mathbf{A}_j\mathbf{H}_j\mathbf{W})\right)^\top\mathbf{a}_j \sigma (\mathbf{D}_j^{-1}\mathbf{A}_j\mathbf{H}_j\mathbf{W}^\prime)\right],\\
\phi(\mathbf{W},\mathbf{W}^\prime) &= \frac{1}{n}\sum_{j=1}^n \phi_j(\mathbf{W},\mathbf{W}^\prime),\\
\bar{\mathbf{g}}^v(\mathbf{W},\mathbf{v}) &= \E\left[ \mathbf{g}^v(\mathbf{W},\mathbf{v}) \right]= \frac{1}{n}\sum_{j=1}^n\phi_j(\mathbf{W},\mathbf{W})\mathbf{v}-\frac{1}{n}\sum_{j=1}^n\phi_j(\mathbf{W},\mathbf{W}_\ast)\mathbf{v}_\ast\\
&= \phi(\mathbf{W},\mathbf{W})\mathbf{v}-\phi(\mathbf{W},\mathbf{W}_\ast)\mathbf{v}_\ast\\
&=\phi(\mathbf{W},\mathbf{W})(\mathbf{v}-\mathbf{v}_\ast)+\left(\phi(\mathbf{W},\mathbf{W})-\phi(\mathbf{W},\mathbf{W}_\ast)\right)\mathbf{v}_\ast,\\
\phi_{t,t} &= \phi(\mathbf{W}_t,\mathbf{W}_t), \phi_{t,\ast} = \phi(\mathbf{W}_t,\mathbf{W}_\ast).
\end{align}
For ease of presentation, we replace the argument $(\mathbf{W},\mathbf{v})$ with the subscript $t$ where appropriate to denote evaluation at $(\mathbf{W}_t,\mathbf{v}_t)$.
\begin{lemma}\label{graph_level_lips}
If $\sum_{j=1}^{n}n_j \ge\frac{8\ln\frac{1}{\delta}}{d} $, with probability at least $1-\delta$
\begin{align}
&\frac{\norm{\mathbf{G}^W(\mathbf{W},\mathbf{v}) -\mathbf{G}^W(\mathbf{W}^\prime,\mathbf{v}) }}{\norm{\mathbf{W}-\mathbf{W}^\prime}} \le 2dD_0^2 \norm{\Xi^{-1}} n^2_{\max}, \\
&\frac{\norm{\mathbf{G}^W(\mathbf{W},\mathbf{v}) -\mathbf{G}^W(\mathbf{W},\mathbf{v}^\prime) }}{\norm{\mathbf{v}-\mathbf{v}^\prime}} \le 8D_0d\Xi^{-1}n^2_{\max}L_\sigma,\\
&\frac{\norm{\mathbf{g}^v(\mathbf{W},\mathbf{v})-\mathbf{g}^v(\mathbf{W}^\prime,\mathbf{v})}}{\norm{\mathbf{W}-\mathbf{W}^\prime}} \le 10dD_0L_\sigma n_{\max}^2+{d\sqrt{n^3_{\max}}}+{\sqrt{n_{\max}}}\nu^2,\\
&\frac{\norm{\mathbf{g}^v(\mathbf{W},\mathbf{v}) -\mathbf{g}^v(\mathbf{W},\mathbf{v}^\prime) }}{\norm{\mathbf{v}-\mathbf{v}^\prime}} \le 2d{L_\sigma^2}n_{\max}^2.
\end{align}
\end{lemma}
\begin{lemma}\label{graph_level_eta}
If $\sqrt{n\log n}\ge \frac{3}{2}\log \frac{2}{\delta}$, with probability at least $1-\delta$
\begin{align}
&\norm{\mathbf{G}^W-\bar\mathbf{G}^W } \le \frac{4}{c}\sqrt{\frac{\log n}{n}} \norm{\Xi^{-1}} \sqrt{n_{\max}} \left(\sqrt{n_{\max}}2(1+|\sigma(0)|)D_0+\nu\right),\\
&\norm{\mathbf{g}^v-\bar\mathbf{g}^v } \le \frac{2}{c}\sqrt{\frac{\log n}{n}} \sqrt{n_{\max}}2(1+|\sigma(0)|) \left( \sqrt{n_{\max}}2(1+|\sigma(0)|)D_0+\nu\right),
\end{align}
for some absolute constant $c$.
\end{lemma}
\subsection{Useful Lemmas and Results}
\begin{lemma}\label{node_lip}
If $n\ge \frac{8\ln \frac{1}{\delta}}{d d_{\min}}$ with $d_{\min}$ being the smallest node degree in the graph, then with probability at least $1-\delta$ for NGNN
\begin{align}
\frac{1}{n}\norm{ \mathbf{D}^{-1}\mathbf{A}\mathbf{H}}^2
\le d+4d\bar{d},
\end{align}
where $\bar{d}=\frac{\sum_{i=1}^n\frac{1}{d_i}}{n}$.
\end{lemma}
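The bound in Lemma~\ref{node_lip} can be checked numerically by Monte Carlo. The sketch below assumes the Frobenius norm and i.i.d.\ standard Gaussian node features $\mathbf{H}$; both are our assumptions for this illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 200, 2   # illustrative sizes

for _ in range(20):
    # Bernoulli(0.5) graph without self-loops (an assumption).
    A = np.triu(rng.binomial(1, 0.5, (n, n)), 1)
    A = A + A.T
    deg = A.sum(axis=1)
    assert deg.min() > 0             # isolated nodes are vanishingly unlikely here
    H = rng.standard_normal((n, d))  # i.i.d. standard Gaussian features (assumption)
    # (1/n) ||D^{-1} A H||_F^2 versus the bound d + 4 d \bar{d}.
    stat = np.linalg.norm(np.diag(1.0 / deg) @ A @ H, "fro") ** 2 / n
    bound = d + 4 * d * np.mean(1.0 / deg)
    assert stat <= bound
```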
\begin{lemma}\label{lip}
If $\sum_{j=1}^{n}n_j \ge\frac{8\ln\frac{1}{\delta}}{d} $, with probability at least $1-\delta$ for GGNN
\begin{align}
\frac{1}{n} \sum_{j=1}^{n} \norm{\mathbf{H}_{j}^\top \mathbf{A}_{j}^\top (\mathbf{D}_{j}^{-1})^\top}\norm{ \mathbf{a}_j}^2\norm{ \mathbf{D}_j^{-1}\mathbf{A}_j\mathbf{H}_j }\le 2dn^2_{\max}.
\end{align}
\end{lemma}
We next collect several concentration results for random variables; we omit their proofs, as these results are standard.
\begin{lemma}[Hoeffding Inequality for Chi-square Distribution]\label{chi_hoeffding}
Let $z_1,\ldots,z_n$ be i.i.d.\ $\mathcal{N}(0,1)$ random variables. Then, for all $t\in[0,1]$,
\begin{align}
P\left(\sum_{k=1}^{n}z_k^2 \le (t+1)n\right)\ge 1-\exp(-\frac{nt^2}{8}).
\end{align}
\end{lemma}
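The tail bound in Lemma~\ref{chi_hoeffding} can be verified empirically; the following sketch compares the empirical probability against the lemma's lower bound (the sample sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n, t, trials = 1000, 0.3, 2000   # illustrative sample sizes

# Empirical probability that sum_k z_k^2 <= (t+1) n for z_k ~ N(0,1).
sums = (rng.standard_normal((trials, n)) ** 2).sum(axis=1)
empirical = np.mean(sums <= (t + 1) * n)
bound = 1 - np.exp(-n * t**2 / 8)   # the lemma's lower bound

# The empirical probability dominates the bound, as the lemma predicts.
```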
\begin{lemma}\label{node_SE_concentration}
Let $X_1,\ldots,X_n$ be independent sub-exponential random variables such that $\E X_i=\mu_i$ and $X_i \in SE(\nu_i^2, \alpha_i)$. Then
\begin{align}
\sum_{i=1}^n(X_i-\mu_i) \in SE(\sum_{i=1}^n\nu_i^2, \max_i \alpha_i).
\end{align}
In particular, denote $\nu_\ast^2 = \sum_{i=1}^n\nu_i^2, \alpha_\ast = \max_i \alpha_i$. Then,
\begin{align}
P\left(\frac{1}{n} \sum_{i=1}^n(X_i-\mu_i) \ge t \right)\le \begin{cases}
\exp\left(-\frac{nt^2}{2\nu_\ast^2 }\right) , & \text{if}\ 0<nt<\frac{\nu_\ast^2 }{\alpha_\ast} \\
\exp\left(-\frac{nt}{2\alpha_\ast}\right), & \text{otherwise}
\end{cases}.
\end{align}
\end{lemma}
\begin{lemma}[\cite{honorio2014tight}]\label{node_SE}
Let $s$ be a sub-Gaussian variable with parameter $\sigma_s$ and mean $\mu_s = \mathbb{E} s$. Then, $s^2\in SE(4\sigma_s^2,4\sigma_s^2)$.
\end{lemma}
\begin{lemma}[General Hoeffding Inequality]
A random variable $x$ is called sub-Gaussian if
\begin{align}
P(|x|\ge t)\le 2e^{-ct^2},
\end{align}
for some $c>0$, in which case its sub-Gaussian norm $\|x\|_{\psi_2}$ is finite. Let $x_1,\ldots,x_n$ be zero-mean independent sub-Gaussian random variables. Then the general version of Hoeffding's inequality states that
\begin{align}
P\left(|\sum_{i=1}^{n}x_i|\ge t \right) \le 2\exp\left(-\frac{ct^2}{\sum_{i=1}^{n}\|x_i\|^2_{\psi_2}}\right).
\end{align}
\end{lemma}
\begin{result}\label{eigen}
For node-level GNN tasks,
based on Lemma \ref{node_lip}, the largest eigenvalue of $\phi_{t,t}$ can be well bounded such that $\sigma_M \le (d+4d\bar{d})L_\sigma^2 $. Similarly, we have $L\le (d+4d\bar{d})L_\sigma$.
By Theorem 2.2 in \cite{yaskov2014}, almost surely, the smallest eigenvalues of $\phi_{t,t}$ and $\phi_{t,\ast}$ are lower bounded by absolute constants, which we denote by $\sigma_m$ and $\sigma^\prime_m$, respectively, and which satisfy $\sigma_m \le L_\sigma^2d_{out}$ and $\sigma^\prime_m \le L_\sigma^2d_{out}$.
For graph-level GNN tasks, based on Lemma \ref{lip}, the largest eigenvalue of $\phi_{t,t}$ can be well bounded such that $\sigma_M \le 2dL_\sigma^2 n_{\max}^2$. Similarly, we have {$L \le 2dn^2_{\max}L_\sigma$}.
By Theorem 2.2 in \cite{yaskov2014}, almost surely, the smallest eigenvalues of $\phi_{t,t}$ and $\phi_{t,\ast}$ are lower bounded by absolute constants, which we denote by $\sigma_m$ and $\sigma^\prime_m$, respectively, and which satisfy $\sigma_m \le L_\sigma^2d_{out}$ and $\sigma^\prime_m \le L_\sigma^2d_{out}$.
We will use the above bounds throughout the paper, and the derived results still hold.
\end{result}
\subsection{Proofs of Lemmas and Theorems}
In this section, we provide proofs of the above lemmas and the main theorems.
\subsubsection*{Proof of Lemma~\ref{graph_level_W_init}}
\begin{proof}
Define
\begin{align}
\bar{\mathbf{U}}_{t+1} = \mathbf{W}_t - \alpha\bar \mathbf{G}^W_t = \mathbf{W}_t(\mathbf{I}-\alpha \mathbf{v}_t\mathbf{v}_t^\top)+\alpha \mathbf{W}^\ast\mathbf{v}^\ast\mathbf{v}_t^\top,
\end{align}
and
\begin{align}
\hat{\mathbf{U}} = \bar{\mathbf{U}}_{t+1}/\|\bar{\mathbf{U}}_{t+1}\|_2.
\end{align}
If $\mathbf{I}-\alpha \mathbf{v}_t\mathbf{v}_t^\top \ge 0 $ and $\mathbf{v}_t^\top\mathbf{v}^\ast \ge \rho$, then $\hat{\mathbf{U}}$ is closer to $\mathbf{W}^\ast$ than $\mathbf{U}^\prime$, i.e., $\|\hat{\mathbf{U}}-\mathbf{W}^\ast\|_2 \le \|{\mathbf{U}^\prime}-\mathbf{W}^\ast\|_2$, where $\mathbf{U}^\prime = \frac{\mathbf{W}_t + \alpha \rho \mathbf{W}^\ast}{\|\mathbf{W}_t + \alpha \rho \mathbf{W}^\ast\|_2}$. Therefore,
\begin{align}
1-\frac{1}{2}\Tr\left(\left(\mathbf{W}_{\ast}-\hat{\mathbf{U}}\right)^\top \left(\mathbf{W}_{\ast}-\hat{\mathbf{U}}\right) \right) \ge 1-\frac{1}{2}\Tr\left(\left(\mathbf{W}_{\ast}-{\mathbf{U}}^\prime\right)^\top \left(\mathbf{W}_{\ast}-{\mathbf{U}}^\prime\right) \right).
\end{align}
Thus, we have
\begin{align}
\Tr\left(\frac{1}{2}\mathbf{W}^\top_\ast\mathbf{W}_\ast+\frac{1}{2}\hat{\mathbf{U}}^\top\hat{\mathbf{U}}-\frac{1}{2}\left(\mathbf{W}_{\ast}-\hat{\mathbf{U}}\right)^\top \left(\mathbf{W}_{\ast}-\hat{\mathbf{U}}\right) \right) \\
\ge \Tr\left(\frac{1}{2}\mathbf{W}^\top_\ast\mathbf{W}_\ast+\frac{1}{2}{\mathbf{U}^\prime}^\top{\mathbf{U}}^\prime-\frac{1}{2}\left(\mathbf{W}_{\ast}-{\mathbf{U}}^\prime\right)^\top \left(\mathbf{W}_{\ast}-{\mathbf{U}}^\prime\right) \right),
\end{align}
which leads to
\begin{align}
&\Tr\left(\frac{1}{2}\mathbf{W}^\top_\ast\mathbf{W}_\ast+\frac{1}{2}\hat{\mathbf{U}}^\top\hat{\mathbf{U}}-\frac{1}{2}\left(\mathbf{W}_{\ast}-\hat{\mathbf{U}}\right)^\top \left(\mathbf{W}_{\ast}-\hat{\mathbf{U}}\right) \right) \\
&\ge \Tr\left(\mathbf{W}^\top_\ast {\mathbf{U}}^\prime \right) = \Tr\left(\mathbf{W}^\top_\ast \frac{\mathbf{W}_t + \alpha \rho \mathbf{W}^\ast}{\|\mathbf{W}_t + \alpha \rho \mathbf{W}^\ast\|_2} \right) \\
&\ge \Tr\left(\mathbf{W}^\top_\ast \frac{\mathbf{W}_t + \alpha \rho \mathbf{W}^\ast}{1 + \alpha \rho } \right)\\
&=\Tr\left(\frac{\alpha \rho \mathbf{W}^\top_\ast \mathbf{W}^\ast}{1+\alpha \rho }+ \frac{1}{1+\alpha\rho}\left(\frac{1}{2}\mathbf{W}^\top_\ast\mathbf{W}_\ast+\frac{1}{2}{\mathbf{W}_t}^\top\mathbf{W}_t-\frac{1}{2}\left(\mathbf{W}_{\ast}-\mathbf{W}_t\right)^\top \left(\mathbf{W}_{\ast}-\mathbf{W}_t\right)\right)\right).
\end{align}
Hence, we have
\begin{align}
& \Tr\left(\frac{1}{2}\mathbf{W}^\top_\ast\mathbf{W}_\ast+\frac{1}{2}\hat{\mathbf{U}}^\top\hat{\mathbf{U}}-\frac{1}{2}\left(\mathbf{W}_{\ast}-\hat{\mathbf{U}}\right)^\top \left(\mathbf{W}_{\ast}-\hat{\mathbf{U}}\right) \right) \\
& \ge \Tr\left(\frac{\alpha \rho \mathbf{W}^\top_\ast \mathbf{W}^\ast}{1+\alpha \rho }+ \frac{1}{1+\alpha\rho}\left(\frac{1}{2}\mathbf{W}^\top_\ast\mathbf{W}_\ast+\frac{1}{2}{\mathbf{W}_t}^\top\mathbf{W}_t-\frac{1}{2}\left(\mathbf{W}_{\ast}-\mathbf{W}_t\right)^\top \left(\mathbf{W}_{\ast}-\mathbf{W}_t\right)\right)\right).
\end{align}
Rearranging the terms yields
\begin{align}
&\Tr\left( \left(\mathbf{W}_{\ast}-\hat{\mathbf{U}}\right)^\top \left(\mathbf{W}_{\ast}-\hat{\mathbf{U}}\right) \right) \\
&\le \Tr\left(\mathbf{W}^\top_\ast\mathbf{W}_\ast + \hat{\mathbf{U}}^\top\hat{\mathbf{U}} -\frac{1+2\alpha\rho}{1+\alpha\rho}\mathbf{W}^\top_\ast\mathbf{W}_\ast - \frac{1}{1+\alpha\rho}{\mathbf{W}_t}^\top\mathbf{W}_t+\frac{1}{1+\alpha\rho}\left(\mathbf{W}_{\ast}-\mathbf{W}_t\right)^\top \left(\mathbf{W}_{\ast}-\mathbf{W}_t\right)\right),
\end{align}
which is equivalent to
\begin{align}
& \|\mathbf{W}_{\ast}-\hat{\mathbf{U}}\|_2^2 \le 1+1-\frac{1+2\alpha\rho}{1+\alpha\rho}-\frac{1}{1+\alpha\rho}+\frac{1}{1+\alpha\rho}\|\mathbf{W}_{\ast}-\mathbf{W}_t\|_2^2 = \frac{1}{1+\alpha\rho}\|\mathbf{W}_{\ast}-\mathbf{W}_t\|_2^2.
\end{align}
Therefore, we obtain
\begin{align}
\|\mathbf{W}_{\ast}-\hat{\mathbf{U}}\|_2 \le \frac{\|\mathbf{W}_{\ast}-\mathbf{W}_t\|_2}{\sqrt{1+\alpha\rho}}.
\end{align}
As we have {in Lemmas~\ref{node_level_eta} and~\ref{graph_level_eta} that
\begin{align}
\|\mathbf{U}_{t+1}-\bar{\mathbf{U}}_{t+1}\|_2 = \alpha \norm{\mathbf{G}^W_t-\bar{\mathbf{G}}_t^W} \le \alpha \eta_W,
\end{align}
} it follows that
\begin{align}
\|\mathbf{W}_{t+1} - \hat{\mathbf{U}}\|_2 \le \frac{2\alpha \eta_W}{\|\bar{\mathbf{U}}_{t+1}\|_2}.
\end{align}
{
By Theorem~\ref{training_dynamics}, we have $\Tr(\mathbf{W}^\top_\ast\mathbf{W}_t) \ge 0$. Hence,
\begin{align}
\|\bar{\mathbf{U}}_{t+1}\|_2 = \|\mathbf{W}_t(\mathbf{I}-\alpha \mathbf{v}_t\mathbf{v}_t^\top)+\alpha \mathbf{W}^\ast\mathbf{v}^\ast\mathbf{v}_t^\top\|_2 \ge 1-\alpha\|\mathbf{v}_t\|^2_2 \ge 1-\alpha D_0^2 \ge \frac{1}{2}.
\end{align}
}
Thus, it yields
\begin{align}
\|\mathbf{W}_{t+1} - \hat{\mathbf{U}}\|_2 \le {4\alpha \eta_W}.
\end{align}
By triangle inequality,
\begin{align}
\|\mathbf{W}_{t+1} - \mathbf{W}_\ast\|_2 \le \|\mathbf{W}_{\ast} - \hat{\mathbf{U}}\|_2+\|\mathbf{W}_{t+1} - \hat{\mathbf{U}}\|_2 \le {4\alpha \eta_W} + \frac{\|\mathbf{W}_{\ast}-\mathbf{W}_t\|_2}{\sqrt{1+\alpha\rho}},
\end{align}
which results in
\begin{align}
\|\mathbf{W}_{t+1} - \mathbf{W}_\ast\|_2 - \frac{4\eta_W\left(1+\alpha\rho+\sqrt{1+\alpha\rho}\right)}{\rho}\le \frac{1}{\sqrt{1+\alpha\rho}}\left[ \|\mathbf{W}_{t} - \mathbf{W}_\ast\|_2-\frac{4\eta_W\left(1+\alpha\rho+\sqrt{1+\alpha\rho}\right)}{\rho}\right].
\end{align}
\end{proof}
\subsubsection*{Proof of Lemma~\ref{graph_level_v_init}}
\begin{proof}
First, we define \begin{align}
\bar{\mathbf{v}}_{t+1} = \mathbf{v}_t - \alpha\bar{\mathbf{g}}^v_t.
\end{align}
Then, it directly follows
\begin{align}
\norm{\bar{\mathbf{v}}_{t+1}-\mathbf{v}_\ast}=\norm{\left(\mathbf{I}-\alpha \phi_{t,t}\right)\left(\mathbf{v}_t-\mathbf{v}_\ast\right)-\alpha\left(\phi_{t,t}-\phi_{t,\ast}\right)\mathbf{v}_\ast}.
\end{align}
From Lemmas~\ref{node_level_eta} and~\ref{graph_level_eta}, we have
\begin{align}
\norm{\mathbf{v}_{t+1}-\bar{\mathbf{v}}_{t+1}} = \alpha \norm{\mathbf{g}_t^v-\bar{\mathbf{g}}_t^v} \le \alpha \eta_v.
\end{align}
By triangle inequality, we have
\begin{align}
\norm{\mathbf{v}_{t+1}-\mathbf{v}_\ast} \le \norm{\mathbf{I}-\alpha \phi_{t,t}}\norm{\mathbf{v}_t-\mathbf{v}_\ast} + \alpha\norm{\phi_{t,t}-\phi_{t,\ast}}\norm{\mathbf{v}_\ast} + \alpha \eta_v.
\end{align}
By the Lipschitz property of $\phi$, we can write
\begin{align}\label{phi_L}
\norm{\phi_{t,t}-\phi_{t,\ast}} \le L\norm{\mathbf{W}_t-\mathbf{W}_\ast},
\end{align}
where $L$ is given in Result~\ref{eigen}.
Thus, we can write
\begin{align}
\norm{\mathbf{v}_{t+1}-\mathbf{v}_\ast} \le \norm{\mathbf{I}-\alpha \phi_{t,t}}\norm{\mathbf{v}_t-\mathbf{v}_\ast} + \alpha L\norm{\mathbf{W}_t-\mathbf{W}_\ast}\norm{\mathbf{v}_\ast} + \alpha \eta_v.
\end{align}
Additionally, we also have
\begin{align}
(\mathbf{v}_t-\mathbf{v}_\ast)^\top\mathbf{g}_t^v &\ge (\mathbf{v}_t-\mathbf{v}_\ast)^\top\bar\mathbf{g}_t^v-\eta_v\norm{\mathbf{v}_t-\mathbf{v}_\ast}\\
&=(\mathbf{v}_t-\mathbf{v}_\ast)^\top\phi_{t,t}(\mathbf{v}_t-\mathbf{v}_\ast)+(\mathbf{v}_t-\mathbf{v}_\ast)^\top\left(\phi_{t,t} -\phi_{t,\ast} \right)\mathbf{v}_\ast-\eta_v\norm{\mathbf{v}_t-\mathbf{v}_\ast}\\
& \ge \sigma_m\norm{\mathbf{v}_t-\mathbf{v}_\ast}^2 - L\norm{\mathbf{W}_t-\mathbf{W}_\ast}\norm{\mathbf{v}_t-\mathbf{v}_\ast}\norm{\mathbf{v}_\ast}-\eta_v\norm{\mathbf{v}_t-\mathbf{v}_\ast}\\
& \ge \sigma_m\norm{\mathbf{v}_t-\mathbf{v}_\ast}^2-\frac{1}{2}\left(\frac{L^2}{\sigma_m}\norm{\mathbf{W}_t-\mathbf{W}_\ast}^2\norm{\mathbf{v}_\ast}^2+\sigma_m\norm{\mathbf{v}_t-\mathbf{v}_\ast}^2\right)-\eta_v\norm{\mathbf{v}_t-\mathbf{v}_\ast}\\
&\ge\frac{\sigma_m}{2} \norm{\mathbf{v}_t-\mathbf{v}_\ast}^2-\frac{L^2}{2\sigma_m}\norm{\mathbf{W}_t-\mathbf{W}_\ast}^2\norm{\mathbf{v}_\ast}^2-\frac{1}{2}\left(\frac{\sigma_m}{2}\norm{\mathbf{v}_t-\mathbf{v}_\ast}^2+\frac{2}{\sigma_m}\eta_v^2\right)\\
&=\frac{\sigma_m}{4} \norm{\mathbf{v}_t-\mathbf{v}_\ast}^2-\frac{L^2}{2\sigma_m}\norm{\mathbf{W}_t-\mathbf{W}_\ast}^2\norm{\mathbf{v}_\ast}^2-\frac{1}{\sigma_m}\eta_v^2.
\end{align}
The second inequality follows from the fact that $\sigma_m$ lower bounds the smallest eigenvalue of the matrix $\phi_{t,t}$, together with \eqref{phi_L}.
Therefore, we have
\begin{align}\label{vt_intmd}
&\norm{\mathbf{v}_{t+1}-\mathbf{v}_\ast}^2 = \norm{\mathbf{v}_t-\alpha\mathbf{g}_t^v-\mathbf{v}_\ast}^2=\norm{\mathbf{v}_t-\mathbf{v}_\ast}^2-2\alpha(\mathbf{v}_t-\mathbf{v}_\ast)^\top\mathbf{g}_t^v+\alpha^2\norm{\mathbf{g}_t^v}^2\nonumber\\
&\le \norm{\mathbf{v}_t-\mathbf{v}_\ast}^2 - \frac{\alpha\sigma_m}{2} \norm{\mathbf{v}_t-\mathbf{v}_\ast}^2+\frac{\alpha L^2}{\sigma_m}\norm{\mathbf{W}_t-\mathbf{W}_\ast}^2\norm{\mathbf{v}_\ast}^2+\frac{2\alpha}{\sigma_m}\eta_v^2 +\alpha^2\norm{\mathbf{g}_t^v}^2.
\end{align}
Now, we can write
\begin{align}
\norm{\mathbf{g}_t^v} &\le \norm{\bar \mathbf{g}_t^v} +\eta_v \le \norm{\phi_{t,t}(\mathbf{v}_t-\mathbf{v}_\ast)+\left(\phi_{t,t} -\phi_{t,\ast} \right)\mathbf{v}_\ast}+\eta_v\\
&\le \norm{\phi_{t,t}(\mathbf{v}_t-\mathbf{v}_\ast)}+\norm{\left(\phi_{t,t} -\phi_{t,\ast} \right)\mathbf{v}_\ast}+\eta_v\\
&\le \sigma_M \norm{\mathbf{v}_t-\mathbf{v}_\ast} + L \norm{\mathbf{W}_t-\mathbf{W}_\ast}\norm{\mathbf{v}_\ast}+\eta_v,
\end{align}
where $\sigma_M$ is the largest non-negative eigenvalue of the matrix $\phi_{t,t}$.
Hence, it results in
\begin{align}
\norm{\mathbf{g}_t^v}^2\le 3\sigma^2_M \norm{\mathbf{v}_t-\mathbf{v}_\ast}^2 + 3L^2 \norm{\mathbf{W}_t-\mathbf{W}_\ast}^2\norm{\mathbf{v}_\ast}^2+3\eta_v^2.
\end{align}
Plugging into the inequality \eqref{vt_intmd} provides
\begin{align}
\norm{\mathbf{v}_{t+1}-\mathbf{v}_\ast}^2 \le \left(1-\frac{\alpha\sigma_m}{2}+3\sigma^2_M\alpha^2\right)\norm{\mathbf{v}_t-\mathbf{v}_\ast}^2 +\left(\frac{\alpha L^2}{\sigma_m}+3L^2\alpha^2\right)\norm{\mathbf{W}_t-\mathbf{W}_\ast}^2\norm{\mathbf{v}_\ast}^2+\left(\frac{2\alpha}{\sigma_m}+3\alpha^2\right)\eta_v^2.
\end{align}
\end{proof}
\subsubsection*{Proof of Lemma~\ref{node_level_lips}}
First, we can write
\begin{align}
&\norm{\mathbf{G}^W(\mathbf{W},\mathbf{v}) -\mathbf{G}^W(\mathbf{W}^\prime,\mathbf{v}) } =\norm{ \frac{1}{n} \Xi^{-1} \mathbf{H}^\top \mathbf{A}^\top (\mathbf{D}^{-1})^\top \left(\sigma (\mathbf{D}^{-1}\mathbf{A}\mathbf{H} \mathbf{W})-\sigma (\mathbf{D}^{-1}\mathbf{A}\mathbf{H} \mathbf{W}^\prime) \right)\mathbf{v} \mathbf{v}^\top}\\
&\le \frac{1}{n} \norm{\Xi^{-1} } \norm{\mathbf{H}^\top \mathbf{A}^\top (\mathbf{D}^{-1})^\top}\norm{ \sigma (\mathbf{D}^{-1}\mathbf{A}\mathbf{H}\mathbf{W})-\sigma (\mathbf{D}^{-1}\mathbf{A}\mathbf{H} \mathbf{W}^\prime) }\norm{\mathbf{v} }^2\\
&\le \frac{1}{n} \norm{\Xi^{-1} } \norm{\mathbf{H}^\top \mathbf{A}^\top (\mathbf{D}^{-1})^\top}\norm{ \mathbf{D}^{-1}\mathbf{A}\mathbf{H} \mathbf{W}-\mathbf{D}^{-1}\mathbf{A}\mathbf{H} \mathbf{W}^\prime }\norm{\mathbf{v} }^2.
\end{align}
Then, using the result from Lemma~\ref{node_lip}, if $n\ge \frac{8\ln \frac{1}{\delta}}{d d_{\min}}$, with probability at least $1-\delta$,
\begin{align}
\norm{\mathbf{G}^W(\mathbf{W},\mathbf{v}) -\mathbf{G}^W(\mathbf{W}^\prime,\mathbf{v}) } &\le D_0^2 \norm{\Xi^{-1}} \frac{1}{n}\norm{ \mathbf{D}^{-1}\mathbf{A}\mathbf{H}}^2\norm{\mathbf{W}- \mathbf{W}^\prime} \\
&\le D_0^2 \norm{\Xi^{-1}} \norm{\mathbf{W}- \mathbf{W}^\prime}(d+4d\bar{d}) .
\end{align}
Thus, we have the desired result as
\begin{align}
\frac{\norm{\mathbf{G}^W(\mathbf{W},\mathbf{v}) -\mathbf{G}^W(\mathbf{W}^\prime,\mathbf{v}) }}{\norm{\mathbf{W}-\mathbf{W}^\prime}} \le D_0^2 \norm{\Xi^{-1}} (d+4d\bar{d}) .
\end{align}
Next, if $n\ge \frac{8\ln \frac{1}{\delta}}{d d_{\min}}$, then with probability at least $1-\delta$,
\begin{align}
&\norm{\mathbf{G}^W(\mathbf{W},\mathbf{v}) -\mathbf{G}^W(\mathbf{W},\mathbf{v}^\prime) } =\norm{ \frac{1}{n} \Xi^{-1} \mathbf{H}^\top \mathbf{A}^\top (\mathbf{D}^{-1})^\top \sigma (\mathbf{D}^{-1}\mathbf{A}\mathbf{H} \mathbf{W})\left(\mathbf{v} \mathbf{v}^\top-\mathbf{v}^\prime \mathbf{v}^{\prime\top }\right)}\\
&\le L_\sigma \frac{1}{n} \norm{\Xi^{-1}} \norm{\mathbf{H}^\top \mathbf{A}^\top (\mathbf{D}^{-1})^\top}\norm{ \mathbf{D}^{-1}\mathbf{A}\mathbf{H} \mathbf{W} }\norm{\mathbf{v} \mathbf{v}^\top-\mathbf{v}^\prime \mathbf{v}^{\prime\top }}\\
&\le \norm{\Xi^{-1}}(d+4d\bar{d})L_\sigma\norm{\mathbf{v} \mathbf{v}^\top-\mathbf{v}^\prime \mathbf{v}^{\prime\top }} \\
& \le \norm{\Xi^{-1}}(d+4d\bar{d})L_\sigma \left(\norm{(\mathbf{v}+\mathbf{v}^\prime)(\mathbf{v}-\mathbf{v}^\prime)^\top}+\norm{\mathbf{v}^\prime\mathbf{v}^\top-\mathbf{v}\vb^{\prime\top}}\right) \\
& \le \norm{\Xi^{-1}}(d+4d\bar{d})L_\sigma \left(\norm{(\mathbf{v}+\mathbf{v}^\prime)(\mathbf{v}-\mathbf{v}^\prime)^\top}+\norm{(\mathbf{v}^{\prime}-\mathbf{v})\mathbf{v}^\top-\mathbf{v}(\mathbf{v}^{\prime}-\mathbf{v})^\top}\right)\\
&\le \norm{\Xi^{-1}}(d+4d\bar{d})L_\sigma \left(\norm{(\mathbf{v}+\mathbf{v}^\prime)}\norm{\mathbf{v}-\mathbf{v}^\prime}+2\norm{\mathbf{v}^{\prime}-\mathbf{v}}\norm{\mathbf{v}}\right)\\
& \le 4D_0\norm{\Xi^{-1}}(d+4d\bar{d})L_\sigma\norm{\mathbf{v}-\mathbf{v}^\prime}.
\end{align}
Thus, we obtain
\begin{align}
\frac{\norm{\mathbf{G}^W(\mathbf{W},\mathbf{v}) -\mathbf{G}^W(\mathbf{W},\mathbf{v}^\prime) }}{\norm{\mathbf{v}-\mathbf{v}^\prime}} \le 4D_0\norm{\Xi^{-1}}(d+4d\bar{d})L_\sigma.
\end{align}
To show further results, we first write
\begin{align}
&g_1(\mathbf{W})= -\frac{1}{n} \sigma(\mathbf{W}^\top \mathbf{H}^\top \mathbf{A}^\top (\mathbf{D}^{-1})^\top) \sigma (\mathbf{D}^{-1}\mathbf{A}\mathbf{H} \mathbf{W}_\ast)\mathbf{v}_\ast,\\
&g_2(\mathbf{W}) = -\frac{1}{n} \sigma(\mathbf{W}^\top \mathbf{H}^\top \mathbf{A}^\top (\mathbf{D}^{-1})^\top) \mathbf{e},
\\
&g_3(\mathbf{W})= \frac{1}{n} \sigma(\mathbf{W}^\top \mathbf{H}^\top \mathbf{A}^\top (\mathbf{D}^{-1})^\top) \sigma (\mathbf{D}^{-1}\mathbf{A}\mathbf{H} \mathbf{W})\mathbf{v}.
\end{align}
Then, we can express
\begin{align}
\norm{\mathbf{g}^v(\mathbf{W},\mathbf{v})-\mathbf{g}^v(\mathbf{W}^\prime,\mathbf{v})}=\norm{g_1(\mathbf{W}) -g_1(\mathbf{W}^\prime) +g_2(\mathbf{W}) -g_2(\mathbf{W}^\prime)+ g_3(\mathbf{W})- g_3(\mathbf{W}^\prime) }.
\end{align}
Additionally, we define
\begin{align}
g_3^\prime(\mathbf{W}) = \sigma(\mathbf{W}^\top \mathbf{H}^\top \mathbf{A}^\top (\mathbf{D}^{-1})^\top) \sigma (\mathbf{D}^{-1}\mathbf{A}\mathbf{H} \mathbf{W}).
\end{align}
Therefore, we obtain
\begin{align}
&\frac{\norm{g^\prime_3(\mathbf{W}) -g^\prime_3(\mathbf{W}^\prime)} }{\norm{\mathbf{W}-\mathbf{W}^\prime}} \le \frac{\norm{g^\prime_3(\mathbf{W}) -g^\prime_3(\mathbf{W}^\prime)} }{\norm{\sigma (\mathbf{D}^{-1}\mathbf{A}\mathbf{H} \mathbf{W})-\sigma (\mathbf{D}^{-1}\mathbf{A}\mathbf{H} \mathbf{W}^\prime)}}\frac{\norm{ \sigma (\mathbf{D}^{-1}\mathbf{A}\mathbf{H} \mathbf{W})-\sigma (\mathbf{D}^{-1}\mathbf{A}\mathbf{H} \mathbf{W}^\prime)}}{\norm{\mathbf{W}-\mathbf{W}^\prime}} \\
&\le2 \left(\norm{ \sigma (\mathbf{D}^{-1}\mathbf{A}\mathbf{H} \mathbf{W})}+\norm{\sigma (\mathbf{D}^{-1}\mathbf{A}\mathbf{H} \mathbf{W}^\prime)}\right) {\norm{ \mathbf{D}^{-1}\mathbf{A}\mathbf{H} }} \\
&\le 4L_\sigma \norm{ \mathbf{D}^{-1}\mathbf{A}\mathbf{H} }\norm{ \mathbf{D}^{-1}\mathbf{A}\mathbf{H} }.
\end{align}
Thus, it results in
\begin{align}
\frac{\norm{g_1(\mathbf{W}) -g_1(\mathbf{W}^\prime)} }{\norm{\mathbf{W}-\mathbf{W}^\prime}} \le L_\sigma \frac{1}{n} \norm{\mathbf{H}^\top \mathbf{A}^\top (\mathbf{D}^{-1})^\top}\norm{ \mathbf{D}^{-1}\mathbf{A}\mathbf{H} }\norm{\mathbf{v}_\ast},
\end{align}
\begin{align}
\frac{\norm{g_3(\mathbf{W}) -g_3(\mathbf{W}^\prime)} }{\norm{\mathbf{W}-\mathbf{W}^\prime}} \le \frac{\norm{\mathbf{v}}}{n} 4L_\sigma \norm{ \mathbf{D}^{-1}\mathbf{A}\mathbf{H} }\norm{ \mathbf{D}^{-1}\mathbf{A}\mathbf{H} }.
\end{align}
Therefore, if $n\ge \frac{8\ln \frac{1}{\delta}}{d d_{\min}}$, then with probability at least $1-\delta$
\begin{align}
\frac{\norm{g_3(\mathbf{W}) -g_3(\mathbf{W}^\prime)}+\norm{g_1(\mathbf{W}) -g_1(\mathbf{W}^\prime)} }{\norm{\mathbf{W}-\mathbf{W}^\prime}} \le 5(d+4d\bar{d})D_0L_\sigma .
\end{align}
We also have
\begin{align}
&\frac{\norm{g_2(\mathbf{W}) -g_2(\mathbf{W}^\prime)} }{\norm{\mathbf{W}-\mathbf{W}^\prime}} = \frac{\norm{\frac{1}{n} \left[\sigma(\mathbf{W}^\top \mathbf{H}^\top \mathbf{A}^\top (\mathbf{D}^{-1})^\top)-\sigma(\mathbf{W}^{\prime\top} \mathbf{H}^\top \mathbf{A}^\top (\mathbf{D}^{-1})^\top)\right]\mathbf{e}} }{\norm{\mathbf{W}-\mathbf{W}^\prime}}\\
& \le \frac{\frac{1}{n} \norm{ (\mathbf{W}^\top-\mathbf{W}^{\prime\top}) \mathbf{H}^\top \mathbf{A}^\top (\mathbf{D}^{-1})^\top}\norm{\mathbf{e}} }{\norm{\mathbf{W}^\top-\mathbf{W}^{\prime\top}}}\\
&\le {\frac{1}{n} \left( \norm{ \mathbf{H}^\top \mathbf{A}^\top (\mathbf{D}^{-1})^\top} \norm{\mathbf{e}} \right) }\\
& \le {\frac{1}{2n} \left( \norm{ \mathbf{H}^\top \mathbf{A}^\top (\mathbf{D}^{-1})^\top}^2+ \norm{\mathbf{e}}^2 \right) }.
\end{align}
According to Lemma~\ref{node_lip}, if $n\ge \frac{8\ln \frac{2}{\delta}}{d d_{\min}}$, with probability at least $1-\frac{\delta}{2}$
\begin{align}
\frac{1}{2n} \norm{ \mathbf{H}^\top \mathbf{A}^\top (\mathbf{D}^{-1})^\top}^2 \le \frac{d+4d\bar{d}}{2},
\end{align}
and with probability at least $1-\frac{\delta}{2}$
\begin{align}
\frac{1}{2n} \norm{\mathbf{e}}^2 \le \nu^2.
\end{align}
By union bound, with probability at least $1-{\delta}$
\begin{align}
{\frac{1}{2n} \left( \norm{ \mathbf{H}^\top \mathbf{A}^\top (\mathbf{D}^{-1})^\top}^2+ \norm{\mathbf{e}}^2 \right) } \le \frac{d+4d\bar{d}}{2}+ \nu^2.
\end{align}
Thus, if $n\ge \frac{8\ln \frac{2}{\delta}}{d d_{\min}}$, with probability at least $1-{\delta}$,
\begin{align}
\frac{\norm{g_2(\mathbf{W}) -g_2(\mathbf{W}^\prime)} }{\norm{\mathbf{W}-\mathbf{W}^\prime}} \le \frac{d+4d\bar{d}}{2}+ \nu^2.
\end{align}
Therefore, if $n\ge \frac{8\ln \frac{2}{\delta}}{d d_{\min}}$, with probability at least $1-\delta$
\begin{align}
\frac{\norm{\mathbf{g}^v(\mathbf{W},\mathbf{v})-\mathbf{g}^v(\mathbf{W}^\prime,\mathbf{v})}}{\norm{\mathbf{W}-\mathbf{W}^\prime}} \le 5(d+4d\bar{d})D_0L_\sigma + \frac{d+4d\bar{d}}{2}+ \nu^2.
\end{align}
At last, we can write
\begin{align}
&\frac{\norm{\mathbf{g}^v(\mathbf{W},\mathbf{v}) -\mathbf{g}^v(\mathbf{W},\mathbf{v}^\prime) }}{\norm{\mathbf{v}-\mathbf{v}^\prime}} = \frac{\norm{g_3(\mathbf{W},\mathbf{v})-g_3(\mathbf{W},\mathbf{v}^\prime)}}{\norm{\mathbf{v}-\mathbf{v}^\prime}} \\
&\le \frac{L_\sigma^2}{n} \norm{ \mathbf{D}^{-1}\mathbf{A}\mathbf{H} }\norm{ \mathbf{D}^{-1}\mathbf{A}\mathbf{H} }.
\end{align}
Thus, if $n\ge \frac{8\ln \frac{1}{\delta}}{d d_{\min}}$, with probability at least $1-\delta$,
\begin{align}
\frac{\norm{\mathbf{g}^v(\mathbf{W},\mathbf{v}) -\mathbf{g}^v(\mathbf{W},\mathbf{v}^\prime) }}{\norm{\mathbf{v}-\mathbf{v}^\prime}} \le {L_\sigma^2}(d+4d\bar{d}).
\end{align}
\subsubsection*{Proof of Lemma~\ref{node_level_eta}}
\begin{proof}
$\mathbf{G}^W-\bar\mathbf{G}^W$ is a centered sub-Gaussian random matrix, and with straightforward calculation, we have
\begin{align}
\mathbf{G}^W = \frac{1}{n} {\Xi^{-1}}U,
\end{align}
where
$
U=\sum_{i=1}^n(U_1+U_2+U_3)\mathbf{v}^\top
$
and
\begin{align}
&U_1 =\frac{\sum_{j\in \mathcal{N}_i}\mathbf{H}^\top_j}{d_i} \sigma\left(\frac{\sum_{j\in \mathcal{N}_i}\mathbf{H}_j\mathbf{W}_{\ast}}{d_i}\right)\mathbf{v}_\ast,\\
&U_2 = \frac{\sum_{j\in \mathcal{N}_i}\mathbf{H}^\top_j}{d_i} \epsilon_i,\\
&U_3 = - \frac{\sum_{j\in \mathcal{N}_i}\mathbf{H}^\top_j}{d_i} \sigma\left(\frac{\sum_{j\in \mathcal{N}_i}\mathbf{H}_j\mathbf{W}}{d_i}\right)\mathbf{v},
\end{align}
where $\mathcal{N}_i$ is the set containing node $i$ and all its neighboring nodes, and the degree $d_i$ equals the cardinality of $\mathcal{N}_i$.
According to Lemma B.8 in \cite{cao2019tight}, we have $\normpsi{\mathbf{a}^\top( U_1+U_3)} \le c_1D_0\frac{d}{d_i}(1+|\sigma(0)|)$ and $\normpsi{\mathbf{a}^\top U_2} \le c_2\sqrt{\frac{d}{d_i}}\nu$ for some absolute constants $c_1$ and $c_2$. Thus, by Lemmas D.3 and D.4 in \cite{yi2015regularized}, we have
\begin{align}
\normpsia{\mathbf{a}^\top (U_1+U_2+U_3)\mathbf{v}^\top \mathbf{b}} \le c_3 D_0 \left(D_0\frac{d}{d_i}(1+|\sigma(0)|)+\sqrt{\frac{d}{d_i}}\nu\right)
\end{align}
for some absolute constant $c_3$,
where $\mathbf{a} \in \mathcal{N}_1$ and $ \mathcal{N}_1 = \mathcal{N}(S^{d-1},1/2)$ is a 1/2-net covering $S^{d-1}$, and $\mathbf{b} \in \mathcal{N}_2$ and $ \mathcal{N}_2 = \mathcal{N}(S^{d_{out}-1},1/2)$ is a 1/2-net covering $S^{d_{out}-1}$. Note that $|\mathcal{N}_1| \le 5^d$ and $|\mathcal{N}_2| \le 5^{d_{out}}$.
According to Proposition 5.16 in \cite{tropp2015introduction},
\begin{align}
P\left(\left| \mathbf{a}^\top (\mathbf{G}^W-\bar\mathbf{G}^W) \mathbf{b}\right| \ge \frac{1}{c}\sqrt{{\frac{\log n}{n}}} T\right) \le 2\exp\left(-\frac{\sqrt{{n\log n}}T}{K}\right),
\end{align}
where $K=T=\norm{\Xi^{-1}}D_0 \left(D_0\frac{d}{d_{\min}}(1+|\sigma(0)|)+\sqrt{\frac{d}{d_{\min}}}\nu\right) $ and $d_{\min}$ is the smallest degree of the nodes in the graph. Then if $\sqrt{n\log n}\ge \frac{3}{2} \log \frac{2}{\delta}$, with probability at least $1-\delta$
\begin{align}
\left| \mathbf{a}^\top (\mathbf{G}^W-\bar\mathbf{G}^W) \mathbf{b}\right| \le \frac{1}{c}\sqrt{{\frac{\log n}{n}}} \norm{\Xi^{-1}}D_0 \left(D_0\frac{d}{d_{\min}}(1+|\sigma(0)|)+\sqrt{\frac{d}{d_{\min}}}\nu\right) .
\end{align}
By Lemma 5.3 in \cite{tropp2015introduction}, if $\sqrt{n\log n}\ge \frac{3}{2} \log \frac{2}{\delta}$, with probability at least $1-\delta$
\begin{align}
\norm{\mathbf{G}^W-\bar\mathbf{G}^W } \le \frac{4}{c}\sqrt{{\frac{\log n}{n}}} \norm{\Xi^{-1}}D_0 \left(D_0\frac{d}{d_{\min}}(1+|\sigma(0)|)+\sqrt{\frac{d}{d_{\min}}}\nu\right) ,
\end{align}
for some absolute constant $c$.
Similarly, $\mathbf{g}^v-\bar\mathbf{g}^v$ is a centered sub-Gaussian random vector, and with straightforward calculation, we have
\begin{align}
\mathbf{g}^v = \frac{1}{n}\sum_{i=1}^n(U_1+U_2+U_3),
\end{align}
where
\begin{align}
&U_1 =\sigma\left(\frac{\sum_{j\in \mathcal{N}_i}\mathbf{W}^\top \mathbf{H}_j^\top}{d_i}\right) \sigma\left(\frac{\sum_{j\in \mathcal{N}_i}\mathbf{H}_j\mathbf{W}_{\ast}}{d_i}\right)\mathbf{v}_\ast,\\
&U_2 = \sigma\left(\frac{\sum_{j\in \mathcal{N}_i}\mathbf{W}^\top \mathbf{H}_j^\top}{d_i}\right)\epsilon_i,\\
&U_3 = - \sigma\left(\frac{\sum_{j\in \mathcal{N}_i}\mathbf{W}^\top \mathbf{H}_j^\top}{d_i}\right)\sigma\left(\frac{\sum_{j\in \mathcal{N}_i}\mathbf{H}_j\mathbf{W}}{d_i}\right)\mathbf{v},
\end{align}
where $\mathcal{N}_i$ is the set containing node $i$ and all its neighboring nodes, and the degree $d_i$ equals the cardinality of $\mathcal{N}_i$.
According to Lemma B.8 in \cite{cao2019tight}, we have $\normpsi{\mathbf{a}^\top( U_1+U_3)} \le c_1D_0\frac{d}{d_i}(1+|\sigma(0)|)^2$ and $\normpsi{\mathbf{a}^\top U_2} \le c_2\sqrt{\frac{d}{d_i}}(1+|\sigma(0)|)\nu$ for some absolute constants $c_1$ and $c_2$. Thus, by Lemmas D.3 and D.4 in \cite{yi2015regularized}, we have
\begin{align}
\normpsia{\mathbf{a}^\top (U_1+U_2+U_3)} \le c_3 \left(D_0\frac{d}{d_i}(1+|\sigma(0)|)^2+\sqrt{\frac{d}{d_i}}(1+|\sigma(0)|)\nu\right)
\end{align}
for some absolute constant $c_3$,
where $\mathbf{a} \in \mathcal{N}_1$ and $ \mathcal{N}_1 = \mathcal{N}(S^{d-1},1/2)$ is a 1/2-net covering $S^{d-1}$. Note that $|\mathcal{N}_1| \le 5^d$ by Lemma 5.2 in \cite{tropp2015introduction}.
According to Proposition 5.16 in \cite{tropp2015introduction},
\begin{align}
P\left(\left| \mathbf{a}^\top (\mathbf{g}^v-\bar\mathbf{g}^v) \right| \ge \frac{1}{c}\sqrt{{\frac{\log n}{n}}} T\right) \le 2\exp\left(-\frac{\sqrt{{n\log n}}T}{K}\right),
\end{align}
where $K=T= \left(D_0\frac{d}{d_{\min}}(1+|\sigma(0)|)^2+\sqrt{\frac{d}{d_{\min}}}(1+|\sigma(0)|)\nu\right) $ and $d_{\min}$ is the smallest degree of the nodes in the graph. Then if $\sqrt{n\log n}\ge \frac{3}{2}\log \frac{2}{\delta}$, with probability at least $1-\delta$
\begin{align}
\left| \mathbf{a}^\top (\mathbf{g}^v-\bar\mathbf{g}^v) \right| \le \frac{1}{c}\sqrt{{\frac{\log n}{n}}} \left(D_0\frac{d}{d_{\min}}(1+|\sigma(0)|)^2+\sqrt{\frac{d}{d_{\min}}}(1+|\sigma(0)|)\nu\right) ,
\end{align}
By Lemma 5.3 in \cite{tropp2015introduction}, if $\sqrt{n\log n}\ge \frac{3}{2} \log \frac{2}{\delta}$, with probability at least $1-\delta$
\begin{align}
\norm{(\mathbf{g}^v-\bar\mathbf{g}^v) } \le \frac{2}{c}\sqrt{{\frac{\log n}{n}}} \left(D_0\frac{d}{d_{\min}}(1+|\sigma(0)|)^2+\sqrt{\frac{d}{d_{\min}}}(1+|\sigma(0)|)\nu\right) ,
\end{align}
for some absolute constant $c$.
\end{proof}
\subsubsection*{Proof of Lemma~\ref{graph_level_lips}}
\begin{proof}
First, we can write
\begin{align}
&\norm{\mathbf{G}^W(\mathbf{W},\mathbf{v}) -\mathbf{G}^W(\mathbf{W}^\prime,\mathbf{v}) } =\norm{ \frac{1}{n} \Xi^{-1} \sum_{j=1}^{n} \mathbf{H}_{j}^\top \mathbf{A}_{j}^\top (\mathbf{D}_{j}^{-1})^\top \mathbf{a}_j^\top \mathbf{a}_j \left(\sigma (\mathbf{D}_j^{-1}\mathbf{A}_j\mathbf{H}_j \mathbf{W})-\sigma (\mathbf{D}_j^{-1}\mathbf{A}_j\mathbf{H}_j \mathbf{W}^\prime) \right)\mathbf{v} \mathbf{v}^\top}\\
&\le \frac{1}{n} \norm{\Xi^{-1}} \sum_{j=1}^{n} \norm{\mathbf{H}_{j}^\top \mathbf{A}_{j}^\top (\mathbf{D}_{j}^{-1})^\top}\norm{ \mathbf{a}_j}^2\norm{ \sigma (\mathbf{D}_j^{-1}\mathbf{A}_j\mathbf{H}_j \mathbf{W})-\sigma (\mathbf{D}_j^{-1}\mathbf{A}_j\mathbf{H}_j \mathbf{W}^\prime) }\norm{\mathbf{v} }^2\\
&\le \frac{1}{n} \norm{\Xi^{-1}} \sum_{j=1}^{n} \norm{\mathbf{H}_{j}^\top \mathbf{A}_{j}^\top (\mathbf{D}_{j}^{-1})^\top}\norm{ \mathbf{a}_j}^2\norm{ \mathbf{D}_j^{-1}\mathbf{A}_j\mathbf{H}_j \mathbf{W}-\mathbf{D}_j^{-1}\mathbf{A}_j\mathbf{H}_j \mathbf{W}^\prime }\norm{\mathbf{v} }^2.
\end{align}
According to Lemma~\ref{lip}, if $\sum_{j=1}^{n}n_j \ge\frac{8\ln\frac{1}{\delta}}{d}$, then with probability at least $1-\delta$
\begin{align}
\norm{\mathbf{G}^W(\mathbf{W},\mathbf{v}) -\mathbf{G}^W(\mathbf{W}^\prime,\mathbf{v}) }& \le D_0^2 \norm{\Xi^{-1}} n_{\max}\frac{1}{n}\sum_{j=1}^{n} \norm{\mathbf{H}_j}^2\norm{\mathbf{W}- \mathbf{W}^\prime} \\
&\le D_0^2 \norm{\Xi^{-1}} n_{\max}\frac{1}{n}2d\sum_{j=1}^{n}n_j\norm{\mathbf{W}- \mathbf{W}^\prime} \\
& \le 2dD_0^2 \norm{\Xi^{-1}} n^2_{\max}\norm{\mathbf{W}- \mathbf{W}^\prime}.
\end{align}
Thus, we have the desired result as
\begin{align}
\frac{\norm{\mathbf{G}^W(\mathbf{W},\mathbf{v}) -\mathbf{G}^W(\mathbf{W}^\prime,\mathbf{v}) }}{\norm{\mathbf{W}-\mathbf{W}^\prime}} \le 2dD_0^2 \norm{\Xi^{-1}} n^2_{\max} .
\end{align}
Similarly, if $\sum_{j=1}^{n}n_j \ge\frac{8\ln\frac{1}{\delta}}{d}$, then with probability at least $1-\delta$
\begin{align}
&\norm{\mathbf{G}^W(\mathbf{W},\mathbf{v}) -\mathbf{G}^W(\mathbf{W},\mathbf{v}^\prime) } =\norm{ \frac{1}{n} \Xi^{-1} \sum_{j=1}^{n} \mathbf{H}_{j}^\top \mathbf{A}_{j}^\top (\mathbf{D}_{j}^{-1})^\top \mathbf{a}_j^\top \mathbf{a}_j \sigma (\mathbf{D}_j^{-1}\mathbf{A}_j\mathbf{H}_j \mathbf{W})\left(\mathbf{v} \mathbf{v}^\top-\mathbf{v}^\prime \mathbf{v}^{\prime\top }\right)}\\
&\le L_\sigma \frac{1}{n} \norm{\Xi^{-1}} \sum_{j=1}^{n} \norm{\mathbf{H}_{j}^\top \mathbf{A}_{j}^\top (\mathbf{D}_{j}^{-1})^\top}\norm{ \mathbf{a}_j}^2\norm{ \mathbf{D}_j^{-1}\mathbf{A}_j\mathbf{H}_j \mathbf{W} }\norm{\mathbf{v} \mathbf{v}^\top-\mathbf{v}^\prime \mathbf{v}^{\prime\top }}\\
&\le 2d\norm{\Xi^{-1}}n^2_{\max}L_\sigma\norm{\mathbf{v} \mathbf{v}^\top-\mathbf{v}^\prime \mathbf{v}^{\prime\top }} \\
& \le 2d\norm{\Xi^{-1}}n^2_{\max}L_\sigma \left(\norm{(\mathbf{v}+\mathbf{v}^\prime)(\mathbf{v}-\mathbf{v}^\prime)^\top}+\norm{\mathbf{v}^\prime\mathbf{v}^\top-\mathbf{v}\vb^{\prime\top}}\right) \\
& \le 2d\norm{\Xi^{-1}}n^2_{\max}L_\sigma \left(\norm{(\mathbf{v}+\mathbf{v}^\prime)(\mathbf{v}-\mathbf{v}^\prime)^\top}+\norm{(\mathbf{v}^{\prime}-\mathbf{v})\mathbf{v}^\top-\mathbf{v}(\mathbf{v}^{\prime}-\mathbf{v})^\top}\right)\\
&\le 2d\norm{\Xi^{-1}}n^2_{\max}L_\sigma \left(\norm{(\mathbf{v}+\mathbf{v}^\prime)}\norm{\mathbf{v}-\mathbf{v}^\prime}+2\norm{\mathbf{v}^{\prime}-\mathbf{v}}\norm{\mathbf{v}}\right)\\
& \le 8D_0d\norm{\Xi^{-1}}n^2_{\max}L_\sigma\norm{\mathbf{v}-\mathbf{v}^\prime}.
\end{align}
Thus, we obtain
\begin{align}
\frac{\norm{\mathbf{G}^W(\mathbf{W},\mathbf{v}) -\mathbf{G}^W(\mathbf{W},\mathbf{v}^\prime) }}{\norm{\mathbf{v}-\mathbf{v}^\prime}} \le 8D_0d\norm{\Xi^{-1}}n^2_{\max}L_\sigma.
\end{align}
To show further results, we first write
\begin{align}
&g_1(\mathbf{W})= -\frac{1}{n} \sum_{j=1}^{n} \sigma(\mathbf{W}^\top \mathbf{H}_{j}^\top \mathbf{A}_{j}^\top (\mathbf{D}_{j}^{-1})^\top) \mathbf{a}_j^\top \mathbf{a}_j \sigma (\mathbf{D}_j^{-1}\mathbf{A}_j\mathbf{H}_j \mathbf{W}_\ast)\mathbf{v}_\ast,\\
&g_2(\mathbf{W}) = -\frac{1}{n} \sum_{j=1}^{n} \sigma(\mathbf{W}^\top \mathbf{H}_{j}^\top \mathbf{A}_{j}^\top (\mathbf{D}_{j}^{-1})^\top) \mathbf{a}_j^\top \mathbf{a}_j e_j,
\\
&g_3(\mathbf{W})= \frac{1}{n} \sum_{j=1}^{n} \sigma(\mathbf{W}^\top \mathbf{H}_{j}^\top \mathbf{A}_{j}^\top (\mathbf{D}_{j}^{-1})^\top) \mathbf{a}_j^\top \mathbf{a}_j \sigma (\mathbf{D}_j^{-1}\mathbf{A}_j\mathbf{H}_j \mathbf{W})\mathbf{v}.
\end{align}
Then, we can express
\begin{align}
\norm{\mathbf{g}^v(\mathbf{W},\mathbf{v})-\mathbf{g}^v(\mathbf{W}^\prime,\mathbf{v})}=\norm{g_1(\mathbf{W}) -g_1(\mathbf{W}^\prime) +g_2(\mathbf{W}) -g_2(\mathbf{W}^\prime)+ g_3(\mathbf{W})- g_3(\mathbf{W}^\prime) }.
\end{align}
Additionally, we define
\begin{align}
g_3^\prime(\mathbf{W}) = \sigma(\mathbf{W}^\top \mathbf{H}_{j}^\top \mathbf{A}_{j}^\top (\mathbf{D}_{j}^{-1})^\top) \mathbf{a}_j^\top \mathbf{a}_j \sigma (\mathbf{D}_j^{-1}\mathbf{A}_j\mathbf{H}_j \mathbf{W}).
\end{align}
Therefore, we obtain
\begin{align}
&\frac{\norm{g^\prime_3(\mathbf{W}) -g^\prime_3(\mathbf{W}^\prime)} }{\norm{\mathbf{W}-\mathbf{W}^\prime}} \le \frac{\norm{g^\prime_3(\mathbf{W}) -g^\prime_3(\mathbf{W}^\prime)} }{\norm{\mathbf{a}_j \sigma (\mathbf{D}_j^{-1}\mathbf{A}_j\mathbf{H}_j \mathbf{W})-\mathbf{a}_j \sigma (\mathbf{D}_j^{-1}\mathbf{A}_j\mathbf{H}_j \mathbf{W}^\prime)}}\frac{\norm{\mathbf{a}_j \sigma (\mathbf{D}_j^{-1}\mathbf{A}_j\mathbf{H}_j \mathbf{W})-\mathbf{a}_j \sigma (\mathbf{D}_j^{-1}\mathbf{A}_j\mathbf{H}_j \mathbf{W}^\prime)}}{\norm{\mathbf{W}-\mathbf{W}^\prime}} \\
&\le2 \left(\norm{\mathbf{a}_j \sigma (\mathbf{D}_j^{-1}\mathbf{A}_j\mathbf{H}_j \mathbf{W})}+\norm{\mathbf{a}_j \sigma (\mathbf{D}_j^{-1}\mathbf{A}_j\mathbf{H}_j \mathbf{W}^\prime)}\right) {\norm{\mathbf{a}_j \mathbf{D}_j^{-1}\mathbf{A}_j\mathbf{H}_j }} \\
&\le 4L_\sigma \norm{ \mathbf{D}_j^{-1}\mathbf{A}_j\mathbf{H}_j }\norm{ \mathbf{a}_j}^2\norm{ \mathbf{D}_j^{-1}\mathbf{A}_j\mathbf{H}_j }.
\end{align}
Thus, it results in
\begin{align}
\frac{\norm{g_1(\mathbf{W}) -g_1(\mathbf{W}^\prime)} }{\norm{\mathbf{W}-\mathbf{W}^\prime}} \le L_\sigma \frac{1}{n} \sum_{j=1}^{n} \norm{\mathbf{H}_{j}^\top \mathbf{A}_{j}^\top (\mathbf{D}_{j}^{-1})^\top}\norm{ \mathbf{a}_j}^2\norm{ \mathbf{D}_j^{-1}\mathbf{A}_j\mathbf{H}_j }\norm{\mathbf{v}_\ast},
\end{align}
\begin{align}
\frac{\norm{g_3(\mathbf{W}) -g_3(\mathbf{W}^\prime)} }{\norm{\mathbf{W}-\mathbf{W}^\prime}} \le \frac{\norm{\mathbf{v}}}{n} \sum_{j=1}^{n} 4L_\sigma \norm{ \mathbf{D}_j^{-1}\mathbf{A}_j\mathbf{H}_j }\norm{ \mathbf{a}_j}^2\norm{ \mathbf{D}_j^{-1}\mathbf{A}_j\mathbf{H}_j }.
\end{align}
Therefore, if $\sum_{j=1}^{n}n_j \ge\frac{8\ln\frac{1}{\delta}}{d}$, then with probability at least $1-\delta$
\begin{align}
\frac{\norm{g_3(\mathbf{W}) -g_3(\mathbf{W}^\prime)}+\norm{g_1(\mathbf{W}) -g_1(\mathbf{W}^\prime)} }{\norm{\mathbf{W}-\mathbf{W}^\prime}} \le 10dD_0L_\sigma n_{\max}^2.
\end{align}
We also have
\begin{align}
&\frac{\norm{g_2(\mathbf{W}) -g_2(\mathbf{W}^\prime)} }{\norm{\mathbf{W}-\mathbf{W}^\prime}} = \frac{\norm{\frac{1}{n} \sum_{j=1}^{n} \left[\sigma(\mathbf{W}^\top \mathbf{H}_{j}^\top \mathbf{A}_{j}^\top (\mathbf{D}_{j}^{-1})^\top)-\sigma(\mathbf{W}^{\prime\top} \mathbf{H}_{j}^\top \mathbf{A}_{j}^\top (\mathbf{D}_{j}^{-1})^\top)\right] \mathbf{a}_j^\top \mathbf{a}_j e_j} }{\norm{\mathbf{W}-\mathbf{W}^\prime}}\\
& \le \frac{\frac{1}{n} \sum_{j=1}^{n}\norm{ (\mathbf{W}^\top-\mathbf{W}^{\prime\top}) \mathbf{H}_{j}^\top \mathbf{A}_{j}^\top (\mathbf{D}_{j}^{-1})^\top}| e_j| }{\norm{\mathbf{W}^\top-\mathbf{W}^{\prime\top}}}\\
&\le {\frac{1}{n} \sum_{j=1}^{n}\norm{ \mathbf{H}_{j}^\top \mathbf{A}_{j}^\top (\mathbf{D}_{j}^{-1})^\top}| e_j| }\\
&\le {\frac{1}{n} \sum_{j=1}^{n}\norm{ \mathbf{H}_{j}}\norm{ \mathbf{A}_{j}^\top (\mathbf{D}_{j}^{-1})^\top}| e_j| } \\
& \le {\frac{1}{n} \sum_{j=1}^{n}\sqrt{n_{j}}\norm{ \mathbf{H}_{j}}| e_j| } \\
& \le {\frac{\sqrt{n_{\max}}}{2n} \sum_{j=1}^{n}\left(\norm{ \mathbf{H}_{j}}^2+e_j^2\right) }.
\end{align}
According to Lemma~\ref{lip}, if $\sum_{j=1}^{n}n_j \ge \frac{8\ln\frac{2}{\delta}}{d}$, with probability at least $1-\frac{\delta}{2}$
\begin{align}
\sum_{j=1}^{n}\norm{\mathbf{H}_j}^2 \le 2d\sum_{j=1}^{n}n_j ,
\end{align}
and with probability at least $1-\frac{\delta}{2}$
\begin{align}
\sum_{j=1}^{n}e_j^2 \le 2n\nu^2.
\end{align}
By union bound, with probability at least $1-{\delta}$
\begin{align}
\sum_{j=1}^{n}\left(\norm{ \mathbf{H}_{j}}^2+e_j^2\right) \le 2d\sum_{j=1}^{n}n_j + 2n\nu^2.
\end{align}
Thus, if $\sum_{j=1}^{n}n_j \ge \frac{8\ln\frac{2}{\delta}}{d}$, with probability at least $1-{\delta}$
\begin{align}
\frac{\norm{g_2(\mathbf{W}) -g_2(\mathbf{W}^\prime)} }{\norm{\mathbf{W}-\mathbf{W}^\prime}} \le {\frac{\sqrt{n_{\max}}}{2n} \left(2d\sum_{j=1}^{n}n_j + 2n\nu^2\right)} \le {d\sqrt{n^3_{\max}}}+{\sqrt{n_{\max}}}\nu^2.
\end{align}
Therefore, if $\sum_{j=1}^{n}n_j \ge \frac{8\ln\frac{2}{\delta}}{d}$, with probability at least $1-\delta$
\begin{align}
\frac{\norm{\mathbf{g}^v(\mathbf{W},\mathbf{v})-\mathbf{g}^v(\mathbf{W}^\prime,\mathbf{v})}}{\norm{\mathbf{W}-\mathbf{W}^\prime}} \le 10dD_0L_\sigma n_{\max}^2+{d\sqrt{n^3_{\max}}}+{\sqrt{n_{\max}}}\nu^2.
\end{align}
At last, we can write
\begin{align}
\frac{\norm{\mathbf{g}^v(\mathbf{W},\mathbf{v}) -\mathbf{g}^v(\mathbf{W},\mathbf{v}^\prime) }}{\norm{\mathbf{v}-\mathbf{v}^\prime}} &= \frac{g_3(\mathbf{W},\mathbf{v})-g_3(\mathbf{W},\mathbf{v}^\prime)}{\norm{\mathbf{v}-\mathbf{v}^\prime}} \\
&\le \frac{L_\sigma^2}{n} \sum_{j=1}^{n} \norm{ \mathbf{D}_j^{-1}\mathbf{A}_j\mathbf{H}_j }\norm{ \mathbf{a}_j}^2\norm{ \mathbf{D}_j^{-1}\mathbf{A}_j\mathbf{H}_j }.
\end{align}
Thus, if $\sum_{j=1}^{n}n_j \ge \frac{8\ln\frac{1}{\delta}}{d}$, with probability at least $1-\delta$,
\begin{align}
\frac{\norm{\mathbf{g}^v(\mathbf{W},\mathbf{v}) -\mathbf{g}^v(\mathbf{W},\mathbf{v}^\prime) }}{\norm{\mathbf{v}-\mathbf{v}^\prime}} \le 2d{L_\sigma^2}n_{\max}^2.
\end{align}
\end{proof}
\subsubsection*{Proof of Lemma~\ref{graph_level_eta}}
\begin{proof}
Define $\sum_{j=1}^n G_j=n(\mathbf{G}^W-\bar\mathbf{G}^W)$, where each $G_j$ is a centered sub-Gaussian random matrix. According to Lemma B.8 in \cite{cao2019tight}, we have $\normpsi{\sigma (\mathbf{D}_j^{-1}\mathbf{A}_j\mathbf{H}_j \mathbf{W})} \le c_1\sqrt{n_j}(1+|\sigma(0)|)$ and $\normpsi{ \mathbf{a}^\top\mathbf{D}_j^{-1}\mathbf{A}_j\mathbf{H}_j \mathbf{W}} \le c_2\sqrt{n_j}$ for some absolute constants $c_1$ and $c_2$. Thus, by Lemmas D.3 and D.4 in \cite{yi2015regularized}, we have
\begin{align}
\normpsia{\mathbf{a}^\top\mathit{G}_j\mathbf{b}} &\le c_3\norm{\Xi^{-1}} D_0\sqrt{n_j} \left(\sqrt{n_j}(1+|\sigma(0)|)(\norm{\mathbf{v}_\ast}+D_0)+\nu\right) \\
&\le c_4\norm{\Xi^{-1}}D_0 \sqrt{n_j} \left(\sqrt{n_j}(1+|\sigma(0)|)D_0+\nu\right)
\end{align}
for some absolute constants $c_3$ and $c_4$,
where $\mathbf{a} \in \mathcal{N}_1$ and $ \mathcal{N}_1 = \mathcal{N}(S^{d-1},1/2)$ is a 1/2-net covering $S^{d-1}$, and $\mathbf{b} \in \mathcal{N}_2$ and $ \mathcal{N}_2 = \mathcal{N}(S^{d_{out}-1},1/2)$ is a 1/2-net covering $S^{d_{out}-1}$. Note that $|\mathcal{N}_1| \le 5^d$ and $|\mathcal{N}_2| \le 5^{d_{out}}$.
According to Proposition 5.16 in \cite{tropp2015introduction},
\begin{align}
P\left(\left| \sum_{j=1}^{n}\mathbf{a}^\top G_j \mathbf{b}\right| \ge \frac{1}{c}\sqrt{{n\log n}} T\right) \le 2\exp\left(-\frac{\sqrt{{n\log n}}T}{K}\right),
\end{align}
where $K=T=c_4\norm{\Xi^{-1}}D_0 \sqrt{n_{\max}} \left(\sqrt{n_{\max}}(1+|\sigma(0)|)D_0+\nu\right)$. Then if $\sqrt{n\log n}\ge\frac{3}{2} \log \frac{2}{\delta}$, with probability at least $1-\delta$
\begin{align}
\left| \sum_{j=1}^{n}\mathbf{a}^\top G_j\mathbf{b}\right| \le \frac{1}{c}\sqrt{{n\log n}} \norm{\Xi^{-1}}D_0 \sqrt{n_{\max}} \left(\sqrt{n_{\max}}(1+|\sigma(0)|)D_0+\nu\right).
\end{align}
By Lemma 5.3 in \cite{tropp2015introduction}, if $\sqrt{n\log n}\ge \frac{3}{2} \log \frac{2}{\delta}$, with probability at least $1-\delta$
\begin{align}
\norm{\mathbf{G}^W-\bar\mathbf{G}^W } \le \frac{4}{c}\sqrt{\frac{\log n}{n}} \norm{\Xi^{-1}} D_0\sqrt{n_{\max}}\left(\sqrt{n_{\max}}(1+|\sigma(0)|)D_0+\nu\right),
\end{align}
for some absolute constant $c$.
Similarly, define $\sum_{j=1}^n g_j=n(\mathbf{g}^v-\bar\mathbf{g}^v)$, where each $g_j$ is a centered sub-Gaussian random vector. According to Lemma B.8 in \cite{cao2019tight}, we have $\normpsi{\mathbf{c}^\top\sigma (\mathbf{D}_j^{-1}\mathbf{A}_j\mathbf{H}_j \mathbf{W})} \le c_5\sqrt{n_j}(1+|\sigma(0)|)$ for some absolute constant $c_5$. Thus, by Lemmas D.3 and D.4 in \cite{yi2015regularized}, we have
\begin{align}
\normpsia{\mathbf{c}^\top g_j} \le c_6 \sqrt{n_j}(1+|\sigma(0)|) \left(\sqrt{n_j}(1+|\sigma(0)|)D_0+\nu\right)
\end{align}
for some absolute constant $c_6$,
where $\mathbf{c} \in \mathcal{N}_3$ and $ \mathcal{N}_3 = \mathcal{N}(S^{d_{out}-1},1/2)$ is a 1/2-net covering $S^{d_{out}-1}$. Note that $|\mathcal{N}_3| \le 5^{d_{out}}$.
According to Proposition 5.16 in \cite{tropp2015introduction},
\begin{align}
P\left(\left| \sum_{j=1}^{n}\mathbf{c}^\top g_j\right| \ge \frac{1}{c}\sqrt{{n\log n}} T\right) \le 2\exp\left(-\frac{\sqrt{{n\log n}}T}{K}\right),
\end{align}
where $K=T=c_6 \sqrt{n_{\max}}(1+|\sigma(0)|) \left( \sqrt{n_{\max}}(1+|\sigma(0)|)D_0+\nu\right)$. Then if $\sqrt{n\log n}\ge \frac{3}{2}\log \frac{2}{\delta}$, with probability at least $1-\delta$
\begin{align}
\left| \sum_{j=1}^{n}\mathbf{c}^\top g_j\right| \le \frac{1}{c}\sqrt{{n\log n}} \sqrt{n_{\max}}(1+|\sigma(0)|) \left( \sqrt{n_{\max}}(1+|\sigma(0)|)D_0+\nu\right).
\end{align}
By Lemma 5.3 in \cite{tropp2015introduction}, if $\sqrt{n\log n}\ge \frac{3}{2} \log \frac{2}{\delta}$, with probability at least $1-\delta$
\begin{align}
\norm{\mathbf{g}^v-\bar\mathbf{g}^v } \le \frac{2}{c}\sqrt{\frac{\log n}{n}} \sqrt{n_{\max}}(1+|\sigma(0)|) \left( \sqrt{n_{\max}}(1+|\sigma(0)|)D_0+\nu\right),
\end{align}
for some absolute constant $c$.
\end{proof}
\subsubsection*{Proof of Lemma~\ref{node_lip}}
\begin{proof}
Note that each entry in the matrix $\mathbf{D}^{-1}\mathbf{A}\mathbf{H}$ is a Gaussian variable with zero mean and variance $\frac{1}{d_i}$, where $i$ is the row (node) index of the entry and $d_i$ is the degree of the $i$th node. By Lemmas~\ref{node_SE_concentration} and \ref{node_SE}, if $n\ge \frac{8\ln \frac{1}{\delta}}{d d_{\min}}$ with $d_{\min}$ being the smallest node degree in the graph, then with probability at least $1-\delta$
\begin{align}
\frac{1}{nd} \norm{ \mathbf{D}^{-1}\mathbf{A}\mathbf{H}}^2
\le 1+4\frac{\sum_{i=1}^n\frac{1}{d_i}}{n} ,
\end{align}
leading to
\begin{align}
\frac{1}{n} \norm{ \mathbf{D}^{-1}\mathbf{A}\mathbf{H}}^2
\le d+4d\frac{\sum_{i=1}^n\frac{1}{d_i}}{n} = d+4d\bar{d} ,
\end{align}
where $\bar{d}=\frac{\sum_{i=1}^n\frac{1}{d_i}}{n}$.
\end{proof}
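As a rough illustration of the scale of the bound in this lemma, the following Python sketch draws one random matrix whose row-$i$ entries are $N(0,\frac{1}{d_i})$, as in the proof, and checks $\frac{1}{n}\norm{\mathbf{D}^{-1}\mathbf{A}\mathbf{H}}^2 \le d+4d\bar{d}$ for that draw. This is an informal Monte Carlo check with arbitrarily chosen sizes and degrees; it illustrates, but does not prove, the high-probability statement:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 400, 6                        # illustrative sizes
deg = rng.integers(2, 11, size=n)    # node degrees d_i, chosen arbitrarily
# row i of D^{-1} A H has i.i.d. N(0, 1/d_i) entries
M = rng.normal(size=(n, d)) / np.sqrt(deg)[:, None]
lhs = np.linalg.norm(M, 2) ** 2 / n  # (1/n) * squared spectral norm
dbar = np.mean(1.0 / deg)            # bar{d} = (1/n) * sum_i 1/d_i
bound = d + 4 * d * dbar
print(lhs <= bound)
```

For moderate $n$ the bound holds with ample slack, consistent with the concentration argument in the proof.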
\subsubsection*{Proof of Lemma~\ref{lip}}
\begin{proof}
According to Lemma~\ref{chi_hoeffding}, if $\sum_{j=1}^{n}n_j \ge\frac{8\ln\frac{1}{\delta}}{d}$, then with probability at least $1-\delta$
\begin{align}
\frac{1}{n} \sum_{j=1}^{n} \norm{\mathbf{H}_{j}^\top \mathbf{A}_{j}^\top (\mathbf{D}_{j}^{-1})^\top}\norm{ \mathbf{a}_j}^2\norm{ \mathbf{D}_j^{-1}\mathbf{A}_j\mathbf{H}_j }&\le \frac{1}{n} \sum_{j=1}^{n} \norm{ \mathbf{D}_j^{-1}\mathbf{A}_j}^2\norm{\mathbf{H}_j}^2
\\
&\le \frac{1}{n} \sum_{j=1}^{n} n_j\norm{\mathbf{H}_j}^2
\\&\le n_{\max}\frac{1}{n}2d\sum_{j=1}^{n}n_j\\
&\le 2dn^2_{\max},
\end{align}
completing the proof.
\end{proof}
\subsubsection*{Proof of Theorem~\ref{convergence_thm} }
\begin{proof}
Since we have
\begin{align}
\|\mathbf{W}_{t+1} - \mathbf{W}_\ast\|_2 - \frac{4\eta_W\left(1+\alpha\rho+\sqrt{1+\alpha\rho}\right)}{\rho}\le \frac{1}{\sqrt{1+\alpha\rho}}\left[ \|\mathbf{W}_{t} - \mathbf{W}_\ast\|_2-\frac{4\eta_W\left(1+\alpha\rho+\sqrt{1+\alpha\rho}\right)}{\rho}\right],
\end{align}
it directly follows that
\begin{align}
\norm{\mathbf{W}_{t} - \mathbf{W}_\ast} &\le\left(\frac{1}{\sqrt{1+\alpha\rho}}\right)^t\left[\norm{\mathbf{W}_0-\mathbf{W}_\ast}-\frac{4\eta_W\left(1+\alpha\rho+\sqrt{1+\alpha\rho}\right)}{\rho}\right]+\frac{4\eta_W\left(1+\alpha\rho+\sqrt{1+\alpha\rho}\right)}{\rho}\\
&\le \left(\frac{1}{\sqrt{1+\alpha\rho}}\right)^t\norm{\mathbf{W}_0-\mathbf{W}_\ast}+\frac{4\eta_W\left(1+\alpha\rho+\sqrt{1+\alpha\rho}\right)}{\rho}\\
& \le \left(\frac{1}{\sqrt{1+\alpha\rho}}\right)^t\norm{\mathbf{W}_0-\mathbf{W}_\ast}+\eta_W(6\alpha +\frac{8}{\rho}),
\end{align}
and we have the desired result.
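The unrolling above is the standard geometric-contraction argument: if $x_{t+1}-C \le \frac{1}{\sqrt{1+\alpha\rho}}(x_t-C)$ with $C=\frac{4\eta_W\left(1+\alpha\rho+\sqrt{1+\alpha\rho}\right)}{\rho}$, then $x_t \le \left(\frac{1}{\sqrt{1+\alpha\rho}}\right)^t x_0 + C$. A small Python sketch iterating the worst case of the one-step bound (all parameter values below are illustrative, not taken from the paper):

```python
import numpy as np

alpha, rho, eta_W = 0.1, 0.5, 0.01   # illustrative values only
q = 1.0 / np.sqrt(1.0 + alpha * rho)
C = 4 * eta_W * (1 + alpha * rho + np.sqrt(1 + alpha * rho)) / rho

x = 3.0                               # plays the role of ||W_0 - W_*||
x0 = x
ok = True
for t in range(1, 200):
    # worst case of the one-step bound: equality in the contraction
    x = q * (x - C) + C
    # the unrolled bound drops the nonpositive term -q^t * C
    ok = ok and (x <= q ** t * x0 + C + 1e-12)
print(ok)
```

The dropped term $-q^t C$ is nonpositive, which is exactly why the unrolled bound is valid at every step.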
Next, since we have
\begin{align}
\norm{\mathbf{v}_{t+1}-\mathbf{v}_\ast}^2 \le \left(1-\frac{\alpha\sigma_m}{2}+3\sigma^2_M\alpha^2\right)\norm{\mathbf{v}_t-\mathbf{v}_\ast}^2 +\left(\frac{\alpha L^2}{\sigma_m}+3L^2\alpha^2\right)\norm{\mathbf{W}_t-\mathbf{W}_\ast}^2\norm{\mathbf{v}_\ast}^2+\left(\frac{2\alpha}{\sigma_m}+3\alpha^2\right)\eta_v^2,
\end{align}
we obtain
\begin{align}
&\norm{\mathbf{v}_{t+1}-\mathbf{v}_\ast}^2 \le \left(1-\frac{\alpha\sigma_m}{2}+3\sigma^2_M\alpha^2\right)\norm{\mathbf{v}_t-\mathbf{v}_\ast}^2 \\
&+\left(\frac{\alpha L^2}{\sigma_m}+3L^2\alpha^2\right)\left[2\left(\frac{1}{{1+\alpha\rho}}\right)^{t}\norm{\mathbf{W}_0-\mathbf{W}_\ast}^2+2\eta^2_W(6\alpha +\frac{8}{\rho})^2\right]\norm{\mathbf{v}_\ast}^2+\left(\frac{2\alpha}{\sigma_m}+3\alpha^2\right)\eta_v^2.
\end{align}
Recursively expanding $\norm{\mathbf{v}_t-\mathbf{v}_\ast}^2$ yields
\begin{align}
\norm{\mathbf{v}_{t}-\mathbf{v}_\ast}^2 \le& \left(1-\frac{\alpha\sigma_m}{2}+3\sigma^2_M\alpha^2\right)^t\norm{\mathbf{v}_0-\mathbf{v}_\ast}^2 \\&+\left(\frac{2\alpha L^2}{\sigma_m}+6L^2\alpha^2\right)\norm{\mathbf{W}_0-\mathbf{W}_\ast}^2\norm{\mathbf{v}_\ast}^2t\left(\left(1-\frac{\alpha\sigma_m}{2}+3\sigma^2_M\alpha^2\right)\vee \frac{1}{{1+\alpha\rho}}\right)^{t-1}\\&+\left[\left(\frac{2\alpha L^2}{\sigma_m}+6L^2\alpha^2\right)(6\alpha +\frac{8}{\rho})^2\norm{\mathbf{v}_\ast}^2\eta^2_W+\left(\frac{2\alpha}{\sigma_m}+3\alpha^2\right)\eta_v^2 \right] \frac{1}{\frac{\alpha\sigma_m}{2}-3\sigma^2_M\alpha^2}.
\end{align}
Plugging in the results from Result~\ref{eigen} yields the desired result.
\end{proof}
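The recursive expansion in the proof above bounds the accumulated cross term via the elementary inequality $\sum_{k=0}^{t-1} q_1^{k} q_2^{t-1-k} \le t\left(q_1\vee q_2\right)^{t-1}$ for $q_1,q_2\in[0,1]$; here $q_1$ and $q_2$ are our shorthand for the two contraction factors $1-\frac{\alpha\sigma_m}{2}+3\sigma_M^2\alpha^2$ and $\frac{1}{1+\alpha\rho}$. An informal numerical check of this inequality:

```python
import numpy as np

rng = np.random.default_rng(3)
ok = True
for _ in range(500):
    q1, q2 = rng.uniform(0.0, 1.0, size=2)
    t = int(rng.integers(1, 50))
    # each summand q1^k * q2^(t-1-k) is at most max(q1, q2)^(t-1)
    s = sum(q1 ** k * q2 ** (t - 1 - k) for k in range(t))
    ok = ok and (s <= t * max(q1, q2) ** (t - 1) + 1e-12)
print(ok)
```

The inequality is immediate term by term, since every summand is a product of $t-1$ factors each at most $q_1\vee q_2$.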
\subsubsection*{Proof of Theorem~\ref{training_dynamics}}
\begin{proof}
First, we prove $\mathbf{W}_{t+1}\in \mathcal{W}$. We start by writing
\begin{align}
&\Tr(\mathbf{W}_\ast^\top\mathbf{W}_{t+1})=\Tr(\mathbf{W}_\ast^\top\hat\mathbf{U})+\Tr(\mathbf{W}_\ast^\top(\mathbf{W}_{t+1}-\hat{\mathbf{U}})) \\
&\ge \Tr\left(\mathbf{W}^\top_\ast \frac{\mathbf{W}_t + \alpha \rho \mathbf{W}_\ast}{1 + \alpha \rho } \right) -\norm{\mathbf{W}_\ast^\top}\norm{\mathbf{W}_{t+1}-\hat{\mathbf{U}}}\\
&\ge \frac{\Tr\left(\mathbf{W}^\top_\ast \mathbf{W}_t\right)}{1 + \alpha \rho} +\frac{\alpha \rho}{1 + \alpha \rho}-\norm{\mathbf{W}_\ast^\top}\norm{\mathbf{W}_{t+1}-\hat{\mathbf{U}}} \\
&\ge \frac{\Tr\left(\mathbf{W}^\top_\ast \mathbf{W}_t\right)}{1 + \alpha \rho} +\frac{\alpha \rho}{1 + \alpha \rho}-4\alpha\eta_W.
\end{align}
Since ${\Tr\left(\mathbf{W}^\top_\ast \mathbf{W}_t\right)} \ge {\Tr\left(\mathbf{W}^\top_\ast \mathbf{W}_0\right)}/2$ and ${\Tr\left(\mathbf{W}^\top_\ast \mathbf{W}_0\right)}\le 1$, we have
\begin{align}
\Tr(\mathbf{W}_\ast^\top\mathbf{W}_{t+1}) \ge \frac{\frac{1}{2}}{1+\alpha\rho}{\Tr\left(\mathbf{W}^\top_\ast \mathbf{W}_0\right)}+\frac{\alpha \rho}{1 + \alpha \rho}{\Tr\left(\mathbf{W}^\top_\ast \mathbf{W}_0\right)} -4\alpha\eta_W.
\end{align}
Since by assumption $4\alpha\eta_W \le \frac{\alpha\rho/2}{1+\alpha\rho}{\Tr\left(\mathbf{W}^\top_\ast \mathbf{W}_0\right)}$, we arrive at
\begin{align}
\Tr(\mathbf{W}_\ast^\top\mathbf{W}_{t+1}) \ge {\Tr\left(\mathbf{W}^\top_\ast \mathbf{W}_0\right)}/2.
\end{align}
Therefore, $\mathbf{W}_{t+1}\in \mathcal{W}$.
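The invariance argument above reduces to a scalar recursion on $\tau_t=\Tr(\mathbf{W}_\ast^\top\mathbf{W}_t)$: the one-step lower bound is $\tau_{t+1} \ge \frac{\tau_t}{1+\alpha\rho}+\frac{\alpha\rho}{1+\alpha\rho}-4\alpha\eta_W$, and under the step-size assumption it never drops below $\tau_0/2$. A quick Python iteration of the worst case of this bound (illustrative parameter values only; $\eta_W$ is set to the largest value the assumption allows):

```python
alpha, rho, tau0 = 0.1, 0.5, 0.8   # illustrative; tau0 = Tr(W_*^T W_0) <= 1
# boundary of the assumption 4*alpha*eta_W <= (alpha*rho/2)*tau0 / (1+alpha*rho)
eta_W = (rho / 2) * tau0 / (1 + alpha * rho) / 4

tau = tau0
ok = True
for _ in range(500):
    # worst case of the one-step lower bound, taken with equality
    tau = tau / (1 + alpha * rho) + alpha * rho / (1 + alpha * rho) - 4 * alpha * eta_W
    ok = ok and (tau >= tau0 / 2 - 1e-12)
print(ok)
```

The worst-case map has fixed point $1-\tau_0/2 \ge \tau_0/2$ for $\tau_0\le 1$, which is why the iterates stay in the invariant region.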
Then, for both node-level and graph-level tasks, we define in a uniform fashion $D=\max \left\lbrace \norm{\mathbf{v}_0- \mathbf{v}_\ast}, \sqrt{\frac{\left(\frac{4\alpha L^2}{\sigma_m}+12L^2\alpha^2\right)\norm{\mathbf{v}_\ast}^2+\frac{2\alpha}{\sigma_m}+3\alpha^2}{\frac{\alpha\sigma_m}{2}-3\sigma_M^2\alpha^2}}\right\rbrace$, and we
show $\mathbf{v}_{t+1}\in \mathcal{V}$. First, we prove $\norm{\mathbf{v}_{t+1}-\mathbf{v}_\ast} \le D$, which follows directly as
\begin{align}
&\norm{\mathbf{v}_{t+1}-\mathbf{v}_\ast}^2 \le \left(1-\frac{\alpha\sigma_m}{2}+3\sigma^2_M\alpha^2\right)\norm{\mathbf{v}_t-\mathbf{v}_\ast}^2 +\left(\frac{\alpha L^2}{\sigma_m}+3L^2\alpha^2\right)\norm{\mathbf{W}_t-\mathbf{W}_\ast}^2\norm{\mathbf{v}_\ast}^2+\left(\frac{2\alpha}{\sigma_m}+3\alpha^2\right)\eta_v^2\\
& \le \left(1-\frac{\alpha\sigma_m}{2}+3\sigma^2_M\alpha^2\right)D^2+\left(\frac{\alpha L^2}{\sigma_m}+3L^2\alpha^2\right)4\norm{\mathbf{v}_\ast}^2+\left(\frac{2\alpha}{\sigma_m}+3\alpha^2\right) \le D^2.
\end{align}
At last, we show $\mathbf{v}_\ast^\top\mathbf{v}_{t+1} \ge \rho$. First, we write
\begin{align}
\mathbf{v}_\ast^\top\bar{\mathbf{g}}^v_t &= \mathbf{v}_\ast^\top\phi_{t,t}(\mathbf{v}_t-\mathbf{v}_\ast)+\mathbf{v}_\ast^\top\left(\phi_{t,t} -\phi_{t,\ast} \right)\mathbf{v}_\ast \\
&=\mathbf{v}_\ast^\top\phi_{t,t}\mathbf{v}_t-\mathbf{v}_\ast^\top\phi_{t,\ast} \mathbf{v}_\ast.
\end{align}
Moreover, we obtain
\begin{align}
\mathbf{v}_\ast^\top\bar{\mathbf{v}}_{t+1} = \mathbf{v}_\ast^\top({\mathbf{v}}_{t}-\alpha\bar{\mathbf{g}}^v_t) &= \mathbf{v}_\ast^\top\left(\mathbf{I}-\alpha\phi_{t,t}\right)\mathbf{v}_t+\alpha\mathbf{v}_\ast^\top\phi_{t,\ast} \mathbf{v}_\ast \\
&\ge (1-\alpha\sigma_M)\rho + \alpha\sigma_m^\prime\norm{\mathbf{v}_\ast}^2,
\end{align}
{where $\sigma_m^\prime$ is the smallest non-negative eigenvalue of the matrix $\phi_{t,\ast}$, and by definition $\alpha\sigma_m^\prime\norm{\mathbf{v}_\ast}^2-\alpha\sigma_M\rho \ge \alpha\rho$}, therefore,
\begin{align}
\mathbf{v}_\ast^\top\bar{\mathbf{v}}_{t+1}\ge \rho+\alpha\rho.
\end{align}
As we can also write
\begin{align}
|\mathbf{v}_\ast^\top\mathbf{v}_{t+1}-\mathbf{v}_\ast^\top\bar\mathbf{v}_{t+1}|\le \alpha\norm{\mathbf{v}_\ast}\eta_v,
\end{align}
{and by the assumption on the number of samples $n$, we have $\norm{\mathbf{v}_\ast}\eta_v\le\rho$, thus }
\begin{align}
\mathbf{v}_\ast^\top\mathbf{v}_{t+1} \ge \rho+\alpha\rho-\alpha\rho=\rho.
\end{align}
\end{proof}
\subsection{Main Contributions}
Our first contribution is the design and convergence analysis of an approximate gradient descent algorithm for training GNNs. The algorithm builds on the idea of inexact optimization and approximate training~\cite{schmidt2011convergence,qunwei2017convergence,cao2019tight}, and its major advantage is that it reduces computational complexity while guaranteeing convergence and learnability at the same time. We prove that the proposed algorithm recovers the underlying true parameters of the teacher network at a linear convergence rate, up to statistical precision. The assumptions are mild and easily satisfied in practice. Specifically, our analysis applies to a wide range of activation functions (see Assumptions \ref{assump:1} and \ref{assump:2}), e.g., ReLU, Leaky ReLU, Sigmoid, Softplus, and Swish. The analysis only requires the activation functions to be monotonically increasing and Lipschitz, and does not depend on the specific gradient computation involved in the activation function \cite{brutzkus2017globally,du2017convolutional,safran2018spurious}.
Our second contribution is the introduction and extension of the technique of approximate gradient calculation to the analysis of GNNs. A similar idea was first proposed in~\cite{cao2019tight} to analyze the learnability of CNNs with non-overlapping convolution patches. We highlight that the non-overlapping convolution process is very different from GNNs, where feature convolution at the nodes is intrinsically overlapping, as nodes may share common neighbors. The analysis framework in~\cite{cao2019tight} therefore cannot be directly applied to GNNs. We extend the scope of the methodology and propose provably efficient algorithms to learn and analyze GNNs.
Our third contribution is an investigation of the empirical version of the problem, where the estimation of the parameters is based on $n$ independent samples. We provide uniform convergence results for the proposed algorithm with respect to the sample complexity. We also characterize the parameter training dynamics of the proposed algorithm, and show that the training is provably stable.
To the best of the authors' knowledge, these theoretical results are the first sharp analyses of the statistical efficiency of GNNs. For ease of presentation, we informally state the main theorem of the paper as follows. We refer readers to Theorem~\ref{convergence_thm} for the precise statement.
\begin{theorem}[Main Theorem (Informal)] GNN is stably learnable with the proposed algorithms in linear time.
\end{theorem}
\subsection{Related Work}
There is a recent surge of interest in theoretically understanding properties of DNNs, e.g., hardness of learning~\cite{pmlr-v65-goel17a,song2017complexity}, the landscape of neural networks~\cite{kawaguchi2016deep,choromanska2015loss,hardt2016identity,haeffele2015global,freeman2019topology,safran2016quality,zhou2017landscape,nguyen2017loss,nguyen2017lossb,ge2017learning,safran2018spurious,du2018power}, training dynamics using gradient descent approaches~\cite{tian2017analytical,zhong2017recovery,li2017convergence,du2018many}, and the design of provable learning algorithms~\cite{goel2017eigenvalue,goel2017learning,zhang2015learning}. For non-convex optimization problems that satisfy the strict saddle property, it was shown in \cite{du2017convolutional} and \cite{jin2017escape} that (stochastic) gradient descent converges in polynomial time.
The landscape of neural networks was then extensively studied~\cite{soltanolkotabi2017learning,kawaguchi2016deep,choromanska2015loss,hardt2016identity,haeffele2015global,freeman2019topology,safran2016quality,zhou2017landscape,nguyen2017loss,nguyen2017lossb,ge2017learning,safran2018spurious,du2018power}. Specifically, algorithms were designed for specific neural network architectures, and their ability to learn a neural network in polynomial time with polynomial sample complexity was further characterized~\cite{pmlr-v65-goel17a,zhang2016l1,zhang2015learning,sedghi2014provable,janzamin2015beating,gautier2016globally,goel2017learning,du2017convolutional}.
However, to the best of our knowledge, there has been no related attempt to understand the training dynamics and to design provably efficient learning algorithms for GNNs. In recent advances \cite{du2017convolutional,du2017gradient,pmlr-v80-goel18a,brutzkus2017globally,zhong2017learning}, the conditions for (stochastic) gradient descent or its variants to recover the underlying true parameter under the teacher-student model in polynomial time were analyzed for CNNs. Specifically, for spherical Gaussian distributions and non-overlapping patch structures, the gradient descent approach can recover the underlying true parameter in polynomial time for CNNs \cite{brutzkus2017globally,du2017gradient}. In \cite{cao2019tight}, the information-theoretic limits and the computational complexity of learning a CNN were developed.
Note that the GNN shares a similar convolutional structure with the CNN \cite{lecun1995convolutional}, and is therefore closely related to it. Specifically, an image can be viewed as a graph grid where adjacent pixels are connected by edges. Similar to a 2D convolution for an image, graph convolution can also be performed on an image by taking a weighted average of information from adjacent pixels. Here, each pixel in an image can be viewed as a node in a graph, and its neighbors are ordered. The neighborhood size is then determined by the convolution filter size. However, different from image data, the neighbors of a node in a graph for GNNs are unordered and vary in size. Moreover, information convolution in a graph is by nature overlapping. Therefore, the analysis for CNNs is intrinsically different from, and cannot be directly carried over to, the analysis for GNNs. We will elaborate on this in this paper.
\subsection{Notations}
Let $\norm{\cdot}$ denote the Euclidean norm of a finite-dimensional vector or a matrix. For a positive semidefinite matrix $X$, we use $\sigma_m(X)$ to denote its smallest nonzero eigenvalue and $\sigma_M(X)$ its largest eigenvalue in the sequel. Throughout this paper, we use capital letters to denote matrices, lower case letters to denote vectors, and lower case Greek letters to denote scalars. We denote $x\wedge y \triangleq \min\{x,y\}$.
\subsection{Node-level GNN (NGNN) with One Graph}
We first investigate one graph with $n$ nodes, where node $i$ has a feature $\mathbf{H}_{i}\in \mathbb{R}^d$ of dimension $d$. We define $\mathbf{H} \in \mathbb{R}^{n\times d}$ as the node feature matrix of the graph. To learn a GNN, a node first collects all the features from its neighboring nodes, and updates its own feature with a local transition function shared among the nodes. This process is termed graph convolution and can be expressed as
\begin{align}
\hat \mathbf{H}_i = \sigma \left(\frac{1}{\left| \mathcal{N}_i \right| }\sum_{j\in \mathcal{N}_i} \mathbf{H}_{j} \mathbf{W}\right),
\end{align}
where $\sigma$ is the activation function, $\mathcal{N}_i$ is the set containing node $i$ and all its neighboring nodes, and $\mathbf{W}\in \mathbb{R}^{d\times d_{out}}$ represents the local transition function, which is usually a fully connected layer that takes in the averaged node feature and outputs the updated feature. After this process, the node feature gets updated, with a change in dimension from $d$ to $d_{out}$. The above graph convolution can be written from a global point of view as
\begin{align}
\hat \mathbf{H} = \sigma (\mathbf{D}^{-1}\mathbf{A}\mathbf{H} \mathbf{W}),
\end{align}
where
$\mathbf{D}\in \mathbb{R}^{n\times n}$ is the degree matrix of the graph and $\mathbf{A}\in \mathbb{R}^{n\times n}$ is the corresponding adjacency matrix. Here, $\mathbf{D}^{-1}\mathbf{A}\mathbf{H}$ is the operation by which each node updates its feature by averaging its own feature and the features of its neighbors.
After the graph convolution, each node has its own feature updated, by incorporating local structural information of the graph and features from its neighboring nodes. Then, the updated node feature is passed into a fully connected layer $\mathbf{v} \in \mathbb{R}^{d_{out}}$. Thus, the output of the entire graph neural network for node-level tasks is
\begin{align}
\hat \mathbf{y} = \sigma (\mathbf{D}^{-1}\mathbf{A}\mathbf{H} \mathbf{W}) \mathbf{v} \in \mathbb{R}^n.
\end{align}
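As an illustration, the node-level forward pass $\hat{\mathbf{y}} = \sigma(\mathbf{D}^{-1}\mathbf{A}\mathbf{H}\mathbf{W})\mathbf{v}$ can be sketched in a few lines of NumPy. Here $\tanh$ merely stands in for the activation $\sigma$, and the toy graph, dimensions and random parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

def ngnn_forward(A, H, W, v, sigma=np.tanh):
    """Node-level GNN forward pass: y_hat = sigma(D^{-1} A H W) v.

    A includes self-loops, so each node averages over itself and its
    neighbours, matching the definition of N_i in the text.
    """
    deg = A.sum(axis=1)                      # node degrees (incl. self-loop)
    H_avg = (A @ H) / deg[:, None]           # D^{-1} A H: neighbourhood mean
    H_hat = sigma(H_avg @ W)                 # graph convolution, n x d_out
    return H_hat @ v                         # fully connected layer, length n

# toy example: 4-node path graph with self-loops
n, d, d_out = 4, 3, 2
A = np.eye(n) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
rng = np.random.default_rng(0)
H = rng.standard_normal((n, d))
W = rng.standard_normal((d, d_out))
v = rng.standard_normal(d_out)
y_hat = ngnn_forward(A, H, W, v)
print(y_hat.shape)   # (4,) -- one prediction per node
```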
As we have \emph{one} graph, suppose the graph has a node-level label vector $\mathbf{y}\in \mathbb{R}^n$, so that we have $n$ ground truth data pairs $\{\mathbf{H}_i, \mathbf{y}_i\}_{i=1}^n$. We assume that the node feature matrix $\mathbf{H}\in \mathbb{R}^{n\times d}$ is generated
independently from the standard Gaussian distribution, and the corresponding output $\mathbf{y}\in \mathbb{R}^n$
is generated from the teacher network with true parameters $\mathbf{W}_\ast$ and $\mathbf{v}_\ast$ as follows
\begin{align}
\mathbf{y} = \sigma (\mathbf{D}^{-1}\mathbf{A}\mathbf{H} \mathbf{W}_\ast) \mathbf{v}_\ast +\epsilon.
\end{align}
Here, $\{\epsilon_i\}_{i=1}^n$ are independent sub-Gaussian white noises with $\psi_2$ norm $\nu$, a class that includes Gaussian noise but is far less restrictive. Without loss of generality, we assume that $\|\mathbf{W}_\ast\|_2 = 1$ in this paper.
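A minimal sketch of this teacher model: standard-Gaussian node features, $\|\mathbf{W}_\ast\|_2 = 1$ enforced by rescaling, and Gaussian noise as one particular sub-Gaussian instance. The random graph, the dimensions, the choice of $\tanh$ for $\sigma$ and the noise level are all illustrative assumptions.

```python
import numpy as np

# Teacher-model data generation: y = sigma(D^{-1} A H W*) v* + eps.
rng = np.random.default_rng(2)
n, d, d_out = 6, 4, 3
A = (rng.random((n, n)) < 0.5).astype(float)
A = np.triu(A, 1)
A = A + A.T + np.eye(n)                        # symmetric adjacency + self-loops
H = rng.standard_normal((n, d))                # features ~ N(0, 1), independent
W_star = rng.standard_normal((d, d_out))
W_star /= np.linalg.norm(W_star, 2)            # enforce ||W*||_2 = 1 (spectral norm)
v_star = rng.standard_normal(d_out)
deg = A.sum(axis=1)
y = np.tanh((A @ H) / deg[:, None] @ W_star) @ v_star \
    + 0.1 * rng.standard_normal(n)             # Gaussian noise, illustrative nu = 0.1
print(y.shape)   # (6,) -- node-level labels
```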
\subsection{Graph-level GNN (GGNN) with Multiple Graphs}
We next investigate the setting with $n$ graphs, where the $j$-th graph has $n_j$ nodes with node feature matrix $\mathbf{H}_{j} \in \mathbb{R}^{n_j\times d}$.
Similar to the convolution for the case of one graph for NGNN, the $j$-th graph updates its node features by the following convolution process
\begin{align}
\hat \mathbf{H}_j = \sigma (\mathbf{D}_j^{-1}\mathbf{A}_j\mathbf{H}_j \mathbf{W}),
\end{align}
where $\sigma$ is the activation function, $\mathbf{D}_j\in \mathbb{R}^{n_j\times n_j}$ is the degree matrix of the $j$-th graph, and $\mathbf{A}_j \in \mathbb{R}^{n_j\times n_j}$ is the corresponding adjacency matrix. Here, $\mathbf{D}_j^{-1}\mathbf{A}_j\mathbf{H}_j$ is the operation by which each node in the $j$-th graph updates its feature by averaging its own feature and the features of its neighbors. $\mathbf{W}\in \mathbb{R}^{d\times d_{out}}$ represents the local transition function, which is shared among the nodes across different graphs.
For a GGNN, graph $j$ with $n_j$ node features has to aggregate these features so that the graph has a unique graph-level representation. In the following, we discuss several aggregation methods that are most widely used in the literature.
\emph{\bf Particular node feature as graph embedding.} This method picks a specific node feature to represent the global graph embedding. Define $\mathbf{g}_j\in \mathbb{R}^{d_{out}}$ as the embedding of the $j$-th graph, which in this case is expressed as
\begin{align}
\mathbf{g}_j^T = \mathbf{a}_{j} \sigma (\mathbf{D}_j^{-1}\mathbf{A}_j\mathbf{H}_j \mathbf{W}),
\end{align}
where $\mathbf{a}_{j} \in \mathbb{R}^{1\times n_j}$ is a row vector with ``1'' as its $i$-th element and ``0'' elsewhere. Consequently, the $i$-th node feature is picked to represent the $j$-th graph for follow-up graph-level tasks.
{\bf Attention-based Aggregation.}
In this approach, the graph embedding is a weighted sum of the node features, where the weight, usually termed attention, is learned from the corresponding node feature.
The attention-based weighted feature aggregation for the $j$-th graph can be expressed as
\begin{align}
\mathbf{g}_j^T = \mathbf{a}_j \sigma (\mathbf{D}_j^{-1}\mathbf{A}_j\mathbf{H}_j \mathbf{W}),
\end{align}
where $\mathbf{a}_j\in \mathbb{R}_+^{1\times n_j}$ is an attention row vector for all the nodes, with non-negative elements summing up to 1.
{\bf Averaging.}
Averaging is a special case of attention-based aggregation in which the attention is identical for all the node features. Thus, we can write
\begin{align}
\mathbf{g}_j^T = \frac{1}{n_j}\mathbf{1}_{n_j}\sigma (\mathbf{D}_j^{-1}\mathbf{A}_j\mathbf{H}_j \mathbf{W}),
\end{align}
where $\mathbf{1}_{n_j}$ is an $n_j$-dimensional row vector of all ones.
After the graph convolution and the node feature aggregation, each graph has its own embedding, incorporating its local structural information and all its node features. Then, the graph embedding is passed into a fully connected layer $\mathbf{v} \in \mathbb{R}^{d_{out}}$. Thus, the output of the $j$-th graph for graph-level tasks is
\begin{align}
\hat y_j = \mathbf{a}_j \sigma ( \mathbf{D}_j^{-1}\mathbf{A}_j\mathbf{H}_j \mathbf{W}) \mathbf{v} \in \mathbb{R}.
\end{align}
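The graph-level forward pass, covering both the averaging default and an arbitrary attention rule, can be sketched as follows. As before, $\tanh$ stands in for $\sigma$, we take $d_{out}=d$ for simplicity, and the toy graphs and parameters are illustrative assumptions.

```python
import numpy as np

def ggnn_forward(graphs, W, v, sigma=np.tanh, attention=None):
    """Graph-level GNN: y_j = a_j sigma(D_j^{-1} A_j H_j W) v.

    `graphs` is a list of (A_j, H_j) pairs; `attention` maps n_j to a row
    vector a_j (non-negative, summing to 1). Default: uniform averaging.
    """
    if attention is None:
        attention = lambda nj: np.full(nj, 1.0 / nj)   # averaging aggregation
    y_hat = np.empty(len(graphs))
    for j, (A, H) in enumerate(graphs):
        deg = A.sum(axis=1)
        H_hat = sigma((A @ H) / deg[:, None] @ W)      # n_j x d_out node features
        g = attention(A.shape[0]) @ H_hat              # graph embedding g_j
        y_hat[j] = g @ v                               # scalar graph-level output
    return y_hat

rng = np.random.default_rng(1)
d = d_out = 3
graphs = []
for nj in (2, 5):                                      # graphs of varying size
    A = np.ones((nj, nj))                              # complete graph + self-loops
    graphs.append((A, rng.standard_normal((nj, d))))
W = rng.standard_normal((d, d_out))
v = rng.standard_normal(d_out)
print(ggnn_forward(graphs, W, v).shape)   # (2,) -- one label per graph
```

Note how, unlike the non-overlapping CNN patches discussed above, every node's convolution reuses the features of shared neighbours.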
As we have \emph{multiple} graphs, suppose each graph has a graph-level label $y_j \in \mathbb{R}$, so that we have $n$ ground truth data pairs $\{\mathbf{H}_j, y_j\}_{j=1}^n$. We assume that each node feature matrix $\mathbf{H}_j\in \mathbb{R}^{n_j\times d}$ is generated
independently from the standard Gaussian distribution, and the corresponding output $y_j\in \mathbb{R}$
is generated from the teacher network with true parameters $\mathbf{W}_\ast$ and $\mathbf{v}_\ast$ as follows,
\begin{align}
y_j = \mathbf{a}_j\sigma (\mathbf{D}_j^{-1}\mathbf{A}_j\mathbf{H}_j \mathbf{W}_\ast) \mathbf{v}_\ast +\epsilon_j.
\end{align}
Here, $\{\epsilon_j\}_{j=1}^n$ are independent sub-Gaussian white noises with $\psi_2$ norm $\nu$. Throughout this paper, we assume that $\|\mathbf{W}_\ast\|_2 = 1$. For simplicity, we assume that the learning process for $\mathbf{W}$ and $\mathbf{v}$ is disjoint from that for $\mathbf{a}_j$, or that $\mathbf{a}_{j}$ is fixed and does not need training.
\subsection{Assumptions}
In this paper, we make the following assumptions, which can be easily satisfied by commonly used activation functions in practice, e.g., ReLU, Leaky ReLU, Sigmoid and Softplus.
\begin{assumption}\label{assump:1}
$\sigma$ is a non-trivial increasing function, and is 1-Lipschitz continuous, i.e., $\left| \sigma(x_1)-\sigma(x_2) \right| \le \left|x_1-x_2 \right|, \forall x_1,x_2 \in \mathbb{R}$.
\end{assumption}
\begin{assumption}\label{assump:2}
$\norm{\sigma(x)} \le L_\sigma \norm{x}, \forall x \in \mathbb{R}^n$.
\end{assumption}
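Assumption \ref{assump:1} can be spot-checked numerically on a grid for the activations listed above; this is only an illustrative finite-difference check, not a proof.

```python
import numpy as np

# Spot-check Assumption 1 (monotone increasing, 1-Lipschitz) on a grid
# for four commonly used activation functions.
activations = {
    "ReLU":       lambda x: np.maximum(x, 0.0),
    "Leaky ReLU": lambda x: np.where(x > 0, x, 0.01 * x),
    "Sigmoid":    lambda x: 1.0 / (1.0 + np.exp(-x)),
    "Softplus":   lambda x: np.log1p(np.exp(x)),
}
x = np.linspace(-5.0, 5.0, 2001)
for name, f in activations.items():
    slopes = np.diff(f(x)) / np.diff(x)          # finite-difference slopes
    increasing = bool(np.all(slopes >= -1e-12))  # monotone increasing
    lipschitz = bool(np.max(np.abs(slopes)) <= 1.0 + 1e-9)  # 1-Lipschitz
    print(f"{name:11s} increasing={increasing} 1-Lipschitz={lipschitz}")
```

All four functions pass both checks on this grid, consistent with the assumption.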
\section{Introduction}
\label{sec:intro}
Measurements of the gravitational fields around galaxies have for many decades provided firm evidence for `dark matter': galaxies attract their constituent stars, each other and their surroundings more strongly than can reasonably be estimated on the basis of their visible contents (for a historical account of the subject see \citealt{sanders:2014}). Furthermore, observations of the temperature anisotropies of the cosmic background radiation show that most of this dark matter cannot be baryonic \citep{planckXIII:2015}, in agreement with constraints from Big Bang Nucleosynthesis models (e.g., \citealt{fields/olive:2006}). Understanding the distribution of matter in the Universe is therefore a fundamental task of observational cosmology. The cold dark matter model, augmented with increasingly sophisticated galaxy formation recipes, has been very successful in describing, and reproducing the detailed statistical properties of, the large-scale distribution of galaxies. Though important issues remain, the $\Lambda$CDM model is the baseline for interpreting galaxy formation.
A central role in testing galaxy formation and cosmology models is played by observational mass measurements. They provided the first evidence for dark matter as mass discrepancies in galaxies (e.g., \citealt{bosma:1978}; \citealt{rubin/thonnard/ford:1978}; \citealt{faber/gallagher:1979}; \citealt{vanalbada/etal:1985}; \citealt{fabricant/lecar/gorenstein:1980,buote/canizares:1994}) and clusters \citep{zwicky:1937}. Mass measurements also serve to establish the link between the observed galaxies and their dark haloes, whose assembly age and clustering is mass-dependent, as is well described by the halo model \citep{cooray/sheth:2002}. Masses can be obtained from internal and relative kinematics of galaxies and their satellites, from X-ray observations of hydrostatic hot gaseous haloes around galaxies and clusters, and from strong and weak gravitational lensing.
While kinematics and X-ray mass determinations usually require assumptions of steady-state dynamical equilibrium, gravitational lensing directly probes the projected mass distribution. This model-independent aspect of lensing is very powerful, but comes at a price. Strong lensing measurements are rare and depend on suitable image configurations and mass distributions. These result in complex selection effects which it is essential to understand \citep{blandford/kochanek:1987}. Weak lensing, on the other hand, is intrinsically noisy and thus requires stacking many lenses, except for the most massive galaxy clusters \citep{tyson/etal:1984}. Over the past two decades, telescopes equipped with larger and larger CCD cameras have provided the means to make wide-area weak lensing studies
possible: most recently from the CFHTLenS analysis of the CFHT Legacy Survey \citep[henceforth \citetalias{heymans/etal:2012}]{heymans/etal:2012}, which targeted galaxies \citep[see for example][]{coupon/etal:2015}, groups and clusters \citep[see for example][]{ford/etal:2015} and the large-scale structure \citep[see for example][]{fu/etal:2014}.
This paper introduces the first lensing results from a new, large-scale multi-band imaging survey, the Kilo-Degree Survey (KiDS). Like the on-going Dark Energy Survey \citep[for first lensing results from DES science verification data see][]{melchior/etal:2015,vikram/etal:2015} and the HyperSuprimeCam survey \citep[for first lensing results from HSC see][]{Miyazaki/etal:2015}, KiDS aims to exploit the evolution of the density of clustered matter on large scales as a cosmological probe \citep{albrecht/etal:2006,peacock/etal:2006}, as well as to study the distribution of dark matter around galaxies with more accuracy than has been possible thus far from the ground
\citep[e.g.,][]{mandelbaum/etal:2006b,vanuitert/etal:2011,velander/etal:2014} or space \citep[e.g.,][]{leauthaud/etal:2012}.
Unlike DES and HSC, which use large allocations of time on 4- and 8-m facility telescopes respectively, KiDS uses a dedicated 2.6-m wide-field imaging telescope, specifically designed for exquisite seeing-limited image quality. It is also unique in that all its survey area overlaps with a deep near-infrared survey, VIKING \citep{edge/etal:2013}, providing extensive information on the spectral energy distribution of galaxies.
In \citet[henceforth \citetalias{dejong/etal:2015}]{dejong/etal:2015} we present the public data release of the first KiDS images and catalogues. Here we describe the aspects of the survey, data quality and analysis techniques that are particularly relevant for the weak lensing and photometric redshift measurements, and introduce the resulting shape catalogues. Accompanying papers present measurements and analyses of the mass distribution around galaxy groups \citep{viola/etal:2015}, galaxies (van Uitert et al., in preparation), and satellites \citep{sifon/etal:2015}.
This paper is organized as follows.
\S\ref{sec:data} presents the survey outline and data quality, as well as the data reduction procedures leading up to images and catalogues. \S\ref{sec:shapes} describes how the lensing measurements are made, \S\ref{sec:photom} discusses the photometry pipeline and the derived photometric redshifts, and in \S\ref{sec:sys} a number of tests for systematic errors in the data reduction are presented. Having demonstrated that the KiDS data deliver high-fidelity lensing measurements, in \S\ref{sec:cosmicshear} we calculate the cosmic shear signal from this first instalment of KiDS imaging. Our conclusions are summarized in \S\ref{sec:conclude}. In three appendices we give the mathematical detail of the PSF homogenization and matched-aperture photometry ``\textsc{GAaP}'' pipeline, illustrate some of the quality control plots that are used in the survey production and validation, and provide a guide to the source catalogues which are publicly available to download at \url{http://kids.strw.leidenuniv.nl}.
\begin{figure*}
\putfig{figs/example_goodpsf.pdf}
\caption{Example of high-quality KiDS data obtained with VST/OmegaCAM. PSF \textsc{SExtractor} parameters shown are for the stacked $r$-band image of tile KIDS\_132.0\_-0.5. {\it Left:} direction and strength of the ellipticities of stars in the field. {\it Right:} PSF ellipticity ({\it top}) and FWHM size ({\it bottom}) vs.\ distance from the centre of the image.
\label{fig:example_goodpsf}
}
\end{figure*}
\section{Description of Survey and Data quality}
\label{sec:data}
KiDS \citep{dejong/etal:2013} is a cosmological, multi-band imaging survey designed for weak lensing tomography. It uses the VLT Survey Telescope (VST) on the European Southern Observatory's Paranal observatory. The VST is an active-optics 2.6-m modified Ritchey-Chr\'etien telescope on an alt-az mount, with a 2-lens field corrector and a single instrument at its Cassegrain focus: the 300-megapixel OmegaCAM CCD mosaic imager. The 32 CCDs that make up the `science array' are $4102\times2048$-pixel e2v 44$-$82 devices, which sample the focal plane at a very uniform scale of 0.213\arcsec\ per 15-micron pixel. The chips are 3-edge buttable, and are mounted close together with small gaps of 25-85\arcsec.
OmegaCAM has thinned CCDs, which avoids some of the problems inherent in deep depletion devices such as the `brighter-fatter' effect that introduces non-linearity into the extraction of PSF shapes from the images \citep[][see also \S\ref{sec:brightfat}]{melchior/etal:2015,niemi/etal:2015}, or the `tree rings' \citep{plazas/etal:2014}.
In order to maintain good image quality over the large field of view OmegaCAM makes use of wavefront sensing. For this purpose two auxiliary CCDs are mounted on the outskirts of the focal plane, vertically displaced $\pm2$mm with respect to the science array. As a result, the star images registered on these chips are significantly out of focus and their shapes and sizes provide the information required to monitor and optimise the optical set-up in real-time. Auto-guiding of both tracking and field rotation is done using two further (in-focus) auxiliary CCDs.
For more details on VST and OmegaCAM see \citet{capaccioli/etal:2012} and \citet{kuijken:2011} and references therein.
The integrated optical design of the telescope and camera makes for uniquely uniform and high-quality images over the full one-square degree field of view, well-matched to the seeing conditions on Paranal. An example `best-case' point spread function (PSF) measured from a co-added stack of five dithered sub-exposures is shown in Fig.~\ref{fig:example_goodpsf}, demonstrating that the system is able to deliver better than 0.6\arcsec\ seeing over the full field even in long exposures with low-level ellipticity distortion. This benign PSF variation can be modelled well and leads to very low residuals in the galaxy ellipticity measurements, (see \S\ref{sec:shapes} below). Furthermore, since there are no instrument changes on the VST the system is mostly stable, and continuously monitored photometrically. For a discussion on the long-term photometric stability of VST/OmegaCAM see \cite{verdoes/etal:2013}.
KiDS is part of a suite of three ESO Public Imaging Surveys, which are queue-scheduled together on the VST and observed as conditions and visibility allow \citep{arnaboldi/etal:2013}. The VPHAS+ survey \citep{drew/etal:2014} targets the Southern Galactic plane with short exposures in broad bands and H$\alpha$, and the ATLAS project \citep{shanks/etal:2015} covers some 5000 square degrees of extra-galactic sky in the Southern Galactic Cap to similar depth as the (mostly Northern) Sloan Digital Sky Survey \citep{ahn/etal:2014}. KiDS, by contrast, aims to survey a 1500 square degree area to considerably greater depth, with the specific goal of measuring weak gravitational lensing masses for galaxies, groups and clusters as well as the power spectrum of the matter distribution on large scales.
KiDS targets two $\sim10$-degree wide strips on the sky: an equatorial strip between Right Ascension $10^h20^m$ and $15^h50^m$ plus the GAMA G09 field between $08^h30^m$ and $09^h30^m$, and a Southern strip through the South Galactic pole between $22^h00^m$ and $03^h30^m$ (see \citetalias{dejong/etal:2015} for the footprint of the survey).
It makes use of four broad-band interference filters, $ugri$, with bandpasses very similar to the SDSS filters described in \citet{fukugita/etal:1996}.
The observations of a particular KiDS tile in any given filter consist of five dithered sub-exposures (four in the case of the $u$ band), and are taken in immediate succession. This choice means that KiDS is not well suited for the study of variable stars or supernovae, but it does mean that all data for each tile/filter combination are taken in very similar observing conditions, resulting in homogeneous data. The prevailing seeing and sky brightness dictate which observation is scheduled. The seeing limits for the different filters are matched to the long-term Paranal average, such that the deep, best-seeing $r$-band observations can proceed at the same rate as the shallower $u$ and $g$ exposures. We summarize the observing parameters in Table~\ref{tab:obs}.
The first weak lensing results from KiDS are based on the first two public data releases \citepalias{dejong/etal:2015}, comprising the first 148 square degrees that were observed in all four filters. 109 square degrees from this data set overlap with the unique GAMA spectroscopic galaxy survey \citep{driver/etal:2011,baldry/etal:2014}, and this provides the focus of the early lensing science analyses.
A detailed discussion of the data quality can be found in \citetalias{dejong/etal:2015}; in Table~\ref{tab:obs} we summarize the key quality indicators of PSF sizes and limiting magnitudes. The PSF size distributions reflect that the best dark time is reserved for $r$, with $g$ and $u$ receiving progressively worse seeing time. The seeing distribution of the $i$-band, which is the only filter used in bright time, is very broad. Limiting AB magnitudes (calculated as 5$\sigma$ in a 2\arcsec\ aperture) in $g$ and $r$ are typically $\sim$25, with $u$ significantly shallower. For $i$ band observations, the large variation in seeing and sky brightness results in a wider variation in limiting magnitude than in the other bands.
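For a sky-noise-limited image, the $5\sigma$ aperture limiting magnitude quoted above follows from adding per-pixel background noise in quadrature over the aperture. The sketch below assumes a 2\arcsec-diameter aperture and uses purely hypothetical zero-point and background values for illustration.

```python
import numpy as np

# Sketch: 5-sigma limiting magnitude in a circular aperture, assuming a
# sky-noise-limited background (all numbers below are hypothetical).
pixel_scale = 0.213            # arcsec/pixel (OmegaCAM)
zeropoint = 30.0               # assumed AB zero-point of the stack
sky_rms = 6.0                  # assumed per-pixel background RMS (counts)
radius_pix = 1.0 / pixel_scale           # 2" diameter -> 1" radius
n_pix = np.pi * radius_pix**2            # aperture area in pixels
flux_5sigma = 5.0 * sky_rms * np.sqrt(n_pix)   # noise adds in quadrature
m_lim = zeropoint - 2.5 * np.log10(flux_5sigma)
print(f"m_lim = {m_lim:.2f} AB")
```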
PSF ellipticity is of critical importance for weak lensing studies. Tile-by-tile statistics of the mean and standard deviation of the PSF ellipticities\footnote{Note that in this section PSF ellipticity is defined as $(1-q)$ where $q=b/a$ is the minor-to-major axis ratio of the star images; this differs from the lensing definition used later on in this paper.} are presented in Fig.~\ref{fig:psf_ellipticities}, and show a typical mean ellipticity of 0.055 and scatter 0.035. Ellipticities do sometimes vary significantly over the field of view, due to focus or alignment errors of the optical system. When such errors arise, the most common ellipticity patterns encountered are an increase in ellipticity either in the centre or towards the corners of the field, and an increase in ellipticity towards one edge. Examples of such PSF ellipticity patterns are illustrated in Fig.~\ref{fig:psf_ellipticity_patterns}.
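The ellipticity convention used in this section, $e = 1 - q$ with $q = b/a$, can be computed from second-order image moments such as those underlying \textsc{SExtractor}'s semi-axis parameters; the moment values below are an illustrative assumption.

```python
import numpy as np

def axis_ratio_from_moments(ixx, iyy, ixy):
    """Minor-to-major axis ratio q = b/a from the second-moment matrix
    [[ixx, ixy], [ixy, iyy]]; semi-axes are sqrt of its eigenvalues."""
    lam1, lam2 = np.linalg.eigvalsh([[ixx, ixy], [ixy, iyy]])
    a, b = np.sqrt(lam2), np.sqrt(lam1)   # eigvalsh sorts ascending
    return b / a

# toy profile elongated 2:1 along the x-axis
q = axis_ratio_from_moments(ixx=4.0, iyy=1.0, ixy=0.0)
print(1.0 - q)   # e = 1 - b/a = 0.5
```

Note that this differs from the lensing definition of ellipticity used later in the paper, as the footnote points out.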
\begin{table}
\caption{Observing parameters for the KiDS survey. The longer $r$-band observations are made in the best seeing conditions and are used for galaxy shape measurements, while the remaining bands are used to measure photometric redshifts. Ranges cover $>95$ percent of the data.}
\begin{tabular}{cccclc}
\hline
Filter & Exposure & Dithers & Seeing & Limiting & Moon \\
& time (sec) & & (arcsec) & Magnitude & \\
\hline
$u$ & 960 & 4 & $0.95\pm0.2$ & $24.2\pm0.2$ &dark\\
$g$ & 900 & 5 & $0.8\pm0.2$ & $25.1\pm0.2$ &dark\\
$r$ & 1800 & 5 & $0.7\pm0.2$ & $24.9\pm0.25$&dark\\
$i$ & 1080 & 5 & $0.8\pm0.3$ & $23.7\pm0.7$ &bright\\
\hline
\end{tabular}
\label{tab:obs}
\end{table}
\begin{figure}
\putfig{figs/esodr2_psfellips_r_all.pdf}
\caption{Distribution of mean ellipticities and standard deviations of ellipticities of co-added images in data releases 1 and 2 of KiDS. The values are based on \textsc{SExtractor} ellipticity measurements of the 500 brightest unsaturated stars in each tile. The grey scale indicates the number of survey tiles in each bin. Top: $r$ band only; Bottom: data from all filters.
\label{fig:psf_ellipticities}
}
\end{figure}
\begin{figure}
\includegraphics[width=\hsize,trim=0.45in 0.95in 6in 0.45in,clip=true]{figs/psf_ellipticity_pattern2.pdf}
\includegraphics[width=\hsize,trim=0.45in 0.4in 6in 0.45in,clip=true]{figs/psf_ellipticity_pattern1.pdf}
\caption{PSF ellipticity patterns caused by a non-optimal optical configuration of the telescope. The curved focal plane of the VST translates any primary mirror astigmatism into increased ellipticity in the centre of the field (top). A tilt of the secondary mirror results in increased ellipticity near one edge of the field (bottom panel).
\label{fig:psf_ellipticity_patterns}
}
\end{figure}
The KiDS data processing pipeline for lensing builds upon the pipeline developed for the CFHTLenS project \citepalias{heymans/etal:2012}. CFHTLenS reanalysed data from the 154-square degree CFHTLS-Wide survey \citep[see for example][]{fu/etal:2008}, the largest deep cosmological lensing survey completed to date. It is based on new methods for measuring galaxy colours for photometric redshifts, and for obtaining ellipticities, the crucial ingredient for weak lensing. Our KiDS analysis uses further refinements of these techniques.
For historical and practical reasons, KiDS uses different data reduction pipelines for the lensing shape measurements and for the photometry. The latter is based on the 4-band co-added images that are released for general-purpose science through the ESO science archive, while the former uses a lensing-optimised processing pipeline of the $r$-band data only. Integration of both these pipelines and workflows into a single process is underway. Meanwhile, we have taken advantage of the redundancy to perform cross-checks between the different pipelines, for example on star-galaxy separation, masking and photometric calibration, where possible.
Weak lensing measurements are intrinsically noise-dominated; results therefore rely on ensemble averaging so that even small systematic residual shape errors can propagate into the final result and overwhelm the statistical power of the survey. For this reason our dedicated shape measurement pipeline (see \S\ref{sec:shapes}) avoids stacking sub-exposures and resampling of the image pixels. Instead it relies on combining the likelihoods of shape parameters from the different sub-exposures of each source. This part of the reduction was performed only on the $r$-band data, with image calibration and processing using the \textsc{Theli} pipeline \citep[henceforth \citetalias{erben/etal:2013}]{schirmer:2013, erben/etal:2013}, and object detection and classification, PSF modelling and shape measurements using the \emph{lens}fit code \citep[henceforth \citetalias{miller/etal:2013}]{miller/etal:2013}. Before distribution to the team for scientific analysis, the shape measurements were `sabotaged' through a blinding procedure described in \S\ref{sec:blinding}.
The multi-colour photometry was performed tile by tile on stacked images for each of the four bands. This part of the reduction made use of the \textsc{Astro-WISE} environment \citep{begeman/etal:2013} and optical reduction pipeline \citep{mcfarland/etal:2013}. These multi-band images are released to the ESO archive as part of the second KiDS data release, as described in \citetalias{dejong/etal:2015}. The lensing-quality reduction of the $r$-band imaging is made available on request.
\section{K\lowercase{i}DS galaxy shapes for lensing}
\label{sec:shapes}
As the lensing data processing of KiDS is built upon the pipeline developed for CFHTLenS, we refer the reader to the CFHTLenS technical papers \citepalias{heymans/etal:2012, miller/etal:2013, erben/etal:2013} for detailed descriptions of the \emph{lens}fit and \textsc{Theli} implementation. In this section we highlight the differences and improvements implemented for this first KiDS lensing analysis.
\subsection
[Lensing-quality THELI r-band data reduction]
{Lensing-quality T{\sevensize HELI} \textit{r}-band data reduction}
\label{sec:theli}
Our reduction of OmegaCAM data starts from
raw data provided by the ESO archive. Most of the processing algorithms used are
similar to those initially developed for the wide-field imager on the ESO 2.2-m telescope at La Silla, as
described in \citet{erben/etal:2005}. A more in-depth description with tests on the \textsc{Theli} data
products will be published in Erben et al.~(in preparation).
The \textsc{Theli} processing consists of the following steps:
\begin{enumerate}
\item The basis for all \textsc{Theli} processing is formed by \textit{all} publicly
available OmegaCAM data at the time of processing. All data are retrieved
from the ESO archive\footnote{ESO data archive: \url{http://archive.eso.org}}.
\item Science data are corrected for crosstalk effects. We measure significant
crosstalk between CCDs \#94, \#95 and \#96\footnote{Note that the OmegaCAM CCDs have names ESO\_CCD\_\#65 to \#96, see \citetalias{dejong/etal:2015} for their layout in the focal plane.} \citepalias{dejong/etal:2015}. Each pair of these three CCDs shows
positive or negative crosstalk in both directions. We find that the strength
of the flux transfer varies significantly on short time-scales, and we therefore
determine new crosstalk coefficients for each KiDS observing block (maximum duration ca.~1800\,s).
\item The characterisation and removal of the instrumental signature (bias, flat field, illumination correction) is performed
simultaneously on all data from a two-week period around each new-moon
and full-moon phase. Each two-week period of dark or bright time defines an
OmegaCAM processing run (see also section 4 of \citealt{erben/etal:2005}),
over which we assume that the instrument configuration is stable.
The processing run definition by moon phase
also naturally corresponds to the observations with different filters
($u$, $g$ and $r$ in dark time and $i$ during bright time).
\item Photometric zero-points, atmospheric extinction coefficients and colour terms are estimated per complete processing run.
They are obtained by calibrating \textit{all} science observations in a run that
overlap with Data Release 10 of the SDSS \citep{ahn/etal:2014}. Between
30 and 150 such images, with good airmass coverage, are available for each processing run.
\item If necessary we correct OmegaCAM data for occasional electronic interference which
produces coherent horizontal patterns over the whole field of view.
\item As the last step of the run processing we subtract the sky from all
individual chips. The resulting single-CCD sub-exposures, 160 per $r$-band tile, form the basis for the later shape analysis with \emph{lens}fit.
\item All science images belonging to a given KiDS pointing are astrometrically
calibrated against the 2MASS catalogue \citep{skrutskie/etal:2006}. At present
we only use KiDS data belonging to each individual pointing for its astrometric
calibration. A more sophisticated procedure, taking into account overlaps with adjacent
pointings as well as data from the overlapping ATLAS survey \citep{shanks/etal:2015},
will be included in the future and should constrain the astrometric solution further near the edges of each tile.
\item The astrometrically calibrated data are co-added with a weighted mean algorithm.
The identification of pixels that should not contribute, for example those affected by cosmic rays, and weighting
of usable pixels is determined as described in \citetalias{erben/etal:2013}.
\item Finally, \textsc{SExtractor} \citep{bertin/arnouts:1996} is run on the co-added image to generate the source catalogue for the lensing and matched-aperture photometry measurements.
\end{enumerate}
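Step (iv) in the list above amounts to a linear fit per processing run. A minimal sketch in Python, assuming the standard calibration form $m_{\rm cal} = m_{\rm inst} + {\rm ZP} - kX + c\,(g-r)$; the coefficient values and data below are invented for illustration, and the actual \textsc{Theli} procedure differs in detail:

```python
import numpy as np

# Hypothetical per-run calibration model: m_sdss = m_inst + ZP - k*X + c*(g - r).
# All coefficient values and 'observations' here are invented.
rng = np.random.default_rng(1)
ZP_true, k_true, c_true = 24.8, 0.10, 0.03
X = rng.uniform(1.0, 2.0, 60)         # airmasses of ~60 SDSS-matched images
colour = rng.uniform(0.2, 1.2, 60)    # g - r colours of the standards
m_sdss = rng.uniform(16.0, 20.0, 60)  # reference SDSS magnitudes
m_inst = m_sdss - ZP_true + k_true * X - c_true * colour

# Least-squares solve  m_sdss - m_inst = ZP - k*X + c*(g - r)  for (ZP, k, c)
design = np.column_stack([np.ones_like(X), -X, colour])
ZP, k, c = np.linalg.lstsq(design, m_sdss - m_inst, rcond=None)[0]
```

With noiseless synthetic data the fit recovers the input coefficients exactly; in practice the residual scatter of the SDSS-matched sources sets the zero-point uncertainty.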
\begin{figure*}
\putfig{figs/shear/SG_plot_exp.pdf}
\caption{Automatic star-galaxy separation based on the second and fourth order moment radii $Q^{1/2}$ and $J^{1/4}$ of individual sources, for a typical KiDS observation.
Five out of the six square panels show the distributions for the individual sub-exposures, with the objects identified as stars shown in red. As the seeing differs between the sub-exposures, the combined distribution for the observation, in the sixth square panel, reveals a series of distinct stellar peaks. The right-most panel shows the distribution of these points in the traditional radius-magnitude plane for the co-added image.}
\label{fig:stargal}
\end{figure*}
The final products of the \textsc{Theli} processing are, for each tile, the single-chip $r$-band data,
the corresponding co-added image with associated weight map and sum
image, and a source catalogue (see also \citetalias{erben/etal:2013} for a more detailed description of these products). These images are made publicly available on request.
\subsection{Point spread function}
Knowledge of the point spread function (PSF) is essential for any weak lensing analysis, since the PSF modifies galaxy shapes. The thousands of stars recorded in every KiDS tile provide samples of the PSF across the field. The first steps are to identify these stars among the many galaxies in each image, and to build a PSF model from them.
\subsubsection{Star selection}
High-density, spatially homogeneous and pure star catalogues are required to construct a good PSF model across the field of view. We outline in this section how we classify stars in order to meet these requirements. We start by creating a source detection catalogue for each of the 5 sub-exposures in a KiDS field, using \textsc{SExtractor} with a high detection threshold. For each sub-exposure, and every detected object for which FLUX\_AUTO has a signal-to-noise ratio (SNR) larger than 15, we then measure the second-order moments $Q_{ij}$ and the axisymmetric fourth order moment $J$ given by
\begin{equation}
Q_{ij} = \frac{\int \d^2 \vec{x} \,W(\vec{x})I(\vec{x}) \, x_i x_j}
{\int \d^2\vec{x} \, W(\vec{x})I(\vec{x})} \, ,
\label{eqn:quadmom}
\end{equation}
\begin{equation}
J=\frac{\int \d^2 \vec{x} \,W(\vec{x})I(\vec{x}) \, |\vec{x}|^4 }
{\int \d^2\vec{x} \, W(\vec{x})I(\vec{x})} \, .
\end{equation}
In the above equations $I(\vec{x})$ is the surface brightness of the object at position $\vec{x}$ measured from the \textsc{SExtractor} position of the object, and $W(\vec{x})$ is a Gaussian weighting function which we employ to suppress noise at large scales. The width of the weighting function is fixed and we choose it to have a dispersion of 3 pixels, motivated by the typical seeing value of our $r$-band data ($\sim0.7$\arcsec).
Defining $Q = Q_{11} + Q_{22}$, we note that $Q^{1/2}$ and $J^{1/4}$ are two different measures of the size of an object, and the ratio between these two quantities depends on the concentration of the object's surface density profile.
These two parameters therefore efficiently classify sources according to their sizes and luminosity profiles.
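As an illustration of these classification statistics, the weighted moments can be evaluated directly on a pixel grid. The sketch below (a hypothetical helper, not the pipeline code) also verifies that for Gaussian profiles the concentration ratio $J^{1/4}/Q^{1/2}$ equals $2^{1/4}$ independent of the Gaussian width, showing that the ratio traces profile concentration rather than size:

```python
import numpy as np

def weighted_moments(img, x0, y0, sigma_w=3.0):
    """Gaussian-weighted second moments Q_ij and fourth moment J,
    with the weight dispersion fixed to 3 pixels as in the text."""
    ny, nx = img.shape
    y, x = np.mgrid[0:ny, 0:nx].astype(float)
    dx, dy = x - x0, y - y0
    wi = np.exp(-(dx**2 + dy**2) / (2.0 * sigma_w**2)) * img
    norm = wi.sum()
    Q11 = (wi * dx**2).sum() / norm
    Q22 = (wi * dy**2).sum() / norm
    Q12 = (wi * dx * dy).sum() / norm
    J = (wi * (dx**2 + dy**2)**2).sum() / norm
    return Q11, Q22, Q12, J

# Synthetic round Gaussian 'star' of dispersion 2 pixels: for any Gaussian
# profile the ratio J**0.25 / Q**0.5 is 2**0.25, whatever the width.
y, x = np.mgrid[0:65, 0:65].astype(float)
star = np.exp(-((x - 32)**2 + (y - 32)**2) / (2.0 * 2.0**2))
Q11, Q22, Q12, J = weighted_moments(star, 32.0, 32.0)
ratio = J**0.25 / np.sqrt(Q11 + Q22)
```

Galaxies, being less centrally concentrated than the PSF, fall at systematically different ratios, which is why they separate from the compact stellar cluster in the $Q^{1/2}$--$J^{1/4}$ plane.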
Fig.~\ref{fig:stargal} shows the distribution of detected objects as a function of their second and fourth order moments for the different sub-exposures in an example tile. We see that galaxies are scattered over a wide range of $(Q,J)$ values whereas point sources cluster in a very compact region with low $Q^{1/2}$ and $J^{1/4}$. The width of this region depends on how strongly the PSF varies across the field of view.
We identify stars in the $Q^{1/2}$--$J^{1/4}$ plane by locating the compact over-density with a `friends of friends' algorithm. The fixed linking length was empirically determined from a sample of the data. We require the final star catalogue to contain the largest possible number of objects while minimising contamination by galaxies, as assessed visually by inspecting the stellar-locus in the (half-light radius, magnitude) plane, shown in the right panel of Fig.~\ref{fig:stargal}. In order to minimise the effect of the PSF variation across the field of view we perform this search in each individual CCD and sub-exposure separately. This automated method is a significant improvement over the approach taken by CFHTLenS, where the stellar locus was visually identified for each chip using data from the co-added image, for every tile in the full survey.
In a final cleaning stage, we combine the 5 star catalogues for each chip and we count how many times each object has been classified as a star. The final star catalogue requires that an object be classified as a star in at least 3 out of the 5 sub-exposures. In the cases where the object is not observed in all sub-exposures, for example when the object lands in a chip gap or at the edge of the field due to the dithering, we only require the star to be classified as such once. In Appendix~\ref{app:phot} on quality control, Fig.~\ref{fig:check_photometry} shows an example distribution of the selected stars across the field of view. Plots such as these are inspected for each field to ensure that the stellar classification is producing a spatially homogeneous catalogue. Confirmation of the purity of our star catalogue comes from the PSF modelling where typically less than 1 percent of the objects are rejected as outliers at that stage.
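The final cleaning stage is a simple vote across sub-exposures; a minimal bookkeeping sketch (the function name and flag encoding are hypothetical):

```python
def final_star_flag(flags):
    """Combine per-sub-exposure star classifications for one object.

    flags: one entry per sub-exposure, True/False for star/not-star,
    or None where the object was not observed (chip gap or field edge).
    """
    votes = [f for f in flags if f is not None]
    n_star = sum(votes)
    if len(votes) < 5:       # missed in some sub-exposures:
        return n_star >= 1   # a single star classification suffices
    return n_star >= 3       # otherwise require at least 3 of 5
```

For example, an object flagged as a star in three of five sub-exposures is kept, while an object seen in all five but flagged only once is rejected.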
\subsubsection{PSF modelling}
\label{sec:psfres}
For each KiDS sub-exposure, we construct a PSF model that describes the position-dependent shapes of the identified stars.
The PSF model is expressed as a set of amplitudes on a $32 \times 32$ pixel grid, sampled at the CCD detector resolution and normalized so that their sum is unity.
The variation of each pixel value with position in the field takes the form of a two-dimensional polynomial of order $n$, with the added flexibility that the lowest-order coefficients are allowed to differ from CCD to CCD: this allows for a more complex spatial variation of the PSF and also, in principle, allows for discontinuities in the PSF between adjacent detectors. If the polynomial coefficients up to order $n_\rmn{c}$ are allowed to vary in this way, then the total number of model coefficients per pixel is
\begin{equation}
N_\rmn{coeff} = \frac{1}{2} \left[(n+1)(n+2) + (N_\rmn{D}-1)(n_\rmn{c}+1)(n_\rmn{c}+2) \right] ,
\end{equation}
with $N_\rmn{D} = 32$, the number of CCD detectors in OmegaCAM. The coefficients for each PSF pixel are fitted independently and a check is made that the total PSF normalisation is unity at the end of the fitting process. The flux and position of each star are also allowed to be free parameters in the fit,
with the stars aligned to the pixel grid of the PSF model using a sinc function interpolation.
This approach allows a great deal of flexibility in the PSF model: in particular it does not imprint any additional basis set signature on top of the detector pixel basis. The total number of coefficients is large, but is well constrained by the large number of data measurements (number of pixels times number of stars) in each sub-exposure. Only stars with a high SNR should be used for constructing the PSF model, because otherwise noise on the measurement of the stellar positions will bias the model towards larger sizes.
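The coefficient count is easy to check numerically; a sketch (hypothetical helper) evaluating the expression above for a few choices of $(n, n_\rmn{c})$:

```python
def n_coeff(n, n_c=None, n_det=32):
    """Number of model coefficients per PSF pixel; n_c=None means
    no chip-dependent terms (a single field-of-view polynomial)."""
    base = (n + 1) * (n + 2) // 2
    if n_c is None:
        return base
    return base + (n_det - 1) * (n_c + 1) * (n_c + 2) // 2
```

With $n=3$, $n_\rmn{c}=1$ and 32 detectors this gives 103 coefficients per pixel, matching the number quoted later for the adopted model; dropping the chip-dependent terms reduces the count to 10.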
In order to optimise the functional form of the PSF model, we selected 10 KiDS fields at random and analysed the five $r$-band sub-exposures in each field, varying the polynomial orders $n$ and $n_\rmn{c}$. We characterise the PSF ellipticity $\epsilon_{\rm PSF}$ and size $R^2_{\rm PSF}$ of the pixelised model and data as
\begin{equation}
\epsilon_{\rm PSF} = \frac{Q_{11} - Q_{22} + 2\rmn{i}Q_{12}}{Q_{11} +Q_{22} + 2\sqrt{Q_{11}Q_{22} -Q_{12}^2}} \, ,
\label{eqn:estar}
\end{equation}
\begin{equation}
R^2_{\rm PSF} = \sqrt{Q_{11}Q_{22} -Q_{12}^2} \,
\label{eqn:psfsize}
\end{equation}
(cf.\ Eq.~\ref{eqn:quadmom}), with the weight function $W(\vec{x})$ set to a Gaussian of dispersion two pixels.
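Given measured moments, these definitions translate directly into code. The sketch below also checks that for an elliptical Gaussian with semi-axes $a$ and $b$ (so $Q_{11}=a^2$, $Q_{22}=b^2$, $Q_{12}=0$) the ellipticity reduces to $(1-q)/(1+q)$ with $q=b/a$:

```python
import math

def psf_ellipticity_size(Q11, Q22, Q12):
    """Complex PSF ellipticity and size R^2 from second moments
    (Eqs. above); a sketch, not the pipeline implementation."""
    detroot = math.sqrt(Q11 * Q22 - Q12 * Q12)
    eps = complex(Q11 - Q22, 2.0 * Q12) / (Q11 + Q22 + 2.0 * detroot)
    return eps, detroot

# Elliptical Gaussian, semi-axes a=2, b=1 (q = 0.5), aligned with the axes:
# expect |eps| = (1 - q)/(1 + q) = 1/3 and R^2 = a*b = 2.
eps, R2 = psf_ellipticity_size(4.0, 1.0, 0.0)
```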
For an accurate PSF model the residuals $\delta \epsilon_{\rm PSF} = \epsilon_{\rm PSF}(\hbox{model}) -\epsilon_{\rm PSF}(\hbox{data}) $ and $\delta R^2_{\rm PSF} = R^2_{\rm PSF}(\hbox{model}) - R^2_{\rm PSF}(\hbox{data})$ should be dominated by photon noise, and therefore uncorrelated between neighbouring stars. Following \cite{rowe:2010} we therefore seek to minimise the PSF ellipticity residual auto-correlation, with as few parameters as necessary. This statistic can be estimated from the data as
\begin{equation}
\langle \delta \epsilon_{\rm PSF} \delta \epsilon_{\rm PSF}^* \rangle_{\theta} =
\overline{ \Re\left[ \delta \epsilon_{\rm PSF} (\vec{x}_a) \delta \epsilon_{\rm PSF}^* (\vec{x}_b) \right]} \, ,
\label{eqn:xi_res}
\end{equation}
where the average is taken over pairs of objects for which $|\vec{x}_a - \vec{x}_b|$ falls in a bin around angular separation $\theta$, and $\Re$ and $^*$ denote the real part and complex conjugate, respectively. Analogously, we also measure the correlation function of the residual size $\delta R^2_{\rm PSF}$.
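The estimator of Eq.~(\ref{eqn:xi_res}) is a binned average over star pairs; a brute-force sketch, adequate for the few thousand stars per tile (positions and residuals below are synthetic):

```python
import numpy as np

def xi_residual(x, y, d_eps, theta_edges):
    """Binned pair average of Re[d_eps_a * conj(d_eps_b)] vs separation.
    x, y: positions; d_eps: complex PSF ellipticity residuals."""
    sep = np.hypot(x[:, None] - x[None, :], y[:, None] - y[None, :])
    prod = np.real(d_eps[:, None] * np.conj(d_eps[None, :]))
    iu = np.triu_indices(len(x), k=1)          # each pair counted once
    sep, prod = sep[iu], prod[iu]
    xi = np.empty(len(theta_edges) - 1)
    for b, (lo, hi) in enumerate(zip(theta_edges[:-1], theta_edges[1:])):
        sel = (sep >= lo) & (sep < hi)
        xi[b] = prod[sel].mean() if sel.any() else np.nan
    return xi

# Sanity check: a uniform residual d_eps = 0.1 must give xi = 0.01 in
# every occupied bin, since Re[0.1 * conj(0.1)] = 0.01 for all pairs.
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 60, 200), rng.uniform(0, 60, 200)  # arcmin, say
xi = xi_residual(x, y, np.full(200, 0.1 + 0.0j), np.array([0.5, 1.5, 5.0]))
```

The same machinery, applied to $\delta R^2_{\rm PSF}$ in place of $\delta\epsilon_{\rm PSF}$, yields the size-residual correlation discussed below.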
\begin{figure*}
\centering
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/shear/psf_order_choice_nf3.pdf}
\end{minipage}%
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{figs/shear/psf_order_choice_nf4.pdf}
\end{minipage}
\caption{Selecting the optimal fitting orders for the PSF model for a sample of representative KiDS observations. The upper panels show the residual PSF ellipticity correlation, measured at 1 arcmin separation, as a function of the average PSF ellipticity within the sub-exposure. The lower panels show the two-point residual PSF size correlation measured at 1 arcmin separation as a function of the average PSF size $R^2_{\rm PSF}$. Each data point represents a different sub-exposure, with the point style indicating the polynomial orders $(n, n_{\rm c})$ of the model. Left: results for $n= 3$; right: results for $n= 4$.}
\label{fig:orderchoice}
\end{figure*}
Fig.~\ref{fig:orderchoice} shows the residual correlation functions measured at 1 arcmin separation. We chose this scale as it is the smallest scale that can be reliably measured given the typical star density in the images. The data come from our sample of KiDS sub-exposures for six different PSF models, with the full field of view polynomial order $n=3$ and 4, and chip-dependent polynomial order $n_\rmn{c} = 0$ and 1. We also test models without any chip-dependent coefficients, denoted $n_{\rm c} = {\rm none}$ (these models have a total of $\frac12(n+1)(n+2)$ coefficients). The lower panels of Fig.~\ref{fig:orderchoice} show the residual PSF size correlation as a function of the average PSF size $R^2_{\rm PSF}$. We see a general trend, that the larger sized PSFs lead to more accurate modelling, which suggests that the impact of undersampling, when imaging the PSF, may be an important effect to model in the future. The upper panels of Fig.~\ref{fig:orderchoice} show the residual PSF ellipticity correlation as a function of the average PSF ellipticity $| \epsilon_{\rm PSF}|$. Unsurprisingly, more elliptical PSFs lead to less accurate modelling.
Comparing the results from the different models, we find a reduction in the residuals with the inclusion of a chip-dependent component to the PSF modelling, favouring $n_\rmn{c} = 1$. With that choice we find little difference between the $n=3$ and $n=4$ models, and we select the $n= 3$ PSF model as it has the smaller number of parameters of the two options. With $n= 3$ and $n_\rmn{c}=1$ we fit $N_\rmn{coeff} = 103$ parameters per model PSF pixel. (With several thousand stars per tile, this large number of parameters can still be determined reliably from the data.)
Analysing the full KiDS data set with this PSF model, we find residual correlation functions in the range
$\langle \delta R^2_{\rm PSF} \delta R^2_{\rm PSF} \rangle_{\theta = 1'} = (3.5 \pm 1.3) \times 10^{-7} \,{\rm arcsec}^4$, and $\langle \delta \epsilon_{\rm PSF} \delta \epsilon_{\rm PSF}^* \rangle_{\theta = 1'} = (7.1 \pm 3.5) \times 10^{-6}$. The size residual correlation remains fairly constant as a function of angular separation, whereas the amplitude of the ellipticity residual correlation decreases with increasing separation, becoming consistent with zero for scales $\theta > 20$\arcmin. The angular dependence of the PSF ellipticity correlation function and the residuals are shown for an example KiDS field in Appendix~\ref{app:psfmod}.
Even though we find persistent PSF residual correlations, they are too small to impact our scientific analyses of the data. For example, \citet{rowe:2010} define a requirement on the systematic PSF ellipticity residual with correlation amplitude $\langle \delta \epsilon_{\rm PSF} \delta \epsilon_{\rm PSF}^* \rangle_{\theta= 1'} < 5 \times 10^{-5}$, such that it contributes to less than 5 percent of the $\Lambda$CDM cosmic shear lensing signal for source galaxies at $z\sim0.5$. At larger separations the requirement is more stringent with $\langle \delta \epsilon_{\rm PSF} \delta \epsilon_{\rm PSF}^* \rangle_{\theta= 10'} < 8 \times 10^{-6}$ but, as seen in Fig.~\ref{fig:check_PSF}, the KiDS residual correlation functions are already consistent with zero on these scales. With the present analysis we therefore easily meet the \citet{rowe:2010} target requirement on PSF ellipticity residuals for the full KiDS data set.
PSF modelling software development, currently undergoing testing for future data analysis, allows for the central region of the pixel basis PSF model to be oversampled by a factor 3. Rather than re-centering each star's data to its best fit position, the fitting proceeds by shifting the model to the best-fit data position for each star. These developments improve the sampling of the core of the PSF and avoid the introduction of correlated noise caused by interpolation of the star data in the re-centering process. The disadvantage of this procedure is that the model pixel values become correlated, requiring a joint fit of a large number of parameters, which is computationally expensive.
\begin{figure}
\putfig{figs/shear/size_res_vs_mag.png}
\caption{The average residual PSF size as a function of the magnitude of the star, showing no significant flux-dependence in the PSF size. The average non-zero residual (shown as a dashed line) is too low to introduce any significant bias in our analysis.}
\label{fig:brightfat}
\end{figure}
\subsubsection{Testing PSF flux dependence}
\label{sec:brightfat}
\citet{melchior/etal:2015} report a significant flux dependence in the PSF size in Dark Energy Survey data.
The effect is due to the use of modern deep-depletion CCDs in DECam \citep{antilogus/etal:2014}, and is not expected to affect the thinned OmegaCAM detectors used for KiDS.
This is indeed the case. Fig.~\ref{fig:brightfat} shows the difference between the PSF model size and the star size, averaged over the full KiDS data set, as a function of the star's magnitude. As the PSF model has no flux-dependence by definition, any detected flux dependence in the size offset between model and data would arise from CCD effects. Only a very slight trend with star magnitude is seen, more than an order of magnitude smaller than the effect seen by \citet{melchior/etal:2015}. The origin of the average non-zero residual of $(3.3 \pm 0.3) \times 10^{-3}$ is unclear: most likely it arises from the presence of noise in the size measurement of the data, in comparison to the measurement on the noise-free model, or from not including the effects of undersampling in the PSF modelling. We conclude that PSF flux-dependence will not be a challenge for the KiDS analysis.
\subsection{Shape measurement with \textbfit{lens}fit}
Weak gravitational lensing induces a coherent distortion in the images of distant galaxies, which we parametrize through the observed complex galaxy ellipticity $\epsilon = \epsilon_1 + \rmn{i} \epsilon_2$. For a galaxy that is a perfect ellipse, the ellipticity parameters are related to the axial ratio $q$ and orientation $\phi$ as
\begin{equation}
\epsilon=
\epsilon_1 + \rmn{i}\epsilon_2 = \left(\frac{1-q}{1+q}\right)\rmn{e}^{2\rmn{i}\phi}\, .
\label{eqn:e1e2}
\end{equation}
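This definition and its inverse, as a quick sketch (the helper names are hypothetical):

```python
import cmath

def ellipticity(q, phi):
    """Complex ellipticity of a perfect ellipse with axial ratio q <= 1
    and position angle phi, following Eq. (e1e2) above."""
    return (1.0 - q) / (1.0 + q) * cmath.exp(2j * phi)

def axis_ratio_angle(eps):
    """Invert: recover (q, phi) from the complex ellipticity."""
    m = abs(eps)
    return (1.0 - m) / (1.0 + m), cmath.phase(eps) / 2.0

# Round-trip check for q = 0.5, phi = 0.3 rad; a circle (q = 1) maps to 0.
eps = ellipticity(0.5, 0.3)
q, phi = axis_ratio_angle(eps)
```

The factor $2\phi$ in the exponent encodes the spin-2 nature of the ellipticity: rotating the ellipse by $180\degr$ leaves $\epsilon$ unchanged.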
Central to any weak lensing study is a data analysis tool that can determine galaxy shapes from imaging data. We use the \emph{lens}fit code\footnote{See \citetalias{heymans/etal:2012} for a discussion on why \emph{lens}fit is our preferred shape measurement method.} (\citealt{miller/etal:2007,kitching/etal:2008}; \citetalias{miller/etal:2013}) which performs a seven-parameter galaxy model fit ($x,y$ position, flux, scale length $r_\d$, bulge-to-disc ratio and ellipticity $\epsilon_{1,2}$), simultaneously to all sub-exposures of a given galaxy, taking into account the different PSFs in each sub-exposure and the astrometric solution for each CCD.
\emph{Lens}fit first performs an analytic marginalization over the galaxy model's centroid, flux and bulge fraction, using the priors from \citetalias{miller/etal:2013}. It then numerically marginalizes the resulting joint likelihood distribution $L(\epsilon,r_\d)$ over scale length, incorporating a magnitude-dependent prior derived from high-resolution Hubble Space Telescope (HST) imaging. Finally, for each galaxy a mean likelihood estimate of the ellipticity and an estimated inverse variance weight is derived, as described by \citetalias{miller/etal:2013}. We will refer to this latter quantity as the `lensing weight'.
The KiDS lensing data are obtained in the $r$ band. We therefore change the \emph{lens}fit scale-length prior with respect to the $i$-band based prior used in the CFHTLenS analysis. For this purpose we repeat the \citetalias{miller/etal:2013} analysis of the \citet{Simard/etal:2002} catalogue of morphological parameters. This catalogue is based on \textsc{GALFIT} galaxy profile fitting \citep{Peng/etal:2010} of HST imaging data, and provides disc and bulge parameters in various wavebands including the F606W filter which is a good match to the KiDS {\it r} band. Selecting galaxies with $18.5<r_{606}<25.5$, we find the following relation between the median disc scale length and magnitude:
\begin{equation}
\ln(r_\d/{\rm arcsec})= -1.320 - 0.278 (r_{606}-23) \, .
\end{equation}
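For reference, the relation can be evaluated directly; a sketch using the fitted coefficients above (the helper name is hypothetical):

```python
import math

def median_rd_arcsec(r606):
    """Median disc scale length in arcsec as a function of F606W
    magnitude, from the fit quoted in the text."""
    return math.exp(-1.320 - 0.278 * (r606 - 23.0))

# At the pivot magnitude r606 = 23 the median scale length is
# exp(-1.320), roughly 0.27 arcsec.
rd_pivot = median_rd_arcsec(23.0)
```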
We note that the more extensive HST galaxy morphology analysis by \citet{Griffith/etal:2012} satisfies our requirements in terms of imaging depth and filter choice. However, it is limited to single S\'ersic profile fits which prevents the selection of disc-dominated galaxies with which to determine a scale-length prior for the disc component.
As discussed in \citetalias{miller/etal:2013}, the measurements do not strongly constrain the shape of the prior of $r_\d$ and we therefore adopt the same functional form \citepalias[appendix B1]{miller/etal:2013}. For the bulge scale-length prior,
the small numbers of bulge dominated galaxies in the \citet{Simard/etal:2002} catalogue prevent a robust determination. We continue to fix the half-light radius of the bulge component to be the exponential scale length of the disc component, as motivated in appendix A of \citetalias{miller/etal:2013}.
The change of the galaxy size prior is the only significant change in \emph{lens}fit as compared to the CFHTLenS analysis. While appropriate, its effect on the results is small: Hildebrandt et al. (in preparation) present an analysis of the RCSLenS survey where similar changes in the scale-length prior are shown not to impact the measured shear amplitudes by more than a few percent.
\begin{figure}
\putfig{figs/mask/stellar_halo_image.png}
\caption{ Left: Small and large stellar haloes, due to reflection from different pairs of surfaces within the VST.
Right: Same image, displaying the two types of halo masks demarcated by small and large circles.
The reflection halo centroids are offset from the star; relative to the centre of the
field-of-view, the small (large) halo centroids lie inwards (outwards).
The bright star in these images has an $r$ magnitude of $\sim$10. The large circle has a radius of 210\arcsec.
\label{fig:stellar_halo_image}
}
\end{figure}
\subsection{Masking of the KiDS images}
The masking of the $r$-band \textsc{Theli} reduction uses the \textsc{automask}
tool\footnote{\url{http://marvinweb.astro.uni-bonn.de/data_products/THELIWWW/automask.html}}
to generate automated masks, which come in three types.
`Void masks' indicate regions of high spurious object detection and/or a strong density
gradient in the object density distribution \citep[see][]{dietrich/etal:2007}. `Stellar masks'
are generated based on the standard stellar catalogues GSC-1
\citep[complete at the bright end,][]{lasker/etal:1996}
and UCAC4 \citep[complete from $r\simeq10$ to $\simeq16$,][]{zacharias/etal:2012}.
The stellar catalogues are used to mask the brighter stars as well as associated small and large
reflection haloes, using mask radii and centroid offsets that were derived empirically for OmegaCAM
as illustrated in Fig.~\ref{fig:stellar_halo_image}.
Finally, the `asteroid masks' flag asteroids and satellite trails.
The \textsc{automask} algorithms and procedures are described in more detail in \citet{erben/etal:2009}.
Fig.~\ref{fig:stellar_halo_signal} shows the effect of bright stars, grouped by magnitude, on the neighbouring
`source galaxies', defined here as objects with valid shape measurements.
The upper panel shows the relative source number density within the large reflection haloes as a function of the radial distance from the centre. The annular halo clearly results in a source count incompleteness out to $\sim 200$\arcsec\ from the halo centre, the severity of which increases with stellar magnitude.
The detection incompleteness is essentially identical whether the source objects are unweighted, or weighted
using the \emph{lens}fit weights, implying that the source density
count deficiency originates in the object detection stage.
The second panel of Fig.~\ref{fig:stellar_halo_signal} shows the tangential shear measured by \emph{lens}fit for objects detected within the large reflection haloes, as a function of the distance from the halo centre. In general this signal is found to be consistent with zero, on all scales, indicating that the local sky background subtraction performed by \emph{lens}fit removes any bias introduced by the haloes. The cross shear signal, not shown, is also consistent with zero. For the brightest stellar sources with $r<10.5$, however, there is a $\sim$2$\sigma$ coherent tangential ellipticity detected at the halo edges at $\sim 170$\arcsec, and on small scales $<50$\arcsec. For this reason we mask and remove the areas with reflection haloes from the scientific analyses. A similar analysis was also performed within the other, smaller halo seen in Fig.~\ref{fig:stellar_halo_image},
showing identical trends in source count incompleteness and shape coherence.
Based on this analysis, we define two reflection halo masks: a `conservative' mask, with a magnitude limit at $r=11.5$, to indicate the regions of source density incompleteness, and a `nominal' mask that flags regions where there are signs of a coherent shear ($r<10.5$). The stellar halo masks are based on both the GSC-1 and UCAC4 catalogues.
The lower panels of Fig.~\ref{fig:stellar_halo_signal} investigate source incompleteness and radial alignment
of source galaxies around the centres of the bright stars themselves, where no sources are detected within 10\arcsec\ of the star, as these pixels are typically saturated. Again we see that the incompleteness and shape
coherence depend strongly on the stellar magnitude, and the radial dependence of this effect determines the area masked around each star. All stars in the UCAC4 catalogue with $r<14.0$ are masked, with the masking radius (in \arcsec) determined from
the stellar magnitude as $R_{\rm mask} = 2.96 r^2 - 81.2r + 569$. For an example star of magnitude $r=11$, $R_{\rm mask} =34$\arcsec, thereby masking the full area within which a significant coherent negative tangential shear is measured.
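The mask-radius relation and the worked example can be checked directly; a sketch (the function name is hypothetical):

```python
def mask_radius_arcsec(r_mag):
    """Empirical stellar mask radius in arcsec, applied to UCAC4
    stars with r < 14.0 (quadratic relation from the text)."""
    if r_mag >= 14.0:
        return 0.0  # star not masked by this relation
    return 2.96 * r_mag**2 - 81.2 * r_mag + 569.0

# The r = 11 example star from the text gets a ~34 arcsec mask;
# brighter stars get progressively larger masks.
radius_11 = mask_radius_arcsec(11.0)
```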
\begin{figure}
\includegraphics[width=\hsize,trim=15 0 10 0]{figs/mask/stellar_halo_signals_D.png}
\caption{The impact of bright stars on source galaxy counts and galaxy shapes.
The upper two panels show the number count completeness and tangential shear measured within the large reflection haloes
as a function of the radial distance from the centre; the dashed vertical line indicates the 210\arcsec\ radius that is used
to mask these haloes. For the very brightest haloes, a coherent tangential alignment of $\sim$1--2 percent
can be seen at the edges of the large reflection halo, and on small scales.
The lower two panels show these same quantities as function of distance from the centre of bright stars.
We see two effects: a decrease in the source galaxy counts, and a strong, coherent radial
shape alignment immediately around the star, which can be removed from the sample by applying stellar masks.
\label{fig:stellar_halo_signal}
}
\end{figure}
The automatically generated masks were visually inspected, and additional manual masking was
performed if necessary. A number of the early observations are affected by
stray light from bright objects outside the field-of-view as a result of poor baffling of the telescope (see \citetalias{dejong/etal:2015} for some examples). Additionally, a number of missed asteroid and
satellite tracks were masked manually in the co-added image. Manual masking is also used to cover areas
of non-uniformity which the void automask had missed, or for additional stellar halo masks in cases
where the bright stellar catalogues are incomplete. The manual masking is then inspected by a
single person to check for uniformity.
In total, the automated masks, using the conservative halo reflection scheme, along with the manual masks from the lensing pipeline, remove 32 percent of the imaged area. With recent improvements at the VST to reduce scattered light, we anticipate that the masked area fraction will decrease in future analyses. For this first analysis of 109 square degrees of KiDS data that overlap with GAMA, the total unmasked area is $A=75.1$ square degrees.
\subsection{Effective number density of lensed galaxies}
\label{sec:shapecat}
In its current implementation, \emph{lens}fit is quite conservative when it comes to rejecting galaxies whose isophotes might be affected by neighbours. The final \emph{lens}fit shape catalogue contains a total of 2.2 million sources with non-zero lensing weight, with an average number density of 8.88 galaxies per square arcmin over the unmasked area $A$ of 75.1 square degrees. While this raw number density provides information about the number of resolved, relatively isolated galaxies, it does not represent the true statistical power of the survey. When weights are employed in the analysis to account for the increased uncertainty in the galaxy shape measurements of smaller or fainter objects, the effective number density is reduced.
\citet{chang/etal:2013} propose an effective number density defined as
\begin{equation}
n_\rmn{eff}=\frac{1}{A} \sum_i \frac{\sigma_\rmn{SN}^2}{\sigma_\rmn{SN}^2 + \sigma_{m,i}^2}\, ,
\end{equation}
where $\sigma_\rmn{SN}$ is the intrinsic ellipticity dispersion (`shape noise') and $\sigma_{m,i}$ is the measurement error for galaxy $i$. With this definition, $ n_\rmn{eff}$ represents the equivalent number density of high SNR, intrinsic shape-noise dominated sources with ellipticity dispersion $\sigma_\rmn{SN}$, that would yield a shear measurement of the same accuracy.
As the \emph{lens}fit weights are designed to be an inverse variance weight, $ w_i^{-1} \sim \sigma_\rmn{SN}^2 + \sigma_{m,i}^2$, with the intrinsic ellipticity dispersion fixed to a value $\sigma_\rmn{SN}=0.255$, we can estimate $n_\rmn{eff}$ as
\begin{equation}
n_\rmn{eff} \approx \sigma_\rmn{SN}^2 \frac{\sum_i w_i}{A} = 4.48\, \hbox{arcmin}^{-2} \, .
\end{equation}
The inverse shear variance per unit area, $\hat{w}$, that the survey provides is thus equal to
\begin{equation}
\hat{w} = \frac{\sum_i w_i}{A} = 69 \, \hbox{arcmin}^{-2} \, ,
\end{equation}
which corresponds to a 1-$\sigma$ shear uncertainty of $(\hat{w}N)^{-1/2}=0.12/\sqrt{N}$ when averaging over $N$ square arc\-minutes of survey.
While this definition is useful for forecasting, it makes a number of assumptions: that the shape noise and measurement noise are uncorrelated, that the estimated inverse variance weight is exact, that the intrinsic ellipticity dispersion does not evolve with redshift, and that it can be accurately measured from high-SNR imaging of low redshift galaxies.
\citetalias{heymans/etal:2012} propose an alternative definition of an effective number density defined as
\begin{equation}
n_\rmn{eff}^*= \frac{1}{A} \frac{(\sum_i w_i)^2}{\sum_i w_i^2} = 5.98\, \hbox{arcmin}^{-2} \, .
\end{equation}
With this definition, $ n_\rmn{eff}^*$ represents the equivalent number density of sources with unit weight and a total ellipticity dispersion per component of $\sigma_\epsilon$, that would yield a shear measurement of the same accuracy, where
\begin{equation}
\sigma_\epsilon^2 = \frac{1}{2}\frac{\sum_i w_i^2 \epsilon_i \epsilon_i^*}{\sum_i w_i^2} \, .
\end{equation}
For KiDS we measure $\sigma_\epsilon = 0.278$ per ellipticity component, which is very similar to the ellipticity dispersion measured in CFHTLenS. This definition is useful as it makes no assumptions about how the weight is defined. As the shot noise component for cosmic shear measurement scales with $\sigma^2/n_\rmn{eff}$, the difference between these two definitions for KiDS would change the expected shot noise error on a cosmic shear survey by $\sim 10$ percent.
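Both effective number density definitions reduce to simple sums over the \emph{lens}fit weights. A minimal Python sketch (function names are illustrative, not pipeline code; the weights are assumed to be the catalogue's inverse-variance weights):

```python
import numpy as np

def n_eff_chang(weights, area, sigma_sn=0.255):
    """Chang et al. (2013): n_eff = sigma_SN^2 * sum(w_i) / A, assuming
    inverse-variance weights w_i^-1 ~ sigma_SN^2 + sigma_m,i^2."""
    return sigma_sn**2 * np.sum(weights) / area

def n_eff_heymans(weights, area):
    """Heymans et al. (2012): n_eff* = (sum w_i)^2 / (A * sum w_i^2)."""
    return np.sum(weights)**2 / (area * np.sum(weights**2))

def w_hat(weights, area):
    """Inverse shear variance per unit area."""
    return np.sum(weights) / area
```

For unit weights both definitions recover the raw number density. Inserting the KiDS values quoted above gives $\sigma_\rmn{SN}^2/n_\rmn{eff}\approx 0.0145$ and $\sigma_\epsilon^2/n_\rmn{eff}^*\approx 0.0129\,\rmn{arcmin}^2$, the $\sim 10$ percent shot-noise difference quoted in the text.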
\section{K\lowercase{i}DS Photometry and Photometric Redshifts}
\label{sec:photom}
Without good redshift estimates any weak lensing data set is of limited use, as redshifts are required to determine the critical surface density that sets the physical scale for all lensing-based mass measurements. For the moment, KiDS photometric redshifts are derived from $ugri$ imaging, and are adequate for the first lensing science analyses from the survey (\citealt{viola/etal:2015}; \citealt{sifon/etal:2015}; van Uitert et al., in preparation). Combination with the VIKING near-IR flux measurements will be used to refine the redshifts further in future.
The colours of the galaxies are obtained with `Gaussian Aperture and PSF' (\textsc{GAaP}) photometry, a novel technique that is designed to account for PSF differences between observations in different filter bands while optimizing SNR. The procedure is summarized in \S\ref{sec:gaap} below, and described in detail in Appendix~\ref{app:gaap}.
We base our photometric redshifts on the \textsc{bpz} code of \citet{benitez:2000}. Further details are given in \S\ref{sec:pz} below. Alternative photometric redshift techniques based on machine-learning are also being investigated \citep{cavuoti/etal:2015}, but have not been integrated into the lensing analysis at this point.
\subsection{Data reduction}
\label{sec:AW_data_red}
The KiDS photometric redshifts are based on the co-added images provided in the public data releases. The processing from raw pixel data to these calibrated image stacks is performed with a version of the \textsc{Astro-WISE} pipeline \citep{mcfarland/etal:2013} tuned for KiDS data. We refer the reader to \citetalias{dejong/etal:2015} for a detailed description of all the steps.
There are some small differences between the \textsc{Theli} reduction of the $r$-band data described in \S\ref{sec:theli}, and the four-band \textsc{Astro-WISE} processing. The latter uses a single flat field per filter for the entire data set, since a dome-flat analysis shows that the peak-to-valley variations of the pixel sensitivity were less than 0.5 percent over the period during which the data were taken. Also, the $i$-band data require a de-fringing step, and different recipes are used to create the illumination correction maps (which are applied in pixel space), and the pixel masks that flag cosmic rays and hot/cold pixels. Satellite track removal is automatic (currently implemented on a per-CCD basis). Finally, background structure from shadows cast by scattered light hitting the shields that cover the CCD bond wires is subtracted separately in a line-by-line background removal procedure. All images are visually inspected and masked if necessary before release.
Photometric calibration starts with zero points derived per CCD from nightly standard field observations, tied to SDSS DR8 PSF magnitudes of stars \citep{aihara/etal:2011}. The calibration uses a fixed aperture (6.3\arcsec\ diameter) not corrected for flux losses. Magnitudes are expressed in AB in the instrumental system. For $g$, $r$ and $i$ the photometry is homogenized across all CCDs and dithers for each survey tile individually. In $u$-band the smaller source density often provides insufficient information for this scheme. The resulting photometry is homogeneous within two percent per tile and filter. Due to the rather fragmented distribution of observed tiles in the first two data releases, no global photometric calibration over the whole survey is feasible yet, resulting in random offsets in the absolute zero points of the individual tiles thus obtained. For the GAMA tiles, which overlap with SDSS, we correct these offsets after the fact. Detailed analysis and statistics of the photometric calibration are presented in \citetalias{dejong/etal:2015}.
A global astrometric calibration combining all CCDs and dithers is calculated per filter for each tile using a second order polynomial.
The de-trended sub-exposures are then re-gridded to a 0.2\arcsec\ pixel scale, photometrically scaled, and co-added to produce the image stacks.
\subsection[Gaussian aperture and PSF photometry (GAaP)]
{Gaussian aperture and PSF photometry (GA{\sevensize A}P)}
\label{sec:gaap}
Photometric redshifts of galaxies require accurate colour measurements. These colours do not need to describe the total light from the galaxy, but they should represent the ratio of the fluxes from the same part of the galaxy in different filter bands. This means that we can optimize SNR by measuring the colours of the brighter, central regions of galaxies without the need to include the noise-dominated low surface brightness outskirts.
Such aperture photometry is complicated by the fact that the PSF is not constant: it varies from sub-exposure to sub-exposure, with position in each image, and with wavelength.
We correct for PSF variations in two steps. First, we homogenize the PSF within each co-added image to a Gaussian shape without significantly degrading the seeing. The resulting images contain most of the information that is present in the original stacks, with a simpler PSF but correlated noise between neighbouring pixels. Second, we perform aperture photometry using elliptical Gaussian aperture weight functions, and correct analytically for the seeing differences.
In brief, the PSF Gaussianization of each KiDS tile consists of the following steps:
\begin{enumerate}
\item
We model high-SNR stars in the co-added image with a shapelet expansion \citep{refregier:2003}, using the pixel-fitting method described in \citet{kuijken:2006}. This formalism provides a natural and mathematically convenient framework for PSF modelling and image convolutions. The scale radius (i.e., size of the parent Gaussian in the shapelet expansion) of the shapelets is matched to the worst seeing found in the individual sub-exposures making up the co-added image for each filter.
\item
We then derive a PSF map by fitting the variation of the shapelet coefficients across the image, using polynomials.
\item
We construct a grid of kernels that yield a Gaussian when convolved with the model PSF, also expressed in the shapelets formalism. The size of the `target' Gaussian is set by the shapelet scale chosen in step (i). We fit the spatial variation of these kernels' coefficients using polynomials, resulting in a kernel map.
\item
Each co-added image is convolved with its kernel map.
\item
The shapes of the PSF stars on this PSF-Gaussianized image are modelled once again with a shapelet expansion, but now using a larger scale radius in order to measure residual flux at large radii. A map of the residual PSF non-Gaussianities is then made as above, and used to make a perturbative correction to the Gaussianized image to improve the PSF Gaussianity further.
\item
As a result of the convolution (and to a lesser extent, also from the preceding re-gridding before co-addition) the noise in these images is correlated on small scales. We keep track of the noise covariance matrix during the Gaussianization, and account for it in the photometric measurements.
\end{enumerate}
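The shapelet algebra of steps (i)--(v) is beyond a short snippet, but the core of steps (iii)--(iv), finding a kernel that convolves the measured PSF to a Gaussian, can be illustrated in Fourier space, where convolution becomes division. The following Python sketch is a simplified, regularized stand-in for the shapelet-based kernels (not the pipeline implementation), and assumes a well-sampled, noise-free PSF image:

```python
import numpy as np

def gaussian_image(n, sigma):
    """Centred, unit-flux Gaussian on an n x n grid."""
    x = np.arange(n) - n // 2
    xx, yy = np.meshgrid(x, x, indexing="ij")
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return g / g.sum()

def gaussianization_kernel(psf, target_sigma, eps=1e-8):
    """Kernel k such that psf (*) k is approximately a Gaussian of width
    target_sigma (pixels): in Fourier space K = G_target / P, with a
    floor eps to regularize frequencies where the PSF has no power."""
    n = psf.shape[0]
    f = np.fft.fftfreq(n)
    fx, fy = np.meshgrid(f, f, indexing="ij")
    target_ft = np.exp(-2.0 * np.pi**2 * target_sigma**2 * (fx**2 + fy**2))
    psf_ft = np.fft.fft2(np.fft.ifftshift(psf))
    safe_psf_ft = np.where(np.abs(psf_ft) > eps, psf_ft, eps)
    kernel_ft = target_ft / safe_psf_ft
    return np.real(np.fft.fftshift(np.fft.ifft2(kernel_ft)))

# A sigma = 2 pixel 'PSF' and the kernel mapping it to sigma = 3 pixels;
# the target width must exceed the worst seeing, as in step (i).
psf = gaussian_image(64, 2.0)
kernel = gaussianization_kernel(psf, 3.0)
```

In the pipeline the kernel coefficients, rather than the kernels themselves, are fitted with spatial polynomials, so that a single smooth kernel map can be applied across the co-added image.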
The \textsc{GAaP} photometry is performed from these PSF-Gaussianized, co-added images for all sources in the $r$-band \textsc{Theli}-\emph{lens}fit catalogue. First we pick an elliptical Gaussian aperture for each source, with aperture size, shape and orientation chosen to optimize the SNR of the fluxes, based on the pre-Gaussianization $r$-band image. For major and minor axis lengths $a$ and $b$, and orientation $\alpha$ with respect to the pixel coordinate grid, we construct an `aperture matrix'
\begin{equation}
\mat{W}=
\left(\!\!
\begin{array}{cc}
a^2\cos^2\alpha+b^2\sin^2\alpha & (a^2-b^2)\sin\alpha\cos\alpha\\
(a^2-b^2)\sin\alpha\cos\alpha & a^2\sin^2\alpha+b^2\cos^2\alpha
\end{array}
\!\!\right),
\end{equation}
which in turn is used to define the \textsc{GAaP} flux $F_\mat{W}$ as the Gaussian-weighted aperture flux of the \emph{pre-seeing} image of the source, $I_\rmn{pre}(\vec{x})$:
\begin{equation}
F_\mat{W}
\equiv
\int\d\vec{x}\ I_\rmn{pre}(\vec{x}){\rmn e}^{-\frac12\vec{x}^\mat{T} \mat{W}^{-1}\vec{x}} \, .
\label{eq:FW}
\end{equation}
$F_\mat{W}$ is well-defined and manifestly PSF-independent, but since it is defined in terms of the pre-seeing image it is a theoretical construct. However, it is possible to measure this quantity from a Gaussian-smoothed image $I_\rmn{G}= I_\rmn{pre} \otimes G$ (where $G$ is a Gaussian PSF of dispersion $p$ and $\otimes$ denotes convolution) using the identity
\begin{equation}
F_\mat{W}=
\frac{\det(\mat{W})^{1/2}}{\det(\mat{W}-p^2\mat{1})^{1/2}}
\int\d\vec{x}\ I_\rmn{G}(\vec{x}){\rmn e}^{-\frac12\vec{x}^\mat{T} (\mat{W}-p^2\mat{1})^{-1}\vec{x}}\, ,
\end{equation}
which is valid for any PSF size $p<a,b$ (i.e., as long as the aperture is larger than the PSF).
$\mat{1}$ denotes the identity matrix.
For a given source, provided the same aperture matrix $\mat{W}$ is used for all bands, Eq.~\ref{eq:FW} shows that this technique returns fluxes that weight different parts of the source consistently.
A detailed description of the PSF Gaussianization pipeline, propagation of the noise correlation due to the convolution, and a discussion and derivation of the \textsc{GAaP} flux formalism, may be found in Appendix~\ref{app:gaap}. We stress that these aperture magnitudes are not designed to be total magnitudes.
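The aperture matrix and the seeing correction can be written down compactly. A minimal numerical sketch (illustrative only; function names are hypothetical and the pipeline implementation differs) that measures the PSF-independent flux from a Gaussianized image:

```python
import numpy as np

def aperture_matrix(a, b, alpha):
    """Aperture matrix W for major/minor axes a, b and position angle alpha."""
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[a**2 * c**2 + b**2 * s**2, (a**2 - b**2) * s * c],
                     [(a**2 - b**2) * s * c, a**2 * s**2 + b**2 * c**2]])

def gaap_flux(image, x0, y0, W, p):
    """GAaP flux from a PSF-Gaussianized image with PSF dispersion p:
    the Gaussian aperture is shrunk to W - p^2 I and the result is
    rescaled by sqrt(det W / det(W - p^2 I)).  Requires p < a, b."""
    Wp = W - p**2 * np.eye(2)
    Winv = np.linalg.inv(Wp)
    ny, nx = image.shape
    dx, dy = np.meshgrid(np.arange(nx) - x0, np.arange(ny) - y0)
    q = Winv[0, 0] * dx**2 + 2.0 * Winv[0, 1] * dx * dy + Winv[1, 1] * dy**2
    norm = np.sqrt(np.linalg.det(W) / np.linalg.det(Wp))
    return norm * np.sum(image * np.exp(-0.5 * q))
```

For a point source the sketch recovers the total flux exactly, independent of $p$, which is the sense in which the measurement is PSF-independent.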
\subsection{Photometric calibration}
\label{sec:photcal}
As described above, the photometric zero points of the co-added images used for the current analysis are calibrated based on nightly standard star field observations, and no global photometric calibration is included. To improve these absolute zero points, a cross-calibration to SDSS is done before the derivation of the photometric redshifts.
We calibrate against the eighth data release of the SDSS \citep{aihara/etal:2011}, which represents the complete SDSS imaging and fully overlaps with the KiDS-GAMA fields. Stars are selected from SDSS and matched to the KiDS multi-colour catalogues. We choose a magnitude range where the OmegaCAM sub-exposures are not saturated and SDSS photometry is sufficiently precise. Over this range we average the differences in the photometry between our \textsc{GAaP} measurements and the SDSS PSF magnitudes in all four bands ($ugri$). We find no trend with magnitude, confirming that the difference is a pure zero point offset.
The distribution of the differences for all 114 fields is similar to the one shown in \citetalias{dejong/etal:2015}. We find the mean offset to be consistent with zero in the $g$-band, and offsets of $\sim$0.02\,mag, $\sim$0.05\,mag, and $\sim$0.06\,mag in the $r$-, $i$-, and $u$-bands, respectively. Field-to-field scatter is in the range 2.5--5 percent. The offsets are applied to each field globally, relying on the photometric stability of SDSS and the KiDS illumination correction. All subsequent analysis is based on these re-calibrated magnitudes.
\subsection{Photometric redshifts}
\label{sec:pz}
The KiDS photometric redshift estimates are obtained following the methods used for CFHTLenS \citep{hildebrandt/etal:2012}. We use the Bayesian photometric redshift code \textsc{bpz} \citep{benitez:2000}, a spectral template-fitting code, together with the re-calibrated template set of \citet{Capak:2004}.
To assess the accuracy of our photometric redshifts, we also produce stacks from VST data in two fields with deep spectroscopic coverage, the Chandra Deep Field South (CDFS) and the COSMOS field. These data were taken under the VOICE \citep{decicco/etal:2015} project. Total exposure times in these fields are much longer than for typical KiDS observations, but individual sub-exposures are similar to those from KiDS, allowing us to produce stacks with similar depth and seeing as a typical KiDS field. We extract catalogues and photometric redshifts in the same way as for the KiDS tiles, and then match the resulting photometric catalogues with the combined CDFS spectroscopic catalogue\footnote{\url{http://www.eso.org/sci/activities/garching/projects/goods/MasterSpectroscopy.html}} and a deep zCOSMOS catalogue (zCOSMOS team, private communication). In the following we compare the KiDS photometric redshifts to the high-confidence spectroscopic redshifts from these catalogues.
\begin{figure}
\putfig{figs/photo-z/r-number-counts.pdf}\\
\caption{Number counts in the $r$ band of the lensing catalogue (blue, weighted by \emph{lens}fit weight) and the spectroscopic catalogue (red, unweighted).}
\label{fig:photo-z_numbercounts}
\end{figure}
Fig.~\ref{fig:photo-z_numbercounts} shows the $r$-band magnitude number counts of the lensing catalogue (weighted by the \emph{lens}fit weight, see Sect.~\ref{sec:shapes}) and the spectroscopic matches (unweighted). This deep spectroscopic sample spans the full magnitude range of the lensing sample, with broadly similar distribution, and therefore we do not apply any further weighting. This is also the reason why we concentrate on the zCOSMOS and CDFS fields here. Adding in the numerous bright spectroscopic redshifts from SDSS and GAMA would not add significant information about the performance of the photometric redshifts of the faint KiDS sources.
\begin{figure}
\putfig{figs/photo-z/KiDS_specz_deepmaglim_r19r24_flag34_mask_ZB_zspec_contours.pdf}\\
\caption{Photometric redshift vs. spectroscopic redshift in the CDFS and COSMOS fields for objects with $19<r<24$. Contours are spaced in 0.5-$\sigma$ intervals with the outermost contour corresponding to the 2-$\sigma$ level. Photo-$z$ are estimated from four-band $ugri$ data from the VOICE project in the two fields, stacked so as to approximate the KiDS depth and seeing. Spec-$z$ are from the combined ESO CDFS catalogue and a deep zCOSMOS catalogue. For this sample we find a photo-$z$ scatter of 0.054 after rejecting 11 percent of the galaxies as outliers. The photo-$z$ bias for this sample is 0.01.}
\label{fig:zz}
\end{figure}
\begin{figure}
\putfig{figs/photo-z/KiDS_specz_deepmaglim_z_phot_stats4_flag3_err.pdf}\\
\caption{Statistics of the photometric vs.\ spectroscopic redshift discrepancy $\Delta z$ as a function of $r$-band magnitude in the CDFS and COSMOS fields. From top to bottom: clipped RMS dispersion, outlier fraction, average offset, and fraction of galaxies in each given ODDS cut (normalized to the total).}
\label{fig:photo-z_stats_mag}
\end{figure}
\begin{figure}
\putfig{figs/photo-z/KiDS_specz_deepmaglim_z_phot_stats4_19ltrlt24_flag3_z_err.pdf}\\
\caption{As Fig.~\ref{fig:photo-z_stats_mag}, but plotted as a function of photometric redshift $z_\rmn{B}$.}
\label{fig:photo-z_stats_z}
\end{figure}
A straight comparison of the Bayesian photometric redshifts, $z_\rmn{B}$, and the spectroscopic redshifts, $z_{\rm spec}$, is shown in Fig.~\ref{fig:zz}. To quantify the level of agreement, we characterize the photometric redshift of each galaxy by the relative error
\begin{equation}
\Delta z=\frac{z_{\rm B}-z_{\rm spec}}{1+z_{\rm spec}}\,,
\end{equation}
and plot its statistics in bins of magnitude and redshift in Figs.~\ref{fig:photo-z_stats_mag} and \ref{fig:photo-z_stats_z}, respectively.
We use the mean of $\Delta z$ as a measure of the photometric redshift bias, the fraction of objects with $|\Delta z|>0.15$ as the outlier rate, and the RMS scatter after rejection of the outliers as the dispersion. We show the statistics for different cuts on the \textsc{bpz} ODDS parameter \citep[see][]{benitez:2000}, which is a measure of the unimodality of a galaxy's posterior redshift distribution. Cutting on ODDS usually leads to slightly better photometric redshifts at the expense of losing objects. This is reflected in the completeness fraction, plotted in the bottom panels of Figs.~\ref{fig:photo-z_stats_mag} and \ref{fig:photo-z_stats_z}.
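These point-estimate statistics are straightforward to compute from matched photometric and spectroscopic catalogues. A compact Python sketch of the definitions above (the helper name is illustrative, not pipeline code):

```python
import numpy as np

def photoz_stats(z_b, z_spec, outlier_cut=0.15):
    """Photo-z quality statistics: bias = mean(dz), outlier rate =
    fraction with |dz| > outlier_cut, dispersion = RMS of dz after
    outlier rejection, where dz = (z_B - z_spec) / (1 + z_spec)."""
    dz = (np.asarray(z_b) - np.asarray(z_spec)) / (1.0 + np.asarray(z_spec))
    outliers = np.abs(dz) > outlier_cut
    bias = dz.mean()
    outlier_rate = outliers.mean()
    dispersion = np.sqrt(np.mean(dz[~outliers] ** 2))
    return bias, outlier_rate, dispersion
```

In practice the same statistics are evaluated in bins of magnitude or of $z_\rmn{B}$, and for samples selected by ODDS, as in the figures.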
These tests check for the accuracy of the photometric redshift point estimates. Such point estimates can be used to select galaxies in certain redshift regions, to define tomographic redshift bins, and to distinguish between foreground and background galaxies in different lensing applications. The modelling of the lensing measurement, however, makes use of the full photometric redshift posterior probability distributions $p(z)$ that \textsc{bpz} estimates for each galaxy, and in that sense $p(z)$ is the more crucial quantity for the weak lensing science goals.
We have checked that the summed $p(z)$ posteriors of the galaxies plotted in Fig.~\ref{fig:zz} agree well with their spectroscopic redshift distribution provided we exclude galaxies whose $z_\rmn{B}$ values lie at the extremes of the redshift distribution of the spectroscopic calibration sample.
After some experimentation, based on these results as well as on Fig.~\ref{fig:photo-z_stats_z}, we cut our galaxy catalogue at $0.005<z_{\rm B}<1.2$ in all lensing analyses.
Detailed characterization and testing of the $p(z)$ will be presented in forthcoming papers (Choi et al. in prep., Hildebrandt et al. in prep.).
\begin{figure*}
\putfig{figs/photo-z/kids_gama_9june2015.pdf}\hfill
\caption{Angular cross-correlations between KiDS galaxies binned by photometric redshift, and GAMA galaxies binned by spectroscopic redshift. Spectroscopic redshifts increase from top to bottom, and photometric redshifts increase from left to right.}
\label{fig:photo-z_spec-z_cross}
\end{figure*}
\subsection{Galaxy clustering analysis}
As a further test of our photometric redshifts, following \citet{newman/:2008} we calculate the angular cross-correlation of the positions of GAMA and KiDS galaxies on the sky, grouped by spectroscopic (GAMA) and photometric (KiDS) redshifts. Galaxies that are physically close will produce a strong clustering signal, and hence this measurement can validate photometric redshift estimates.
GAMA is a highly complete spectroscopic survey down to a limiting magnitude of $r<19.8$, measuring redshifts out to $z_{\rm spec}=0.5$. We group the GAMA galaxies into five redshift bins $i$ of width $\Delta z_{\rm spec} \simeq 0.1$. We limit the KiDS galaxies to $r<24$, and group them into eight photometric redshift bins $j$, listed in Fig.~\ref{fig:photo-z_spec-z_cross}. The photometric redshifts extend beyond the GAMA redshift range to $z_{\rm B}=1.2$. The projected angular clustering statistic $w_\rmn{gg}^{ij}(\theta)$, between spectroscopic bins $i$ and photometric bins $j$, is then estimated using the \citet{landy/szalay:1993} estimator by means of the \textsc{athena} code \citep{kilbinger/bonnett/coupon:2014}. Errors are calculated using a jackknife analysis. We focus on angular scales $1\arcmin< \theta < 30\arcmin$, where the upper angular scale is set by signal-to-noise constraints, and the lower angular scale is chosen to reduce the impact of scale dependent galaxy bias on the measurements \citep{schulz/:2010}. The results are shown in Fig.~\ref{fig:photo-z_spec-z_cross} with the spectroscopic redshift bin $i$ increasing from top to bottom, and the photometric redshift bin $j$ increasing from left to right.
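The estimator itself is conceptually simple. A brute-force Python sketch of the \citet{landy/szalay:1993} cross-correlation (naive $O(N^2)$ pair counting over toy catalogues; the analysis above uses the \textsc{athena} tree code for efficiency, and function names here are illustrative):

```python
import numpy as np

def pair_counts(ra1, dec1, ra2, dec2, bins):
    """Brute-force angular pair counts between two catalogues (degrees)."""
    d2r = np.pi / 180.0
    cos_sep = (np.sin(dec1 * d2r)[:, None] * np.sin(dec2 * d2r)[None, :]
               + np.cos(dec1 * d2r)[:, None] * np.cos(dec2 * d2r)[None, :]
               * np.cos((ra1[:, None] - ra2[None, :]) * d2r))
    sep = np.degrees(np.arccos(np.clip(cos_sep, -1.0, 1.0)))
    return np.histogram(sep.ravel(), bins=bins)[0].astype(float)

def landy_szalay_cross(data_i, data_j, rand_i, rand_j, bins):
    """Landy-Szalay cross-correlation w(theta) = (DD - DR - RD + RR) / RR,
    with pair counts normalized by the catalogue sizes.  Each argument is
    an (ra, dec) tuple of arrays in degrees."""
    def norm_counts(a, b):
        return pair_counts(a[0], a[1], b[0], b[1], bins) / (len(a[0]) * len(b[0]))
    dd = norm_counts(data_i, data_j)
    dr = norm_counts(data_i, rand_j)
    rd = norm_counts(rand_i, data_j)
    rr = norm_counts(rand_i, rand_j)
    return (dd - dr - rd + rr) / rr
```

By construction the estimator vanishes for unclustered data, so non-zero signal between a spectroscopic and a photometric bin indicates genuine redshift overlap.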
The strongest angular clustering is found when the spectroscopic and photometric redshift samples span the same redshift range, which can be seen along the `diagonal' of Fig.~\ref{fig:photo-z_spec-z_cross} where $i=j$. This is anticipated if there is no significant bias in the photometric redshift measurement $z_{\rm B}$. As the photometric redshifts have an associated scatter, we also see clustering between adjacent spectroscopic and photometric redshift bins. With the exception of the $0.2<z_{\rm B}<0.3$ bin, we find non-zero clustering only in matching or adjacent bins, implying that the photometric redshift scatter is less than the spectroscopic bin width $\Delta z = 0.1$. This is consistent with the analysis presented in \S\ref{sec:pz}, which found a scatter $\sigma_z < 0.08$ out to $z_{\rm B}=1.2$.
A correlation between the positions of galaxies in widely separated redshift bins would indicate the presence of catastrophic errors in the KiDS photometric redshifts. We see this to some extent in the non-zero clustering measured between the $0.2<z_{\rm B}<0.3$ and $0.4<z_{\rm spec}<0.5$ galaxy samples, indicating that a small fraction of the photometric redshifts in this bin are actually at a higher redshift. This measurement could be used to infer the true redshift distribution of this galaxy sample \citep[see for example][which is beyond the scope of this paper]{mcquinn/white:2013}. For all other photometric redshift bins, we find the clustering signal to be consistent with zero for all bin combinations separated by $\Delta z=0.1$ or more. We can therefore conclude that the fraction of `catastrophic outliers' is low, in agreement with the direct spectroscopic-photometric redshift comparison presented in \S\ref{sec:pz}.
We consider this analysis as a validation of our redshift estimates. A similar conclusion is drawn from the analysis of the cross-correlation between different photometric redshift bins of KiDS galaxies, presented in \citetalias{dejong/etal:2015}, which extends the cross-correlation analysis beyond redshift $z=0.5$, a regime that cannot be probed with the GAMA catalogues.
\subsection{The combined shear-photometric redshift catalogue}
\label{sec:photzsample}
In \S\ref{sec:pz} we defined a photometric redshift selection criterion $0.005<z_{\rm B}<1.2$ to ensure a good level of accuracy in the photometric redshifts. We now combine that redshift selection with the shape measurement analysis by also selecting galaxies with a \emph{lens}fit weight $w>0$ \citepalias[this cut excludes all galaxies for which no shape measurement was obtained, see][]{miller/etal:2013}. The upper panel of Fig.~\ref{fig:nofz} compares three redshift distributions for this sample of galaxies, showing the distribution of the $z_{\rm B}$ point estimates of the photometric redshift, and the weighted and unweighted sums of the associated posterior distributions $p(z)$. The weighted distribution, plotted as the thick solid line, is the one most relevant for our analysis: it is the effective redshift distribution of the lensing information, and has a median redshift of $z_m=0.53$.
The weights used in the lensing analysis favour higher SNR galaxies which are typically at lower redshift in this flux-limited survey, and hence the weighted median redshift is lower than that of the unweighted sample (which has $z_m=0.63$). Indeed if the shape measurement criterion $w>0$ had not been applied, the unweighted median redshift would be even higher with $z_m=0.66$. This is illustrated in the lower panel of Fig.~\ref{fig:nofz}, which shows the effective redshift distribution for galaxies with different \textsc{bpz} ODDS parameters: the more precise photometric redshifts, with high ODDS, also tend to be at lower redshifts (e.g., the weighted median redshift for galaxies with ODDS$>0.9$ is 0.43).
As the ODDS value decreases, so does the accuracy of each individual photometric redshift, owing to multiple peaks in each galaxy's posterior distribution that result from degeneracies in the redshift solution. In the stacked posterior shown in Fig.~\ref{fig:nofz}, these degeneracies are responsible for the shape of the distribution at the peak.
Fig.~\ref{fig:nofz} illustrates the importance of using the full posteriors $p(z)$ instead of the best-fit photometric redshifts $z_\rmn{B}$ to define the survey redshift distribution. The point estimates are more prone to artefacts associated with the particular filter set used. They also do not reflect the full information content of the photometry. As an illustration of how using $z_\rmn{B}$ could bias a lensing analysis, Fig.~\ref{fig:beta} shows the measured angular diameter distance ratio $D_\rmn{ls}/D_\rmn{s}$ for a lens at redshift $z_\rmn{l}=0.25$. Using $z_\rmn{B}$ or $p(z)$ to determine the redshift distribution of the background lensed sources changes the average distance ratio by $\sim 10$ percent. As the distance ratio defines the lensing efficiency of sources at different redshifts, using $z_\rmn{B}$ instead of $p(z)$ would result in an underestimate of the lensing surface mass density by $\sim 10$ percent.
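The comparison in Fig.~\ref{fig:beta} can be reproduced schematically: average $D_\rmn{ls}/D_\rmn{s}$ over a source redshift distribution, using either the stacked $p(z)$ or a histogram of the $z_\rmn{B}$ point estimates as the weight. A minimal sketch, assuming a flat $\Lambda$CDM cosmology with illustrative $\Omega_\rmn{m}=0.3$ (function names are hypothetical):

```python
import numpy as np

C_H0 = 2997.9  # Hubble distance c/H0 in Mpc/h

def comoving_distance(z, om=0.3):
    """Flat LambdaCDM comoving distance (Mpc/h), trapezoidal integration."""
    zz = np.linspace(0.0, z, 513)
    ez = np.sqrt(om * (1.0 + zz)**3 + (1.0 - om))
    f = 1.0 / ez
    dz = zz[1] - zz[0]
    return C_H0 * dz * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])

def distance_ratio(z_l, z_s):
    """D_ls/D_s in a flat universe; zero for sources in front of the lens."""
    if z_s <= z_l:
        return 0.0
    chi_l = comoving_distance(z_l)
    chi_s = comoving_distance(z_s)
    return (chi_s - chi_l) / chi_s

def mean_distance_ratio(z_grid, pz, z_l=0.25):
    """<D_ls/D_s> weighted by a redshift distribution p(z) on z_grid."""
    ratios = np.array([distance_ratio(z_l, z) for z in z_grid])
    return np.sum(pz * ratios) / np.sum(pz)
```

Feeding in the broad $p(z)$ versus the narrower $z_\rmn{B}$ histogram shifts the average ratio, which is the bias illustrated in the figure.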
\begin{figure}
\includegraphics[width=\hsize,trim=2mm 0 11mm 10mm]{figs/photo-z/nz.pdf}
\includegraphics[width=\hsize,trim=0 0 0 0]{figs/photo-z/nz_waterfall.pdf}
\caption{The galaxy photometric redshift distribution. Upper panel: summed posterior redshift distributions $n(z)$, with (solid line) and without (dashed line) weighting by the \emph{lens}fit weight. The effective median redshift of the lensing survey is $z_\rmn{m}=0.53$. The histogram shows the distribution of the $z_\rmn{B}$ point estimates of the photometric redshift.
Lower panel: the lensing weighted posterior $n(z)$ distributions of galaxies in progressively lower ODDS categories (see text).}
\label{fig:nofz}
\end{figure}
\begin{figure}
\putfig{figs/photo-z/beta.pdf}
\caption{Effect of using the full photometric redshift posterior $p(z)$, or the point estimate $z_\rmn{B}$, to determine the angular diameter distance ratio $D_\rmn{ls}/D_\rmn{s}$ for a lens galaxy at redshift $z_\rmn{l} = 0.25$. The average distance ratio $D_\rmn{ls}/D_\rmn{s}$ sets the lensing efficiency and differs by 10 percent depending on which redshift measure is used (dashed lines).}
\label{fig:beta}
\end{figure}
\section{Tests for systematic errors in the K\lowercase{i}DS lensing catalogue}
\label{sec:sys}
Different science cases require different levels of accuracy in the shear and photometric redshift catalogues. It is common to model calibration corrections to shear measurement in terms of a multiplicative term $m$ and additive terms $c_k$ such that
\begin{equation}
\epsilon_k^{\rm obs} = (1+m) \epsilon_k^{\rm true} + c_k \, , \qquad (k=1,2)\, ,
\end{equation}
where $\epsilon_k^{\rm obs}$ are the observed ellipticity parameters, and $\epsilon_k^{\rm true}$ the true galaxy ellipticity parameters \citep{heymans/etal:2006}.
\citet{massey/etal:2013} present a compilation of possible sources of such correction terms, and calculate requirements on their amplitudes for different kinds of analysis.
In an ideal shape measurement method, both $m$ and $c_k$ would be zero. In reality, however, these corrections need to be determined so that the data can be calibrated, and systematics tests must then be performed to ensure the calibration is robust.
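Applying the calibration in reverse is usually done by subtracting the additive term per galaxy while correcting the multiplicative term on the weighted ensemble average, since dividing each noisy ellipticity by its own $(1+m)$ would itself bias the estimator. A schematic Python version of such a corrected mean shear (illustrative, not the pipeline code):

```python
import numpy as np

def calibrated_mean_shear(eps_obs, m, c, w):
    """Weighted mean shear under the model eps_obs = (1+m)*eps_true + c:
    the additive term c is removed per galaxy; the multiplicative term m
    is corrected through the weighted ensemble average of (1 + m)."""
    return np.sum(w * (eps_obs - c)) / np.sum(w * (1.0 + m))
```

For constant $m$ and $c$ this recovers the mean of the true ellipticities exactly; for per-object corrections it remains unbiased in the ensemble average.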
Our first series of lensing science papers measure shear-position correlation statistics, also known as galaxy-galaxy lensing, where the tangential shear of background galaxies is determined relative to the position of foreground structures. As this measurement is taken as an azimuthal average, it is very insensitive to additive correction terms $c_k$ except on scales comparable to the survey boundaries. It is, however, very sensitive to the accuracy of the measured multiplicative calibration $m$, an error which leads directly to a bias in the mass determined from the lensing measurement. Furthermore, these measurements rely on a good knowledge of the photometric redshift distribution to determine the level of foreground contamination in the background source sample and hence the level of dilution expected in the measured lensing signal. In this section we therefore first describe the analysis done to validate the multiplicative calibration $m$ used, and then verify that the redshift scaling of the galaxy-galaxy lensing signal is consistent with the expectation based on the photometric redshift error distributions.
In this technical paper we also present the first demonstration of the suitability of the data for cosmological measurements through two-point shear statistics. Such an analysis places more stringent requirements on the accuracy of the shear catalogue, in particular the additive corrections $c_k$. We therefore perform an additional set of tests, following \citetalias{heymans/etal:2012}, first selecting fields where the cross-correlation between the measured shear signal and the PSF pattern is consistent with zero systematics. We then empirically determine the $c_k$ terms from the remaining data.
\begin{figure}
\includegraphics[width=\hsize]{figs/shear/size_SNR_ellip_dist.png}
\caption{Comparison of the observed properties of galaxies in the image simulations from \citet{miller/etal:2013} (thin lines) to the observed properties of galaxies in KiDS (thick lines). The upper panels compare the signal-to-noise ratio (SNR) distributions in bins of increasing galaxy size (in arcseconds). The ellipticity distributions can be compared as a function of galaxy size (middle panels) and SNR (lower panels).}
\label{fig:sim_data_comp}
\end{figure}
\subsection{Multiplicative calibration}
The multiplicative calibration term $m$ can only be determined through the analysis of image simulations where the true galaxy shapes are known. \citetalias{miller/etal:2013} describe the CFHT MegaCam image simulations against which \emph{lens}fit was calibrated extensively in the CFHTLenS analysis. The primary aim of these simulations was to correct for noise bias \citep{hirata/etal:2004,refregier/etal:2012,melchior/viola:2012}.
On average the noise bias resulted in a $\sim5$ percent correction to the measured shear, with more significant corrections for smaller, fainter galaxies. This analysis provided a calibration correction that depends on the \emph{lens}fit parameters SNR and size $r_\d$ as
\begin{equation}
m(\hbox{SNR}, r_\d) = \frac{\beta}{\log_{10} \hbox{SNR}} \exp(-\alpha \, r_\d \, \hbox{SNR})\, ,
\label{eqn:mcalmod}
\end{equation}
with $\alpha=0.306\,\rmn{arcsec}^{-1}$ and $\beta=-0.37$.
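The calibration model can be evaluated directly; a one-line Python sketch with the quoted parameter values (function name illustrative):

```python
import numpy as np

ALPHA = 0.306  # arcsec^-1
BETA = -0.37

def m_calibration(snr, r_d):
    """Noise-bias correction m(SNR, r_d) = beta/log10(SNR) * exp(-alpha*r_d*SNR),
    with the lensfit galaxy size r_d in arcsec."""
    return BETA / np.log10(snr) * np.exp(-ALPHA * r_d * snr)
```

With $\beta$ negative, $m<0$: noise bias suppresses the measured shear, most strongly for small, low-SNR galaxies, and the correction decays rapidly as $r_\d \times \hbox{SNR}$ grows.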
The {\it r}-band KiDS VST-OmegaCAM imaging differs from the simulated {\it i}-band CFHT MegaCam imaging in a few key respects. The pixel scales differ: $\theta_{\rm pix}=0.213\arcsec$ for OmegaCAM and $\theta_{\rm pix}=0.186\arcsec$ for MegaCam. The KiDS data are shallower than CFHTLenS, and while the mean PSF FWHM values for the two sets of lensing data are the same (0.64\arcsec), the average KiDS PSF ellipticity is $\sim 15$ percent smaller than the average CFHTLenS PSF. We verify in two different ways that this CFHTLenS correction is suitable to use for KiDS: (i) using a re-sampling technique such that the simulated catalogues better match the KiDS data, and (ii) by comparing the galaxy-galaxy lensing signal around bright galaxies in CFHTLenS and KiDS for progressively fainter source samples.
\begin{figure*}
\putfig{figs/shear/Signal_4.pdf}
\caption{Four panels on the left: KiDS vs.\ CFHTLenS comparison of the average tangential shear around galaxies with $20<r<21$ measured from progressively fainter source populations. The insets show the average multiplicative correction factor, as derived from Eq.~\ref{eqn:mcalmod} and applied to the plotted measurements. Right-hand panel: bin by bin comparison of the shear values for $\theta$ between 1 and 20 arcmin. The red line shows the best-fit linear regression, and the grey zone the corresponding 1-$\sigma$ uncertainty (errors on both axes are taken into account).}
\label{fig:shearcomp}
\end{figure*}
\subsubsection{Re-sampled image simulations}
\label{sec:resamp}
Fig.~\ref{fig:sim_data_comp} compares the measured properties of galaxies in the image simulations from \citetalias{miller/etal:2013} (thin lines) to the properties of galaxies in KiDS (thick lines). The upper panels compare the SNR distributions in bins of increasing galaxy size\footnote{In principle this comparison should be made in terms of the relative galaxy-to-PSF size, but as the KiDS and CFHTLenS imaging have similar seeing distributions we work with galaxy size in arcseconds.}, showing that the image simulations have a deficit of small galaxies. \citetalias{miller/etal:2013} concluded this arose from an overestimate of the true PSF size when creating the image simulations. Compared to the image simulations, which are a good match to the SNR distribution of the CFHTLenS data, we also see a higher proportion of low SNR galaxies in KiDS. This arises because CFHTLenS imposed a magnitude limit $i<24.7$ on their galaxy sample, based on the depth to which photometric redshifts were considered reliable. For KiDS we do not impose a similar fixed magnitude limit (see Fig.~\ref{fig:photo-z_numbercounts}), as the depth of the survey is within the limits covered by deep spectroscopic surveys.
Comparing the ellipticity distributions as a function of galaxy size (middle panels) and SNR (lower panels) in Fig.~\ref{fig:sim_data_comp}, we see an excess of simulated galaxies of large ellipticity in the high-SNR regime. As shown in \citet{viola/etal:2014} and \citet{hoekstra/etal:2015}, calibration corrections can be sensitive to the ellipticity distribution. For the purposes of the analysis of our first 100 square degrees, we re-sample the simulated galaxy catalogues from \citetalias{miller/etal:2013} such that the simulated ensemble galaxy properties match the KiDS data in terms of size, SNR and ellipticity. This is possible as the image simulations from \citetalias{miller/etal:2013} simulated two complete CFHTLenS surveys. Hence while there is a deficit of small, low SNR galaxies in the simulations, relative to the global populations, there are sufficient numbers with which to validate the calibration scheme $m$ from Eq.~\ref{eqn:mcalmod}, for KiDS, in this under-represented regime.
We sample galaxies from the image simulations, such that the correlations that exist between observed size, observed SNR and observed ellipticity in the data are retained. As \emph{lens}fit performs a joint parameter fit of galaxy ellipticity and size, selecting galaxies based on their observed size will introduce a selection bias on galaxy ellipticity. It is therefore critically important not to subject \emph{lens}fit catalogues to any `cleaning criterion', for example rejecting small galaxies based on the \emph{lens}fit size estimate. Instead we use the \emph{lens}fit weights to optimally combine the shape measurements. Following \citetalias{miller/etal:2013} we determine the accuracy of the CFHTLenS calibration correction for KiDS by calculating
\begin{equation}
\delta m = \frac{\sum_{ik} \left[1+m(\hbox{SNR}, r_\d)\right] w_{i} (\epsilon_{ik}^{\rm obs} - \epsilon_{ik}^{\rm true}) }{2\sum_i w_i} = -0.04 \pm 0.02 \, ,
\end{equation}
where the sum is taken over the simulated galaxies $i$ in the re-sampled image simulation catalogues, weighted by the observed \emph{lens}fit weights $w_i$, and calculated for both components $k$ of the ellipticity. We find that the CFHTLenS calibration correction underestimates the calibration required for KiDS by a few percent\footnote{We note an error in the calculation of Eq.~\ref{eqn:mcalmod} used in the first KiDS lensing analyses (\citealt{viola/etal:2015}; \citealt{sifon/etal:2015}; van Uitert et al., in preparation) that did not correctly account for the different MegaCam and OmegaCAM pixel scales. By luck this error erroneously increased the average value of $m$, such that the KiDS-correction $\delta m$ was reduced to $\delta m = -0.03 \pm 0.02$.}, which is within the current statistical error budget for the early science presented in \citet{viola/etal:2015}, \citet{sifon/etal:2015} and van Uitert et al. (in preparation). We also verified that this underestimate did not vary significantly as a function of galaxy SNR, as it arises from the increased fraction of small galaxies in the sample.
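The validation statistic above is a weighted average of calibrated ellipticity residuals over both components. A minimal Python sketch (hypothetical function and toy inputs; ellipticities are stored as $(\epsilon_1,\epsilon_2)$ pairs):

```python
def residual_bias(eps_obs, eps_true, weights, m_corr):
    """Weighted residual statistic of the re-sampled simulations:
    sum_{ik} (1 + m_i) w_i (eps_obs_ik - eps_true_ik) / (2 sum_i w_i),
    summed over both ellipticity components k of each galaxy i.

    eps_obs, eps_true : lists of (e1, e2) ellipticity pairs
    weights           : lensfit weights w_i
    m_corr            : calibration m(SNR, r_d) evaluated per galaxy
    """
    num = sum(w * (1.0 + m) * ((eo1 - et1) + (eo2 - et2))
              for w, m, (eo1, eo2), (et1, et2)
              in zip(weights, m_corr, eps_obs, eps_true))
    return num / (2.0 * sum(weights))

# A perfectly calibrated catalogue gives zero residual:
assert residual_bias([(0.1, -0.2)], [(0.1, -0.2)], [1.0], [-0.05]) == 0.0
```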
A new suite of KiDS image simulations are in production using the \textsc{GALSIM} software \citep{rowe/etal:2015}, in preparation for future analyses in which the larger area surveyed will demand a more accurate calibration scheme.
\subsubsection{Galaxy-galaxy lensing at different signal to noise ratio: KiDS vs. CFHTLenS}
\label{sec:mtest}
In this section we apply an additional consistency check to confirm the findings of the image simulation re-sampling analysis, using real data. We verify that the SNR dependence of the multiplicative calibration is robust by comparing galaxy-galaxy shear measurements from observations of different depths. To divorce this test from any uncertainties in photometric redshift, we define lens and source samples purely by $r$-band magnitude. We then compare the dimensionless, $m$-calibrated tangential shear profile $\gamma_{\rm t}(\theta)$ measured with KiDS and with the deeper CFHTLenS data \citepalias{erben/etal:2013}. The lens samples are selected with $20<r<21$, and four source samples are selected in half-magnitude bins from $r=22$ to $r=24$. For the brightest sources the average calibration corrections from Eq.~\ref{eqn:mcalmod} are only a few percent for both surveys, but the faintest bin includes a 14 percent calibration correction for KiDS compared to a 4 percent correction for CFHTLenS. Fig.~\ref{fig:shearcomp} shows the good agreement between the calibrated KiDS and CFHTLenS tangential shear profiles, measured between 1 and 20 arcmin, for the four different source samples. To quantify the consistency we perform a direct bin by bin comparison of the measured shears in the right-hand panel of Fig.~\ref{fig:shearcomp}. Fitting a simple proportionality relation to the points, using uncorrelated bootstrap errors, as motivated by the results of the analytical prescription described in \citet{viola/etal:2015}, we find a best-fit ratio of (KiDS/CFHTLenS)=$1.05 \pm 0.13$.
\subsection{Testing redshift scaling with galaxy-galaxy lensing}
As objects get fainter, our ability to measure shape, photometry and photometric redshifts degrades. On the other hand the fainter galaxies tend to be at higher redshifts, and therefore they experience a stronger lensing distortion. Measuring the dependence of the lensing signal with source redshift can in principle provide tight constraints on the growth of structure and geometry of the Universe. It is therefore imperative to perform a cosmology-insensitive joint test of the shear-redshift catalogue and determine whether any redshift-dependent shear bias exists. In \citetalias{heymans/etal:2012} a galaxy-galaxy lensing test of shear-redshift-scaling was designed that was found to be only very weakly sensitive to the fiducial cosmology assumed in the analysis. The mean tangential shear $\gamma_\rmn{t}$ is measured around a sample of lens galaxies for a series of source galaxies split by increasing photometric redshift, $z_{\rm B}$. We approximate the mass distribution of the galaxies in the lens sample as simple isothermal spheres with a fixed velocity dispersion $\sigma_\rmn{v}$. The predicted tangential shear around the lens sample $i$, measured from source sample $j$, is then given by
\begin{equation}
\gamma_{\rm t}^{ij} (\theta) = \frac{2\pi}{\theta} \left( \frac{\sigma_\rmn{v}}{c} \right)^2 \Big\langle \frac{D_{\rm ls}}{D_{\rm s}} \Big\rangle_{ij} \, .
\label{eq:gamasis}
\end{equation}
Here $c$ is the speed of light, and $D_\rmn{ls}/D_\rmn{s}$ is the ratio between the angular diameter distances from the lens to the source, and from the observer to the source. The average of this ratio depends on the effective redshift distribution of the lens and source sample \citep[see for example][]{bartelmann/schneider:2001}. For a fixed lens sample, we should recover consistent measurements of $\sigma_\rmn{v}$, independent of which source sample is used. Any discrepancy indicates either a poor knowledge of the photometric redshift distribution for that source sample, a redshift-dependent shear measurement bias, or a strong redshift dependence in the velocity dispersion $\sigma_\rmn{v}$ of the lenses within their foreground redshift bin.
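As a sketch, the SIS prediction of Eq.~\ref{eq:gamasis} reduces to a one-line function (illustrative implementation; velocity dispersion in km\,s$^{-1}$, angle in radians, and the distance ratio supplied as a precomputed ensemble average):

```python
import math

def gamma_t_sis(theta_rad, sigma_v, beta):
    """SIS tangential shear of Eq. (gamasis):
    gamma_t = (2 pi / theta) * (sigma_v / c)^2 * <D_ls/D_s>.

    theta_rad : angular separation in radians
    sigma_v   : lens velocity dispersion in km/s
    beta      : ensemble-averaged distance ratio <D_ls/D_s>
    """
    c = 299792.458  # speed of light in km/s
    return (2.0 * math.pi / theta_rad) * (sigma_v / c) ** 2 * beta

# The signal scales linearly with <D_ls/D_s>, so a purely foreground
# source sample (beta -> 0) predicts no tangential shear:
assert gamma_t_sis(math.radians(1.0 / 60.0), 150.0, 0.0) == 0.0
```

This linear scaling with $\langle D_{\rm ls}/D_{\rm s}\rangle$ is what allows a fixed lens sample to test the redshift calibration of the sources.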
\begin{figure}
\includegraphics[width=\hsize]{figs/shear/kidsdr2_gama_sisamp_1arcmin_blind4.pdf}
\caption{The tangential shear measured at one arcminute as a function of the average redshift of the source sample, for two samples of GAMA lenses with spectroscopic redshifts between $0.25<z_s<0.5$ (filled) and $z_s<0.25$ (open). The solid line shows the predicted signal from the best-fit SIS model, with the dashed lines showing the 68 percent confidence interval.}
\label{fig:zscaling}
\end{figure}
Fig.~\ref{fig:zscaling} shows the tangential shear determined at one arcminute, for source galaxies in seven bins of $z_\rmn{B}$ spanning $0.005<z_\rmn{B}<1.5$. Two samples of lens galaxies from GAMA were used, with spectroscopic redshifts between $0.25<z_\rmn{s}<0.5$ (filled) and $z_\rmn{s}<0.25$ (open).
The solid line connects the predicted signals from the best-fit SIS model, assuming a Planck cosmology, taking into account the full redshift posterior $p(z)$ for the sources in each bin. The amplitude of the model is set by fitting to all sources with photometric redshifts $0.2<z_{\rm B}<1.0$, which is considered to be the safest photometric redshift range based on the results presented in Fig.~\ref{fig:photo-z_stats_z}. The dashed lines show the 68 percent confidence intervals on the model amplitudes.
As expected, the signal increases as the average redshift of the source sample increases. We also see that the signal and model do not tend to zero for low $z_\rmn{B}$, even though the mean source photometric redshift is in front of the lens. This is a result of a non-zero fraction of catastrophic outliers in the photometric redshift sample that are actually at high redshift, causing a significant tangential shear signal. By taking account of the full photometric redshift posterior probability distributions of the sources, the knowledge of catastrophic outliers enters the model, generating an upturn at low source redshift (note that such low-$z_\rmn{B}$ galaxies which are actually at high redshift do not show up in the cross-correlations in Fig.~\ref{fig:photo-z_spec-z_cross} as they fall outside the GAMA redshift range). This analysis shows that, within the current SNR of the measurement, our shear-redshift catalogue is not subject to significant redshift-dependent shear biases.
\subsection{Field Selection for cosmic shear test}
\label{sec:passfail}
\citetalias{heymans/etal:2012} describe a method to identify observations with significant residual contamination of the galaxy shapes by the PSF. It involves comparing the correlation between galaxy and PSF shape, measured in the data and with mock catalogues. As a result 25 percent of the CFHTLenS tiles were flagged as unsuitable for cosmic shear science; nonetheless these data could be retained for the galaxy-galaxy lensing analyses as the azimuthal averaging renders the measurement essentially insensitive to additive PSF errors. We follow CFHTLenS in not applying field selection for our first series of galaxy-galaxy lensing science papers, but repeat the \citetalias{heymans/etal:2012} analysis on KiDS in order to assess its future competitiveness for cosmic shear science. We summarize the key steps of the analysis, and refer the reader to \citetalias{heymans/etal:2012} for a detailed description.
The ellipticity estimate for each source can be written as
\begin{equation}
\epsilon^\rmn{obs}=\epsilon^\rmn{int}+\gamma+\eta+A_{\rmn{sys},i}\epsilon^i_{\rm PSF} \, ,
\end{equation}
where $\epsilon^\rmn{int}$ is the intrinsic galaxy ellipticity, $\gamma$ is the true cosmological shear that we wish to detect, and $\eta$ is the random noise on the shear measurement whose amplitude depends on the size and shape of the galaxy in addition to the SNR of the observations. The final term reflects residual amounts of PSF contamination from the various sub-exposures $i$ that `print through' to the final galaxy ellipticities. Even though the coefficients $A_{\rmn{sys},i}$ should be very small for good shape measurement pipelines, this term can generate significant coherent correlations when the shapes of many galaxies on the same tile are averaged.
From a set of $N$ sub-exposures of a part of the sky ($N=5$ in the case of KiDS $r$-band data) \citetalias{heymans/etal:2012} define a vector of star-galaxy cross-correlation coefficients $\bxi_\rmn{sg}$, with one element per sub-exposure:
\begin{equation}
\bxi_\rmn{sg} =
\langle \epsilon^\rmn{obs} \bepsilon^*_\rmn{PSF} \rangle =
\langle \epsilon^\rmn{int} \bepsilon^*_\rmn{PSF} \rangle +
\langle \gamma \, \bepsilon^*_\rmn{PSF} \rangle +
\langle \eta \, \bepsilon^*_\rmn{PSF} \rangle +
\mat{C} \vec{A}_{\rm sys} \, ,
\label{eqn:sg}
\end{equation}
where the average is taken over all galaxies in the pointing.
Here $\bepsilon_\rmn{PSF}$ is a vector of PSF ellipticity patterns, one per sub-exposure, determined from the PSF model at the locations of the source galaxies in each sub-exposure. $\mat{C}$ is a matrix whose elements $C_{ij} = \langle \epsilon_{\rm PSF}^i {\epsilon_{\rm PSF}^j}^* \rangle$ give the average covariance of PSF ellipticities between the sub-exposures. The complex conjugate of the ellipticity is denoted with a $*$, and only the real part of the averages in Eq.~\ref{eqn:sg} is kept (as in Eq.~\ref{eqn:xi_res}). We have assumed that $\vec{A}_\rmn{sys}$ does not vary across the field of view.
For a sufficiently wide area, the first three terms of Eq.~\ref{eqn:sg} average to zero, in which case $\vec{A}_\rmn{sys}=\mat{C}^{-1}\bxi_\rmn{sg}$. The contribution of this systematic ellipticity error to the two-point shear correlation function, $\langle \epsilon^\rmn{obs} {\epsilon^\rmn{obs}}^* \rangle$, is then given by
\begin{equation}
\Delta \xi_\rmn{obs} = \bxi_\rmn{sg}^\mat{T} \mat{C}^{-1} \bxi_\rmn{sg} \, .
\label{eqn:deltaeobs}
\end{equation}
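Eq.~\ref{eqn:deltaeobs} is a quadratic form in the star-galaxy cross-correlation vector. A minimal Python sketch for $N=2$ sub-exposures (illustrative only; the general $N$ case replaces the hand-written $2\times2$ inverse with a linear solve):

```python
def delta_xi_obs(xi_sg, cov):
    """PSF-contamination statistic of Eq. (deltaeobs),
    Delta xi_obs = xi_sg^T C^{-1} xi_sg, sketched for N=2 sub-exposures.

    xi_sg : length-2 vector of star-galaxy cross-correlation coefficients
    cov   : 2x2 covariance C of PSF ellipticities between sub-exposures
    """
    (a, b), (c, d) = cov
    det = a * d - b * c
    # Hand-written inverse of the 2x2 covariance matrix
    inv00, inv01 = d / det, -b / det
    inv10, inv11 = -c / det, a / det
    v0 = inv00 * xi_sg[0] + inv01 * xi_sg[1]
    v1 = inv10 * xi_sg[0] + inv11 * xi_sg[1]
    return xi_sg[0] * v0 + xi_sg[1] * v1

# With an identity covariance the statistic reduces to |xi_sg|^2:
assert abs(delta_xi_obs([0.01, 0.02], [[1.0, 0.0], [0.0, 1.0]]) - 5.0e-4) < 1e-15
```

For a positive-definite $\mat{C}$ the statistic is non-negative, which is why even systematics-free noise produces a non-zero expectation value.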
We wish to use $\bxi_\rmn{sg}$ as a diagnostic with which to identify those tiles where, for whatever reason, the PSF modelling has left significant residuals that would contaminate the shear-shear correlation function. The KiDS data are taken in square-degree tiles, and on these scales the measurement of $\bxi_\rmn{sg}$ will have contributions from the first three noise terms in Eq.~\ref{eqn:sg} through chance alignments between the different noise, PSF and cosmic shear fields. We therefore estimate the expected amplitude of $\Delta \xi_\rmn{obs}$, a positive quantity, from a series of 184 simulated KiDS data sets each containing 109 systematics-free one-square degree mock catalogues. These mock catalogues are populated to match the intrinsic ellipticity and measurement noise in the data. A correlated cosmic shear signal is also added, drawn from the $N$-body simulations of \citet{Harnois-Deraps/etal:2012}, following the effective galaxy redshift distribution $n(z)$ of KiDS shown in Fig.~\ref{fig:nofz}. Fig.~\ref{fig:pu_test} shows the distribution of $\sum (\Delta \xi_{\rm obs})$, where the sum is taken over all 109 mock fields, for the 184 different mock realizations of KiDS. The dashed line shows the result we would have obtained if the mock catalogues had contained a cosmic shear signal only, to emphasize that the two-point star-galaxy cross correlation function will be non-zero even in the absence of ellipticity noise. We then measure the average star-galaxy cross correlation coefficient for each field observed, with the result summed over all fields shown as the hashed rectangle in the upper panel. The difference between the expected result from the mock simulations and the data shows that some fields do indeed contain strong PSF residuals. To isolate these fields we determine a probability $p$ for each field that $\Delta \xi_{\rm obs}$ is consistent with zero systematics (see \citetalias{heymans/etal:2012} for details). 
We then set a threshold on this probability such that the data (shown hashed) match the expected distribution from the simulations, a requirement met when $p>0.11$. We find that this procedure rejects only 4 of our fields (3.7 percent, cf.~25 percent for CFHTLenS), suggesting that the PSF modelling in KiDS is of a significantly higher quality than in CFHTLenS, as could have been expected owing to the clean OmegaCAM PSF.
\begin{figure}
\putfig{figs/shear/delta_xi_sys_blinding4pass2.pdf}
\caption{Field selection based on the degree of correlation between the PSF ellipticity pattern and the galaxy ellipticities as compared to simulated data. See the text for the definition of $\sum(\Delta\xi_\rmn{obs})$, which quantifies the degree of residual PSF contamination in measurements of the two-point shear correlation function. The histogram shows the expected range of this statistic in simulations, and the hashed region indicates the measured value $\pm$ the 1-$\sigma$ bootstrap error. For comparison the dashed histogram shows the expected range for shape-noise free simulations. Top: all 109 KiDS fields. Bottom: result of field selection (see text for details).}
\label{fig:pu_test}
\end{figure}
\subsection{Additive calibration correction}
\label{sec:addcalcor}
For the data that passed our field selection (\S\ref{sec:passfail}) we measure the average weighted ellipticity components $\langle \epsilon_{1,2} \rangle$. For a KiDS-size survey, in the absence of systematic error, these should be consistent with zero. As with the analysis of CFHTLenS \citepalias{heymans/etal:2012}, we find a small residual shear signal in KiDS at the level of $\sim 10^{-3}$ (shown in Fig.~\ref{fig:c_corr_before_after}). The dependence on galaxy size and SNR is different, though. In CFHTLenS small, high-SNR galaxies were found to be the dominant source of the residual signal in $\langle \epsilon_2 \rangle$ whereas $\langle \epsilon_1 \rangle$ was consistent with zero; for KiDS, instead, we find that the lowest SNR galaxies dominate the residual, which is stronger in $\langle \epsilon_1 \rangle$. In addition we see a strong dependence of $\langle \epsilon_1 \rangle$ on the Strehl ratio (defined here as the fraction of light in the PSF model that falls into the central pixel), which could be a sign of error due to undersampling of the PSF. Indeed, with typical pixel-to-seeing ratios of 0.25 for CFHTLenS and 0.3 for KiDS, we expect KiDS to be more prone to such errors. Future analyses of KiDS will therefore include a PSF modelling method that correctly accounts for the under-sampling (Miller et al. in prep). For this first release, however, we follow the CFHTLenS strategy of calibrating and removing this small systematic effect empirically. Note that the first lensing analyses are based on tangential shear averages and are therefore not affected by such additive errors as long as the analysis is not affected by the survey boundaries: for the current data set we see no sign of additive effects out to projected radii of $2h^{-1}\rmn{Mpc}$ \citep{viola/etal:2015}.
\begin{figure}
\putfig{figs/shear/pre_post_cor_e1_e2_in_bins_blind_4.pdf}
\caption{The weighted mean ellipticity components $\langle \epsilon_1 \rangle$ (left) and $\langle \epsilon_2 \rangle$ (right), as a function of PSF Strehl ratio (upper), galaxy size (middle) and galaxy SNR (lower). The points are shown before (open symbols) and after (closed symbols) the empirical calibration has been applied, with the latter offset horizontally for clarity.}
\label{fig:c_corr_before_after}
\end{figure}
Using all the data that passed the field selection in \S\ref{sec:passfail}, we bin the data in three dimensions with six bins in size and SNR, and three bins in Strehl ratio, and fit a 3D second-order polynomial model to the bins\footnote{For our first set of galaxy-galaxy lensing papers, an earlier version of the additive correction was applied that used a third-order polynomial fit to a 3D binning with ten bins on each axis. On further inspection this sub-optimal set-up was discovered to introduce a low level of spurious noise into the shape measurement. As the shear-position correlations were found to be insensitive to the additive correction we only updated the additive calibration for the cosmological analysis demonstration in this paper.}. Fig.~\ref{fig:c_corr} presents example slices from the data cube and the model fit.
Applying the $c$-correction to the shear catalogue changes the one-point statistics $\left\langle (\epsilon_1,\epsilon_2)\right\rangle$ from $(-0.0015,-0.0002)$ to $(0.0004,0.0004)$, with a 1-$\sigma$ uncertainty of 0.0003. This is sufficiently small that it will not impact the measurement of the two-point shear correlation function presented in \S~\ref{sec:cosmicshear}. This level of residual shear will however impact future degree-scale cosmological shear measurements, requiring improvements in the calibration scheme for future data releases.
\begin{figure}
\putfig{figs/shear/example_c_correction_iblind_4.pdf}
\caption{The measured dependence of $\langle \epsilon_1 \rangle$ (left) and $\langle \epsilon_2 \rangle$ (right) as a function of SNR, for three different size bins (panels upper to lower $r = 0.6\arcsec, 0.3\arcsec, 0.2\arcsec$), and two different Strehl ratio bins with Strehl $=0.05$ (open symbols) and Strehl $=0.1$ (closed symbols). The corresponding best-fitting models are shown as solid (Strehl $=0.05$) and dashed (Strehl $=0.1$) lines. }
\label{fig:c_corr}
\end{figure}
\section{Cosmic shear measurement}
\label{sec:cosmicshear}
The measurement of weak gravitational lensing by large-scale structure, often referred to as `cosmic shear', has the ability to set tight constraints on both standard cosmological parameters \citep[see for example][and references therein]{heymans/etal:2013}, and a range of modified gravity scenarios \citep{simpson/etal:2013,planckXIV:2015}. While the amount of data analysed in this paper represents less than 10 percent of the final KiDS area, in this section we argue that the data quality is at the level that the full survey will indeed provide high-fidelity cosmic shear measurements. It also provides a practical demonstration of our blinding scheme, which has been designed to counter user confirmation bias in future KiDS cosmic shear analyses.
\subsection{Blinding the KiDS weak lensing catalogues}
\label{sec:blinding}
In the post-Planck precision cosmology era, one of the challenges facing new cosmological observations is confirmation bias \citep[e.g.,][]{croft/dailey:2011}. Many new surveys are therefore following the approach, particularly favoured by the particle physics community, of performing a `blind' analysis. The first stage of such an analysis is the verification and validation of software packages through the analysis of mock simulated data. The KiDS $N$-body simulations span 30,000 square degrees with a WMAP9 cosmology \citep{hinshaw/etal:2013}, and are an extension of the suite of lensing simulations described in \citet{Harnois-Deraps/VanWaerbeke:2015}.
With these simulations we can verify the analysis methods for galaxy-galaxy lensing, galaxy-cluster lensing and tomographic cosmology, and also determine covariance matrices for the analysis of the data.
This mock data strategy does not prevent confirmation bias in the analysis of the real data, where potentially unknown sources of systematic error increase the complexity of the analysis. For example, choices are currently made about which sub-exposures or pointings to excise from the analysis based on the outcome of a range of systematic tests on the shear measured in these regions. Choices are also made as to which length scales to include in the analysis of correlation functions or power spectra, which binning to use, and which photometric redshift ranges to trust. It is therefore important to build blinding into our data analysis such that these choices are informed purely through scientific rationale, and not influenced by the results of independent experiments.
An example of an early blind cosmological data analysis is \citet{davis/etal:2007} where the analysis team was given supernova data in which the redshifts had been stretched. This strategy of manipulating the data with a small multiplicative perturbation has also been used by other groups, but has the drawback that when the data are finally unblinded, the analysis has to be re-run. This potentially allows for low-level adjustments in the re-analysis, for example choosing which scales to include. We have therefore designed an alternative blinding scheme that prevents this, by ensuring that the true data are analysed along with the perturbed versions.
All KiDS weak lensing catalogues analysed contain four sets of ellipticity data: the true data, and three versions that have been manipulated by an unknown amount. Specifically, the magnitudes of the ellipticities in column $A=1,2,3,4$ of the catalogues are `curved' with a function
\begin{equation}
\epsilon_{\rmn A} = \epsilon \left(\rmn{e}^{k_\rmn{A}[1-(\epsilon/\epsilon_\rmn{max})^2]^2}\right)
\label{eq:blind}
\end{equation}
parametrized by a single value $k_\rmn{A}$ such that $\epsilon_\rmn{max}$, the maximum ellipticity in the catalogues, is left invariant under this remapping. The values $\lbrace k_\rmn{A}\rbrace$ are unknown, except that for one of them, the true data, $k_\rmn{A}$ is equal to zero. The differences between the $k_\rmn{A}$ can easily be reconstructed by dividing the shear columns, but this provides no information as to which column contains the true ellipticities. The values of $k_\rmn{A}$ were limited to $|k_\rmn{A}|<0.2$, in order to satisfy two conditions. On the one hand, the effect of the transformation should be sufficiently large that it effectively blinds KiDS to confirmation bias with CMB measurements from Planck, by changing the results up to $\sim10\sigma$ in terms of the Planck error on the amplitude of the matter density power spectrum \citep{planckXIII:2015}. At the same time Eq.~\ref{eq:blind} must not distort the lensing values to such an extent that it creates suspicious effects in galaxy-galaxy lensing, ellipticity distributions, SNR or redshift scaling.
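The remapping of Eq.~\ref{eq:blind} can be sketched as follows (illustrative implementation; the $k_\rmn{A}$ and $\epsilon_\rmn{max}$ values here are placeholders, not the secret survey values):

```python
import math

def blind(eps, k, eps_max=1.0):
    """Ellipticity blinding remap of Eq. (blind):
    eps_A = eps * exp(k * (1 - (eps/eps_max)^2)^2)."""
    return eps * math.exp(k * (1.0 - (eps / eps_max) ** 2) ** 2)

# eps_max is invariant under the remapping, and k = 0 returns the true data:
assert abs(blind(1.0, 0.2) - 1.0) < 1e-12
assert blind(0.3, 0.0) == 0.3
```

Intermediate ellipticities are stretched or squeezed by up to $\rmn{e}^{|k_\rmn{A}|}$, which is what perturbs the inferred clustering amplitude without producing obviously unphysical $|\epsilon|>\epsilon_\rmn{max}$ values.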
We asked a trusted colleague, external to the team, to set the values of $k_\rmn{A}$ through a \textsc{Python} executable that takes the original lensing catalogues output from \emph{lens}fit (see \S\ref{sec:shapes}), manipulates the ellipticity columns, according to Eq.~\ref{eq:blind}, and outputs a new catalogue with the additional blind columns inserted in an order unknown to any member of the KiDS team.
The team members agreed that they would not wilfully unblind themselves by attempting to back-track the data manipulation to discover which column contains the original data. All analyses are carried out on all four sets of columns, including systematics tests, empirical corrections and covariance matrix estimation. Different fields may pass or fail the systematics tests in different blinded columns and this has been taken into account in the final analysis. Even though this setup incurred a factor of four increase in the computational analysis time, we felt this was a necessary step to make, whilst also encouraging the good practice of creating, verifying and validating `press-of-the-button' end-to-end analysis scripts. In order to allow for phased unblinding, team members add an additional individual layer of blinding by not labelling their results with the blinded column number used. Pre-publication, our results were sent to our external colleague, who provided the blinding key and can verify that the results presented in both this paper and our first scientific analyses \citep{viola/etal:2015, sifon/etal:2015} were not changed after it was revealed to the authors which column contained the true shear. We show an example of the blinding scheme in action in the next section, where we present the cosmic shear measurement from the four blinded shear measurements.
Thus far our blinding is limited to the shape measurements only and future blinding will also include manipulation of galaxy weights and potentially photometric redshifts, stellar masses and galaxy luminosities. As our first analysis covers less than 10 per cent of the final KiDS area, the blinding described here had only a small effect on the early science results presented in the accompanying \citet{viola/etal:2015} and \citet{sifon/etal:2015} papers. We agreed however, that it was important to implement this blinding scheme from the beginning, in order to learn from this `dry run' in preparation for the future larger-area KiDS cosmological analyses.
\subsection{Second order weak lensing statistics}
To detect weak lensing by large-scale structures and extract cosmological parameter constraints and information about systematics from the data, a wide
range of different two-point statistics have been proposed \citep[see][for a comprehensive discussion of the relationship between these statistics]{SchvWKM02,COSEBIS}. These real-space statistics all derive from the observed angular two-point correlation function $\hat{\xi}_{\pm}$ which can be estimated from the data as follows:
\begin{equation}
\hat{\xi}_{\pm}(\theta) = \frac{\sum_\theta w_a w_b \left[ \epsilon_\rmn{t} (\vec{x}_a) \epsilon_\rmn{t} (\vec{x}_b) \, \pm \, \epsilon_\times (\vec{x}_a) \epsilon_\times (\vec{x}_b) \right]}{\sum_\theta w_a w_b } \, .
\label{eqn:xipm_est}
\end{equation}
Using inverse variance weights $w$, the sum is taken over pairs of galaxies with angular separation $|\vec{x}_a - \vec{x}_b|=\theta \pm \Delta \theta /2 $, where $\Delta \theta$ is the width of the bin\footnote{Note that the final reported angular scale of the bin should not be the mid-point of angular range selected, but the weighted average separation of the galaxy pairs in that bin.}. The tangential and cross components of the ellipticities $\epsilon_{\rmn{t},\times}$ are measured with respect to the vector joining each pair of correlated objects \citep{bartelmann/schneider:2001}.
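A brute-force, flat-sky sketch of the estimator in Eq.~\ref{eqn:xipm_est} for a single angular bin (illustrative only; production analyses use optimized tree-based pair finding, and the bin is assumed to contain at least one pair):

```python
import math

def xi_pm(positions, e1, e2, w, theta_min, theta_max):
    """Weighted shear correlation estimator of Eq. (xipm_est), flat sky,
    for one angular bin [theta_min, theta_max) in the units of `positions`.

    Tangential/cross components are taken with respect to the vector
    joining each pair, using eps_t = -Re(eps e^{-2i phi}),
    eps_x = -Im(eps e^{-2i phi}).
    """
    num_p = num_m = den = 0.0
    n = len(positions)
    for a in range(n):
        for b in range(a + 1, n):
            dx = positions[b][0] - positions[a][0]
            dy = positions[b][1] - positions[a][1]
            theta = math.hypot(dx, dy)
            if not (theta_min <= theta < theta_max):
                continue
            phi = math.atan2(dy, dx)  # position angle of the pair
            c2, s2 = math.cos(2 * phi), math.sin(2 * phi)
            et_a = -(e1[a] * c2 + e2[a] * s2)
            ex_a = e1[a] * s2 - e2[a] * c2
            et_b = -(e1[b] * c2 + e2[b] * s2)
            ex_b = e1[b] * s2 - e2[b] * c2
            ww = w[a] * w[b]
            num_p += ww * (et_a * et_b + ex_a * ex_b)
            num_m += ww * (et_a * et_b - ex_a * ex_b)
            den += ww
    return num_p / den, num_m / den
```

For example, a pair of galaxies aligned tangentially to their separation vector contributes positively to both $\hat{\xi}_+$ and $\hat{\xi}_-$, while a purely cross-aligned pair contributes positively to $\hat{\xi}_+$ and negatively to $\hat{\xi}_-$.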
Weak gravitational lensing produces curl-free gradient distortions (E-mode), and contributes to curl distortions (B-mode) only at small angular scales, $\theta < 1$ arcmin, mainly through source redshift clustering \citep{schneider/etal:2002}. Decomposing the weak lensing signal into E and B modes therefore provides a method with which to gauge the contribution to the overall shear correlation signal from non-lensing sources. These could arise from residual systematics in the shape measurement method, or from the intrinsic alignment of nearby galaxies \citep[see][and references therein]{troxel/ishak:2015}.
\citet{Crittenden/etal:2002} show that the shear correlation functions, estimated in Eq.~\ref{eqn:xipm_est}, can be decomposed
into the E- and B-type correlators
\begin{equation}
\xi_\rmn{E}(\theta)=\frac{\xi_+(\theta)+\xi'(\theta)}{2}
\qquad\hbox{and}\qquad
\xi_\rmn{B}(\theta)=\frac{\xi_+(\theta)-\xi'(\theta)}{2}
\, ,
\label{eqn:xieb}
\end{equation}
where
\begin{equation}
\xi'(\theta)=\xi_-(\theta)+4\int_\theta^\infty \frac{\d\vartheta}{\vartheta} \xi_-(\vartheta)
-12\theta^2 \int_\theta^\infty \frac{\d\vartheta}{\vartheta^3}\xi_-(\vartheta)\, .
\label{eqn:xipr}
\end{equation}
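Given measurements of $\xi_+$ and the integral combination $\xi'$, Eq.~\ref{eqn:xieb} is then a trivial linear transform (sketch; for systematics-free lensing data $\xi'=\xi_+$ and the B-mode vanishes):

```python
def eb_decompose(xi_plus, xi_prime):
    """E/B decomposition of Eq. (xieb):
    xi_E = (xi_+ + xi') / 2,  xi_B = (xi_+ - xi') / 2."""
    xi_e = 0.5 * (xi_plus + xi_prime)
    xi_b = 0.5 * (xi_plus - xi_prime)
    return xi_e, xi_b

# Pure lensing (xi' = xi_+) gives a vanishing B-mode:
assert eb_decompose(2.0e-4, 2.0e-4) == (2.0e-4, 0.0)
```

In practice the infinite upper limits of the $\xi'$ integrals must be truncated or modelled, which introduces a small cosmology-dependent correction on the largest scales.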
The measured E-mode $\xi_\rmn{E}(\theta)$ is related to the underlying non-linear matter power spectrum $P_\delta$ that we wish to probe, via
\begin{equation}
\xi_\pm(\theta) = \frac{1}{2\pi}\int \d\ell \,\ell \,P_\kappa(\ell) \, J_{0,4}(\ell \theta) \, ,
\label{eqn:xiGG}
\end{equation}
where $J_{0,4} (\ell \theta)$ is the zeroth (for $\xi_+$) or fourth (for $\xi_- $) order Bessel function of the first kind. $P_\kappa(\ell)$ is the convergence power spectrum at angular wave number $\ell$
\begin{equation}} \newcommand{\ee}{\end{equation}
P_\kappa(\ell) = \int_0^{w_{\rm H}} \d w \,
\frac{q(w)^2}{a(w)^2} \, P_\delta \left( \frac{\ell}{f_K(w)},w \right),
\label{eqn:Pkappa}
\ee
where $a(w)$ is the dimensionless scale factor corresponding to the comoving radial distance $w$, and $w_H$ is the horizon distance. The lensing efficiency function $q(w)$ is given by
\begin{equation}} \newcommand{\ee}{\end{equation}
q(w) = \frac{3 H_0^2 \Omega_{\rm m}}{2c^2} \int_w^{w_{\rm H}}\, \d w'\ n(w')
\frac{f_K(w'-w)}{f_K(w')},
\label{eqn:qk}
\ee
where $n(w)\d w$ is the effective number of galaxies in $\d w$, normalized so that $\int n(w)\d w = 1$. $f_K(w)$ is the angular diameter distance out to comoving radial distance $w$,
$H_0$ is the Hubble parameter and $\Omega_\rmn{m}$ the matter density parameter at $z=0$. For more details see \citet{bartelmann/schneider:2001} and references therein.
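The chain of Eqs.~\ref{eqn:xiGG}--\ref{eqn:qk} can be traced numerically. The sketch below assumes a toy power-law $P_\delta$, a crude stand-in for $a(w)$, a Gaussian $n(w)$ and flat geometry $f_K(w)=w$; every number is illustrative and none corresponds to the KiDS model.

```python
import numpy as np

def trap(f, x):
    # simple trapezoid rule (avoids version-dependent numpy helpers)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))) if len(x) > 1 else 0.0

c, H0, Om = 299792.458, 100.0, 0.3       # km/s, km/s/(Mpc/h); illustrative
w_H = 9000.0                              # toy horizon distance, Mpc/h
w = np.linspace(1.0, 4000.0, 800)
n_w = np.exp(-0.5 * ((w - 1500.0) / 400.0) ** 2)
n_w /= trap(n_w, w)                       # normalise: int n(w) dw = 1

a_w = (1.0 - w / w_H) ** 2                # crude stand-in for a(w); toy only

def P_delta(k):                           # toy power spectrum, not a fit
    return 1e4 * k / (1.0 + (k / 0.02) ** 3)

def q_of_w(wi):                           # lensing efficiency, Eq. (qk), f_K(w) = w
    m = w >= wi
    return 1.5 * (H0 / c) ** 2 * Om * trap(n_w[m] * (w[m] - wi) / w[m], w[m])

q = np.array([q_of_w(wi) for wi in w])

def P_kappa(ell):                         # Limber integral, Eq. (Pkappa)
    return trap(q ** 2 / a_w ** 2 * P_delta(ell / w), w)
```

The remaining step to $\xi_\pm(\theta)$ is the Bessel transform of Eq.~\ref{eqn:xiGG}, performed the same way with $J_{0,4}$ kernels.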
\begin{figure}
\putfig{figs/shear/comp_sys_xi_EB_blinding4.pdf}
\caption{Comparison of the E-type (upper) and B-type (lower) shear correlation functions measured using all the data (dashed); after the application of the field selection (open points); and after the application of both field selection and the additive calibration correction (solid). Without these two corrections the B-mode, which is an indicator of non-lensing systematic errors, becomes significantly non-zero on large scales. Note that the B-mode vertical axis has been multiplied by $\theta$ (in arcminutes) in order to emphasize the differences from a zero signal.}
\label{fig:xi_EB}
\end{figure}
\begin{figure}
\putfig{figs/shear/comp_blinding_theory.pdf}
\caption{The E-type shear correlation functions from the 105 tiles of KiDS data that pass the PSF systematics tests in \S\ref{sec:passfail}. Measurements from all four blindings are shown, with the true shear measurement indicated with an error bar. For comparison the E-mode signal expected from three different $\Lambda$CDM cosmological models are shown; Planck cosmology using the TT spectra (dashed), and EE spectra (dotted) along with the best-fit CFHTLenS result (solid). Note that the vertical axis has been multiplied by $\theta$ (in arcminutes) in order to improve the visualisation by enhancing the differences.}
\label{fig:xi_E_theory}
\end{figure}
\subsection{KiDS shear correlation data and survey parameters}
\label{sec:shear_corr}
Fig.~\ref{fig:xi_EB} presents the derived E- and B-type shear correlation functions, from Eq.~\ref{eqn:xieb}. These were calculated following the method in \citet{Pen/etal:2002}, using 4000 finely binned measurements of the shear correlation function $\xi_{\pm}(\theta)$ spanning $9\arcsec< \theta<4^\circ$ in equal bins of $\log \theta$. As our data extend over many degrees, but not to infinity, we use a fiducial cosmological model to determine the integrand in Eq.~\ref{eqn:xipr}, splitting the
integrals into two. The first is calculated from the observations directly, extending from $\theta$ to $\theta_{\rm max}$ where $\theta_{\rm max}= 4^\circ$. The second extends from $\theta_{\rm max}$ to $\infty$ and is calculated by inserting $\xi_-(\theta)$ calculated from Eq.~\ref{eqn:xiGG} assuming the KiDS redshift distribution and the best-fit Planck cosmology \citep{planckXIII:2015}.
This model dependent part of the integrand sums to $\sim 10^{-7}$ for the three cosmological models that are compared in Fig.~\ref{fig:xi_E_theory}. This model dependence prevents cosmological parameter estimation directly from the E-mode signal. The analysis is still a valid diagnostic test for residual systematics, however, as the model-dependent addition to Eq.~\ref{eqn:xipr} is less than 10 percent of the total signal on the largest angular scales probed. The errors are estimated following \citet{Pen/etal:2002}, treating each noisy finely binned raw shear correlation measurement as uncorrelated with the others. We then propagate these uncorrelated errors through to a final correlated error on the coarsely binned E- and B-type shear correlation functions. This approximation is sufficient for this diagnostic test as the current KiDS area is relatively small such that for the majority of scales the data are shot-noise dominated.
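The split-integral procedure just described can be sketched as follows, with stand-in $\xi_\pm$ curves rather than the KiDS measurements, and with the model tail beyond $\theta_{\rm max}$ set to zero rather than taken from a fiducial cosmology:

```python
import numpy as np

def trap(f, x):
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))) if len(x) > 1 else 0.0

# 4000 log-spaced bins from 9 arcsec (0.15') to 4 deg (240'), as in the text
theta = np.logspace(np.log10(0.15), np.log10(240.0), 4000)   # arcmin
xi_p = 1e-4 / (1.0 + theta)          # stand-in data, not real measurements
xi_m = 5e-5 / (1.0 + theta) ** 2

def xi_prime(i):
    # Eq. (xipr) truncated at theta_max; the paper adds a model tail here
    v, xm = theta[i:], xi_m[i:]
    return xi_m[i] + 4.0 * trap(xm / v, v) - 12.0 * theta[i] ** 2 * trap(xm / v ** 3, v)

xi_pr = np.array([xi_prime(i) for i in range(theta.size)])
xi_E = 0.5 * (xi_p + xi_pr)          # Eq. (xieb)
xi_B = 0.5 * (xi_p - xi_pr)
```

With stand-in inputs the B-mode is of course not expected to vanish; the sketch only shows the mechanics of the decomposition.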
Focussing first on the measured E mode presented in the upper panel of Fig.~\ref{fig:xi_EB}, the small effect of removing the 4 percent of fields that failed the selection stage (\S\ref{sec:passfail}) can be seen, as well as the result of subsequent application of the additive calibration correction (\S\ref{sec:addcalcor}). The impact of this two-step calibration can also be seen in the B-mode signal (lower panel), which is consistent with zero on all scales, demonstrating excellent control of systematic errors in shape measurement with KiDS. Without the field selection or additive ellipticity corrections, however, we find a significant B-mode signal on scales $\theta>10'$. In preparation for future releases we are currently implementing a number of improvements to the data reduction pipeline, PSF modelling, and shape measurement analysis, designed to reduce the significance of the calibration corrections on our analysis.
To illustrate how the implemented blinding scheme modified the results, Fig.~\ref{fig:xi_E_theory} compares the E-mode measured from all four blindings, with the true shear measurement indicated as the data point with Poisson error bars. For comparison the E-mode signals expected from a range of $\Lambda$CDM cosmological models are also shown, using the effective weighted redshift distribution shown in Fig.~\ref{fig:nofz}, as estimated from the weighted sum of the photometric redshift probability distributions $p(z)$. The three cosmological models use the Planck results from table 3 of \citet{planckXIII:2015} showing the difference between the cosmology fit to the TT spectra (dashed) and EE spectra (dotted) along with the best-fit CFHTLenS result (solid) from \citet{kilbinger/etal:2013}.
\section{Conclusions}
\label{sec:conclude}
In this paper we present the first lensing analysis of the Kilo-Degree Survey (KiDS) data obtained at the VLT Survey Telescope (VST) at ESO's Paranal Observatory. KiDS is a multi-band survey specifically designed for weak lensing tomography, that takes advantage of the very good image quality at the VST. A particular advantage of the VST, where the camera operates at an f/5 Cassegrain focus, compared to much faster wide-field prime-focus cameras, is the simplicity and generally low amplitude of the ellipticity pattern, as well as the uniformity of the size of the point spread function (PSF) over the full field of view.
The KiDS lensing analysis draws heavily on heritage from the CFHTLenS project \citep{heymans/etal:2012}, in particular in the use of \textsc{Theli} \citep{erben/etal:2013} and \emph{lens}fit \citep{miller/etal:2013} for measuring galaxy shapes (\S\ref{sec:shapes}), and \textsc{bpz} \citep{benitez:2000} for photometric redshifts \citep{hildebrandt/etal:2012}. As input for the photometric redshifts, aperture-matched colours are derived from PSF Gaussianization of the public data release of the \textsc{Astro-WISE} reduction of the KiDS images \citep{dejong/etal:2015}, and subsequent Gaussian Aperture and PSF (\textsc{GAaP}) photometry. This procedure, which was developed specifically for KiDS, is described in detail in \S\ref{sec:photom} and Appendix~\ref{app:gaap}. The resulting shear/photometric redshift catalogues are available to the community (Appendix~\ref{app:data}), and form the basis of three companion scientific analyses (\citealt{sifon/etal:2015}; \citealt{viola/etal:2015}; van Uitert et al., in preparation) that exploit the overlap of these data with the GAMA spectroscopic survey \citep{driver/etal:2011}. The KiDS lensing catalogues contain 8.88 galaxies per square arcminute with non-zero lensing weight, cover an unmasked area of 75 square degrees, and provide an inverse shear variance of 69 per square arcminute. The median redshift of the summed posterior photometric redshift distributions of the galaxies, accounting for the \emph{lens}fit weight, is 0.53.
Considerable attention was paid to quantifying and correcting the lensing estimates for additive and multiplicative bias. In order to validate the galaxy ellipticities, we carried out extensive tests (\S\ref{sec:sys}). All indications are that the data are indeed `lensing-quality.' For example, the degree of star-galaxy shape correlation in the KiDS data is essentially consistent with the expectations from realistic simulated cosmic shear fields, with just 4 percent of the tiles falling outside expected parameter ranges, and the amplitude of galaxy-galaxy lensing around magnitude-limited foreground lenses scales in the same way as it did in CFHTLenS even though the depths of the surveys differ. Taking advantage of the GAMA overlap, we also tested the way the tangential shear around galaxies at known (spectroscopic) redshift scales with the (photometric) redshift of the sources. Also here we recover the expected dependence, which gives us confidence in both the photometric redshifts and the shears we measure.
Finally, in \S\ref{sec:cosmicshear} we present a first measurement of the cosmic shear correlation function from these data. Though admittedly still noisy, the results are consistent with previous measurements, and show negligible B-mode signal, demonstrating the high fidelity of the KiDS lensing data.
KiDS observations continue at the VST, and as the area of the survey grows more refined cosmological lensing measurements will follow.
\section*{Acknowledgments}
We are grateful to Matthias Bartelmann for being our external blinder, revealing which of the four catalogues analysed was the true unblinded catalogue at the end of this study, to Giovanni Covone and Mattia Vaccari for providing the VOICE data, to all the members of the KiDS weak lensing team who supported this work, and to the GAMA team for their spectroscopic catalogues.
We also thank Mike Jarvis and Martin Kilbinger for \textsc{corr2} and \textsc{athena}, the correlation function measurement software used in this analysis. We acknowledge support from the European Research Council under FP7 grant number 279396 (MV,MC,CS,RH,ME,H.Ho) and 240185 (AC and CH). EvU acknowledges support from an STFC Ernest Rutherford Research Grant, grant reference ST/L00285X/1. RN and EvU acknowledge support from the German Federal Ministry for Economic Affairs and Energy (BMWi) provided via DLR under project no.50QE1103. HHi is supported by the DFG Emmy Noether grant Hi 1495/2-1.
JHD and LvW are funded by the NSERC of Canada, and LvW by CIfAR. TDK is supported by a Royal Society URF.
CB acknowledges the support of the Australian Research Council through the award of a Future Fellowship. This work is supported by the Netherlands Organisation for Scientific Research (NWO) through grants 614.001.103 and 614.061.610, by the Dutch Research School for Astronomy (NOVA), and by the Deutsche Forschungsgemeinschaft in the framework of the TR33 'The Dark Universe'.
Based on data products from observations made with ESO Telescopes at the La Silla Paranal Observatory
under programme IDs 177.A-3016, 177.A-3017 and 177.A-3018, and on data products produced by Target/OmegaCEN, INAF-OACN, INAF-OAPD and the KiDS production team, on behalf of the KiDS consortium.
{\small \textit{Author Contributions:} All authors contributed to the development and writing of this paper. The authorship list is given in three groups: the lead authors (KK, CH, HHi, RN, TE, JdJ, MV), followed by two alphabetical groups. The first alphabetical group includes those who are key contributors to both the scientific analysis and the data products. The second group covers those who have either made a significant contribution to the data products, or to the scientific analysis.}
\bibliographystyle{mnras}
\section{Introduction} \label{sec:recintro}
\subsection{History and motivation} \label{ssec:motiv}
Rectangular cavities are perhaps the most frequently studied
geometries in connection with vacuum (Casimir) energy.
Nevertheless, there are still worthwhile things to say about them.
Although exactly solvable, they exhibit many features that
are subjects of current research and debate in the broader context
of quantum vacuum energy:
boundary divergences, corner effects, sometimes surprising signs,
and sometimes revealing connections with geometry through the
spectrum of periodic and other closed classical paths (or optical
rays).
A complete listing of previous literature is impossible, but we
summarize what we see as the most important historical
developments.
Lukosz \cite{Lu} calculated the interior vacuum energy of the
electromagnetic field in a (3D) perfectly conducting
parallelepiped, using zeta-function regularization, and predicted a
repulsive (outward) force for many aspect ratios, including the
cube.
Ambj{\o}rn and Wolfram \cite{AW} extended such calculations to a
wide variety of dimensions, fields, and boundary conditions.
Actor \cite{A} emphasized that divergences in the total energy
must be understood in terms of the local behavior of the field near
boundaries and boundary singularities (edges and corners),
and calculated the local zeta function for the 3D scalar field.
For earlier, closely related discussions of a rectangular
waveguide and various other systems, see \cite{DowK,DowB}.
All these works were done in
the framework of zeta functions~\cite{El,EORBZ,Kir},
but in practice, in the special
case of rectangular cavities, functional equations for Epstein zeta
functions are used to convert zeta-regularized sums over
eigenvalues into what are, in effect, zeta-regularized sums over
classical paths.
(In \cite{RVZ} a transition to an ultraviolet cutoff was also
made at this step.)
The calculation of vacuum energy via classical paths (also called
optical rays) \cite{BmC,BD12,JR,SS,MSSV,funorman,JS,SJ12} leads
naturally to more physical regularizations associated with
separation of points in the Green functions of the theory.
For rectangular parallelepipeds such calculations are exact
(no stationary-phase approximations are required) and reduce to
the classic method of images. Although they did not discuss vacuum
energy, Balian and Bloch \cite{BB3} used the 3D parallelepiped
as a principal example in their landmark study of the relation
between periodic orbits and oscillations in the eigenvalue density,
and their catalog of periodic and other closed orbits is the
best starting point for a study of the rectangle.
Hacyan et al.~\cite{HJV} calculated the full stress tensor for the
electromagnetic field in a box by a Green-function approach;
along the way, they showed how to reduce the electromagnetic problem
to scalar fields with mixed Dirichlet and Neumann boundary conditions
via Hertz potentials. In this connection see also \cite{RS}.
(Contrary to the impression left by some papers on the global
problem, a local investigation of electromagnetism cannot be split
into pure Dirichlet and pure Neumann problems, even when a
decomposition into TE and TM modes exists.)
At this point we should mention the work of Ford and Svaiter
\cite{FS}, which showed that physically motivated cutoffs could
convert divergences into finite effects clearly localized near
boundaries.
This theme has been repeatedly visited since then
\cite{GO,systemat,funorman,delta,ines1,ines2}
and will play a major role in the present work.
Cavalcanti \cite{Cav} rejuvenated the field by introducing the
piston model (for a 2D scalar field),
discussed in detail in Sec.~\ref{sec:piston} and illustrated in
Fig.~\ref{fig:piston}.
(Similar ideas were advanced earlier by Svaiter et
al.~\cite{Sv}.)
The motivation for the piston is that the calculation of the
force on the piston plate is unaffected by either uncompensated
divergences or unknown forces from the exterior.
The conclusion of \cite{Cav} is that the force is always attractive
(inward).
That paper used both zeta and cutoff regularization, but still
starting from the eigenvalue spectrum.
Hertzberg et al.\ \cite{HJ1,HJ2} extended the piston model to
dimension~3 and to the electromagnetic field, and they analyzed it
in terms of closed paths (but without the close attention to
locally defined quantities that we provide here).
From that point of view, the repulsive nature of the Lukosz force
is attributable to a particular type of path moving parallel to the
plate and producing an energy proportional to the piston
displacement. (It is essentially the Casimir energy associated
with the walls perpendicular to the movable plate.) But such energy is
also present in the exterior part of the piston shaft, and
therefore these paths make no net contribution to the force. What
is left of the Lukosz force is attractive.
This effect shows up even more clearly in the two-dimensional
model (Sec.~\ref{sec:piston}).
Rodriguez et al.\ \cite{R1,R2} have made a numerical study of two
conducting rectangular objects in a narrow passage, a model closely
akin to the pistons and pistols we discuss here.
They conclude that the distance to the confining walls influences
the attraction between the blocks, and their analysis makes use of
the local stress tensor.
In \cite{Z} that model is approached by the method of closed
optical paths.
Illuminating though the piston has been, it does not settle the
original issue of the physical reality of the force calculated by
Lukosz \cite{Lu} and others.
The existence of a Casimir-like energy in the exterior part of the
piston shaft says nothing about what happens when that part of the
shaft is removed, the plate remaining free to move
(see Fig.~\ref{fig:looselid} in Sec.~\ref{sec:pistol-d}).
The ``finite part'' of that force is robust, in the sense that all
reasonable prescriptions for calculating it give the same answer.
It can be obtained by differentiating the total energy, or by
integrating the pressure over the movable boundary.
It can be obtained by zeta-function regularization or by
ultraviolet cutoffs, and within the latter framework the choice of
cutoff function dictates the relative sizes of the
cutoff-dependent terms but not the structure of the series nor the
numerical value of the finite term \cite[Appendix B]{Cav}.
Is the object of this consensus a meaningless number? One of
our goals is to investigate to what extent it has physical
significance.
The opinion expressed in \cite{HJ1} is that
``Without [the piston shaft] (or some open region that allows
rigid motion of the partition) the Casimir energy of the
parallelepiped is, in fact, cutoff dependent.
If the cutoff dependence is somehow ignored, a repulsive force
\dots\ remains as an artifact.''
We agree that a correct calculation of the force on the piston
must include the effect of the piston shaft,
and that the net effect is attractive.
We do not agree that the repulsive force
associated with the interior can be dismissed as
an artifact of naively discarding a divergent term.
The scenario indicated in Fig.~\ref{fig:looselid},
a box with a movable lid, is a
well-defined problem of relative motion of rigid bodies, just as
much as the piston is; cutoff-dependent energies associated with
the rigid boundaries cannot affect the force.
The difficulties of analyzing Fig.~\ref{fig:looselid} are, first,
that the effects of the corners and gaps in that geometry are hard
to calculate
(see, however, \cite{Luedge,DC,Dowker,Smith,GL} and, on a
different tack, \cite{MPW,MPWconfs}),
and, second, that the idealized Casimir theory is not
physically applicable to very small separations of the bodies.
We evade the first problem by considering another scenario,
the ``pistol'' (Fig.~\ref{fig:pistol}),
which should still exhibit the uncompensated Lukosz force on a
flat boundary.
However, we find that the situation is then confounded by a
strong countervailing attractive force associated with the
Casimir energy in the narrow gap surrounding the
``bullet''\negthinspace.
Moreover, one now runs up against the second problem, which
cannot be treated seriously within the limits of our methodology.
A crude model of a ``real'' boundary can be easily obtained,
however, by maintaining a finite cutoff of atomic dimensions.
The result is that the force depends sensitively on how tightly
the ``bullet'' fits into the ``barrel''\negthinspace.
If the fit is loose, the Lukosz force is overwhelmed by the
corresponding force associated with the gap surrounding the
bullet, and the net force is attractive.
If the fit is tight, the gap force can be made repulsive or even
fine-tuned to vanish, as originally hoped; unfortunately, that
is the regime in which one is least justified in taking the model
seriously.
All we claim is that external forces opposing the Lukosz force are
model-dependent and might, in principle, be controlled so as to
demonstrate the existence of the Lukosz force.
In this paper we consider strictly the two-dimensional scalar
field,
usually with Dirichlet boundary conditions,
although we sometimes lapse into the three-dimensional
electromagnetic terminology (such as ``conductor'') for conceptual
discussions.
It is intended that three-dimensional generalizations will be
presented elsewhere~\cite{Liu}.
Sec.~\ref{sec:local}
presents a thorough analysis, by means of classical paths, of all
components of the stress (energy-momentum) tensor in
a rectangle.
Sec.~\ref{sec:global} does the same for the energy and also the
pressure and force on one side.
Contributions are recorded for each path (or class of similar
paths) separately, with comments on their physical or geometrical
significance to the extent that we can discern it.
The results are stated for all values of the
curvature, or conformal, coupling constant, $\xi$
(see (\ref{lagrangian}) and (\ref{stresst})),
and all values of the parameter in an
exponential ultraviolet cutoff.
For the most part, they are stated for any combination of
Dirichlet and Neumann conditions on the four sides of the box.
A brief account of this part of the work was published
previously~\cite{leipzig}, along with evidence that the
gravitational effects of boundaries in the
``renormalized'' theory
without cutoff can be understood (and believed) as the
distributional limit of the predictions of the cutoff theory,
thereby providing a true renormalization.
In the rest of the paper we restrict to the Dirichlet condition.
The piston is reviewed from our point of view in
Sec.~\ref{sec:piston}.
Sec.~\ref{sec:pistol-d} introduces the pistol model and treats it
in the Dirichlet theory.
Sec.~\ref{sec:pistol-c} investigates the pistol with a finite
cutoff.
{\sl A remark on terminology:\/}
Many authors, including some of ourselves on previous occasions
(e.g., \cite{systemat}), use the term ``renormalized energy'' to
refer to the finite part of a regularized energy when the
latter is expanded as a series in the cutoff parameter.
Strictly speaking, ``renormalization'' refers to the process
of obtaining physically observable probability amplitudes by
absorbing suitable divergent and finite contributions into
redefinitions of physical parameters (couplings, masses, etc.)\
appearing in the bare Lagrangian.
Ideally,
all renormalizations in the first sense should either be
associated with renormalizations in the second sense or be
justified by cancellations of divergent terms coming from
different sources.
Yet in the absence of a completed theory, one must often talk
about renormalization in the first sense without having an
obvious counterterm or cancellation, and there seems to be no
convenient substitute terminology.
Much of our work in this paper has to do, in fact, with
exhibiting cancellations, and \cite{leipzig} and its planned
sequels have to do with gravitational counterterms.
When we use ``renormalization'' or ``renormalized'' in the first
sense, we have always either put the word in quotation marks or
accompanied it by the word ``naive''\negthinspace, depending on
context.
\subsection{Basic formalism} \label{ssec:formalism}
We are concerned here with the massless scalar wave equation
\begin{equation}
\pd{^2\phi}{(x^0)^2} = \nabla^2\phi
\label{fieldeq}\end{equation}
in a cavity $\Omega$ together with a Dirichlet ($\phi=0$)
or Neumann ($\hat\mathbf{n}\cdot \nabla\phi=0$) condition
on each part of the boundary of~$\Omega$.
We write $H$ for the corresponding positive self-adjoint operator:
$H = - \nabla^2$ with boundary conditions understood.
The eigenvalues of $H$ are positive, with the possible exception
(in the totally Neumann case) of a constant eigenfunction with
eigenvalue zero.
The formulas in this subsection are presented for arbitrary
spatial dimension~$d$, but in the next section we specialize to
$d=2$.
The field equation (\ref{fieldeq}) is obtained canonically from the
curved-space action and Lagrangian
\begin{equation} S= \int_\Omega L\, \sqrt{|g|} \,d^{d+1}x,
\qquad
L={\textstyle\frac12}\left[g^{\mu\nu}\,\partial_\mu\phi\,\partial_\nu\phi
+ \xi R\,\phi^2\right],
\label{lagrangian} \end{equation}
by taking the variation with respect to $\phi$ and then setting
the metric to its flat-space value, $g_{\mu\nu}=\eta_{\mu\nu}\,$.
(Our tensorial sign conventions are that $\eta_{00}<0$,
but $T_{00}>0$ for normal matter.)
The stress tensor is defined by
\begin{equation} T^{\mu\nu} = \frac2 {\sqrt{|g|}} \,
\frac{\delta S}{\delta g_{\mu\nu}}\,.
\label{stressdef}\end{equation}
It reduces in flat space-time (after use of the equation of motion,
(\ref{fieldeq})) to
\begin{equation}
T_{\mu\nu} = (1-2\xi)\, \partial_\mu\phi\, \partial_\nu\phi
+\bigl(2\xi-{\textstyle\frac12}\bigr) \eta_{\mu\nu} \,
\partial_\lambda\phi \,
\partial^\lambda \phi -2\xi\, \phi\,\partial_\mu\partial_\nu\phi.
\label{stresst}\end{equation}
In (\ref{lagrangian}) $R$ is the curvature scalar, and
$ \xi$
labels different possible gravitational couplings.
In curved space different values of $\xi$ are different theories;
after the reduction to flat space
the field equation is
independent of~$\xi$, but the stress tensors are different.
It turns out [see (\ref{T00})] that changing $\xi$ changes
$T_{00}$ only
by a divergence, and therefore the total energy
$ E = \int_\Omega T_{00}\, d\mathbf{r}$
is independent of $\xi$, at least classically,
under Dirichlet or Neumann boundary conditions.
(A Robin boundary condition \cite{leb,systemat,sah},
$\hat\mathbf{n}\cdot\nabla\phi = \gamma \phi$,
would require a boundary term to be added to the action
(\ref{lagrangian}).
There results a $\xi$-dependent boundary term in $E$, which
vanishes when $\xi=\frac14\,$.
Similar remarks apply to models with delta function potentials
\cite{delta,ines1,ines2}.)
There are three natural choices of~$\xi$:
\begin{description}
\item{$\xi=0\,$:} minimal coupling, which simplifies the
Lagrangian and curved-space field equation;
\smallskip \item{$\xi = \xi_d\,$:} conformal coupling,
\begin{equation}
\xi_d \equiv \frac{d-1}{4d}\,;
\qquad
\xi_2 ={\textstyle \frac18}\,, \quad \xi_3 = {\textstyle\frac16}\,, \quad
\xi_\infty={\textstyle\frac14}\,,
\label{conformal}\end{equation}
which results in the mildest behavior of the quantized field near
the boundary;
\smallskip \item{$\xi = \frac14\,$:} the coupling that
eliminates the Robin boundary energy, which also
simplifies the relation between the stress tensor and the total
energy, as we shall see.
\end{description}
It is convenient to adopt $\xi=\frac14$ as the base value and to
define
$ \beta= \xi-\frac14 $
to parametrize the coupling.
Thus we write
\begin{equation}
T_{\mu\nu}(\xi) \equiv T_{\mu\nu}({\scriptstyle\frac14}) + \Delta T_{\mu\nu}
\label{Tgen}\end{equation}
and obtain
\begin{equation}
T_{00}({\scriptstyle\frac14}) =
\frac12\Biggl[\left(\pd \phi {x^0}\right)^2
-\phi\nabla^2\phi \Biggr], \qquad
\Delta T_{00} = -2\beta\nabla\cdot(\phi\nabla\phi),
\label{T00}\end{equation}
\begin{equation}
T_{0j}({\scriptstyle\frac14}) = \frac12\left[\pd{\phi}{x^0}\,\pd{\phi}{x_j}\,
-\phi\,\pd{^2\phi}{x^0\,\partial x_j}\right], \qquad
\Delta T_{0j} = -2\beta\pd{}{x^0}\left(\phi\,\pd{\phi}{x_j}\right),
\label{T0j}\end{equation}
\begin{eqnarray}
T_{jk}({\scriptstyle\frac14}) &=& \frac12\left[\pd{\phi}{x_j}\,\pd{\phi}{x_k}
- \phi\, \pd{^2\phi}{x_j\,\partial x_k} \right],
\nonumber\\
\Delta T_{jk} &=&-2\beta \left[ \pd {\phi}{x_j}\, \pd{\phi}{x_k}
+ \phi\, \pd{^2\phi}{x_j\,\partial x_k} \right]
\quad\hbox{when $j\ne k$,}
\label{Tjk}\end{eqnarray}
\begin{eqnarray}
T_{jj}({\scriptstyle\frac14}) &=& \frac12\Biggl[\left(\pd{\phi}{x_j}\right)^2
- \phi\,\pd{^2\phi}{x_j{}\!^2}\Biggr],
\nonumber \\
\Delta T_{jj}&=& -2\beta\Biggl[\left(\pd{\phi}{x^0}\right)^2
-\sum_{k\ne j} \left(\pd{\phi}{x_k}\right)^2
+\phi\,\pd{^2\phi}{x_j{}\!^2}\Biggr].
\label{Tjj}\end{eqnarray}
The trace of the tensor is
\begin{equation}
T^\lambda_\lambda = -\left({\textstyle\frac12}+2\beta d\right)
\Biggl[\left(\pd{\phi}{x^0}\right)^2 - (\nabla\phi)^2\Biggr],
\label{trace}\end{equation}
which vanishes for the conformal coupling, $\beta=- (4d)^{-1}$.
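The trace formula can be checked directly against the components (\ref{T00})--(\ref{Tjj}); note that the $\phi\,\nabla^2\phi$ terms cancel identically, so no use of the field equation is needed. A short symbolic verification for $d=2$ (an editorial consistency check, not part of the original derivation):

```python
import sympy as sp

t, x, y, beta = sp.symbols('t x y beta')
phi = sp.Function('phi')(t, x, y)

d = 2
xs = [x, y]
grad2 = sum(sp.diff(phi, s) ** 2 for s in xs)        # (grad phi)^2
lap = sum(sp.diff(phi, s, 2) for s in xs)            # Laplacian of phi

# T_00 = T_00(1/4) + Delta T_00, Eq. (T00); Delta uses div(phi grad phi)
T00 = sp.Rational(1, 2) * (sp.diff(phi, t) ** 2 - phi * lap) \
      - 2 * beta * (grad2 + phi * lap)

def Tjj(j):
    # diagonal spatial components, Eq. (Tjj)
    others = sum(sp.diff(phi, xs[k]) ** 2 for k in range(d) if k != j)
    base = sp.Rational(1, 2) * (sp.diff(phi, xs[j]) ** 2
                                - phi * sp.diff(phi, xs[j], 2))
    delta = -2 * beta * (sp.diff(phi, t) ** 2 - others
                         + phi * sp.diff(phi, xs[j], 2))
    return base + delta

# Lorentzian trace with eta_00 < 0: T^lam_lam = -T_00 + T_11 + T_22
trace = -T00 + Tjj(0) + Tjj(1)
expected = -(sp.Rational(1, 2) + 2 * beta * d) * (sp.diff(phi, t) ** 2 - grad2)
assert sp.expand(trace - expected) == 0
```

Setting $\beta=-1/(4d)=-\frac18$ in the check confirms that the conformal trace vanishes off shell in the time-independent terms as well.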
When the theory is canonically quantized, the vacuum expectation
value of the stress tensor is expressed formally in terms of the
normal modes
\[ \varphi_n =\frac1{\sqrt{2\omega_n}} \,\phi_n(\mathbf{r})
e^{-i\omega_n x^0}, \qquad H\phi_n = \omega_n{}\!^2\phi_n\,,
\qquad \|\phi_n\|=1,\]
as
\begin{equation}
\langle T_{\mu\nu}(\mathbf{r})\rangle = \sum_{n=1}^\infty
T_{\mu\nu} [\varphi_n,\varphi_n^*].
\label{modesum}\end{equation}
(The Neumann zero mode, when it exists, is omitted and
ignored. If included and treated properly, it would add a continuous
energy spectrum \cite[Appendix]{aspects}.)
The notation in (\ref{modesum}) means that in each
of the bilinear terms in (\ref{T00})--(\ref{Tjj}), the field $\phi$
is replaced by a mode function in one factor and by its
complex conjugate in the other. (When the factors are not the same,
the product should be symmetrized.)
In particular,
\begin{equation}
\langle T_{00}({\scriptstyle\frac14})\rangle =
{\textstyle\frac12}\sum_n \omega_n |\phi_n(\mathbf{r})|^2.
\label{T00sum}\end{equation}
Integrating $T_{00}({\scriptstyle\frac14})$ over $\Omega$ gives the expected formal sum
for the total energy,
\begin{equation}
\langle E\rangle = {\textstyle\frac12}\sum_n \omega_n \,.
\label{energysum}\end{equation}
As promised earlier, we regularize all these divergent sums with
an exponential ultraviolet cutoff.
It is convenient to start from the (Poisson) cylinder kernel,
\begin{equation}
T(t,\mathbf{r},\mathbf{r}')\equiv
\sum_{n=1}^\infty \phi_n(\mathbf{r})
\phi_n(\mathbf{r}')^* e^{-t\omega_n} =
\langle \mathbf{r}|e^{-t\sqrt{H}} |\mathbf{r}' \rangle.
\label{cyl}\end{equation}
(Here $t$ is not the physical time.) Then
\begin{equation}
\langle T_{00}({\scriptstyle\frac14}) \rangle_t
= -\, \frac12\,\pd Tt(t,\mathbf{r},\mathbf{r}),
\label{Toot}\end{equation}
\begin{equation}
\langle E\rangle_t = -\, \frac12\, \pd{}t
T(t), \qquad
T(t) \equiv \int_\Omega
T(t,\mathbf{r},\mathbf{r})\,d\mathbf{r}.
\label{ET}\end{equation}
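As a warm-up illustrating the cutoff structure of (\ref{ET}) — for the one-dimensional Dirichlet interval, which is not itself treated in this paper — the mode sum for $T(t)$ is a geometric series with closed form, and the small-$t$ expansion of $\langle E\rangle_t$ exhibits a divergent term plus the familiar finite Casimir energy $-\pi/24L$:

```python
import sympy as sp

t, L = sp.symbols('t L', positive=True)
# Integrated cylinder kernel for a 1D Dirichlet interval of length L:
# T(t) = sum_n exp(-t n pi / L) = 1/(exp(pi t/L) - 1)
T = 1 / (sp.exp(sp.pi * t / L) - 1)
E = -sp.Rational(1, 2) * sp.diff(T, t)      # Eq. (ET)
small_t = sp.series(E, t, 0, 1).removeO()
# leading cutoff divergence L/(2 pi t^2) plus finite part -pi/(24 L)
assert sp.simplify(small_t - L / (2 * sp.pi * t ** 2) + sp.pi / (24 * L)) == 0
```

The divergent term is proportional to the volume $L$, in line with the local boundary-divergence picture discussed in \S\ref{ssec:motiv}, while the finite part is cutoff-independent.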
To obtain $\langle \Delta T_{00}\rangle $ and the other components of
$\langle T_{\mu\nu}\rangle $ one needs a more primitive cylinder kernel,
\begin{equation}
\overline T(t,\mathbf{r},\mathbf{r}') =
- \sum_{n=1}^\infty \frac{1}{\omega_n}\,
\phi_n(\mathbf{r})\phi_n(\mathbf{r}')^* e^{-t\omega_n}.
\label{Tbar}\end{equation}
Then $T = \pd {\overline T}t$ and
\begin{equation}
\langle \Delta T_{00}\rangle _t = \beta
\nabla_\mathbf{r}\cdot[\nabla_{\mathbf{r}'}
\overline T(t,\mathbf{r},\mathbf{r}')]_{\mathbf{r}'=\mathbf{r}}\,.
\label{DeltaTt}\end{equation}
In terms of partial differential equations,
$T$ and $\overline T$ are characterized by the elliptic equation
\begin{equation}
\pd{^2T}{t^2} = - \nabla^2 T
\label{cyleq}\end{equation}
along with the imposed spatial
boundary conditions, the initial condition
\[ T(0,\mathbf{r}, \mathbf{r}') = \delta(\mathbf{r}-\mathbf{r}') =
\pd{\overline T}t(0,\mathbf{r}, \mathbf{r}') ,\]
and the requirement of boundedness as $t\to+\infty$.
(The Green function $\overline T$ can be introduced
differently,
either as twice the Euclidean Green function in
$\mathbf{R}\times \Omega$ with its source on $t=0$,
or through an analytic continuation to imaginary time
of the Wightman or Feynman two-point function.)
The vacuum expectation values of the summands in (\ref{T0j}) are
identically zero, as expected from the mode-by-mode time-reversal
invariance.
For the other components one obtains
\begin{equation}
\langle T_{jj}({\scriptstyle\frac14})\rangle_t =
\frac18\Biggl[ -2\,\pd{^2}{x_j\,\partial x_j'}
+ \pd{^2}{x_j{}\!^2}
+ \pd{^2}{x'_j{}\!^2} \Biggr]\overline T ,
\label{TjjT}\end{equation}
\begin{equation}
\langle \Delta T_{jj} \rangle_t =
\frac\beta2 \Biggl[2\,\pd{^2}{t^2}
-2\sum_{k\ne j} \pd{^2}{x_k\,\partial x'_k} + \pd{^2}{x_j{}\!^2}
+ \pd{^2}{x'_j{}\!^2} \Biggr] \overline T ,
\label{DTjjT} \end{equation}
\begin{equation}
\langle T_{jk}({\scriptstyle\frac14})\rangle_t =\frac18\Biggl[
\pd{^2}{x_j\,\partial x_k} +\pd{^2}{x'_j\,\partial x'_k}
-\pd{^2}{x_j\,\partial x'_k} -\pd{^2}{x'_j\,\partial x_k}
\Biggr]\overline T,
\label{TjkT}\end{equation}
\begin{equation}
\langle \Delta T_{jk}\rangle_t =\frac{\beta}2 \Biggl[
\pd{^2}{x_j\,\partial x_k} +\pd{^2}{x'_j\,\partial x'_k}
+\pd{^2}{x_j\,\partial x'_k} +\pd{^2}{x'_j\,\partial x_k}
\Biggr]\overline T,
\label{DTjkT}\end{equation}
where it is understood that $\mathbf{r'}$ is to be set equal to
$\mathbf{r}$ at the final step.
\section{The stress tensor}
\label{sec:local}
\subsection{Preliminaries}\label{ssec:prelim}
We now restrict attention to dimension~$2$ and write $x$ for
$x_1$ and $y$ for $x_2\,$.
Define
\begin{eqnarray}
A&\equiv& \pd Tt = \pd{^2\overline{T}}{t^2}\,, \label{Adef}\\
B_1 &\equiv& \frac12 \left( \pd{^2\overline{T}}{x^2} + \pd{^2\overline{T}}{x'^2}
\right), \quad
B_2 \equiv \frac12 \left( \pd{^2\overline{T}}{y^2} + \pd{^2\overline{T}}{y'^2}
\right), \label{Bdef} \\
C_1&\equiv& \pd{^2\overline{T}}{x\,\partial x'}\,,\quad
C_2 \equiv \pd{^2\overline{T}}{y\,\partial y'}\,, \label{Cdef} \\
D_{12}&\equiv& \frac12\left(\pd{^2\overline{T}}{x\,\partial y'} +
\pd{^2\overline{T}}{y\,\partial x'}\right), \label{Ddef} \\
E_{12} &\equiv& \frac12\left( \pd{^2\overline{T}}{x\,\partial y} +
\pd{^2\overline{T}}{x'\,\partial y'}\right)\,. \label{Edef}
\end{eqnarray}
(The subscripts on $D$ and $E$ are merely to facilitate later
generalization to higher dimensions.)
Then from (\ref{Toot}), (\ref{DeltaTt}), and
(\ref{TjjT})--(\ref{DTjkT}) we have
\begin{eqnarray}
\left\langle T_{00}({\scriptstyle\frac14})\right\rangle_t &=& -{\textstyle\frac12}A, \label{00calc} \\
\left\langle \Delta T_{00}\right\rangle_t &=& \beta (B_1 +B_2 +C_1 + C_2),
\label{d00calc}\\
\left\langle T_{01}({\scriptstyle\frac14})\right\rangle_t &=& 0 = \left\langle \Delta T_{01}\right\rangle_t \,,
\quad\hbox{etc.},\label{01calc} \\
\left\langle T_{11}({\scriptstyle\frac14})\right\rangle_t &=& {\textstyle\frac14} (B_1 - C_1), \quad\hbox{etc.},
\label{11calc} \\
\left\langle \Delta T_{11}\right\rangle_t &=& \beta (A + B_1 - C_2), \quad\hbox{etc.},
\label{d11calc}\\
\left\langle T_{12}({\scriptstyle\frac14})\right\rangle_t &=& {\textstyle\frac14}(-D_{12}+E_{12}),
\label{12calc}\\
\left\langle \Delta T_{12}\right\rangle_t &=& \beta (D_{12}+E_{12}). \label{d12calc}
\end{eqnarray}
In (\ref{00calc})--(\ref{d12calc}) it is understood that
$\mathbf{r}'=\mathbf{r}$.
\goodbreak
\subsection{Path classes and energy density} \label{ssec:paths}
The cylinder kernels in infinite two-dimensional space are
\begin{equation}
\overline{T}(t,\mathbf{r},\mathbf{r}') = -\, \frac1{2\pi}\,
(t^2 +|\mathbf{r} -\mathbf{r}'|^2)^{-1/2},
\label{freecylbar}\end{equation}
\begin{equation} T(t,\mathbf{r},\mathbf{r}') =
\frac t{2\pi}\, (t^2 +|\mathbf{r} -\mathbf{r}'|^2)^{-3/2}.
\label{freecyl} \end{equation}
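As an illustrative aside (a Python sketch, not part of the derivation), the relation $T=\partial\overline T/\partial t$ is easy to spot-check for the free kernels (\ref{freecylbar})--(\ref{freecyl}) by finite differences:

```python
import math

def Tbar_free(t, dr2):
    """Free-space kernel, Eq. (freecylbar); dr2 = |r - r'|^2."""
    return -1.0 / (2.0 * math.pi * math.sqrt(t * t + dr2))

def T_free(t, dr2):
    """Free-space cylinder kernel, Eq. (freecyl)."""
    return t / (2.0 * math.pi * (t * t + dr2) ** 1.5)

# A central finite difference confirms T = d(Tbar)/dt at sample points.
h = 1e-6
for t, dr2 in [(0.3, 0.1), (1.0, 2.0), (2.5, 0.01)]:
    deriv = (Tbar_free(t + h, dr2) - Tbar_free(t - h, dr2)) / (2 * h)
    assert abs(deriv - T_free(t, dr2)) < 1e-6
```

The same check applies term by term to the image sums below, since every image contribution has the same functional form with a shifted separation.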
Because of its central importance, we shall discuss the energy
density, $\left\langle T_{00}({\scriptstyle\frac14})\right\rangle$, along with the construction of the
cylinder kernel for the rectangle, path by path.
For rectangular parallelepipeds of any dimension,
with any combination of Dirichlet, Neumann, and periodic boundary
conditions,
the construction of any kernel (Green function) as a sum over
classical paths reduces to the classic ``method of images''
(and yields the exact answer).
For a rectangle the array of image points appears in
Fig.~\ref{fig:images}.
To every path is associated a sign, $(-1)^\eta$, where $\eta$ is
the number of Dirichlet sides struck by the path.
(If a path hits a corner, both sides are counted, and the path
reflects back upon itself.)
The image sum for $\overline T$ is not absolutely convergent, but
the
derivatives of the series, from which observable quantities are
calculated, are convergent.
\begin{figure}
\centerline{\beginpicture
\setcoordinatesystem units <.5in, .4in> point at 0 1
\setplotarea x from -0.5 to 3.5, y from -0.5 to 2.5
\putrule from -0.5 0 to 3.5 0
\putrule from -0.5 1 to 3.5 1
\putrule from -0.5 2 to 3.5 2
\putrule from 0 -0.5 to 0 2.5
\putrule from 1 -0.5 to 1 2.5
\putrule from 2 -0.5 to 2 2.5
\putrule from 3 -0.5 to 3 2.5
\put{$\times$} at .7 .2
\put{$\bullet$} at 2.7 .2
\put{$\bullet$} at .7 2.2
\put{$\bullet$} at 2.7 2.2
\put{$\circ$} at 1.3 .2
\put{$\circ$} at 3.3 .2
\put{$\circ$} at .7 -.2
\put{$\circ$} at 2.7 -.2
\put{$\circ$} at 1.3 2.2
\put{$\circ$} at 3.3 2.2
\put{$\circ$} at .7 1.8
\put{$\circ$} at 2.7 1.8
\put{$*$} at 1.3 -.2
\put{$*$} at 3.3 -.2
\put{$*$} at 1.3 1.8
\put{$*$} at 3.3 1.8
\linethickness=1.3pt
\putrule from 0 0 to 1 0
\putrule from 0 1 to 1 1
\putrule from 0 0 to 0 1
\putrule from 1 0 to 1 1
\endpicture\qquad\parbox{2.5truein}
{\begin{description}
\item[$\times$] $ = \hbox{point $\mathbf{r}$ under study}$,
\item[$\bullet$] $ = \hbox{periodically displaced image}$,
\item[$\circ$] $ = \hbox{reflection through a side}$,
\item[$*$] $ = \hbox{reflection through a corner}$.
\end{description}}}
\caption{A point in a rectangle and its images relevant to
Dirichlet and Neumann boundary conditions
(cf.\ \cite[Sec.~9.A]{BB3}).
Image points fall into three classes according to whether the
number of reflections is \emph{even} in both dimensions, in
exactly one, or in neither.
The first case corresponds to periodic displacements.
Points of the third class are joined to $\mathbf{r}$ by lines that
pass through an intersection point of the lattice of extended
rectangle sides --- i.e., an image of a corner of the rectangle.}
\label{fig:images} \end{figure}
\begin{figure}
\centerline{\beginpicture
\setcoordinatesystem units <.5in, .35in> point at 0 0
\setplotarea x from -0.5 to 3.5, y from -0.5 to 2.5
\putrule from -0.5 0 to 3.5 0
\putrule from -0.5 1 to 3.5 1
\putrule from -0.5 2 to 3.5 2
\putrule from 0 -0.5 to 0 2.5
\putrule from 1 -0.5 to 1 2.5
\putrule from 2 -0.5 to 2 2.5
\putrule from 3 -0.5 to 3 2.5
\put{$\times$} at .7 .2
\put{$\bullet$} at 2.7 .2
\put{$\bullet$} at .7 2.2
\put{$\bullet$} at 2.7 2.2
\plot .7 .2
2.7 2.2 /
\plot 1 .5
.5 1 /
\plot .5 1
0 .5 /
\plot 0 .5
.5 0 /
\plot .5 0
.7 .2 /
\setdashes
\putrule from .7 0 to .7 2.2
\setsolid \linethickness=1.3pt \noindent
\putrule from 0 0 to 1 0
\putrule from 0 1 to 1 1
\putrule from 0 0 to 0 1
\putrule from 1 0 to 1 1
\endpicture}
\caption{Two periodic paths (one solid, one dashed) are shown,
both within the rectangle and in the covering space.}
\label{fig:per}\end{figure}
Following Cavalcanti \cite{Cav} we take the rectangle to have
horizontal and vertical dimensions $a$ and $b$, horizontal and
vertical coordinates $x$ and $y$, and horizontal image-displacement
indices $j$ and~$k$.
(We occasionally still find it necessary to use $j$ and $k$ as
tensor indices, but never in the same equation as the image
indices.)
Thus the contribution of a typical periodic
path (see Fig.~\ref{fig:per}) to $\overline{T}$ is
\begin{equation}
\overline{T}_{\mathrm{P}jk} = -\,\frac{(-1)^\eta}{2\pi}
[t^2 + (2ja +x'-x)^2 + (2kb+y'-y)^2]^{-1/2}.
\label{Pjk}\end{equation}
From (\ref{00calc}) and (\ref{Adef}) we obtain
\begin{eqnarray}
\left\langle T_{00}({\scriptstyle\frac14})\right\rangle_{t\mathrm{P}jk} &=&
-\,\frac{(-1)^\eta}{4\pi} \left[t^2+(2ja)^2 +(2kb)^2\right]^{-5/2}
\nonumber\\ &&\times
\left[-2t^2+(2ja)^2 + (2kb)^2\right],
\label{T00Pjk}\end{eqnarray}
which is independent of~$\mathbf{r}$.
Also, one finds from (\ref{d00calc}) and
(\ref{Bdef})--(\ref{Cdef})
that $C_j = - B_j$ in this case and hence
\begin{equation}
\left\langle \Delta T_{00}\right\rangle_{t\mathrm{P}jk} = 0.
\label{dT00Pjk}\end{equation}
These two results are expected and related:
Since $\Delta T_{00}$ is a total divergence and hence must
integrate to $0$, and since the energy from a periodic path is
independent of position in the rectangle,
$\left\langle \Delta T_{00}\right\rangle_{t\mathrm{P}jk}$ must be identically zero.
Finally, note that if the boundaries are all Dirichlet or all Neumann,
$\eta$ is even and hence $\left\langle T_{00}\right\rangle_{0\mathrm{P}jk}$ is
always negative.
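The cancellation (\ref{dT00Pjk}) can also be confirmed numerically. The following Python sketch (illustrative only; the choices $j=1$, $k=2$, $a=1$, $b=1.5$, and the evaluation point are arbitrary) builds a single periodic image term (\ref{Pjk}) and evaluates the combination $B_1+B_2+C_1+C_2$ of (\ref{d00calc}) by finite differences:

```python
import math

def Tbar_P(t, x, y, xp, yp, j=1, k=2, a=1.0, b=1.5):
    # One periodic image term of Tbar, Eq. (Pjk), with (-1)^eta = +1.
    s2 = (2 * j * a + xp - x) ** 2 + (2 * k * b + yp - y) ** 2
    return -1.0 / (2.0 * math.pi * math.sqrt(t * t + s2))

def d2(f, i, l, p, h=1e-4):
    # Mixed second partial d^2 f / (d p_i d p_l) by central differences;
    # for i == l this reduces to the ordinary second derivative (step 2h).
    def shift(q, m, s):
        q = list(q)
        q[m] += s
        return q
    return (f(*shift(shift(p, i, h), l, h)) - f(*shift(shift(p, i, h), l, -h))
            - f(*shift(shift(p, i, -h), l, h))
            + f(*shift(shift(p, i, -h), l, -h))) / (4 * h * h)

# Arguments are (t, x, y, x', y'); evaluate at coincidence x' = x, y' = y.
p = [0.7, 0.3, 0.4, 0.3, 0.4]
B1 = 0.5 * (d2(Tbar_P, 1, 1, p) + d2(Tbar_P, 3, 3, p))
B2 = 0.5 * (d2(Tbar_P, 2, 2, p) + d2(Tbar_P, 4, 4, p))
C1 = d2(Tbar_P, 1, 3, p)
C2 = d2(Tbar_P, 2, 4, p)
# Eq. (dT00Pjk): the combination in (d00calc) vanishes for periodic paths.
assert abs(B1 + B2 + C1 + C2) < 1e-6
```

Numerically one sees $C_j=-B_j$ term by term, exactly as found analytically.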
\begin{figure}
\centerline{\beginpicture
\setcoordinatesystem units <.5in, .35in> point at 0 0
\setplotarea x from -0.5 to 3.5, y from -0.5 to 2.5
\putrule from -0.5 0 to 3.5 0
\putrule from -0.5 1 to 3.5 1
\putrule from -0.5 2 to 3.5 2
\putrule from 0 -0.5 to 0 2.5
\putrule from 1 -0.5 to 1 2.5
\putrule from 2 -0.5 to 2 2.5
\putrule from 3 -0.5 to 3 2.5
\put{$\times$} at .7 .2
\put{$*$} at 1.3 -.2
\put{$*$} at 3.3 -.2
\put{$*$} at 1.3 1.8
\put{$*$} at 3.3 1.8
\plot .7 .2
3.3 1.8 /
\plot 1 .3846
0 1 /
\setdashes
\plot .7 .2
1.3 -.2 /
\setsolid \linethickness=1.3pt \noindent
\putrule from 0 0 to 1 0
\putrule from 0 1 to 1 1
\putrule from 0 0 to 0 1
\putrule from 1 0 to 1 1
\endpicture}
\caption{Two corner paths. Inside the rectangle such paths
bounce back from a corner and retrace themselves.
The shortest such paths have lengths arbitrarily close to $0$.}
\label{fig:corner} \end{figure}
Next, consider the simple and interesting case of a corner path:
\begin{equation}
\overline{T}_{\mathrm{C}jk}
= -\,\frac{(-1)^\eta}{2\pi}
\left[t^2 +(2ja -x'-x)^2 +(2kb -y'-y)^2 \right]^{-1/2}.
\label{Cjk}\end{equation}
This time one finds that $C_j = +B_j$ and $B_1+B_2 =-A$, so that
\begin{eqnarray}
\left\langle T_{00}({\scriptstyle\frac14})\right\rangle_{t\mathrm{C}jk} &=&
{\textstyle \frac1{4\beta}} \left\langle\Delta T_{00}\right\rangle_{t\mathrm{C}jk}
\nonumber \\ &=&
-\, \frac{(-1)^\eta}{4\pi}\left[t^2+(2ja-2x)^2 +(2kb-2y)^2\right]^{-5/2}
\nonumber \\ &&{}\times
\left[-2t^2+(2ja-2x)^2 + (2kb-2y)^2\right] .
\label{T00Cjk}\end{eqnarray}
That is, the two terms in $\left\langle T_{00}\right\rangle_{t\mathrm{C}jk}$ are
proportional, and, in particular,\break
$\left\langle T_{00}\right\rangle_{t\mathrm{C}jk}$ vanishes
for minimal coupling ($\beta=-\frac14$).
These seeming coincidences are probably related to the fact that
the integral of $\left\langle T_{00}\right\rangle_{t\mathrm{C}jk}$ over the rectangle must
vanish (see Sec.~\ref{sec:global}). Note that the quantity is a
function of the distance to $\mathbf{r}$ from the corner or
corner-image concerned (see Fig.~\ref{fig:corner}).
Again it is negative as $t\to0$
whenever all the boundary conditions are of
the same type.
\begin{figure}
\centerline{\beginpicture
\setcoordinatesystem units <.5in, .4in> point at 0 0
\setplotarea x from -0.5 to 3.5, y from -0.5 to 2.5
\putrule from -0.5 0 to 3.5 0
\putrule from -0.5 1 to 3.5 1
\putrule from -0.5 2 to 3.5 2
\putrule from 0 -0.5 to 0 2.5
\putrule from 1 -0.5 to 1 2.5
\putrule from 2 -0.5 to 2 2.5
\putrule from 3 -0.5 to 3 2.5
\put{$\times$} at .7 .2
\put{$\circ$} at 1.3 .2
\put{$\circ$} at 3.3 .2
\put{$\circ$} at .7 -.2
\put{$\circ$} at 2.7 -.2
\put{$\circ$} at 1.3 2.2
\put{$\circ$} at 3.3 2.2
\put{$\circ$} at .7 1.8
\put{$\circ$} at 2.7 1.8
\plot
0.7 0.2
2.7 1.8 /
\plot 1 0.44
0.3 1 /
\plot 0.3 1
0 0.76 /
\plot 0 0.76
0.7 0.2 /
\setdashes
\putrule from 0.7 0.2 to 0.7 1.8
\setsolid \linethickness=1.3pt \noindent
\putrule from 0 0 to 1 0
\putrule from 0 1 to 1 1
\putrule from 0 0 to 0 1
\putrule from 1 0 to 1 1
\endpicture}
\caption{Two side paths of the vertical subclass.
The dashed path is a direct
perpendicular reflection (retracing itself),
with length approaching $0$ as
$\mathbf{r}$ approaches the top boundary.
The solid path combines a reflection from the top with a periodic
horizontal drift; its length is bounded away from $0$.}
\label{fig:side} \end{figure}
The situation is slightly more complicated for paths that
``bounce'' in one dimension while being periodic (or fixed)
in the other.
The number of reflections is now odd, and the energy density
turns out to be
positive for Dirichlet conditions and negative for Neumann,
at least for $\beta$ near~$0$.
We call these paths ``vertical side paths'' if
the bounce is off a horizontal side (see Fig.~\ref{fig:side});
this includes, in particular, the strictly vertical paths
($j=0$).
In those cases we have
\begin{equation}
\overline{T}_{\mathrm{V}jk} = -\,\frac{(-1)^\eta}{2\pi}
\left[t^2 +(2ja +x'-x)^2 +(2kb - y'-y)^2 \right]^{-1/2},
\label{Vjk}\end{equation}
\begin{eqnarray}
\left\langle T_{00}({\scriptstyle\frac14})\right\rangle_{t\mathrm{V}jk} & =&
-\, \frac{(-1)^\eta}{4\pi}\left[t^2+(2ja)^2 +(2kb-2y)^2\right]^{-5/2}
\nonumber\\&& {}\times
\left[-2t^2+(2ja)^2 + (2kb-2y)^2\right] ,
\label{T00Vjk}\end{eqnarray}
\begin{eqnarray}
\left\langle \Delta T_{00}\right\rangle_{t\mathrm{V}jk} & =&
\frac{\beta(-1)^\eta}{\pi}
\left[t^2+(2ja)^2 +(2kb-2y)^2\right]^{-5/2}
\nonumber \\ && {}\times
\left[t^2+ (2ja)^2 -2 (2kb-2y)^2\right].
\label{dT00Vjk}\end{eqnarray}
These quantities depend only on $y$, not~$x$;
in other words, such a term is a function of the
distance from a wall or an image of a wall.
In this case the two terms in the energy density are distinctly
different, so it pays to write out the total explicitly:
\begin{eqnarray}
\left\langle T_{00}\right\rangle_{t\mathrm{V}jk} & = &
\frac{(-1)^\eta}{\pi} \left[t^2+(2ja)^2 +(2kb-2y)^2\right]^{-5/2}
\nonumber\\&& {}\times
\left[\left(\beta +{\textstyle\frac12}\right)t^2
+ \left(\beta-{\textstyle\frac14}\right)(2ja)^2
- \left(2\beta+{\textstyle\frac14}\right)(2kb-2y)^2\right].
\label{totT00Vjk}\end{eqnarray}
The most interesting observation here is that
the coefficient of $(2kb-2y)^2$ vanishes
for conformal coupling ($\beta=-\frac18$).
When $j=0$ and $k=0$ or $1$, the energy density for $t=0$
generically has
$O\left(y^{-3}\right)$ divergences at the boundary, but those
divergences are removed in the conformal case;
this is as close as one comes in a rectangle to the well known
fact that the energy density between infinite parallel plates is
\emph{constant} in the case of conformal coupling.
Formulas for horizontal side paths are easily obtained from
(\ref{Vjk})--(\ref{totT00Vjk}) by
interchanging the roles of the two dimensions.
\subsection{The other components} \label{ssec:Tjk}
From (\ref{11calc})--(\ref{d12calc})
and (\ref{Pjk}), (\ref{Cjk}), (\ref{Vjk}),
one finds the spatial
components (pressure and shear stress). We omit the formula for
the 22 component when it is obvious from the 11 formula.
\emph{Periodic paths:}
\begin{eqnarray}
\langle T_{11}({\scriptstyle\frac14})\rangle_{t\mathrm{P}jk} &=&
\frac{(-1)^\eta}{4\pi}
\left[t^2+(2ja)^2 +(2kb)^2\right]^{-5/2}
\nonumber \\
&&{}\times\left[t^2-2(2ja)^2 +(2kb)^2\right] ,
\label{T11Pjk}\end{eqnarray}
\begin{equation}
\langle T_{12}({\scriptstyle\frac14})\rangle_{t\mathrm{P}jk} =
-\,\frac{3(-1)^\eta}{\pi}
\left[t^2+(2ja)^2 +(2kb)^2\right]^{-5/2}
jakb,
\label{T12Pjk}\end{equation}
\begin{equation}
\langle \Delta T_{11}\rangle_{t\mathrm{P}jk} =0 =
\langle\Delta T_{12}\rangle_{t\mathrm{P}jk}\,.
\label{dT11Pjk}\end{equation}
Thus the stress tensor associated with a periodic path does not
depend upon the conformal parameter, nor upon the coordinates. The
individual terms $\langle T_{12}\rangle$ are nonzero, but they add
to zero when summed over either $j$ or~$k$, as reflection symmetry
requires.
\emph{Corner paths:}
\begin{equation}
\langle T_{11}({\scriptstyle\frac14})\rangle_{t\mathrm{C}jk}
= \langle T_{12}({\scriptstyle\frac14})\rangle_{t\mathrm{C}jk}=0 ,
\label{T11Cjk}\end{equation}
\begin{eqnarray}
\langle \Delta T_{11}\rangle_{t\mathrm{C}jk} &=&
-\,\frac{\beta(-1)^\eta}{\pi}
\left[t^2+(2ja-2x)^2 +(2kb-2y)^2\right]^{-5/2} \nonumber \\
&&{}\times
\left[t^2+(2ja-2x)^2 -2 (2kb-2y)^2\right] ,
\label{dT11Cjk} \end{eqnarray}
\begin{eqnarray}
\langle \Delta T_{12}\rangle_{t\mathrm{C}jk} &=&
-\,\frac{12\beta(-1)^\eta}{\pi}
\left[t^2+(2ja-2x)^2 +(2kb-2y)^2\right]^{-5/2}
\nonumber\\ &&\times
(ja-x)(kb-y).
\label{dT12Cjk} \end{eqnarray}
In addition to (and in contrast to) the remarks about the energy
density made below (\ref{T00Cjk}),
we observe:
(1) The spatial components of the corner-path stress tensor vanish
when $\xi=\frac14$ (whereas the energy density vanishes when
$\xi=0$). So far we have no intuitive explanation of this fact.
(2) The spatial components are no longer functions of corner-image
distances alone, though they do have (for $t=0$) an
$O(|\mathbf{r}|^{-3})$ dependence on
corner-image coordinates, as the energy density does.
(3) When $\beta\ne0$ there is a nonzero
$\langle T_{12}\rangle$, which does
not vanish even when summed. However, if we evaluate it on a
boundary (such as $x =\hbox{(integer)} \times a$), where it would
have a clear physical interpretation as a shear force on the wall
of the box, then it does vanish when summed.
\emph{Vertical paths:}
\begin{eqnarray}
\langle T_{11}({\scriptstyle\frac14})\rangle_{t\mathrm{V}jk} &=&
\frac{(-1)^\eta}{4\pi}
\left[t^2+(2ja)^2 +(2kb-2y)^2\right]^{-5/2}
\nonumber\\ &&\times
\left[t^2-2(2ja)^2 +(2kb-2y)^2\right] ,
\label{T11Vjk}\end{eqnarray}
\begin{eqnarray}
\langle \Delta T_{11}\rangle_{t\mathrm{V}jk} &=&
-\,\frac{\beta(-1)^\eta}{\pi}\left[t^2+(2ja)^2 +(2kb-2y)^2\right]^{-5/2}
\nonumber\\ &&{}\times
\left[t^2 +(2ja)^2-2(2kb-2y)^2 \right],
\label{dT11Vjk} \end{eqnarray}
\begin{equation}
\langle T_{22}\rangle_{t\mathrm{V}jk} = 0 =
\langle T_{12}\rangle_{t\mathrm{V}jk}\,.
\label{T22Vjk} \end{equation}
In addition to the remarks surrounding
(\ref{Vjk})--(\ref{totT00Vjk}),
observe that $\langle T_{\nu2}\rangle = 0$ for all~$\nu$.
That is understandable: there is otherwise no way to
satisfy the conservation laws (\ref{conslaw}) for
$\mu=1$ and $\mu=2$
by functions that depend only on $y$ but are not constant.
\emph{Horizontal paths:}
\begin{equation}
\langle T_{11}\rangle_{t\mathrm{H}jk} = 0
= \langle T_{12}\rangle_{t\mathrm{H}jk}\,,
\label{T12Hjk} \end{equation}
\begin{eqnarray}
\langle T_{22}({\scriptstyle\frac14})\rangle_{t\mathrm{H}jk} &=&
\frac{(-1)^\eta}{4\pi}
\left[t^2+(2ja-2x)^2 +(2kb)^2\right]^{-5/2}
\nonumber\\ &&{}\times
\left[t^2+(2ja-2x)^2 -2(2kb)^2\right] ,
\label{T22Hjk}\end{eqnarray}
\begin{eqnarray}
\langle \Delta T_{22}\rangle_{t\mathrm{H}jk} &=&
-\,\frac{\beta(-1)^\eta}{\pi} \left[t^2+(2ja-2x)^2 +(2kb)^2\right]^{-5/2}
\nonumber\\ &&{}\times
\left[t^2 -2(2ja-2x)^2+(2kb)^2 \right].
\label{dT22Hjk} \end{eqnarray}
Observe that $\langle T_{12} \rangle=0$ for \emph{all} side
paths.
For the formulas above one can verify the conservation law
\begin{equation}
-\,\pd{}{x^0} \langle T_{0\mu}\rangle +\pd{}{x_1} \langle
T_{1\mu}\rangle
+ \pd{}{x_2} \langle T_{2\mu}\rangle =0 \quad (\mu=0,1,2).
\label{conslaw}\end{equation}
Here the first term is always $0$, because the quantities do not
depend upon time (not to be confused with the regularization
parameter~$t$).
In the conformal case, $\beta=-\frac18\,$, one also has
tracelessness,
\begin{equation}
-\,\langle T_{00}\rangle + \langle T_{11}\rangle +
\langle T_{22}\rangle=0.
\label{tracelaw}\end{equation}
These identities hold for all $t$, not just $t=0$.
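Tracelessness at $\beta=-\frac18$ can be verified numerically for the vertical-path terms, for which $\langle T_{22}\rangle=0$. The Python sketch below (illustrative; the sample points are arbitrary) codes (\ref{totT00Vjk}), (\ref{T11Vjk}), and (\ref{dT11Vjk}) directly, passing $W=(2ja)^2$ and $Z=(2kb-2y)^2$ as single arguments:

```python
import math

def vertical_components(t, W, Z, beta, sign=1.0):
    # W = (2ja)^2, Z = (2kb - 2y)^2; sign = (-1)^eta.
    # Eqs. (totT00Vjk), (T11Vjk) + (dT11Vjk), and (T22Vjk).
    den = (t * t + W + Z) ** 2.5
    T00 = sign / math.pi * ((beta + 0.5) * t * t + (beta - 0.25) * W
                            - (2 * beta + 0.25) * Z) / den
    T11 = (sign / (4 * math.pi) * (t * t - 2 * W + Z) / den
           - beta * sign / math.pi * (t * t + W - 2 * Z) / den)
    T22 = 0.0
    return T00, T11, T22

beta = -0.125  # conformal coupling in two dimensions
for (t, W, Z) in [(0.5, 4.0, 1.0), (1.0, 0.0, 9.0), (0.2, 16.0, 2.25)]:
    T00, T11, T22 = vertical_components(t, W, Z, beta)
    # Eq. (tracelaw): -T00 + T11 + T22 = 0 at the conformal value.
    assert abs(-T00 + T11 + T22) < 1e-12
```

At $\beta=-\frac18$ both $\langle T_{00}\rangle$ and $\langle T_{11}\rangle$ reduce to the same multiple of $t^2-(2ja)^2$, so the trace cancels identically, for every $t$.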
\section{Energy and force}
\label{sec:global}
\subsection{Introductory remarks}\label{ssec:introrem}
In this section the results of the previous one will be used to
calculate the contribution of each image term to the total energy,
$E$, of the scalar field in the rectangle, and consequently the
force, $-\pd{E}{a}$, on the rectangle's right side.
We are concerned here only with the force from inside the
rectangle; ``piston'' arrangements in which it is possible to
calculate or estimate forces from outside will be considered in
later sections.
Consequently, uncompensated divergent terms arise as the
cutoff parameter $t$ is taken to~$0$, and such terms need to
be identified and systematically isolated for later physical
scrutiny.
The sign of Casimir energies and forces has long been a topic of
great interest and mystery, and one of the motivations of our
research has been to see what light the decomposition into image
terms, for which the sign is easy to understand, can shed on such
questions. The following discussion is easy to present for
arbitrary spatial dimension~$d$.
The cylinder kernel in $\mathbf{R}^d$
is
\begin{equation}
T(t,\mathbf{r},\mathbf{r}')=
C(d)\, t (t^2+|\mathbf{r}-\mathbf{r}'|^2)^{-(d+1)/2},
\qquad
C(d) \equiv \frac {\Gamma(\frac{d+1}2)} {\pi^\frac{d+1}{2}}\,.
\label{cylgendim}\end{equation}
Consequently, in the $d$-dimensional analogues of the constructions
in Sec.~\ref{ssec:paths} all the terms in the energy density
$\left\langle T_{00}({\scriptstyle\frac14})\right\rangle$
will have the form
\begin{equation}
-\,\frac{(-1)^\eta}{2}\, \pd{}t [t(t^2+W)^{-s}]
= (-1)^\eta \,\frac {(s-\frac12)t^2 -\frac12 W}{(t^2+W)^{s+1} }
\,,
\label{airportterm} \end{equation}
where $W$ is some nonnegative function of $\mathbf{r}$.
If $W>0$, the limit as $t\to0$ is
$-\frac12 (-1)^\eta W^{-s}$; for pure Neumann boundary conditions
it is always negative, while for pure Dirichlet conditions it will
be positive whenever the number of nonperiodic (bounce) dimensions
is odd for the path concerned.
If $W=0$, the small-$t$ behavior is
$(-1)^\eta (s-\frac12) t^{-2s} $,
divergent and opposite in sign to the other terms.
Now we consider integrating over a coordinate $u$ when
$ W = V + (mL-u)^2$ with $V\ge 0$:
\[ I \equiv \int_0^L \frac
{(s-\frac12) t^2 -\frac12 [V+(mL-u)^2] } {
[t^2+V+(mL-u)^2]^{s+1} } \, du. \]
By the mean value theorem for integrals,
\[I= L \,\frac{(s-\frac12) t^2 -\frac12 [V+(mL-\zeta)^2] } {
[t^2+V+(mL-\zeta)^2]^{s+1} } \,,\]
where $0<\zeta<L$ and $\zeta$ may depend on $t$.
But for us $s$ is always greater than or equal to~$1$.
So if the integral converges at all, the integrand is bounded and
we are back to the situation of the previous paragraph with
$W(t)>0$ and having a positive lower bound.
In the contrary case, $V=0$ and $m=0$ or $1$,
the situation is more delicate but the sign question is absorbed
into the issue of the physical meaning of surface divergences.
In summary, for all \emph{finite} terms we have good control over
the sign.
(The $\left\langle\Delta T_{00}\right\rangle$ terms are irrelevant to total energy,
as discussed below.)
One understands why Ambj\o{}rn and Wolfram \cite[Table~I]{AW} found
nontrivial
signs only for Dirichlet problems (not periodic or Neumann), and the
particular sign patterns they saw are not surprising.
To understand the significance of various paths, it is useful to
refine the classification of paths in the previous section.
Each closed path is characterized by its image indices, $j$
and~$k$, and by its periodicity type, P, V, H, or C.
\begin{description}
\item[P:] Periodic paths, producing constant terms in the energy
density
\smallskip \begin{description}
\item[PZ:] $j=0=k$ --- the zero-length path
\item[PV:] $j=0$, $k\ne0$ --- vertical periodic paths
\item[PH:] $k=0$, $j\ne0$ --- horizontal periodic paths
\item[PD:] $j\ne0$, $k\ne0$ --- diagonal periodic paths
\end{description} \medskip\goodbreak
\item[V:] Nonperiodic closed paths whose uncompensated ``bounce''
occurs on the top or bottom side of the rectangle, producing energy
densities depending on $y$ only
\smallskip\begin{description}
\item[VP:] $j=0$ --- perpendicular vertical bounce paths
\item[VD:] $j\ne 0$ --- vertical bounce paths with horizontal
periodic drift
\end{description} \medskip\goodbreak
\item[H:] Nonperiodic closed paths whose uncompensated ``bounce''
occurs on the right or left side of the rectangle, producing energy
densities depending on $x$ only
\smallskip\begin{description}
\item[HP:] $k=0$ --- perpendicular horizontal bounce paths
\item[HD:] $k\ne 0$ --- horizontal bounce paths with vertical
periodic drift
\end{description} \medskip
\item[C:] Closed paths that are periodic in neither dimension,
producing energy densities associated with corner images
\end{description}
Path PZ produces, by (\ref{ET}), the ubiquitous
volume (here area) divergence,
\begin{equation}
T_\mathrm{PZ}(t) = \frac{ab}{2\pi t^2}\,, \qquad
\langle E\rangle_{t\mathrm{PZ}} = \frac{ab}{2\pi t^3}\,,
\label{Zterm}\end{equation}
which, being ubiquitous, is always ignored
(except for possible relevance to cosmological dark energy).
All other terms in the energy density are pointwise finite, but
some of them have nonintegrable divergences at the boundary.
The path classes involved are VP and HP, which produce the
well known surface (here perimeter) divergence in the total
energy, and C, which produces an energy density that
seemingly diverges at the corners but nevertheless makes no
contribution to the ``renormalized'' total energy,
as we shall see.
\subsection{Energy calculations}
Let us first dispose of
$\int\!\!\int\left\langle\Delta T_{00}\right\rangle\, dx\,dy$,
which is expected to be zero because $\Delta T_{00}$ is the
divergence of a vector field, proportional to $\phi\nabla\phi$,
that vanishes on every Dirichlet or Neumann boundary.
From (\ref{dT00Pjk}) and (\ref{T00Cjk}) we see that the quantity
is indeed zero for periodic paths, while for corner paths it is
proportional to the $T_{00}({\scriptstyle\frac14})$ term (which also will turn out to be
zero).
The situation for side paths is more subtle.
The integral of $\left\langle\Delta T_{00}\right\rangle_{t\mathrm{V}jk}$
from (\ref{dT00Vjk}) is not zero, which is not surprising since
the field from a single image source does not satisfy the boundary
conditions.
However, because (\ref{dT00Vjk}) is a total derivative,
a calculation almost identical to that in
(\ref{Delcalc1})--(\ref{Delcalc2}) below
shows that the sum over $k$ does
telescope to~$0$, at least when the top and bottom boundaries are
of the same type (both Dirichlet or both Neumann).
The total energy contributed by a periodic path is trivially
obtained by multiplying (\ref{T00Pjk}) by the area, $ab$.
The sum of all such terms splits into PV, PH, and PD parts as
\begin{eqnarray} \langle
E\rangle_{t\mathrm{P}\setminus\mathrm{Z}}
&=&-\,\frac{ab}{2\pi} \sum_{k=1}^\infty (-1)^\eta \frac{(2kb)^2 -2t^2}
{[t^2 +(2kb)^2]^{5/2}}
-\frac{ab}{2\pi} \sum_{j=1}^\infty (-1)^\eta \frac{(2ja)^2 -2t^2}
{[t^2 +(2ja)^2]^{5/2}}
\nonumber\\
&&{} - \frac{ab}{\pi} \sum_{j=1}^\infty\sum_{k=1}^\infty (-1)^\eta
\frac{(2ja)^2 +(2kb)^2 -2t^2}{[t^2 +(2ja)^2 +(2kb)^2]^{5/2}}\,.
\label{regperenergy}\end{eqnarray}
If all four sides are of the same type, (\ref{regperenergy})
simplifies in the limit $t\to0$ to
\begin{equation}
\langle E\rangle_{t\mathrm{P}\setminus\mathrm{Z}}=
-\,\frac{\zeta(3)a}{16\pi b^2} - \frac{\zeta(3)b}{16\pi a^2}
-\,\frac{ab}{8\pi} \sum_{j=1}^\infty\sum_{k=1}^\infty
\left(a^2j^2 + b^2k^2\right)^{-3/2}
\label{perenergy}\end{equation}
(a well known result --- e.g., \cite{EORBZ}).
In the special case of a square of side~$a$,
numerical evaluation of (\ref{perenergy}) gives $-0.089859/a$
(identifiable with the vacuum energy of a torus of dimension $2a$
as recorded in \cite{AW, EORBZ}).
Because of the need to sum over a two-dimensional lattice,
the numerical convergence is rather slow, even when repetitions of
primitive orbits are handled all at once ---
in contrast with the situation for parallel plates, where the sum
over paths has been found to be very efficient and increasingly so
in higher dimensions \cite{JS,LF}.
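The quoted value is straightforward to reproduce. The Python sketch below (ours, purely illustrative) truncates the lattice sum in (\ref{perenergy}) at a square cutoff $N$; the neglected tail is approximately $2/N$, consistent with the slow convergence just noted:

```python
import math

# Riemann zeta(3) by direct summation; the neglected tail is below 1e-9.
zeta3 = sum(1.0 / (n * n * n) for n in range(1, 20001))

# Truncated double lattice sum in Eq. (perenergy) for the square a = b = 1.
# The tail missed by the square cutoff N is approximately 2/N.
N = 800
S = sum((j * j + k * k) ** -1.5
        for j in range(1, N + 1) for k in range(1, N + 1))

E_square = -zeta3 / (8 * math.pi) - S / (8 * math.pi)
assert abs(E_square + 0.089859) < 1e-3
```

With $N=800$ the result agrees with $-0.089859$ to about $10^{-4}$, the residual being the uncompensated lattice tail.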
To calculate the other $T_{00}({\scriptstyle\frac14})$ terms it is convenient to return
to the cylinder kernel $T$
and integrate it over the rectangle before taking the final $t$
derivative in~(\ref{Adef}).
Consider first the paths of subclass VP.
According to (\ref{Vjk}) with $\mathbf{r}'=\mathbf{r}$,
for any path of class~V we have
\begin{equation}
T_{\mathrm{V}jk}(t,\mathbf{r},\mathbf{r}) =
+\,\frac{(-1)^\eta}{2\pi} \,t
\left[t^2 + (2ja)^2 + (2kb-2y)^2\right]^{-3/2}.
\label{Vcyl}\end{equation}
Setting $j=0$ we arrive at
\[ \int_0^a dx \int_0^b dy
\,T_{\mathrm{V}0k}(t,\mathbf{r},\mathbf{r}) =
\frac{(-1)^\eta}{2\pi} \,at \int_0^b
\left[t^2 + (2kb-2y)^2\right]^{-3/2}\,dy. \]
The terms with $k=0$ and $k=1$ are divergent (when $t\to0$)
at the bottom and top boundaries, respectively.
The other cases are finite, but need to be added to the divergent
ones to build up a ``clean'' divergence, proportional to a power
of~$t$, that can be discarded in a systematic renormalization
of the mass of the boundary plate.
(Formally, this quantity is the total energy of an isolated
surface in otherwise empty space \cite{A}.)
Again we consider for simplicity only the case where both
horizontal boundaries are the same type, so that $(-1)^\eta$ is
independent of~$k$.
As a reminder that this assumption is in force, we shall write
in resulting formulas
\begin{equation}
(-1)^\eta = \mp \equiv \cases{
-1, &Dirichlet, \cr
+1, &Neumann. \cr}
\label{DNsigns} \end{equation}
The terms combine easily (telescope):
\[ \sum_{k=-\infty}^\infty
\int_0^a dx \int_0^b dy\, T_{\mathrm{V}0k}(t,\mathbf{r},\mathbf{r}) =
\mp \frac{at}{\pi}
\int_0^\infty
\left(t^2 + 4y^2\right)^{-3/2} \,dy
= \mp\, \frac a{2\pi t}\,. \]
Obviously the formula for class HP is the same with $a$
replaced by $b$.
Therefore, the total contribution from VP and HP to the
trace of the cylinder kernel can be written as
\begin{equation}
T_\bot(t)= \mp\, \frac P{4\pi t}\,,
\label{perimcyl}\end{equation}
where $P$ is the perimeter of the rectangle.
It corresponds to a divergent surface energy
\begin{equation}
\langle E\rangle_{t\bot} = \mp\, \frac P{8\pi t^2}\,.
\label{perimen}\end{equation}
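The elementary integral behind (\ref{perimcyl}), $\int_0^\infty (t^2+4y^2)^{-3/2}\,dy = 1/(2t^2)$, which multiplied by $\mp at/\pi$ gives $\mp a/(2\pi t)$, can be spot-checked numerically (an illustrative Python sketch):

```python
import math

def perp_integral(t, n=20000):
    """Midpoint rule for int_0^infty (t^2 + 4 y^2)^(-3/2) dy,
    mapped to a finite range by the substitution y = (t/2) tan(s)."""
    h = (math.pi / 2) / n
    total = 0.0
    for i in range(n):
        s = (i + 0.5) * h
        y = 0.5 * t * math.tan(s)
        dy_ds = 0.5 * t / math.cos(s) ** 2
        total += (t * t + 4 * y * y) ** -1.5 * dy_ds * h
    return total

for t in (0.5, 1.0, 3.0):
    assert abs(perp_integral(t) - 1.0 / (2 * t * t)) < 1e-7
```

Under the substitution the integrand becomes $(2t^2)^{-1}\cos s$ on $[0,\pi/2]$, so the quadrature converges rapidly.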
For paths of class VD we obtain
\[ \sum_{k=-\infty}^\infty
\int_0^a dx \int_0^b dy\, T_{\mathrm{V}jk}(t,\mathbf{r},\mathbf{r})
= \mp \frac{at}{2\pi} \,\frac1{t^2+(2ja)^2}
= \mp \frac{t}{8\pi aj^2} + O\left( \frac{t^3}{j^4} \right). \]
The sum over $j$ gives the well known $\zeta(2)$, so
\[ \sum_{j\ne0} \sum_{k=-\infty}^\infty
\int_0^a dx \int_0^b dy\, T_{\mathrm{V}jk}(t,\mathbf{r},\mathbf{r}) =
\mp\, \frac{\pi t}{24a} + O\left( t^3\right). \]
The corresponding contribution to the energy is
$\pm\, \pi/(48a)$;
it may be thought of as a Casimir correction to the surface energy
of the sides at $y=0$ and $y=b$
caused by the presence of the perpendicular
sides with separation~$a$.
Thus the total energy from VD and HD paths is (at $t=0$)
\begin{equation} \langle E\rangle_{t\mathrm{D}} =
\pm\,\frac{\pi}{48}\left(\frac 1a + \frac 1b \right) .
\label{edgeenergy}\end{equation}
It is comparable in magnitude to the term
from periodic paths, (\ref{perenergy}).
In fact, for the square it is larger, since $\pi/24 \approx 0.13$;
that is why the ``renormalized'' (Lukosz) energy
of the Dirichlet square comes out positive.
(The situation for the \emph{force} is different, however, as we
shall see.)
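The small-$t$ limit just derived can be checked by brute force. This Python sketch (illustrative; the overall sign $\mp$ is omitted) sums the telescoped class-VD terms and compares with $\pi t/(24a)$:

```python
import math

def T_VD(t, a=1.0, jmax=200000):
    # Sum over j != 0 of the telescoped class-VD contribution
    # (a t / 2 pi) / (t^2 + (2 j a)^2); the factor 2 covers j and -j.
    # The overall sign (-1)^eta is omitted here.
    return sum(2 * a * t / (2 * math.pi) / (t * t + (2 * j * a) ** 2)
               for j in range(1, jmax + 1))

# Small-t limit: pi t / (24 a), from zeta(2) = pi^2 / 6.
t, a = 1e-3, 1.0
assert abs(T_VD(t, a) - math.pi * t / (24 * a)) < 1e-7
```

The $\zeta(2)$ factor is visible in the code as the sum over $1/j^2$ once $t^2$ is negligible against $(2ja)^2$.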
Finally, for a path of class C we have from (\ref{Cjk})
\begin{equation}
T_{\mathrm{C}jk}(t,\mathbf{r},\mathbf{r}) =
\frac{(-1)^\eta}{2\pi} \,t
\left[t^2 + (2ja-2x)^2 + (2kb-2y)^2\right]^{-3/2}.
\label{Ccyl}\end{equation}
Terms with $\{j,k\}\subset \{0,1\}$
yield divergent integrals in the energy
(\negthinspace${}\sim \int r^{-3}r\,dr$)
if $t$ is set equal to~$0$,
but if one integrates with $t$ positive, the result is quite
different.
We assume that all sides are of the same type, so that
$(-1)^\eta =+1$.
Then the contribution to the cylinder trace from the corner
paths telescopes to
\begin{eqnarray}
\sum_{j=-\infty}^\infty \sum_{k=-\infty}^\infty
\int_0^a dx \int_0^b dy\,
T_{\mathrm{C}jk}(t,\mathbf{r},\mathbf{r})
&=& \frac{2t}{\pi} \int_0^\infty dx \int_0^\infty dy\,
(t^2+4x^2+4y^2)^{-3/2} \nonumber \\
&=&\frac14\,.
\label{cornercyl} \end{eqnarray}
Being independent of $t$, this term
makes no contribution at all to the energy via~(\ref{00calc}).
(In a related independent calculation by Zaheer et al.~\cite{Z}
the corner paths were not even considered, because the rectangle
was obtained as a limiting case of a configuration where they did
not exist.)
In the next subsection we shall review why this result is exactly
what should have been expected.
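One can also confirm numerically that (\ref{cornercyl}) is independent of $t$ (a quick check; the sample values of $t$ are arbitrary choices of ours):

```python
import numpy as np
from scipy.integrate import dblquad

# The corner-path contribution to the cylinder trace,
#   (2t/pi) * int_0^inf dx int_0^inf dy (t^2 + 4x^2 + 4y^2)^(-3/2),
# equals 1/4 for every t > 0: rescaling x -> t*u/2, y -> t*v/2 removes t.
def corner_trace(t):
    val, _err = dblquad(
        lambda y, x: (t**2 + 4.0 * x**2 + 4.0 * y**2) ** (-1.5),
        0.0, np.inf, 0.0, np.inf)
    return 2.0 * t / np.pi * val

results = [corner_trace(t) for t in (0.1, 1.0, 10.0)]
print(results)   # each value close to 1/4
```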
\subsection{Relation to heat kernel asymptotics}
\label{ssec:asymp}
Let $K(t,\mathbf{r},\mathbf{r}')$ be the heat kernel corresponding
to the system under study
($\langle \mathbf{r}|e^{-tH}|\mathbf{r}'\rangle$
in quantum-mechanical notation, as contrasted with (\ref{cyl})).
Let $K(t)$ be its trace (cf.\ (\ref{ET})).
It is well known \cite{Kac,Cl,Gil,Kir}
that as $t\to0$
\begin{equation}
K(t) = \frac A{4\pi t} \mp \frac P{8\sqrt{\pi t}} +\frac14
+O(t^\infty),
\label{heatexp} \end{equation}
where $A=ab$ and $P = 2(a+b)$ are the area and perimeter of the
rectangle,
and $\mp$ is as in (\ref{DNsigns}).
(Here we state (\ref{heatexp}) only for the cases where
all four sides are of the same type.
The other cases
--- in which, for instance, the second term is not
proportional to $P$,
but the qualitative conclusions of this subsection remain true ---
are discussed in~\cite{bookrev}.)
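For the Dirichlet square this expansion is easy to test numerically, since the eigenvalues are $\pi^2(m^2+n^2)$ (a sketch; the unit square and $t=0.01$ are illustrative choices of ours):

```python
import numpy as np

# Heat-kernel trace for the unit Dirichlet square versus the expansion
#   K(t) ~ A/(4 pi t) - P/(8 sqrt(pi t)) + 1/4 + O(t^infinity).
a = b = 1.0
t = 0.01
m = np.arange(1, 201)
lam = np.pi**2 * (m[:, None]**2 + m[None, :]**2)  # Dirichlet eigenvalues
K_exact = np.exp(-t * lam).sum()
A, P = a * b, 2.0 * (a + b)
K_asym = A / (4.0 * np.pi * t) - P / (8.0 * np.sqrt(np.pi * t)) + 0.25
print(K_exact, K_asym)   # the remainder is exponentially small in 1/t
```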
It follows \cite{systemat,BGH} that the trace of the
\emph{cylinder} kernel must have the expansion
\begin{equation}
T(t) = \frac A{2\pi t^2} \mp \frac P{4\pi t} + \frac14 + O(t),
\label{cylexp}\end{equation}
and hence by (\ref{ET}) the regularized Casimir energy is
\begin{equation}
E(t)\equiv -\,\frac12 \,\pd Tt
= \frac A{2\pi t^3} \mp \frac P{8\pi t^2} +
\frac 0t + E_\mathrm{ren} +O(t),
\label{enexp}\end{equation}
where $E_\mathrm{ren}$ is a constant traditionally identified as the
``renormalized'' Casimir energy.
($E_\mathrm{ren}$ is not determined by the
heat kernel expansion (\ref{heatexp});
it is hidden in the $O(t^\infty)$ term there.)
Our calculations above have confirmed (\ref{cylexp}) and
(\ref{enexp})
and determined $E_\mathrm{ren}\,$.
The terms in (\ref{cylexp}) are (\ref{Zterm}), (\ref{perimcyl}),
and (\ref{cornercyl}).
$E_\mathrm{ren}$ is the sum of (\ref{perenergy}) and (\ref{edgeenergy})
(in agreement with previous authors, including
\cite{AW, Cav, Z}).
With regard to the inevitability of the disappearance of the
corner energy (without an explicit renormalization of any kind!),
we stress \cite{systemat,BGH} that the coefficient of $1/t$ in
(\ref{enexp}) \emph{must} be $0$.
(For dimensional reasons, that is where a corner term would need
to appear, along with contributions linear in boundary curvature or
in a Robin constant.)
A $t^{-1}$ term in $E(t)$ would have to come from a $\,\ln t$
term in $T(t)$, which in turn would be associated with a
logarithmic term in $K(t)$, and such terms do not exist.
On the other hand, there is no general reason why $E(t)$ could not
contain a $\,\ln t$ term (and a resulting scale ambiguity in the
``renormalization'').
That would correspond to a $\,t\ln t$ in $T(t)$
and hence a $t^{1/2}$ in $K(t)$ --- which can actually occur
(for example in a disk), but
does not in the model under study here.
\subsection{Force and pressure calculations}
\label{ssec:force}
We now investigate the force on the side at $x=a$ from the field
vacuum inside the rectangle, in the case where all sides are
Dirichlet.
From the previous subsections, the naively renormalized
energy yields the force
\begin{equation}
-\, \pd{E_\mathrm{ren}}a =
+\, \frac{\zeta(3)}{16\pi b^2}
- \frac{\zeta(3)b}{8\pi a^3}
+ \frac b{8\pi} \sum_{j,k=1}^\infty
\frac{k^2b^2 - 2j^2a^2}{(j^2a^2 + k^2b^2)^{5/2} }
+\frac{\pi}{48a^2}
+0,
\label{renforce}\end{equation}
where the terms are the contributions of path classes
PV, PH, PD, VD, and HD, respectively.
It is important to remember that positive energy is not always the
same thing as positive (repulsive) force, although that is true in
many of the classic Casimir-force calculations in which the
absolute value of the energy, being a negative power,
decreases monotonically to $0$ as the relevant
geometrical parameter increases.
In (\ref{renforce}) the PV force is positive although the PV
energy is negative; the individual terms in the PD force can have
either sign, although their energies are all negative; the HD force
is zero because the positive HD energy is independent of~$a$.
Piston analyses center on the cancellation of the positive PV
force by an external force, since the sum of the other three
terms can be shown to be negative (see Sec.~\ref{sec:piston}).
As a first step toward less naive renormalization,
one can keep
the cutoff $t$ finite and retain the cutoff-dependent terms in
(\ref{enexp}).
Then the VP term produces a force
\begin{equation} F_{t\mathrm{VP}} =
+\,\frac1{4\pi t^2}\,,
\label{badforce}\end{equation}
and the terms in (\ref{renforce}) are modified in ways that can
become significant when $a$ or $b$ is not large compared to~$t$.
Another way to calculate the force is to integrate
$\langle T_{11}(a,y)\rangle$ over the side of the box.
We take a moment to verify that the methods are consistent, using
the appropriate formulas from Sec.~\ref{ssec:Tjk}.
\emph{Periodic paths:}
Multiplying (\ref{T11Pjk}) by $b$ to perform the
trivial integration,
we get (after $t\to0$)
\[ F_{\mathrm{P}jk} =\frac b{4\pi}\,
\frac{-2(2ja)^2 + (2kb)^2}{\left [(2ja)^2
+(2kb)^2\right]^{5/2} }\,. \]
Here $j$ and $k$ are not both $0\,$; the terms with one, the
other, or neither $0$ add up to the first three terms in
(\ref{renforce}), as expected.
\emph{Corner paths:} From the energy calculation we know that
these terms should be zero.
Also, from (\ref{T11Cjk}) we have $T_{11}=0$ unless
$\beta \equiv \xi-\frac14$ is nonzero.
For the $\beta$ term (\ref{dT11Cjk}), note that
the integrand has the form of a total derivative,
\begin{equation}
\frac {K -2(2bk-2y)^2 }{ [K +(2bk-2y)^2]^{5/2}}
= \od{}y\, \frac{y-bk}{ [K +(2bk-2y)^2]^{3/2}} \,.
\label{Delcalc1}\end{equation}
Setting $x=a$ and integrating over $y$, one gets
\begin{eqnarray} F_{t\mathrm{C}jk} &=&
-\,{\beta\over \pi}\, \left\{ \frac{(1-k)b}{
[t^2+4(j-1)^2 a^2 + 4b^2(k-1)^2 ]^{3/2}} \right. \nonumber \\
&&\qquad\left. {}
-\frac{(-k)b}{[t^2+4(j-1)^2 a^2 + 4b^2k^2 ]^{3/2}} \right\}.
\label{Delcalc2} \end{eqnarray}
The sum of (\ref{Delcalc2}) over $k$ from $-\infty$ to $\infty$
telescopes to 0.
\emph{Side paths:}
The class of paths that bounce off the side in question (along
with an even number of additional reflections) have $T_{11}$
identically zero
(\ref{T12Hjk}).
This is so even though the shortest such paths
(those with $k=0$, $j=1$) give rise to a divergent energy in the
region marked $\beta$ in Fig.~\ref{fig:alphabeta}.
This matches the $0$ in~(\ref{renforce}).
\begin{figure}
\centerline{\beginpicture
\setcoordinatesystem units <2truecm,2truecm>
\putrule from 0 0 to 2 0
\putrule from 0 0 to 0 2
\putrule from 0 2 to 2 2
\putrule from 2 0.1 to 2 1.9
\setdashes
\putrule from 0 1.8 to 2 1.8
\putrule from 1.8 0 to 1.8 2
\put{$\alpha$} at 1 1.9
\put{$\beta$} at 1.9 1
\endpicture}
\caption {Two regions where divergent surface energy appears.}
\label{fig:alphabeta}\end{figure}
Much more interesting are the paths that bounce off the horizontal
walls.
From (\ref{T11Vjk})--(\ref{dT11Vjk}),
\begin{eqnarray*}
\langle T_{11}\rangle_{t\mathrm{V}jk} &=&
-\,\frac1{4\pi} [t^2 +(2ja)^2 +(2kb-2y)^2]^{- 5/2}
[t^2 -2(2ja)^2 +(2kb-2y)^2] \\
&&{}+\frac{\beta}{\pi} [t^2 +(2ja)^2 +(2kb-2y)^2]^{- 5/2}
[t^2+(2ja)^2 -2(2kb-2y)^2] .
\end{eqnarray*}
For fixed $j,k$ the $\beta$ term is just like the corresponding
corner term with $j-1$ replaced by $j$ and the sign changed.
Therefore, these two classes of $\beta$ terms would cancel
when summed over~$j$,
even if they did not vanish when summed over $k$ as we just saw.
It remains to integrate the other
part of $\langle T_{11}\rangle_{t\mathrm{V}jk}$ over $y$ from 0
to $b$.
The terms with $j=0$
lead to a clone of the calculation following (\ref{Vcyl}).
In particular, those terms for which also
$k=0$ or~$1$ are divergent when $t\to0$.
This divergent pressure clearly corresponds,
in the case $k=1$, to the
divergent energy in region $\alpha$ associated with
paths VP that bounce perpendicularly off the top boundary.
(From $k=0$ comes a corresponding effect at the bottom boundary,
not indicated in Fig.~\ref{fig:alphabeta}.)
That energy is proportional to the
length of the box and hence gives a force (\ref{badforce})
upon differentiation.
Finally, one wants to integrate the terms with $j\ne0$ and see that
they reproduce the remaining (VD) term in (\ref{renforce}).
The integral of each term is, at $t=0$,
\[ -\,\frac1{4\pi}\,
\frac{\frac{b[32a^4j^4 +16a^2b^2j^2(k-1)^2](k-1)}
{[4a^2j^2+4b^2(k-1)^2]^{3/2}}
-\frac{bk[32a^4j^4 +16a^2b^2j^2k^2]}
{[4a^2j^2+4b^2k^2]^{3/2}}
}{ 16a^4 j^4 }\,. \]
At first glance it may seem that this expression sums over $k$
to zero, by the same telescoping argument used elsewhere.
However, unlike those previous sums,
in this case the individual terms do not approach 0 as
$|k|\to\infty$; rather, each of the two telescoped pieces approaches
$1/(32\pi a^2j^2)$ in magnitude, so boundary contributions survive.
Taking account of both signs of $k$ and~$j$,
one gets the force to be
\[F_\mathrm{VD}=
4 \sum_{j=1}^\infty\frac1{32\pi a^2j^2}
= \frac{\zeta(2)}{8\pi a^2}
= \frac{\pi}{48a^2} \,,\]
as needed.
Although this exercise may appear redundant, it has underscored two
important points.
First, doing the calculation in terms of pressure instead of
energy by no means eliminates the problem of divergences.
Second, the divergent pressure on a given wall is not associated
with the divergent energy adjacent to the wall
(in region $\beta$ in Fig.~\ref{fig:alphabeta}).
Rather, it goes with the divergent energy adjacent to the
intersecting perpendicular walls
(such as in region $\alpha$).
\section{The Casimir piston} \label{sec:piston}
The physical significance of the forces calculated in \cite{Lu}, in
our Sec.~\ref{ssec:force}, and in much intervening literature has
been called into question.
For one thing, unlike the celebrated sphere calculations of Boyer
\cite{Boy} and others, these calculations are unable to take into
account any forces coming from the region outside the box.
In addition,
within the framework of ultraviolet-cutoff
regularization the
uncompensated divergent energy proportional to the surface area
cannot be easily dismissed in deducing the force conjugate
to a dimension whose variation changes the surface area.
In our case the offending energy is that localized in the region
$\alpha$ in Fig.~\ref{fig:alphabeta},
which is proportional to the length of the box,
and the corresponding pressure was also observed in
Sec.~\ref{ssec:force} in the direct calculation of
$\langle T_{11}\rangle$ on the movable side of the box.
Cavalcanti \cite{Cav} proposed to avoid both problems by
considering a different situation, the piston
(Fig.~\ref{fig:piston}).
\begin{figure}
\centerline{\beginpicture
\setcoordinatesystem units <2truecm,2truecm>
\putrule from 0 0 to 4 0
\putrule from 0 1 to 4 1
\putrule from 0 0 to 0 1
\putrule from 1 0.05 to 1 0.95
\putrule from 4 0 to 4 1
\put{$b$} [r] <-2pt,0pt> at 0 0.5
\put{$a$} [t] <0pt,-2pt> at 0.5 0
\put{$L-a$} [t] <0pt,-2pt> at 2.5 0
\endpicture}
\caption{A rectangular piston in
dimension~2. Its ``shaft'' has length $L-a$, effectively infinite.
The word ``piston'' refers both to the movable plate at $x=a$
and to the model as a whole.}
\label{fig:piston} \end{figure}
The interior partition is free to move horizontally, and one is to
calculate the force upon it.
$L$ is to be taken very large compared to $a$ and~$b$.
The argument now is that the exterior of the apparatus is
unchanging and hence irrelevant to the force, whereas both interior
chambers can be treated exactly.
Furthermore, the total of the interior side lengths is independent
of the piston position, $a$, so that the surface divergences cancel
in the calculation of the force.
Generalizations and variations of this model have been extensively
studied
\cite{Sv,HJ1,HJ2,Bar-pist,Mar,Ed-pist,ZL,SchM,Mar-pist,
EdMd,EdM,Cheng,Schaflask}.
The piston model is not without its own physical problems, because
interactions between the piston plate and the horizontal sides have
been ignored.
In a realistic experiment, ordinary Casimir attraction would make
the plate unstable against tipping and striking the tube wall
edge-on, after which it would collapse against one
of the walls of the tube.
It may be argued that this
objection is irrelevant to the question of principle that the
piston model is designed to address;
the only degree of freedom one is varying is~$a$,
so it is legitimate to imagine that the plate is constrained
from moving
in any other degree of freedom.
There still exists a Casimir force between the plate and the
nearest wall, though it is somehow prevented from causing motion.
However, one can argue by symmetry that
this force has no significant horizontal component, so
that the piston theorists are justified in ignoring it.
Nevertheless, in a real apparatus there would surely be some
friction with the walls, so the feasibility of an experiment to
verify the piston analysis is questionable.
Putting these doubts aside, we summarize and recast the Cavalcanti
analysis in our framework of closed paths.
The finite part of the force on the piston from the chamber on the
left has been calculated in~(\ref{renforce}).
The force coming from the shaft on the right can be found from the
same formula, with the sign reversed, $a$ replaced by $L-a$, and
$L$ taken to infinity; the only term that survives is the PV term,
\begin{equation}
F_L = -\,{\zeta(3)\over 16\pi b^2}\,.
\label{shaftforce}\end{equation}
It exactly cancels the corresponding term in~(\ref{renforce}),
leaving PH, PD, and VD terms:
\begin{equation}
F_\mathrm{pist} =
-\, \frac{\zeta(3)b}{8\pi a^3}
+ \frac b{8\pi} \sum_{j,k=1}^\infty
\frac{k^2b^2 - 2j^2a^2}{(j^2a^2 + k^2b^2)^{5/2} }
+\frac{\pi}{48a^2}\,.
\label{pistforce}\end{equation}
Here there is no ``naive renormalization''
as in (\ref{renforce}),
since the divergences (in particular, the VP terms) would
explicitly cancel if the calculation were done for the complete
piston before removing the cutoff.
Cavalcanti \cite{Cav} rendered (\ref{pistforce})
more illuminating by
subjecting it to further analysis.
If one refrains from the $\zeta(3)$ simplification, the
complete sum over periodic paths in the ($t=0$) energy,
(\ref{perenergy}), is
\begin{equation}
\langle E\rangle_{t\mathrm{P}\setminus\mathrm{Z}} =
-\frac{ab}{32\pi} \sum_{j,k=-\infty \atop (j,k)\ne(0,0)}^\infty
(j^2a^2 + k^2b^2)^{-3/2}.
\label{penergy}\end{equation}
From this one can derive two complementary formulas, useful in the
respective regimes $a\gg b$ and $a\ll b$.
(Unfortunately, none of the three formulas for $F_\mathrm{pist}$
is completely transparent for $a\approx b$.)
In the first case,
for $j=0$ one evaluates the $k$ sum to the term~PV
(the first term in~(\ref{perenergy})), as
before, but
for fixed $j\ne0$, one applies a known relation between the $k$~sum
(which is an Epstein zeta function) and a series of modified
Bessel functions.
(This theorem traces back ultimately to the Poisson summation
formula;
see the appendices of \cite{AW} and \cite{Kir}.)
Thus the PH and PD terms together are replaced by the energy
terms
$$-\,\frac{\pi}{48a} - \frac1{2b} \sum_{j,k=1}^\infty \frac
kj\,K_1\left(2\pi jk\,\frac ab \right).$$
(The individual terms in the sum cannot be associated with
individual periodic orbits, nor with individual eigenvalues.)
Remarkably, the first term of this expression precisely cancels
the VD term, so that PH, PD, and VD all together reduce to the
energy
$$- \,\frac1{2b} \sum_{j,k=1}^\infty \frac kj\,
K_1\left(2\pi jk\,\frac ab \right).$$
Since HD does not contribute to the force and the PV force
is still
cancelled by the force from the shaft, the force on the piston is
\cite[(11)]{Cav}
\begin{equation}
F_\mathrm{pist} =\frac{\pi}{b^2} \sum_{j,k=1}^\infty k^2
K_1'\left(2\pi jk\,\frac ab \right).
\label{fbiga}\end{equation}
It follows that the piston force (a) is always negative and
(b) vanishes exponentially fast for $a\gg b$, in contrast to the
usual power-law decay of the Casimir force.
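Both statements can be checked numerically against the direct form (\ref{pistforce}) (a sketch; the square chamber $a=b=1$ and the truncation sizes are illustrative choices of ours, and the double sum in (\ref{pistforce}) converges only like $1/N$):

```python
import numpy as np
from scipy.special import zeta, kvp

a = b = 1.0

# Direct form (pistforce): PH term + PD double sum + VD term.
N = 1000
j = np.arange(1, N + 1)[:, None]
k = np.arange(1, N + 1)[None, :]
pd_sum = np.sum((k**2 * b**2 - 2.0 * j**2 * a**2)
                / (j**2 * a**2 + k**2 * b**2) ** 2.5)
F_direct = (-zeta(3) * b / (8.0 * np.pi * a**3)
            + b / (8.0 * np.pi) * pd_sum
            + np.pi / (48.0 * a**2))

# Bessel form (fbiga); K_1' is kvp(1, .), and this series converges rapidly.
jj = np.arange(1, 21)[:, None]
kk = np.arange(1, 21)[None, :]
F_bessel = np.pi / b**2 * np.sum(kk**2 * kvp(1, 2.0 * np.pi * jj * kk * a / b))
print(F_direct, F_bessel)   # both negative; they agree to the truncation error
```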
Alternatively, one can apply the Epstein-to-Bessel
transformation to the $j$ sum for fixed~$k$.
That is, PV and PD get replaced by
$$-\,\frac{\pi}{48b} - \frac1{2a} \sum_{j,k=1}^\infty \frac
kj\,K_1\left(2\pi jk\,\frac ba \right).$$
The first term cancels HD (which doesn't contribute to the force
anyway);
the VD term remains (as does PH); and the PV term has been
absorbed, so that
the force from outside the piston is now uncompensated.
Thus the total force on the piston is \cite[(14)]{Cav}
\begin{equation}
F_\mathrm{pist}=
-\, {\zeta(3)b\over 8\pi a^3} + \frac{\pi}{48a^2}
-\frac{\zeta(3)}{16\pi b^2} +\frac{\pi b}{a^3}
\sum_{j,k=1}^\infty
k^2 K_0\left( 2\pi jk\, \frac ba\right).
\label{fsmalla}\end{equation}
As Cavalcanti explains, this form is nicely adapted to
understanding the regime $a\ll b$, where the standard Casimir
result is, of course, recovered in the limit.
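Since (\ref{fbiga}) and (\ref{fsmalla}) are exact rearrangements of the same force, they can be compared numerically at any aspect ratio (a sketch; $a=1/2$, $b=1$ is an illustrative choice of ours):

```python
import numpy as np
from scipy.special import zeta, kv, kvp

a, b = 0.5, 1.0
j = np.arange(1, 41)[:, None]
k = np.arange(1, 41)[None, :]

# Form (fbiga), adapted to a >> b but convergent for any ratio.
F_biga = np.pi / b**2 * np.sum(k**2 * kvp(1, 2.0 * np.pi * j * k * a / b))

# Form (fsmalla), adapted to a << b.
F_smalla = (-zeta(3) * b / (8.0 * np.pi * a**3)
            + np.pi / (48.0 * a**2)
            - zeta(3) / (16.0 * np.pi * b**2)
            + np.pi * b / a**3
              * np.sum(k**2 * kv(0, 2.0 * np.pi * j * k * b / a)))
print(F_biga, F_smalla)   # equal, and negative (attractive)
```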
Energy is fungible, so one must beware of attributing too
fundamental a connection between particular classes of paths and
the observable net forces.
The striking thing is that the calculations reveal several exact
cancellations, not all of which can be implemented at the same
time.
\section{The Casimir pistol}
\label{sec:pistol-d}
The Casimir piston has proved to be a highly illuminating model,
but it does not settle the issue of the true physical significance
of the purely internal vacuum pressure on the side of a rectangular
cavity.
In the piston both the divergent (VP)
and the positive finite (PV)
internal pressure are exactly
balanced by the precisely analogous pressures in the long shaft on
the other side of the movable plate. This observation does not
tell us what would happen if the external shaft were not there.
The problem of real interest is
a rectangular box with one side free to move, as indicated
schematically in Fig.~\ref{fig:alphabeta} and more realistically
in Fig.~\ref{fig:looselid}.
The main question is whether the force on the
movable side is attractive or repulsive. This is a question
about disjoint rigid bodies, so it is a
meaningful physical problem, just like the piston.
(The 3D electromagnetic case
should be qualitatively similar to the 2D scalar problem.)
Another urgent question is what happens to the VP divergence now
that there are no VP paths in the shaft to compensate it;
the previous paradox (Sec.~\ref{sec:global})
of an apparent infinite pressure has reappeared.
\begin{figure}
\centerline{\beginpicture
\setcoordinatesystem units <2truecm,2truecm>
\putrule from 0 0 to 2 0
\putrule from 0 0 to 0 2
\putrule from 0 2 to 2 2
\putrule from 1.95 0.2 to 1.95 1.8
\putrule from 0.1 0.1 to 2 0.1
\putrule from 0.1 0.1 to 0.1 1.9
\putrule from 0.1 1.9 to 2 1.9
\putrule from 2.05 0.2 to 2.05 1.8
\putrule from 1.95 0.2 to 2.05 0.2
\putrule from 1.95 1.8 to 2.05 1.8
\putrule from 2 0 to 2 0.1
\putrule from 2 2 to 2 1.9
\endpicture}
\caption{A rectangular box with one side (``lid'') free to move.
The box has walls of finite width
and a finite gap between the lid and the box sides.}
\label{fig:looselid}\end{figure}
The problem is difficult because there is no reliable
analytical calculation of
the forces acting from outside the box and inside the tiny gaps at
the ends of the lid.
If we momentarily ignore the gaps, it seems unlikely
that the external forces would be very large
(although we find unconvincing
Lukosz's attempt \cite{Lu} to prove this fact by
appealing to Weyl's theorem).
If one thinks in terms of closed paths, paths striking the walls
perpendicularly will yield only the usual surface divergence, so
the only possible source of nontrivial external forces
is the diffractive paths striking the corners.
If this diffractive effect is small,
therefore, one might expect the force to be repulsive when the
plate is exactly at the mouth of the box. However,
if the plate is located significantly inside or outside the box,
intuition says the opposite: ``inside'' we are getting into the
piston regime,
whereas ``outside'' the Casimir attraction between the
nearest neighboring regions of the two bodies should be dominant.
A convincing resolution of this apparent paradox presumably
requires a serious study of the gap region in a less idealized
geometry, as in Fig.~\ref{fig:looselid}.
It is
clear that what happens around the gap is very complicated,
especially when the plate is part-in and part-out as in that
figure.
One should note that the symmetry argument used in
Sec.~\ref{sec:piston} to dismiss the forces in the gap is
no longer applicable.
The uncertainty about the external and gap forces is somewhat
alleviated
if we replace the thin lid
by a large rectangular object (Fig.~\ref{fig:pistol}).
The piston plate has now become more
like a bullet or artillery shell. The
question now is the sign of the force for various values of the
five dimensions indicated in Fig.~\ref{fig:pistol}:
Does a Casimir pistol exist?
The advantage of this new problem is that the corners of the
two bodies are not near each other, so there are no short
classical paths outside the apparatus
(as long as neither $d$ nor $e-d$ is small), even if diffractive paths
are admitted as classical.
(One could eliminate diffractive paths
(in the sense we are using the term) by replacing the barrel and
bullet by similarly shaped objects with smooth boundaries.)
Like all piston authors, we continue to consider only
horizontal motion (variation of~$a$) and therefore ignore the
vertical force between the bullet and the shaft of the barrel.
\begin{figure}
\centerline{\beginpicture
\setcoordinatesystem units <2truecm,2truecm>
\putrule from -1 0 to 2 0
\putrule from -1 0 to -1 2
\putrule from -1 2 to 2 2
\putrule from 1 1.9 to 3 1.9
\putrule from 1 0.1 to 3 0.1
\putrule from 1 0.1 to 1 1.9
\putrule from 3 0.1 to 3 1.9
\setdashes
\putrule from -1 1 to -0.15 1
\putrule from 0.1 1 to 1 1
\put{$a$} at -0.025 1
\putrule from -1.2 0 to -1.2 0.85
\putrule from -1.2 1.2 to -1.2 2
\put{$b$} at -1.2 1
\put{$\leftarrow c$} at 2.2 1.95
\putrule from 1 2.1 to 1.35 2.1
\putrule from 1.6 2.1 to 2 2.1
\put{$d$} at 1.475 2.1
\putrule from 1 0.7 to 1.85 0.7
\putrule from 2.1 0.7 to 3 0.7
\put{$e$} at 1.95 0.7
\endpicture}
\caption{The Casimir pistol, consisting of two disjoint, perfectly
conducting bodies, the barrel and the bullet.
It is shown schematically, as in Fig.~\ref{fig:alphabeta},
but the barrel can be thought of as having finite thickness, as in
Fig.~\ref{fig:looselid}.}
\label{fig:pistol}\end{figure}
We now consider the implications of taking the small gap of
width $c$ seriously.
(We speak only of the gap at the top, but obviously the same
remarks apply to the one at the bottom.)
The first (and motivating) observation is that the total side
length of the system is now fixed, and hence so is the transverse
extent of the infinite (or cutoff-dependent) surface energy.
In particular, the energy associated with what we call VP paths
(including those striking the exterior of the apparatus) is
independent of~$a$.
The associated paradox is thereby removed!
Our joy in this victory should be short-lived,
however.
If we take VP paths inside the gap
seriously, then for consistency we must also take PV
paths across the gap seriously, and we shall see that they
present a serious problem.
Nonperpendicular paths inside the gap eventually escape from
it, so they are not ``short'' and probably can be neglected.
(In the model of \cite{Z}, all such paths escape to infinity and
hence can never be closed. In our case it is possible, but rare,
for such a
path to bounce off the left side of the rectangle and return to its
starting point.)
From the point of view of a point $\mathbf{r}$ inside
the rectangular region of area $ab$,
the box now has
small ``leaks'' of width~$c$,
but one would not expect that to affect its
internal Casimir energy significantly.
This observation could be made quantitative by imitating a
calculation in~\cite{Z}, but we shall not do so here, because we
are interested only in the limit of very small~$c$.
On the other hand, because the gap $c$ is much
smaller than the box dimensions, $a$ and $b$, the Casimir energy
associated with the rectangle of area $cd$ is much
greater than that of the box.
The principal force associated with this gap rectangle is the
vertical Casimir attraction between the bullet and the barrel,
but we have
agreed to impose a constraint that makes it irrelevant.
However, it is the very essence of the piston argument,
especially as developed by Hertzberg et al.~\cite{HJ1,HJ2},
that the proportionality of the Casimir energy to~$d$
produces a horizontal force, independent of~$d$ but proportional
to $1/c^2$.
(This energy is precisely the contribution of the
vertical periodic (PV) paths.)
In the present scenario this force has sign opposite to the
Lukosz force in the box, because $d$ increases when $a$
decreases, and a larger magnitude than the analogous force in the
piston scenario, because $c<b$.
Therefore, if we accept all the approximations
involved in this argument, we are forced to the conclusion that
the bullet is sucked into the barrel, not expelled from it.
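To make the sign explicit, here is the one-line estimate behind this
conclusion (a sketch only, taking the parallel-plate PV energy
$E_\mathrm{gap}\approx -\zeta(3)\,d/16\pi c^2$ for the gap rectangle
and ignoring end effects).
Since the insertion depth satisfies $\partial d/\partial a = -1$,
\[
F_\mathrm{gap} = -\,\pd{E_\mathrm{gap}}a = \pd{E_\mathrm{gap}}d
\approx -\,\frac{\zeta(3)}{16\pi c^2} < 0\,,
\]
independent of $d$ and larger in magnitude than the PV term
$+\zeta(3)/16\pi b^2$ in (\ref{renforce}) whenever $c<b$.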
Let us list those assumptions.
\begin{enumerate}
\item The gap between the bullet and the barrel does not
significantly affect the Lukosz force from the empty part of the
pistol chamber.
\smallskip
\item There are no significant forces from outside the pistol.
\smallskip
\item The effect of the gap can be estimated by ignoring
nonperpendicular paths and treating the perpendicular paths as
usual, as if we had simply a pair of parallel plates there.
\end{enumerate}
Obviously, a trustworthy treatment of this system
requires either a numerical analysis
(for example, by the method of Gies et al.~\cite{GK})
or, better still, a
breakthrough in the analytical treatment of convex corners.
As a positive result in this direction, we report that
it is possible to compute exact forces and
torques between bodies of arbitrary shape in weak coupling
(for example,
materials with dielectric constant nearly unity). For example,
two thin parallel plates of finite length experience an
attractive lateral force that tends to cause the plates to move
to a configuration where they are centered on each other. This
is the attractive force that tends to increase the length of the
gap in the pistol.
(A recent independent investigation \cite{RJJ} likewise shows a
system maximizing the length of a small gap between flat
surfaces.)
Moreover, in
addition to the attractive force between the plates, there is a
torque exerted on one thin plate above a larger plate which tends
to cause a rotation of the smaller plate about its center of mass
so as to favor perpendicular orientation. Details
have been reported in~\cite{MPW,MPWconfs}. Since these
qualitative conclusions are essentially geometrical, they should
also hold for strong coupling (Dirichlet boundary conditions).
Finally, let us try to confirm the foregoing conclusions
by looking at pressure integrals.
In principle, one can find
the total force on the bullet by integrating
the appropriate components of $\langle T_{\mu\nu}\rangle$
over the surface of the bullet
(or even some larger surrounding surface \cite{R1,R2}).
The integral over the back
side of the bullet is essentially the same as in Sec.~\ref{sec:global}
(apart from the ``infinite'' term).
On the top and bottom sides, the
relevant component is $T_{12}\,$, and a check of the
formulas (\ref{T12Pjk}), (\ref{T11Cjk}), (\ref{T22Vjk}), (\ref{T12Hjk})
shows that the contributions all vanish.
So, one would conclude that the pistol fires after all!
We believe that the resolution of this new paradox is that the
crude approximations listed above, although they
\emph{may} be permissible for the energy calculation, are simply
wrong for the pressure calculation. In particular, if the
Casimir (or the Lukosz) formulas were accurate over the entire
gap rectangle, there would be finite jumps at the end surfaces
of the gap in $\langle T_{11}\rangle$
(which is constant and large in the gap,
constant and smaller in the chamber, and zero in the exterior, in
our approximations). By the conservation law, there is then a
delta function (of $x- \mathrm{(endpoint)}$) in
$\langle \partial T_{12} /\partial y\rangle$.
A more realistic calculation would smear out this
singularity, probably creating a lump of $\langle T_{12}\rangle$
that decreases
more or less linearly in $y$ away from the back corner of the
bullet and also downward away from the front edge of the barrel.
These stress terms would create horizontal forces.
They are very much like the stresses found in
\cite[Figures 4(d,e,f)]{R1}.
\section{The Casimir pistol with cutoff} \label{sec:pistol-c}
\subsection{Parallel plates revisited}
Our discussion so far has concerned the 2D scalar analogue of the
idealized perfect-conductor model of the interaction of the
electromagnetic field with metal bodies.
It is generally agreed that the divergences (except for the
universal volume divergence) encountered in such calculations are
the fault of the physical failure of that model at high
frequencies
--- equivalently, at length scales so small that the material
cannot be modeled as a continuum.
It is also now agreed that
the energy divergences, or the corresponding cutoff-dependent
terms in a calculation with a cutoff, being independent of the
bodies' positions, do not appear in the forces between
rigid conducting bodies.
It is sometimes forgotten that the idealized Casimir theory runs
into physical trouble already for rigid bodies, even the canonical
scenario of parallel flat plates, when the distances become too
small.
It predicts an energy per unit cross section, $\mathcal{E}$,
proportional to
$-a^{-d}$ for plates with separation $a$ in $d$-dimensional space.
If taken literally, this says, implausibly,
that $\mathcal{E}$ becomes (negatively)
infinite when $a$ goes to zero.
One would expect instead that in that limit $\mathcal{E}$
approaches a constant,
since then the space between the plates has disappeared and space
is filled by the perfectly conducting material.
(In fact, the constant should make the total energy turn out to
be $0$ when suitably defined surface energies are also taken into
account.)
Barton \cite{Bar} has done extensive calculations for
dielectric bodies with a polarizability small enough to be
treated perturbatively
(the opposite regime from perfect conductivity).
He showed (see also \cite{maraball}) that a
spatial cutoff at atomic distances serves to cure the
divergences (which otherwise remain even in
the usual model of
quadratic falloff of dielectric constant with frequency
--- e.g., \cite{BE,MN}).
Roughly speaking, the mathematical effect of such a spatial
cutoff is similar to that of a very rapid (for instance, exponential)
cutoff at high frequency.
In Barton's theory the total energy per unit cross section does
approach $0$ as $a\to0$ when the surface energy is included.
In this model, as $a\to0$ there is a constant attractive
force proportional to the energy density of the uniform medium,
no matter how the latter is regulated \cite{milnotes}.
More recently, Barton \cite{Bar-sphere,Bar-sheet} has
developed a plasma model that is more pertinent to the limit of
perfect conductivity. It also involves an atomic-scale cutoff,
but one affecting only the wavelengths parallel to the
boundaries.
Our aim here is to stay in the highly conducting regime and
to see whether keeping the exponential cutoff parameter~$t$
nonzero, at some value typical of atomic separations, yields a
physically plausible (and divergence-free) model of Casimir
phenomena.
Although ultimately no substitute for serious microscopic modeling
of conductive materials (an unavoidably nonlinear problem),
this approach offers hope of rescuing the huge investment that has
been made in treating vacuum problems (relatively easily) by
spectral analysis of linear partial differential operators.
It also provides a route to understanding the gravitational
significance of ``divergent'' local energies and stresses
\cite{leipzig}.
This cutoff should be regarded as analogous to the ad hoc
repulsive core in the Lennard--Jones potential in atomic physics.
A more accurate potential should be based on the electronic
structure of the atoms; but one would not then apply such a
potential to, say, nucleon-nucleon scattering. Similarly, a
detailed theory of real metals is not relevant to hadron bags,
cosmological branes, thermal fluctuations in soft-matter physics,
and other systems where Casimir-like effects have been studied.
Within our two-dimensional scalar model (which is pertinent to
all these contexts, if to any) the simple exponential cutoff has
the advantage of being universal, but we and readers must remain
conscious that its relevance at small distances to any particular
real physical system is qualitative at best. We stress again that
this atomic-scale cutoff must not be confused with the well known
decrease of dielectric constant with frequency above the plasma
frequency; we make no attempt to model the latter, which is
specific to the electromagnetic scenario.
In the context of the Casimir pistol, the idea is that the small
gap $c$ surrounding the bullet must be in the sub-Casimir regime if
the other dimensions ($a$, $b$, $d$) are in the regime where
Casimir effects are significant, and,
therefore, the deduction in Sec.~\ref{sec:pistol-d}
of a dominant attractive force originating in the gap goes outside
the regime of validity of the theory.
Although the cutoff theory has no fundamental physical
justification, it is probably a bit closer to the truth.
(Unfortunately, we shall see that no robust conclusion is
attainable by this route.)
Maclay and Villarreal \cite{MV} proposed this same kind of
cutoff and
hence obtained formulas and graphs rather similar to ours in this
section. However, they identified $t$ with the reciprocal of the
plasma frequency rather than, as we do, the interatomic spacing,
which is typically 100 times smaller (again cf.~\cite{Bar}).
Other authors \cite{MPS,perivo} considered (for refutation)
an even bigger exponential cutoff length, adequate to make the
unrenormalized vacuum energy of empty space consistent with
cosmological observations, and showed that such theories predict
Casimir repulsion at distances large enough to be refuted by
the existing laboratory experiments.
Before studying the pistol, let us look at the
attraction between parallel plates with the cutoff retained.
This could be done easily in any dimension,
but for coherence in this paper we retain dimension~$2$.
If the separation between plates is $a$, we take the $b \gg a$
limit of (\ref{regperenergy}),
in which only the perpendicular paths ($k = 0$)
contribute, and divide by $b$ to get energy per unit length:
\begin{equation}
\mathcal{E} = \frac{a}{\pi} \sum_{j=1}^\infty
\frac{t^2 - 2j^2a^2}{(t^2+ 4j^2a^2)^{5/2}}\, .
\label{plateenergy} \end{equation}
This function behaves
in keeping with the idea that the energy or the force should
be damped when $a$ is comparable to the nanoscale (interatomic
spacing) represented by~$t$.
Of course, when $a\gg t$, the effect of $t$ is negligible and
(\ref{plateenergy}) gives the standard result.
It is convenient to measure $a$ in units of~$t$. If
$ a = st$, then $\pi t^2\mathcal{E} = F(s)$, where
\begin{equation}
F(r) \equiv r \sum_{j=1}^\infty
\frac{1 - 2j^2r^2}{(1+ 4j^2r^2)^{5/2}}
\label{platedimenless}\end{equation}
\begin{figure}
\centering \includegraphics{plates2d.eps}
\caption{Graph of $F(r)$ as a function of $r$.}
\label{fig:plates2d}
\end{figure}
(see Fig.~\ref{fig:plates2d}).
$F(r)$ has
a zero at $r_0 \approx 0.5888$. It has a minimum
(a zero of the force) at $r_1 \approx 1.0105$,
with $F(r_1)\approx -0.02821$.
At large $r$, $F(r)\sim -\zeta(3)/(16r^2)$ as in the theory without
cutoff.
For small $r$ the Euler--Maclaurin formula \cite[(23.1.30)]{AS}
shows that
$F(r) \sim \frac14 - \frac r2 +O(r^N)$
for arbitrarily large~$N$.
Thus $F(0)$ precisely cancels the surface energy from (\ref{perimen})
(where $P=2$ because we are looking at unit cross section on two
plates), so that the total energy at $s=0$ is indeed~$0$.
(But this result may be an accident. It does not happen for the
Neumann boundary condition. Also, as we shall now
observe, $s=0$ seems to represent material under compression, not a
solid block of ordinary material.)
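The landmark values of $F$ quoted above are straightforward to reproduce by brute-force summation. The sketch below is our own numerical spot-check, not part of the original analysis; it truncates the sum in (\ref{platedimenless}) at a large but finite $j$, which is adequate because the tail of the summand falls off like $1/(16j^3r^2)$:

```python
import numpy as np

def F(r, jmax=200_000):
    """F(r) = r * sum_j (1 - 2 j^2 r^2) / (1 + 4 j^2 r^2)^(5/2),
    summed directly; the tail beyond jmax falls off like 1/(16 j^3 r^2)."""
    j = np.arange(1, jmax + 1, dtype=float)
    return r * np.sum((1.0 - 2.0 * j**2 * r**2) / (1.0 + 4.0 * j**2 * r**2) ** 2.5)

zeta3 = 1.2020569031595943
print(F(0.5888))                    # ~ 0: the zero r_0
print(F(1.0105))                    # ~ -0.02821: the minimum value F(r_1)
print(F(10.0), -zeta3 / 1600.0)     # large-r asymptote -zeta(3)/(16 r^2)
print(F(0.05), 0.25 - 0.5 * 0.05)   # small-r form 1/4 - r/2
```

The zero near $r\approx0.59$ and the flat minimum near $r\approx1.01$ emerge already from a few hundred terms; the large truncation matters only for the small-$r$ check, where terms up to $j\sim1/r$ contribute.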
At $r<r_1$ this model predicts a repulsion. Therefore, it must
violate the hypotheses of the theorems stating that vacuum forces
between symmetrical bodies separated by a plane are always attractive
\cite{KK,Bac}.
The argument of Kenneth and Klich \cite{KK} refers to the standard
dielectric model of the media,
or a scalar analog thereof,
into which our cutoff does not fit.
The mathematical reason why the theorem of Bachas \cite{Bac}
doesn't apply is less clear, but the key physical point is clear
from that author's remarks (p.~9094)
that a ``quantized particle does not,
strictly speaking, live in one side of the reflecting plane,''
and that the theorem would apply at the quantum level
only if (in the terms of our scenario) one of the slabs were made
of antimatter.
The repulsion occurs only at separations of the order of the
interatomic spacing. Thus the model mocks up a more realistic
theory in which the two slabs are not cleanly separated, in
accordance with Barton's remark \cite[p.~4088]{Bar},
``[A sharp short-distance cutoff,] though a fiction, is a
convenient shortcut to somewhere near the truth.
At small separations, overlap between the electron clouds makes
the interatomic potential highly repulsive....''
We recall also that Ford and Svaiter \cite{FS}
found that a similar effect was induced by a
stochastic uncertainty in the position of the conducting
boundaries, which must in general lead to some probability of
interpenetration.
In short, nobody should be surprised to encounter a repulsion when
pushing two slabs of material together.
A normal, stable material must resist compression. Of
course, such repulsion is not a ``Casimir effect'';
a quantitative
study would require detailed modeling of the material, and it is
a surprise and probably an accident that our crude
field-theoretic model gives such plausible results in this regime.
In particular, the fact that our potential minimum occurs at
a small positive separation, rather than zero or negative, is not
to be taken too seriously.
\subsection{Energy in the pistol}
Now we do the energy accounting for the pistol, under the three
assumptions listed in Sec.~\ref{sec:pistol-d}.
(The notation is a slight simplification of that
in Sec.~\ref{sec:global}.)
\emph{Energy in the chamber:} According to assumption~(1),
the contribution $E_\mathrm{P}$ of periodic paths is still given by
(\ref{regperenergy}) with $\eta=0$.
Corner paths can be ignored because they make no contribution to
the total energy.
HP and HD paths
can be ignored here because they make no contribution to the
relevant force (their energies being independent of~$a$).
The contribution of VP paths is the $a$-dependent term of
(\ref{perimen}):
\begin{equation}
E_\mathrm{VP} =-\, \frac a{4\pi t^2}\,.
\label{Ev}\end{equation}
The contribution of VD paths is given by the generalization of
(\ref{edgeenergy}) to finite cutoff,
\begin{equation}
E_\mathrm{VD} = \frac a{2\pi} \sum_{j=1}^\infty
{-t^2+4j^2a^2 \over (t^2+4j^2a^2)^2 }\, .
\label{Ed}\end{equation}
\emph{Energy in the barrel:}
In the formulas above, $(a,b)$ must be replaced by $(d,c)$,
and we must multiply by $2$ to count both top and bottom gaps.
In accordance with assumption (3), only vertical paths ($j=0$) will be
considered.
The PV paths give
\begin{equation}
E_{\mathrm{P}'} = \frac{2cd} {\pi} \sum_{k=1}^\infty
{t^2 - 2k^2c^2 \over (t^2 + 4k^2c^2)^{5/2} }\,.
\label{Epprime}\end{equation}
The VP paths give
\begin{equation}
E_{\mathrm{VP}'} = -\, \frac d{2\pi t^2}\,,
\label{Evprime}\end{equation}
of which half belongs to the barrel and half to the bullet.
As expected, the barrel part combines with (\ref{Ev}),
\[E_\mathrm{VP}+\frac12 E_{\mathrm{VP}'} = -\,\frac{a+d}{4\pi t^2}\,,\]
to yield something independent of $a$, because $a+d$ is constant.
Similarly, the bullet part of (\ref{Evprime}) combines with the
surface energy of the part of the bullet outside the barrel.
\emph{Summary of pistol energy:}
The only energy terms that contribute to the force (under our
assumptions) are
\begin{equation}
E= E_\mathrm{P} + E_\mathrm{VD} + E_{\mathrm{P}'}
\label{Etot}\end{equation}
as listed above.
We could differentiate with respect to $-a$
(using $\pd {}d = - \pd{}a$) to get the force.
All the sums
encountered can be expressed in terms of inhomogeneous Epstein
zeta functions \cite{El,Kir}.
However, for our purposes
it is better to analyze the various terms qualitatively.
(Quantitatively, we claim nothing for the model at short distances
anyway.)
\subsection{Asymptotics and numerics for the pistol}
Let
$c = rt$, $a = st$, $b=ut$, $d = L-a =(l-s)t$ (so that $L=lt$; see
Fig.~\ref{fig:pistoldimenless}).
\begin{figure}
\centerline{\beginpicture
\setcoordinatesystem units <2truecm,2truecm>
\putrule from -1 0 to 2 0
\putrule from -1 0 to -1 2
\putrule from -1 2 to 2 2
\putrule from 1 1.9 to 3 1.9
\putrule from 1 0.1 to 3 0.1
\putrule from 1 0.1 to 1 1.9
\putrule from 3 0.1 to 3 1.9
\setdashes
\putrule from -1 1 to -0.15 1
\putrule from 0.1 1 to 1 1
\put{$s$} at -.025 1
\putrule from -1.2 0 to -1.2 0.85
\putrule from -1.2 1.2 to -1.2 2
\put{$u$} at -1.2 1
\put{$\leftarrow r$} at 2.2 1.95
\putrule from -1 2.1 to 0.4 2.1
\putrule from 0.6 2.1 to 2 2.1
\put{$l$} at 0.475 2.1
\endpicture}
\caption{Pistol dimensions in units of $t$.}
\label{fig:pistoldimenless}\end{figure}
We want to examine $E$ as a function of~$s$,
with $r$ of order unity
and $s$, $u$, $l-s$ much larger.
From (\ref{Etot}) and (\ref{regperenergy}) we have
\begin{eqnarray} E&=& E_\mathrm{PV} + E_\mathrm{PH} + E_\mathrm{PD}
+ E_\mathrm{VD} + E_{\mathrm{P}'} \nonumber \\
&\equiv& \frac{us}{\pi t}\sum_{k=1}^\infty
\frac{1-2k^2u^2}{(1+4k^2u^2)^{5/2}}
+ \frac{us}{\pi t}\sum_{j=1}^\infty
\frac{1-2j^2s^2}{(1+4j^2s^2)^{5/2}}
\nonumber \\
&&{}+\frac{2us}{\pi t} \sum_{j=1}^\infty\sum_{k=1}^\infty
\frac{1 - 2j^2s^2 - 2k^2u^2 }{ (1+ 4j^2s^2 + 4 k^2 u^2)^{5/2}}
\nonumber \\
&&{}+ \frac s{2\pi t} \sum_{j=1}^\infty
\frac{-1+4j^2s^2}{(1+4j^2s^2)^2 }
+\frac{2r(l-s)} {\pi t} \sum_{k=1}^\infty
\frac{1 - 2k^2r^2}{(1 + 4k^2r^2)^{5/2} }\,.
\label{Etotdimenless} \end{eqnarray}
Let $E_{\mathrm{P}''}$ denote the part of $E_{\mathrm{P}'}$
proportional to~$s$. The other term in $E_{\mathrm{P}'}$
(proportional to~$l$) is independent of~$s$ and hence shall be
ignored in further discussion of the force on the bullet
(including Figs.\ \ref{fig:pullgraphs}--\ref{fig:pushgraphs}).
The terms $E_{\mathrm{PV}}$ and $E_{\mathrm{P}''}$ are linear
functions of~$s$, while the other three terms are nonlinear.
The linear terms dominate the force at large~$s$, and
the main point of interest is the confrontation of
$E_{\mathrm{P}''}$
(Casimir energy in the gap) with $E_{\mathrm{PV}}$
(identified in Sec.~\ref{ssec:force} as the source of the Lukosz
repulsive force; it is the term that would give an attractive
Casimir force between the upper and lower walls of the chamber if
those were allowed to move).
We shall see that generically the P$''$ term is dominant.
At small~$s$ the nonlinear terms
dominate and collectively give a function qualitatively similar to
that in Fig.~\ref{fig:plates2d}.
Two cases are exhibited in Figs.\ \ref{fig:pullgraphs}
and~\ref{fig:pushgraphs}.
\begin{figure}
\centering{%
\includegraphics{r1a.eps}\hskip1cm\includegraphics{r1b.eps}}
\caption{Graphs of linear (dashed) and nonlinear (solid) parts of
$\pi tE(s)$ for $r=1$, $u=100$, $l=500$. (a) Small $s$; linear terms
are negligible.
(b) Large $s$; linear terms dominate and create an attractive
force.}
\label{fig:pullgraphs} \end{figure}
\begin{figure}
\centering{%
\includegraphics{rhalfa.eps}\hskip1cm\includegraphics{rhalfb.eps}}
\caption{Graphs of linear (dashed) and nonlinear (solid)
parts of $\pi tE(s)$ for $r=0.5$,
$u=100$, $l=500$. (a) Small $s$; linear terms are negligible.
(b) Large $s$; linear terms dominate and create a repulsive
force.}
\label{fig:pushgraphs} \end{figure}
In more detail, by approximating the sums by integrals one can
show that
(when the dimensions other than~$r$ are~$\gg 1$)
\begin{equation}
\pi t E_\mathrm{PH} \sim -\,\frac{\zeta(3) u}{16s^2}
\quad\Rightarrow\quad
\hbox{attractive force}\sim - \,\frac {C'u}{s^3}\,;
\label{PHasy}\end{equation}
\begin{equation}
\pi t E_\mathrm{VD} \sim +\, \frac{\zeta(2)}{8s}
\quad\Rightarrow\quad
\hbox{repulsive force}\sim + \,\frac C{s^2}\,;
\label{VDasy}\end{equation}
\[\pi t E_\mathrm{PD} \sim h\left(\frac su\right)\frac1u
\quad\hbox{for some function $h$, such that} \]
\begin{equation}
s\gg u\gg 1 \;\Rightarrow\;
\pi tE_\mathrm{PD} \sim -\,\frac{\zeta(2)}{8s}
\quad\Rightarrow\quad
\hbox{attractive force}\sim - \,\frac C{s^2}
\label{PDasys}\end{equation}
so that $E_\mathrm{VD}$ and $E_\mathrm{PD}$ cancel to leading
order in $1/s$,
and
\begin{equation}
u\gg s\gg 1 \;\Rightarrow\;
\pi tE_\mathrm{PD} \sim -\,\frac{\zeta(2)}{8u}
+O\left(\frac s{u^2}\right)
\label{PDasyu}\end{equation}
so that the PD force vanishes to leading order in $1/u$.
In the regime $u\gg s$ the PH force dominates the other
nonlinear terms.
(In the absence of fine tuning, it is still smaller than the
linear term unless $u\gg s^3$.)
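The coefficient in (\ref{VDasy}) can be spot-checked numerically as well. The following sketch (again our own check, with the sum truncated at a large finite $j$) confirms that $s\,\pi tE_\mathrm{VD}$ approaches $\zeta(2)/8=\pi^2/48$, with corrections of order $1/s^2$:

```python
import numpy as np

def piT_E_VD(s, jmax=200_000):
    """pi * t * E_VD in dimensionless form, from Eq. (Ed):
    (s/2) * sum_j (4 j^2 s^2 - 1) / (1 + 4 j^2 s^2)^2."""
    j = np.arange(1, jmax + 1, dtype=float)
    return 0.5 * s * np.sum((4.0 * j**2 * s**2 - 1.0) / (1.0 + 4.0 * j**2 * s**2) ** 2)

limit = np.pi**2 / 48.0            # zeta(2)/8
for s in (10.0, 30.0, 100.0):
    print(s, s * piT_E_VD(s), limit)
```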
In the small-$t$ limit (i.e., $s\gg1$)
the PH force reduces to the first term in
(\ref{fsmalla}), which
is simply the standard Casimir
force between the left chamber wall and the bullet,
whereas for general $s$ its energy is
$E_\mathrm{PH} = uF(s)/\pi t$ --- exactly proportional to the
parallel-plate function in Fig.~\ref{fig:plates2d}.
Finally, for large~$s$ ($s\gg u^{1/3}$) all the nonlinear forces
are small compared to
the linear ones, unless the latter happen to cancel.
Indeed, the forces arising from the linear terms are
\begin{equation}
\pi t^2 F_\mathrm{PV}= -F(u), \qquad
\pi t^2 F_{\mathrm{P}''}= +2F(r),
\label{linforce}\end{equation}
where $F$ is defined by (\ref{platedimenless}).
Recall that $F$ has a zero at $r_0\approx 0.6$ and a minimum at
$r_1\approx 1$ and rapidly approaches $0$ at large~$r$.
When $s \gg u^{1/3}$, $F_{\mathrm{P}''}$ exceeds all nonlinear
forces, in particular the PH force
(which, we have seen, dominates the nonlinear forces if
$s\ll u$).
Whether the total force is attractive or repulsive at large~$s$ is
determined by the relative size of the two constant forces in
(\ref{linforce}),
and hence by the value of~$r$, $F(u)$ being small and negative.
(1) For $r_0<r\ll u$ and $r$ not too close to $r_0\,$,
$F_{\mathrm{P}''}$ dominates and the total force is attractive.
This is the regime in which the cutoff model seems most
trustworthy physically.
In particular, it contains the point $r_1\,$, which one might
regard as the most ``natural'' value, corresponding to two blocks
of material in relaxed contact, their effective surfaces separated
by the typical interatomic spacing.
(2) For $0\le r< r_0$ and $r$ not too close to $r_0\,$,
$F_{\mathrm{P}''}$ again dominates and the total force is repulsive.
In particular, if $r=0$ the two gap forces (P$''$ and VP$'$)
cancel and the force can be attributed to the negative surface
energy in the region $\alpha$ (Fig.~\ref{fig:alphabeta}) and the similar
region next to the part of the bullet outside the barrel.
(3) If $r=r_0\,$, $F_{\mathrm{P}''}$ vanishes and the long-range
force is purely $F_\mathrm{PV}$, the repulsive Lukosz force.
This result is what the pistol model was designed to achieve ---
a gedankenexperiment showing that the Lukosz result has, at least
in principle, some physical reality.
Unfortunately, that result is attainable only by fine-tuning
and, moreover,
by pushing $r$ into a regime where the physical relevance of the
cutoff model is questionable.
(4) For a special value close to $r_0\,$, namely
\begin{equation}
r \approx r_0 + \frac{F(u)}{2F'(r_0)} =
r_0 - \frac{\zeta(3)}{32u^2F'(r_0)} \,,
\label{zeropt}\end{equation}
the long-range linear force vanishes.
In this scenario the force is the sum of the PH, PD, and VD terms;
in the small-cutoff regime of interest, it is approximately the
piston force~(\ref{pistforce}).
This force is always attractive, but exponentially weak at large~$s$.
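The fine-tuned value (\ref{zeropt}) can be checked directly. The sketch below is our own numerical verification (for the illustrative choice $u=100$, with the sums truncated at large $j$): it locates the zero $r_0$ of $F$ by bisection, solves the full linear-force balance $2F(r)=F(u)$, and compares the root with the first-order formula:

```python
import numpy as np

def F(r, jmax=200_000):
    # dimensionless parallel-plate function of Eq. (platedimenless)
    j = np.arange(1, jmax + 1, dtype=float)
    return r * np.sum((1.0 - 2.0 * j**2 * r**2) / (1.0 + 4.0 * j**2 * r**2) ** 2.5)

def root_decreasing(f, lo, hi, n=60):
    """Bisection for f decreasing on [lo, hi] with f(lo) > 0 > f(hi)."""
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

u = 100.0
r0 = root_decreasing(F, 0.5, 0.7)                                  # zero of F
r_exact = root_decreasing(lambda r: 2.0 * F(r) - F(u), 0.5, 0.7)   # 2F(r) = F(u)
h = 1e-5
Fp = (F(r0 + h) - F(r0 - h)) / (2.0 * h)                           # numerical F'(r0)
r_formula = r0 + F(u) / (2.0 * Fp)                                 # first-order formula
print(r0, r_exact, r_formula)
```

Since $F(u)<0$ and $F'(r_0)<0$, the root sits slightly above $r_0$, and the linearized formula reproduces it to high accuracy.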
In short, the gap plays a spoiler role somewhat like that of the
outer shaft in the piston model. In the piston, where shaft
and chamber have the same width, the shaft force
precisely cancels the related repulsive force from the VP
paths in the chamber, leaving the Casimir-like
force~(\ref{pistforce}).
The same occurs for the pistol in scenario~4, but in the more
plausible scenario,~1, the (attractive, $a$-independent) gap
force overwhelms the interior VP force, precisely because the gap
is narrower than the chamber.
In any case, the force arising from outside the chamber depends,
not surprisingly, on the geometrical configuration outside the
chamber, while the force arising inside is fixed by the geometry
of the chamber.
For the reason mentioned in connection with parallel plates,
the fact that the gap force becomes repulsive at all at finite
gap size may be an artifact of the cutoff model. What one can
say is that the even cruder model of perfect reflection also
displays an artifact, in the form of an attractive force that
diverges as the gap size approaches zero. In a more realistic
model taking into account the interactions related to condensed
matter physics one would expect the gap force to be reduced, if
not reversed.
\section{Conclusions} \label{sec:concl}
We have presented a thorough analysis of the vacuum expectation
value of the stress-energy-momentum tensor in a rectangle.
The calculational methods involve an exponential ultraviolet
cutoff and a sum over images (or closed reflecting paths).
Here we have treated a two-dimensional scalar field; the extension
to three dimensions and electromagnetism is straightforward and
under way. Formulas are presented for all tensor components,
$T_{\mu\nu}(\mathbf{r})$,
for arbitrary combinations of Dirichlet and
Neumann boundaries, arbitrary values of the curvature coupling
$\xi$, and arbitrary values of the cutoff parameter, including
the limit where the cutoff is removed.
Forces (which are independent of $\xi$) have been consistently
calculated both by differentiating energy and by integrating
pressure.
Studying the local energy density and stresses (rather than just
total energy), using a physically motivated ultraviolet cutoff
(rather than an ``analytic'' regularization scheme), and
studying separately the contributions from various classes of
specularly reflecting paths all help to interpret the physics,
especially the roles of boundaries and corners.
Within a cutoff framework one has a clear and consistent
definition of energy densities and forces.
When different configurations of rigid
bodies are compared and all contributions (from inside and outside)
are included, one always finds a cancellation of the
energy divergences
and hence an unambiguous force in the limit of no cutoff.
The decomposition by paths helps one to understand better the
cancellations of divergent terms and often to understand
intuitively the sign of the Casimir force.
Most strikingly, the force on one side of the rectangle includes
important repulsive components associated with paths
\emph{parallel} to that side: a divergent term from short paths
that reflect from the perpendicular sides, and a finite, constant
term from periodic paths between the two perpendicular sides.
In piston geometries these forces are cancelled by counterpart
terms from the exterior of the rectangle, but in more general
circumstances the problem of their physical interpretation must be
taken seriously.
In the later sections of the paper we discuss geometries in which
the vacuum forces from inside a rectangle might be rigorously
exhibited. The box with a loose lid (Fig.~\ref{fig:looselid})
is closest to what one wants
to understand, but accurate calculation of the external edge and,
especially, corner
effects remains impractical for now (at least, beyond the scope of
the present paper).
The piston model (Fig.~\ref{fig:piston})
studied by previous authors is rigorous and
exact, but it obscures the point at issue by adding an external
shaft.
Our attempt at a compromise between these two scenarios is the \emph{pistol}
(Fig.~\ref{fig:pistol}), which unfortunately did not yield a
robust result. The force on the pistol depends sensitively on the
cutoff length, as compared to the width of the gap
between the bullet and the barrel.
The only regime in which our quantitative analysis
(extrapolated to 3D electromagnetism) can be regarded as physically
trustworthy is that where the gap is small but still larger than
the cutoff; there the behavior is cutoff-independent but
the force is attractive.
Scenarios where the net force is repulsive (in
particular, one where the gap force vanishes) do exist, but require
entering the regime where the calculations cannot be taken
seriously on a quantitative level because one does not know what
the correct ultraviolet cutoff behavior is
(and because stiction and friction are likely to be the dominant
effects there); furthermore, making the
gap force zero or small requires fine tuning within this regime.
Nevertheless, although no quantitative claims can be made for our
model (pistol + cutoff) in that regime, we do submit that the model
is closer to the physical truth than either a model without cutoff
(which would predict infinite energies)
or an analytic regularization that hides the divergences from the
beginning.
Furthermore, while the repulsive Lukosz component of the force is
robust, the force opposing it is dependent on the scenario
considered (e.g., piston vs.\ pistol, or wide gap vs.\ narrow)
and could in principle be controlled to demonstrate the reality
of the Lukosz force, even if the \emph{net} force is attractive
in all practical experiments.
\ack
We thank Martin Schaden and Carlos Villarreal for discussions
and for
providing manuscript copies of their unpublished works.
We thank Gabriel Barton for correspondence and
Jef Wagner, Prachi Parashar,
Chris Pope, and Wayne Saslow for useful remarks.
The numerical plots in Sec.~\ref{sec:pistol-c} were created with
{\sl Mathematica}, the other graphics with \PiCTeX.
{\tolerance 8000
This research is supported by the linked NSF Grants PHY-0554849
(TAMU) and
PHY-0554926 (OU) and forms part of our continuing
collaboration with Ricardo Estrada. \hfil
K.~A.~Milton also received support from DOE Grant
\hbox{DE-FG02}-04ER41305.
L.~Kaplan is supported by NSF Grant PHY-0545390.
K.~Kirsten is supported by NSF Grant PHY-0757791.
\par}
\section{Introduction}
\noindent In experiments on current-induced switching of magnetization (see e.g. Ref.~\onlinecite{ref1}),
current passing through a thick polarizing magnet (PM) becomes
spin polarized. The spin polarized current (spin current) then flows through a
nonmagnetic layer (the spacer layer) and becomes partially or fully absorbed by
a switching magnet (SM). The absorbed spin current
exerts a spin-transfer torque on the switching magnet and this torque can be
used to switch the direction of the magnetization of the
switching magnet between the parallel (P) and antiparallel (AP) orientations
relative to the magnetization of the polarizing magnet. In this traditional
setup the current is perpendicular to both the PM/spacer and spacer/SM
interfaces. This setup is referred to as current perpendicular to plane (CPP)
geometry and is shown schematically in Fig.1.
\begin{figure}
\includegraphics[width=0.35\textwidth]{fig1.eps}
\caption{\footnotesize CPP switching geometry.}
\end{figure}
The switching process
relies on the scenario in which, at a critical current, one of the
configurations (P or AP) becomes unstable while the other remains stable
and is, therefore, available for switching into. However, in the presence of an
external magnetic field stronger than the coercive field of the switching
magnet, it is found experimentally \cite{ref2,ref3,ref4,ref5} that, for current greater than a critical
value and with the correct sense, neither the P nor the AP configuration is
stable. The magnetization of the switching magnet then precesses continually
and becomes a source of microwave generation. It was also proposed
\cite{ref51}
that microwave generation can occur even in the absence of an applied field
provided the spin-transfer torque has both the in-plane and out-of-plane components
of appropriate relative sign.
Both the switching and microwave generation scenarios have potentially very important
applications. However, to limit the current to acceptable values and to minimize the
Oersted fields generated by the current, experiments are performed on CPP nanopillars
with a very small diameter of the order of 100nm. Such nanopillars are difficult to
prepare. Moreover, to achieve a usable microwave power, large arrays of CPP
nanopillars would have
to be manufactured, and this is even more difficult to achieve. We have, therefore,
investigated theoretically two alternative geometries, shown in Fig.2, which may have
\begin{figure}
\includegraphics[width=0.5\textwidth]{fig2.eps}
\caption{\footnotesize CPIP (a) and CIP (b) switching geometries.}
\end{figure}
interesting applications since they offer
much more flexibility for the design of current-induced switching and microwave
generation devices.
In the first geometry shown in Fig.2a, the current is perpendicular to
the PM/spacer interface but parallel to the spacer/SM interface (CPIP). In the second
geometry shown in Fig.2b the current is parallel to both the PM/spacer and spacer/SM
interfaces (CIP). It is clear from Fig.2 that switching magnets in the CPIP and CIP geometries
are
arrays of either magnetic dots or wires deposited on the surface of a nonmagnetic substrate.
It should be noted that our CPIP geometry in which the current flows parallel to the
switching magnet/nonmagnet
interface is closely related to that used in the so-called pure-spin-current-induced
magnetization switching which was recently demonstrated experimentally \cite{ref55}.
This is because, just like in the pure spin current switching, no net charge current flows in the CPIP
and CIP geometries through the switching magnet in the direction perpendicular
to its interface with the spacer. Nevertheless we shall see that a spin current is absorbed by
the switching magnet and this gives rise to a nonzero spin-transfer torque. This effect
is sometimes called nonlocal spin-transfer torque (for detailed discussion of spintronics circuits see
Ref.~\onlinecite{bauer}).
While the potential advantages of the CPIP and CIP geometries are obvious the crucial question
is whether these alternative geometries are as efficient for switching/microwave generation as
the traditional CPP geometry. To address this question we have applied the nonequilibrium
Keldysh formalism \cite{ref6,ref7,ref8} to calculate from first principles
the spin-transfer
torques in the CPIP and CIP geometries. We assume in all our calculations that the spin diffusion
length is much longer than the dimensions of our system (spin is conserved).
We performed calculations of the spin-transfer torque for perfect CPIP and CIP systems
(ballistic limit) and also in the case of a rough nonmagnet/magnet interface to check that our
results remain
valid beyond the ballistic limit. Rather surprisingly both our single-orbital model
calculations and fully realistic
calculations for Co/Cu show that the spin current flowing parallel to the spacer/SM interface
can be absorbed by the switching magnet as efficiently as in the traditional CPP geometry.
Spin polarization of the current in the CIP geometry is not as large as in the CPP geometry
but remains sizable, of the same order of magnitude as in the CPP geometry.
\section{Theoretical formulation}
\noindent
The Keldysh formalism was applied previously by Edwards {\it et al.} \cite{ref8}
to calculate the spin-transfer torque in the CPP geometry. An essential requirement for the
implementation of the Keldysh formalism is that a sample with an applied bias can be cleaved
into two noninteracting
left (L) and right (R) parts by passing a cleavage plane between two neighboring atomic planes.
It follows that, initially, neither charge nor spin current flows in the cleaved system although the
left and right parts of the sample have different chemical potentials.
This is most easily achieved for a tight-binding (T.-B.) band structure since the T.-B. hopping
matrix between the L
and R parts can be switched off. We shall, therefore, describe our systems by a tight-binding model, in
general multiorbital with s,p, and d orbitals whose one-electron parameters are fitted to
first-principles band structure, as described previously \cite{ref9}.
The hopping between the L and R parts is then turned on
adiabatically and the
system evolves to a steady state. The nonequilibrium Keldysh formalism provides a
prescription for calculating the steady-state charge and spin currents flowing between the L
and R parts of the connected sample in terms of local one-electron Green functions
for the equilibrium cleaved system. In the CPP geometry, considered by Edwards {\it et al.} \cite{ref8},
the sample
is translationally invariant in the direction parallel to all the interfaces and, therefore,
the relevant quantity is the total spin current flowing between any two
neighboring atomic planes. In particular, the spin-transfer torque acting on the switching magnet
is obtained as the difference
between the spin currents entering and leaving the switching magnet (the spin current is naturally
conserved in the nonmagnetic spacer and leads). Edwards {\it et al.} \cite{ref8} showed that the local spin
current is expressed entirely in terms of the
one-electron surface Green functions $g_{L}(\mbox{\boldmath $k_{\parallel}$})$ and $g_{R}(\mbox{\boldmath $k_{\parallel}$})$ for the cleaved sample. Here,
$\mbox{\boldmath $k_{\parallel}$}$ is the wave vector parallel to the interface.
The Green
functions at the surfaces of the cleaved system are obtained from the surface Green functions
of the nonmagnetic leads by the method of adlayers \cite{ref9}. In this method one ``grows''
the sample by depositing, one by one,
all its atomic planes on the leads and, after each deposition, the surface Green
function is updated using Dyson's equation. The surface Green function of semi-infinite leads is obtained
by the method of Umerski \cite{andrey}.
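For a single-orbital semi-infinite lead, the matrix machinery above reduces to a simple scalar recursion that is easy to test. The sketch below is our own illustration, not the code used in the paper: it computes the retarded surface Green function of a semi-infinite one-dimensional tight-binding chain (on-site energy $\varepsilon$, hopping $t$) by the standard decimation, or chain-doubling, iteration and compares it with the closed-form solution of $g=[z-\varepsilon-t^2g]^{-1}$. In the multi-orbital, multi-chain problem of the text the scalars become matrices and the update step is the Dyson equation of the adlayer method.

```python
import math

def surface_g(E, eps=0.0, t=1.0, eta=1e-7, tol=1e-14, maxit=200):
    """Retarded surface Green function of a semi-infinite 1D chain by
    decimation: each pass integrates out every other site, doubling the
    effective chain length, until the effective hopping is negligible."""
    z = E + 1j * eta
    e_s, e_b, a = eps, eps, t       # surface energy, bulk energy, effective hopping
    for _ in range(maxit):
        g_b = 1.0 / (z - e_b)
        aga = a * g_b * a
        e_s += aga                  # a surface site couples to one decimated neighbor
        e_b += 2.0 * aga            # a bulk site couples to two
        a = aga
        if abs(a) < tol:
            break
    return 1.0 / (z - e_s)

# closed form for eps = 0 and |E| < 2|t| (retarded branch, Im g < 0):
# g(E) = (E - i sqrt(4 t^2 - E^2)) / (2 t^2)
E = 0.5
g = surface_g(E)
g_exact = (E - 1j * math.sqrt(4.0 - E * E)) / 2.0
print(g, g_exact)              # agree to ~eta
print(-g.imag / math.pi)       # surface density of states, positive inside the band
```

Inside the band the iteration converges once the effective hopping has been damped by the small imaginary part $\eta$; outside the band it converges in a handful of steps.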
We now wish to apply the Keldysh method to the CPIP and CIP geometries. Referring to Fig.2, it is clear
that the translational invariance is broken in the z and y directions but $k$-space description remains
valid in the x direction. We, therefore, need to work in a representation that is atomic-like in
the z and y directions but Bloch-like in the x direction. The method for modelling CPIP and CIP systems
is shown schematically in
Fig.3 for the CPIP geometry. The whole system is built up from chains of atoms parallel to the z axis
\begin{figure}
\includegraphics[width=0.45\textwidth]{fig3.eps}
\caption{\footnotesize Schematic model of the CPIP geometry. Two alternative locations of a cleavage plane are
labeled by (1) and (2).}
\end{figure}
which are repeated periodically in the x direction. We shall label the position of each chain by
$n$ and the position of atoms within a chain by $m$. Although we shall frequently refer to chains,
in reality each chain stands for a sheet of atoms since the chains are repeated periodically in the x
direction.
The tight-binding on-site potentials depend on the location of each atom in the sample and those
for magnetic atoms include an interaction between
electrons in d orbitals which leads to an exchange splitting of the bands in the ferromagnets.
The region which lies outside the sample is modelled by fictitious
atoms with an infinite on-site potential which prevents electrons from hopping to these vacant sites.
All chains can thus be regarded as having the same length of $N=N^{ld}+N^{sm(vac)}$ atoms, where
$N^{ld}$
and $N^{sm(vac)}$ are, respectively, the numbers of atoms in the lead and in the switching magnet
(vacuum) in the
vertical z direction. It follows that we can create the whole sample by depositing all its chains
one by one on semi-infinite left and right leads.
The surface Green functions on the chains located immediately to the
left and right of a cleavage plane, that are required in the calculation of the spin current,
are obtained by updating the Green function
from the Dyson equation after each chain deposition. Since the deposition of chains of atoms takes place
in real space in the z direction the Green function is a matrix of dimension
$(2\times N^{orb}\times N)\times (2\times N^{orb}\times N)$, where $N$ is the number of atoms in a chain and
$N^{orb}$ is the number of orbitals. The factor 2 appears because the Green function has two
components corresponding to two spin projections on the spin quantization axis.
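The chain-by-chain growth can be sketched in the same spirit as the scalar adlayer recursion, now with matrices. For brevity the spin and orbital indices are dropped here, so the matrices run only over the atoms of a chain; the chain Hamiltonian `h_chain` and interchain hopping `T` below are illustrative placeholders.

```python
import numpy as np

def grow_surface_gf(E, h_chain, T, n_chains=500, eta=1e-6):
    """Grow the sample chain by chain: after each deposition the surface
    Green function is updated with Dyson's equation,
        G <- [ (E + i*eta) I - h_chain - T^dagger G T ]^{-1}.
    h_chain : on-site Hamiltonian of one chain (matrix over its atoms)
    T       : interchain hopping matrix."""
    N = h_chain.shape[0]
    z = (E + 1j * eta) * np.eye(N)
    G = np.linalg.inv(z - h_chain)          # Green function of the first chain
    for _ in range(n_chains - 1):
        G = np.linalg.inv(z - h_chain - T.conj().T @ G @ T)
    return G

# One atom per chain, one orbital: reduces to the scalar lead recursion.
h1 = np.zeros((1, 1))
t1 = np.eye(1)
G1 = grow_surface_gf(3.0, h1, t1)
```

For a single-site "chain" this reproduces the scalar semi-infinite-lead result, which provides a simple check of the matrix recursion.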
To calculate the spin and charge currents we
assume that a bias $V_{b}$ is applied between the left and right leads.
Our goal is to determine the spin and charge currents between any two neighboring chains of atoms
parallel to the z axis, i.e., to the interface between the
left (polarizing) magnet and the lead. If the cleavage line is first passed to the
left of the switching magnet and then to the right of the magnet, as indicated in Fig.3, the
spin-transfer torque acting on the switching magnet is obtained as the difference between
the total spin
currents in these two locations. Following Edwards {\it et al.} \cite{ref8} and assuming the
linear-response case of a small bias, it is straightforward
to show that the thermal average of the total spin current $j_{n-1}$ flowing between the
chains $n-1$ and $n$ is given by
\begin{widetext}
\begin{eqnarray}
\langle\mbox{\boldmath $j$}_{n-1}\rangle = \frac{1}{4\pi}\sum_{k m} \, {\rm Re} \, {\rm Tr} \{[g_{L}TABg^{\dagger}_{R}
T^{\dagger}-AB+\frac{1}{2}(A+B)]\mbox{\boldmath $\sigma$} \}_{mm}V_{b},
\label{eq1}
\end{eqnarray}
\end{widetext}
where $A=[1-g_{L}^{\dagger}Tg_{R}^{\dagger}T^{\dagger}]^{-1}$,
$B=[1-g_{L}Tg_{R}T^{\dagger}]^{-1}$ are defined in terms of retarded surface Green function
matrices
$(g_{L})_{mm'k}$, $(g_{R})_{mm'k}$ for the decoupled equilibrium system. The subscript L(R) refers
to the chains on the left (right) of the cleavage line. The Green functions depend on the wave
vector $k$ labelling Bloch states in the x direction and on the indices $m$, $m'$ labelling the atoms
in a chain. The matrix $T$ is the tight-binding interchain hopping matrix. The components of
$\mbox{\boldmath $\sigma$}$ are direct products of the
2$\times$2 Pauli matrices $\sigma_{x}$, $\sigma_{y}$, $\sigma_{z}$ and $(N\times N^{orb})\times (N\times N^{orb})$
unit matrix.
Finally, the trace in Eq.(1) is taken over all the orbital and spin indices which are suppressed.
Equation (1) yields the charge current if $\frac{1}{2}\mbox{\boldmath $\sigma$}$ is replaced by a unit matrix multiplied
by $e/\hbar$, where $e$ is the electronic charge.
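To see the structure of Eq. (1) at work, one can evaluate the bracketed matrix for a toy single-atom chain at one $k$ point, with spin-diagonal surface Green functions (collinear magnetizations along the quantization axis). The bracket is then diagonal in spin space, so the transverse in-plane component obtained by tracing against $\sigma_x$ vanishes identically, as it must for collinear magnets. All numerical values below are illustrative, not taken from the paper.

```python
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)

def eq1_bracket(gL, gR, T):
    """The matrix  gL T A B gR^+ T^+ - A B + (A+B)/2  entering Eq. (1),
    here for a single atom, single orbital and one k point, so that all
    quantities are 2x2 matrices in spin space.
    A = [1 - gL^+ T gR^+ T^+]^{-1},  B = [1 - gL T gR T^+]^{-1}."""
    I = np.eye(2, dtype=complex)
    A = np.linalg.inv(I - gL.conj().T @ T @ gR.conj().T @ T.conj().T)
    B = np.linalg.inv(I - gL @ T @ gR @ T.conj().T)
    return gL @ T @ A @ B @ gR.conj().T @ T.conj().T - A @ B + 0.5 * (A + B)

# Collinear case: both surface Green functions diagonal in the z spin basis.
gL = np.diag([0.3 - 0.8j, 0.1 - 0.4j])    # illustrative retarded values
gR = np.diag([0.2 - 0.7j, 0.4 - 0.2j])
T = np.eye(2, dtype=complex)              # spin-independent hopping
jx = np.real(np.trace(eq1_bracket(gL, gR, T) @ sigma_x))
```

A nonzero transverse component, and hence a spin-transfer torque, only appears once the two Green functions are diagonal in different spin bases, i.e., for noncollinear magnetizations.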
It follows from Eq.(1) that the total spin current (charge current) between the chains $n-1,n$ is the
sum of partial currents flowing between pairs of atoms which are located on the opposite sides
of the cleavage plane and connected by the tight-binding hopping matrix. By evaluating the individual partial
currents we can, therefore, obtain detailed information about the local current flow. Equation (1)
yields, of course, only information about current flow in the y direction, which is perpendicular
to the cleavage line. However, by applying locally Kirchhoff's law, the current components in the
direction parallel to the cleavage line (z axis) can also be determined. The current vector
describing the flow of charge current between any two neighboring atoms in the (y,z) plane can be
thus reconstructed. While local currents are not conserved, the total charge current between any
two neighboring chains anywhere in the system is, of course, conserved. The total spin current between
neighboring chains is conserved in the nonmagnetic parts of the system but can be absorbed in the
magnets, which gives rise to spin-transfer torque. The application of Eq.(1) to specific CPIP and CIP
structures will be discussed in Section 3.
\section{Results for a single-orbital tight binding model.}
\noindent
To gain some insight, we have first applied the Keldysh formalism to the CPIP and CIP geometries using
a single-orbital tight-binding model with atoms on a simple cubic lattice and nearest-neighbour hopping $t$.
In this model the relevant parameters are the on-site potentials $V^{\uparrow}$, $V^{\downarrow}$ which
are measured in the units of $2t=1$. The Fermi level is always set at zero.
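As a quick, illustrative consistency check (an aside, not part of the paper's calculation): with $2t=1$, the simple-cubic nearest-neighbour band $\epsilon({\bf k}) = V + 2t(\cos k_x + \cos k_y + \cos k_z)$ spans $[V-3, V+3]$, so whether each spin band crosses the Fermi level $E_F = 0$ follows directly from the band edges for the on-site potentials quoted in Fig. 4.

```python
def band_crosses_fermi(V, two_t=1.0, E_F=0.0):
    """True if the simple-cubic band [V - 3*(2t), V + 3*(2t)] contains E_F."""
    return V - 3.0 * two_t < E_F < V + 3.0 * two_t

# Fig. 4a: both spin bands of the magnets cross E_F = 0 (weak magnets).
fig4a_weak = band_crosses_fermi(1.5) and band_crosses_fermi(2.5)
# Fig. 4b: the minority band (V = 4) lies entirely above E_F (half-metal),
# while the majority band (V = 0.7) crosses it.
fig4b_minority_empty = not band_crosses_fermi(4.0)
fig4b_majority_crosses = band_crosses_fermi(0.7)
```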
We begin with the CPIP geometry illustrated in
Fig.2a. For a meaningful comparison of the CPIP geometry with the traditional CPP setup, we also need to
determine the CPP spin current for a system which is finite in the z direction. We have, therefore,
applied to the CPP geometry the same real space method described in section 2 for the CPIP and
CIP geometries.
We choose the total number $N$ of atoms in a chain to be the same in the CPP and CPIP geometries
and make all the spin currents dimensionless by dividing them by the
total charge current multiplied by $\hbar/2e$, where $e$ is the electronic charge. The magnetization of the polarizing magnet is assumed to be parallel to the x axis and
that of the switching magnet is parallel to the z axis. For simplicity, we choose the polarizing magnet
to be semi-infinite in the y
direction. The switching magnet should, of course, be finite since the torque is calculated by taking the
difference between the spin currents before and after the switching magnet. However, it has been
demonstrated for the CPP geometry \cite{ref51} that the dependence of the outgoing spin current on the
switching magnet thickness is almost exactly the same as the dependence of the spin current
on the distance
from the spacer/switching magnet interface in a semi-infinite
magnet. We checked that this is also true for the CPIP geometry. We may, therefore, determine the
spin-transfer torque using a semi-infinite switching magnet. The advantage of using a semi-infinite magnet
is a faster convergence of the $k$-space sum since small and physically unimportant interference effects
which occur in a ferromagnet of a finite thickness are eliminated.
Placing a cleavage plane in the position (1) in Fig.3, we first determine from
Eq.(1) the spin
current in the nonmagnetic spacer, i.e. the spin current incident on the switching magnet.
We then place a cleavage
plane between any two neighboring atomic chains in the switching magnet and determine again from Eq.(1)
the local spin current in the magnet. The spin current $j_{n-1}$ flowing between the
chains $n-1$ and $n$ can then be plotted as a function of the position $n$ of the cleavage plane in the
switching magnet. Such plots are shown in Fig.4 for $N=20$ and for three different aspect ratios
$N^{sm}/N=1/20$, $N^{sm}/N=10/20$, and $N^{sm}/N=19/20$ corresponding to the height of the switching magnet in the
CPIP geometry of one atom, ten atoms, and nineteen atoms. The dependence of the spin current on $n$ in the
CPP geometry is also shown in Fig.4. The spin current curves in Fig.4a and 4b correspond to different
tight-binding on-site potentials in the polarizing and switching magnets, which are listed in the figure.
Those in Fig.4a were chosen so that
the Fermi level in the polarizing and switching magnets intersects both the majority- and minority-spin bands
(a weak magnet)
and there is a perfect matching between the bands of the nonmagnetic spacer and one of the ferromagnet bands.
In Fig.4b both the polarizing and switching magnets are half-metals, i.e., the minority-spin band is empty.
It should be noted that, in general, the spin current relevant for
current-induced switching has in-plane (x) and out-of-plane (y) components \cite{ref8}. However, we
show in Fig.4 only the in-plane component since it is usually most important in switching.
\begin{figure}
\includegraphics[width=0.45\textwidth]{fig4mod.eps}
\caption{\footnotesize Dependence of the spin current on the position $n$ of the cleavage plane in the
switching magnet. The on-site potential parameters in (a) are
$V^{\uparrow}=1.5$, $V^{\downarrow}=2.5$ for the PM, $V^{\uparrow}=V^{\downarrow}=1.7$ for the spacer, and
$V^{\uparrow}=1.7$, $V^{\downarrow}=2.4$ for the SM.
The on-site potential parameters in (b) are
$V^{\uparrow}=0.7$, $V^{\downarrow}=4$ for the PM, $V^{\uparrow}=V^{\downarrow}=0.7$ for the spacer, and
$V^{\uparrow}=0.7$, $V^{\downarrow}=4$ for the SM.}
\end{figure}
It can be seen from Fig.4 that both the CPP and CPIP spin currents decrease as the cleavage plane
is moved
through the switching magnet and become almost zero for a switching magnet about fifty to a hundred
chains wide. The only exception occurs for the aspect ratio $N^{sm}/N=19/20$ for which the
spin current is virtually nondecaying. This will be explained later, once the physical mechanism governing the
spin current absorption is clarified.
Zero outgoing spin current corresponds to complete absorption of the spin current by the
switching magnet, i.e., maximum spin-transfer torque. Fig.4 demonstrates that almost complete absorption
of the spin current is achieved not
only in the CPP but also in the CPIP geometry.
It should be noted that the rate of decay of the CPIP spin current for a half-metallic magnet (Fig.4b) is
comparable to that
for a weak magnet (Fig.4a) but the CPP spin current decays much faster in a half-metallic ferromagnet.
Since the results in Figs.4a and 4b were obtained for
magnets with different band parameters, it is clear that a complete absorption of the spin current by the
switching magnet in the CPIP geometry is a general phenomenon. It can also be seen from Fig.4 that a
switching magnet of height of only one atom has essentially the same absorbing power as that having
height of ten atoms.
To understand these rather surprising results, we first
recall the physical mechanism that governs the absorption of spin current in the CPP geometry \cite{stiles,ref51}. For noncollinear magnetizations of the polarizing and switching magnets, the spin of electrons
incident on the switching magnet is
at an angle to its exchange field. It follows that the spin
must precess in the exchange field of the switching magnet. The precession
frequency is determined by the components of the wave vectors of majority- and minority-spin
electrons parallel to the current flow (perpendicular to the interfaces).
Given that the sum of the energies corresponding to perpendicular and parallel motion of electrons is constant
(equal to the Fermi energy), the perpendicular components of the wave vector, which determine the precession
frequency, are functions of the parallel component $\mbox{\boldmath $k_{\parallel}$}$.
Since the total spin current involves the sum over $\mbox{\boldmath $k_{\parallel}$}$, destructive interference of precessions with
different frequencies occurs. The conventional stationary phase argument \cite{stiles} then shows that
only an extremal frequency of spin current oscillations survives. The stationary phase argument also
predicts that the amplitude of spin current oscillations decays as a function of
the distance from the spacer/magnet interface. Such a behaviour of the CPP spin current is clearly
seen in Fig.4a. The fast decay of the CPP spin current in the case of a half-metallic switching magnet can
be explained as follows. The wave function of an electron with a spin at an angle to the exchange field of a
half-metallic switching magnet is a linear combination of the wave functions with spin parallel and
antiparallel to the exchange field. However, since only electrons with one spin projection on the direction of
the exchange field (magnetization) exist in a half-metallic magnet the precession amplitude must decay
exponentially. This is the behaviour seen for the CPP spin current in Fig.4b.
It is reasonable to assume that spin precession mechanism is also responsible for the decay of the spin
current in the CPIP geometry. However, we need to establish that destructive interference of precessing spins
can occur in this geometry and also that electrons travelling parallel to the spacer/switching
magnet interface do penetrate the switching magnet, so that their spin can precess in the local
exchange field. In an inhomogeneous finite sample shown in Fig.3, size quantization occurs and
electrons thus travel in discrete size-quantized conductance channels. This effect
combined with the sum over the wave vector $k$ in the x direction provides in the CPIP geometry
the relevant channels for destructive interference. However, because of the complexity of size
quantization both in the y and z directions, a simple stationary phase argument is no longer applicable
and an analytical formula for the spin current decay in the CPIP geometry is thus not available.
The only exception is the case with an aspect ratio $N^{sm}/N=19/20$ in Fig.4a where size quantization is so
severe that only one conductance channel is available. Destructive interference then occurs due only to
different $k$-space channels to which the conventional stationary phase argument is applicable. In contrast to
the planar CPP geometry, the $k$-space sum in the CPIP geometry is one-dimensional and, therefore, the decay
of spin current oscillations is much slower than in the planar CPP geometry.
Although in the general case of a large number of size-quantized conductance channels we do not have a simple
stationary-phase formula for the spin current in the CPIP geometry, we can nevertheless make an estimate
of the slowest decay of the spin current in a lateral switching magnet. The spin current in Eq.(1) is the trace
over the real-space position in the vertical (z) direction combined with the sum over the wave vector $k$
labelling Bloch states in the x direction. The
trace in the real space is essentially equivalent to a sum over discrete size-quantized conductance channels.
For each conductance channel the sum over the wave vector $k$ can be performed using the conventional
stationary phase argument (see Ref.~\onlinecite{itoh}). That gives a decay of the spin current in each discrete
conductance channel
of the form $\propto 1/\sqrt{n}$, where $n$ is the position of the cleavage plane in the switching magnet.
Since this conventional stationary-phase argument can be applied to each conductance channel, the slowest decay
of the spin current must be $\propto 1/\sqrt{n}$. In practice, destructive interference between different conductance
channels also occurs, and that should lead to a faster decay than the most pessimistic estimate
$\propto 1/\sqrt{n}$.
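The $1/\sqrt{n}$ envelope from the stationary-phase argument is easy to verify numerically for a single conductance channel. The quadratic model phase $\phi(k) = 1 + k^2$ below is an illustrative choice with a single stationary point, not a dispersion taken from the paper.

```python
import numpy as np

k = np.linspace(-1.0, 1.0, 20001)        # discretized one-dimensional k channel

def channel_sum(n):
    """Destructive interference of precession phases over one k channel:
    S(n) ~ integral of cos(n * phi(k)) dk with the toy phase
    phi(k) = 1 + k^2 (one stationary point at k = 0)."""
    return np.cos(n * (1.0 + k**2)).mean() * 2.0   # simple Riemann estimate

def scaled_envelope(n_lo, n_hi):
    """Maximum of sqrt(n) * |S(n)| over a window of n values; the stationary
    phase prediction S(n) ~ sqrt(pi/n) cos(n + pi/4) makes this ~ sqrt(pi)."""
    return max(abs(channel_sum(n)) * np.sqrt(n) for n in range(n_lo, n_hi))
```

Comparing the scaled envelope over well-separated windows of $n$ shows that it stays roughly constant, i.e., the interference sum itself decays as $1/\sqrt{n}$.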
It remains to demonstrate that transport electrons penetrate the switching magnet
despite the fact that they travel parallel to the interface. To show this, we have determined the distribution
of the local charge current in the switching magnet using the method outlined in section 2. The behaviour of the
charge current is shown in Fig.5 for $k=0$ (strictly two-dimensional system) and the aspect ratio $N^{sm}/N=10/20$.
\begin{figure}
\includegraphics[width=0.5\textwidth]{fig5.eps}
\caption{\footnotesize Distribution of the charge current in the
switching magnet in the CPIP geometry. The on-site potential parameters are
$V^{\uparrow}=0.7$, $V^{\downarrow}=3.7$ for the PM, $V^{\uparrow}=V^{\downarrow}=1.1$ for the spacer, and
$V^{\uparrow}=1.1$, $V^{\downarrow}=1.9$ for the SM.}
\end{figure}
The orientation of each arrow in Fig.5 represents the direction of the current flow and the length of the arrow
gives the magnitude of the local charge current flowing between neighboring atoms. Figure 5 demonstrates
that there is strong penetration of transport electrons into the switching
magnet, and it is the spin precession of these electrons that results in a spin-transfer torque
(spin current absorption) which is as large as in the CPP geometry.
Finally, we need to explain why the decay of the CPIP spin current in a half-metallic ferromagnet is slower
than in the CPP geometry. In the CPP geometry all electrons have to pass through the switching magnet and the
spin current thus decays exponentially as discussed above. In the CPIP geometry there are many electrons that
only partially penetrate the switching magnet and are then reflected back to the spacer. The spin of
such electrons with a shallow penetration can precess in the exchange field of the switching magnet
and the decay of the spin current is
thus not qualitatively different from that for a weak magnet (see Fig.4a and 4b).
The results shown in Fig.4 and Fig.5 are for structures with perfect interfaces, as illustrated in Fig.2a.
Interfaces in real structures may well be rough and it is, therefore, necessary to investigate the effect of
interfacial roughness on the absorption of the spin current by the lateral switching magnet. Since the systems
we consider are
"grown" in real space it is straightforward to include in our calculations the effect of a random intermixing
of atoms in the nonmagnetic spacer and switching magnet. The effect of an intermixing over two interfacial atomic
planes on the absorption of the spin current is shown in Fig.6. The intermixing was modelled by replacing the two
interfacial atomic planes by a 50\% alloy of spacer and magnet atoms. The results for a perfect system
are also reproduced in Fig.6. It can be seen that intermixing does not spoil the strong absorption of the spin current
by a lateral switching magnet. The other interesting feature is that the spin current for a perfect CPIP system
exhibits oscillations reminiscent of those that are seen in the CPP geometry. While oscillations of the spin
current in the CPP geometry can be explained by the stationary phase theory, a simple stationary-phase argument is
not available for the CPIP geometry and the precise origin of the oscillations in this geometry is thus not clear.
However, it can be seen in Fig.6 that CPIP oscillations are removed in a system with rough interface.
\begin{figure}
\includegraphics[width=0.45\textwidth]{fig6.eps}
\caption{\footnotesize Dependence of the spin current on the position $n$ of the cleavage plane in the
switching magnet for a rough and a perfect interface. The on-site potential parameters are $V^{\uparrow}=2.1$,
$V^{\downarrow}=2.9$
for the PM, $V^{\uparrow}=V^{\downarrow}=2.1$ for the spacer, and
$V^{\uparrow}=2.1$, $V^{\downarrow}=2.9$ for the SM.
The lead/magnet height is 10/10 atomic planes for both the perfect and the rough systems.}
\end{figure}
We now investigate the CIP geometry in which the current flows parallel not only to the interface between the
spacer and the switching magnet but also to the interface between the
spacer and the polarizing magnet. Since the absorbing power of the switching magnet in the CIP geometry
must clearly be the same as in the CPIP geometry, the key question here is the polarizing ability of
a polarizing magnet whose interface with the spacer is parallel to the current flow. To determine the spin current, we
proceed as in the CPIP geometry (Fig.3). We place a cleavage
plane between any two neighboring atomic chains in the switching magnet and determine from Eq.(1)
the local spin current as a function of the position $n$ of the cleavage plane in the
switching magnet. The continuity of the spin current guarantees that the value of the spin
current at the spacer/switching magnet interface is equal to the
spin current in the spacer. It follows that the values of the spin current incident on and leaving the switching
magnet
can both be determined from the profile of the spin current in the switching magnet. This is shown in Fig.7 for
the situation when the polarizing magnet is a half-metal (the minority-spin band is empty) but the Fermi level
in the switching
magnet intersects both the majority and minority-spin bands.
\begin{figure}
\includegraphics[width=0.4\textwidth]{fig7.eps}
\caption{\footnotesize Dependence of the spin current on the position $n$ of the cleavage plane in the
switching magnet for a CIP and a CPP system. The on-site potential parameters are $V^{\uparrow}=2.1$, $V^{\downarrow}=5.1$
for the PM, $V^{\uparrow}=V^{\downarrow}=2.1$ for the spacer and
$V^{\uparrow}=2.1$, $V^{\downarrow}=2.9$ for the SM.
The lead/magnet height is 10/10 atomic planes.}
\end{figure}
There are two interesting features seen in Fig.7. First of all we note that in the CPP geometry only majority-spin
carriers can pass through a half-metallic polarizing magnet and, therefore, the spin polarization of the current
incident on the switching magnet is 100\% and in the direction of the spin of the majority-spin carriers.
On the other hand,
the spin polarization in the CIP geometry is much smaller,
only about 25\%. The second interesting feature is that the spin polarization of the current in the CIP geometry
has a sign opposite to that in the CPP geometry. This can be most easily understood in our special case of a
half-metallic polarizing magnet whose majority-spin band matches exactly the bands of either spin in the
nonmagnetic spacer. Minority-spin carriers, which cannot penetrate the polarizing magnet, travel as if in a
perfect slab without being scattered from the region in which the polarizing magnet is located. On the other hand,
majority-spin carriers which can easily penetrate the polarizing magnet region are strongly scattered by the
geometrical inhomogeneity of that region, which strongly reduces, but does not completely suppress, their current flow.
We thus do not expect the spin polarization to be complete. Moreover, the current of the
minority-spin carriers is larger than that of the majority-spin carriers and the sign of the
spin current polarization is thus reversed.
\section{Results for Co/Cu lateral CPIP system.}
\noindent
Our model calculations for a single-orbital tight-binding band indicate that the absorption of the spin current
by a lateral magnet in the CPIP (CIP) geometry is as efficient as in the standard CPP geometry. To confirm
that these
results remain valid for a fully realistic system, we have made calculations of the spin
current profile in a cobalt switching magnet whose interface with a nonmagnetic copper spacer is parallel to
the current flow (CPIP geometry illustrated in Fig.2a). We used in these calculations a semi-infinite fcc Co
sheet of height 4 or 8 atomic planes as the polarizing magnet. The switching magnet was a sheet of Co of
height 4 (8) atomic planes deposited on a Cu lead whose height was also 4 (8) atomic planes. The crystal
orientation of the Co and Cu sheets was (001). Both Co and Cu sheets were described by a fully realistic
multiorbital tight-binding model with tight-binding parameters fitted to the results of first-principles band
structure calculations (see Ref.~\onlinecite{ref9}). The magnetization of the polarizing Co magnet was taken to be
in the x direction and that of the switching Co magnet was in the z direction. As in our
one-band model calculations, the Co/Cu CPIP system was grown in real space and the spin current was evaluated
without any approximations from the Keldysh formula (1). It should be noted that for a system with 8+8 atomic
sheets, all the matrices in Eq.(1) have size $(36\times 16)\times (36\times 16)$, which makes the evaluation of the spin current
computationally very demanding. Hence our restriction to the maximum size of 8+8 atomic sheets. The dependence
of the CPIP in-plane spin current on the position $n$ of the cleavage plane in the Co switching magnet is shown
in Fig.8. For comparison, the CPP spin current is also shown in Fig.8 (continuous line). In the CPP geometry,
\begin{figure}
\includegraphics[width=0.45\textwidth]{fig8.eps}
\caption{\footnotesize Dependence of the spin current on the position $n$ of the cleavage plane in the
cobalt switching magnet.}
\end{figure}
the Co polarizing magnet, the Cu spacer, and the Co switching magnet were all sheets of 4 atomic planes.
It can be seen from Fig.8 that in the
case of the 4+4 CPIP system the absorption of the spin current is as fast as in the conventional CPP
geometry. The long-period oscillations of the spin current in the CPIP and CPP geometry are very similar but we
can see in the CPP geometry an additional short oscillation period which is not present in the CPIP geometry.
The absorption of the spin current for the 8+8 CPIP system is slower but, nevertheless, more than two-thirds of
the spin current is absorbed over 50 atomic planes. Our results for realistic Co/Cu systems thus confirm the
viability of a setup with a lateral switching magnet, i.e. the CPIP geometry in which the current flows
parallel to the spacer/switching magnet interface.
\section{Conclusions.}
\noindent
Using the nonequilibrium Keldysh theory, we have investigated theoretically two geometries for
current-induced switching of magnetization in
which the current flows parallel to the magnet/nonmagnet interface. In the first geometry the current
is perpendicular to
the polarizing magnet/spacer interface but parallel to the spacer/switching magnet interface (CPIP).
In the second
geometry the current is parallel to both the polarizing magnet/spacer and spacer/switching magnet
interfaces (CIP). Our calculations for a single-orbital tight binding model indicate that the spin current
flowing parallel to the switching magnet/spacer interface can be absorbed by a lateral switching magnet
as efficiently as in the traditional CPP geometry. We have confirmed that the results of such model calculations
in the CPIP geometry are also valid for
an experimentally relevant Co/Cu CPIP system described by fully realistic tight-binding bands fitted to
an ab initio band structure. Our results show that almost complete absorption of the incident spin current
by a lateral switching magnet (magnetic dot) occurs when the lateral dimensions of the switching magnet are
of the order of 50-100 interatomic distances, i.e., about 20 nm. The numerical results are supported by an
analytical stationary phase argument which indicates that the decay of the spin current in a lateral switching magnet
should not be slower than $1/\sqrt{n}$, where $n$ is the lateral size of the magnet measured in the units of
interatomic spacing. Hence about 90\% spin current absorption should be achieved by a magnet of a lateral size
of about 20 nm.
Moreover, to achieve full absorption of the spin current
(maximum spin-transfer torque), the height of a lateral switching magnet can be as
small as a few atomic planes. It follows that the total volume of the switching magnet in the CPIP (CIP) geometry
can be even smaller than that in the traditional CPP geometry using magnetic nanopillars. This indicates that
current-induced switching and microwave generation in the CPIP geometry should be feasible. We have also demonstrated
that strong spin current absorption in the CPIP/CIP geometry is not spoilt by the presence of a rough interface
between the switching magnet and nonmagnetic spacer.
We find that the polarization achieved using a lateral magnet in the CIP
geometry is only about 25\% of that in the traditional CPP geometry. The CPIP geometry is thus preferable but
CIP could still be usable with a stronger current.
Finally, we wish to make contact with the recent experiment, see Ref.~\onlinecite{ref55}, in which the so-called
pure-spin-current-induced magnetization switching was demonstrated. In the experimental setup of Ref.~\onlinecite{ref55}
the current was spin polarized by passing it through a magnet (current perpendicular to magnet/spacer interface)
but the resultant spin current was absorbed by a lateral magnet (current parallel to magnet/spacer interface).
The experimental setup of Ref.~\onlinecite{ref55} is thus topologically equivalent to our CPIP geometry.
\begin{acknowledgments}
\noindent
We are grateful to the UK Engineering and Physical Sciences Research Council for financial support
within the framework of the Spin@RT Consortium and to the
members of the Consortium for stimulating discussions.
\end{acknowledgments}
\section{Introduction}
An essential ingredient for quantum computation and information processing is the
presence of coherent superpositions. A single isolated two-level system can be
prepared in a coherent superposition of $|0\rangle$ and $|1\rangle$ states, and the
manipulation of such states leads to new possibilities
for storage and processing of information \cite{Nielsen}.
In contrast to the ideal isolated case, the interactions of real quantum systems with their
environment lead to the loss of these coherent superpositions, in other words, to decoherence.
The more realistic case, however, is the manipulation of many qubits. Coherent superposition of such states
leads to the concept of entanglement, which forms a precious resource for quantum computation and
information. The fragility
of entanglement is due to the coupling between a quantum system and its environment;
such a coupling leads to decoherence, the process
by which information is degraded \cite{MaxSc,zurek2}.
In fact, decoherence is one of the main obstacles for the preparation, observation, and
implementation of multi-qubit entangled states.
The intensive work on quantum information and computing in recent years has
tremendously increased the
interest in exploring and controlling decoherence effects
\cite{nat1,milb2,QA,CJ,zurek,diehl,verst,weimer}.
In this work we address the problem where each of the two qubits is dissipatively coupled to a
local bosonic bath; in the quantum-optical sense this means that both two-level
systems are subject to spontaneous emission, which implies relaxation from the excited state to the ground state.
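In the Markovian limit (a simplification of the dynamics studied in this paper), this setting corresponds to a Lindblad master equation with one local lowering operator per qubit. The sketch below, with illustrative values of the Heisenberg coupling $J$ and local emission rate $\gamma$, propagates the two-qubit density matrix and shows the population relaxing to the joint ground state.

```python
import numpy as np

# Single-qubit operators; basis ordering |e> = (1, 0), |g> = (0, 1).
sm = np.array([[0, 0], [1, 0]], dtype=complex)   # lowering operator |g><e|
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

J, gamma = 1.0, 0.2          # illustrative coupling and local emission rate
H = J * (np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz))
Ls = [np.kron(sm, I2), np.kron(I2, sm)]          # one jump operator per bath

def lindblad_rhs(rho):
    """Markovian master equation with local dissipators:
    drho/dt = -i[H, rho] + sum_i gamma (L rho L^+ - {L^+ L, rho}/2)."""
    out = -1j * (H @ rho - rho @ H)
    for L in Ls:
        LdL = L.conj().T @ L
        out += gamma * (L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL))
    return out

# Fourth-order Runge-Kutta propagation from the doubly excited state |ee>.
rho = np.zeros((4, 4), dtype=complex)
rho[0, 0] = 1.0
dt, steps = 0.01, 2000
for _ in range(steps):
    k1 = lindblad_rhs(rho)
    k2 = lindblad_rhs(rho + 0.5 * dt * k1)
    k3 = lindblad_rhs(rho + 0.5 * dt * k2)
    k4 = lindblad_rhs(rho + dt * k3)
    rho = rho + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
```

The non-Markovian treatment developed in this paper replaces these constant rates by memory-dependent ones; the Markovian sketch only fixes notation and the dissipation channel.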
Dissipation can assist the generation of entanglement
\cite{MPl,Beige,PHoro} that can be used for various quantum information processing. For example,
F. Verstraete {\it et al.} \cite{verst} have shown that dissipation can be used as a resource for
the universal quantum computation without any coherent dynamics needed to implement it. Contrary to
other methods, entanglement generation by dissipation
does not require the preparation of a system in a particular input state and exists, in principle, for an arbitrarily
long time. These features
make dissipative methods inherently stable against weak
random perturbations, with the dissipative dynamics stabilizing the entanglement.
The effects of the environment on a system can be classified into processes with memory (non-Markovian)
and without memory (Markovian) \cite{Pet,RF1,RF2,RF3,RF4,self}. In the case of
Markovian processes, the environment acts as a sink for
the system information; the system of interest loses information into the environment and this lost information
plays no role in the dynamics of the system. However,
due to memory effects in the case of non-Markovian dynamics, the information lost by the system during the interaction with the environment
will return to the system
at a later time. This makes non-Markovian dynamics
complicated. Understanding the nature
of non-Markovian dynamics is naturally a very important topic for quantum information science, where the aim is to control a quantum system
for use in technological applications \cite{Terhal,Ban,Wolf,Cirac}. In general, three time scales
in an open system exist to characterize non-Markovian
dynamics: (i) the time scale of the system; (ii) the time
scale of the bath, given by the bandwidth of the bath spectral density; and (iii) the mutual time scale arising from the coupling between the system and the bath. It is usually
believed that non-Markovian effects strongly rely on the
relations among these different time scales \cite{27, 30, 31}.
In this paper we derive a quantum master equation for interacting qubits with local dissipation. The equation is derived utilizing the completeness of the
eigenbasis of the Hamiltonian of the interacting qubits. The time evolution of the density matrix turns out to be the sum of the time evolutions corresponding
to the individual qubits, with no cross terms. Next we solve this master equation for $X$-type states under the assumption that the individual baths have the same
properties. The main conclusions of this paper remain the same for other kinds of states and for different bath
correlation functions for each bath; different bath correlation functions can give rise to different time scales in the dynamics and are treated separately. Next we identify different regimes of the dynamics
(Markovian and non-Markovian) and show that in the non-Markovian regime of the dynamics, finite entanglement develops in an initially unentangled state.
This entanglement decays in the same way as the pure-state entanglement, and we find that the decay rate of entanglement is strongly modified by the non-Markovian behavior.
The rest of the paper is organized as follows:
In section II, we introduce the model Hamiltonian and derive the quantum master equation. In related past works \cite{Pet,RF1,RF2,RF3,RF4}, non-interacting
qubits coupled to a common bath have been considered. In this paper, however, we consider qubits interacting through the isotropic
Heisenberg interaction, which is a generic kind of interaction in condensed matter physics.
In section III, we solve the quantum master equation in the eigenbasis of the system Hamiltonian
for a general class of initial quantum states under the assumption that the bath correlation
functions decay in the same way. In section IV we give the decay of entanglement of a certain $X$-type
state. Finally we conclude in section V with remarks on the wider context of our results.
\section{Master equation for local dissipation}
In this section we first derive the master equation for the reduced density matrix of the
system, which governs the dynamics of the system.
We consider two qubits represented by spin-$\frac{1}{2}$ particles or two level atoms coupled to
each other via isotropic Heisenberg interaction. The qubits are subject to local dissipation
through a coupling with a bosonic bath.
The Hamiltonian of the two qubit system is
\begin{eqnarray}
H_s &=& J \overrightarrow{\sigma}_1 . \overrightarrow{\sigma}_2 \nonumber \\
&=& J[\sigma_1^{+}\sigma_2^{-} + \sigma_1^{-}\sigma_2^{+} + \sigma_1^{z}\sigma_2^{z}],
\end{eqnarray}
where $\sigma_i^{\pm}= \frac{\sigma_i^x \pm i\sigma_i^y}{2}$. $J$ represents the energy scale of
the system. The Hamiltonian $H_s$ can be diagonalized exactly i.e. $H_s|\psi_i\rangle =
\epsilon_i |\psi_i\rangle$ where
$|\psi_i\rangle$'s are the eigenstates of the Hamiltonian $H_s$ with the eigen energies
$\epsilon_i$ and are given below (with notation $|0\rangle=|\uparrow\rangle$ and
$|1\rangle =|\downarrow\rangle$ ):
\begin{eqnarray}
\epsilon_1= J;&& ~~~\psi_1 = |00\rangle \nonumber \\
\epsilon_2= 0; &&~~~\psi_2 = \frac{1}{\sqrt{2}}[|01\rangle + |10\rangle] \nonumber \\
\epsilon_3= -2J; &&~~~\psi_3 = \frac{1}{\sqrt{2}}[|01\rangle - |10\rangle] \nonumber \\
\epsilon_4= J; &&~~~\psi_4 = |11\rangle. \nonumber
\end{eqnarray}
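As a quick numerical sanity check (an illustration added here, not part of the original derivation), the quoted spectrum of $H_s$ can be verified directly; the choice $J=1$ and the identification $|0\rangle=|\!\uparrow\rangle$ are assumptions of the sketch.

```python
import numpy as np

# Check (with the illustrative choice J = 1, |0> = |up>) that
# H_s = J[s1+ s2- + s1- s2+ + s1z s2z] has eigenvalues J, 0, -2J, J.
J = 1.0
sp = np.array([[0, 1], [0, 0]], dtype=complex)    # sigma^+
sm = sp.conj().T                                  # sigma^-
sz = np.diag([1.0, -1.0]).astype(complex)

Hs = J * (np.kron(sp, sm) + np.kron(sm, sp) + np.kron(sz, sz))
evals = np.sort(np.linalg.eigvalsh(Hs))
print(np.round(evals, 10))                        # expect -2J, 0, J, J

# the singlet (|01> - |10>)/sqrt(2) should carry the eigenvalue -2J
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)
print(np.allclose(Hs @ singlet, -2 * J * singlet))
```

The same check confirms that the symmetric combination $(|01\rangle+|10\rangle)/\sqrt{2}$ has eigenvalue $0$.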
We write the total Hamiltonian (system + bath) as
\begin{eqnarray}
H=H_s + H_B + H_I
\end{eqnarray}
where $H_B$ is the Hamiltonian for the bath
\begin{eqnarray}
H_B = \sum_{i=1}^{2} \sum_k \omega_k b_{i k}^{\dagger} b_{i k},
\end{eqnarray}
and the dissipative interaction of the system with bath is represented by the Hamiltonian
\begin{eqnarray}
H_I &=& \sum_{i=1}^{2} \sigma_i^x \sum_k[g_{i k} b_{i k} +g_{i k}^{\star} b_{i k}^{\dagger} ]
\nonumber \\
&=& \sum_{i=1}^{2} \sigma_i^x ( B_i + B_i^{\dagger})
\end{eqnarray}
where $B_i = \sum_k g_{i k} b_{i k}$.
Let $\tilde{O}(t)= e^{i H_o t} O e^{-i H_o t}$ represent an operator
in the interaction picture with respect to the free system and bath Hamiltonian $H_o= H_s +H_B$. We can therefore
write $H_I$ in the interaction picture, under the rotating wave approximation, as
\begin{eqnarray}
\tilde{H}_I(t)=\sum_{i=1}^{2} [\tilde{\sigma}_i^{+}(t) \tilde{B}_i(t) +
\tilde{\sigma}_i^{-}(t) \tilde{B}_i^{\dagger}(t)].
\end{eqnarray}
The time evolution of the system operators can be evaluated using the eigen basis of the system
Hamiltonian $H_s$ as
\begin{eqnarray}
\sigma_1^{+}(\tau) &=& \sum_{i,j=1}^{4} \!\!\! P_{ij} \langle \psi_i| \sigma_1^{+} \otimes
I_2| \psi_j \rangle \exp[i (\epsilon_i-\epsilon_j)\tau] \\
\sigma_2^{+}(\tau) &=& \sum_{i,j=1}^{4} \!\!\! P_{ij} \langle \psi_i| I_1 \otimes \sigma_2^{+}
| \psi_j \rangle \exp[i (\epsilon_i-\epsilon_j)\tau]
\end{eqnarray}
where $P_{ij} = |\psi_i\rangle \langle \psi_j |$ is the transition operator, satisfying $P_{ij} P_{jk} =
P_{ik}$ and $\sum_iP_{ii}=I$. In the interaction picture, the time evolution of the total density
matrix (system and bath) $\rho_T(t)$ is given by the
von Neumann-Liouville equation as
\begin{eqnarray}
\frac{d \tilde{\rho}_T(t)}{dt} = -i[\tilde{H}_I(t), \tilde{\rho}_T(t)].
\label{LOE}
\end{eqnarray}
Here we have used $\hbar=1$. We can formally integrate
equation (\ref{LOE}) and write
its solution as:
\begin{eqnarray}
\tilde{\rho}_T(t)= \tilde{\rho}_T(0)- i \int_0^{t}\!\! ds
[\tilde{H}_I(s), \tilde{\rho}_T(s)].
\end{eqnarray}
Substituting this solution back into the commutator of equation (\ref{LOE}), we get, up to second
order,
the following equation:
\begin{eqnarray}
\frac{d\tilde{\rho}_{T}(t)}{dt} =&-&i\left[ \tilde{H}_{I}\left(
t\right) ,\tilde{\rho}_{T}\left( 0\right) \right] \nonumber \\
&-&\int\nolimits_{0}^{t}ds\left[ \tilde{H}%
_{I}\left( t\right) ,\left[ \tilde{H}_{I}\left( s\right) ,\tilde{%
\rho}_{T}\left( s \right) \right] \right].
\end{eqnarray}
The solution of the above equation depends on the initial conditions of the total density operator. We
consider an initially uncorrelated situation, i.e. $\rho_T(0)= \rho_s(0) \otimes \rho_B$, where
$\rho_s$ and $\rho_B$ are respectively the density operators of the system and the bath. Tracing over
the bath degrees of freedom and assuming that $tr_B[\tilde{H}_I(t) \rho_B]=0$, we get the following
time non-local master equation for the reduced density matrix:
\begin{eqnarray}
\frac{d \tilde{\rho}_s(t)}{dt}=- \int_0^{t} \!\!\! ds ~tr_B [ \tilde{H}_I(t), [
\tilde{H}_I (s), \tilde{\rho}_T(s) ] ].
\end{eqnarray}
Since the bath has infinitely many degrees of freedom, the influence of the system on the bath is small
in the weak system-bath coupling case. As a consequence, we write the total density operator as $
\tilde{\rho}_T(s)= \tilde{\rho}_s(s) \otimes \rho_B + \mathcal{O}(\tilde{H_I})$ to second
order in the system-bath coupling \cite{Pet,HJ,HFPB,HPB,MS,EF}. The
replacement
of the total density matrix $\tilde{\rho}_T(s)$ with the uncorrelated state $\tilde{\rho}_s(s) \otimes
\rho_B $ is called the Born approximation. Therefore, under the Born approximation we write
\begin{eqnarray}
\frac{d \tilde{\rho}_s(t)}{dt}=- \int_0^{t}\!\!\! ds ~tr_B [ \tilde{H}_I(t), [
\tilde{H}_I (s), \tilde{\rho}_s(s) \otimes \rho_B ] ].
\end{eqnarray}
The above equation has the form of a delayed integro-differential equation and is therefore a time
non-local master equation. Replacing $\tilde{\rho}_s(s)$ with $\tilde{\rho}_s(t)$ in this equation
\cite{Pet, HFPB, HPB}, we get the time-local master equation:
\begin{eqnarray}
\frac{d \tilde{\rho}_s(t)}{dt}=- \int_0^{t}\!\!\! ds ~tr_B [ \tilde{H}_I(t), [
\tilde{H}_I (s), \tilde{\rho}_s(t) \otimes \rho_B ] ].
\end{eqnarray}
Assuming the bath is initially in the vacuum state, i.e. $\rho_B=|0 \rangle \langle 0|$, and using the form
of $\tilde{H}_I(t)$, we arrive at the following equation:
\begin{eqnarray}
\frac{d \tilde{\rho}_s(t)}{dt} = \mathcal{L}_1[\tilde{\rho}_s(t)] +
\mathcal{L}_2[\tilde{\rho}_s(t)].
\end{eqnarray}
This is a non-trivial result: the master equation contains a sum of terms $\mathcal{L}_i$, one for each qubit, with no cross terms mixing different $\mathcal{L}_i$'s.
This structure is the same as that for non-interacting qubits. Here we have ($i=1,2$)
\begin{eqnarray}
\!\!\! \!\!\! \!\!\!
\mathcal{L}_i(\tilde{\rho}_s(t)) &=& \int_0^{t} \!\!\! ds
\{\Phi_i(t-s)[\tilde{\sigma}_i^{-} (s)
\tilde{\rho}_s(t),\tilde{\sigma}_i^{+} (t) ] \nonumber \\
&&~~~~~~+ \Phi_i^{\dagger}(t-s) [\tilde{\sigma}_i^{-} (t),
\tilde{\rho}_s(t) \tilde{\sigma}_i^{+} (s)] \}
\end{eqnarray}
and the bath correlation function is defined as
\begin{eqnarray}
\Phi_i(t-s)&=& \langle B_i(t-s) B_i^{\dagger}\rangle_0 \nonumber \\
&=&\sum_k |g_{i k}|^2 \exp[-i\omega_k (t-s)].
\end{eqnarray}
Next we revert to the Schr\"{o}dinger picture. With the change of variable
$\tau=t-s$, we write
\begin{eqnarray}
\label{QME}
\frac{d \rho_s(t)}{dt} &=& -i [H_s, \rho_s(t)] \nonumber \\
&&+ \sum_{i=1}^{2} \int_0^{t} d\tau \left[\Phi_i(\tau)[\sigma_i^{-} (-\tau)
\rho_s(t),\sigma_i^{+} ] \right. \nonumber \\
&&~~~~~~~~~~~~~~\left.
+ \Phi_i^{\dagger}(\tau) [\sigma_i^{-} ,
\rho_s(t) \sigma_i^{+} (-\tau)] \right].
\end{eqnarray}
This represents the quantum master equation in the Schr\"{o}dinger picture.
The solution of the above master equation depends on the type of initial states. In the next
section we find its solution for general $X$-type initial states.
\section{Solution of Master equation}
In order to obtain the dynamics of entanglement of our two qubit system, we assume that the qubits
are initially prepared in an X state \cite{TingYU}:
\begin{equation}
\rho_s(0)= \left [
\begin{array}{cccc}
u(0) & 0 & 0 & w(0) \\
0 & x_1(0) & y(0) & 0 \\
0 & y^{\star}(0) & x_2(0) & 0 \\
w^{\star}(0) & 0 & 0 & v(0)
\end{array}
\right]
\end{equation}
where we have used the standard basis $\{|00\rangle, |01\rangle, |10\rangle, |11\rangle\}$. By
the normalization and positivity of $\rho_s(0)$, i.e.
$tr(\rho_s(0))=1$ and $\rho_s(0)\ge 0$, the matrix elements $u, x_1, x_2, v$ are non-negative
parameters with $u + x_1 + x_2 + v=1 $, $\sqrt{uv}\ge|w|$, and $\sqrt{x_1 x_2}\ge |y|$.
One could use more general density matrices with all elements non-zero, but this makes the master equation analytically intractable.
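As an illustration (added here; the specific numbers are arbitrary test values, not from the paper), the stated conditions indeed guarantee a valid density matrix:

```python
import numpy as np

# Illustrative check that trace one, sqrt(uv) >= |w| and sqrt(x1 x2) >= |y|
# give a positive semidefinite X matrix.  Test values are arbitrary.
u, v, w = 0.3, 0.2, 0.2 + 0.1j     # |w| ~ 0.224 <= sqrt(uv) ~ 0.245
x1, x2, y = 0.3, 0.2, 0.15         # |y| = 0.15 <= sqrt(x1 x2) ~ 0.245
rho0 = np.array([[u, 0, 0, w],
                 [0, x1, y, 0],
                 [0, np.conj(y), x2, 0],
                 [np.conj(w), 0, 0, v]])

print(np.isclose(np.trace(rho0).real, 1.0))               # normalization
print(bool(np.all(np.linalg.eigvalsh(rho0) >= -1e-12)))   # positivity
```

The positivity follows because the X matrix is block diagonal in the $\{|00\rangle,|11\rangle\}$ and $\{|01\rangle,|10\rangle\}$ sectors.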
Next we express the X state $\rho_s(0)$ in the eigen basis of $H_s$ as
\begin{eqnarray}
\rho_s(0) & =& a(0) |\psi_1 \rangle \langle \psi_1| + b(0) |\psi_2\rangle \langle \psi_2| + e(0)
|\psi_3\rangle \langle \psi_3| \nonumber \\
&& + ~d(0) |\psi_4\rangle \langle \psi_4| + c(0) |\psi_1\rangle \langle \psi_4| +
c^{\star}(0) |\psi_4\rangle \langle \psi_1| \nonumber \\
&&+~ h(0) |\psi_2\rangle \langle \psi_3| + h^{\star}(0) |\psi_3\rangle \langle \psi_2|
\end{eqnarray}
where the various parameters of the density operator in the eigen basis of $H_s$ are related to
the parameters in the standard basis in the following way:
$a(0) = u(0)$,~$ b(0) =\frac{1}{2}[x_1(0) + x_2(0) + y(0) + y^{\star}(0)] $,~
$ e(0) = \frac{1}{2}[x_1(0) + x_2(0) - y(0) - y^{\star}(0)] $,~
$h(0) = \frac{1}{2}[x_1(0) - x_2(0) - y(0) + y^{\star}(0)] $,
$d(0)= v(0)$,~
$c(0) = w(0)$.
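These basis-change relations can be verified numerically (a sketch with hypothetical test values, added for illustration), by projecting an X matrix onto the eigenstates $\psi_2$ and $\psi_3$:

```python
import numpy as np

# Verify b(0), e(0), h(0) as overlaps <psi_i| rho |psi_j> of an X state.
# The matrix entries below are arbitrary test values.
u, v, w = 0.3, 0.2, 0.1 + 0.05j
x1, x2, y = 0.3, 0.2, 0.1 + 0.02j
rho0 = np.array([[u, 0, 0, w],
                 [0, x1, y, 0],
                 [0, np.conj(y), x2, 0],
                 [np.conj(w), 0, 0, v]])

e01 = np.array([0, 1, 0, 0]); e10 = np.array([0, 0, 1, 0])
psi2 = (e01 + e10) / np.sqrt(2)
psi3 = (e01 - e10) / np.sqrt(2)

b0 = psi2 @ rho0 @ psi2        # expect (x1 + x2 + y + y*)/2
ee0 = psi3 @ rho0 @ psi3       # expect (x1 + x2 - y - y*)/2
h0 = psi2 @ rho0 @ psi3        # expect (x1 - x2 - y + y*)/2
print(np.isclose(b0, (x1 + x2 + y + np.conj(y)) / 2))
```

The diagonal relations $a(0)=u(0)$, $d(0)=v(0)$, $c(0)=w(0)$ follow trivially since $\psi_1=|00\rangle$ and $\psi_4=|11\rangle$.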
Next we note that the form of the density matrix is invariant under the time
evolution generated by
the quantum master equation. Therefore we can write the density matrix at time $t$ as
\begin{eqnarray}
\label{rhoe}
\rho_s(t) & =& a(t) |\psi_1 \rangle \langle \psi_1| + b(t) |\psi_2\rangle \langle \psi_2| + e(t)
|\psi_3\rangle \langle \psi_3| \nonumber \\
&& + ~d(t) |\psi_4\rangle \langle \psi_4| + c(t) |\psi_1\rangle \langle \psi_4| +
c^{\star}(t) |\psi_4\rangle \langle \psi_1| \nonumber \\
&&+~ h(t) |\psi_2\rangle \langle \psi_3| + h^{\star}(t) |\psi_3\rangle \langle \psi_2|.
\end{eqnarray}
In order to find the time evolution equations of the various parameters involved in equation
(\ref{rhoe}) we assume that the bath correlation functions have the same form
\begin{eqnarray}
\Phi_i(s) =\frac{\Gamma_i \lambda}{2} e^{-\lambda |s|}
\end{eqnarray}
where $\lambda$ is the spectral width of the bath and $\Gamma_i$
is related to the microscopic system-bath coupling constant. It defines the relaxation time scale
$\tau_R$ over which the state of the system changes: $\tau_R \sim \Gamma_i ^{-1}$.
It can be shown to be related to the Markovian decay rate $\Gamma_M$ in the Markovian limit of a flat
spectrum. This form of the correlation function corresponds to a Lorentzian spectral density of the
bath \cite{Pet}. Assuming that $\Gamma_1= \Gamma_2 \equiv \Gamma_M$ for simplicity, we substitute
the $\rho_s(t)$ as in equation (\ref{rhoe}) in the quantum master equation (\ref{QME})
and obtain the time dependence of the parameters as
\begin{eqnarray}
\label{f1}
a(t) &=& a(0) e^{-\Gamma(t)} \\
d(t) &= & d(0) + \int_0^{t}\!\!\!dz [\eta(z) b(z) + \Sigma(z) e(z)] \\
c(t) &=& c(0) e^{-iS_1 t-\Gamma(t)} \\
h(t) &=& h(0) e^{-iS_2 t - \Gamma(t)}
\end{eqnarray}
\begin{eqnarray}
\label{f2}
\frac{d b(t)}{dt} + \eta(t) b(t)& =& \eta(t) a(t) \\
\frac{d e(t)}{dt} + \Sigma(t) e(t) &= &\Sigma(t) a(t)
\end{eqnarray}
where $\Gamma(t) =\Gamma_{+}(t) + \Gamma_{-}(t)$;
$\Gamma_{+}(t)=\frac{1}{2} \int_0^t \!dz~\Sigma(z) $;
$\Gamma_{-}(t)=\frac{1}{2} \int_0^t \!dz~\eta(z) $;
$S_1(t)= S_{+}(t) + S_{-}(t)$; $S_2(t)= 2Jt+S_1(t)$ and the explicit forms of these
functions are given in the appendix A.
\section{Decay of entanglement}
In this section we study entanglement of a two qubit system by means of concurrence
\cite{Wooters}. For a density matrix $\rho$, the concurrence is defined as $\mathcal{C}=
\max \{0,\sqrt{r_1}-\sqrt{r_2}-\sqrt{r_3}-\sqrt{r_4}\}$, where $r_1$, $r_2$, $r_3$ and $r_4$ are
the
eigenvalues of the matrix $R$ in descending order. The matrix $R$ is defined as $R=\rho(\sigma^y_1
\otimes \sigma^y_2) \rho^{\star} (\sigma^y_1 \otimes \sigma^y_2) $ and $\rho^{\star}$ represents
complex conjugation of $\rho$ in standard basis.
For X state in the standard
basis we write concurrence as \cite{TingYU}
\begin{eqnarray}
\mathcal{C}(t) \!= \!\! 2~ \max\{0, |w(t)|-\!\sqrt{x_1(t) x_2(t)}, |y(t)|-\!\sqrt{u(t) v(t)}\}
\nonumber\\
\end{eqnarray}
where we have
$ u(t)= a(t)$,~ $ w(t) = c(t)$,~
$ x_1(t) = \frac{1}{2}[b(t) + h(t) + h^{\star}(t) + e(t)]$,~
$x_2(t)= \frac{1}{2}[b(t) - h(t) - h^{\star}(t) + e(t)]$,~
$y(t) = \frac{1}{2}[b(t) - h(t) + h^{\star}(t) - e(t)]$,~
$ y^{\star}(t)= \frac{1}{2}[b(t) + h(t) - h^{\star}(t) - e(t)]$,~
$ v(t)= d(t)$.
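The closed-form X-state concurrence can be cross-checked against the general Wootters construction (a numerical sketch added for illustration; the X-state entries are arbitrary valid test values):

```python
import numpy as np

# Cross-check the X-state concurrence formula against the Wootters recipe.
def wootters_concurrence(rho):
    sy = np.array([[0, -1j], [1j, 0]])
    syy = np.kron(sy, sy)
    R = rho @ syy @ rho.conj() @ syy
    r = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R))))[::-1]
    return max(0.0, r[0] - r[1] - r[2] - r[3])

u, v, w = 0.2, 0.2, 0.15           # arbitrary valid X-state entries
x1, x2, y = 0.35, 0.25, 0.25
rho = np.array([[u, 0, 0, w],
                [0, x1, y, 0],
                [0, y, x2, 0],
                [w, 0, 0, v]], dtype=complex)

c_x = 2 * max(0.0, abs(w) - np.sqrt(x1 * x2), abs(y) - np.sqrt(u * v))
print(np.isclose(wootters_concurrence(rho), c_x))
```

For an X state the four square-rooted eigenvalues of $R$ are $|\sqrt{x_1x_2}\pm|y||$ and $|\sqrt{uv}\pm|w||$, which is what makes the compact formula possible.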
Next we use these results to investigate the decay of entanglement in some specific cases. First we
consider the decay of the pure entangled state $|\Psi\rangle= \cos \frac{\theta}{2} |01\rangle +
\sin\frac{\theta}{2}|10\rangle$. This state has initial entanglement $\mathcal{C}(0)=\sin\theta$,
and at time $t$, with the help of the above results, we write
\begin{eqnarray}
\mathcal{C}(t) = 2~ \max\{0, |y(t)|\}
\end{eqnarray}
or
\begin{eqnarray}
\mathcal{C}(t) = 2|y(t)|= \frac{1+\mathcal{C}(0)}{2} e^{-\Gamma_{-}(t)} - \frac{1-\mathcal{C}(0)}{2}
e^{-\Gamma_{+}(t)}.
\label{concu}
\end{eqnarray}
This is an important result. It shows that even if the state is initially unentangled,
$\mathcal{C}(0)=0$, there is entanglement at a later time $t$. This can be attributed to the
dissipative interaction between the system and the bath. Let us suppose $\theta =\pi$, which
corresponds to the $|10\rangle$ state; the effect of the dissipative interaction ( $H_I(t)|10\rangle=
B_1^{\dagger}(t)|11\rangle + B_2(t)|00\rangle$ ) results in an entangled state.
\begin{figure}[]
\centering
\includegraphics[width=1.5in,height=1.25in]{R100.eps}
\hspace{0.35cm}
\includegraphics[width=1.5in,height=1.25in]{R1.eps}\\
\vspace{0.61cm}
\includegraphics[width=1.5in,height=1.25in]{R001.eps}
\hspace{0.35cm}
\includegraphics[width=1.5in,height=1.25in]{R0001.eps}
\caption{Decay of entanglement, as measured by the concurrence $C(t)$, with time at
different
values of the parameters $Q$ and $R$. Here we have used $\theta=\pi/2$. Plots for different $Q$
values at (a) $R=100$ (b) $R=1$ (c) $R=0.01$ and (d) $R=0.001$. Plots (a) and (b) are in units of
the Markovian decay rate $\Gamma_M$, i.e. $\tau=\Gamma_M t$, while plots (c) and (d) are in units of
the rescaled decay rate
$\frac{\Gamma_M}{Q^2}$, i.e. $\tau^{\prime}=\frac{\Gamma_M}{Q^2} t $.}
\label{plot2}
\end{figure}
Next we analyze the Markovian and non-Markovian regimes of the dynamics, for which we define the
following parameters:
$ \tau=\Gamma_M t,~~~Q=\frac{J}{\lambda},~~R=\frac{\lambda}{\Gamma_M} $.
Therefore, using this parametrization we have
\begin{eqnarray}
\Gamma_{+}(t) &=& \frac{\tau}{2[1 + 9Q^2]} - \frac{[1-9Q^2][1-e^{-R\tau}
\cos(3QR\tau)]}{2R[1+9Q^2]^2} \nonumber\\
&& -\frac{3Q}{R[1+9Q^2]^2}e^{-R\tau}\sin(3QR\tau) \\
\Gamma_{-}(t)&=& \frac{\tau}{2[1 + Q^2]} - \frac{[1-Q^2][1-e^{-R\tau}
\cos(QR\tau)]}{2R[1+Q^2]^2} \nonumber\\
&& -\frac{Q}{R[1+Q^2]^2}e^{-R\tau}\sin(QR\tau).
\end{eqnarray}
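Before examining the plots, the Markovian limit of these closed forms can be checked numerically (a sketch added for illustration; the parameter values are arbitrary):

```python
import numpy as np

# Check that Gamma_+/- reduce to tau/2 for Q -> 0, R >> 1, so that the
# concurrence reduces to C(0) exp(-Gamma_M t / 2).
def gamma_plus(tau, Q, R):
    a = 1 + 9 * Q**2
    return (tau / (2 * a)
            - (1 - 9 * Q**2) * (1 - np.exp(-R * tau) * np.cos(3 * Q * R * tau)) / (2 * R * a**2)
            - 3 * Q * np.exp(-R * tau) * np.sin(3 * Q * R * tau) / (R * a**2))

def gamma_minus(tau, Q, R):
    a = 1 + Q**2
    return (tau / (2 * a)
            - (1 - Q**2) * (1 - np.exp(-R * tau) * np.cos(Q * R * tau)) / (2 * R * a**2)
            - Q * np.exp(-R * tau) * np.sin(Q * R * tau) / (R * a**2))

tau, Q, R = 2.0, 0.0, 1e4           # Markovian regime: Q = 0, R >> 1
gp, gm = gamma_plus(tau, Q, R), gamma_minus(tau, Q, R)
C0 = np.sin(np.pi / 2)              # theta = pi/2, so C(0) = 1
C = 0.5 * (1 + C0) * np.exp(-gm) - 0.5 * (1 - C0) * np.exp(-gp)
print(abs(gp - tau / 2) < 1e-3, abs(C - C0 * np.exp(-tau / 2)) < 1e-3)
```

For $Q=0$ the correction to $\tau/2$ is $(1-e^{-R\tau})/2R$, which vanishes as $R\to\infty$.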
In order to understand how the Markovian limit is obtained from the above expressions, we plot in
Fig.~\ref{plot2}(a)-(b) $\mathcal{C}(\tau)$ for $\theta=\pi/2$ with respect to the dimensionless
parameter $\tau$ for
$R=100$ and $R= 1 $ at different values of $Q$. We observe that the Markovian curve is recovered
for $R\gg1$ with $Q=0$. We can understand this behaviour of $\mathcal{C}(t)$ by looking at the
different parameters involved. The typical time scale over which the system of two qubits changes is
$\tau_s \sim 1/J$ and the time scale over which the bath changes is $\tau_B \sim 1/\lambda$, while
the relaxation time scale for each qubit is $\tau_R \sim 1/\Gamma_M$. It follows that $R\gg1$
implies $\tau_B\ll\tau_R$ and $Q<1$ implies $\tau_R<\tau_s$. Thus physically $R\gg1$ and $Q<1$
imply that the system evolves over a long time compared to the very fast bath dynamics. Therefore
the Markovian regime corresponds to $R\gg1$ and $Q<1$, and we have
\begin{eqnarray}
\Gamma_{+}(t) = \Gamma_{-}(t)= \frac{\tau}{2}
\end{eqnarray}
and therefore we get standard Markovian limit:
\begin{eqnarray}
\mathcal{C}(t)=\mathcal{C}(0) e^{-\frac{1}{2}\Gamma_{M} t}.
\end{eqnarray}
We observe that in the Markovian limit an initially unentangled state always remains unentangled.
In situations where the spectral width $\lambda $ of the bath is narrower than the energy
scale $J$ of the system, we have $Q\gg1$, which means $\tau_R\ll\tau_s$. In Fig.
\ref{plot2}(b), we observe for $R=1$ that as $Q$ increases from 0, there is a larger deviation from
the Markovian dynamics of the concurrence $\mathcal{C}(t)$. The general trend is similar for all
values of $ R $, as can be seen on comparison with Fig. \ref{plot2}(a). These observations suggest
that the Markovian regime is in fact opposite to the regime $R<1$ and $Q\gg1$, which we call the
{\it non-Markovian} regime. The term responsible for this larger deviation can be
attributed to the first terms of $\Gamma_{-}(t)$ and $\Gamma_{+}(t)$, containing $Q^2$ in the
denominator. The condition $Q^2\gg1$ suggests defining another time scale
\begin{eqnarray}
\tau^{\prime} = \frac{\Gamma_M }{Q^2} t.
\label{rescaled}
\end{eqnarray}
The decay of entanglement, as measured by $\mathcal{C}(t)$, at various values of $Q$ in the non-Markovian
regime $R<1$, in terms of the rescaled time $\tau^{\prime} = \tau/Q^2$, is shown in
Fig.~\ref{plot2}(c)-(d). We see that for large $ Q $ the
concurrence $\mathcal{C}(\tau^{\prime})$ coincides with an exponential decay in units of the rescaled
time. Before reaching the limiting behavior of exponential decay
in the rescaled time (\ref{rescaled}), we observe
some oscillatory behavior (Figure \ref{plot2}(c)-(d)). The deviation from an exponential decay can be
attributed to the memory effects developed in the two qubit system. This clearly arises from
the second terms in
$\Gamma_{-}(t)$ and $\Gamma_{+}(t)$. For $ Q \gg 1 $ we may approximate these as
\begin{eqnarray}
\Gamma_{-}(\tau') & \approx & \frac{1}{2}\left[
\tau' + \frac{1- e^{-R Q^2 \tau' }\cos ( R Q^3 \tau')}{R
Q^2}\right] \label{approxgamma1}\\
\Gamma_{+}(\tau') & \approx & \frac{1}{18}\left[
\tau' + \frac{1- e^{-R Q^2 \tau' }\cos ( 3R Q^3 \tau')}{R
Q^2}\right].
\label{approxgamma2}
\end{eqnarray}
In order for the oscillatory term to be visible, we require the exponential decay term in
(\ref{approxgamma1})-(\ref{approxgamma2}) to be not too fast, giving $ R Q^2 < 1 $, but
simultaneously the oscillation
frequency should be faster than the overall decay envelope, $ R Q^3 > R Q^2 > 1 $. The strongest
oscillations therefore occur when $ R Q^2 \sim 1 $, which agrees with the numerical plots in Figure
\ref{plot2}(c)-(d). The deviation from an exponential decay can be attributed to the memory effects
developed initially, typical of non-Markovian behavior. The criteria for
the strongest oscillatory behavior are satisfied when all the characteristic time scales are
approximately the same, i.e. $\tau_R\sim \tau_s\sim \tau_B$.
\section{Conclusions}
In conclusion, we have derived a quantum master equation for a system of two interacting qubits under
the influence of local dissipation. Using the assumption that the correlation functions have
the same form for each of the baths, the solution of the master equation is found
for the general $X$-type state. The time dependence of the concurrence, a measure of
entanglement, is studied
for the pure entangled state $|\Psi\rangle= \cos \frac{\theta}{2} |01\rangle +
\sin\frac{\theta}{2}|10\rangle$ (a
special case of the $X$-type state) in both the Markovian and non-Markovian regimes of the dynamics. It
is found that over a finite time evolution an unentangled state can evolve into an entangled state,
in contrast to the Markovian case where an unentangled state always remains unentangled.
By identifying the parameter space, we have found that our results reduce to the
standard Markovian decay rate, which in general is not a physically relevant regime \cite{self}.
In the physically relevant regime, with spectral width narrow compared to $J$, the decay rate
is better approximated by $\Gamma(t) =\frac{\Gamma_M}{Q^2} $, which is the standard Markovian decay
rate divided by $ Q^2 $, which can be quite large in practice.
Next we compare our work with several other works that have studied the non-Markovian dynamics of entanglement.
Taking the example of Ref. \cite{RF2}, the authors derive the non-Markovian decay of the entanglement of the pure
state $|\Psi\rangle= \cos \frac{\theta}{2} |01\rangle +
\sin\frac{\theta}{2}|10\rangle$. The time dependence of the concurrence is given by:
\begin{eqnarray}
\mathcal{C}(t)= {\rm max} \{0, C(0) G(t)\}
\end{eqnarray}
where
\begin{eqnarray}
G(t)=e^{-\lambda t/2}\left[ \cosh(\frac{\lambda t}{2}\delta)+ \frac{1}{\delta}\sinh(\frac{\lambda t }{2}\delta) \right]
\end{eqnarray}
and $\delta = \sqrt{1- \frac{2 \Gamma_M}{\lambda}}$, where $\Gamma_M$ is the Markovian decay
rate. Our result, Eqn. (\ref{concu}), is more general than the above result. The results in these works \cite{RF1,RF2,RF3,RF4} do not address the
amount of entanglement, and its decay, that would be present in an entangled state generated by dissipation.
To see the behavior more clearly, let us examine this in two limiting cases.
{\it Weak coupling limit} $ \Gamma_M \ll \lambda $: This regime corresponds
to weak coupling or a very broad coupling to many frequency modes, which gives Markovian behavior.
Here $ \delta \approx 1 - \frac{\Gamma_M}{\lambda} $ and the decay function $G(t)$ is purely exponential.
To first order the decay function may be approximated as
\begin{eqnarray}
G(t) \approx e^{-\Gamma_M t/2}
\end{eqnarray}
which is nothing but standard Markovian spontaneous decay.
{\it Strong coupling limit} $ \Gamma_M \gg \lambda $: The reverse regime is when the
linewidth of the bath is extremely narrow, which gives rise to strongly non-Markovian behavior. Here we may approximate
$\delta =i\sqrt{\frac{2 \Gamma_M}{\lambda}} $
and
\begin{eqnarray}
G(t) = e^{-\lambda t /2}\left[
\cos \left( \sqrt{\frac{\Gamma_M \lambda}{2}} t \right)+ \sqrt{\frac{\lambda}{2 \Gamma_M}}\sin \left( \sqrt{\frac{\Gamma_M \lambda}{2}} t \right)
\right] \nonumber \\
\end{eqnarray}
which corresponds to damped oscillations at frequency $\sqrt{\lambda \Gamma_M/2}$ and a decay
envelope with rate $\lambda$. Thus we see that in both cases the previous results do not yield the scaling factor $Q^2$ as
derived in Eq. (35).
The current result would be important for applications where spontaneous
emission is a serious drawback of using excited states, such as
for quantum information processors, quantum simulators, and
quantum metrological applications.
\section{Introduction}
Ever since its identification with a redshift 2.286 optical emission
line source by Rowan-Robinson et al. (1991), leading to an inferred
bolometric luminosity $\sim5 \times 10^{14} L_{\sun}$, the IRAS source
FSC10214+4724 has been the subject of enormous attention. Detections
of CO (Brown \& Vanden Bout 1991; Solomon, Downes \& Radford 1992;
Tsuboi \& Nakai 1992) and submillimeter continuum emission (Clements et
al. 1992, Downes et al. 1992) from the source confirmed the presence of
huge quantities of gas and dust. With a vastly larger lookback time
and luminosity than any other known IRAS source, FSC10214+4724 appeared
to be either an extremely luminous dust embedded quasar, or a
representative of a new class of astronomical object, e.g. a primeval
galaxy.
However, while the redshift of the IRAS source is secure, its intrinsic
luminosity is less certain. The fact that FSC10214+4724 lies at the
flux limit of the IRAS survey, combined with the presence of several
red companion objects within a few arcseconds, led Elston et al. (1994)
to suggest that the IRAS source might be gravitationally lensed by a
foreground group of galaxies. Intriguingly, Matthews et al. (1994)
found arcs emerging from the source in a deconvolved $K$ band image
with $0''\!.6$ seeing taken with the Keck telescope. Matthews et al.
considered the lensing hypothesis, but concluded it was unlikely
because the image morphology was not achromatic. Broadhurst \& Leh\'ar
(1995) modelled the source as gravitationally lensed, finding support
for their model from a reanalysis of the Matthews et al. data. Graham
\& Liu (1995) also argue for lensing, based on deconvolution of a more
recent (March 1995) Keck $K$ band image with $0''\!.4$ seeing.
Trentham (1995) argues on statistical grounds that magnification due to
lensing is likely to be less than a factor of ten, although larger
magnifications are reasonable for smaller far-IR source sizes than the
1 kpc Trentham assumed.
We present an image of FSC10214+4724 taken in December 1994 at 8000
\AA\ with the {\it HST} WFPC2 Planetary Camera with $0''\!.1$
resolution. This image provides dramatic support for the lensing
hypothesis, implying a magnification in the {\it HST} data of $\sim
100$. We use the image to derive a detailed model for the intrinsic
properties of the lensed source and the lensing galaxy.
For reference, at the FSC 10214+4724 redshift $z=2.286$, one
$0''\!.0455$ Planetary Camera pixel subtends $300(180) h^{-1}$pc for
$q_o = 0(0.5)$, while these values are $239(191) h^{-1}$pc for $z=0.9$,
where $h \equiv H_o/100$ km sec$^{-1}$Mpc$^{-1}$. Where not otherwise
specified, we assume $H_o = 50$ km sec$^{-1}$Mpc$^{-1}$ and $q_o =
0.5$.
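The quoted pixel scales can be reproduced with a short numerical sketch (an illustration added here, not from the paper); the angular-diameter distance formulas for matter-only $q_o=0$ and $q_o=0.5$ cosmologies are standard textbook assumptions.

```python
import numpy as np

# Reproduce the quoted PC pixel scales at z = 2.286 in h^-1 pc, assuming
# c/H0 = 2998 h^-1 Mpc and textbook matter-only angular-diameter distances.
c_over_H0 = 2998.0                         # h^-1 Mpc
z = 2.286
pixel = 0.0455 / 3600 * np.pi / 180        # PC pixel in radians

# q0 = 0 (empty universe)
DA_empty = c_over_H0 * z * (1 + z / 2) / (1 + z)**2
# q0 = 0.5 (Einstein-de Sitter)
DA_EdS = 2 * c_over_H0 * (1 - 1 / np.sqrt(1 + z)) / (1 + z)

for DA in (DA_empty, DA_EdS):
    print(round(pixel * DA * 1e6))         # h^-1 pc per pixel
```

The two printed values match the $300(180)\,h^{-1}$ pc quoted above for $q_o=0(0.5)$.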
\section{Observations and Reduction}
Three frames, each 2200 seconds long, were obtained on consecutive
orbits with the WFPC2 F814W filter on the 10th and 11th of December
1994 (UT). FSC 10214+4724 was positioned near the center of the
Planetary Camera, and each exposure was displaced from the other two by
an integer number (5 or 10) of PC pixels in both axes. The Wide Field
Camera data are not considered here.
After standard processing provided by STScI, the multiple frames were
used to filter out cosmic rays and hot pixels. Although these defects
are quite prominent and affect roughly 4\% of the pixels in each frame,
the main characteristics of the combined image discussed in
section~\ref{Morphology} are discernible in each frame even without
this filtering.
Cross-correlations were performed on pairs of frames to confirm that
the actual displacements between frames, as measured in pixels, were
integers to within 0.2 pixels. The frames were then trimmed by the
appropriate number of rows and columns to coregister them, and the
STSDAS task CRREJ was used to average them together, iteratively
excluding pixels which deviated from the previous iteration's average
value by more than three sigma. The minimum value at each pixel
location was used for the initial estimate of the average, and sigma
was the value expected from Poisson statistics and the gain and read
noise. To remove multiple pixel cosmic ray events, a stricter limit of
1.5 sigma was applied to the four pixels adjacent to any pixel which
exceeded the three sigma criterion. Finally a median filtering routine
was applied to identify and interpolate over a few dozen isolated
pixels which deviated sharply from their neighbors in the average
image, presumably because they were corrupted in all three frames.
None of these latter pixels fall within objects in the field, and only
a handful of the pixels in the components discussed below are based on
data from less than two frames.
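The rejection scheme described above can be sketched in a few lines (a simplified illustration, not the actual CRREJ implementation: the stricter 1.5-sigma pass on neighboring pixels and the final median filtering are omitted, and the data are synthetic):

```python
import numpy as np

# Minimal sketch of iterative cosmic-ray rejection: start from the per-pixel
# minimum, then iteratively average only values within nsigma of the
# previous estimate.
def sigma_clip_combine(frames, sigma, nsigma=3.0, niter=5):
    frames = np.asarray(frames, dtype=float)
    estimate = frames.min(axis=0)              # initial estimate: minimum
    for _ in range(niter):
        good = np.abs(frames - estimate) <= nsigma * sigma
        counts = good.sum(axis=0)
        avg = (frames * good).sum(axis=0) / np.maximum(counts, 1)
        estimate = np.where(counts > 0, avg, estimate)
    return estimate

rng = np.random.default_rng(0)
frames = 100.0 + rng.normal(0.0, 2.0, size=(3, 8, 8))
frames[1, 4, 4] += 500.0                       # simulated cosmic ray hit
combined = sigma_clip_combine(frames, sigma=2.0)
print(abs(combined[4, 4] - 100.0) < 10.0)      # hit rejected from average
```

The per-pixel minimum is a robust starting point because a cosmic ray can only raise, never lower, a pixel value.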
\section{Results} \label{results}
The combined image of the full Planetary Camera field is shown in
Figure 1(a), while figures 1(b) and (c) show the FSC 10214+4724 region
in progressively greater detail.
A synthetic point spread function (PSF) derived from the ``Tiny Tim"
{\it HST} image modelling software package was used to deconvolve the
average image, because a good empirical point spread function was not
available (see section~\ref{Profiles}). The synthetic PSF was
calculated for a source with the color of a K-star in F814W at the
location of FSC10214+4724 in the Planetary Camera field. Figure 1(d)
shows the same region covered in figure 1(c) after a mild deconvolution
of the data (10 iterations of the STSDAS implementation of the
Lucy-Richardson algorithm) onto a grid subsampled four times more
finely than the original pixels.
\subsection{Morphology} \label{Morphology}
At the resolution of the Planetary Camera, an arc-like structure
dominates the morphology of the emission line source. In the
terminology of Matthews et al. (1994), which is adopted here, the
arc-like structure is component 1 (see Fig. 1(b)). The extent of this
arc is smaller than shown in Matthews et al., and there is a sharply
defined ridge of high surface brightness emission which is $0''\!.7$
long and essentially unresolved in the transverse direction. Lower
surface brightness emission can be seen extending the arc $\sim0''\!.4$
to the west, and a similar amount (but at a considerably fainter level)
to the east-northeast. There is also a hint of still fainter emission
extending a few tenths of an arcsecond due east ({\em not} along a
circular arc) from the eastern tip of the bright ridge. Within the
bright ridge are at least two peaks separated by $0''\!.24$, with the
brighter peak towards the east. The center of curvature of the arc was
fitted and found to be $\sim0''\!.12$ west-northwest of the center of
component 2 (which is $1''\!.18$ from the arc). Component 2 has a
smooth light distribution which is resolved and slightly elongated (see
sections~\ref{Profiles} and \ref{Redshift}). Directly opposite
component 2 from the arc is a faint but clearly visible source
(component 5 in figure 1(b)), $0''\!.43$ from the center of component
2. Component 3 is resolved and has a feature which is suggestive of a
tidal arm leading back towards component 2. Component 4 appears to be a
highly inclined galaxy.
\subsection{Brightness Profiles} \label{Profiles}
In an attempt to quantify the radial extent of the arc, pixels from the
sector subtended by the brightest $0''\!.5$ of the arc at component 2
were sorted in order of radius from component 2. To reduce the effect
of the tangential substructure along the arc, a running average of the
flux from 5 pixels in this radially sorted list was calculated.
Figure~2 plots this running average flux as a function of the average
radius of those pixels less the $1''\!.18$ distance of component 1 from
component 2. For comparison, the (unaveraged) radial profiles are
plotted for stars A and H (see figure 1(a)), for components 2 and 5,
and for the synthetic PSF which was used in the deconvolution shown in
figure 1(d).
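The running-average profile extraction just described can be sketched in a few lines of Python. This is an illustrative reconstruction of the procedure, not the actual reduction code; the pixel list and sector selection are assumed inputs, and the names are hypothetical:

```python
import math

def radial_profile(pixels, center, window=5):
    """Running-average radial profile, as described in the text.

    pixels : list of (x, y, flux) tuples for the sector of interest
    center : (x, y) position of component 2, the assumed arc center
    Returns a list of (mean_radius, mean_flux) pairs, averaging the
    flux of `window` consecutive pixels in order of radius.
    """
    cx, cy = center
    # Sort the sector's pixels by distance from the center.
    by_radius = sorted(
        (math.hypot(x - cx, y - cy), f) for x, y, f in pixels
    )
    profile = []
    for i in range(len(by_radius) - window + 1):
        chunk = by_radius[i:i + window]
        r_mean = sum(r for r, _ in chunk) / window
        f_mean = sum(f for _, f in chunk) / window
        profile.append((r_mean, f_mean))
    return profile
```

The running mean trades radial resolution for suppression of the tangential substructure along the arc, which is why it is applied to the radially sorted list rather than to fixed annuli.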
While the wings of the synthetic PSF fall inside those of the arc, the
{\em empirical} PSFs of stars A (outside its saturated core) and H
match the arc cross section reasonably well. It therefore appears
likely that the synthetic PSF underestimates the FWHM of the true PSF.
Based on the synthetic PSF we estimate an upper limit of $0''\!.06$
(500 pc) for the intrinsic FWHM of the arc in the radial direction.
Note that the effects of the running average, of any error in using
component~2 as the center of the arc, and of the smaller size of the
synthetic PSF all work in the direction of leading us to overestimate
this dimension.
The deconvolved image shown in Figure 1(d) also yields a $0''\!.06$
FWHM for the arc, but this holds true for star H after deconvolution as
well. Because the individual frames are separated by integer numbers
of PC pixels, there is little leverage on finer scale structure.
Deconvolution does emphasize the high surface brightness of the arc
however, increasing it by a factor of three.
In short, we see no evidence that component 1 is resolved in the radial
direction. (In section~\ref{Model} we will argue that the intrinsic
FWHM of the arc is $\sim 0''\!.01$). Component 5 also appears
unresolved, although its profile suffers from much lower signal to
noise.
Component 2, however, is clearly resolved in figure 2. To extend the
measurement of component 2's surface brightness profile to larger radii
the image was rotated $180\deg$ about the center of component 2, and
pixels at the locations of other objects in the original image were
replaced with pixels from the rotated image. This assumes elliptical
symmetry for component 2 in the replaced regions, which cover a maximum
of 25\% (at $r = 1''\!.3$) of the area at any radius, and 7\% of the
total area. Figure 3 shows the resulting radial surface brightness
profile for component 2. A de Vaucouleurs profile with an effective
radius $r_e \approx1''\!.3$ (10 kpc) provides a much better fit to
component 2 than do exponential disk models, suggesting that this
object is an early type galaxy. The measured ellipticity of component
2 inside the arc is $\approx 0.16\pm 0.1$ at a position angle of
$\approx 3\pm15\deg$ east of north. Excess surface brightness appears
near a radius of $1''\!.4$ even though the component 1 pixels (which
are near this radius) have been replaced. As a check, the surface
brightness profile was measured within sectors centered on component 2
from position angles 73--133$\deg$ and 233--318$\deg$, angles which
bypass all obvious emission sources in figure 1(b). The value for
$r_e$ in this case was $1''\!.0$ (a smaller $r_e$ is consistent with
these sectors being along the minor axis), and excess light was again
found near $1''\!.4$ radius. The total excess light at this radius is
very roughly equivalent to a 23rd magnitude source.
\subsection{Photometry} \label{Photometry}
Photometric measurements obtained from the Planetary Camera image for
the components are given in Table 1. One count in the image
corresponds to $1.185 \times 10^{-21}$ erg/cm$^2$/sec/\AA\, or to a
magnitude of 30.00 in the F814W band with Vega set to magnitude 0. From
the measured standard deviation per pixel, the sensitivity limit
($3\sigma$) is $m_{814} \sim 28.2$ mag for a point source or $\mu_{814}
\sim 25.6$ mag arcsec$^{-2}$. Positions are relative to component 2,
whose position in the HST guide star system is given in Table 1.
Polygonal apertures were used to include the faint emission seen
extending from components 1 and 3. The flux for component 5 was
measured using a $0''\!.35$ diameter aperture, with the local
background measured using the mode of an annulus of width $0''\!.1$
surrounding this aperture, and corrected for PSF losses using the star
H curve of growth. This flux was checked by subtracting away the image
rotated $180\deg$ about component 2, and also by subtracting the
elliptical model fit to component 2 discussed in
section~\ref{Profiles}. All three methods consistently gave a value
close to 100 for the flux ratio of component 1 to component 5, and we
adopt 100 for this important ratio for the remainder of the paper.
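The photometric calibration quoted above (one count corresponding to $m_{814}=30.00$, Vega zero point) implies simple count-to-magnitude conversions. The following is an illustrative sketch, not pipeline code:

```python
import math

F814W_ZP = 30.00           # magnitude of a source producing 1 count (Vega = 0)
ERG_PER_COUNT = 1.185e-21  # erg cm^-2 s^-1 A^-1 per count (from the text)

def counts_to_mag(counts):
    """Convert PC image counts to an F814W (Vega) magnitude."""
    return F814W_ZP - 2.5 * math.log10(counts)

def mag_to_counts(m814):
    """Invert the zero point: counts corresponding to a magnitude."""
    return 10 ** ((F814W_ZP - m814) / 2.5)
```

Note that the adopted component 1 to component 5 flux ratio of 100 corresponds to exactly 5 magnitudes, and the quoted $3\sigma$ point-source limit $m_{814}\sim28.2$ corresponds to roughly 5 counts.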
\section{Discussion}
The morphology of the components of FSC10214+4724, a circular arc
(component~1) with its radius of curvature centered on another object
(component~2), and another fainter image (component~5) on the opposite
side, strongly supports the gravitational lens hypothesis, i.e. that
component~2 is a foreground galaxy and components~1 and 5 are images of
a single background object. Under this hypothesis, the multiple
imaging and the arclike morphology and high inferred luminosity of
component~1 result from distortion and magnification by the
gravitational potential of the foreground component~2. Components~3
and 4 are other galaxies along the line of sight, possibly related to
the galaxy which is component~2, and probably involved in the lensing.
The high resolution of the {\it HST} image makes the arc morphology and
component 5 readily apparent, and allows us to directly measure the
ratio of the brightnesses of these components. This morphology and
ratio are crucial elements in the development of a lens model for the
source. We find additional support for the lens hypothesis from the
observed morphology of component~2. In particular, as shown in
Appendix A, component~2 has the surface brightness profile and spectral
energy distribution expected for a foreground elliptical galaxy, and
its position angle is correctly predicted by the lens model. In the
following, we adopt the interpretation of FSC10214+4724 as a
gravitationally lensed system, and describe the detailed model of this
system and its consequences.
\subsection{Lens Model} \label{Model}
In the context of a lens model, component~1 is a ``straight arc'' and
component~5 is a ``counterimage.'' This gravitational lens image
configuration is very common; it has been found in several clusters
(see Surdej \& Soucail 1993 for a review). The model for these systems
is that of a source lying on or very close to a cusp in a caustic (a
line of infinite magnification, e.g.\ Blandford \& Narayan, 1992) in
the source plane. Although the magnification of a point source lying
on the caustic is formally infinite, the maximum magnification of a
real object is limited by its finite angular radius $r$. Under the
gravitational lens hypothesis the total magnification of the source
should be on the same order as the flux ratio of arc to counterimage,
roughly $100$ in this case. Gravitational lens models also predict
that the axis ratio of the arc should be on the same order as the total
magnification. The $0''\!.7$ length of the arc thus implies an
observed width on the order of $0''\!.007$ (50 pc), or unresolved even
in {\it HST} images.
In the case of lensing dominated by mass at a single
redshift, the gravitational lens mapping, which takes a two-dimensional
angular position $\vec{x}$ on the image plane (i.e.\ the position
observed on the sky) to a two-dimensional angular position $\vec{y}$
on the source plane (i.e.\ the position that would be observed if
there was no lens), is a gradient mapping
\begin{equation}
\vec{y}=\vec{x}-\vec{\nabla}_{\vec{x}}\,\psi(\vec{x}) \;,
\end{equation}
where $\vec{\nabla}_{\vec{x}}$ is the two-dimensional gradient
operator with respect to angular image-plane position $\vec{x}$, and
$\psi(\vec{x})$ is a scaled, projected, two-dimensional gravitational
potential. The potential is related to the angular surface density
$\Sigma$ (mass per unit solid angle)
\begin{equation}
\Sigma(\vec{x})= \frac{c^2}{8\pi\,G}\,
\frac{D_{\rm d}\,D_{\rm s}}{D_{\rm ds}}\,
\nabla_{\vec{x}}^2\,\psi(\vec{x})\;,
\end{equation}
where $D_{\rm d}$, $D_{\rm s}$ and $D_{\rm ds}$ are angular diameter
distances from observer to lens (deflector), observer to source, and
lens to source, and $\nabla_{\vec{x}}^2$ is the two-dimensional
Laplacian operator.
Where not otherwise stated, the lens models which follow assume that
the potential $\psi$ can be approximated with a quasi-isothermal sphere
with ellipticity (see, e.g.\ Kochanek, 1991), i.e.,
\begin{equation}
\psi(\vec{x})= b\,\sqrt{s^2+r^2}\,
\left[ 1-\gamma\cos 2(\theta-\theta_{\gamma})\right] \:,
\end{equation}
where $\vec{x}=(r,\theta)$ is the position of the point in question
relative to the center of the mass distribution, $b$ is the asymptotic
critical radius (the radius of the Einstein ring), roughly the angular
radius of the circle of images ($\sim 1''$ in this system because that
is the angular separation of arc and lens), $\gamma$ is an ellipticity
parameter, $\theta_{\gamma}$ is the position angle of the major axis,
and $s$ is a core radius. The results do not depend strongly on the
core radius $s$, so it is assumed to be zero. The critical radius $b$
can be related to a one-dimensional velocity dispersion for the lens by
\begin{equation}
\sigma_v^2 = \frac{c^2}{4\pi}\,
\frac{D_{\rm s}}{D_{\rm ds}}\, b \;,
\end{equation}
although this depends on the assumption of isothermality. More secure
is the mass $M$ inside the ``circle of images'' (in this case a circle
of angular radius $b$ around component~2),
\begin{equation}
M= \frac{c^2}{4\,G}\,
\frac{D_{\rm d}\,D_{\rm s}}{D_{\rm ds}}\, b^2 \; .
\end{equation}
The mass $M$ and the inferred luminosity $L$ of the lens can be used to
compute a mass-to-light ratio as well. The inferred physical
properties of the lens depend strongly on lens and source redshifts and
weakly on world model. In this system the lens redshift is unknown, so
$\sigma_v$, $M$, and $M/L$ are given in figure~4 as a function of lens
redshift for the model adopted below. Further discussion of figure~4
is deferred until section~\ref{Redshift}.
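Equations (4) and (5) are easy to evaluate numerically. The sketch below uses assumed, illustrative distance values (an angular diameter distance $D_{\rm d}\sim1600$ Mpc and ratio $D_{\rm s}/D_{\rm ds}\sim2$, roughly appropriate for a lens near $z\sim0.9$ and source at $z=2.286$); these are not the fitted model values:

```python
import math

C_KM_S = 2.998e5   # speed of light, km/s
G = 4.301e-9       # Newton's constant in Mpc Msun^-1 (km/s)^2
ARCSEC = math.pi / (180 * 3600)  # radians per arcsecond

def lens_sigma_v(b_arcsec, Ds_over_Dds):
    """One-dimensional velocity dispersion (km/s), eq. (4),
    assuming isothermality."""
    b = b_arcsec * ARCSEC
    return math.sqrt(C_KM_S ** 2 / (4 * math.pi) * Ds_over_Dds * b)

def lens_mass(b_arcsec, Dd_Mpc, Ds_over_Dds):
    """Mass (Msun) inside the circle of images of angular radius b,
    eq. (5)."""
    b = b_arcsec * ARCSEC
    return C_KM_S ** 2 / (4 * G) * Dd_Mpc * Ds_over_Dds * b ** 2
```

With $b\simeq1''$ and the assumed distances, these expressions give $\sigma_v\sim260$ km s$^{-1}$ and $M\sim4\times10^{11}\,M_{\sun}$, of the same order as the $z=0.9$ values read off figure~4.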
Model parameters $b$, $\gamma$, and $\theta_{\gamma}$ were varied to
minimize the scatter in the source plane positions corresponding to the
brightest pixels in the arc and counterimage, i.e.:
\begin{equation}
\chi^2 \equiv \sum_i (\Delta \vec{x_i})^2
\end{equation}
where the sum is over the brightest 96 pixels in the deconvolved arc
and the brightest pixel in the counterimage (figure 1(d)), and $\Delta
\vec{x_i}$ is the two-dimensional displacement on the image plane
through which pixel $i$ would need to be moved in order for it to
project (via the lens mapping) to the same location on the source plane
as that of the brightest pixel in the arc. Image-plane rather than
source-plane displacements were used for computing the scatter because
the image plane is the observed plane, the plane on which uncertainties
are homogeneous and isotropic. On the source plane the uncertainties
have been mapped through the non-linear lens mapping and are extremely
inhomogeneous and anisotropic. The minimum rms scatter of the pixels
(in image plane coordinates) was 0.7 PC pixels.
The best-fit model parameters are given in Table~2. The inferred
intrinsic source radius which makes the arc-counterimage flux ratio 100
is $0''\!.0055$ (44 pc). The model makes the assumption that
component~3 is a singular isothermal sphere ($\gamma=s=0$) at the same
redshift as component~2 and with critical radius $b_3=0''\!.6$, the
expected value under the assumption that components~2 and 3 have the
same mass to ($K$-band) light ratio. A simpler model which assumes
that the potential is entirely due to an elliptical shaped mass
centered on component~2 was also considered. The two-component model
was adopted because the intrinsic ellipticity of the potential in this
model is smaller than in the simpler model, in better agreement with
the observed ellipticity in component~2. This is because the external
mass of component~3 has a tidal effect which replaces some of the
ellipticity in the primary lens.\footnote{If component~3 is at a larger
redshift than component~2 (see the Appendix) the agreement in
ellipticity is slightly better yet, but we adopt a single redshift for
components 2 and 3 to confine the number of parameters.} The predicted
orientation of the lens in the models is consistent with the observed
orientation of component~2. Figure~1(f) shows the density and
potential contours for the adopted model, as well as the critical curve
and the image morphology for a circular source of radius $0''\!.0055$,
smoothed to a FWHM of $0''\!.02$, and with the counterimage brightness
enhanced for visibility. Figure ~1(e) shows the model image morphology
convolved with the synthetic PSF discussed in section~\ref{results},
and should be compared to figure~1(c).
The image configuration in the lens model is that of a triple image or
straight arc (plus counterimage). Although parts of the source are
triply imaged in component~1, the source radius inferred from the flux
ratio of components 1 and 5 is large enough that the three images merge
into a single straight arc. We interpret the peak in the east half of
the arc as corresponding to two images merging on the critical curve,
while the peak in the west half corresponds to the third image. The
triple structure may become more apparent in high-resolution images in
other bandpasses if the flux at those wavelengths is produced by
structures offset by $\sim 0''\!.02$ (160 pc) from those which produce
the F814W flux, or having intrinsic size scales a factor of $\sim 3$
smaller.
The source location near the point at which three images on one side of
the lens merge into a single image causes high magnification. As
discussed by Broadhurst and Leh\'ar (1995), the magnification is thus a
sensitive function of source size and position. The inferred size and
position in turn depend on the assumption of an isothermal profile for
the lens potential, i.e.\ $\psi\propto r$. For a shallower potential
$\psi\propto r^{0.9}$, the best-fit model puts the center of the source
further inside the three-image region than for the isothermal case, and
the inferred source radius from the arc-counterimage flux ratio is
$0''\!.013$. For $\psi\propto r^{1.1}$ the inferred source radius is
$0''\!.0046$.
The predicted total magnification of F814W emission from a uniform
circular source as a function of source radius is shown for all three
potential models in figure~5. In each case, the total magnification
for the source radius derived above from the flux ratio of component~1
to component~5 (i.e.\ 100) differs slightly from 100, because
component~5 is itself somewhat demagnified or magnified. The dependence of
the calculation of the total magnification in the {\it HST} image on
the assumption of a circular source geometry for the F814W emission was
investigated for the isothermal model. Sources of the same total
projected solid angle on the sky have the same total magnifications to
within about 15\% even if they are highly elliptical, no matter what
their position angle. The magnification in the isothermal model scales
as $r^{-1}$ for very small sizes or separations from the caustic, and
smoothly converts to $r^{-0.5}$ at larger radii, in agreement with
Schneider, Ehlers \& Falco (1992), and can be approximated to $\sim
20\%$ by ${\bf \rm M} = 3.9 r^{-0.624}$ for the range $0''\!.001 < r <
1''$ (8 to 8000 pc). The kink at $\sim 0''\!.005$ in figure~5
for $\psi\propto r^{0.9}$ occurs where the source size becomes large
enough to make the three distinct images merge into a single arc.
Because in the other two models the source location is closer to the
point at which the three images merge, the source radii at which the
mergers take place are too small to appear in figure~5. The bump near
$r \sim 0''\!.5$ in figure~5 corresponds to the formation of a ring
(see below).
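The quoted power-law fit to the isothermal magnification curve is trivial to evaluate; the snippet below is only an illustration of that fit, valid to $\sim20\%$ over $0''\!.001 < r < 1''$:

```python
def magnification(r_arcsec):
    """Approximate total magnification of a uniform circular source
    of radius r (arcsec) in the isothermal model, using the
    power-law fit M = 3.9 r^-0.624 quoted in the text."""
    return 3.9 * r_arcsec ** -0.624
```

As a consistency check, at the inferred intrinsic source radius of $0''\!.0055$ this fit returns a magnification of $\approx100$, matching the adopted arc-to-counterimage flux ratio.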
Different distributions for the narrow line and UV and optical
continuum regions, and the likelihood of substantial reddening (Elston
et al. 1994), can therefore account for the substantially different
appearance of FSC 10214+4724 at different wavelengths noted by Matthews
et al. (1994), and in particular for the larger extent of the $K$-band
arc seen by Matthews et al. and Graham and Liu (1995) than the arc
seen in the {\it HST} image. The $140\deg$ extent of the $K$-band arc
corresponds to a source with $0''\!.25$ (2 kpc) radius. If the source
radius is increased to $\sim 0''\!.5$ it is imaged into an elliptical
($\epsilon \sim 0.4$) ring connecting components~1 and 5. The position
angle of this ring is perpendicular to that of component~2, and is
offset from being perfectly centered on component~2 by $\sim 0''\!.4$
in the direction of component~1. The excess light near $1''\!.4$
radius noted in section~\ref{Profiles} may be the UV (rest frame)
counterpart of the more extended arc seen in the $K$ images. Note that
Matthews et al. find the $H\alpha$ emission to be extended in an
east-west direction by $\sim0''\!.5$, suggesting that the narrow line
region is largely coincident with the UV continuum which dominates the
F814W image.
\subsection{Bolometric Luminosity of FSC10214+4724} \label{Luminosity}
FSC10214+4724 has an apparent luminosity of L$_{app}$=
5$\times$10$^{14}$L$_{\sun}$ (Rowan-Robinson et al. 1993), making it
among the most luminous known objects in the Universe. The vast
majority of this luminosity, $\sim$99\%, is observed in the infrared
(Rowan-Robinson et al. 1991, 1993). There is strong evidence that the
UV source is a quasar (FWHM of C III] $\sim10,000$ km s$^{-1}$ in
polarized light, Goodrich et al. 1995) enshrouded in dust
(H$\alpha/$H$\beta \ge 20$ implying $A_V > 5.5$, Elston et al. 1994),
and that the quasar's luminosity is absorbed in the dust shell and
reradiated in the infrared (Rowan-Robinson 1993). This implies that
the size of the infrared emitting region is substantially larger than
the optical/UV emitting region.
If FSC10214+4724 is magnified by a gravitational lens, the intrinsic
source luminosity is less than the apparent luminosity. However, if
the infrared source is larger than the optical/UV source, the
magnification of the infrared source is less than the magnification
measured from the {\it HST} image. The magnification of the infrared
source can be estimated by assuming that the infrared source can be
approximated as an optically thick blackbody. This assumption
corresponds to making the infrared source as small as possible, and
hence the magnification of the infrared radiation as large as
possible. In this case, because of the assumption that the emitted
infrared energy distribution is independent of distance from the
central heating source, the magnification is independent of
wavelength. The temperature of the dust is assumed to be T$\sim$140K.
At this temperature the emission peaks at a rest wavelength of
18$\mu$m, corresponding to the observed emission that peaks at
60$\mu$m.
With this temperature, the apparent luminosity L$_{app}$ and intrinsic
luminosity L$_{int}$ can be written as
\begin{equation}
L_{app} = {\bf
\rm M}(R)L_{int} = {\bf \rm M}(R)\times 4 \pi R^2 \sigma T^4
\end{equation}
where $R$ is the physical radius of the source, ${\bf
\rm M}(R$) is the magnification from figure~5 for a uniform disk of
radius $R$, and $T$ is the blackbody temperature determined by the
wavelength of peak emission. Solving this equation for $R$ gives a
radius of 130 pc ($0''\!.017$), and ${\bf \rm M}(R) = 42$ for the
isothermal model, so that the intrinsic luminosity of FSC 10214+4724
is $1.2\times 10^{13} L_{\sun}$.  A somewhat larger source size and
lower magnification is derived if the temperature T is assumed to be
115K, the color temperature determined by the observed flux densities
at 60 and 450 $\mu$m and corrected for redshift. Then the radius of
the infrared source is 240 pc ($0''\!.03$), the magnification is 29,
and the intrinsic luminosity is $1.7\times10^{13} L_{\sun}$. Note that
at these source radii the magnification is not very sensitive to the
assumed potential (see figure~5).
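Equation (7) can be solved for $R$ by simple bisection. The sketch below is an illustrative reconstruction that substitutes the $\sim20\%$-accurate power-law fit for the full ${\bf \rm M}(R)$ curve of figure~5 and adopts the $1''\approx8$ kpc source-plane scale from the text, so it recovers the quoted radii only approximately:

```python
import math

SIGMA_SB = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26       # solar luminosity, W
PC = 3.086e16          # parsec, m
PC_PER_ARCSEC = 8000.0 # source-plane scale quoted in the text

def magnification(r_arcsec):
    # Power-law fit to the isothermal model (good to ~20%).
    return 3.9 * r_arcsec ** -0.624

def blackbody_radius_pc(L_app_Lsun, T):
    """Radius R (pc) solving L_app = M(R) * 4 pi R^2 sigma T^4,
    i.e. eq. (7), by bisection; the left side grows as R^1.376,
    so the root is unique."""
    L_app = L_app_Lsun * L_SUN
    def excess(R_pc):
        L_int = 4 * math.pi * (R_pc * PC) ** 2 * SIGMA_SB * T ** 4
        return magnification(R_pc / PC_PER_ARCSEC) * L_int - L_app
    lo, hi = 1.0, 1e4
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if excess(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With $L_{app}=5\times10^{14}L_{\sun}$ this gives $R\sim120$ pc for $T=140$ K and $R\sim200$ pc for $T=115$ K, within the $\sim20\%$ fidelity of the power-law fit to the 130 and 240 pc values quoted above.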
The expected arc length is $\sim2 r {\bf \rm M}(r)$, or $1''\!.7$ in
the $T=115$K case, and $1''\!.4$ for $T=140$K. From
VLA-A configuration observations at 8.4 Ghz with $0''\!.25$
resolution, Lawrence et al. (1993) found a $0''\!.6$ (east-west) by
$0''\!.3$ source. The similarity of this structure to the arc in the
{\it HST} image suggests a continuum radio source radius closer to the
$0''\!.005$ (40 pc) estimated for the optical/UV source than to the
minimum infrared source size just calculated. Condon et al. (1991)
find that the radio source size for nearby IRAS galaxies with infrared
luminosities $> 10^{12} L_{\sun}$ is typically $\sim 100$ pc (and for
Mrk 231, the most luminous of the sample, $\lesssim 1$ pc), smaller
than the minimum blackbody size for far infrared emission from these
galaxies. For their sample Condon et al. find $<q> = 2.34$, where $q$
is the logarithm of the ratio of far infrared (60--100$\mu$m) to 1.49
GHz flux. For FSC10214+4724, extrapolating the Lawrence et al. (1993)
observed radio flux to 0.45 GHz (the observed frequency for emitted
1.49 GHz) yields 3.5mJy, and interpolating to the rest frame
wavelengths for $60$ and $100 \mu$m and using Condon et al.'s
definition gives $q=1.91$. If the radio magnification is 100, and the
far infrared magnification is 30, then the intrinsic $q = 2.39$.
Therefore the radio morphology and flux measured by Lawrence et al. are
quite consistent with the above estimate for the bolometric
luminosity.
The $0''\!.6$ extent of the radio morphology is also consistent with a
much smaller radio continuum source size, although the value of $q$
would then be significantly larger than observed for local luminous
IRAS galaxies. It would be interesting (albeit quite challenging) to
see whether the very high angular resolution possible with VLBI
observations revealed the triple structure in the arc discussed above.
The size of the infrared source determined under the assumption of
optically thick emission is a plausible lower limit to the physical
source size. Alternatively, the magnification can be estimated based on
the models of Phinney (1989) of infrared emission from dusty, warped
disks illuminated by a central quasar. The physical size of the source
required to obtain a self-consistent solution for the intrinsic
luminosity is quite large. For the region emitting at $150 \mu$m ($450
\mu$m observed), the source radius would be $\sim10''$, much larger
than the observed size of the CO source (e.g. Scoville et al. 1995).
Thus we consider such a model less consistent with the observations
than the optically thick models.
The reduction in the intrinsic luminosity of FSC10214+4724 to $\sim 2
\times 10^{13} L_{\sun}$ implied by the lens model of the source brings
it into the luminosity range of previously studied infrared luminous
AGN. The IRAS source FSC15307+3252 at a redshift of z=0.93 has a
luminosity of $4\times10^{13} L_{\sun}$, while the IRAS source
PSC09104+4109 has a luminosity of $2\times10^{13} L_{\sun}$ for our
assumed cosmology (Cutri et al. 1994). There is no known evidence from
high resolution imaging (Soifer et al. 1994, Hutching and Neff 1988,
Soifer et al. 1995 in preparation) that either of these sources is a
gravitational lens, so the apparent luminosity is presumably the
intrinsic luminosity in these cases. Thus, based on its bolometric
luminosity, FSC10214+4724 is most likely a source similar to these.
The reduction in intrinsic luminosity reduces the necessary dust mass
associated with the source (Rowan-Robinson et al. 1993) by the same
magnification factor, into the range $M_{dust} \sim 1-3 \times 10^7
M_{\sun}$, which is consistent with the estimates of the gas mass based
on the dynamical mass determinations from the CO observations (Scoville
et al. 1995).
\subsection{Properties of Component 2} \label{Redshift}
No conclusive measurement of the redshift for component~2 has yet been
made, to our knowledge, although tentative values of 0.42 (Close et al.
1995) and 0.90 (Serjeant et al. 1995) have been suggested based on
possible continuum breaks in the spectrum of component~2, while
Goodrich et al. (1995) find Mg lines in absorption at $z=1.32$ (and
possibly $z=0.89$) in the spectrum of component~1. In the Appendix we
provide three estimates of the redshift for component~2 (two of which
are closely related). All three estimates are consistent with $z \sim
0.9$, and we adopt this value as our best overall estimate of the
redshift.  Note that the SED and $R_e - <\mu_B>_e$ estimates do not assume
component~2 is a lens, only that it is an elliptical galaxy, and
therefore give additional support to the lensing hypothesis by placing
component~2 at an intervening redshift relative to FSC10214+4724.
The velocity dispersion $\sigma_v$, mass $M$, and mass-to-light ratio
$(M/L)$ predicted for the lens are shown in figure~4 as a function of
lens redshift. Adopting $z=0.9$ yields $(M/L)_B = 8 M_{\sun}/L_{\sun}$
(vs. the observed average of $6 M_{\sun}/L_{\sun}$, van der Marel
1991), $\sigma_v = 270$ km s$^{-1}$, and $M = 3.9 \times 10^{11}
M_{\sun}$ (thus $L_B = 5\times10^{10} L_{\sun}$). These values are for
a radius of $0''\!.85$: using figure~3 the total blue luminosity is
then $L_B = 1.4\times10^{11} L_{\sun}$ or $\sim 4 L*$ (Binggeli,
Sandage and Tammann 1988). These values are independent of
evolutionary model because F814W samples rest frame $B$ at $z = 0.9$.
The velocity dispersion and mass estimated by Graham and Liu (1995),
Broadhurst and Leh\'ar (1995), and Close et al. (1995) are consistent
with figure~4, but their total luminosity is lower (and hence $(M/L)_B$
higher) because a smaller aperture correction than is shown in figure~3
was assumed.
Thus for the redshift estimate $z=0.9$ the present lensing model
predicts properties typical of present day elliptical galaxies, except
that the galaxy is unusually luminous. The probability of a large
lensing galaxy is greater than the galaxy luminosity function alone
implies, however, because the cross-section for gravitational lensing is
proportional to mass.
\subsection{The Parent Population of IRAS FSC10214+4724}
Analysis of statistically complete samples of radio galaxies suggests
that the lensing rate (i.e. probability that a given radio galaxy is
lensed) is on the order of $1/500$ (Miralda-Escud\'e \& Leh\'ar 1992,
Myers et al. 1995). Given that a source is lensed, the probability of
getting total magnification ${\bf \rm M}$ is on the order of ${\bf \rm
M}^{-2}$ (e.g.\ Schneider, Ehlers \& Falco, 1992). The estimated total
magnification $\sim 30$ for the IRAS flux from section~\ref{Luminosity}
corresponds to a likelihood of $\sim 10^{-3}$. The existence of a
single lensed object in the surveyed area ($0.2$~sr - Rowan-Robinson
1991) with magnification 30 should, according to these probabilities,
represent an underlying population of $\sim800$ compact, $60
\mu$m-luminous objects per square degree (or $>40$ per square degree at
$95\%$ confidence) which are either not lensed or lensed with much
lower magnification (and hence are not in the FSS catalog). If they
are like IRAS FSC10214+4724, these sources will have observed magnitude
$r\sim 25$ mag, and their IR fluxes will be of order 3mJy at $25 \mu$m
and 7 mJy at $60 \mu$m. To these flux levels, models of the IR galaxy
population with strong luminosity evolution (Hacking and Soifer, 1991)
predict a few hundred sources per square degree, in agreement with this
estimate. Of course this is only an order of magnitude estimate
because it depends on extrapolation from a single serendipitously
discovered object, and on the relative redshift distributions of
IR-luminous and radio galaxies. Optical field galaxy redshift surveys
now underway with the Keck Telescope are approaching this depth (J.
Cohen, private communication; UC DEEP collaboration, private
communication), and IR imaging surveys to well beyond these levels are
envisioned with ISO, WIRE, and SIRTF, so this very uncertain prediction
may be testable in the near future.
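The order-of-magnitude parent-population estimate above can be reproduced with a few lines of arithmetic; this sketch simply encodes the stated assumptions (lensing rate $\sim1/500$ and $P({\bf \rm M})\sim{\bf \rm M}^{-2}$ given lensing) and is no more accurate than they are:

```python
import math

SQ_DEG_PER_SR = (180 / math.pi) ** 2  # ~3282.8 square degrees per steradian

def parent_population_per_sq_deg(n_found, survey_sr, lensing_rate,
                                 magnification):
    """Implied surface density of the unlensed parent population,
    given n_found lensed objects of the stated magnification in a
    survey of survey_sr steradians."""
    p = lensing_rate * magnification ** -2   # P(lensed with ~M)
    area_deg2 = survey_sr * SQ_DEG_PER_SR
    return n_found / (p * area_deg2)
```

Plugging in one object, 0.2 sr, a rate of $1/500$, and a magnification of 30 returns $\sim700$ objects per square degree, in line with the $\sim800$ quoted in the text.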
\section{Summary}
We have obtained a $0.8\mu$m image of the $z=2.286$ IRAS source
FSC10214+4724 with the {\it HST} WFPC2 Planetary Camera, with $0''\!.1$
resolution and high signal to noise. We find the following:
1) The source appears as an unresolved ($< 0''\!.06$ wide) arc
$0''\!.7$ long, with significant substructure along its length. The
arc is roughly centered on a galaxy $1''\!.18$ to the north
(component~2), and a faint unresolved component (component~5) is
clearly detected $0''\!.43$ north of component~2. Two other galaxies
(components 3 and 4) are evident within a few arcseconds of the IRAS
source. This morphological configuration is characteristic of a
gravitationally lensed system, in which the arc and component~5 are
images of a single background source produced by the potential of the
foreground component~2.
2) The surface brightness profile of component~2 is well matched by a
de Vaucouleurs profile, characteristic of an elliptical galaxy with an
effective radius of $1''\!.27$. There is evidence for excess emission
above the de Vaucouleurs profile near the radius of the arc.
3) The flux ratio of the arc to the component~5 is $\sim 100$, implying
magnification in the {\it HST} image of the background source by
roughly this amount.
4) A detailed lensing model, which reproduces the observed morphology
and relative flux of the arc and counterimage, correctly predicts the
position angle for component~2. Better agreement is found with the
observed ellipticity of component~2 if component~3 is included in the
lensing potential. The model predicts reasonable values for the mass
and velocity dispersion of component~2.
5) If component~2 is an elliptical galaxy, its spectral energy
distribution is inconsistent with it being at $z=2.286$, and $z=0.9$ is
preferred. The surface brightness profile of component~2 implies a
redshift between 0.6 and 1.2. From the lensing model, for $z \sim
0.9$, the central mass-to-light ratio for component~2 is $(M/L)_B = 8
M_{\sun}/L_{\sun}$, the velocity dispersion $\sigma_v = 270$ km
s$^{-1}$, and the total blue luminosity $L_B = 1.4 \times 10^{11}
L_{\sun} \sim 4 L*$.
6) The model predicts an intrinsic radius of $\sim 0''\!.005$ (40 pc)
for the background source at $0.25 \mu$m rest wavelength. Triple
structure in the arc is obscured by this source size, but may become
apparent at high resolution in other bandpasses. The larger size of
the arc observed at $K$ implies an intrinsic source radius of
$0''\!.25$ in the corresponding emitting bandpass. A source of radius
$> 0''\!.5$ would produce a ring of emission connecting the arc and
component~5. This may account for the excess emission seen in the
surface brightness profile of component~2. The $H\alpha$ and radio
continuum morphologies appear similar to that of the $0.8\mu$m arc,
implying a similar source size for the narrow line, UV continuum, and
radio continuum emission.
7) The minimum source size for an optically thick blackbody source
producing the bulk of the bolometric luminosity is $\sim0''\!.03$ (240
pc), implying a bolometric magnification of $\sim30$. The background
lensed source then has an intrinsic luminosity $\sim 2 \times 10^{13}
L_{\sun}$. Thus IRAS FSC10214+4724 is not the most luminous object in
the Universe, but it remains among the most luminous in the IRAS
catalog.
8) The expected incidence of 30-fold gravitational magnification is
low enough to suggest that FSC10214+4724 represents an underlying
population of $\sim800$ compact objects per square degree with optical
magnitude $r\sim 25$ mag and $F_{60\mu{\rm m}} \sim 7$mJy.
\clearpage
\acknowledgments
We thank Mark Dickinson for help with the $R_e - <\mu_B>_e$ technique
for estimating $z$ and in particular for supplying the Sandage \&
Perelmutter data in electronic form, Adam Stanford for calculating
K-corrections and general assistance with STSDAS, and Roger Blandford
for help with lens modelling. We acknowledge helpful discussions with
James Graham, Michael Liu, Tom Broadhurst, Joseph Leh\'ar, and Joseph
Miller. The ideas of {\it rotating} (rather than merely flipping)
component 2 in section~\ref{Profiles} and of a magnification vs. radius
plot (cf.\ Figure~5) were suggested by Broadhurst and Leh\'ar. An
anonymous referee reminded us of the sensitivity of the derived
magnification to the isothermal profile assumption. This research was
supported by NASA through a grant awarded by STScI, which is operated
by AURA under NASA contract NAS 5-26555. Portions of the research
described in this paper were carried out by the Jet Propulsion
Laboratory, California Institute of Technology, under a contract with
NASA.
\clearpage
\section*{Abstract}
We study the infinite time dynamics of a class of nonlinear Schr\"odinger / Gross-Pitaevskii equations. In a previous paper, \cite{GaWe}, we proved the asymptotic stability of the nonlinear ground state in a general situation which admits degenerate {\it neutral modes} of arbitrary finite multiplicity, a typical situation in systems with symmetry. Neutral modes correspond to
purely imaginary (neutrally stable) point spectrum of the
linearization of the Hamiltonian PDE about a critical point. In particular, a small perturbation of the nonlinear ground state, which typically excites such neutral modes and radiation, will evolve toward an asymptotic nonlinear ground state soliton plus decaying neutral modes plus decaying radiation. In the present article, we give a much more detailed, in fact quantitative, picture of the asymptotic evolution. Specifically, we prove an {\it equipartition law}:\\
The asymptotic soliton which emerges, $\phi^{\lambda_\infty}$, has a mass equal to the initial soliton mass plus
one half of the mass, $|z_0|^2$, contained in the initially perturbing neutral modes:
\begin{equation}
\|\phi^{\lambda_\infty}\|_{L^2}^2\ =\ \|\phi^{\lambda_0}\|_{L^2}^2\ +\ \frac{1}{2}|z_0|^2\ +\ o(|z_0|^2)\
\nn\end{equation}
\section{Introduction}
In this paper we study the nonlinear Schr\"odinger / Gross-Pitaevskii (NLS/GP) equations in $\mathbb{R}^{3}$
\begin{align}\label{eq:NLS}
i\D_t\psi&=-\Delta \psi+V\psi-|\psi|^{2\sigma}\psi,
\end{align}
where $\sigma\geq 1$, $V:\mathbb{R}^{3}\rightarrow \mathbb{R}$ is a real, smooth function decaying rapidly at spatial infinity.
%
We study the large time distribution of mass / energy of solutions with initial data
\begin{equation}
\psi(x,0)\ =\ \psi_0,
\label{data}
\end{equation}
which are sufficiently small in the ${H}^2(\mathbb{R}^{3})$ norm.\footnote{Since our results are in the low energy / small amplitude regime, our analysis goes through without change for nonlinearities of the form $+g|\psi|^{2\sigma}\psi$ for any fixed real $g$. Here, we have taken $g=1$.}
NLS/GP arises in many physical contexts. In quantum physics, it describes a mean-field limit, $N\to\infty$, of the linear quantum description of $N-$ weakly interacting bosons. Here, $\psi$ is a collective wave-function and $V$, a trapping potential, and the nonlinear potential arises due to the collective effect of many quantum particles on a representative
particle \cite{Pitaevskii-Stringari:03,ESY:07}. In classical electromagnetics and nonlinear optics, NLS/GP arises via the {\it paraxial approximation} to Maxwell's equations, and governs the slowly varying envelope, $\psi$, of a nearly monochromatic beam of light propagating through a waveguide \cite{Boyd:08,Newell-Maloney:03}. The waveguide has a linear refractive index profile, determining the potential, $V$, and a cubic ($\sigma=1$) nonlinear refractive index, due to the optical Kerr effect.
NLS/GP is an infinite-dimensional Hamiltonian system and a unitary evolution in $L^2(\mathbb{R}^3)$. In the $N$-body quantum setting the time-invariant $L^2$ norm corresponds to the conservation of mass. In the electromagnetic setting, it is the conservation of energy (optical power). In this paper, we prove an equipartition law
(Theorem \ref{THM:MassTransfer}) for the $L^2$ mass / energy of
small (weakly nonlinear) solutions. Hence, we may refer to this result as equipartition of energy or equipartition of mass.
\bigskip
The mathematical set-up is as follows. We choose a spatially decaying potential $V $ for which the Schr\"odinger operator, $-\Delta+V$, has only two negative eigenvalues
\begin{equation}
e_0\ <\ e_1<0.
\nn\end{equation}
$e_1$ is chosen to be closer to the continuous spectrum than to $e_0$,
(permitting coupling via nonlinearity of discrete and continuum modes at quadratic order in the nonlinear coupling coefficient, $g$):
\begin{equation}
2e_1\ -\ e_0\ >\ 0.
\nn\end{equation}
The excited state eigenvalue $e_1$ may be degenerate with multiplicity $N$. (In Section \ref{SEC:summary}, we allow for nearly degenerate excited state eigenvalues.) Denote the corresponding eigenvectors by
\begin{equation}\label{eq:excitedMode}
\phi_{lin},\ \ \xi_1^{lin}, \cdots,\ \xi_{N}^{lin}.
\end{equation}
For NLS/GP, \eqref{eq:NLS}, there is a family of {\it nonlinear ground states} which bifurcates from the zero solution in the direction of $\phi_{lin}$. The excited state eigenvectors are manifested as neutral modes (time periodic states with non-zero frequency) of the linearized NLS/GP equation about the ground state family; see Section \ref{SEC:GroundState}.
More specifically, there exists an open interval $\mathcal{I}$, with $e_0$ as an endpoint, such that for any
$\lambda\in \mathcal{I}$, NLS/GP~\eqref{eq:NLS} has solutions of the form
\begin{equation}\label{eq:groundstate}
\psi(x,t)=e^{i\lambda t}\phi^{\lambda}(x),
\end{equation}
where $\phi^{\lambda}$ is asymptotically collinear to $\phi_{lin}$ for small $H^2$ norm and $\lambda\to-e_0,\ \lambda\in\mathcal{I}$.
The excited state
eigenvalues give rise, in the linear approximation, to neutral modes, $(\xi,\eta)^T$, and therefore to linearized time-dependent solutions which are undamped (neutral) oscillations about $\phi^\lambda$:
\begin{equation}\label{eq:LinApp}
e^{i\lambda t} \left(\ \phi^{\lambda} +\ (\Re z)\cdot\xi\ +\ i(\Im z)\cdot\eta\
\ \right)
\end{equation}
where $z\in\mathbb{C}^N$.
%
In ~\cite{GaWe}, also referred to in this paper as \GW, we proved the asymptotic stability of the ground states. Namely, if the initial condition is of the form
\begin{equation}\label{eq:asymStab}
\psi_0= e^{i\gamma_0}[\phi^{\lambda_0}+ R_0]
\end{equation} for some $\gamma_0\in\mathbb{R}$ and $R_0:\mathbb{R}^3\rightarrow \mathbb{C}$ satisfying $\|\langle x\rangle^{4}R_0\|_{H^2}\ll \|\phi^{\lambda_0}\|_{2},$ then generically there exists a $\lambda_{\infty}\in \mathcal{I}$ such that
\begin{equation}\label{eq:asympto2}
\min_{\gamma\in \mathbb{R}}\|\psi(t)-e^{i\gamma}\phi^{\lambda_{\infty}}\|_{\infty}\rightarrow 0\ \text{as}\ t\rightarrow \infty\ .
\end{equation}
In particular, the neutral oscillatory modes eventually damp to zero as $t\to\infty$ via the coupling and transfer of their energy to the nonlinear ground state and to continuum radiation modes. When the neutral mode is simple, i.e. $N=1$ in ~\eqref{eq:excitedMode}, similar results have been obtained in \cite{SW:99,TsaiYau02,BuSu,SW:04,GS2,Cuccagna:03}.
In the present paper, we seek a more detailed, quantitative description of the large time dynamics.
We consider a special class of initial conditions to which the results of \GW, in particular, \eqref{eq:asympto2} apply:
$$\psi_0=e^{i\gamma_0}\phi^{\lambda_0}\ +\ {\rm neutral\ modes}\ +\ R_0$$ with
$$\|\phi^{\lambda_0}\|_{2}\gg\ \|{\rm neutral\ modes}\|_2\ \gg \|\langle x\rangle^{4}R_0\|_{H^2}.$$
The main result of this paper, proved by a considerable refinement of the analysis in \cite{GaWe}, is that the emerging asymptotic ground state has, up to higher order corrections, a mass equal to the initial soliton mass plus one-half of the initial excited state mass:
\begin{equation}
\|\phi^{\lambda_{\infty}}\|_{2}^2\ =\ \|\phi^{\lambda_0}\|_{2}^{2}\ +\
\frac{1}{2}\ \|{\rm neutral\ modes}\|_2^2\ \left(\ 1+o(1)\ \right).
\nn\end{equation}
Thus, half of the excited state mass goes into forming a limiting, more massive, ground state, $\phi^{\lambda_\infty}$, and the other half of the excited state mass is radiated away to infinity. We call this the {\it mass-} or {\it energy-equipartition}. That this phenomenon is to be expected was discussed in
~\cite{SW:99,SW:04,SW-PRL:05}. The main achievement of the present work is a {\it rigorous quantification} of the asymptotic ($t\to\infty$) mass / energy distribution. \bigskip
The paper is organized as follows: In Section ~\ref{SEC:GroundState} we review results on the existence and properties of the ground state manifold, and on the spectral properties of the linearized NLS/GP operator about the ground state. In Section ~\ref{SEC:MainTheorem} we state and discuss Theorem \ref{THM:MassTransfer} on equipartition. In Section ~\ref{sec:ProveMainTHM} we present the proofs, using technical estimates established in the appendices, {\it e.g.} Sections ~\ref{sec:approxPos}-~\ref{sec:compare}. In Section ~\ref{SEC:summary}, we present a generalization of Theorem \ref{THM:MassTransfer} to the case of nearly degenerate excited state eigenvalues, together with an outline of its proof. A more extensive list of references and a discussion of related work on NLS/GP appears in \GW~.
\section*{Acknowledgments} ZG was supported, in part, by a Natural Sciences and Engineering Research Council of Canada (NSERC) Postdoctoral Fellowship and NSF Grant DMS-04-12305 .
MIW was supported, in part, by U.S.
NSF Grants DMS-04-12305, DMS-07-07850 and DMS-10-08855. This work was initiated while ZG was a visitor at the Department of Applied Physics and Applied Mathematics at Columbia University, and was continued while he was a visiting postdoctoral fellow at the Department of Mathematics of Princeton University.
\subsection{Notation}\label{notation}
\begin{itemize}
\item[(1)]\
$
\alpha_+ = \max\{\alpha,0\},\ \ [\tau]=\max\ \{\eta\in \mathbb{Z}\ :\ \eta\le\tau\}
$
\item[(2)]\ $\Re z$ = real part of $z$,\ \ $\Im z$ = imaginary part of $z$
\item[(3)] Multi-indices
\begin{align}
&z\ =\ (z_1,\dots, z_N) \in \mathbb{C}^N,\ \bar{z}=(\bar{z}_1,\dots, \bar{z}_N)\\
&a\in \mathbb{N}^N,\ z^a=z_1^{a_1}\cdot\cdot\cdot z_N^{a_N}\nonumber\\
&|a|\ =\ |a_1|\ +\ \dots\ +\ |a_N|
\nonumber
\end{align}
\item[(4)] $Q_{m,n}$ denotes an expression of the form
\begin{equation}
Q_{m,n}\ = \sum_{|a|=m,\ |b|=n}\ q_{a,b}\ z^a\ \bar{z}^b\
= \sum_{|a|=m,\ |b|=n}\ q_{a,b}\prod_{k=1}^{N}\ z_k^{a_k}\bar{z_k}^{b_k} \nonumber
\end{equation}
\item[(5)] \begin{equation}
J\ =\ \left(\begin{array}{cc} 0 & 1\\ -1 & 0\end{array}\right),
\ \ H\ =\ \left(\begin{array}{cc} L_+ & 0\\ 0 & L_-\end{array}\right),\ \
L=JH=\left(\begin{array}{cc} 0 & L_-\\ -L_+ & 0\end{array}\right)
\nonumber\end{equation}
\item[(6)] $\sigma_{ess}(L)=\sigma_c(L)$ is the essential (continuous) spectrum of $L$,\\ $\sigma_{disc}(L)$ is the discrete spectrum of $L$.
\item[(7)]\ Riesz projections: $P_{disc}(L)$ and $P_c(L)=I-P_{disc}(L)$\\
$P_{disc}(L)$ projects onto the discrete spectral part of $L$\\
$P_c(L)$ projects onto the continuous spectral part of $L$
\item[(8)]\ $\langle f,g\rangle = \int\ f(x)\ {\overline{g(x)}}\ dx $
\item[(9)]\ $\| f\|_p^p=\ \int_{\mathbb{R}^3}\ |f(x)|^p\ dx,\ \ 1\le p\le\infty$
\item[(10)]\ $\| f\|_{H^s(\mathbb{R}^3)}^2=\ \int \left|\left(I-\Delta_x\right)^{s\over2}f(x)\right|^2\ dx$
\item[(11)]\ $\| f\|_{\cH^{s,\nu}}^2\ =\ \int_{\mathbb{R}^3}\ \left|\langle x\rangle^\nu\ (I-\Delta)^{\frac{s}{2}}f(x)\right|^2\ dx$
\end{itemize}
\section{Review of the set up}\label{SEC:GroundState}
In this section we review the setting presented in detail in \cite{GaWe}.
\subsection{Assumptions on the potential, $V(x)$}\label{Vassumptions}
We assume that the Schr\"odinger operator $-\Delta+V$ has
the following properties:
\begin{enumerate}
\item[(V1)]
$V$ is real valued and decays sufficiently rapidly, {\it e.g.} exponentially, as $|x|$ tends to infinity.\\
\item[(V2)] $-\Delta+V$ has two eigenvalues $e_{0}<e_{1}<0$.\\
$e_{0}$ is the lowest eigenvalue with
ground state $\phi_{lin}>0$, the eigenvalue $e_{1}$ is degenerate
with multiplicity $N$ and eigenvectors
$\xi_{1}^{lin},\xi_{2}^{lin},\cdot\cdot\cdot,\xi_{N}^{lin}.$
\end{enumerate}
\subsection{Bifurcation of ground states from $e_0$}
\begin{proposition}\label{bif-of-gs}
Suppose that the linear operator $-\Delta+V$ satisfies the
conditions above in subsection \ref{Vassumptions}. Then there exists a constant
$\delta_{0}>0$ and a nonempty interval $\mathcal{I}\subset [e_{0}-\delta_{0},
e_{0})$ such that for any $\lambda \in \mathcal{I}$, NLS/GP (~\ref{eq:NLS}) has solutions of the form
$\psi(x,t)=e^{i\lambda t}\phi^{\lambda}\in {L}^{2}$\
with
\begin{equation}\label{eqn:perturb}
\phi^{\lambda}=\delta\left(\ \phi_{lin}+\cO(\delta^{2\sigma})\ \right)\ {\rm and}\
\delta=\delta(\lambda)=|e_{0}+\lambda|^{\frac{1}{2\sigma}}\left(\int \phi^{2\sigma+2}_{lin}\right)^{-\frac{1}{2\sigma}}.
\end{equation}
\end{proposition}
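In particular, ~\eqref{eqn:perturb} determines the scaling of the ground state mass as the bifurcation parameter varies:
\begin{equation}
\|\phi^{\lambda}\|_{2}^{2}\ =\ \delta^{2}(\lambda)\left(\ \|\phi_{lin}\|_{2}^{2}\ +\ \cO(\delta^{2\sigma})\ \right)\ =\ |e_{0}+\lambda|^{\frac{1}{\sigma}}\left(\int \phi^{2\sigma+2}_{lin}\right)^{-\frac{1}{\sigma}}\left(\ \|\phi_{lin}\|_{2}^{2}\ +\ \cO(\delta^{2\sigma})\ \right).
\nn\end{equation}
Thus, in the cubic case $\sigma=1$, the soliton mass depends, to leading order, linearly on $|e_{0}+\lambda|$, so that the modulation parameter $\lambda(t)$ tracks the mass transferred to the ground state.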
\subsection{Linearization of NLS/GP about the ground state}
If we write $\psi(x,t)=e^{i\lambda t}\left(\ \phi^\lambda\ + u +\ iv\ \right)$, then we find the linearized perturbation equation to be:
\begin{equation}
\frac{\D}{\D t}\left(\begin{array}{ll} u\\ v \end{array} \right)\ =\
L(\lambda)\ \left(\begin{array}{ll} u\\ v \end{array} \right)\ =\ JH(\lambda)\ \left(\begin{array}{ll} u\\ v \end{array} \right),
\label{linearized}
\end{equation}
where
\begin{equation}\label{eq:opera}
L(\lambda):=
\left(\begin{array}{lll}0&L_{-}(\lambda)\\
-L_{+}(\lambda)&0 \end{array} \right)
= \left(\begin{array}{lll}0&1\\
-1&0 \end{array} \right)\ \left(\begin{array}{lll}L_+(\lambda)&0\\
0& L_{-}(\lambda) \end{array} \right)\ \equiv\ JH(\lambda)\ .
\end{equation}
Here, $L_+$ and $L_-$ are given by:
\begin{align}
L_{-}(\lambda)&\ :=\ -\Delta+\lambda+V-(\phi^{\lambda})^{2\sigma}\nn\\L_{+}(\lambda)&\ :=-\Delta+\lambda+V-(2\sigma+1)(\phi^{\lambda})^{2\sigma}\label{Lpm}
\end{align}
\nit The following results on the point spectrum of $L(\lambda)$ appear in ~\cite{GaWe};
see Proposition 4.1, p. 275 and Propositions 5.1-5.2, p. 277:
\begin{lemma}\label{LM:NearLinear}
Let $L(\lambda)$, or more explicitly, $L(\lambda(\delta),\delta)$
denote the linearized operator about the bifurcating state
$\phi^\lambda, \lambda=\lambda(\delta)$. Note that $\lambda(0)=
-e_0$. Corresponding to the degenerate eigenvalue, $e_1$, of
$-\Delta+V$, the matrix operator $L(\lambda=-e_0,\delta=0)$ has
degenerate eigenvalues $\pm iE(-e_0)=\pm i(e_1-e_0)$, each of
multiplicity $N$. For $\delta>0$ and small these bifurcate to
(possibly degenerate) eigenvalues $\pm iE_1(\lambda),\dots,$ $\pm
iE_N(\lambda)$ with neutral modes
$$\left(
\begin{array}{lll}
\xi_{1}\\
\pm i\eta_{1}
\end{array}
\right),\ \left(
\begin{array}{lll}
\xi_{2}\\
\pm i\eta_{2}
\end{array}
\right),\ \cdot\cdot\cdot, \left(
\begin{array}{lll}
\xi_{N}\\
\pm i\eta_{N}
\end{array}
\right)$$ satisfying the estimates
\begin{equation}\label{eq:Orthogonality}
\langle \xi_{m},\eta_{n}\rangle =\delta_{m,n},\ \langle \xi_{m},\phi^{\lambda}\rangle=\langle \eta_{m},\partial_{\lambda}\phi^{\lambda}\rangle=0
\end{equation}
and
\begin{equation}\label{eq:GoToNear}
0\not=\displaystyle\lim_{\lambda\rightarrow
e_{0}}\xi_{n}=\lim_{\lambda\rightarrow e_{0}}\eta_{n}\in
span\{\xi^{lin}_{n},\ n=1,2,\cdot\cdot\cdot,N\}\ \text{in}\
H^{k}\ \text{spaces for any}\ k>0.
\end{equation}
\end{lemma}
\begin{remark}\label{remark:2ndorderres} Since $E(-e_0)=e_1-e_0$, it follows that if $2e_1-e_0>0$, then for sufficiently small $\delta$, $2E_n(\lambda)>\lambda,\ n=1,2,\cdot\cdot\cdot,N$. This ensures nonlinear coupling of discrete to continuous spectrum at second order (in the nonlinearity coefficient, $g$). Thus, to ensure such coupling, we assume:
\begin{itemize}
\item[(V3)]
\begin{equation} 2e_1-e_0>0.\ \label{2e1me0}
\end{equation}
\end{itemize}
\end{remark}
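The condition of Remark \ref{remark:2ndorderres} can be checked directly at the bifurcation point $\lambda=-e_0$ (where $\lambda(0)=-e_0$, cf. Lemma \ref{LM:NearLinear}):
\begin{equation}
2E(-e_0)\ -\ (-e_0)\ =\ 2(e_1-e_0)\ +\ e_0\ =\ 2e_1\ -\ e_0\ >\ 0,
\nn\end{equation}
which is precisely hypothesis (V3); by continuity the strict inequality $2E_n(\lambda)>\lambda$ persists for $\delta$ sufficiently small.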
\begin{lemma}\label{mainLem2}
Assume the potential $V=V(|x|)$ and the functions $\xi^{lin}_{n}$ admit the form $\xi^{lin}_{n}=\frac{x_{n}}{|x|}\xi^{lin}(|x|)$ for some function $\xi^{lin}$, then $\phi^{\lambda}$, hence $\partial_{\lambda}\phi^{\lambda}$, is spherically symmetric, $E_{n}=E_{1}$ for any $n=1,2,\cdot\cdot\cdot,N=d$ and we can choose $\xi_{n},\eta_{n}$ such that $\xi_{n}=\frac{x_{n}}{|x|}\xi(|x|)$ and $\eta_{n}=\frac{x_{n}}{|x|}\eta(|x|)$ for some real functions $\xi$ and $\eta.$
\end{lemma}
\bigskip
In this paper we make the following assumptions on the spectrum of the operator
$L(\lambda):$
\begin{enumerate}
\item[{\bf (SA)}] The linearized operator $L(\lambda)$ has discrete spectrum given by:
\subitem - an eigenvalue $0$
with
generalized eigenspace spanned by
$\left\{\ \left(
\begin{array}{lll}
0\\
\phi^{\lambda}
\end{array}
\right)\ ,\ \left(
\begin{array}{lll}
\partial_{\lambda}\phi^{\lambda}\\
0
\end{array}
\right)\ \right\}$
\subitem - {\it neutral eigenvalues} $\pm iE(\lambda),\ E(\lambda)>0$,\\ satisfying the condition $2E(\lambda)>\lambda$ and with corresponding eigenvectors
$\left|
\begin{array}{lll}
\xi_{n}\\
\pm i\eta_{n}
\end{array}
\right\rangle,\ n=1,\dots,N $.
\end{enumerate}
For the non self-adjoint operator $L(\lambda)$ the (Riesz)
projection onto the discrete spectrum subspace of $L(\lambda)$,
$P_{d}=P_{d}(L(\lambda))=P_{d}^\lambda$, is given explicitly in ~\cite{GaWe}, Proposition 5.6, p. 280:
\begin{equation}\label{eq:PdProjection}
\begin{array}{lll}
P_{d}&\equiv&\frac{2}{\partial_{\lambda}\| \phi^{\lambda}\|^{2}}\left(\ \left|
\begin{array}{lll}
0\\
\phi^{\lambda}
\end{array}
\right\rangle \left\langle
\begin{array}{lll}
0\\
\partial_{\lambda}\phi^{\lambda}
\end{array}
\right|\ +\ \left|
\begin{array}{lll}
\partial_{\lambda}\phi^{\lambda}\\
0
\end{array}
\right\rangle \left\langle
\begin{array}{lll}
\phi^{\lambda}\\
0
\end{array}
\right|\ \right)\\
& &\\
& &+\frac{1}{2}i\displaystyle\sum_{n=1}^{N}\left(\ \left|
\begin{array}{lll}
\xi_{n}\\
i\eta_{n}
\end{array}
\right\rangle\left\langle
\begin{array}{lll}
-i\eta_{n}\\
\xi_{n}
\end{array}
\right| \ -\ \left|
\begin{array}{lll}
\xi_{n}\\
-i\eta_{n}
\end{array}
\right\rangle\left\langle
\begin{array}{lll}
i\eta_{n}\\
\xi_{n}
\end{array}
\right|\ \right)\ .
\end{array}
\end{equation}
and the projection onto the essential spectrum by $P_{c}\ \equiv\ 1-P_{d}.$
\medskip
The large time analysis of NLS/GP requires good decay estimates on the linearized evolution operator, $e^{L(\lambda)t}P^\lambda_c$. An obstruction to such estimates is the presence of so-called threshold resonances
(see \cite{GaWe} and references therein), which we preclude with the following hypothesis.
\begin{enumerate}
\item[{\bf (Thresh$_\lambda$)}] Assume $L(\lambda)$ has no resonances at $\pm i\lambda$.
For small solitons, $\delta$ sufficiently small, ({\bf Thresh$_\lambda$}) follows from the absence of a zero energy resonance for $-\Delta+V$.
\subsection{ Second order (nonlinear) Fermi Golden Rule}\label{sec:fgr}
In this subsection we review the definitions and constructions presented in detail in ~\cite{GaWe}, pp. 281-282. The amplitudes and phases of the neutral modes are governed by the complex-valued vector parameter $z: \ \mathbb{R}^{+}\rightarrow \mathbb{C}^{N}$, which first arises in the linear approximation of the solution $\psi$; see {\it e.g.} ~\eqref{eq:LinApp}. Its precise definition comes from the decomposition of the solution $\psi$ in ~\eqref{Decom}, under the condition ~\eqref{s-Rorthogonal}, below, from which it follows that
\begin{equation}
\partial_{t}z =-iE(\lambda) z -\Gamma( z ,\bar{ z }) z +\Lambda( z ,\bar{ z })
z\ +\ \cO\left((1+t)^{-\frac{3}{2}-\delta}\right),\ \ \delta>0\label{new-nf1}
\end{equation}
where $\pm iE(\lambda)$ are the complex conjugate $N$-fold degenerate
neutral eigenfrequencies of $L(\lambda)$, $\Gamma$ is non-negative symmetric and $\Lambda$ is skew-symmetric.
In what follows we define the non-negative, Fermi Golden Rule matrix, $\Gamma$.
Define vector functions $G_{k},\ k=1,2,\cdot\cdot\cdot, N$, as
\begin{equation}\label{eq:Fk2}
G_{k}(z,x):=\left(
\begin{array}{lll}
B(k)\\
D(k)
\end{array}
\right)
\end{equation} with the functions $B(k)$ and $D(k)$ defined as $$\begin{array}{lll}
B(k)&:=&-i\sigma(\phi^{\lambda})^{2\sigma-1} \ \ \left[\ \rhoo\ \eta_{k}+\omegaa\ \xi_{k}\ \right]\ , \\
D(k)&:=&-\sigma(\phi^{\lambda})^{2\sigma-1}
\left[\ 3\rhoo\xi_{k}-\omegaa\eta_{k}\ \right]\
-\ 2\sigma(\sigma-1)(\phi^{\lambda})^{2\sigma-1}\ \rhoo\xi_{k}\ ,
\end{array}$$
where
$$
z\cdot\xi\ :=\ \displaystyle\sum_{n=1}^{N}z_{n}\xi_{n},\ z\cdot\eta\ :=\ \displaystyle\sum_{n=1}^{N}z_{n}\eta_{n}.
$$
In terms of the column 2-vector, $G_{k}$, we define
an $N \times N$ matrix $Z(z,\bar{z})$ as
\begin{equation}\label{eq:zMatrix}
Z(z,\bar{z})=(Z^{(k,l)}(z,\bar{z})),\ \ 1\le k,l\le N
\end{equation} and
\begin{equation}
Z^{(k,l)}\ =\ -\left\langle
(L(\lambda)+2iE(\lambda)-0)^{-1}P_{c}G_{l}, iJP_cG_{k}\right\rangle
\label{Zkl-sym}
\end{equation}
Finally, we define $\Gamma(z,\bar{z})$ as follows:
\begin{equation}
\Gamma(z,\bar{z})\ :=\ \frac{1}{2}[Z(z,\bar{z})+Z^{*}(z,\bar{z})].
\label{Gammadef}
\end{equation}
Thus,
\begin{equation}
[\ \Gamma(z,\bar{z})\ ]_{kl}\ =\ -\ \Re\ \left\langle
(L(\lambda)+2iE(\lambda)-0)^{-1}P_{c}G_{l}, iJP_cG_{k}\right\rangle.
\label{Gamma-kl}
\end{equation}
By ~\eqref{new-nf1} and ~\eqref{Gamma-kl} we find
\begin{equation}
\partial_{t}\ |z(t)|^2\ =\ -2\ z^*\ \Gamma(z,\bar{z})\ z\ +\ \dots.
\label{energy-id}\end{equation}
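The passage from ~\eqref{new-nf1} to ~\eqref{energy-id} is a short verification: since $E(\lambda)$ is real, $\Gamma$ is self-adjoint and $\Lambda$ is skew, the quantity $z^{*}\Gamma z$ is real while $-iE(\lambda)|z|^{2}$ and $z^{*}\Lambda z$ are purely imaginary. Hence
\begin{equation}
\partial_{t}\ |z|^{2}\ =\ 2\Re\left(\ z^{*}\partial_{t}z\ \right)\ =\ 2\Re\left(\ -iE(\lambda)|z|^{2}\ -\ z^{*}\Gamma(z,\bar z)\,z\ +\ z^{*}\Lambda(z,\bar z)\,z\ \right)\ +\ \dots\ =\ -2\ z^{*}\ \Gamma(z,\bar{z})\ z\ +\ \dots,
\nn\end{equation}
with the dots collecting the $\cO\left((1+t)^{-\frac{3}{2}-\delta}\right)$ remainder contributions.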
In \GW $\Gamma$ was shown to be non-negative and we require it to be positive definite.
In particular, we shall require the following Fermi Golden Rule hypothesis.\\ \\ Let $P_{c}^{lin}$ denote the spectral projection onto the essential spectrum of $-\Delta+V$. Then
\begin{enumerate}
\item[{\bf (FGR)}] We assume there exists a constant $C>0$ such that
$$-\Re\langle i[-\Delta+V+\lambda-2E(\lambda)-i0]^{-1}P_{c}^{lin}(\phi_{lin})^{2\sigma-1}(z\cdot \xi^{lin})^{2},(\phi_{lin})^{2\sigma-1}(z\cdot \xi^{lin})^{2}\rangle\ge C|z|^4$$ for any $z\in \mathbb{C}^{N}.$
\end{enumerate} Assumption {\bf (FGR)} implies that there exists a constant $C_1>0$ such that for any $z\in \mathbb{C}^{N}$
\begin{equation}\label{eq:FGR}
z^*\ \Gamma(z,\bar{z})\ z\geq C_1\|\phi^{\lambda}\|_{\infty}^{4\sigma-2} |z|^{4}.
\end{equation}
Note that for each fixed $z$, smallness of $|\lambda+e_0|$ together with ~\eqref{eqn:perturb} and ~\eqref{eq:GoToNear} imply that the leading term in $z^*\ \Gamma(z,\bar{z})\ z$ is
\begin{equation}\label{eq:Gamma0}
\begin{array}{lll}
& &z^* \Gamma_0(z,\bar{z})\ z\\
&\equiv &-
2\sigma^{2}(\sigma+1)^{2}\delta^{4\sigma-2}(\lambda)\ \times \ \\
& &\Re\langle i[-\Delta+V+\lambda-2E(\lambda)-i0]^{-1}P_{c}^{lin}(\phi_{lin})^{2\sigma-1}(z\cdot\xi^{lin})^{2},
(\phi_{lin})^{2\sigma-1}(z\cdot\xi^{lin})^{2}\rangle.
\end{array}
\end{equation}
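Note that the lower bounds ~\eqref{eq:FGR} and ~\eqref{eq:Gamma0} are consistent in order of magnitude: by ~\eqref{eqn:perturb},
\begin{equation}
\|\phi^{\lambda}\|_{\infty}^{4\sigma-2}\ =\ \delta^{4\sigma-2}(\lambda)\ \|\phi_{lin}\|_{\infty}^{4\sigma-2}\left(\ 1+\cO(\delta^{2\sigma})\ \right),
\nn\end{equation}
matching the prefactor $\delta^{4\sigma-2}(\lambda)$ appearing in ~\eqref{eq:Gamma0}.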
\section{Main Theorem}\label{SEC:MainTheorem}
In this section we state our main results, Theorems \ref{THM:MassTransfer} and \ref{THM:MainTheorem2}.
\begin{theorem}\label{THM:MassTransfer}
Assume a cubic nonlinearity, $\sigma=1$, in ~\eqref{eq:NLS}. If the spectral conditions {\bf (SA)}, {\bf (Thresh$_\lambda$)} and {\bf (FGR)} are satisfied,
then there exists a constant $\delta$ such that if the initial condition $\psi_{0}$ satisfies the condition
$$\psi_{0}(x)=e^{i\gamma_{0}}[\phi^{\lambda_{0}}+\alpha_{0}\cdot\xi+i\beta_{0}\cdot \eta +R_{0}]$$
for some real constants $\gamma_{0},\ \lambda_{0}$, real $N$-vectors $\alpha_{0}$ and $\beta_{0}$, and a function $R_0:\mathbb{R}^{3}\rightarrow \mathbb{C}$, such that for some $\epsilon\le\delta$:\\
$|\lambda_{0}-|e_{0}||\leq \epsilon,\ \ |\alpha_{0}|+|\beta_{0}|\ \lesssim \epsilon \|\phi^{\lambda_{0}}\|_{2}$, and
$\|\langle x\rangle^{4}R_{0}\|_{H^2}\lesssim |\alpha_{0}|^{2}+|\beta_{0}|^{2}=\mathcal{O}(\epsilon^2)$, then there exist smooth functions
\begin{align}
&\lambda(t):\mathbb{R}^{+}\rightarrow
\mathcal{I},\ \ \ \gamma(t): \mathbb{R}^{+}\rightarrow
\mathbb{R},\ z(t):\mathbb{R}^{+}\rightarrow \mathbb{C}^{N},\nn\\
&\ \ R(x,t):\mathbb{R}^{3}\times\mathbb{R}^{+}\rightarrow
\mathbb{C}
\nn\end{align}
such that the solution of NLS evolves in the form:
\begin{align}\label{Decom}
\psi(x,t)&\ =\ e^{i\int_{0}^{t}\lambda(s)ds}e^{i\gamma(t)}\nn\\
&\ \ \ \ \ \ \ \times[\ \ \phi^{\lambda}+a_{1}(z,\bar{z})
\D_\lambda\phi^{\lambda}+ia_{2}(z,\bar{z})\phi^{\lambda}\ +\ (\Re\ \tilde{z})\cdot\xi\ +\ i(\Im\ \tilde{z})\cdot\eta\ +\ R\ \ ],\end{align}
where $\lim_{t\rightarrow \infty}\lambda(t)=\lambda_{\infty},$
for some $\lambda_{\infty}\in \mathcal{I}$.\\
Here, $a_{1}(z,\bar{z}),\ a_{2}(z,\bar{z}): \mathbb{C}^{N}\times\mathbb{C}^{N}\rightarrow \mathbb{R}$ and $\tilde{z}-z: \mathbb{C}^{N}\times\mathbb{C}^{N}\rightarrow \mathbb{C}^{N}$
are some polynomials of $z$ and $\bar{z}$, beginning with terms of order $|z|^{2}$.
\begin{enumerate}
\item[(A)] The dynamics of mass/energy transfer is captured by the following reduced dynamical system for the key modulating parameters, $\lambda(t)$ and $z(t)$:
\begin{equation}\label{eq:IncreaseLambda}
\frac{d}{dt}\ \|\phi^{\lambda(t)}\|_2^2 =z^* \Gamma_0(z,\bar{z})\ z +\ \cS_\lambda(t),
\end{equation}
\begin{equation}\label{eq:DecayZ}
\frac{d}{dt}\ |z(t)|^2 = -2z^* \Gamma_0(z,\bar{z})\ z+\cS_{z}(t),
\end{equation}
where $z^*\Gamma_0(z,\bar{z})z$ is given in ~\eqref{eq:Gamma0}, and
\begin{equation}
\cS_\lambda(t)\lesssim\ (1+t)^{-\frac{19}{10}},\ \ {\rm and}\ \ \cS_z(t)\ \lesssim\ (1+t)^{-\frac{12}{5}}.\label{new-lam-z}
\end{equation}
Furthermore,
\begin{equation}
\int_0^\infty\ \cS_\lambda(\tau)\ d\tau,\ \ \ \ \int_{0}^{\infty} \cS_z(\tau)\ d\tau\ =\ o(|z_0|^2).
\label{SL1}\end{equation}
\item[(B)] $\vec{R}(t)=(\ \Re\ R(t)\ ,\ \Im\ R(t)\ )^T$ lies in the essential spectral part of $L(\lambda(t))$. Equivalently, $R(\cdot,t)$ satisfies the symplectic orthogonality conditions:
\begin{align}\label{s-Rorthogonal}
&\omega\langle R,i\phi^{\lambda}\rangle\ =\ \omega\langle
R,\partial_{\lambda}\phi^{\lambda}\rangle\ =\ 0, \nn\\
&\omega\langle
R,i\eta_{n}\rangle=\omega\langle R,\xi_{n}\rangle=0,\
n=1,2,\cdot\cdot\cdot, N,
\end{align}
where $\omega\langle X,Y\rangle:=\Im\int X\overline{Y}$.
\item[(C)] {\bf Decay estimates:} For any time $t\geq 0$
\begin{align}
&\|(1+x^{2})^{-\nu}\vec{R}(t)\|_{2}\leq C(\|\langle x\rangle^{4}\psi_0\|_{H^2})(1+t)^{-1},
\label{Rdecay}\\
&\|\vec{R}(t)\|_{{H}^2}\leq \epsilon_{\infty},
\label{eq:stability}\\
&|z(t)|\leq C(\|\langle x\rangle^{4}\psi_0\|_{H^2})(1+t)^{ -\frac{1}{2} }.
\label{z-decay}
\end{align}
\item[(D)] {\bf Mass / Energy equipartition:}\ Half of the mass of the neutral modes contributes to forming a more massive asymptotic ground state and half is radiated away
\begin{equation}\label{eq:Mass}
\|\phi^{\lambda_{\infty}}\|_{2}^{2}=
\|\phi^{\lambda_{0}}\|_{2}^{2}+\frac{1}{2}\left[\ |\alpha_{0}|^{2}+|\beta_{0}|^{2}\ \right]
+o\left(\ \left[|\alpha_{0}|^{2}+|\beta_{0}|^{2}\right]\ \right).
\end{equation}
\end{enumerate}
\end{theorem}
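The decay rate ~\eqref{z-decay} and the equipartition ~\eqref{eq:Mass} can both be read off from the following model computation (an illustration only: we drop the remainders $\cS_\lambda$, $\cS_z$ and replace $z^{*}\Gamma_0(z,\bar z)z$ by $\gamma_0|z|^{4}$ with a constant $\gamma_0>0$, as suggested by {\bf (FGR)}). Setting $u(t)=|z(t)|^{2}$, the system ~\eqref{eq:IncreaseLambda}--\eqref{eq:DecayZ} reduces to
\begin{equation}
\frac{du}{dt}\ =\ -2\gamma_{0}u^{2}\ \Longrightarrow\ u(t)\ =\ \frac{u(0)}{1+2\gamma_{0}u(0)\,t},
\nn\end{equation}
consistent with the rate $|z(t)|\lesssim (1+t)^{-\frac{1}{2}}$ of ~\eqref{z-decay}, while
\begin{equation}
\|\phi^{\lambda(t)}\|_{2}^{2}\ -\ \|\phi^{\lambda(0)}\|_{2}^{2}\ =\ \int_{0}^{t}\gamma_{0}\,u^{2}(s)\,ds\ =\ \frac{1}{2}\left(\ u(0)-u(t)\ \right)\ \longrightarrow\ \frac{1}{2}\,|z(0)|^{2}\quad {\rm as}\ t\to\infty,
\nn\end{equation}
the mass equipartition of Statement (D).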
The following result applies to the case where $\sigma>1$.
\begin{theorem}\label{THM:MainTheorem2}
Assume the general nonlinearity $\sigma >1$. Then statements (A)-(D) of Theorem \ref{THM:MassTransfer} hold provided, in addition to the assumptions of Theorem \ref{THM:MassTransfer}, we assume:
\begin{itemize}
\item[(1)] in the case where the neutral modes are degenerate ($N>1$), the potential $V$ is spherically symmetric and the eigenvectors $\xi^{lin}_{n},\ n=1,2,\cdots, N=d,$ admit the form $\xi_{n}^{lin}=\frac{x_{n}}{|x|}\xi(|x|)$ for some function $\xi.$
\item[(2)] $|\alpha_{0}|^{2}+|\beta_{0}|^{2}\leq [\|\phi^{\lambda_{0}}\|_{2}]^{C(\sigma)}$ for some sufficiently large constant $C(\sigma).$
\end{itemize}
\end{theorem}
The statements (B) and (C) are obtained in ~\cite{GaWe}: all except ~\eqref{eq:stability} are taken from Theorem 7.1, p. 284. Equation ~\eqref{eq:stability} comes from the proof (line 18, p. 306) that $\mathcal{R}_{4}(T):=\displaystyle\max_{0\leq t\leq T} \|\vec{R}(t)\|_{{H}^2}\ll 1$; $\mathcal{R}_{4}$ is defined in (11.2).
The bounds on $S_\lambda(t)$ and $S_z(t)$, ~\eqref{new-lam-z}, of statement (A) were proved in \cite{GaWe}; see equations (8-9) and (8-11) p. 286. For the estimate $|Remainder|\lesssim (1+t)^{-\frac{19}{10}}$, see line 9, p. 306. The remaining assertions in (A) will be reformulated as Theorems ~\ref{THM:Zequation} and ~\ref{THM:KeyTerm}, and proved in Section ~\ref{sec:ProveMainTHM}. Statement (D) is proved just below.
\begin{remark}
{\bf Mass equipartition:}
It is straightforward to interpret ~\eqref{eq:Mass} as implying equipartition of the neutral mode mass. Indeed, since $\phi^{\lambda}$ is orthogonal to $\xi_{m}$ (see ~\eqref{eq:Orthogonality}) and since mass is conserved for NLS/GP, i.e. $\|\psi(t)\|_2=\|\psi_0\|_2$, we have
\begin{equation}
\|\psi(\cdot,t)\|_{2}^{2}=\|\psi_0\|_{2}^{2}=\|\phi^{\lambda_{0}}\|_{2}^{2}+|\alpha_{0}|^{2}+|\beta_{0}|^{2}+o(|\alpha_{0}|^{2}+|\beta_{0}|^{2}),\ \ {\rm for\ all}\ t.\nn\end{equation}
The theorem implies that $\psi(\cdot,t)$ has a weak-$L^2$ limit, $\phi^{\lambda_\infty}$, whose mass is given by ~\eqref{eq:Mass}. Thus, half of the mass of the neutral modes is transferred to the ground state while the other half is radiated to infinity.\\ \\
\end{remark}
\nit We now use Statement (A) of Theorem
\ref{THM:MassTransfer} to prove Statement (D).\\ \\
\nit{\bf Proof of Mass equipartition:} Twice equation ~\eqref{eq:IncreaseLambda} plus equation ~\eqref{eq:DecayZ} yields:
\begin{equation}
\frac{d}{dt}\left(\ 2\left\|\phi^{\lambda(t)}\right\|_2^2\ +\ |z(t)|^2\ \right)\ =
\ 2\cS_\lambda(t)\ + \cS_z(t).
\label{lincomen}\end{equation}
Integration of (\ref{lincomen}) with respect to $t$ from zero to infinity
and use of the decay of $z(t)$, (\ref{z-decay}), imply
\begin{equation}
2\left\|\phi^{\lambda(\infty)}\right\|_2^2\ =\ 2\left\|\phi^{\lambda(0)}\right\|_2^2\ +\ |z(0)|^2\ +\ \int_0^\infty\left(\ 2\cS_\lambda(t')\ + \cS_z(t')\ \right)\ dt'.
\nn\end{equation}
Dividing by two and estimating the integral, using (\ref{SL1}), completes the proof of Statement D.
\begin{remark}
{\bf Generic data in a neighborhood of the origin:}\\ For the case of cubic nonlinearity, $\sigma=1$, the condition $|\alpha_{0}|^{2}+|\beta_{0}|^{2}\ll \|\phi^{\lambda_{0}}\|_{2}^{2}$ can be relaxed to a statement about generic (low energy) initial conditions satisfying $$|\alpha_{0}|^{2}+|\beta_{0}|^{2}\approx \|\phi^{\lambda_{0}}\|_{2}^2.$$ We impose the stronger condition in the present paper to simplify the treatment and to apply directly the results in \GW\ \cite{GaWe}. We refer to ~\cite{SW:04,MR1992875}. See also our remarks ~\ref{remark:remark3} and ~\ref{Remark} below.\\ \\
{\bf The generality of the nonlinearity in ~\eqref{eq:NLS}} \\ Our results hold not only for the focusing nonlinearity, i.e., $-|\psi|^{2\sigma}\psi$ in ~\eqref{eq:NLS}. In fact, all of the results in Theorems ~\ref{THM:MassTransfer} and ~\ref{THM:MainTheorem2} carry over to the general case $g|\psi|^{2\sigma}\psi,\ g\in \mathbb{R}\backslash\{0\}$ without difficulty. We restrict to the present case in order not to clutter our arguments by discussing various constants.
\end{remark}
\subsection{Relation to previous work}
Theorems \ref{THM:MassTransfer} and \ref{THM:MainTheorem2} are derived from a refinement of the analysis of \cite{GaWe} and a generalization to arbitrary nonlinearity parameter $\sigma\ge1$. In this subsection we outline the strategy.
The overall plan for proofs of asymptotic stability can be broken into two parts, motivated by a view of the soliton as an interaction between discrete and continuum modes:
\begin{itemize}
\item[{\bf Part 1}]:\
a) We seek a natural decomposition of the solution into a component evolving along the manifold of solitons and a component which is dispersive. However, since the linearization about the soliton may have neutral modes, non-decaying time-periodic states, we incorporate these degrees of freedom among the discrete degrees of freedom in the Ansatz. The dispersive components of the evolution lie in the subspace bi-orthogonal, in fact symplectic-orthogonal, to the discrete modes. The result is a {\it strongly-coupled} system governing the discrete degrees of freedom and the dispersive wave field, $R(t)$. Mathematically, we decompose the solution $\psi$ as in ~\eqref{eq:decom}, and by the orthogonality conditions ~\eqref{eq:Orthogonality} and ~\eqref{s-Rorthogonal} we derive equations for $\dot\lambda$, $\dot\gamma$, $z$ and $\vec{R}$. These are taken from ~\cite{GaWe} and displayed in Appendix ~\ref{sec:decom}.\\ \\
b) We solve explicitly for the leading order components of $R(t)$, which arise due to resonant forcing by new, nonlinearity-generated, discrete mode frequencies. To achieve this we find the leading order, that is, second order in $z$ and $\bar{z}$, contributions to $R(x,t)$. This expansion is presented in ~\eqref{eq:Rform}. \\ \\
c) This leading order behavior is substituted into the equations governing the discrete modes, leading to a (to leading order) closed equation for the discrete modes, implying estimates for $\dot\lambda$ and $\dot\gamma$. This is Proposition ~\ref{Prop:Majorants}.\\ \\
d) The latter is put into a normal form, via a finite sequence of near-identity changes of variables, in which the energy transfer mechanisms are made explicit. This is achieved via the introduction of $z\mapsto a_1(z,\bar{z}),\
a_2(z,\bar{z}),\ p(z,\bar{z})$ and $q(z,\bar{z})$ in Appendix ~\ref{sec:NormalFormExp}.
\item[{\bf Part 2}]:
The full coupled system is now of the following form:\\
a finite dimensional system of (normal form) ODEs, with non-resonant terms removed by near-identity changes of variables and with rapidly time-decaying corrections determined by the dispersive part, {\it weakly-coupled} to a dispersive PDE with rapidly decaying and/or oscillating source terms coming from the discrete components of the solution. The latter is essentially treatable by low-energy scattering methods.
In \GW\ ~\cite{GaWe} we proved that the neutral mode mass and $\lambda(t)$, which through $\|\phi^{\lambda(t)}\|_2^2$ controls the ground state mass, are governed by
\begin{equation}\label{eq:crudeLamb}
\frac{d}{dt} \lambda(t) =\ {\rm Rem}_\lambda(t),
\end{equation}
\begin{equation}\label{old-lam-z2}
\frac{d}{dt} |z(t)|^2 = - 2z^*\Gamma( z ,\bar{ z })z + {\rm Rem}_z(t),
\end{equation}
where $\Rem_\lambda(t)$ and $\Rem_z(t)$ satisfy an estimate of the form:
\begin{equation}\label{Rem-ests}
\begin{array}{lll}
|\ \Rem_{\lambda}(t) |\ &\lesssim& |z(t)|^4+\ \|\langle x\rangle^{-4}R(t)\|_{H^2}^2+\ \|R(t)\|_\infty^2+ |z(t)|\ \|\langle x\rangle^{-4}\tilde{R}(t)\|_{2}\\
& &\\
|\ \Rem_{z}(t)\ | &\lesssim& |z(t)|^5+|z(t)|\ \|\langle x\rangle^{-4}R(t)\|_{H^2}^2+|z(t)|\ \|R(t)\|_\infty^2+ |z(t)|^2\ \|\langle x\rangle^{-4}\tilde{R}(t)\|_{2}
\end{array}
\end{equation}
where $\tilde{R}$ is defined in ~\eqref{eq:difTildeR}, and for
$t\gg1$ we have
\begin{equation}
|z(t)|\sim t^{-\frac{1}{2}},\ \|\langle x\rangle^{-4}R(t)\|_{H^2}\sim t^{-1},\ \|R(t)\|_\infty\sim t^{-1},\ \|\langle x\rangle^{-4}\tilde{R}(t)\|_{2}\sim t^{-\frac{7}{5}}.
\nn\end{equation}
\end{itemize}
Since $\Rem_z(t)=\cO(t^{-2-\tau}),\ \tau>0$, $\Rem_z(t)$ is dominated by
the first term on the right hand side of \eqref{old-lam-z2}, which is $\cO(t^{-2})$ and strictly negative, by the Fermi Golden Rule resonance hypothesis {\bf (FGR)}.
Furthermore, since $\Rem_\lambda(t)$ is integrable in $t$, $\lambda(t)$ has a limit, $\lambda_\infty$.\\ \\
\section{Refinements of the analysis and outline of the proof}\label{sec:ProveMainTHM}
In view of the results of \GW, we focus on the refinements required. These concern
the terms $\mathcal{S}_{\lambda}$ and $\mathcal{S}_{z}$ in ~\eqref{eq:IncreaseLambda} and ~\eqref{eq:DecayZ} and their estimation in ~\eqref{SL1}, for the proofs of Theorems ~\ref{THM:MassTransfer} and ~\ref{THM:MainTheorem2}. In this section we derive $\mathcal{S}_{\lambda}$ and $\mathcal{S}_{z}$ and estimate them.
Technically, the main effort in the present paper is to improve the estimates for the various terms on the right hand side of ~\eqref{eq:crudeLamb} and ~\eqref{old-lam-z2}. It is relatively easy to improve the estimate for $\partial_{t}|z|^2$, since the term $-2z^* \Gamma(z,\bar{z})z$ already captures the decay of $|z|^2$. What is left is to prove that the term $\Rem_{z}$ is indeed a small correction in a suitable sense.
Improving the estimates of the terms on the right hand side of ~\eqref{eq:crudeLamb} is more involved. From ~\eqref{eq:crudeLamb} we cannot tell whether the parameter $\lambda$ is increasing or decreasing. For that purpose we expand the right hand side of ~\eqref{eq:crudeLamb} to {\it fourth order} in $z$ and $\bar{z}$ to identify a sign-definite leading term. This in turn is achieved by expanding the function $R$, or $\tilde{R}$, further to third order in $z$ and $\bar{z}$. For that purpose we define the third order terms in ~\eqref{eq:difRgeq3} and introduce the remainder $R_{\geq 4}$ in ~\eqref{dif:R4}.
We next present some precise estimates on $R_{\geq 4},\ z$ and $\dot\lambda$, which are defined in Appendices
~\ref{sec:decom}-~\ref{sec:NormalFormExp}.
To facilitate later discussions we define the constant $\delta_{\infty}$ by:
\begin{equation}\label{eq:defDelta}
\delta_{\infty}:=\|\phi^{\lambda_{\infty}}\|_{L^2}
=\mathcal{O}(|\lambda_{\infty}+e_0|^{1-\frac{1}{2\sigma}})=\mathcal{O}(\delta(\lambda(t)))\ \text{for any time}\ t
\end{equation}
where the last estimate follows from the fact that the soliton manifold is stable (see \cite{GaWe}). Recall the constant $\delta(\lambda)\equiv \delta$ defined and estimated in ~\eqref{eqn:perturb}, and recall that $\displaystyle\lim_{t\rightarrow\infty}\lambda(t)= \lambda_{\infty}$ in Theorem ~\ref{THM:MassTransfer}.
We have:
\begin{proposition}\label{prop:useful}
Suppose that $\frac{|z_{0}|}{\delta_{\infty}}\ll 1$ for $\sigma=1$, and that $|z_{0}|\leq\delta_{\infty}^{C(\sigma)}$ for $\sigma>1$, where $C(\sigma)$ is a sufficiently large constant. Then the following results hold: there exists a constant $C>0$ such that for any time $t\geq 0$
\begin{equation}\label{eq:uppZ}
|z(t)|\leq\ C(|z_{0}|^{-2}+\delta_{\infty}^{4\sigma-2}t)^{-\frac{1}{2}};
\end{equation}
if $\sigma=1$ then
\begin{equation}\label{eq:estR4}
\|\langle x\rangle^{-4 }R_{\geq 4}\|_{2}\lesssim |z_{0}|^{2}(1+t)^{-\frac{3}{2}}+\delta_{\infty}|z_{0}|^{2}|z(t)|^2;
\end{equation}
\begin{equation}\label{eq:estJunk}
|\dot\lambda| \lesssim \delta_{\infty}\ |z(t)|^{4}+\delta_{\infty}^2 |z_0|^2 |z(t)|^3+\delta_{\infty}|z_0|^{4}(1+t)^{-3}+\delta_{\infty}|z_0|^2 |z(t)|(1+t)^{-\frac{3}{2}},
\end{equation}
and if $\sigma>1$ then
\begin{equation}
\|\langle x\rangle^{-4 }R_{\geq 4}\|_{2}\lesssim |z_{0}|^{2}(1+t)^{-\frac{3}{2}}+[|z_{0}|^{2}+|z_{0}|^{2\sigma-1}]|z(t)|^2,
\end{equation}
\begin{equation}
|\dot\lambda |\lesssim |z|^{2\sigma+1}+|z(t)|^{4}+\delta_{\infty}|z_0|^2 |z(t)|^3+|z_0|^4 (1+t)^{-3}+|z_0|^2 |z(t)|(1+t)^{-\frac{3}{2}}.
\end{equation}
\end{proposition}
This proposition will be formulated as different parts of Propositions ~\ref{Prop:KeyEst} and ~\ref{Prop:Majorants} in Appendix ~\ref{SEC:DetailInfo}.\
In the next two subsections we find and estimate the functions $S_{z}$ and $S_{\lambda}$ of ~\eqref{eq:IncreaseLambda} and ~\eqref{eq:DecayZ}.
\subsection{Definition of $S_{z}$ and its estimate}
In this part we define and estimate the function $S_{z}$ in ~\eqref{eq:DecayZ}.
It was proved in ~\cite{GaWe}, p. 293 (and can also be derived from ~\eqref{eq:z1} and ~\eqref{eq:z2}) that $z$ satisfies the equation
\begin{align}\label{eq:ZNequation}
\partial_{t}z+iE(\lambda)z=-\Gamma(z,\bar{z})z+\Lambda(z,\bar{z})z+\mathcal{K}
\end{align} where
$\Gamma(z,\bar{z})$ is positive definite, $\Lambda(z,\bar{z})$ is skew symmetric, and
$\mathcal{K}=(\mathcal{K}_{1},\cdots, \mathcal{K}_{N})^{T}$ is defined as $$
\begin{array}{lll}
\mathcal{K}_n&:=&-[\partial_{t}p_{n}+iE(\lambda)\displaystyle\sum_{k+l=2,3}(k-l)P^{(n)}_{k,l}]-i [\partial_{t}q_{n}+iE(\lambda)\displaystyle\sum_{k+l=2,3}(k-l)Q^{(n)}_{k,l}]\\
& &-\left< JN(\vec{R},p,z)-\displaystyle\sum_{m+n=2,3} JN_{m,n},
\left(\begin{array}{lll}\eta_{n}\\-i\xi_{n}\end{array}\right)\right>+\Upsilon_{1,1}[\left< q\cdot \eta, \eta_{n}\right>-\left< i p\cdot\xi, \xi_{n}\right>]\\
& &+[\dot\gamma-\Upsilon_{1,1}][\langle (\beta+q)\cdot\eta, \eta_{n}\rangle-i\langle (\alpha+p)\cdot\xi, \xi_{n}\rangle]\\
& &\\
& &-\dot\lambda[a_1\langle \partial_{\lambda}^{2}\phi^\lambda,\eta_n\rangle+\langle (\alpha+p)\cdot\partial_{\lambda}\xi,\eta_n\rangle+ia_2\langle \partial_{\lambda}\phi^\lambda,\xi_n\rangle+i\langle (\beta+q)\cdot\partial_{\lambda}\eta,\xi_n\rangle]\\
& &+\left\langle \vec{R},\dot\lambda \left(
\begin{array}{lll}
\partial_{\lambda}\eta_{n}\\
-i\partial_{\lambda}\xi_{n}
\end{array}\right)-\dot\gamma \left(
\begin{array}{lll}
i\xi_n\\
\eta_n
\end{array}
\right)
\right\rangle.
\end{array}
$$
Recall that $|z|^2$ measures the neutral mode mass. By direct computation we find
$$
\begin{array}{lll}
\frac{d}{dt}|z|^2&=&-2 z^*\Gamma(z,\bar{z})z+2Re z^*\cdot \mathcal{K}\ =\ -2 z^*\Gamma_0(z,\bar{z})z+S_{z}
\end{array}
$$ with the function $S_z$ defined by
\begin{equation}\label{eq:defSz}
S_{z}:=-2 z^*\Gamma(z,\bar{z})z+ 2 z^*\Gamma_0(z,\bar{z})z+2 Re z^*\cdot \mathcal{K}.
\end{equation}
We now estimate different terms on the right hand side of ~\eqref{eq:defSz}.
\begin{lemma}\label{LM:Junkz}
For $\sigma\geq 1$
\begin{equation}\label{eq:approxPos}
z^* \Gamma(z,\bar{z}) z=z^* \Gamma_0(z,\bar{z}) z+\mathcal{O}(\delta_{\infty}^{4\sigma-1}|z|^4).
\end{equation}
If $\sigma=1$ then
\begin{equation}\label{eq:ref1}
|\mathcal{K}|\lesssim \delta_{\infty} |z(t)|^{4}+\delta_{\infty}\|\langle x\rangle^{-4 }R_{\geq 4}\|_{2}^{2}+\delta_{\infty}|z(t)|\|\langle x\rangle^{-4 }R_{\geq 4}\|_{2};
\end{equation}
if $\sigma>1$ then
\begin{equation}\label{eq:ref2}
|\mathcal{K}|\lesssim |z(t)|^{2\sigma+1}+|z(t)|^{4}+\|\langle x\rangle^{-4 } R_{\geq 4}\|_{2}^{2}+|z(t)|\|\langle x\rangle^{-4 }R_{\geq 4}\|_{2}.
\end{equation}
\end{lemma}
Equation ~\eqref{eq:approxPos} will be proved in Appendix ~\ref{sec:approxPos}; ~\eqref{eq:ref1} and ~\eqref{eq:ref2} will be incorporated into Proposition ~\ref{Prop:Majorants}.
By the above estimates we have
\begin{theorem}\label{THM:Zequation}
\begin{equation}\label{eq:estSz}
\int_{0}^{\infty}S_{z}(s)\ ds=o(|z_0|^2).
\end{equation}
\end{theorem}
\begin{proof}
The following two estimates together with Lemma ~\ref{LM:Junkz} are sufficient to prove the theorem:
\begin{equation}\label{eq:estzK}
\int_{0}^{\infty} |z|(s)|\mathcal{K}(s)|\ ds=o(|z_0|^2)
\end{equation}
and
\begin{equation}\label{eq:z02}
\int_{0}^{\infty}\delta_{\infty}^{4\sigma-2} |z|^4(s)\ ds\leq C|z_0|^2.
\end{equation}
We next focus on proving the two inequalities \eqref{eq:estzK} and \eqref{eq:z02}.
The proof of ~\eqref{eq:z02} is relatively easy; it follows by applying the estimate of $z$ in ~\eqref{eq:uppZ} and a direct computation.
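For the reader's convenience we record this computation: by ~\eqref{eq:uppZ},
$$
\int_{0}^{\infty}\delta_{\infty}^{4\sigma-2}|z|^{4}(s)\ ds\leq C^{4}\int_{0}^{\infty}\delta_{\infty}^{4\sigma-2}\left(|z_{0}|^{-2}+\delta_{\infty}^{4\sigma-2}s\right)^{-2}\ ds=C^{4}\left[-\left(|z_{0}|^{-2}+\delta_{\infty}^{4\sigma-2}s\right)^{-1}\right]_{0}^{\infty}=C^{4}|z_{0}|^{2}.
$$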
We now turn to ~\eqref{eq:estzK}.
For $\sigma=1$ we use ~\eqref{eq:ref1} and ~\eqref{eq:estR4} to obtain
$$|\mathcal{K}|\lesssim \delta_{\infty}|z|^4+ \delta_{\infty}|z_0|^2 |z| (1+t)^{-\frac{3}{2}}+\delta_{\infty}^2 |z|^3 |z_0|^2 +|z_0|^4\delta_{\infty} (1+t)^{-3}.$$
Together with the assumption on the initial condition, $|z_{0}|\ll \delta_{\infty}=\mathcal{O}(\delta_{0})$ (see ~\eqref{eq:defDelta}), and ~\eqref{eq:uppZ} we have
\begin{equation}\label{eq:intRemaind}
\int_{0}^{\infty}|z||\mathcal{K}|(s)\ ds=o(|z_{0}|^2).
\end{equation}
\nit For the case $\sigma>1$, the estimate is easier to obtain by applying the stronger condition $|z_0|\leq \mathcal{O}(\delta_{0}^{C(\sigma)})=\mathcal{O}(\delta_{\infty}^{C(\sigma)})$ with $C(\sigma)$ sufficiently large.
This completes the proof.
\end{proof}
\subsection{Definition of $S_{\lambda}$ and its estimate}
After expanding the dispersive part $\vec{R}$ into the third order in $z$ and $\bar{z}$, we derive in Appendix ~\ref{sec:deriv} an equation for $\frac{d}{dt}\|\phi^{\lambda}\|_2^2$:
\begin{equation}\label{eq:DetailLambda}
\frac{d}{dt}\|\phi^{\lambda}\|_2^2
=- z^* \Gamma_0(z,\bar{z})z+S_{\lambda}
\end{equation}
with $S_{\lambda}$ defined as
\begin{equation}
S_{\lambda}:=
2\Psi\ +\ \left(\ 2\Pi_{2,2} + z^* \Gamma_0(z,\bar{z})z\ \right)
\ +\ 2\sum_{m+n=4,5;\ m\ne n}
\Pi_{m,n} \nn\end{equation}
where $\displaystyle\sum_{m+n=4,5}\Pi_{m,n}$ is a collection of fourth- and fifth-order terms
$$
\begin{array}{lll}
\displaystyle\sum_{m+n=4,5}\Pi_{m,n}&:=
&-\left\langle \displaystyle\sum_{m+n=4}JN_{m,n}\ ,\ \left(
\begin{array}{lll}
\phi^{\lambda}\\
0
\end{array}\right)
\right\rangle+\Upsilon_{1,1}\left\langle \displaystyle\sum_{m+n=2,3}R_{m,n},\left(
\begin{array}{lll}
0\\
\phi^{\lambda}
\end{array}\right)
\right\rangle+\Upsilon_{1,1}\left\langle q\cdot \eta,\phi^{\lambda}\right\rangle\\
&&\nn\\
& &+\langle \phi^{\lambda},\partial_{\lambda}\phi^{\lambda}\rangle\ \left[\partial_{z}a_1\cdot Z_{2,1}+\partial_{\bar{z}}a_1\cdot\overline{Z_{2,1}}\right],
\end{array}
$$
and $Z_{2,1}:=-\Gamma(z,\bar{z})z+\Lambda(z,\bar{z})z$ with the latter defined in ~\eqref{new-nf1};
and $\Psi$ is defined as
$$
\begin{array}{lll}
\Psi&:=&(\dot\gamma -\Upsilon_{1,1})\left[\ \left\langle \vec{R},
\left(
\begin{array}{lll}
0\\
\phi^{\lambda}
\end{array}
\right)
\right\rangle+\left\langle (\beta+q)\cdot\eta,\phi^{\lambda}\right\rangle\ \right]+
\Upsilon_{1,1}\left\langle R_{\geq 4},\left(
\begin{array}{lll}
0\\
\phi^{\lambda}
\end{array}
\right)\right\rangle\\
& &\\
& &+\dot\lambda\left\langle \vec{R},
\left(
\begin{array}{lll}
\partial_{\lambda}\phi^{\lambda}\\
0
\end{array}
\right)
\right\rangle-\dot\lambda a_{1}\langle \partial_{\lambda}^{2}\phi^{\lambda},\phi^\lambda\rangle-\dot\lambda (\alpha+p)\langle \partial_{\lambda}\xi,\phi^{\lambda}\rangle\\
& &+\langle \phi^{\lambda},\partial_{\lambda}\phi^{\lambda}\rangle[\partial_{t}a_1+iE(\lambda)\displaystyle
\sum_{m+n=2,3}(m-n)A_{m,n}^{(1)}-\partial_{z}a_1\cdot Z_{2,1}-\partial_{\bar{z}}a_1\cdot\overline{Z_{2,1}}]\\
& &+\left\langle JN-\displaystyle\sum_{m+n=2}^{4}JN_{m,n}\ ,\ \left(
\begin{array}{lll}
\phi^{\lambda}\\
0
\end{array}
\right)\right\rangle.
\end{array}
$$ Here we used the convention made in ~\eqref{eq:convention} and the definitions of Appendix ~\ref{sec:NormalFormExp}.
To control these terms in $S_{\lambda}$ we use the following results (recall that $\delta_{\infty}=\|\phi^{\lambda_{\infty}}\|_2$, defined in ~\eqref{eq:defDelta}):
\begin{lemma}\label{LM:junk}
\begin{align}\label{eq:KeyTermRef}
|\Psi| &\lesssim |z|\delta_{\infty}^{2\sigma-1}\|\langle x\rangle^{-4}R_{\geq 4}\|_{2}+\|\langle x\rangle^{-4}R_{\geq 4}\|_{2}^2+\delta_{\infty}^{2\sigma-1}|z|^5\\ &\nn\\
\label{eq:KeyTerm}
2\Pi_{2,2}+ z^* \Gamma_0(z,\bar{z})z &= \mathcal{O}(\delta_{\infty}^{4\sigma-1}|z|^4),
\\ &\nn\\
\label{eq:small}
\sum_{
\begin{subarray}{lll}
m+n=4,5,\\
m\not=n
\end{subarray}
}\int_0^\infty \Pi_{m,n}(s)\ ds &\lesssim \sum_{
\begin{subarray}{lll}
m+n=4,5,\\
m\not=n
\end{subarray}
}\int_0^\infty |\partial_{\lambda}\Pi_{m,n}||\dot\lambda|(s)+|\partial_{z}\Pi_{m,n}||\dot{z}+iE(\lambda)z|(s)\ ds\\ &+\ o(|z_0|^2) \nn
\end{align}
\end{lemma}
\nit The bound ~\eqref{eq:KeyTermRef} will be proved in Appendix ~\ref{SEC:DetailInfo}, ~\eqref{eq:small} in Section ~\ref{sec:periodic} and ~\eqref{eq:KeyTerm} in Section ~\ref{sec:compare}. We now briefly present the ideas in the proof.
\begin{itemize}
\item[(1)] $\Psi$ is defined in terms of the functions $\dot\lambda,\ \dot\gamma$, $z$ and $\vec{R}$. These satisfy a coupled system, which must be put in matrix form and decoupled. In the end, we bound the functions $\dot\lambda$ and $\dot\gamma$ by functions of $\vec{R}$ (or $R_{\geq 4}$) and $z$.
\item[(2)] All the terms in ~\eqref{eq:KeyTerm} are of order $|z|^4$ in $z$ and $\bar{z}.$ What distinguishes them is the size of their coefficients. These depend smoothly on the functions $\phi^{\lambda}, \ \partial_{\lambda}\phi^\lambda,\ \xi,\ \eta$, which in turn depend smoothly on the small parameter $\delta(\lambda)=\mathcal{O}(\delta_{\infty})$; see Proposition ~\ref{Prop:Parameters}. The estimate ~\eqref{eq:KeyTerm} follows from a perturbation expansion in the parameter $\delta(\lambda).$
\item[(3)] For ~\eqref{eq:small} the important observation is that, if $m\ne n$, then the function $\Pi_{m,n}$ is a sum of functions of the form $C(\lambda)z^{m}\bar{z}^n=C(\lambda)\prod_{k}z^{m_{k}}_{k}\prod_{l}\bar{z}^{n_{l}}_{l}$ with $m=\sum_{k}m_k,\ n=\sum_{l}n_{l}$. These are ``almost periodic'' with period $2\pi(E(\lambda)(m-n))^{-1}\ne0$, since $z$ satisfies the equation $\dot{z}=-iE(\lambda)z+\cdots$. This non-trivial oscillation enables us to integrate by parts in the variable $s$ to gain smallness. The term $o(|z_0|^2)$ in ~\eqref{eq:small} comes from a boundary term obtained in this way.
\end{itemize}
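The oscillation mechanism in (3) can be sketched schematically as follows (suppressing the vector indices and writing $r$ for the higher-order terms in the equation for $z$): since $\dot{z}=-iE(\lambda)z+r$, we have $\frac{d}{ds}\left(z^{m}\bar{z}^{n}\right)=-i(m-n)E(\lambda)z^{m}\bar{z}^{n}+\mathcal{O}(|z|^{m+n-1}|r|)$, and hence, for $m\ne n$,
$$
\int_{0}^{\infty}C(\lambda)z^{m}\bar{z}^{n}\ ds=\left[\frac{iC(\lambda)}{(m-n)E(\lambda)}z^{m}\bar{z}^{n}\right]_{0}^{\infty}+\int_{0}^{\infty}\mathcal{O}\left(|\dot\lambda|\,|z|^{m+n}+|z|^{m+n-1}|r|\right)\ ds,
$$
with boundary term $\mathcal{O}(|z_{0}|^{m+n})=o(|z_0|^2)$ for $m+n\geq 4$.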
Based on the estimates in Lemma ~\ref{LM:junk} we will prove
\begin{theorem}\label{THM:KeyTerm}
$S_{\lambda}$ satisfies the estimate in ~\eqref{eq:IncreaseLambda}, i.e.
$$\int_{0}^{\infty} S_{\lambda}(s)\ ds=o(|z_0|^2).$$
\end{theorem}
\begin{proof}
The result follows directly from Lemma ~\ref{LM:junk} and the following two estimates:
\begin{equation}\label{eq:KeyTerm2}
\int_{0}^{\infty}|\Psi|(s)\ ds=o(|z_0|^2);
\end{equation}
\begin{equation}\label{eq:KeyTerm3}
\sum_{
\begin{subarray}{lll}
m+n=4,5,\\
m\not=n
\end{subarray}
}\int_0^\infty |\partial_{\lambda}\Pi_{m,n}||\dot\lambda|(s)+|\partial_{z}\Pi_{m,n}||\dot{z}+iE(\lambda)z|(s)\ ds=o(|z_0|^2)\ .
\end{equation}
We next prove estimates \eqref{eq:KeyTerm2} and \eqref{eq:KeyTerm3}.
In the proof we consider the case $\sigma=1$. The case $\sigma>1$ is different, but easier, due to the stronger condition $|z_{0}|\leq\delta_{\infty}^{C(\sigma)}$ for some sufficiently large $C(\sigma)$; hence we omit the details.
We start with ~\eqref{eq:KeyTerm2}, estimating in turn the three terms on the right hand side of the bound for $\Psi$ in ~\eqref{eq:KeyTermRef}.
By applying the estimates for $z$ in ~\eqref{eq:uppZ}
\begin{equation}\label{eq:estPsi}
\int_{0}^{\infty}\delta_{\infty}|z|^5(s)\ ds\lesssim \int_{0}^{\infty}\delta_{\infty}(|z_0|^{-2}+\delta_\infty^2 s)^{-\frac{5}{2}}\ ds=\frac{2}{3}\delta_{\infty}^{-1}|z_0|^3=o(|z_0|^2)
\end{equation} where the assumption on the initial condition $|z_0|\ll \delta_{0}=\cO(\delta_{\infty})$ was used.
By the estimate of $R_{\geq 4}$ in ~\eqref{eq:estR4} and that of $|z(t)|$ in ~\eqref{eq:uppZ},
$$
\begin{array}{lll}
& &\int_{0}^{\infty} \delta_{\infty}|z(s)|\|\langle x\rangle^{-4 }R_{\geq 4}(s)\|_{2}\ ds\\
&\lesssim & \delta_{\infty}|z_{0}|^{2}\int_{0}^{\infty}(1+s)^{-\frac{3}{2}}(|z_{0}|^{-2}+\delta_{\infty}^{2} s)^{-\frac{1}{2}}\ ds+
\delta_{\infty}^2|z_{0}|^{2}\int_{0}^{\infty}(|z_{0}|^{-2}+\delta_{\infty}^{2} s)^{-\frac{3}{2}}\ ds\\
&=&o(|z_0|^2).
\end{array}
$$
The third term can be similarly estimated.
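Concretely, by ~\eqref{eq:estR4}, ~\eqref{eq:uppZ} and the elementary inequality $(a+b)^{2}\lesssim a^{2}+b^{2}$,
$$
\int_{0}^{\infty}\|\langle x\rangle^{-4}R_{\geq 4}(s)\|_{2}^{2}\ ds\lesssim |z_{0}|^{4}\int_{0}^{\infty}(1+s)^{-3}\ ds+\delta_{\infty}^{2}|z_{0}|^{4}\int_{0}^{\infty}(|z_{0}|^{-2}+\delta_{\infty}^{2}s)^{-2}\ ds\lesssim |z_{0}|^{4}+|z_{0}|^{6}=o(|z_0|^2).
$$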
Assembling the above estimates yields $$\int_{0}^{\infty}|\Psi(t)|\ dt=o(|z_0|^2).$$
To prove ~\eqref{eq:KeyTerm3} we use the equations for $\dot{z}$ and $\dot\lambda$ in ~\eqref{eq:ZNequation} and ~\eqref{eq:estJunk} to find that if $m+n=4,5$ and $m\not=n$ then
$$|\partial_{\lambda}\Pi_{m,n}||\dot\lambda(s)|+|\partial_{z}\Pi_{m,n}||\dot{z}+iE(\lambda)z|(s)
\lesssim |z||\dot\lambda|+ |z|^6+|z|^3|\mathcal{K}|.
$$ Using the estimates in ~\eqref{eq:ref1} and ~\eqref{eq:ref2} for $\mathcal{K}$, the estimate ~\eqref{eq:estJunk}, and techniques similar to those above, we prove ~\eqref{eq:KeyTerm3}. This is straightforward but tedious; hence we omit the details.
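As a sample of the omitted computations, the contribution of the term $|z|^6$ is controlled, via ~\eqref{eq:uppZ} and the assumption $|z_0|\ll\delta_{\infty}$, by
$$
\int_{0}^{\infty}|z|^{6}(s)\ ds\lesssim \int_{0}^{\infty}(|z_{0}|^{-2}+\delta_{\infty}^{2}s)^{-3}\ ds=\frac{|z_{0}|^{4}}{2\delta_{\infty}^{2}}=o(|z_0|^2).
$$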
\end{proof}
\begin{remark}\label{remark:remark3}
In the last step of ~\eqref{eq:estPsi} we used $|z_0|\ll \delta_\infty$ to control $\int_{0}^{\infty}\delta_{\infty}|z|^5(s)\ ds.$ If $\sigma=1$ this can be relaxed to $|z_{0}|\leq \|\phi^{\lambda_0}\|_2=\mathcal{O}( \delta_{\infty})$ by inspecting closely the terms forming $\delta_{\infty}|z|^5$. This term is actually part of $\left\langle JN_{\geq 5},\left(
\begin{array}{lll}
\phi^{\lambda}\\
0
\end{array}
\right)\right\rangle $, and can be written as $\displaystyle\sum_{m+n=5}K_{m,n}$ for some properly defined $K_{m,n}$. To evaluate $\displaystyle\sum_{m+n=5}\int_{0}^{\infty} K_{m,n}(s)\ ds$ we observe that the $K_{m,n},\ m+n=5,$ are ``almost periodic'' like the terms $\Pi_{m,n}$ of ~\eqref{eq:small}. Hence, by integrating by parts as in the proof of ~\eqref{eq:small}, it is easy to obtain the desired estimate $$\sum_{m+n=5}\int_{0}^{\infty} K_{m,n}(s)\ ds=o(|z_0|^2).$$
Note that the terms $K_{m,n},\ m+n=5,$ may not be well defined if $\sigma\not\in \mathbb{N}.$
\end{remark}
\section{Extension to the case of nearly degenerate neutral modes}\label{SEC:summary}
In ~\cite{GaWe} and the main part of the present paper we have proved that if the neutral modes are degenerate and their eigenvalues are sufficiently close to the essential spectrum, then the ground state is asymptotically stable and its mass grows by half the mass of the neutral modes.
In what follows we extend the results to the case where the neutral modes are nearly degenerate, {\it i.e.}, have a cluster of approximately equal eigenfrequencies. For technical simplicity, we consider the case of cubic nonlinearity, $\sigma=1$. The main result is Theorem ~\ref{THM:main3} below. The key ideas of the proof will be presented after its statement.
\subsection{New assumptions on the spectrum and definition of FGR}
As in Subsection ~\ref{Vassumptions}
we assume that the linear operator $-\Delta+V$ has
the following properties:
\begin{enumerate}
\item[(V1)]
$V$ is real valued and decays sufficiently rapidly, {\it e.g.} exponentially, as $|x|$ tends to infinity.\\
\item[(V2)] The linear operator $-\Delta+V$ has $N+1$ (counting multiplicity if degenerate) eigenvalues $e_{0},\ e_{k},\ k=1,2,\cdots,N,$ with $e_{0}<e_{k}$:\\
$e_{0}$ is the lowest eigenvalue, with
ground state $\phi_{lin}>0$; the eigenvalues $\{e_{k}\}_{k=1}^{N}$ are possibly degenerate,
with eigenvectors
$\xi_{1}^{lin},\xi_{2}^{lin},\cdots,\xi_{N}^{lin}.$
\item[(V3)] Moreover, for any $k=1,2,\cdots, N$
we assume
\begin{equation} 2e_k-e_0>0.
\end{equation}
\end{enumerate}
Then the nonlinear equation ~\eqref{eq:NLS} admits a family of ground state solutions $e^{i\lambda t}\phi^{\lambda}$ with the properties described in Proposition ~\ref{bif-of-gs}. The linearized operator about the ground states, $L(\lambda)$, takes the same form as in ~\eqref{eq:opera}. The excited states of $-\Delta+V$ bifurcate to the neutral modes $\left(
\begin{array}{lll}
\xi_k\\
\pm i\eta_k
\end{array}
\right)$ of $L(\lambda)$ with eigenvalues $\pm i E_{k}(\lambda),\ k=1,\cdots, N.$ The ground states $\phi^{\lambda}$ and neutral modes satisfy all the estimates in Lemma ~\ref{LM:NearLinear} and the estimates ~\eqref{eq:LambdaPhi2}-~\eqref{eq:asympto}.
Assumption {\bf (SA)} on the spectrum of $L(\lambda)$ is generalized, in the case where near-degeneracy is admitted, as follows:
\begin{enumerate}
\item[{\bf (SA)}] The discrete spectrum of the linearized operator $L(\lambda)$ consists of:
the eigenvalue $0$
with
generalized eigenvectors $\left(
\begin{array}{lll}
0\\
\phi^{\lambda}
\end{array}
\right)$ and $\left(
\begin{array}{lll}
\partial_{\lambda}\phi^{\lambda}\\
0
\end{array}
\right)$ and eigenvalues $\pm iE_k (\lambda),\ E_k(\lambda)>0,\ k=1,2,\cdots,N$.
\end{enumerate}
A consequence of nonzero neutral mode frequency-differences is a slightly different system for the neutral mode amplitudes, $z(t)$. The solution $\psi(t)$ is decomposed as in ~\eqref{eq:decom}. Following the same procedure as in ~\cite{GaWe}, we derive
\begin{equation}
\partial_{t}z =-iE(\lambda) z -\Gamma( z ,\bar{ z }) z +\Lambda( z ,\bar{ z })
z\ +\cdots
\end{equation}
where $E(\lambda)={\rm Diag}[E_{1}(\lambda),\cdots, E_{N}(\lambda)]$ is a diagonal $N\times N$ matrix, $\Gamma$ is symmetric and $\Lambda$ is skew symmetric.
We now describe the matrix $\Gamma$, which takes a different form from the degenerate case:
Define vector functions $G(k,m),\ k,m=1,2,\cdots, N,$ as
\begin{equation}
G(k,m):=\left(
\begin{array}{lll}
B(k,m)\\
D(k,m)
\end{array}
\right)
\end{equation} with the functions $B(k,m)$ and $D(k,m)$ defined as $$\begin{array}{lll}
B(k,m)&:=&-i\phi^{\lambda} \ \ \left[\ z_m \xi_{m}\ \eta_{k}+z_m\eta_m\ \xi_{k}\ \right]\ , \\
D(k,m)&:=&-\phi^{\lambda}
\left[\ 3z_m\xi_m\ \xi_{k}-z_m\eta_m\ \eta_{k}\ \right]\ .
\end{array}$$
In terms of the column 2-vector $G(k,m)$, we define
an $N \times N$ matrix $Z(z,\bar{z})$ as
\begin{equation}
Z(z,\bar{z})=(Z^{(k,l)}(z,\bar{z})),\ \ 1\le k,l\le N
\end{equation} and
\begin{equation}
Z^{(k,l)}\ =\ -\left\langle
\sum_{m=1}^{N}(L(\lambda)+iE_l (\lambda)+iE_{m}(\lambda)-0)^{-1}P_{c}G(l,m), iJP_c\sum_{m=1}^{N}G(k,m)\right\rangle
\end{equation}
Finally, we define $\Gamma(z,\bar{z})$ as follows:
\begin{equation}
\Gamma(z,\bar{z})\ :=\ \frac{1}{2}[Z(z,\bar{z})+Z^{*}(z,\bar{z})].
\end{equation}
We shall require the following Fermi Golden Rule hypothesis. Let $P_{c}^{lin}$ be the projection onto the essential spectrum of $-\Delta+V$; then
\begin{enumerate}
\item[{\bf (FGR)}] We assume there exists a constant $C>0$ such that $$-Re\langle i[-\Delta+V+\lambda-2E_1(\lambda)-i0]^{-1}P_{c}^{lin}\phi_{lin}(z\cdot \xi^{lin})^{2},\phi_{lin}(z\cdot \xi^{lin})^{2}\rangle\ge C|z|^4$$ for any $z\in \mathbb{C}^{N}.$
\end{enumerate} The assumption {\bf (FGR)} implies that there exist constants $C_1>0$ and $\delta_0>0$ such that if $\sup_{k,l}|E_{k}(\lambda)-E_{l}(\lambda)|\leq \delta_{0}$ then for any $z\in \mathbb{C}^{N}$
\begin{equation}\label{eq:FGR2}
z^*\ \Gamma(z,\bar{z})\ z\geq C_1\|\phi^{\lambda}\|_{\infty}^{2} |z|^{4}.
\end{equation}
We now introduce the leading order contribution to $\Gamma(z,\bar{z})$.
For each fixed $z$ we use the fact that $|\lambda+e_0|$ is small, together with ~\eqref{eqn:perturb} and ~\eqref{eq:GoToNear}, to find that the leading term in $z^*\ \Gamma(z,\bar{z})\ z$ is $z^* \Gamma_0(z,\bar{z})\ z$, defined as
\begin{equation}\label{eq:Gamma2}
\begin{array}{lll}
& &z^* \Gamma_0(z,\bar{z})\ z\\
&=&-
8\delta^{2}(\lambda)\Re\langle i\displaystyle\sum_{m,n\leq N}[-\Delta+V+\lambda-E_m(\lambda)-E_n(\lambda)-i0]^{-1}P_{c}^{lin}\phi^{lin}(z_m \xi_{m}^{lin})(z_n \xi_n^{lin}),
\phi^{lin}(z\cdot\xi^{lin})^{2}\rangle.
\end{array}
\end{equation}
\subsection{Main Theorem in nearly degenerate case and strategy of proof}
Recall that we only consider the case $\sigma=1$, i.e. the cubic nonlinearity.
\begin{theorem}\label{THM:main3}
There exists a constant $\delta_0$ independent of the initial condition $\psi_0$ of ~\eqref{eq:NLS} such that if $\displaystyle\max_{k,l}|E_k(\lambda)-E_l(\lambda)|\leq \delta_0$ then all the results in Theorem ~\ref{THM:MassTransfer} hold with $ z^* \Gamma_0(z,\bar{z})\ z$ replaced by the expression in ~\eqref{eq:Gamma2}. Moreover all the remainder estimates in ~\eqref{eq:IncreaseLambda}-~\eqref{eq:Mass} hold and are independent of the size of $\delta_0.$
\end{theorem}
Next we show how to recover all the estimates. To simplify the treatment we consider only the case $N=2$, with eigenfrequencies $E_1(\lambda)$ and $E_2(\lambda).$
There are some differences between the degenerate and the nearly degenerate cases. The most important is that certain terms, which previously vanished identically, now need to be estimated. These include, for example, $\langle ImN_{1,1},\phi^{\lambda}\rangle$, which was proved to be zero in ~\cite{GaWe}, Lemma 9.4, p. 291, but which, as we see below, is non-zero in the nearly degenerate case. To treat such terms, the key observation is that they carry a factor $E_1(\lambda)-E_2(\lambda)$ in their coefficients, enabling us to re-express $[E_1(\lambda)-E_2(\lambda)]z_1 \bar{z}_2$ as $-i\frac{d}{dt}(z_1 \bar{z}_2)+o(|z|^4)$. Thus, these terms can be removed via integration by parts and a redefinition of the normal form transformation.
\subsection{Normal Form Transformation and Asymptotic Stability of Ground States}
We decompose the initial condition in exactly the same way as in ~\eqref{eq:decom}. All equations ~\eqref{eq:decom}-~\eqref{eq:Gamma11}, ~\eqref{eq:gamma} and ~\eqref{eq:lambda} hold. The equations for $\dot{z}$ are slightly different, since the $z_j$ each have a different associated frequency. Consequently, instead of ~\eqref{eq:z1} and ~\eqref{eq:z2} we have $$\partial_{t}(\alpha_{n}+p_{n})-E_n(\lambda)(\beta_{n}+q_{n})+\cdots,\ \ \
\partial_{t}(\beta_{n}+q_{n})+E_n(\lambda)(\alpha_{n}+p_{n})+\cdots,$$
requiring a different near-identity / normal form transformation.
To illustrate the main difference in the calculation we study the equation for $\dot\lambda.$ Recall that the function $\dot\lambda$ satisfies the equation $$\dot\lambda+\partial_{t}a_1=-\frac{1}{\langle \phi^{\lambda},\partial_{\lambda}\phi^{\lambda}\rangle}\langle ImN(\vec{R},z),\phi^{\lambda}\rangle+\cdots$$ and we want to remove the second and third order terms in $z$ and $\bar{z}$ from the equation by defining a polynomial $a_1$ in $z$ and $\bar{z}$: $$a_1:=\sum_{m+n=2,3}A_{m,n}^{(1)}.$$ In the degenerate case we set $A_{1,1}^{(1)}=0$ (see ~\eqref{eq:pkmn}) due to the fact that $\langle ImN_{1,1},\phi^{\lambda}\rangle=0.$ When the latter no longer holds, $A_{1,1}^{(1)}$ has to be redefined. Following the steps in ~\cite{GaWe}, p. 291, we use the fact that $\left(
\begin{array}{lll}
\xi_{n}\\
i\eta_{n}
\end{array}
\right),\ n=1,2,$ are eigenvectors of $L(\lambda)$ to obtain
\begin{equation}\label{eq:IMN11}
\begin{array}{lll}
\langle ImN_{1,1},\phi^{\lambda}\rangle&:=&\frac{1}{2i}\displaystyle\sum_{n=1}^{2}\sum_{m=1}^{2}\bar{z}_{n}z_{m}\int (\phi^{\lambda})^2 (\xi_n\eta_m-\xi_m\eta_n)\\
&=&\frac{1}{4i}\displaystyle\sum_{n=1}^{2}\sum_{m=1}^{2}\bar{z}_{n}z_{m}[\langle (L_{-}-L_{+})\xi_n,\eta_m\rangle-\langle (L_{-}-L_{+})\xi_m,\eta_{n}\rangle]\\
&=&\frac{1}{4i}[E_1(\lambda)-E_2(\lambda)][z_1\bar{z}_2-z_2\bar{z}_1][\langle \eta_1,\eta_2\rangle+\langle \xi_1,\xi_2\rangle].
\end{array}
\end{equation}
To remove ~\eqref{eq:IMN11} from the equation of $\dot\lambda$ we define
\begin{align}\label{eq:A11}
A_{1,1}^{(1)}&:=-\frac{1}{4\langle \phi^{\lambda},\partial_{\lambda}\phi^{\lambda}\rangle}[z_1\bar{z}_2+z_2\bar{z}_1][\langle \eta_1,\eta_2\rangle+\langle \xi_1,\xi_2\rangle]\
= \mathcal{O}\left(\|\phi^{\lambda}\|_2^{2}\ |z|^2\right),
\end{align}
where in the last step the estimate ~\eqref{eq:asympto} and the fact that $\xi_1^{lin}\perp \xi_{2}^{lin}$ are used.
For the other terms in $a_1$ we only re-define $A_{2,0}^{(1)}$, to illustrate the differences: decompose $\frac{1}{\langle \phi^{\lambda},\partial_{\lambda}\phi^{\lambda}\rangle}\langle ImN_{2,0},\phi^{\lambda}\rangle$ as $K_1 z_1^2+K_2 z_1 z_2+K_3 z_2^2$; then, instead of the definition in ~\eqref{eq:A1}, we define $$A_{2,0}^{(1)}=-\frac{i}{2E_1(\lambda)}K_1 z_1^2-\frac{i}{E_1(\lambda)+E_2(\lambda)}K_2 z_1z_2-\frac{i}{2E_2(\lambda)}K_3 z_2^2.$$
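The mechanism is as in the degenerate case: since $\partial_{t}(z_1 z_2)=-i(E_1(\lambda)+E_2(\lambda))z_1 z_2+\cdots$, we have, for instance,
$$
\partial_{t}\left(-\frac{i}{E_1(\lambda)+E_2(\lambda)}K_2\,z_1 z_2\right)=-K_2\,z_1 z_2+\mathcal{O}(|\dot\lambda|\ |z|^2)+\cdots,
$$
so that $\partial_{t}a_1$ cancels the corresponding second order terms on the right hand side of the equation for $\dot\lambda$.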
The new normal forms enable the proof of asymptotic stability of the ground states to go through, as well as all results in Section ~\ref{sec:CorrectionNormal}, {\it i.e.} all the statements in Theorem ~\ref{THM:MassTransfer} except (A) and (D), which we discuss in the next subsection.
\subsection{Equipartition of Energy}
In this subsection we recover Statements (A) and (D). Most of the arguments used in the degenerate regime still hold. As explained above, certain newly-nonzero terms enter in various places. In what follows we present the strategy for handling such terms.
To illustrate the idea we study only one term, whose counterpart is $H_{2,2}$ in ~\eqref{eq:H22}:
$$D:=\sum_{
\begin{subarray}{lll}
m+n=2\\
m'+n'=2
\end{subarray}}D(m,n,m',n')
$$ with $D(m,n,m',n')$ being a real function:
$$D(m,n,m',n'):=Re\langle i(-\Delta+V+\lambda+m E_1(\lambda)+n E_2(\lambda))^{-1}P_{c}^{lin}\phi^{\lambda}(z_1 \xi_1)^{m} (z_2 \xi_2)^{n}, \phi^{\lambda}(z_1\xi_1)^{m'}(z_2\xi_2)^{n'}\rangle.$$ If $E_1(\lambda)=E_{2}(\lambda)$ then we use the observation in ~\eqref{eq:H22} to prove $$
D(m,n,m',n')=\overline{D(m,n,m',n')}=-D(m',n',m,n),\ \text{which implies}\ D=0.
$$ When $E_1(\lambda)\not=E_2(\lambda)$ we use the following result to recover the desired estimate.
\begin{lemma}
\begin{equation}\label{eq:switch}
\int_{0}^{\infty}D(s)\ ds=o(|z_0|^2).
\end{equation}
\end{lemma}
\begin{proof}
The facts that $D(m',n',m,n)$ is real and $(-\Delta+V+\lambda+m' E_1(\lambda)+n' E_2(\lambda))^{-1}$ is self-adjoint imply
$$
\begin{array}{lll}
D(m',n',m,n)&=&\overline{D(m',n',m,n)}\\
&=&-Re\langle i(-\Delta+V+\lambda+m' E_1(\lambda)+n' E_2(\lambda))^{-1}P_{c}^{lin}\times\\
& &\phi^{\lambda}(z_1 \xi_1)^{m} (z_2 \xi_2)^{n}, \phi^{\lambda}(z_1\xi_1)^{m'}(z_2\xi_2)^{n'}\rangle.
\end{array}
$$
The crucial step is to exhibit the factor $E_1(\lambda)-E_2(\lambda)$ in the coefficient:
\begin{equation}\label{eq:realPart}
\begin{array}{lll}
D(m,n,m',n')+D(m',n',m,n)
&=&-[(m-m')E_1(\lambda)+(n-n')E_2(\lambda)] Re H\\
&=&-[m-m'][E_{1}(\lambda)-E_{2}(\lambda)]ReH
\end{array}
\end{equation} where $H$ is defined as
$$
\begin{array}{lll}
H&:=&\langle i[-\Delta+V+\lambda+m' E_1(\lambda)+n' E_2(\lambda)]^{-1}[-\Delta+V+\lambda+m E_1(\lambda)+n E_2(\lambda)]^{-1}\times\\
& &P_{c}^{lin}\phi^{\lambda}(z_1 \xi_1)^{m} (z_2 \xi_2)^{n}, \phi^{\lambda}(z_1\xi_1)^{m'}(z_2\xi_2)^{n'}\rangle,
\end{array}
$$ and in the last step the fact $m+n=m'+n'=2$ is used.
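For the reader's convenience we record the elementary resolvent computation behind this factorization. Writing $A:=-\Delta+V+\lambda+m E_1(\lambda)+n E_2(\lambda)$ and $B:=-\Delta+V+\lambda+m' E_1(\lambda)+n' E_2(\lambda)$, the resolvent identity gives
$$
A^{-1}-B^{-1}=A^{-1}(B-A)B^{-1}=\left[(m'-m)E_1(\lambda)+(n'-n)E_2(\lambda)\right]B^{-1}A^{-1},
$$
since the two resolvents commute, and taking $Re\langle i(A^{-1}-B^{-1})P_{c}^{lin}\phi^{\lambda}(z_1\xi_1)^{m}(z_2\xi_2)^{n},\phi^{\lambda}(z_1\xi_1)^{m'}(z_2\xi_2)^{n'}\rangle$ yields exactly~\eqref{eq:realPart}.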
Equation~\eqref{eq:realPart} enables us to use the same trick as in~\eqref{eq:small}, namely integration by parts, to obtain the desired estimate
\begin{align}
&\int_0^{\infty}\left[D(m,n,m',n')+D(m',n',m,n)\right]\, ds\\
&=\int_{0}^{\infty}\frac{d}{ds}\ \Re\ \left( iH\ \right)\ ds+\int_0^{\infty} \mathcal{O}(|\dot\lambda||z|^4+\|\phi^{\lambda}\|_{2}^{2}|z|^6)\ ds
+o(|z_0|^2).
\end{align}
The proof is complete.
\end{proof}
\nit In summary, as outlined above, all the estimates obtained in the degenerate case can also be proved in the nearly degenerate case.
\section{Introduction}
\label{sec:introduction}
The current Internet infrastructure draws far more power than needed
for its usual operation. At the same time, the network is still
growing, so this inefficiency translates to ever increasing power
demands with high monetary and environmental costs. For reference, the
overall energy consumption of all networking equipment just in the USA
in 2008 was estimated to be larger than
$18\,$TWh~\cite{lanzisera10:_data_networ_equip_energ_use} and the
estimated energy usage for the year 2020 in Europe is more than
$38\,$TWh~\cite{group08:_smart}.
These high energy demands have spurred successful research on all areas of
networking, from the link
level~\cite{reviriego09:_perf_eval_eee,herreria12:_gi_g_model_gb_energ_effic_ether,Sivaraman2014110,Khan2013965,Jung20143}
to the networking layer adapting the routing decisions, as suggested in
Gupta's seminal paper~\cite{gupta03:_green_of_inter}. However, these traffic
engineering proposals were initially constrained to the mere aggregation of
traffic during low activity periods to power off some devices, as that was the
only way a non-power aware device could be made to draw less power. From
there, many researchers have followed this idea, applying it to different
scenarios. As an example,
\cite{chiaraviglio12:_minim_isp_networ_energ_cost,caria12:_how,botero12:_energ_effic_virtual_networ_embed,addis14:_energ_manag_throug_optim_routin}
study centralized algorithms to minimize the number of active network
resources to get significant power savings, assuming a simple on-off power
model of networking equipment. Decentralized algorithms, as extensions to the
ubiquitous OSPF protocol, were also explored
in~\cite{cianfrani10:_energ_savin_routin_algor_green_ospf_protocy,cianfrani12:_ospf_integ_routin_strat_qos}.
Fortunately, newly produced networking equipment is increasingly becoming more
power aware. For instance, old Ethernet interfaces drew a fixed amount of
power regardless of the actual load. Since the arrival of the IEEE~802.3az
standard~\cite{802.3az}, this is no longer the case as they can adapt their
power demands to the traffic load. Thus, it is unnecessary to turn them off
completely in order to save
power~\cite{herreria12:_gi_g_model_gb_energ_effic_ether}. This trend is not
only limited to Ethernet devices. It also appears in optical
networks~\cite{zhang11:_towar_energ_effic_epon_epon,rodriguez12:_improv_energ_effic_upstr_epon},
switching modules~\cite{bianco12:_power_savin_distr_multi_router_archit}, etc.
The result is that new networking equipment exhibits non-flat power
profiles~\cite{herreria12:_gi_g_model_gb_energ_effic_ether}, and thus presents
an opportunity to regulate the traffic offered to each device, either
spreading or concentrating it, to take advantage of the power profile of each
device. These new capabilities are explored for instance
in~\cite{cardona09:_energ_profil_aware_routin,garroppo11:_energ_aware_routin_based_energ_charac_devic,seoane11:_energ_energ_effic_ether}.
The idea is not to concentrate traffic in a few set of links and power off the
rest, but to find the optimum share of traffic that minimizes energy costs
according to the power profile of each device.
In this paper we present the first dynamic decentralized algorithm capable of
adapting routing decisions to minimize energy usage when networking equipment
has otherwise unrestricted power profiles. Although it is not the first
proposal to make use of the ant colony optimization
algorithm~\cite{di97:_antnet} for energy saving~\cite{kim12:_ant_inter}, it is
the first that does not limit its routing decisions to decide the set of links
to power off for a given traffic matrix. In fact, it takes advantage of links
with non-flat power profiles and adjusts their traffic load in real time to
minimize power consumption while keeping all the installed capacity available.
This lets the network react better to unexpected spikes in the traffic load
and, additionally, improves the network resilience in case of a link failure.
The main difficulty in the adaptation of the original ant colony optimization
algorithm comes from the fact that, for the problem at hand, the cost of a
given route is not a simple linear function of its load and thus the protocol
becomes more complex than in the original version of the
algorithm~\cite{di97:_antnet}. We show in the next sections how this problem
was solved.
The rest of the paper is organized as follows. In
Section~\ref{sec:related-work} we present the related work. Then,
Section~\ref{sec:problem-statement} defines the problem in detail. Our
algorithm is described in Section~\ref{sec:trancas-algorithm}. Then,
an evaluation is carried out in
Section~\ref{sec:performance-evaluation} to finally present our
conclusions in Section~\ref{sec:conclusions}.
\section{Related Work}
\label{sec:related-work}
Research on new routing procedures that save power in communication networks
has been ongoing for a few years already. The first proposals focused on
concentrating the traffic on a reduced set of network elements so that unused
resources could be powered off during low load periods decreasing power
consumption. The proposal in~\cite{cianfrani12:_ospf_integ_routin_strat_qos}
belongs to this first family. It tries to concentrate traffic flows on a reduced
set of links to power off the rest. Other proposals in the same vein
are~\cite{chiaraviglio12:_minim_isp_networ_energ_cost} and~\cite{Yang20141}.
The first formulates a minimization problem of the energy consumption
considering that powered nodes and links need a constant amount of power, and
the second treats the problem of maximizing the number of powered off links.
As both problems are intractable
(NP-complete)~\cite{cianfrani12:_ospf_integ_routin_strat_qos,chiaraviglio12:_minim_isp_networ_energ_cost,kim12:_ant_inter,Yang20141},
both articles provide some heuristics to approximate the solution. All these
proposals, however, do not take into account the different power profiles that
new power-aware networking equipment exhibits and may even cause more harm
than good when these profiles are super-linear, as the increased power
consumption caused by traffic aggregation can surpass any power savings
obtained by the reduced consumption of the powered-down resources.
New proposals that take into account the different power profiles are also
known in the literature. For
instance,~\cite{chiaraviglioss:_model_sleep_mode_gains_energ_aware_networ}
considers super-linear energy costs functions in the analysis of the maximum
power savings attainable by powering down part of the network.
In~\cite{seoane11:_energ_energ_effic_ether} the authors formulate a
minimization problem for networks built from IEEE~802.3az links.
The authors of~\cite{cardona09:_energ_profil_aware_routin} address
a similar problem and compare the results obtained with both super- and
sub-linear power profiles. The same problem is also studied
in~\cite{garroppo11:_energ_aware_routin_based_energ_charac_devic,garroppo13:_does_traff_consol_alway_lead},
this time considering bundle links between adjacent routers. The authors find
out in~\cite{garroppo13:_does_traff_consol_alway_lead} that traffic
consolidation does not always lead to energy savings.
The main practical issue with all of these proposals is the complexity of the
problem, which is NP-complete~\cite{kim12:_ant_inter}. Finding the optimum
solution in a real network is very hard and it usually cannot be solved in a
short enough time. NP-complete problems can be tackled employing
search heuristics, usually inspired by elements of the nature, that trade some
optimality in the found solution for execution time. In fact, such heuristics
have already been used with success in other areas of
networking~\cite{Nazi2014246}. So, there is a new line of research that
applies search heuristics to the route optimization problem, reducing its
computational complexity. For
instance,~\cite{lu13:_genet_algor_energ_effic_qos_multic_routin} presents an
algorithm to save power in a restricted scenario of a multicast transmission
using genetic algorithms to find the solution to the routing problem.
In~\cite{galan2013:_using} the authors use the particle swarm optimization
technique to study the trade-off between the number of power profiles in line
cards and the energy savings realized. Finally,~\cite{kim12:_ant_inter} uses
the ant colony optimization algorithm to choose which links to power off to
maximize energy savings during low usage periods. Regrettably, none of these
works takes advantage of the energy savings capabilities present in current
equipment, unlike our proposal that permits the links to stay up, but
modulates their offered load to minimize energy consumption without affecting
the network resiliency.
\section{Problem Statement}
\label{sec:problem-statement}
We model the network as a directed graph $G = (N, \Lambda)$, with $N$ being
the set of nodes (i.e. IP routers) and $\Lambda$ the set of directed links.
Each link $\ell = (u, v) \in \Lambda,\,u,v \in N,$ with nominal capacity
$\mu_{\ell},$ has an associated dynamic cost function $c_{\ell}(\rho_{\ell})
\in \mathbb{R}^+,$ with $\rho_{\ell}$ being the normalized traffic load
carried by the link. That is, $\rho_{\ell} \triangleq \lambda_{\ell} /
\mu_{\ell}$, where $\lambda_{\ell}$ is the amount of traffic carried over the
link $\ell \in \Lambda$. Therefore, the cost of the links varies with the
offered load. Furthermore, we assume $c_{\ell}\left(\rho_{\ell}\right) =
\infty\text{ if } \rho_{\ell} > 1$.
The cost function captures the power needed to run the links at a given load.
Although most currently deployed links lack load-aware power profiles, new
links, such as those implementing IEEE~802.3az, have non-flat power profiles
that must be accounted for when implementing energy-aware routing protocols.
In our analysis we will assume that the power profile function is
monotonically increasing with link load. Also, for simplicity, the power
needed by the engines of the routers is assumed to be almost constant and so
it is absent from our analysis.
We will model the network traffic as a set of flows $\Phi$. Each flow $f \in
\Phi$ is described by a triple $f = \left(o, d, \lambda_f\right)$, with $o, d
\in N$ being the origin and the destination nodes respectively, and
$\lambda_f$ the amount of traffic carried by the flow. Each flow $f$ follows a
path $p_{\mathit{f}}$, defined as an ordered set of adjacent links going from
the origin node $o$ to the destination node $d$. There is a list of
symbols used in this article in Table~\ref{tab:symbols_legend}.
\begin{table*}
\centering
\begin{tabular}{c l}\hline
\textbf{Symbol} & \textbf{Definition} \\\hline
$N$ & Set of nodes in the network \\
$E$ & Set of network edges\\
$\Lambda$ & Set of links in the network \\
$\mu_{\ell}$ & Nominal capacity of link $\ell$\\
$c_{\ell}(\rho)$ & Cost function of link $\ell\in \Lambda$ for load $\rho$\\
$\vec c^f$ & Direct costs of flow $f$\\
$\vec \gamma^f$ & Indirect cost caused by flow $f$\\
$\Phi$ & Set of flows \\
$f(o, d, \lambda)$ & Flow from node $o$ to $d$ carrying traffic
$\lambda$\\
$p_{\mathit{f}}$ & Path followed by flow $f$\\
$g_i^f(j)$ & \emph{Goodness} at node $i$ for taking $j$ as the next hop of
flow $f$\\
$\Gamma$ & Estimated network cost for the current agent\\
$\pi_e$ & Threshold between random and \emph{goodness} based next node
selection \\
\end{tabular}
\caption{Notation.}
\label{tab:symbols_legend}
\end{table*}
The total cost of the network, that is, the amount of power needed to operate
it at any given time, can be computed as the sum of the costs of all the links
in the network. As the link cost function is not necessarily linear, each link
load must be obtained first. Let $a(f, \ell)$ be the route-link incidence
matrix, defined such that
\begin{equation}
\label{eq:indices}
a(f, \ell) \triangleq
\begin{cases}
1, & \mbox{ if } \ell \in p_{\mathit{f}}\\
0, & \mbox{ otherwise.}
\end{cases}
\end{equation}
Then, the load of a link is simply
\begin{equation}
\label{eq:link-load}
\rho_{\ell} = \frac{1}{\mu_{\ell}}\sum_{f \in \Phi} \lambda_f a\left(f, \ell\right),
\end{equation}
and the power cost of the whole network is
\begin{equation}
\label{eq:network-cost}
P = \sum_{\ell \in \Lambda} c_{\ell}\left(\rho_{\ell}\right).
\end{equation}
Formally, our goal is to solve
\begin{equation}
\label{eq:min-goal}
\min P = \min \sum_{\ell \in \Lambda} c_{\ell}\left(
\frac{1}{\mu_{\ell}} \sum_{f \in \Phi} \lambda_f a(f, \ell)
\right),
\end{equation}
that is, to minimize the overall power consumption $P$ subject to the usual
topological constraints:
\begin{subequations}
\begin{multline}
\label{eq:flow-conservation}
\sum_{\ell = (i, j) \in \Lambda} a(f, \ell) - \sum_{\ell = (j, i) \in
\Lambda} a(f, \ell) \\=
\begin{cases}
1, & \quad \text{if $i = o_f$} \\
      -1, & \quad \text{if $i = d_f$} \\
0, & \quad \text{otherwise.}
\end{cases} \quad \forall f \in \Phi
\end{multline}
\begin{equation}
\label{eq:resource-constraints}
\sum_{f \in \Phi} \lambda_f a(f, \ell) \leq \mu_{\ell}, \quad \forall \ell \in \Lambda
\end{equation}
\begin{equation}
\label{eq:variables}
    a(f, \ell) \in \{0, 1 \} \quad \forall f \in \Phi, \ell \in \Lambda
\end{equation}
\end{subequations}
Equations~\eqref{eq:flow-conservation} are the flow-conservation constraints,
and~\eqref{eq:resource-constraints} are the physical constraints of the
network. The choice of integer values for the variables $a(f, \ell)$ means
that the flows are \emph{unsplittable}, i.e., each flow must follow a single
path through the network.
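To make the objective concrete, the following Python sketch evaluates the link loads~\eqref{eq:link-load} and the total power~\eqref{eq:network-cost} for a candidate assignment of paths. The data layout and names are ours, chosen only for illustration:

```python
# Sketch: evaluate the total network power P for a candidate set of paths.
#   flows:    dict flow_id -> rate lambda_f
#   paths:    dict flow_id -> list of directed links (u, v)
#   capacity: dict link -> nominal capacity mu_l
#   cost:     dict link -> callable c_l(rho) on the normalized load
def network_power(flows, paths, capacity, cost):
    load = {l: 0.0 for l in capacity}        # lambda_l, absolute traffic
    for f, rate in flows.items():
        for l in paths[f]:                   # a(f, l) = 1 exactly on the path
            load[l] += rate
    total = 0.0
    for l, mu in capacity.items():
        rho = load[l] / mu                   # normalized load rho_l
        if rho > 1:                          # c_l(rho) = infinity if rho > 1
            return float("inf")
        total += cost[l](rho)
    return total
```

This is only an evaluation of the objective; the optimization itself is the subject of Section~\ref{sec:trancas-algorithm}.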
\begin{figure*}
\centering
\hfill{}\subfigure[Optimum forwarding for logarithmic cost function.]{
\includegraphics{problem-log}
\label{fig:problem-statement-log}
}\hfill{}
\subfigure[Optimum forwarding for cubic cost function.]{
\includegraphics{problem-cubic}
\label{fig:problem-statement-cubic}
}\hfill{}
\begin{tabular}{ll cc}\hline
\textbf{Energy Profile}&\textbf{Cost Function}&\textbf{Total Cost for Figure~\ref{fig:problem-statement-log}}&\textbf{Total Cost for Figure~\ref{fig:problem-statement-cubic}}\\\hline
Logarithmic&$c_{\ell}(\rho)=\log_{10}(1+\rho)$&\textbf{0.65}&0.85\\
Cubic&$c_{\ell}(\rho)=\rho^3$&1.33&\textbf{0.48}\\\hline
\end{tabular}
\caption{Power saving routing example for three flows
$\lambda_1=\lambda_2=\lambda_3=1$ with a common destination (the gray
node). Every link has the same cost function (logarithmic or cubic) and
$\mu_{\ell}=3\, \forall \ell \in \Lambda$.}
\label{fig:problem-statement-examples}
\end{figure*}
Figure~\ref{fig:problem-statement-examples} shows feasible solutions to this
problem for two different cost functions in a simple five-node network. This
elementary example illustrates how the
different cost functions lead to distinct optimum forwarding strategies.
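The intuition can be checked numerically. The toy computation below (ours, and not tied to the exact topology of the figure) compares concentrating three unit flows on a single link against spreading them over three parallel links, each with $\mu_\ell = 3$:

```python
from math import log10

def log_cost(rho):    # logarithmic (concave) power profile
    return log10(1 + rho)

def cubic_cost(rho):  # cubic (convex) power profile
    return rho ** 3

mu = 3.0
# Three unit flows concentrated on one link: rho = 3/3 = 1.
concentrated = {c.__name__: c(3.0 / mu) for c in (log_cost, cubic_cost)}
# The same three flows spread over three links: rho = 1/3 on each.
spread = {c.__name__: 3 * c(1.0 / mu) for c in (log_cost, cubic_cost)}

# Concave profiles reward aggregation, convex profiles reward spreading.
assert concentrated["log_cost"] < spread["log_cost"]
assert spread["cubic_cost"] < concentrated["cubic_cost"]
```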
In the form above, the optimization problem is a generalization of the
\emph{unsplittable multicommodity flow problem} (UFP), where the
generalization consists in allowing \emph{arbitrary} cost functions
$c_{\ell}(\cdot)$ for the links. With linear costs,
\eqref{eq:min-goal}--\eqref{eq:variables} is the classical UFP, which in
specific instances is known to be NP-hard: for example, when the network $G$
has only one edge, the classical UFP specializes to the \textsc{knapsack}
problem.
For general link-cost functions, the relaxation of the
problem~\eqref{eq:min-goal}--\eqref{eq:variables} to $a(f, \ell) \in [0, 1]$,
namely to flows splittable over several routes, is in general a global
minimization problem. The case when the $c_{\ell}(\cdot)$ are \emph{concave}
functions is, for instance, NP-hard~\cite{Sahni74,Varvalos90}. Since we do not
impose any prior assumption about the energy-consumption profiles, our problem
must in general be regarded as NP-hard from a computational perspective.
Our final goal is to design a new routing algorithm that
solves~\eqref{eq:min-goal}. The solution must be distributed and put low
requirements on the network nodes. Additionally, it must be able to
dynamically adapt to changing traffic demands and do so in a progressive
manner, such that the changes in the set of network paths do not lead the
network to a congested state nor cause undesired oscillations.
\section{Routing Algorithm}
\label{sec:trancas-algorithm}
As already stated in the introduction, we opt for a heuristic approach to
solve the aforementioned NP-hard optimization problem. Among all the families
of heuristic solvers available in the literature, the set of ant colony
algorithms~\cite{di97:_antnet} maps almost directly to the problem at hand.
Furthermore, their decentralized nature and time-adaptive characteristics are
requisites for any deployable solution. Note that we do not propose how the
routing decisions can be implemented in practice, although it is certainly
feasible on an MPLS~\cite{mpls-rfc} network, where RSVP-TE~\cite{rsvp-te-rfc}
takes the job of setting up the label switched paths (LSPs).
As in the seminal AntNet algorithm~\cite{di97:_antnet}, our algorithm relies
on autonomous agents (\emph{ants}) that travel the network gathering enough
information to form optimal paths. Agents travel the network from source to
destination and back to the source. In their forward path, they explore
different routes to the destination to measure their costs. In their return
path, they update statistics at every node related to the fitness of the
next-hop node chosen in the previous forward way. The per-flow routing table
is finally calculated as the most appropriate next-hop for a given destination
at every node.
While the original AntNet algorithm used path delay as the cost function, we
use the power consumption. Moreover, AntNet only obtains a single path to a
destination from a given core node, while our algorithm must be able to
calculate different routes for every flow traversing each core node. This
complicates the problem as now the cost of the links does not depend on the
amount of traffic being carried by them in a linear way, and so it is
important to consider individual routes for every flow, even when they share a
common intermediate node and destination. Throughout the rest of this section,
we detail how agents work and what information they collect to obtain the set
of optimal paths for each flow that minimize global power consumption.
\subsection{Information Gathering}
\label{sec:inform-gath}
A key part of the algorithm relies on obtaining enough information about the
network state for updating the flow routes. It is the job of a \emph{forward
agent} to gather this information with the help of the network nodes.
For this, a forward agent departs periodically from the source node $o$
towards the flow destination $d$. This agent carries information about the
current flow rate ($\lambda_f$) and the current flow path ($p_{\mathit{f}}$).
The agent walks the network towards the flow destination in a
non-deterministic manner to be detailed later. When the agent reaches a new
node, it records the identity of the node in an internal list of visited
nodes. At the same time, the agent calculates the marginal cost of carrying
the flow traffic across the link used in the last hop and stores it internally
as $\vec c^f[i]$,\footnote{We assume that agents use the memory in the visited
node to store its state to then serialize it and transmit it to the next
node as an IP packet.} with $i$ the index of the previous node. This depends
on the cost function of the link, the traffic already being carried by the
link, and whether the link is part of the current flow path. The exact procedure to
calculate the marginal cost is shown in Listing~\ref{lst:fwd_agent_comp} (see
function \texttt{calcCost}). Note that forward agents simply need that core
nodes maintain statistics about aggregate traffic load in their outgoing links
to calculate the marginal cost.
\lstinputlisting[float,label=lst:fwd_agent_comp,caption=Procedure for the
marginal cost calculation.]{fwd_agent_comp.txt}
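Since Listing~\ref{lst:fwd_agent_comp} is provided separately, we also give a compact Python rendition of our reading of \texttt{calcCost}; the function and variable names are ours:

```python
def marginal_cost(link_load, mu, cost, flow_rate, link_in_current_path):
    """Marginal power cost of carrying the flow's traffic over a link, as we
    read calcCost: link_load is the aggregate traffic lambda_l measured on
    the link and cost is c_l(.) on the normalized load (our naming)."""
    rho = link_load / mu
    if link_in_current_path:
        # The flow is already on this link, so its traffic is included in
        # link_load: its cost is what the link would save without it.
        return cost(rho) - cost(rho - flow_rate / mu)
    # Otherwise: the extra power of adding the flow's traffic on top.
    return cost(rho + flow_rate / mu) - cost(rho)
```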
Before leaving the current node, forward agents need to decide which neighbor
node to visit next. There is a trade-off in this selection, because agents
should explore all the possible paths, but, at the same time, more resources
should be used to explore \emph{good} paths, where a \emph{good} path is one
that is known to demand less power than others. To achieve a
balance in this selection, forward agents use two procedures to select the
next visited node: one completely random using no previously obtained
information, and the other one based on costs calculated by other agents.
Which procedure to use is selected at random too. With some small probability
$\pi_e$ the agent chooses the first procedure ensuring that eventually all
possible routes are explored. With probability $1-\pi_e$ the next node is
chosen according to its \emph{goodness} relative to the flow~$f$. There is a
vector $\vec G_i(f)$ at every node $i$ that stores the goodness values for
each flow $f$ to all neighboring nodes. That is, $\vec G_i(f) = \{g_i^f(j)
\,\,\forall j \,| \, (i, j) \in \Lambda\}$. The goodness of each node is a
probability related to the estimated power consumption of the flow should it
select the neighbor node as part of its path. It must be stored in the nodes
where it is updated by the \emph{backward} agents.
Finally, if a cycle is detected after arriving at a new node, all the
information about the nodes visited and the links traveled since the previous
visit is deleted from the agent. A pseudo-code version of the algorithm
governing forward agents is provided in Listing~\ref{lst:fwd_agent}.
\lstinputlisting[float,label=lst:fwd_agent,caption=Forward agent
algorithm.]{fwd_agent.txt}
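The two-mode next-hop selection described above can be sketched in Python as follows (a simplified rendition with our own names; in the protocol the agent state lives in the visited nodes):

```python
import random

def choose_next_hop(goodness, pi_e=0.1, rng=random):
    """Select the next node to visit: goodness maps each neighbor j to
    g_i^f(j), which sums to 1 over the neighbors."""
    neighbors = list(goodness)
    if rng.random() < pi_e:
        return rng.choice(neighbors)          # blind exploration
    r, acc = rng.random(), 0.0                # goodness-driven choice:
    for j in neighbors:                       # sample with probability g_i^f(j)
        acc += goodness[j]
        if r < acc:
            return j
    return neighbors[-1]                      # guard against rounding error
```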
\subsection{Information Dissemination}
\label{sec:inform-dissem}
The information gathered by the forward agent on its way to the destination
node (the marginal costs and the path actually traversed) is used to update
the goodness values at each of the intermediate nodes. The \emph{backward
agent} is in charge of this update process while it travels back to the
origin node following exactly the reverse route recorded by the forward agent.
At an intermediate node $i \in N$ in the route of the backward agent from the
destination $d$ toward the origin $o$, the goodness value is updated based on
the cost of the partial path followed before by the forward agent from $i$ to
$d$. In turn, this cost is the sum of two components: a direct cost that
results from the addition of the flow's traffic to the downstream links from
$i$ on the path, and an indirect cost that measures the impact on the costs
that the remaining flows would see should the current flow $f$ depart
partially or totally from its current route.
\begin{enumerate}
\item The \emph{direct cost} is computed directly from the measurements taken
by the forward agent as
\begin{equation}
\label{eq:cost_from_j}
C^f_{\text{direct}}(i) = \sum_{k = i}^d \vec{c}^f[k],
\end{equation}
where $d$ is the flow destination and $\vec{c}^f[k]$ is the vector of
measures recorded by the forward agent at node $k$.
\item When a flow leaves or changes its route, this might actually induce an
increase in the marginal costs of other flows that were sharing the same
links, particularly if the energy profile in those links is sub-linear. This
possible increment is thus regarded as the \emph{indirect cost} of the
partial route from $i$ to $d$ used by the forward agent. Specifically, the
indirect cost is initialized at the destination $d$ when a backward agent is
created, with value
\begin{equation}
\label{eq:extra_cost_init}
C_{\text{indirect}}^f(d) = \sum_{\ell \in p_f} \vec{\gamma}^f[\ell]
\end{equation}
where
\begin{equation}
\label{eq:extra_cost_link}
\vec{\gamma}^f[\ell] \triangleq \left( c_\ell(\lambda_\ell - \lambda_f) +
c_\ell(\lambda_f) - c_\ell(\lambda_\ell) \right)^+
\end{equation}
is the sum of the marginal increases in energy consumption of all other
flows traversing the link if the flow $f$ were to leave link $\ell$, and
$\lambda_\ell$ is the total traffic carried by link $\ell$. Writing
$\lambda_r \triangleq \lambda_\ell - \lambda_f$ for the remaining traffic
that stays in the link after the departure of flow $f$, the cost
change~\eqref{eq:extra_cost_link} is simply the difference between the cost
due to the remaining traffic $c_\ell(\lambda_r)$ and the cost savings of
shifting $\lambda_f$ units of traffic off its current operating point, i.e.,
$c_\ell(\lambda_\ell) - c_\ell(\lambda_f)$. The value of
$\vec{\gamma}^f[\ell]$ is not allowed to be negative, as would happen for
links with super-linear (convex) cost functions,\footnote{Note that the
argument of~\eqref{eq:extra_cost_link} is negative iff $c_\ell(\cdot)$ is
a superadditive function.} in order to disincentivize that the flows
change prematurely their paths. Such changes could lead to undesirable route
flapping and network instability.
The values of the vector $\vec{\gamma}^f$ are computed by the backward agent,
for every visited link, as part of its reverse path. $\vec{\gamma}^f$ is
stored at the source node of the flow, and it is carried by the forward
agents to be used in the next round of the backward agents as follows: when
the backward agent leaves node $j$ to visit node $i$, the indirect cost is
  updated as
\begin{equation}
\label{eq:extra_cost_node_i}
C^f_{\text{indirect}}(i) = \begin{cases}
C^f_{\text{indirect}}(j) - \vec{\gamma}^f[(i, j)], \quad & \text{if $(i,
j) \in p_f$} \\
C^f_{\text{indirect}}(j), \qquad & \text{otherwise}.
\end{cases}
\end{equation}
  Therefore, the indirect cost decreases toward the source, diminishing by
  an amount equal to the cost of leaving the links downstream from $i$ that
  flow $f$ really uses. As a result of this computation, the paths in which
  the departure of a flow $f$ would produce a larger cost to other concurrent
  flows are penalized in comparison to paths where this does not occur.
\end{enumerate}
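A minimal Python sketch of~\eqref{eq:extra_cost_link} and the backward update~\eqref{eq:extra_cost_node_i} follows. The names are ours and, as in the equations, the cost function is applied directly to absolute traffic:

```python
def gamma(cost, lam_link, lam_flow):
    """Indirect-cost term of a link: the clipped cost increase that the
    remaining traffic would suffer if flow f left the link; lam_link is the
    total carried traffic, lam_flow the flow's share."""
    delta = cost(lam_link - lam_flow) + cost(lam_flow) - cost(lam_link)
    return max(delta, 0.0)                    # the (.)^+ clipping

def indirect_cost_step(c_indirect_j, gamma_link, link_in_flow_path):
    """Update when the backward agent moves from node j back to node i:
    subtract the link's gamma only if (i, j) lies on the current flow path."""
    return c_indirect_j - gamma_link if link_in_flow_path else c_indirect_j
```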
The sum of the direct and indirect costs is the \emph{raw goodness} value
computed by the backward agent before leaving its current node, and stored
there
\begin{equation}
\label{eq:goodness}
\Gamma^f(i) = C^f_{\text{direct}}(i) + C^f_{\text{indirect}}(i).
\end{equation}
The goodness is used as a metric to select the best next-hop for every flow.
To this end, it is essential that paths with lower energy demands yield higher
goodness, but this condition is not guaranteed for the raw goodness for a
number of reasons. The raw values are noisy due to the measurement process,
and have to be normalized first to allow comparison with the values computed
by other backward agents for the same flow, possibly after having explored
different paths. Finally, some adjustment is needed to ensure that the metric
is monotonically decreasing in $\Gamma^f(i)$. In this paper, we apply the same
mechanisms and problem-independent constants as AntNet~\cite{di97:_antnet} to
derive the routing metric.
To simplify notation, we set $\Gamma = \Gamma^f(i)$ for the rest of this
Section. First, $\Gamma$ is normalized by a scaled average of previous
measurements
\begin{equation}
\label{eq:filter-scale}
  r^\prime = \min \left\{ \frac{\Gamma}{\alpha \overline{\Gamma}},\, 1 \right\}
\end{equation}
where $\alpha > 1$ is a suitable attenuation constant and $\overline{\Gamma}$
denotes the average of past samples of $\Gamma$. Averaging smoothes the
measurements and reduces the variance, but this variance can still be high and
trigger instability in the routing decisions. Thus, $r^\prime$ is corrected
according to
\begin{equation}
r^\prime_a = \begin{cases}
r^\prime - \textrm{e}^{-\frac{a \sigma}{\overline{\Gamma}}} \quad &
\text{if $r^\prime < 0.5$} \\
r^\prime + \textrm{e}^{-\frac{a \sigma}{\overline{\Gamma}}} \quad & \text{otherwise}
\end{cases}
\end{equation}
where $\sigma$ stands for the standard deviation of $\Gamma$. This correction
is enforced only if the average $\overline{\Gamma}$ is considered reliable,
i.e., if the ratio $\sigma / \overline{\Gamma} < \epsilon \ll 1$. If the
average is not stable, i.e., $\sigma / \overline{\Gamma} \geq \epsilon$, then a
penalty factor is added
\begin{equation}
r^\prime_a = \begin{cases}
r^\prime +1 - \textrm{e}^{-\frac{b \sigma}{\overline{\Gamma}}} \quad &
\text{if $r^\prime < 0.5$} \\
r^\prime -1 - \textrm{e}^{-\frac{b \sigma}{\overline{\Gamma}}} \quad & \text{otherwise}
\end{cases}
\end{equation}
where $b < a$. Finally, the metric $r^\prime_a$ is compressed via the power
law $r^\prime_a \leftarrow (r^\prime_a)^h$ and clipped to the interval $[0,
1]$.
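The recalibration pipeline can be summarized with the following Python sketch; the constants are placeholders in the spirit of the problem-independent values mentioned above, not values fixed by this paper:

```python
from math import exp

def routing_metric(Gamma, mean, std,
                   alpha=2.0, a=10.0, b=5.0, eps=0.25, h=0.04):
    """Recalibrate the raw goodness Gamma; mean and std are the running
    average and deviation of past samples. All constants are placeholders."""
    r = min(Gamma / (alpha * mean), 1.0)
    if std / mean < eps:                       # average considered reliable
        corr = exp(-a * std / mean)
        r = r - corr if r < 0.5 else r + corr
    else:                                      # unstable average: penalty
        corr = exp(-b * std / mean)
        r = r + 1 - corr if r < 0.5 else r - 1 - corr
    r = min(max(r, 0.0), 1.0)                  # clamp first (our guard, so the
    return r ** h                              # fractional power is defined)
```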
After these nonlinear recalibration steps, node $i$ computes the
\emph{goodness routing metric} to each of its neighbors $j \in N$, $(i, j) \in
\Lambda$ as follows:
\begin{equation}
\label{eq:final-goodness}
g_i^f(j) \leftarrow g_i^f(j) + \begin{cases}
(1 - r^\prime_a) \bigl( 1 - g_i^f(j) \bigr) \quad \text{agent comes
from $j$} \\
-(1 - r^\prime_a) g_i^f(j) \quad \text{otherwise}.
\end{cases}
\end{equation}
It is easy to check that if $\sum_{j \in N, (i,j) \in \Lambda} g_i^f(j) = 1$
then the same condition holds after applying the update
rule~\eqref{eq:final-goodness}, so $g_i^f(j)$ can be conveniently interpreted
as the \emph{likelihood} of preferring neighbor $j$ as the next
hop.\footnote{Note that routing is deterministic, not random, and that the
traffic of a flow is not split among several paths. Thus, even if $g_i^f(j)$
is used as a probability by forward agents, all the traffic from a given
flow chooses as the next hop the neighbor with the highest $g_i^f(j)$
value.} A total goodness of $1$ is easily enforced by choosing as initial
values $g_i^f(j) = 1/n_i$ for every neighbor $j$ of node $i$, with $n_i$ the
number of adjacent nodes. Therefore, in the absence of better a priori
information, each neighbor initially receives the same goodness as a credit.
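The update rule~\eqref{eq:final-goodness} is straightforward to implement; the sketch below (our naming) also makes it easy to verify that the total goodness stays at $1$:

```python
def update_goodness(G, from_neighbor, r_a):
    """Apply the goodness update at a node: G maps each neighbor j to
    g_i^f(j); from_neighbor is the node the backward agent arrived from."""
    reinforce = 1.0 - r_a
    for j in G:
        if j == from_neighbor:
            G[j] += reinforce * (1.0 - G[j])   # reward the used next hop
        else:
            G[j] -= reinforce * G[j]           # proportionally decay the rest
    return G
```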
\subsection{Obtaining the New Path}
\label{sec:getting-new-path}
Once the new goodness values have been calculated, the backward agent selects
the neighbor with the maximum value as the next hop for the flow. This information is stored
internally by the agent.
When the agent finally reaches the source node of the flow, the information
about the next hops is employed to construct a new path. Because the backward
agent follows the reverse path of the forward agent, and the newly constructed
path is just the ordered collection of the best next hops (as determined by
their goodness values) at the visited nodes, this new path is not necessarily
connected. So before replacing the current path, the origin node performs a
connectivity test on it. For this, it can either rely on an existing
link-state routing algorithm or send some kind of source-routed probe packet.
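The path assembly and its connectivity check can be sketched as follows (our own simplified rendition, with a hop limit as a loop guard):

```python
def build_path(next_hop, origin, destination, max_len=64):
    """Assemble the candidate path from the per-node best next hops recorded
    by the backward agent; returns None if the chain is disconnected or
    loops before reaching the destination."""
    path, node, seen = [], origin, {origin}
    while node != destination:
        nxt = next_hop.get(node)
        if nxt is None or nxt in seen or len(path) >= max_len:
            return None                        # fails the connectivity test
        path.append((node, nxt))
        seen.add(nxt)
        node = nxt
    return path
```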
Even if the recorded path is dismissed, the work done by the agents is not
lost. Chances are high that a new forward agent eventually follows the best
path, as it is the one with the highest goodness values at every node, and
thus, the resulting backward agent will record the whole path.
\subsection{Memory Requirements}
\label{sec:memory-requirements}
It is important to characterize the memory requirements for storing all the
information related to the state of the agents and any auxiliary information
they may need. To this end, we detail the memory needs of edge nodes, regular
nodes and agents. Since agents are not physical entities, they cannot store
any information themselves; instead, they store it transiently in the nodes
they visit. For clarity, we account for this separately from the memory needs
of regular nodes.
The forward agent carries the following information: a set of visited nodes,
the cost of the traveled links, the flow rate, the current path, and the extra
cost of leaving links in the current path. That is, information about the
current path and the traveled one. All this information depends solely on the
path lengths, and so it usually scales with the logarithm of the number of
nodes.
The backward agent does not carry much more information than the forward one.
It just stores the current path and, additionally, the extra cost values of
those links traveled that are part of the current path. Finally, it also holds
a copy of the path followed by the forward agent and it records the best next
hop node for the visited nodes. Again, all this information is proportional to
the path length, so it is independent of the number of flows.
The agents do use information stored in the nodes to communicate with other
agents and to obtain some basic information for their calculations. Source
nodes must store for every flow originating from them the current path of the
flow, its rate and the extra cost incurred when the flow leaves any of the
links currently traversed. The rate information scales linearly with the
number of flows departing from the node, while the path information and extra
costs scale with the product of the number of flows departing from the node
and the logarithm of the network size. Since we consider all traffic between a
given pair of edge nodes as a single flow, the total information stored at the
edges is still manageable. In the worst case, it is $|E|\log(N)$, with $|E|$
the number of edge nodes.
Regular nodes, and edge nodes too, need to store additional information for
the agents to do their calculations. They need an estimate of the traffic
being sent across every outgoing link for the cost calculation. They also need
to store the information for the best node selection: the \emph{goodness}
vector and the cost statistics for each flow. Each goodness vector has an
entry for every outgoing link ($\frac{2|\Lambda|}{|N|}$ on average), so its
size should remain relatively small. However, the node must store a goodness
vector for every network flow. In the worst case, there can be as many as
$|E|(|E|-1)$ flows in the network, so this is clearly the limiting factor of
the algorithm. To lower this memory requirement, nodes could use some kind of
eviction policy to free the memory associated with flows without recent
activity.
All this information is summarized in Table~\ref{tab:memory-requirements}.
\begin{table}
\centering
\begin{tabular}{l c}\hline
\textbf{Element}&\textbf{Needed storage}\\\hline
Forward agent&$O(\log(N))$\\
Backward agent&$O(\log(N))$\\
Source node&$O(|E|\log(N))$\\
Core node&$O\left(\frac{2|\Lambda|}{|N|}|E|^2\right)$\\\hline
\end{tabular}
\caption{Memory requirements.}
\label{tab:memory-requirements}
\end{table}
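As a rough illustration of the dominant term in Table~\ref{tab:memory-requirements}, the worst-case number of goodness entries held by a core node can be estimated as follows (illustrative figures, not measurements from the paper):

```python
# Back-of-the-envelope sketch: a core node stores one goodness vector of
# average size 2|Lambda|/|N| for each of up to |E|(|E|-1) flows.
def core_node_entries(n_links, n_nodes, n_edge_nodes):
    avg_vector_size = 2 * n_links / n_nodes
    worst_case_flows = n_edge_nodes * (n_edge_nodes - 1)
    return avg_vector_size * worst_case_flows

# e.g. a nobel-eu-sized network: 28 nodes, 41 links, every node as an edge
entries = core_node_entries(n_links=41, n_nodes=28, n_edge_nodes=28)
```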
\section{Evaluation}
\label{sec:performance-evaluation}
In this section we analyze the performance of our algorithm. We start
with a set of simple experiments in a synthetic topology that highlights the
behavior of the algorithm for different cost functions. Then, we show the
results on more realistic network topologies.
All the results have been produced by an open-source in-house simulator
available at~\cite{rodriguez13:_tranc}.\footnote{We refrained from writing a
module for a general-purpose network simulator such as~\cite{ns-2}, as the
amount of new code would be of the same order.} Our simulator abstracts
packet-level simulation details and considers the long-term traffic averages
known. This speeds up the simulations and, at the same time, lets us employ
publicly available traffic matrices, which do not usually include packet-level
information.
The simulator reads two configuration files: one describes the network
topology and link parameters and the second one controls the traffic
characteristics. The simulated links are described by their maximum traffic
capacity and their cost function. The cost function for a given link is
reduced for simplicity to the set of coefficients $\{a_0, \ldots, a_n\}$ in
the general formula
\begin{equation}
\label{eq:total_cost}
c_{\ell}(\lambda) = a_0 \log \lambda + \sum_{i=1}^{n}a_i \lambda^{i-1}.
\end{equation}
This formula lets us represent the main power profiles that links are
expected to exhibit in the near
future~\cite{cardona09:_energ_profil_aware_routin,chiaraviglioss:_model_sleep_mode_gains_energ_aware_networ}:
sub-linear, like those of IEEE~802.3az
links~\cite{herreria12:_gi_g_model_gb_energ_effic_ether}; linear (although
this is not expected in links themselves, it can be used to account for the
power costs of the switch matrix of the routers); a constant component
(although this does not have any effect on the routing decisions); and
super-linear. The latter profiles, like cubic ones, have been found in
Ethernet interfaces applying dynamic voltage and frequency
scaling~\cite{Zhai:2004:TPL:996566.996798}. Finally, we do not consider an
on-off power profile, as we need all links to be active to be able to send and
receive agents through them. In any case, with a suitable scaling factor, the
logarithmic profile can be made similar to the on-off profile. For the
algorithm configuration parameters, we used the constants provided
in~\cite{di97:_antnet}: $\epsilon=0.25$, $a=10$, $b=9$ and $h=0.04$, which are
problem-independent. In any case, it has been found that Ant Colonization
algorithms are quite resilient to changes in the configuration
parameters~\cite{dhillon07:_perfor_analy_antnet_algor}, so further tuning was
not deemed necessary.
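The cost model of Eq.~\eqref{eq:total_cost} can be evaluated directly from the coefficient set; the following sketch shows illustrative coefficient choices for the three profiles (assumptions, not the exact values used in the simulations):

```python
import math

# Sketch of the simulator's link cost model: a link is described by the
# coefficients {a_0, ..., a_n} of the cost formula
#   c(lambda) = a_0 * log(lambda) + sum_{i=1..n} a_i * lambda^(i-1)
def link_cost(coeffs, load):
    cost = coeffs[0] * math.log(load) if coeffs[0] else 0.0
    for i, a_i in enumerate(coeffs[1:], start=1):
        cost += a_i * load ** (i - 1)
    return cost

# Illustrative profiles (coefficients are assumptions, not the paper's):
logarithmic = (1.0,)                   # sub-linear: a_0 * log(lambda)
linear = (0.0, 0.0, 1.0)               # a_2 * lambda
cubic = (0.0, 0.0, 0.0, 0.0, 1.0)      # super-linear: a_4 * lambda^3
```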
Given the inherent random behavior of the algorithm, each simulation has been
repeated 100 times with a different initial seed for the simulator's random
number generator. All the provided results show the average of a given metric
along with its 95$\,$\% confidence interval, except in those cases where the
interval was too small to display.
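For reference, with 100 repetitions a normal approximation of the 95$\,$\% confidence interval of the mean is adequate; a minimal sketch:

```python
import statistics

# Sketch of the reported statistic: sample mean with a normal-approximation
# 95% confidence half-width (z = 1.96), reasonable for 100 repetitions.
def mean_ci95(samples):
    mean = statistics.mean(samples)
    half_width = 1.96 * statistics.stdev(samples) / len(samples) ** 0.5
    return mean, half_width
```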
\subsection{Algorithm Behavior}
\label{sec:trancas-behavior}
The first set of results shows the behavior of the algorithm in a
regular network. The topology consists of a simple switching matrix of
$n$ steps connecting $n$ traffic sources to $n$ destinations. Every
link has the same cost function and unlimited capacity. The goal is to
check the results obtained by the algorithm in an otherwise
unrestricted scenario. Traffic consists of $n$ identical flows going
from each source to every destination, for a total of $n^2$ flows in
the network.
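The traffic pattern of this scenario is straightforward to generate; a sketch (names are illustrative):

```python
# Sketch of the synthetic workload: one identical flow from each of the n
# sources to each of the n destinations, i.e. n^2 flows in total.
def lattice_flows(n, rate=1.0):
    return [(f"src{s}", f"dst{d}", rate) for s in range(n) for d in range(n)]

flows = lattice_flows(5)
assert len(flows) == 25  # n^2 flows for n = 5
```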
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{log_net_multi}
\caption{Algorithm behavior in a lattice network with a logarithmic cost function. Line
width represents the number of flows in a link, while dotted lines show
unused links.}
\label{fig:log-multi-lattice}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{cubic_net_multi}
\caption{Algorithm behavior in a lattice network with a cubic cost function.}
\label{fig:cubic-multi-lattice}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{linear_net_multi}
\caption{Algorithm behavior in a lattice network with a linear cost
function.}
\label{fig:linear-multi-lattice}
\end{figure}
Figures~\ref{fig:log-multi-lattice},~\ref{fig:cubic-multi-lattice}
and~\ref{fig:linear-multi-lattice} show a graphical representation of the
network and the link occupations for $n=5$ and three different cost functions:
logarithmic, cubic and linear, respectively. The number of flows in each link
is proportional to the line width, with dotted lines representing unused
links. A close inspection of the figures shows that in the
logarithmic scenario (Figure~\ref{fig:log-multi-lattice}) routes tend to be
short (\emph{vertical} links are only used in the first and final steps) and
shared among several flows. Note that some lines are quite wide and, at the
same time, many links remain unused. This is expected, as the marginal
cost of adding traffic to a link decreases with its load.
In contrast, for the cubic cost function
(Figure~\ref{fig:cubic-multi-lattice}), most links are lightly used. In fact,
almost all vertical links are employed to avoid sharing traffic on either
horizontal or diagonal links. Again, this is the expected behavior: in this
case the marginal cost increases with load, so the algorithm must find a way
to spread the traffic across the network as long as the added cost (more links
and longer routes) is not excessive.
In the linear scenario (Figure~\ref{fig:linear-multi-lattice}) the algorithm
just searches for the shortest routes regardless of how the links are shared
among flows. As in the logarithmic cost function scenario, almost no vertical
links are used, but it can also be observed that the load is not as
concentrated on a few links. In fact, the total number of empty links is
smaller.
We repeated these simulations on a somewhat larger net with $n=8$.
\begin{figure}
\centering
\subfigure[Logarithmic cost function]{
\includegraphics[scale =1, width=\columnwidth]{histogram-log}
\label{fig:hist-lattice-8-log}
}
\subfigure[Cubic cost function]{
\includegraphics[scale =1, width=\columnwidth]{histogram-cubic}
\label{fig:hist-lattice-8-cubic}
}
\subfigure[Linear cost function]{
\includegraphics[scale =1, width=\columnwidth]{histogram-linear}
\label{fig:hist-lattice-8-linear}
}
\caption{Link occupation in the $n=8$ regular topology for different cost
functions.}
\label{fig:hist-lattice-8}
\end{figure}
Figure~\ref{fig:hist-lattice-8} shows how many links share a given number of
flows. Results agree with the above discussion. The logarithmic scenario has
the highest number of unused links (96) with some links carrying 27 or
even 29 flows. On the contrary, for the cubic cost function most links carry
just a few flows with almost no link sitting unused. The linear case, as
before, fits in between those scenarios.
\begin{table*}
\centering
\begin{tabular}{|r|ccc|c|}
\multicolumn{1}{c}{}&\multicolumn{3}{c}{\textsc{Ant Colonization Algorithm}}&\multicolumn{1}{c}{\textsc{SPF}}\\\cline{2-5}
\multicolumn{1}{r|}{Cost Function}&\textbf{Log}&\textbf{Linear}&\textbf{Cubic}&\textbf{Any}\\\hline
\textbf{Path Length}&$9.9\pm0.6$&$9$&$9.9\pm0.1$&9\\
\textbf{Energy Savings}&$13.3\pm1.3\,$\%&$0\,$\%&$69.9\pm0.1\,$\%&$0\,$\%\\\hline
\end{tabular}
\caption{Energy savings and path lengths obtained for different cost
functions in a regular switching network with $n=8$. $95\,$\% confidence interval
omitted for clarity when less than $0.1\,$\%.}
\label{tab:lattice-8}
\end{table*}
Finally, the results obtained after the algorithm is run are summarized in
Table~\ref{tab:lattice-8}. It shows both the energy savings when compared to a
power unaware shortest-path-first (SPF) routing algorithm and the average
route lengths. As expected, for the linear cost function, the results are
identical to those of the SPF algorithm, and thus our algorithm produces no
energy savings, but keeps the optimum average path length of just nine hops.
However, for non-linear cost functions it pays a small penalty in path
lengths. This length increment is necessary to obtain more power efficient
routes. In fact, the energy savings for the cubic cost function ($69.9\pm0.1\,$\%) are
quite impressive in this topology.
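Both metrics in the table are straightforward ratios against the SPF baseline; a sketch of how they can be computed (illustrative helpers, not the paper's code):

```python
# Sketch of the two comparison metrics used throughout the evaluation:
# energy savings and path-length increment relative to power-unaware SPF.
def relative_savings(power_alg, power_spf):
    return 100.0 * (power_spf - power_alg) / power_spf

def length_increment(len_alg, len_spf):
    return 100.0 * (len_alg - len_spf) / len_spf
```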
\subsection{Performance Results}
\label{sec:performance-results}
We have also carried out experiments in more realistic network topologies:
the first set is inspired by the topology of the old NSFNet network and a
second one by the \emph{nobel-eu} topology from the Survivable Network Design
Library (SNDlib)~\cite{SNDlib10}.
Figure~\ref{fig:nsfnet} shows the NSFNet network. We have conducted several
simulations with varying traffic matrices: a \emph{full-mesh} matrix with
traffic flowing from each source to every other destination; an
\emph{intra-coast} matrix, with traffic just between some nodes on the same
``coast''; and finally a \emph{coast-to-coast} matrix, with traffic flowing
from nodes on each coast to the other and vice versa.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{nsfnet}
\caption{Network topology based on the original NSFNet. Node names
correspond to their geographic locations.}
\label{fig:nsfnet}
\end{figure}
Although the traffic matrices cannot be considered realistic by any means,
they shed some light on the behavior and performance of the algorithm in a
wide range of representative scenarios.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{iterations-histogram-90}
\caption{Number of iterations to reach 90\% of the greatest power savings for
different traffic matrices and different link cost functions. Error bars
show 95$\,$\% confidence intervals.}
\label{fig:hist-it-nsfnet-110}
\end{figure}
The first performance characteristic we measured is the time needed by the
algorithm to reach $90\,$\% and $99\,$\% of the long term energy savings
it is able to achieve. We use the number of iterations, that is, the number of
forward agents sent by a source, as a proxy for this time, as it eventually
depends on the time separation between two consecutive agents. The results are
plotted in Figures~\ref{fig:hist-it-nsfnet-110}
and~\ref{fig:hist-it-nsfnet-101}.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{iterations-histogram-99}
\caption{Number of iterations to reach 99\% of the greatest power savings for
different traffic matrices and different link cost functions. Error bars
show 95$\,$\% confidence intervals.}
\label{fig:hist-it-nsfnet-101}
\end{figure}
The first conclusion is that the algorithm is usually able to reach
the target of $90\,$\% quite fast. There is also a relationship
between the number of flows and the convergence speed. This can be
observed in the full mesh simulations, which usually take the largest
number of iterations. It can also be seen that the intra-coast
scenario is resolved very fast for any cost function, as it takes
almost the same number of iterations to reach $90\,$\% of the final
savings as to get to $99\,$\%. We believe that this is a consequence
of the optimal routes being quite short, and thus easy for the agents
to come across. On the other hand, both the coast-to-coast and the
full-mesh matrices need many more iterations to reach the $99\,$\%
target. For the linear cost function this happens because the optimal
routes are longer, and thus the number of alternative routes with
similar costs is higher, lowering the likelihood of a forward agent
following them. The change for the logarithmic cost function is even
sharper. The reason is not only that the routes are longer, as in the
linear case; there is the additional complexity that the algorithm
tries to pack several flows into the same links for maximum energy
savings. As the agents make their routing decisions autonomously, it
takes some extra time for routes to converge to the same set of
links. This also helps to explain why for the cubic cost
function the complexity increase is less noticeable. For super-linear
cost functions the greatest savings come from using disjoint routes,
so there is no need for several flows to converge on the same set of
links. So, it is easier for agents to choose links with low
occupation.
We have also measured two additional performance characteristics: actual power
savings and average path length increment. The power savings are compared to
the power consumed by a network using SPF as the power-agnostic routing
algorithm.
\begin{table*}
\centering
\begin{tabular}{c|l|rrr|}
\multicolumn{2}{l}{}&
\multicolumn{3}{c}{\textsc{Cost Function}}\\\cline{3-5}
\multicolumn{2}{l|}{}&
\multicolumn{1}{c}{\textbf{Log}}&
\multicolumn{1}{c}{\textbf{Linear}}&
\multicolumn{1}{c|}{\textbf{Cubic}}\\\cline{2-5}
\textbf{Length} & \textsf{Coast to coast} & $20.1\pm1.1\,$\% & $0\,$\% &
$16.1\pm1.2$\,\%\\
\textbf{Increment}& \textsf{Intra coast} & $23.3\pm1.9\,$\% & $0\,$\% & $0\,$\%\\
& \textsf{Full mesh} & $11.0\pm0.6\,$\% & $0\,$\% & $0.8\pm0.1\,$\%\\\cline{2-5}
\textbf{Relative} & \textsf{Coast to coast} & $29.5\pm1.2\,$\% & $0\,$\% &
$17.6\pm1.3\,$\%\\
\textbf{energy} & \textsf{Intra coast} & $6.5\pm1.4\,$\% & $0\,$\% &
$0\,$\% \\
\textbf{savings} & \textsf{Full mesh} & $29.9\pm0.5\,$\% & $0\,$\% &
$12.8\pm0.06\,$\%\\\cline{2-5}
\end{tabular}
\caption{Performance improvement of the proposed algorithm for different traffic matrices in
the network depicted in Figure~\ref{fig:nsfnet} when compared against
Shortest Path First. $95\,$\% confidence
intervals omitted for clarity when less than $0.1\,$\%.}
\label{tab:nfsnet-summary}
\end{table*}
The results for the three traffic matrices and the three power profiles are
summarized in Table~\ref{tab:nfsnet-summary}. For the linear cost function,
the algorithm is unable to save more energy with respect to SPF, but this is
expected, as SPF already discovers the optimal routes for these networks. In
any case, the results of our algorithm are also optimal, with neither
additional energy demands nor increments in the path lengths.
In the logarithmic link cost networks, the algorithm obtains more than
$20\,$\% energy savings for the more complex traffic matrices. The route
lengths also grow, although the increments are below $25\,$\%.
Finally, the cubic cost function does not attain any savings for the
intra-coast traffic matrix. This is because the shortest path routes
are already optimal. In fact, the path lengths are identical for both
the proposed algorithm and the SPF routing algorithm. For the rest of
the traffic matrices it gets savings in the $10$--$20\,$\% range by
distributing flows in different links, at the cost of an obvious
increment in the average path length.
In short, the proposed algorithm is able to trade some increment in route
lengths to save energy in the network. When the routes computed by a shortest
path first algorithm are already optimal, the routes computed by our proposal
are never worse: both average path length and energy consumption remain
identical.
As already stated at the beginning of this section, we have also used a real
topology both to assess the behavior of our algorithm and to compare it
against the optimization shown
in~\cite{garroppo11:_energ_aware_routin_based_energ_charac_devic,garroppo13:_does_traff_consol_alway_lead}
and to those power profile unaware algorithms that minimize the number of
active
links~\cite{cianfrani12:_ospf_integ_routin_strat_qos,chiaraviglio12:_minim_isp_networ_energ_cost,kim12:_ant_inter,Yang20141}.
We have employed the topology and average traffic matrix of the
\emph{nobel-eu} core network from the SNDlib archive used in those works. The
\emph{nobel-eu} network is a European network consisting on 28 nodes connected
by 41 links and the traffic matrix consists on a total of 378 flows. For the
sake of the comparison, we have simplified the network model proposed
in~\cite{garroppo13:_does_traff_consol_alway_lead} as we restrict the number
of links between a given pair of nodes to one, albeit with unlimited capacity.
Figure~\ref{fig:nobeleu-cubic-evolution} shows the normalized power consumption
with a cubic cost function.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{stair-cubic}
\caption{Comparison between our algorithm, SPF and the optimal result for
the nobel-eu network with a cubic cost function.}
\label{fig:nobeleu-cubic-evolution}
\end{figure}
Traffic was added to the network in several steps to show the dynamics of our
algorithm. It can be seen that the consumption rises briefly above that of
SPF when new traffic enters the network, but rapidly stabilizes below it after
a few iterations. For completeness we have calculated with the help of the IBM
CPLEX solver the optimum power consumption obtained considering our simplified
model of~\cite{garroppo13:_does_traff_consol_alway_lead}. As expected, the
centralized calculation is able to obtain the best results, although it cannot
adapt automatically to changing network conditions. We also performed the
experiment with a logarithmic cost function. In this case, our algorithm just
managed to save $3\pm1\,\%$ of the power needed when using SPF routes, while
the CPLEX solver managed to save 16\% of the power, considering again a static
scenario.
To simulate the results of the power profile unaware algorithms we eliminated
all but the most used outgoing link for each node when using SPF
routing.\footnote{The optimum result in these algorithms is obtained with
unlimited capacity links, as a single outgoing link is enough to transmit
all the traffic from a given node, and the rest of the links can be powered
down.} Then, we calculated the global power usage in the modified graph. We
found that energy usage increases $9.5\,$\% for links with logarithmic cost
function when compared with the unmodified network using SPF as a routing
algorithm. This is due to the increased average length of the routes, resulting in
traffic consuming energy in more links. Results, however, are much worse for a
cubic cost function. In this case, traffic should be spread over several links
to minimize consumption; however, with a single path between each pair of
nodes this is not feasible. Energy consumption is $8.8$ times higher than in
the unmodified network. Although the results may seem counter-intuitive, they
are to be expected, as all these algorithms are designed for networks with
fixed-cost links.
\section{Conclusions}
\label{sec:conclusions}
In this paper we have presented a modified version of the
AntNet~\cite{di97:_antnet} algorithm to calculate, in a decentralized way,
optimal routes that reduce the power consumption of network links. The
presented solution does not impose any restriction on the power profile
functions of the networking equipment.
The proposal was tested in both synthetic and real scenarios with different
power profiles. The obtained results show power savings in the 10--20\% range
for real networks and up to 70\% in favorable, although unlikely, scenarios.
Moreover, the convergence times are small, as 90\% of the savings is
usually obtained in less than 1000 iterations. Thus, the algorithm can run
continuously in the background, adapting the routing tables to the
medium-term averages of the traffic load of the incoming flows.
Finally, the results also show that it is necessary to take into account the
power profile of the links, as not doing so and blindly powering off less-used
links can even increase power usage.
\section*{Acknowledgments}
\label{sec:acknowledgments}
Work supported by the European Regional Development Fund (ERDF) and the
Galician Regional Government under agreement for funding the Atlantic Research
Center for Information and Communication
Technologies (\href{http://atlanttic.uvigo.es/en/}{AtlantTIC}).
\bibliographystyle{elsarticle-num}
\section{Introduction}
In the last few years, edge computing has received a lot of attention as an alternative to cloud computing, due to the multiple advantages it offers, such as low bandwidth usage, responsiveness, scalability~\cite{mach2017mobile} and privacy preservation~\cite{satyanarayanan2017emergence}.
Edge computing has now become possible thanks to the evolution of devices that offer more computational power than ever.
Combined with application container platforms such as Docker~\cite{anderson2015} that mask heterogeneity problems, it becomes possible for connected devices to form a homogeneous distributed run-time environment.
Additionally, orchestration engines (e.g., Kubernetes\footnote{\url{https://kubernetes.io/}}) have been developed to manage and optimize the usage of network, memory, storage or processing power for edge devices and to improve the global efficiency, scalability and energy management of edge platforms.
However, such solutions are centralized, meaning that they represent a single point of failure (SPOF), which entails several drawbacks, such as a lack of reliability and security.
The problem is so critical that high-availability extensions have been explored, for instance for Kubernetes\footnote{\url{https://kubernetes.io/docs/setup/independent/setup-ha-etcd-with-kubeadm}}.
In this paper, we propose to tackle this problem with a decentralized algorithm that monitors network resources to drive application execution.
Our solution relies on an original combination of a blockchain-like shared data structure, a consensus algorithm and a containerized monitoring application to enable run-time migration of applications, when relevant, according to the network state.
It provides several advantages, such as verifiable optimal usage of all devices on the network, better resilience to disconnection, independence from cloud connection, improved privacy and security.
The remainder of this paper is organized in 7 sections.
Section~\ref{sec:motivation} introduces our motivating scenario related to a cultural heritage building and shows the need for a decentralized approach.
Section~\ref{sec:related} overviews relevant related work and highlights the originality of our approach.
Section~\ref{sec:architecture} details our proposed architecture and shows how it drives run-time migration of applications on the edge.
Section~\ref{sec:node_application} presents our network monitoring application and shows how the monitoring takes place.
In Section~\ref{sec:implementation}, we propose a technical implementation, and we validate and evaluate our solution with a proof-of-concept prototype related to our cultural heritage scenario.
Section~\ref{sec:conclusion} discusses the results obtained and gives insights for possible future work.
\section{Motivating Scenario}
\label{sec:motivation}
In this section, we illustrate the relevance of our approach with a scenario related to a Slovenian cultural heritage building located in Bled, Slovenia.
This building has been equipped with multiple sensors to monitor its evolution.
The collected data includes temperature, CO2, relative humidity, Volatile Organic Compounds (VOC), ambient light and atmospheric pressure.
In this scenario, the following constraints motivate the need for a fully decentralized edge computing approach:
\begin{itemize}
\item Privacy: collected data about the state of the technological solution being deployed is classified as sensitive information.
Although data about the building could be sent to the cloud, data about the state of resources needs to remain local and be accessible only for administration purposes and for the deployed solution to self-manage.
\item Reliability: centralized orchestration is not appropriate, as data collection needs to be resilient to the failure of any device. The network of devices needs to adjust to device disconnection at any time and keep operating in an optimal way.
\item Cost: reducing the overall cost by avoiding investment in a cloud infrastructure that involves monthly payments and a permanent connection to maintain.
\item Scalability: as the number of devices will evolve over time, it is necessary for the solution to be able to adjust to changes and homogeneously spread the computation over the network.
\item Performance: reactivity to external events is improved if processing is performed on-site.
\item Cost effectiveness: using existing devices that control sensors to perform the necessary processing reduces the resource requirements of cloud-based solutions, which reduces cost.
\end{itemize}
In this context, it is relevant to equip devices with the capacity to run applications locally and to self-manage the global network load and distribute it over connected devices, according to the state of the network.
In the next section, we present related work and show the need for a decentralized self-managed platform on the edge.
We also overview existing solutions to abstract from platform heterogeneity and justify the technological choice of a container platform to support our solution.
\section{Background Knowledge and Related Work}
\label{sec:related}
\subsection{Orchestration solutions for edge computing}
Strictly observing the definition of orchestration, it always represents control from one party’s perspective.
This differs from choreography, which is more collaborative and allows each involved party to describe its part in the interaction~\cite{peltz2003web}. However, to the authors' knowledge, there are no choreography solutions that tackle the problems defined in the previous section. Existing orchestration solutions typically rely on a master/slave model, where a node is put in charge of the network and allocates applications to nodes according to an optimization algorithm.
Kubernetes~\cite{hightower2017kubernetes} is the most widely used orchestration tool: it is the go-to tool for orchestration in the Google cloud and is also used in the Microsoft Azure platform and similar products.
It is also the most feature-filled orchestration tool available~\cite{medel2016modelling}.
It has strong community support across many different cloud platforms (in addition to Google cloud, OpenStack, AWS, Azure).
AWS Elastic Container Service (AWS ECS)~\cite{acuna2016amazon}, Amazon’s native container orchestration tool, is the best option for orchestrating AWS services, as it is fully integrated into the Amazon ecosystem and thus works easily with other AWS tools.
Its biggest limitation is that it is restricted to Amazon services.
Docker Swarm~\footnote{\url{https://github.com/docker/swarm}} ships directly with Docker (integrates with Docker-compose) and is supposed to have the simplest configuration.
However, it lacks some advanced monitoring options compared to other products such as Kubernetes.
Apache Mesos-based DC/OS~\footnote{\url{https://dcos.io/}} is a “distributed operating system” running on private and public cloud infrastructure that abstracts the resources of a cluster of machines and provides common services.
All the presented architectures share a common flaw: a single point of failure and a lack of integration with edge computing.
\subsection{Container platforms}
Containers, as used in this paper, run as a group of namespaced processes within an operating system, avoiding the overhead of starting and maintaining virtual machines while providing most of their functionality. Application containers, such as Docker, encapsulate the files, dependencies and libraries of an application running on an OS, as opposed to system containers, such as LXC, which encapsulate a whole operating system and are in this view more similar to virtual machines. The key advantage of containers over virtual machines is their light weight with respect to resources.
Docker~\cite{anderson2015} is the de-facto standard in the open source application container platforms and made containers mainstream.
rkt~\footnote{\url{https://coreos.com/rkt/}}, the container runtime from CoreOS, offers functionality similar to Docker's.
Like Docker, rkt is designed for application containers. Its market share compared to Docker's is still much lower, but it is rising, and with Red Hat's recently announced acquisition of CoreOS, it presents a viable alternative.
LXC~\footnote{\url{https://linuxcontainers.org/}}, short for Linux Containers, is the container runtime and toolset that helped make Docker possible. LXC predates Docker by several years, and Docker was originally based on LXC (it’s not anymore), but LXC gained little traction.
LXD~\footnote{\url{https://linuxcontainers.org/lxd/introduction/}} is a container platform based on LXC. Essentially, LXD provides an API for controlling the LXC library, as well as easy integration into OpenStack. It is backed by Canonical, the company that develops Ubuntu Linux and the primary backer of LXD development at the time of writing.
Unlike Docker and rkt, LXC and LXD are system containers and as such out of the scope of this paper. The platform selected for our research was Docker, as it is the most widely used platform and one of the few that can migrate applications at runtime and enables easy communication. Migration is done by pausing a container, dumping the context of the paused container, and transferring that context to a different host, which can then resume execution from it.
\subsection{Decentralized Self-managing IoT Architectures}
A number of works have proposed solutions to enable fully decentralized, self-managing architectures for the IoT.
For example, in~\cite{maior2014self}, the work focuses on a decentralized solution for energy management in IoT architectures connected to smart power grids.
In~\cite{higgins2011distributed}, the authors propose a distributed IoT approach for electrical power demand management based on “distributed intelligence” rather than “traditional centralized control,” improving the system on many levels. In~\cite{suzdalenko2013instantaneous}, the authors further develop the former approach by creating a decentralized distributed IoT model in which consumers can freely join and leave the system automatically at any time. In~\cite{niyato2011machine}, a system that uses machine-to-machine (M2M) communication is presented to reduce the costs of a home energy management system. dSUMO~\cite{bragard2017self} is a distributed and decentralized microscopic simulation that eliminates the central entity and thus overcomes the synchronization bottleneck. In~\cite{al2018energy}, the authors demonstrate the effectiveness of a publish/subscribe messaging model as a connection means for indoor localization with Wireless Sensor Networks (WSNs) through a middleware; the results showed that RSS-based localization achieves acceptable accuracy for multiple types of applications.
However, all the aforementioned contributions are different from the solution we propose in this paper, at two levels.
First, they mostly focus on a single specific aspect and find an optimal solution for it, without considering the fact that an IoT architecture involves multiple criteria that require optimization.
In our work, we already consider multiple criteria to optimize application migration, while envisioning that this number of criteria can increase in the future.
Second, as far as we know, there is no approach that combines blockchain-like data structure and consensus algorithms in a single framework with the objective to drive application migration at run-time on the edge, which is the main contribution of this paper.
\section{A Decentralized Self-managing Architecture}
\label{sec:architecture}
In the following, we describe the general architecture that supports our edge computing platform. Devices on the edge are nodes running the node software and a containerization platform. A node joins the network by following a network protocol for exchanging known nodes and participates by executing the consensus algorithm. Nodes keep discovering the network by asking connected nodes for peers.
For the sake of simplicity, in this paper we consider that the number of nodes remains reasonably limited, so that large scale discovery issues remain out of the scope of this paper.
Our devices are equipped to allow a specific containerized application (called the node app) to introspect the state of the node and handle the diffusion of this information over the network.
The node app is also responsible for keeping the information about the other nodes up to date, for participating in the consensus algorithm, and for listening to messages coming from the exposed node API.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.7\linewidth]{Node}
\caption[Node]{Architecture of an edge device software platform}
\label{fig:node}
\end{figure}
Figure~\ref{fig:node} shows the key components of Nodes in the system. The node software is compiled into a container, in our case Docker. The container mounts a direct socket to the containerization service for querying the state of the system and managing local containers.
\section{Node Application}
\label{sec:node_application}
Every 500 milliseconds, each device collects information about the state of its neighbours.
Typically, a state is a vector of scores that describes the device state and the applications being executed by the node.
In this work we define a state to be a matrix of vectors $$S = (APP, CPU, RAM, DISK, NETWORK, TIMESTAMP),$$ where each row represents an application being executed by the node and its corresponding resource consumption.
Resources are reported as fractions of the total available. In order to have comparable values between nodes, reporting CPU usage and network utilization requires some engineering, which is outside the scope of this paper.
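For concreteness, the per-application state vector and a node-level aggregate score can be sketched as follows. The aggregation of resource fractions into a single score is left open by the paper; averaging them, as done here, is purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class AppState:
    """Resource consumption of one application, as fractions of the node total."""
    app: str
    cpu: float
    ram: float
    disk: float
    network: float
    timestamp: float

def node_score(states):
    """Aggregate a node's per-application vectors into a single load score.

    Averaging the four resource fractions per application and summing over
    applications is an illustrative choice, not the paper's prescription.
    """
    if not states:
        return 0.0
    return sum((s.cpu + s.ram + s.disk + s.network) / 4.0 for s in states)
```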
Monitoring resources within the P2P network is done by having nodes maintain a list of the scores of other nodes. All nodes periodically broadcast digitally signed messages containing their score, following simple P2P broadcasting rules that guarantee finality and efficiency in message propagation:
\begin{itemize}
\item If the elapsed time is greater than $\Delta ST$, broadcast a signed message containing one's own score.
\item When receiving a new score message, check whether the message was received before (by comparing digital signatures).
\item If the message was not seen before, broadcast it to all connected nodes with the exception of the originating node.
\end{itemize}
Here $\Delta ST$ is configurable and should depend on the time interval of the consensus algorithm. The score pool hence contains the scores of all nodes participating in the network. Each score has a corresponding time-stamp, which is later used by elected nodes to create a migration strategy.
For improved efficiency, every score message broadcast is prefaced with a “Do you need this” (DYNT) message containing only the digital signature of the message. The full message is then sent only to nodes that reply to the DYNT message, minimizing bandwidth use.
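The broadcasting rules above can be sketched as follows. Two simplifying assumptions: digital signatures are stood in for by hashes of the message, and the DYNT handshake is omitted.

```python
import hashlib

def signature(score_msg):
    """Stand-in for a digital signature: a hash of the serialized message."""
    return hashlib.sha256(repr(score_msg).encode()).hexdigest()

class GossipNode:
    def __init__(self, name):
        self.name = name
        self.peers = []
        self.seen = set()      # signatures of messages already received
        self.score_pool = {}   # node name -> (score, timestamp)

    def broadcast_own_score(self, score, timestamp):
        msg = (self.name, score, timestamp)
        self.receive(msg, origin=None)

    def receive(self, msg, origin):
        sig = signature(msg)
        if sig in self.seen:            # already seen: do not rebroadcast
            return
        self.seen.add(sig)
        sender, score, ts = msg
        self.score_pool[sender] = (score, ts)
        for peer in self.peers:         # forward to everyone but the originator
            if peer is not origin:
                peer.receive(msg, origin=self)
```

With nodes wired into any connected topology, a single call to `broadcast_own_score` floods the score into every node's pool, and the `seen` set guarantees each message is forwarded at most once per node.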
\subsection{Consensus algorithm}
\label{subsec:consensus}
The network requires a consensus algorithm to avoid race conditions when migrating applications.
The choice of a consensus algorithm depends on the requirements of the implementation and domain of application.
In general, any consensus based on leader election can be plugged in.
Examples of such consensus algorithms are Paxos~\cite{lamport2001paxos}, Raft~\cite{ongaro2014search}, PoET~\cite{olson2018sawtooth}, etc.
The elected leader is responsible for creating a migration plan and including the resource consumption estimates in a block. The block gets digitally signed so other nodes can verify it originates from the elected leader.
Nodes receiving a new block must verify the migration plan by computing it locally and comparing the results. If the migration plans are equal, they act on the block; otherwise they discard it and wait for a new one. With these simple protocol rules in place, the network is Byzantine fault tolerant~\cite{castro1999practical}.
A migration strategy is analogous to a block in blockchain-based systems. Blocks contain all the data shared among nodes in the network and include a digital signature of the previous block, thus creating a blockchain. In order to create the digital signature of block $n+1$, a node needs to have the digital signature of block $n$. A well-formed block can be verified by other nodes that also have block $n$. In the case of a malformed block, verification will fail and nodes will reject the block, thus forcing the nodes to agree on the shared data.
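The chaining and verification can be sketched as follows, with a plain content hash standing in for the digital signature (an illustrative simplification of the scheme described above):

```python
import hashlib
import json

def block_hash(block):
    """Hash of a block's canonical serialization (stands in for its signature)."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(prev_block, migration_plan, scores):
    """Create the next block; it embeds the hash of its predecessor."""
    return {
        "prev_hash": block_hash(prev_block) if prev_block else "genesis",
        "plan": migration_plan,
        "scores": scores,
    }

def verify_chain(chain):
    """A block is well formed only if it references the hash of its predecessor."""
    for prev, cur in zip(chain, chain[1:]):
        if cur["prev_hash"] != block_hash(prev):
            return False
    return True
```

Tampering with any block invalidates the reference stored in its successor, so verification fails for the whole chain from that point on.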
The block serves as an instruction set mapping applications to nodes.
Consider a case with 4 nodes in set $N$ denoted by $A, B, C$, and $D$ respectively.
All nodes share their score and keep a local copy of reported scores of other nodes.
Each node also stores a vector of applications $v \in V$ that need to be executed.
Table~\ref{table:block} shows an example of a block $k$ which assigns every $v \in V$ to a node $n \in N$. To create block $k+1$, a node elected as leader computes an assignment such that the use of resources is optimal.
The input to the algorithm is limited to block data to ensure determinism that can enforce consensus.
The algorithm depends on the application domain and exploring available possibilities will be subject to future work.
In this paper, we use the simple algorithm described below, which is deterministic and can only take the block data as input for computation.
\begin{algorithm}[ht!]
\SetAlgoLined
\KwData{BlockData}
\KwResult{Migration plan}
$Max \gets FindMaxLoadedNode(BlockData)$\;
$Min \gets FindMinLoadedNode(BlockData)$\;
\eIf{!AppQueue.isEmpty()}{
\While{!AppQueue.isEmpty()}{
$Min \gets FindMinLoadedNode(BlockData)$\;
$Min.addApp(AppQueue.dequeue())$\;
}
}{
$AppToMigrate \gets Max.MaxLoadApp$\;
$CurrentDeltaScore \gets (Max.score -Min.score)$\;
$FutureDeltaScore \gets (Max.score - AppToMigrate.score ) - (Min.score + AppToMigrate.score)$\;
\If{$Math.abs(CurrentDeltaScore) > Math.abs(FutureDeltaScore)$}
{
Migrate $AppToMigrate$ to $Min$\;
}
}
\caption{Deterministic migration plan generation algorithm}
\end{algorithm}
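A minimal executable sketch of the plan generation algorithm, under the assumption (not fixed by the paper) that a node's score is the sum of its applications' resource fractions:

```python
def migration_plan(loads, app_queue):
    """Deterministic plan generation from block data.

    `loads` maps node -> {app: resource fraction}; `app_queue` is a list of
    (app, fraction) pairs awaiting assignment. Returns (app, target) pairs
    and mutates `loads` accordingly.
    """
    def node_score(n):
        return sum(loads[n].values())

    plan = []
    if app_queue:
        # Assign every queued app to the currently least loaded node.
        for app, score in app_queue:
            target = min(loads, key=node_score)
            loads[target][app] = score
            plan.append((app, target))
    else:
        # Consider moving the heaviest app off the most loaded node.
        src = max(loads, key=node_score)
        dst = min(loads, key=node_score)
        if not loads[src]:
            return plan
        app = max(loads[src], key=loads[src].get)
        app_score = loads[src][app]
        current_delta = node_score(src) - node_score(dst)
        future_delta = (node_score(src) - app_score) - (node_score(dst) + app_score)
        if abs(current_delta) > abs(future_delta):  # migrate only if balance improves
            del loads[src][app]
            loads[dst][app] = app_score
            plan.append((app, dst))
    return plan
```

Because the input is restricted to block data and the tie-breaking of `min`/`max` is deterministic, every node recomputing the plan from the same block obtains the same result, which is what makes the verification step of the consensus possible.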
\begin{center}
\begin{table}[ht]
\caption{Block data}
\centering
\begin{tabular}{c c c c c c}
\hline
V&Node&RAM&DISK&CPU&Average Latency \\ [0.5ex]
\hline
$v_0$ & A & 50\% & 23\% & 90\% & 23ms \\
$v_1$ & B & 47\% & 87\% & 23\% & 33ms\\
$v_2$ & C & 12\% & 25\% & 15\% & 51ms \\
$v_3$ & A & 35\% & 14\% & 56\% & 101ms \\
$v_4$ & D & 25\% & 74\% & 16\% & 9ms \\[1ex]
\hline
\end{tabular}
\label{table:block}
\end{table}
\end{center}
Once a block is created, the currently reported scores are included in it; they will be used to compute block $k+2$. Additionally, blocks are equipped with meta-data, such as the block hash and the previous block hash, to facilitate their utilization.
\section{Implementation and Evaluation}
\label{sec:implementation}
\subsection{Technical Implementation}
As described in Section~\ref{sec:motivation}, we have implemented and evaluated our solution with a set of sensors deployed in the cultural heritage building Mrakova Domačija in Bled, Slovenia.
Each sensor is connected to a Raspberry Pi device that hosts a Linux Alpine OS and a Docker container.
We developed our node application inside a container; it relies on Docker's introspection capacity (the \texttt{docker stats} command called from our Java program) to collect information about each device.
The application also hosts an HTTP server\footnote{Please note that CoAP could be used for energy saving purposes.} that allows communicating with other nodes through a RESTful API operating as follows:
\begin{itemize}
\item HTTP GET gives a representation of the target node, which includes information about the state of the device as well as all the necessary information about the node (e.g. last connection time, average connection time\dots).
\item HTTP PUT sends information to the target node about the state of the source node.
Such requests are useful for nodes to inform their neighbours about their current state.
HTTP PUT allows system designers to specify URLs where shared information is stored (for example~\url{http://192.168.1.15/shared}).
\item HTTP POST holds the same role as HTTP PUT but it applies to new devices, so that the data is added to the shared pool and does not replace existing data.
\item HTTP DELETE is utilized when a node leaves the network in a predictable way, so that its state information is removed from the shared pool without going through a time-out.
\end{itemize}
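The semantics of the four verbs can be modelled in memory as follows. The class and method names are illustrative; the actual node exposes these operations over HTTP.

```python
class SharedPool:
    """In-memory model of the node API semantics described above."""

    def __init__(self, own_state):
        self.own_state = own_state   # this node's state (returned by GET)
        self.pool = {}               # node id -> last reported state

    def get(self):
        """HTTP GET: representation of this node and its known peers."""
        return {"self": self.own_state, "peers": dict(self.pool)}

    def put(self, node_id, state):
        """HTTP PUT: replace the stored state of a known node."""
        self.pool[node_id] = state

    def post(self, node_id, state):
        """HTTP POST: add a new node; do not overwrite an existing entry."""
        if node_id not in self.pool:
            self.pool[node_id] = state

    def delete(self, node_id):
        """HTTP DELETE: a node leaves gracefully, skipping the time-out."""
        self.pool.pop(node_id, None)
```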
\subsection{Validation and Evaluation}
To validate the feasibility of our approach and test its scalability we ran performance simulation test cases. In each test case, a fixed number of nodes formed a P2P network.
Nodes were assigned applications to execute.
Each application had a random execution time and a preset resource consumption expressed as a fraction between 5\% and 40\%. For the sake of simplicity, only one resource (CPU) was used.
The simulation ran for 100 blocks with a block time of 1 second.
Applications were queued until the average load of the entire system rose above 90\%. The migration strategy was implemented based on the algorithm described in Section~\ref{subsec:consensus}. Applications arrived in the queue with a certain probability, which was gradually increased with the number of nodes in the system.
From the resource loads reported by the nodes (in \%), we compute the standard deviation as a measure of how balanced resource consumption is.
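The balance metric is simply the population standard deviation of the reported per-node loads, with zero indicating a perfectly balanced system:

```python
import statistics

def balance_metric(node_loads):
    """Population standard deviation of per-node loads (in %)."""
    return statistics.pstdev(node_loads)
```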
\begin{figure}[ht!]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{5_s}
\caption[Node]{5 nodes}
\label{fig:5nodes}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{25_s}
\caption[Node]{25 nodes}
\label{fig:25nodes}
\end{subfigure}
\medskip
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{50_s}
\caption[Node]{50 nodes}
\label{fig:50nodes}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{100_s}
\caption[Node]{100 nodes}
\label{fig:100nodes}
\end{subfigure}
\caption{Simulation results}
\label{fig:test}
\end{figure}
In Fig.~\ref{fig:test}, we observe that the standard deviation remains low even when the number of applications in the system grows.
The higher swings in standard deviation observed in the lower-load cases are expected, due to the low number of applications.
The crossover happens when the number of applications exceeds the number of nodes.
Below the threshold, there are bound to be nodes that do not run any applications.
We can observe from Fig.~\ref{fig:5nodes} that, since the number of nodes is low, resource balancing between nodes becomes effective earlier. This explains why the swings are less marked than in the other figures, which correspond to test cases where the simulation takes longer to reach the crossover point at which a higher number of applications is distributed over a lower number of nodes.
From the simulation results we conclude that the architecture can scale with the growing number of nodes in the network. Additionally, the naive algorithm for creating a migration strategy performed well in distributing load across the system.
\section{Discussion and Conclusion}
\label{sec:conclusion}
In this paper, we propose a decentralized solution to the resource usage optimization problem, a typical issue in edge computing.
Our solution avoids the single point of failure that centralized architectures suffer from and improves network resilience as it does not depend on a master node.
To design our solution, we have combined a blockchain-like shared data structure and a consensus algorithm with a monitoring application that runs on top of the Docker platform.
Such combination allows edge devices to check at run-time if there is a need for migrating an application, and to reach consensus on a decision to do so.
With our contribution, edge devices become a completely decentralized and distributed run-time platform.
We have implemented and evaluated our solution with a set of sensors deployed in a cultural heritage building in Bled, Slovenia.
Results show that our approach is able to adjust and normalize the application load over a set of nodes.
Since the algorithm we use is deterministic and all the data is stored in a distributed structure, our solution also makes it possible to verify all the decisions taken to optimize the usage of edge devices.
The consensus algorithm that we use also allows the global network behaviour to adjust to entering or leaving nodes.
Several limitations have been identified that give insights for future work.
First, it is important to observe how adding and removing devices affects network behaviour and to explore how scalable our approach is over a large number of devices.
Second, it seems appropriate to find out what specific aspects of use cases can help determine which consensus algorithm is most suitable for deploying our solution, in order to best match the use case requirements.
Third, future work includes semantically describing applications and the services that edge devices offer, in order to support application migration and to combine, in the same architecture, the need for efficiently managing network resources with the needs of applications in terms of functionality and quality of service.
\section{Acknowledgment}
\label{acknowledgment}
The authors gratefully acknowledge the European Commission for funding the InnoRenew CoE project (Grant Agreement \#739574) under the Horizon 2020 Widespread-Teaming program, and the Republic of Slovenia (investment funding of the Republic of Slovenia and the European Union from the European Regional Development Fund).
\bibliographystyle{splncs04}
\section{Introduction}
\noindent In this review we study conjectured laws of the total mass of the Bacry-Muzy \cite{MRW}, \cite{BM1}, \cite{BM} Gaussian
Multiplicative Chaos (GMC) measures on the unit interval and circle with non-random logarithmic potentials. These measures are the most
tractable examples of GMC measures in dimension one and serve as a paradigm of all GMC measures due
to their very natural logarithmic covariance structures and connections with $1/f$ noises.
While the positive integer moments of the total mass of all multiplicative chaos measures can be written in the form of multiple integrals, cf. \cite{MeIntRen},
\cite{MeLMP},
the tractability of the Bacry-Muzy measures is inextricably tied to the fact that they are the only GMC measures that have explicitly known moments
given by the Selberg integral on the interval and the Morris integral on the circle. A key challenge of the problem of
computing the law of the total mass is that its moments diverge at any level of intermittency (inverse temperature) thereby rendering the associated
Stieltjes moment problem indeterminate. Moreover, while the negative integer moments are finite, their Stieltjes moment problem
can still be indeterminate as is the case of the Bacry-Muzy measure on the interval, for example. From this perspective, the simplest
measure is the Bacry-Muzy measure on the circle, whose negative integer moments do capture the distribution uniquely. Nonetheless,
the computation of the negative moments from first principles is very difficult and has only been achieved in the simplest
case of the Bacry-Muzy measure on the circle with the zero logarithmic potential corresponding to the positive moments being given by the special case of the Morris integral known as the Dyson integral, cf. the recent announcement in \cite{Remy}. Overall, the study of the law of the total mass is a highly non-classical moment problem
that requires novel mathematical techniques. In the rest of this section we will briefly explain the interest in GMC measures and then review possible approaches to the problem of the total mass and summarize our key contributions.
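Since the Selberg integral underlies all explicit moment formulas in this review, we recall it here for the reader's convenience in its classical form, stated for parameters in the domain of convergence:
$$\int\limits_{[0,1]^n}\prod\limits_{i=1}^{n}t_i^{\alpha-1}(1-t_i)^{\beta-1}\prod\limits_{1\leq i<j\leq n}|t_i-t_j|^{2\gamma}\,dt_1\cdots dt_n = \prod\limits_{j=0}^{n-1}\frac{\Gamma(\alpha+j\gamma)\,\Gamma(\beta+j\gamma)\,\Gamma(1+(j+1)\gamma)}{\Gamma(\alpha+\beta+(n+j-1)\gamma)\,\Gamma(1+\gamma)}.$$
The positive integer moments of the total mass on the interval are of this form, with parameters determined by the intermittency and the logarithmic potential; for any positive intermittency only finitely many such moments are finite, which is the source of the indeterminacy of the moment problem described above.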
\subsection{GMC and Total Mass Problem}
The theory of Gaussian Multiplicative Chaos (GMC) measures was conceived by Mandelbrot \cite{secondface},
who introduced the key ingredients of what is now known as GMC under the name of the limit lognormal measure, cf. also his review \cite{Lan}.
The mathematical foundation of the subject was laid down by Kahane \cite{K2}, who created a comprehensive, mathematically rigorous theory of (not necessarily gaussian) multiplicative chaos measures. The theory was advanced further
around 2000 with the introduction of the conical set construction by Barral and Mandelbrot \cite{Pulses} and Schmitt and Marsan \cite{Schmitt} and took its modern form with the theory of infinitely divisible multiplicative chaos measures of Bacry and Muzy \cite{BM1}, \cite{BM}. The theory of Bacry and Muzy was limited to multiplicative chaos on a finite interval.
It has since been extended to multiple dimensions by Robert and Vargas \cite{RVrevis}, who also relaxed Kahane's $\sigma-$positivity
condition and proved universality of GMC, to other geometric shapes such as the circle by Fyodorov and Bouchaud \cite{FyoBou} and Astala \emph{et. al} \cite{Jones}, to complex GMC by Lacoin {\it et. al.} \cite{LRV},
as well as to critical GMC by Duplantier \emph{et. al} \cite{dupluntieratal} and Barral \emph{et. al.} \cite{barraletal}, and to super-critical GMC by Madaule \emph{et. al.} \cite{Mad}. Most recently, Berestycki \cite{B}, Junnila and Saksman \cite{JS}, and Shamov \cite{Shamov} found new re-formulations and further extended the existing theory.
The interest in GMC derives from its remarkable properties of multifractality and multiscaling, from inherent interest in Gaussian logarithmically
correlated fields, cf. \cite{RVLog}, \cite{YO}, upon which GMC measures are built, from the complexity of
mathematical problems that their stochastic dependence poses, and from the many applications in mathematical
and theoretical physics and pure mathematics, in which GMC naturally appears. Without aiming for comprehensiveness, we can mention applications to five areas: (1) conformal field theory and Liouville quantum gravity \cite{Jones}, \cite{BenSch}, \cite{DS}, \cite{RV3}, \cite{RV1}, (2) statistical mechanics of disordered energy landscapes \cite{Cao}, \cite{Cao2}, \cite{ClD},
\cite{Fyo10}, \cite{FyoBou}, \cite{YO}, \cite{FLDR}, \cite{FLDR2}, \cite{Me16},
(3) random matrix theory \cite{FKS}, \cite{FyodSimm}, \cite{Hughes}, \cite{joint}, \cite{Webb}, (4) statistical modeling of fully intermittent turbulence \cite{CD95}, \cite{CGH}, \cite{Chainais}, \cite{Frisch}, \cite{Novikov}, \cite{RVrevis},
(5) conjectured \cite{FHK}, \cite{YK}, \cite{Menon} and some rigorous \cite{SW} applications to the behavior of the Riemann zeta function on the critical line.
A fundamental open problem in the theory of GMC is to calculate the distribution of the total mass of the chaos measure and, more generally,
understand its stochastic dependence structure, \emph{i.e.} the joint distribution of the measure of several subsets of the set, on which it is defined.
The significance of this problem is due to the fact that in most of the aforementioned
areas the objects of interest can be reduced to questions about the total mass. We will illustrate it with two examples.
As it was first discovered in \cite{FyoBou} and \cite{FLDR},
conjectured laws of the total mass of the Bacry-Muzy GMC measures on the circle and interval yield, under the hypothesis of freezing, precise asymptotic distributions of extremes of the underlying gaussian fields, which are restrictions of the 2D Gaussian Free Field to these geometries.
It was subsequently proved rigorously in \cite{Madmax} that the distribution of the maximum of the logarithmically correlated gaussian field underlying the general GMC construction is determined by the law of the total mass of the corresponding critical GMC. As a second example,
fluctuations of mesoscopic counting statistics that converge to $\mathcal{H}^{1/2}$ gaussian fields can be rigorously quantified by means of the law of the total mass of the corresponding GMC measure, thereby connecting random matrix and GMC theories, cf. \cite{joint}. We refer the interested reader to \cite{RV2} for a general review of GMC.
\subsection{Three Known Approaches}
\noindent There are three known approaches to the problem of the total mass.
The first approach, pioneered by Fyodorov and Bouchaud \cite{FyoBou} and Fyodorov \emph{et. al.} \cite{FLDR} and
presented in the general case by Fyodorov and Le Doussal \cite{FLD}, is to heuristically extend the known positive integer moments to the complex plane,
\emph{i.e.} construct a function of a complex variable whose restriction to the finite interval of positive integers, where the moments are finite, coincides with the moments, and thereby
guess the Mellin transform of the total mass. This is particularly simple for the Bacry-Muzy GMC on the circle with the zero potential as the extension of the Dyson integral from the integer to complex dimension is elementary. This method also works on the interval although the extension of the Selberg
integral from the integer to complex dimension is substantially more difficult. The primary theoretical limitation of this approach is that it operates on the moments
directly and the moment problem is not determinate. The secondary limitation is that it is not obvious that the so-constructed Mellin
transform is in fact the Mellin transform of a probability distribution. While it is easy to show that the analytic extension of the Dyson
integral obtained in \cite{FyoBou} is the Mellin transform of a Fr\'echet distribution, it is much more difficult to establish the probabilistic property of
the analytic extension of the Selberg integral found in \cite{FLDR} so that the authors of that work limited themselves to numerical evidence.
Nonetheless, from the computational standpoint, this method is particularly efficient.
The second approach, which we introduced in \cite{Me2} and developed in \cite{Me3}--\cite{Me16}, is based on the formalism of intermittency differentiation and renormalization.
The rule of intermittency differentiation expresses the intermittency derivative of a general class of functionals of the total mass in the form
of an exact, non-local equation (or an infinite hierarchy of local equations). It allows one to compute the full high-temperature (low intermittency) expansion of the Mellin transform of the total mass of a general GMC measure
in terms of the expansion of positive integer moments in intermittency and effectively reconstruct the Mellin transform by summing the intermittency expansion. In the special cases of the Bacry-Muzy measures on the interval and circle with a logarithmic potential we carried out these computations explicitly by means of
Hardy's moment constant method
and proved that the resulting expressions\footnote{Hardy's method produces expressions for the logarithm of
the Mellin transform in the integral form, which are more cumbersome than the expressions for the Mellin transform itself in terms
of Barnes double gamma factors that are produced by the method of Fyodorov \emph{et. al.} \cite{FLDR}. The integral expressions
however are easier to bring to the L\'evy-Khinchine form and thus prove the probabilistic property of the construction.
This constitutes the analytical proof of existence. It is possible to give a purely probabilistic proof that is based on the Mellin transform directly,
which requires the machinery of Barnes beta distributions. We review both proofs in this paper. }
are Mellin transforms of valid probability distributions, known as the Selberg and Morris integral probability distributions, respectively. These distributions have the properties that their positive integer moments are given by the Selberg and Morris integrals,
\emph{i.e.} match the moments of the total mass, and that the asymptotic expansions of their Mellin transforms in intermittency coincide with the intermittency expansion. The principal computational limitation of this approach is that it too requires the explicit knowledge of the moments.
It is unknown whether the intermittency expansion of the Mellin transform captures the distribution of the total mass uniquely. If true, this approach
would provide a solution to the moment problem of the total mass. The answer depends on detailed, non-perturbative analysis of intermittency differentiation equations and requires novel mathematical tools. We refer the interested reader to \cite{MeIntRen} for an in-depth discussion
of theoretical aspects of our approach. We also note that the intermittency differentiation approach is not limited to 1D or Bacry-Muzy measures or even GMC measures but in fact applies to all infinitely divisible multiplicative chaos measures, cf. \cite{Me17}.
In summary, both the first and second approaches succeeded in constructing good candidates for the distribution
of the total mass of the Bacry-Muzy measures in the sub-critical regime. The formulas for the Mellin transform that are
produced by both methods are known to be the same.
The third and most recent approach introduced by Remy \cite{Remy} is based on the connection between GMC and
Liouville conformal field theory that was established by David \emph{et. al.} \cite{David}.
It interprets negative integer moments of the total mass of the Bacry-Muzy GMC on the circle
in terms of one-point correlation functions of Liouville conformal field theory on the unit disk and thereby computes these moments from first principles. Unlike the first two approaches, the approach of Remy \cite{Remy} is mathematically rigorous.
A priori, it appears that this approach is limited to low-dimensional GMC measures that are connected
with the Liouville theory and further have a determinate Stieltjes moment problem for the negative moments such as the Bacry-Muzy measure
on the circle. In particular, in its current form it does not apply to the Bacry-Muzy measure on the interval as the Stieltjes moment problem
for the negative moments of its total mass is indeterminate. It should be stressed that the
result of Remy \cite{Remy} has not resolved our conjecture about the distribution of the total mass of the Bacry-Muzy GMC on the circle because
we allow for a non-random logarithmic potential. Remy \cite{Remy} sets it to zero, which greatly simplifies the
total mass distribution as it reduces it to the Fr\'echet factor as predicted by Fyodorov and Bouchaud \cite{FyoBou}, whereas the full distribution
was conjectured in \cite{Me16} to have a completely non-trivial Barnes beta factor, cf. Sections \ref{CirAnalytical} and \ref{Probabilistic}.
\subsection{Summary of Results and Plan of the Paper}
The scope of this paper is the review of mathematically rigorous results on the existence and properties of the Selberg and Morris integral probability distributions. We review both the analytical and probabilistic constructions of these distributions in the sub-critical and critical regimes in detail.
In particular, we review the theory of Barnes beta distributions that provide basic building blocks of the Selberg and Morris integral distributions.
We also review the analytic continuation of the complex Selberg integral (Dotsenko-Fateev integral).
The theory is illustrated with several conjectured applications.
The analytical construction is based on the three known representations of the Mellin transform: a finite product of ratios of Barnes double gamma
factors, a regularized infinite product of ratios of Euler's gamma factors, and a L\'evy-Khinchine integral representation of the logarithm of the Mellin
transform. The Barnes double gamma and infinite product representations lead to the
computation of the negative moments and furnish simple proofs that the Mellin transform is the analytic continuation of the Selberg/Morris integrals. The Barnes double gamma representation also provides the asymptotic expansion of the Mellin transform, which is shown to match the intermittency expansion that follows from the known formulas for the positive integer moments.
The integral representation of the logarithm of the Mellin
transform furnishes the analytical proof of the fact that the Mellin transform is in fact the Mellin transform of a log-infinitely divisible probability distribution, whose gaussian and L\'evy components are computed explicitly.
The method of deriving the analytic continuation of the Selberg integral in the approach of Fyodorov \emph{et. al.} \cite{FLDR}
is more computationally efficient than our method of summing the intermittency expansion, cf. \cite{Me4}. For this reason, we
use their approach in the construction of the analytic continuation,
and also apply it in deriving the analytic
continuations of the Morris and complex Selberg integrals. Afterwards, to be consistent with our approach, we check that the high-temperature asymptotic expansions of the analytic continuations of the Selberg and Morris integrals coincide with the corresponding intermittency expansions.
The approach of Fyodorov \emph{et al.} is based on the novel idea of using a Barnes-like double gamma function that is popular in the physics
literature. As this function is not standard in mathematics, we chose, beginning with \cite{MeIMRN}, to use the standard double gamma
function and its cousins instead. We believe that the wealth of its known properties makes its use more advantageous.
The probabilistic construction is based on the theory of Barnes beta probability distributions, which we introduced in \cite{MeIMRN} and developed in
\cite{Me13}, \cite{Me14}, and \cite{Me16}.
The Barnes beta distributions constitute a novel family of log-infinitely divisible probability distributions having the property that their Mellin transform is given by an intertwining product of ratios of Barnes multiple gamma factors. We review their remarkable properties and show
that in the special case of Barnes beta distributions corresponding to the double gamma function these distributions provide
the second, purely probabilistic proof of the existence of Selberg and Morris integral probability distributions. Moreover, we obtain their
explicit factorizations: the Selberg integral distribution is the product of independent lognormal, Fr\'echet, and three Barnes beta distributions,
and the Morris integral distribution is the product of an independent Fr\'echet distribution and a single Barnes beta distribution.
The two constructions rely on special properties of the double gamma function such as functional equations, Barnes and Shintani infinite
factorizations, scaling invariance, Barnes multiplication, integral representations of its logarithm, and asymptotic expansions. We review these
properties in some detail to make the paper accessible to a wider audience, especially as the double gamma function has multiple known normalizations (classical, Alexeiewsky-Barnes, Ruijsenaars) that are all useful in different contexts. Nonetheless, we restrict ourselves to giving the relevant
formulas without providing detailed proofs from the theory of multiple gamma functions as that would take us too far afield.
We illustrate our theoretical results on the Morris and Selberg integral distributions with three types of applications. Following
Fyodorov and Bouchaud \cite{FyoBou} and Fyodorov \emph{et al.} \cite{FLDR}, who discovered the connection between
the asymptotic distribution of the maximum of the 2D Gaussian Free Field restricted to the circle and interval and the laws
of the total mass of Bacry-Muzy measure for these geometries, we give a probabilistic re-formulation\footnote{In the circle case
we also extend the original conjecture in \cite{FyoBou}, which was restricted to the Dyson integral.}
of their results in terms
of the conjectured laws of the corresponding derivative martingales. Our second application has to do with the computation
of the inverse participation ratios of the Fyodorov-Bouchaud model that were known previously only by means of a heuristic analytic continuation
of the Morris integral to negative dimensions, cf. \cite{Fyo09}. Our result on the conjectured law of the Bacry-Muzy
measure on the circle with a logarithmic potential allows us to treat this continuation rigorously. In the third application we conjecture
several mod-Gaussian limit theorems. The concept of mod-Gaussian convergence as a means of precisely quantifying divergent sequences
of random variables was first introduced by Keating and Snaith \cite{KeaSna} and was formalized and developed into
a powerful mathematical tool by Jacod \emph{et al.} \cite{Jacod}, see also \cite{Feray} and \cite{Meliot}.
The idea of associating such theorems with GMC was first introduced in \cite{Menon}
in the context of mesoscopic statistics of Riemann zeroes. We review some of those results and show how they
can be combined with the methods of Fyodorov \emph{et al.} to conjecture a mod-Gaussian limit theorem
for the distribution of the maximum of the centered Gaussian Free Field on the circle and interval, also known as the Fractional Brownian
motion with Hurst index $H=0,$ cf. \cite{FKS}.
This paper is largely a review of results that have already appeared elsewhere. The only new results are those on the analytic continuation
of the complex Selberg integral,
the computation of the inverse participation ratios of the Fyodorov-Bouchaud model, and the conjectured
mod-Gaussian theorems for the centered Gaussian Free Field.
The plan of the rest of the paper is as follows. In Section 2 we briefly recall Bacry-Muzy measures and then state the problem of
total mass for them. In Section 3 we review multiple gamma functions. In Sections 4 and 5 we state our analytical results on the Morris
and Selberg integral probability distributions, respectively, followed by the proofs in Section 6. In Section 7 we present the theory of Barnes beta probability distributions. In Section 8 we state our probabilistic results on the Morris and Selberg integral probability distributions, followed by
the proofs in Section 9. In Section 10 we give our results on the critical Morris and Selberg integral distributions. In Section 11 we give the
analytic continuation of the complex Selberg integral. In Section 12 we give some applications of our results on conjectured laws of Bacry-Muzy measures. Conclusions are given in Section 13. The Appendix gives proofs of results on Barnes beta distributions.
Our results are mathematically rigorous except in Section \ref{SomeApplications}.
\section{Bacry-Muzy GMC and Total Mass Problem}\label{Problem}
In this section we will informally recall the construction of Bacry-Muzy measures on the interval and circle and pose the specific version
of the total mass problem that we will be considering in the rest of the paper.
Following \cite{MRW}, define a centered gaussian process with the covariance
\begin{align}
{\bf{Cov}}\left[V_{\varepsilon}(u), \,V_{\varepsilon}
(v)\right] = &
\begin{cases}\label{covk}
-
2 \, \log|u-v|, \, \varepsilon < |u-v|\leq 1, \\
- 2\log\varepsilon,\, u=v,
\end{cases}
\end{align}
Let $0\leq \beta<1.$
The theorem of Bacry and Muzy states that the regularized exponential functional of this field converges weakly a.s. as $\varepsilon\rightarrow 0$ to
a non-degenerate limit random measure, called the Bacry-Muzy GMC measure on the interval,
\begin{gather}
e^{\beta^2\log\varepsilon}\int_a^b e^{\beta V_{\varepsilon}(u)} \, du\longrightarrow M_{\beta}(a, b), \label{chaosinterval} \\
{\bf{E}}[M_{\beta}(a, b)]=|b-a|.
\end{gather}
It is worth emphasizing that the choice of covariance regularization for $|u-v|\leq \varepsilon$ has no effect on the law of the total mass,
see the proof of universality in \cite{RVrevis}.
The moments of the total mass of the limit measure with a logarithmic potential are given by the Selberg integral: let $n<1/\beta^2,$
\begin{equation}
{\bf{E}} \Bigl[\Bigl(\int_0^1 s^{\lambda_1}(1-s)^{\lambda_2} \,
M_\beta(ds)\Bigr)^n\Bigr] = \int\limits_{[0,\,1]^n} \prod_{i=1}^n
s_i^{\lambda_1}(1-s_i)^{\lambda_2} \prod_{i<j}^n
|s_i-s_j|^{-2\beta^2} ds_1\cdots ds_n.
\end{equation}
Define the quantity
\begin{equation}
\tau = \frac{1}{\beta^2} > 1.
\end{equation}
In statistical physics one thinks of $\beta$ as the inverse temperature.
Recall the classical Selberg integral,
\begin{equation}
\int\limits_{[0,\,1]^n} \prod_{i=1}^n s_i^{\lambda_1}(1-s_i)^{\lambda_2}\, \prod\limits_{i<j}^n |s_i-s_j|^{-2/\tau} ds_1\cdots ds_n = \prod_{k=0}^{n-1}\frac{\Gamma(1-(k+1)/\tau)
\Gamma(1+\lambda_1-k/\tau)\Gamma(1+\lambda_2-k/\tau)}
{\Gamma(1-1/\tau)\Gamma(2+\lambda_1+\lambda_2-(n+k-1)/\tau)}, \label{Selberg}
\end{equation}
cf. \cite{ForresterBook} for a modern treatment. We will assume for simplicity that $\lambda_i \geq 0.$ The integral is convergent for $n<\tau.$
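As a quick numerical sanity check (our illustration; the parameter choices are ours and not from the text), the gamma product on the right-hand side of Eq. \eqref{Selberg} can be compared against a case where the left-hand side is computable by hand: for $n=2$ and $\lambda_1=\lambda_2=0$ the integral reduces to $\int_{[0,1]^2}|s_1-s_2|^{-2/\tau}\,ds_1 ds_2=\tau^2/\bigl((\tau-1)(\tau-2)\bigr)$ for $\tau>2.$

```python
import math

def selberg_rhs(n, tau, lam1=0.0, lam2=0.0):
    """Gamma-product side of the Selberg integral formula, Eq. (Selberg)."""
    prod = 1.0
    for k in range(n):
        prod *= (math.gamma(1 - (k + 1) / tau)
                 * math.gamma(1 + lam1 - k / tau)
                 * math.gamma(1 + lam2 - k / tau)) / (
                 math.gamma(1 - 1 / tau)
                 * math.gamma(2 + lam1 + lam2 - (n + k - 1) / tau))
    return prod

# For n = 2 and lam1 = lam2 = 0 the integral is elementary:
# int_{[0,1]^2} |s1 - s2|^{-2/tau} ds1 ds2 = tau^2 / ((tau - 1)(tau - 2)).
tau = 4.0
exact = tau**2 / ((tau - 1) * (tau - 2))   # 8/3 for tau = 4
print(selberg_rhs(2, tau), exact)
```

For $\tau=4$ the two gamma factors collapse by hand via $\Gamma(7/4)=\tfrac{3}{4}\Gamma(3/4)$ and $\Gamma(3/2)=\tfrac{1}{2}\Gamma(1/2),$ giving exactly $8/3.$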
The Bacry-Muzy GMC on the circle is a periodized version of the Bacry-Muzy measure on the interval. It was first considered heuristically
in \cite{FyoBou} and formalized in \cite{Jones}. Let $V_\varepsilon(\psi)$ be a centered gaussian process with the covariance
\begin{align}
{\bf{Cov}}\left[V_{\varepsilon}(\psi), \,V_{\varepsilon}
(\xi)\right] = &
\begin{cases}\label{covkc}
-
2 \, \log|e^{2\pi i\psi}-e^{2\pi i\xi}|, \, |\xi-\psi|> \varepsilon, \\
-2\log\varepsilon, \psi=\xi,
\end{cases}
\end{align}
Once again,
the regularized exponential functional of this field converges weakly a.s. as $\varepsilon\rightarrow 0$ to
a non-degenerate limit random measure, which we refer to as the Bacry-Muzy GMC measure on the circle.
Let $0\leq \beta<1.$
\begin{gather}
e^{\beta^2 \log\varepsilon}\int_\phi^\psi e^{\beta V_{\varepsilon}(\theta)} \, d\theta\longrightarrow M_{\beta}(\phi, \psi), \label{chaoscircle} \\
{\bf{E}}[M_{\beta}(\phi, \psi)]=|\psi-\phi|.
\end{gather}
The moments of the total mass of the limit measure with a logarithmic potential are given by the Morris integral: let $n<1/\beta^2,$
\begin{align}
{\bf{E}} \Bigl[\Bigl( \int_{[-\frac{1}{2},\,\frac{1}{2}]} e^{2\pi i\psi \frac{\lambda_1-\lambda_2}{2}} \, |1+e^{2\pi i\psi}|^{\lambda_1+\lambda_2}\, M_{\beta}(d\psi)\Bigr)^n\Bigr] = & \int\limits_{[-\frac{1}{2},\,\frac{1}{2}]^n} \prod\limits_{l=1}^n e^{2\pi i \theta_l\frac{\lambda_1-\lambda_2}{2}} |1+e^{2\pi i\theta_l}|^{\lambda_1+\lambda_2}\times \nonumber\\
& \times
\prod\limits_{k<l}^n |e^{2\pi i \theta_k}-e^{2\pi i\theta_l}|^{-2\beta^2} \,d\theta.
\end{align}
Recall the Morris integral,
see Chapter 4 of \cite{ForresterBook}.
\begin{gather}
\int\limits_{[-\frac{1}{2},\,\frac{1}{2}]^n} \prod\limits_{l=1}^n e^{ \pi i \theta_l(\lambda_1-\lambda_2)} |1+e^{2\pi i\theta_l}|^{\lambda_1+\lambda_2} \,
\prod\limits_{k<l}^n |e^{2\pi i \theta_k}-e^{2\pi i\theta_l}|^{-2/\tau} \,d\theta \nonumber \\ = \prod\limits_{j=0}^{n-1} \frac{\Gamma(1+\lambda_1+\lambda_2- \frac{j}{\tau})\,\Gamma(1-\frac{(j+1)}{\tau})}{\Gamma(1+\lambda_1- \frac{j}{\tau})\,\Gamma(1+\lambda_2- \frac{j}{\tau})\,\Gamma(1-\frac{1}{\tau})} \label{morris2}.
\end{gather}
We will restrict our attention to a special case of the total mass problem on the circle corresponding to
\begin{equation}
\lambda_1=\lambda_2=\lambda\geq 0.
\end{equation}
In this case the moments of the total mass are given by
\begin{align}
{\bf{E}} \Bigl[\Bigl( \int_{[-\frac{1}{2},\,\frac{1}{2}]} |1+e^{2\pi i\psi}|^{2\lambda}\, M_{\beta}(d\psi)\Bigr)^n\Bigr] = & \int\limits_{[-\frac{1}{2},\,\frac{1}{2}]^n} \prod\limits_{l=1}^n |1+e^{ 2\pi i\theta_l}|^{2\lambda} \,
\prod\limits_{k<l}^n |e^{2\pi i \theta_k}-e^{2\pi i\theta_l}|^{-2/\tau} \,d\theta \nonumber \\
= & \prod\limits_{j=0}^{n-1} \frac{\Gamma(1+2\lambda- \frac{j}{\tau})\,\Gamma(1-\frac{(j+1)}{\tau})}{\Gamma(1+\lambda - \frac{j}{\tau})^2\,\Gamma(1-\frac{1}{\tau})}.
\label{momlambda}
\end{align}
In the special case of $\lambda=0$ this integral is known as the Dyson integral and has a much simpler evaluation corresponding to the moments of
the Fyodorov-Bouchaud model.
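To illustrate this simplification concretely (a sketch of ours; the values $n=3,$ $\tau=5$ are arbitrary), the product in Eq. \eqref{momlambda} telescopes at $\lambda=0$ to the Fyodorov-Bouchaud moments $\Gamma(1-n/\tau)/\Gamma(1-1/\tau)^n:$

```python
import math

def morris_moment(n, tau, lam):
    """Gamma product in Eq. (momlambda): n-th moment of the total mass."""
    prod = 1.0
    for j in range(n):
        prod *= (math.gamma(1 + 2 * lam - j / tau)
                 * math.gamma(1 - (j + 1) / tau)) / (
                 math.gamma(1 + lam - j / tau) ** 2
                 * math.gamma(1 - 1 / tau))
    return prod

# At lam = 0 each factor is Gamma(1-(j+1)/tau) / (Gamma(1-j/tau) Gamma(1-1/tau)),
# so the product telescopes to Gamma(1-n/tau) / Gamma(1-1/tau)^n.
n, tau = 3, 5.0
fb = math.gamma(1 - n / tau) / math.gamma(1 - 1 / tau) ** n
print(morris_moment(n, tau, 0.0), fb)
```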
In the rest of the paper we will construct probability distributions having the properties
that they match the positive integer moments
specified in Eqs. \eqref{Selberg} and \eqref{momlambda} and \emph{intermittency expansions} of the total mass distributions. To state this precisely, we need to
briefly remind the reader of the key results of the intermittency renormalization formalism, cf. \cite{Me2}, \cite{Me3}, \cite{MeIMRN}, \cite{MeIntRen}.
Let us write the total mass in the form
\begin{equation}
M_\beta[\varphi](\mathcal{D}) = \int\limits_\mathcal{D} \varphi(x) \, M_\beta(dx),
\end{equation}
so that
\begin{align}
\mathcal{D} & = [0,1],\; \varphi(x) = x^{\lambda_1}(1-x)^{\lambda_2}\; \text{(interval)}, \\
\mathcal{D} & = [-\frac{1}{2},\,\frac{1}{2}],\; \varphi(x) = |1+e^{2\pi ix}|^{2\lambda}\; \text{(circle)},
\end{align}
and introduce the quantity
\begin{equation}
\bar{\varphi} \triangleq \int\limits_\mathcal{D} \varphi(x) \,dx.
\end{equation}
Finally, we introduce the intermittency
\begin{equation}
\mu = 2\beta^2 = \frac{2}{\tau}.
\end{equation}
Then, the log-moments of the total mass have the representation in the form,
\begin{equation}\label{cpl}
\log{\bf{E}} \Bigl[\Bigl( \int_\mathcal{D} \varphi(x)\, M_{\beta}(dx)\Bigr)^n\Bigr] = n\log \bar{\varphi}+ \sum\limits_{p=1}^\infty c_p(n) \mu^p,
\end{equation}
for some coefficients $c_p(n)$ that are known to be \emph{polynomial} in the moment order $n,$ cf. \cite{MeIntRen}.
One of the key results of the intermittency renormalization formalism is that the full intermittency expansion (formal power series expansion)
of the Mellin transform of the total mass of a general GMC measure can be calculated in closed form in terms of
the expansion of the log-moments in intermittency by the following formula,
\begin{equation}\label{intermittencyMellin}
{\bf E}\Bigl[\Bigl(\int_\mathcal{D} \varphi(x)
\,M_\beta(dx)\Bigr)^q\Bigr]=\bar{\varphi}^q \exp\Bigl(\sum_{p=1}^\infty
\mu^{p} \,c_p(q)\Bigr), \; \Re(q)<\tau.
\end{equation}
It is naturally interpreted as the asymptotic expansion in the limit of low intermittency (high temperature).
The coefficients $c_p(n)$ are known explicitly for the Bacry-Muzy measures.
We have on the interval,
\begin{gather}
c_p(n) =
\frac{1}{p2^p}
\Bigl[ \bigl(\zeta(p,
1+\lambda_1)+\zeta(p,
1+\lambda_2)\bigr)\Bigl(\frac{B_{p+1}(n)-B_{p+1}}{p+1}\Bigr)
-\zeta(p)n + \zeta(p)\times \nonumber \\ \times
\Bigl(\frac{B_{p+1}(n+1)-B_{p+1}}{p+1}\Bigr) - \zeta(p,
2+\lambda_1+\lambda_2)
\Bigl(\frac{B_{p+1}(2n-1)-B_{p+1}(n-1)}{p+1}\Bigr)\Bigr], \label{cpninterval}
\end{gather}
and on the circle,
\begin{gather}
c_p(n) = \frac{1}{p2^p} \Bigl[\bigl(\zeta(p,\,1+2\lambda)-2\zeta(p, 1+\lambda)\bigr)\frac{B_{p+1}(n)-B_{p+1}}{p+1}+
\zeta(p)\frac{B_{p+1}(n+1)-B_{p+1}}{p+1}-n\zeta(p)\Bigr]. \label{cpncircle}
\end{gather}
As usual, $B_n(s)$ denotes the $n$th Bernoulli polynomial, $\zeta(s, a)$ the Hurwitz zeta function, and $\zeta(s)$ the Riemann zeta function, and we set $\zeta(1, a)\triangleq -\psi(a),$
where $\psi(a)$ is the digamma function. These formulas are elementary corollaries of Eqs. \eqref{Selberg} and \eqref{momlambda} and
the following summation formulas,
\begin{align}
\log\Gamma(a+z) = & \log\Gamma(a)+\sum\limits_{p=1}^\infty \frac{(-z)^p}{p} \zeta(p,a), \\
\sum\limits_{j=x}^y j^p = &\frac{B_{p+1}(y+1)-B_{p+1}(x)}{p+1}.
\end{align}
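Both summation formulas are elementary; the following snippet (our sketch, not part of the original text) verifies the Bernoulli-polynomial identity exactly in rational arithmetic:

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(N):
    """B_0, ..., B_N via the recursion sum_{k=0}^{n-1} C(n+1, k) B_k = -(n+1) B_n."""
    B = [Fraction(1)]
    for n in range(1, N + 1):
        B.append(-sum(comb(n + 1, k) * B[k] for k in range(n)) / (n + 1))
    return B

def bernoulli_poly(n, x):
    """B_n(x) = sum_{k=0}^{n} C(n, k) B_k x^{n-k}."""
    B = bernoulli_numbers(n)
    return sum(comb(n, k) * B[k] * Fraction(x) ** (n - k) for k in range(n + 1))

# Check: sum_{j=x}^{y} j^p = (B_{p+1}(y+1) - B_{p+1}(x)) / (p+1) at p = 3, x = 2, y = 5.
p, x, y = 3, 2, 5
lhs = sum(j ** p for j in range(x, y + 1))                              # 8+27+64+125 = 224
rhs = (bernoulli_poly(p + 1, y + 1) - bernoulli_poly(p + 1, x)) / (p + 1)
print(lhs, rhs)
```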
Given these preliminaries, we can now give a precise statement of the problem that is reviewed in the rest of the paper, which is our version
of the moment problem for the total mass. The problem is to construct and describe properties of positive probability distributions that have positive integer moments given by Eqs. \eqref{Selberg} and \eqref{momlambda} and whose asymptotic expansion of the Mellin transform in intermittency coincides with the expansion in Eq. \eqref{cpl} with the $c_p(n)$ coefficients specified in Eqs. \eqref{cpninterval} and \eqref{cpncircle}, respectively.
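To make the matching requirement concrete in the simplest case (our illustration), take the circle with $\lambda=0.$ A short computation using $B_{p+1}(n+1)-B_{p+1}(n)=(p+1)n^p$ shows that Eq. \eqref{cpncircle} collapses to $c_p(n)=\zeta(p)(n^p-n)/(p2^p)$ for $p\geq 2$ and $c_1(n)=0,$ so the series in Eq. \eqref{cpl} must resum to the logarithm of the Fyodorov-Bouchaud moments $\Gamma(1-n/\tau)/\Gamma(1-1/\tau)^n.$ The snippet below (the choice $n=2,$ $\tau=5$ is arbitrary) confirms this numerically:

```python
import math

def zeta(p, K=1000):
    """Riemann zeta at integer p >= 2: direct sum plus an Euler-Maclaurin tail."""
    s = sum(k ** -p for k in range(1, K + 1))
    return s + K ** (1 - p) / (p - 1) - K ** -p / 2 + p * K ** (-p - 1) / 12

# Circle, lambda = 0: c_p(n) mu^p = zeta(p) (n^p - n) / (p tau^p) since mu = 2/tau.
n, tau = 2, 5.0
series = sum(zeta(p) * (n ** p - n) / (p * tau ** p) for p in range(2, 80))
closed = math.lgamma(1 - n / tau) - n * math.lgamma(1 - 1 / tau)
print(series, closed)
```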
\section{A Review of Barnes Double Gamma Function}\label{BarnesReview}
In this section we review several formulations of the multiple gamma functions of Barnes with an emphasis
on the double gamma function.
In general, let $a=(a_1,\cdots, a_M),$ $M\in \mathbb{N},$ with $a_i>0$ for all $i=1,\cdots, M.$
The multiple gamma function of Barnes $\Gamma_{M}(z\,|\,a)$ is a meromorphic function of $z\in\mathbb{C}$ that satisfies $M$ functional equations,
\begin{equation}\label{feq}
\Gamma_{M}(z\,|\,a) = \Gamma_{M-1}(z\,|\,\hat{a}_i)\,\Gamma_M\bigl(z+a_i\,|\,a\bigr),\,i=1\cdots
M,
\end{equation}
$\hat{a}_i = (a_1,\cdots, a_{i-1},\,a_{i+1},\cdots, a_{M}),$ and
\begin{align}
\Gamma_0(z) = & \frac{1}{z}, \\
\Gamma_1(z\,|\,\tau) = & \frac{\tau^{z/\tau-1/2}}{\sqrt{2\pi}} \,\Gamma\bigl(\frac{z}{\tau}\bigr), \label{gamma1}
\end{align}
where $\Gamma(z)$ is Euler's gamma function, cf. \cite{mBarnes}.
By iterating Eq. \eqref{feq} one sees that $\Gamma_M(z\,|\,a)$ is
meromorphic over $z\in\mathbb{C}$ having no zeroes and poles at
\begin{equation}\label{poles}
z=-(k_1 a_1+\cdots + k_M a_M),\; k_1\cdots k_M\in\mathbb{N},
\end{equation}
with multiplicity equal to the number of $M$-tuples $(k_1, \cdots,
k_M)$ that satisfy Eq. \eqref{poles}.
The case of $M=2$ is referred to as the double gamma function. The fundamental functional equations then
take on the form, cf. \cite{Double},
\begin{gather}
\frac{\Gamma_2(z\,|\,a_1,a_2)}{\Gamma_2(z+a_1\,|\,a_1,a_2)}
= \frac{a_2^{z/a_2-1/2}}
{\sqrt{2\pi}}\Gamma\bigl(\frac{z}{a_2}\bigr), \\
\frac{\Gamma_2(z\,|\,a_1,a_2)}{\Gamma_2(z+a_2\,|\,a_1,a_2)}
= \frac{a_1^{z/a_1-1/2}}
{\sqrt{2\pi}}\Gamma\bigl(\frac{z}{a_1}\bigr),
\end{gather}
which have the following useful corollary that is used repeatedly below. Let $a=(1,\tau)$ and $k\in\mathbb{N}.$ Then,
\begin{align}
\frac{\Gamma_2(z+1-k\,|\,1,\tau)}{\Gamma_2(z+1\,|\,1,\tau)} = & \prod\limits_{j=0}^{k-1}
\Gamma_1\bigl(z-j\,|\,\tau\bigr), \nonumber \\
= &
\bigl(\frac{1}{2\pi\tau}\bigr)^{k/2} \tau^{\sum\limits_{j=0}^{k-1} (z-j)/\tau}\;
\prod\limits_{j=0}^{k-1} \Gamma\bigl(\frac{z}{\tau}-\frac{j}{\tau}\bigr). \label{repeated}
\end{align}
The multiple gamma function is defined classically by
\begin{equation}\label{barnes}
\Gamma^{-1}_M(z\,|\,a) = e^{P(z\,|\,a)}\,z\prod\limits_{n_1,\cdots , n_M=0}^\infty
{}' \Bigl(1+\frac{z}{\Omega}\Bigr)\exp\Bigl(\sum_{k=1}^M \frac{(-1)^k}{k}\frac{z^k}{\Omega^k}\Bigr),
\end{equation}
where $\Re(z)>0,$ $P(z\,|\,a)$ is a polynomial in $z$ of degree $M$ that depends on one's choice of
normalization,
\begin{equation}\label{Omega}
\Omega\triangleq \sum_{i=1}^M n_i \, a_i,
\end{equation}
and the prime indicates that the product is over all indices except $n_1=\cdots =n_M=0.$
The \emph{classical} normalization condition for the double gamma function that was used by Barnes is
\begin{equation}\label{classical}
\lim\limits_{z\rightarrow 0}\,
\Bigl[z\,\Gamma_2(z\,|\,a_1,a_2)\Bigr] = 1.
\end{equation}
In this normalization the double gamma function is closely related to the so-called Alexeiewsky-Barnes $G(z\,|\,\tau)$ function, which was historically
introduced first, cf. \cite{A} and \cite{Genesis}.
The function $G(z\,|\,\tau)$ is defined for
$z\in\mathbb{C}$ and $\tau\in\mathbb{C}$ such that
$|\arg(\tau)|<\pi$
and
satisfies the following
normalization and functional equations.
\begin{align}
G(z=1\,|\,\tau)= & 1, \\
G(z+1\,|\,\tau) = & \Gamma\Big(\frac{z}{\tau}\Bigr)\,G(z\,|\,\tau), \label{Gfunct1} \\
G(z+\tau\,|\,\tau) = &
(2\pi)^{\frac{\tau-1}{2}}\,\tau^{-z+\frac{1}{2}}\,\Gamma(z)\,
G(z\,|\,\tau).
\end{align}
$G(z\,|\,\tau)$ is an entire function of $z$ with zeroes at
$z=-(m\tau+n),$ $m,n=0, 1, 2,\cdots.$
The relationship between the double gamma function in the classical normalization and the Alexeiewsky-Barnes
function was established by Barnes in \cite{Double}.
Using Barnes' notation, define the function
\begin{equation}
{}_2 S_0 (z\,|\,a_1, a_2) \triangleq \frac{z^2-z(a_1+a_2)}
{2a_1 a_2}.
\end{equation}
Then,
\begin{equation}
\Gamma_2^{-1}(z\,|\,a_1,a_2) = (2\pi)^{-z/2a_1}
a_2^{1+{}_2 S_0 (z\,|\,a_1, a_2)}
G\Bigl(\frac{z}{a_1}\,\Big|\,\frac{a_2}{a_1}\Bigr).
\end{equation}
Equivalently, we can write
\begin{equation}\label{GfromG2}
G(z\,|\,\tau) = (2\pi)^{z/2} \tau^{-\bigl(1+{}_2
S_0(z,|\,1,\tau)\bigr)}\,\Gamma_2^{-1}(z\,|\,1,\tau),
\end{equation}
so that $G(z\,|\,\tau)$ is, up to normalization, the reciprocal of the double gamma function with parameters $(1,\tau).$
It is an elementary exercise to check that the normalization conditions and functional equations of the double gamma function are equivalent to
those of the $G(z\,|\,\tau)$ function. Moreover, $\Gamma_2(z\,|\,a_1,a_2)$ is symmetric in $(a_1, a_2).$
The function $G(z\,|\,\tau)$ has the following useful properties. The first is an immediate corollary of the functional equation
in Eq. \eqref{Gfunct1}.
\begin{equation}\label{Grepeated}
\frac{G(1+z\,|\,\tau)}{G(1+z-k\,|\,\tau)} = \prod\limits_{j=0}^{k-1} \Gamma\bigl(\frac{z-j}{\tau}\bigr).
\end{equation}
We note that the use of the physicists' equivalent of Eq. \eqref{Grepeated} in constructing analytic continuations was pioneered by
Fyodorov \emph{et al.} \cite{FLDR} and constitutes the core of their method.
The second property is the integral representation of the logarithm in the form of
a Malmst\'en-type formula due to Lawrie and King \cite{LawKing}, see also \cite{MeIMRN} for an elementary derivation
based on \cite{Genesis}. Given $\Re(z),\,\Re(\tau)>0,$
\begin{equation}
\log G(z\,|\,\tau) = \int\limits_0^\infty \frac{dt}{t} \Bigl[
\frac{1-z}{e^{t\tau}-1}+(1-z)
e^{-t\tau}+(z^2-z)\frac{e^{-t\tau}}{2\tau}+\frac{1-e^{-t(z-1)}}{(e^t-1)(1-e^{-t\tau})}\Bigr]. \label{LK}
\end{equation}
The third is the asymptotic expansion due to Billingham and King
\cite{BillKing}. We cite it here in a simplified form, which is sufficient for our needs.
\begin{equation}
\log G(z\,|\,\tau) = \frac{z^2}{2\tau}\log(z) - \frac{z^2}{\tau}
\left(\frac{3}{4}+\frac{\log\tau}{2}\right) -
\frac{1}{2}\left(\frac{1}{\tau}+1\right)z\log z + O(z),\,\,
z\rightarrow\infty, \,\,|\arg(z/\tau)|<\pi. \label{BillKingAsymp}
\end{equation}
The fourth property is the integral representation and asymptotic expansion of the logarithm of the ratio of two $G(z\,|\,\tau)$ functions due to
\cite{MeIMRN}.
Let $\Re(q)<1+a+\tau,$ $a>-\tau,$ and $\tau>0.$ Define the function
\begin{equation}
I(q\,|\,a, \tau) \triangleq \int\limits_0^\infty
\frac{dx}{x}\frac{e^{-ax}}{e^{x\tau}-1}
\Bigl[\frac{e^{xq}-1}{e^{x}-1}
-q-\frac{(q^2-q)}{2}x\Bigr].\label{Iintegral}
\end{equation}
Then,
\begin{equation}\label{IfuncG}
I(q\,|\,a, \tau) =
\log\frac{G(1+a+\tau\,|\,\tau)}{G(1-q+a+\tau\,|\,\tau)}
-q\log\Bigl[\Gamma\bigl(1+\frac{a}{\tau}\bigr)\Bigr]+
\frac{(q^2-q)}{2\tau}\psi\bigl(1+\frac{a}{\tau}\bigr),
\end{equation}
and $I(q\,|\, a\tau, \tau)$ has the asymptotic expansion
\begin{equation}\label{IfuncGAsymptotic}
I(q\,|\,a\tau, \tau) \thicksim \sum\limits_{r=1}^\infty \frac{\zeta(r+1,
\,1+a)}{r+1}\Bigl(\frac{B_{r+2}(q)-B_{r+2}}{r+2}\Bigr)/\tau^{r+1}
\end{equation}
in the limit $\tau\rightarrow +\infty,$ which follows from a slight extension of Ramanujan's generalization
of Watson's lemma, cf. Lemma 10.2 in Chap. 38 of \cite{Berndt}, see \cite{MeIMRN} for details.
As a corollary of Eq. \eqref{IfuncG} and the classical identities, cf. \cite{Temme},
\begin{equation}\label{Malmsten}
\log\Gamma(1+s) =\int\limits_0^\infty
\Bigl(\frac{e^{-ts}-1}{e^t-1}+se^{-t}\Bigr) \frac{dt}{t}
\,\,\,\,\text{(Malmst\'en)}, \,\,\Re(s)>-1,
\end{equation}
\begin{equation}\label{Frullani}
\log(s) = \int\limits_0^\infty
\bigl(e^{-t}-e^{-ts}\bigr)\frac{dt}{t}
\,\,\,\,\text{(Frullani)},\,\,\Re(s)>0,
\end{equation}
we established in \cite{MeIMRN} the following result.
Given $b, c, d>0$ and $\Re(q)<b,$ there holds the identity
\begin{align}
\exp\left(\int\limits_0^\infty
\frac{dx}{x}\frac{e^{-bx}(1-e^{-cx})(1-e^{-dx})}{(1-e^{-x})(1-e^{-x\tau})}
(e^{xq}-1)\right) = &
\frac{G(b\,|\,\tau)}{G(b-q\,|\,\tau)}\frac{G(b-q+c\,|\,\tau)}{G(b+c\,|\,\tau)}\times \nonumber\\
&\times \frac{G(b-q+d\,|\,\tau)}{G(b+d\,|\,\tau)}
\frac{G(b+c+d\,|\,\tau)}{G(b-q+c+d\,|\,\tau)}.\label{Gratioidentity}
\end{align}
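Malmst\'en's formula, Eq. \eqref{Malmsten}, also lends itself to a direct numerical check (a sketch of ours; the truncation at $t=40$ and the Simpson step size are ad hoc choices). The integrand extends continuously to $t=0$ with limiting value $s(s-1)/2:$

```python
import math

def malmsten_integrand(t, s):
    """Integrand of Malmsten's formula for log Gamma(1+s); the t -> 0 limit is s(s-1)/2."""
    if t < 1e-6:
        return s * (s - 1) / 2
    return ((math.exp(-t * s) - 1) / (math.exp(t) - 1) + s * math.exp(-t)) / t

def simpson(f, a, b, n):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

s = 0.5
approx = simpson(lambda t: malmsten_integrand(t, s), 0.0, 40.0, 20000)
print(approx, math.lgamma(1 + s))
```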
The last property is the Shintani factorization, cf. \cite{Shi}.
Given $z\in\mathbb{C}$ and
$|\arg(\tau)|<\pi,$ we have
\begin{equation}\label{ShintaniG}
G(z+\tau\,|\,\tau) = (2\pi)^{\frac{(\tau-1)}{2}} \tau^{-\frac{1}{2}}
e^{\gamma\frac{(z-z^2)}{2\tau}} \prod\limits_{m=1}^\infty
(m\tau)^{z-1} e^{\frac{(z^2-z)}{2m\tau}} \frac{\Gamma(1+m\tau)}
{\Gamma(z+m\tau)},
\end{equation}
or, equivalently, in terms of the double gamma function in its classical normalization,
\begin{equation}\label{ShintaniGamma}
\Gamma_2(z\,|\,1,\tau) = (2\pi)^{\frac{z}{2}}
\tau^{\frac{(z-z^2)}{2\tau} - \frac{z}{2}}
e^{\gamma\frac{(z^2-z)}{2\tau}} \Gamma(z) \prod\limits_{m=1}^\infty
(m\tau)^{1-z} e^{\frac{(z-z^2)}{2m\tau}} \frac{\Gamma(z+m\tau)}
{\Gamma(1+m\tau)}.
\end{equation}
We refer the interested reader to \cite{KataOhts} for a further discussion of the double and multiple gamma functions
in the classical normalization.
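The Shintani product converges quickly enough to be used numerically. The sketch below (ours; the truncation level and test values are arbitrary) evaluates $\log G(w\,|\,\tau)$ for $w>\tau$ from Eq. \eqref{ShintaniG} with $z=w-\tau,$ and checks it against the functional equation $G(z+1\,|\,\tau)=\Gamma(z/\tau)G(z\,|\,\tau)$ with $G(1\,|\,\tau)=1,$ which forces $G(3\,|\,2)=\sqrt{\pi}$ and $G(4\,|\,2)=\pi/2:$

```python
import math

EULER_GAMMA = 0.5772156649015329

def log_G_shintani(w, tau, M=20_000):
    """log G(w | tau) for w > tau > 0 via the Shintani product truncated at M terms;
    the omitted tail decays like O(1/M)."""
    z = w - tau
    s = (0.5 * (tau - 1) * math.log(2 * math.pi) - 0.5 * math.log(tau)
         + EULER_GAMMA * (z - z * z) / (2 * tau))
    for m in range(1, M + 1):
        s += ((z - 1) * math.log(m * tau) + (z * z - z) / (2 * m * tau)
              + math.lgamma(1 + m * tau) - math.lgamma(z + m * tau))
    return s

g3 = math.exp(log_G_shintani(3.0, 2.0))   # every product term vanishes; exactly sqrt(pi)
g4 = math.exp(log_G_shintani(4.0, 2.0))   # should approximate Gamma(3/2) sqrt(pi) = pi/2
print(g3, math.sqrt(math.pi))
print(g4, math.pi / 2)
```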
We note that the well-known Barnes $G(z)$ function, cf. \cite{Genesis} and \cite{SriCho}, is a special case of the Alexeiewsky-Barnes function.
\begin{equation}\label{Gdef}
G(z) \triangleq G(z\,|\,\tau=1).
\end{equation}
It satisfies the functional equation
\begin{equation}
G(z+1) = \Gamma(z)\,G(z).
\end{equation}
It is known that $G(z)$ is entire with zeroes at $z=-n, \,n\in\mathbb{N},$ of
multiplicity $n+1.$
The \emph{modern} normalization of the double and, more generally, multiple gamma functions is due to Ruijsenaars \cite{Ruij}.
Its advantage over the classical approach is that it simplifies some formulas, notably Barnes multiplication and scaling, and streamlines integral representations and asymptotic expansions.
The approach of Ruijsenaars is based on multiple zeta functions. Define the function
\begin{equation}\label{fdef}
f(t) = t^M \prod\limits_{j=1}^M (1-e^{-a_j t})^{-1}
\end{equation}
for some integer $M\geq 0$ and parameters $a_j>0,$ $j=1\cdots M.$
Slightly
modifying the definition in \cite{Ruij}, we define multiple
Bernoulli polynomials for $a=(a_1,\cdots, a_M)$ by
\begin{equation}\label{Bdefa}
B_{M, m}(x\,|\,a) \triangleq \frac{d^m}{dt^m}\Big|_{t=0} \bigl[f(t)
e^{-xt}\bigr].
\end{equation}
The generalized zeta function is defined by
\begin{equation}\label{zdef}
\zeta_M(s, \,w\,|\,a) \triangleq \frac{1}{\Gamma(s)} \int\limits_0^\infty
t^{s-1} e^{-wt}\,f(t) \,\frac{dt}{t^M}, \,\,\Re(s)>M,\,\Re(w)>0.
\end{equation}
It is shown in \cite{Ruij} that $\zeta_M(s, \,w)$ has the analytic
continuation to a meromorphic function in $s\in\mathbb{C}$
with simple poles at $s=1, 2, \cdots M.$ The generalized log-multiple gamma
function is then defined by
\begin{equation}\label{Ldef}
L_M(w\,|\,a) \triangleq \partial_s \zeta_M(s, \,w\,|\,a)|_{s=0}, \,\,\Re(w)>0.
\end{equation}
It can be analytically continued to a function that is holomorphic
over $\mathbb{C}-(-\infty, 0].$ The key result of \cite{Ruij}
is the following Malmst\'en-type formula for $L_M(w\,|\,a).$
Let $\Re(w)>0.$
\begin{equation}\label{key}
L_M(w\,|\,a) = \int\limits_0^\infty \frac{dt}{t^{M+1}} \Bigl(
e^{-wt}\,f(t) - \sum\limits_{k=0}^{M-1} \frac{t^k}{k!}\,B_{M,k}(w\,|\,a)
- \frac{t^M\,e^{-t}}{M!}\, B_{M,M}(w\,|\,a)\Bigr).
\end{equation}
$L_M(w\,|\,a)$ satisfies the asymptotic expansion,
\begin{gather}
L_M(w\,|\,a) = -\frac{1}{M!} B_{M, M}(w\,|\,a)\,\log(w) + \sum\limits_{k=0}^M
\frac{B_{M,k}(0\,|\,a) (-w)^{M-k}}{k!(M-k)!}\sum\limits_{l=1}^{M-k}
\frac{1}{l} + R_M(w\,|\,a), \label{asym}\\
R_M(w\,|\,a) = O(w^{-1}), \,|w|\rightarrow\infty, \, |\arg(w)|<\pi.
\label{asymremainder}
\end{gather}
Now, it is not difficult to show that Eq.
\eqref{zdef} implies
\begin{equation}\label{mzdef}
\zeta_M\bigl(s,\,w\,|\,a\bigr) =
\sum\limits_{k_1,\cdots,k_M=0}^\infty \bigl(w+k_1 a_1+\cdots+k_M
a_M\bigr)^{-s},\,\,\Re(s)>M,\,\Re(w)>0,
\end{equation}
which is the formula given originally by
Barnes \cite{mBarnes} for the multiple zeta function.
Following \cite{Ruij}, define the Barnes multiple gamma function by
\begin{equation}\label{mgamma}
\Gamma_M(w\,|\,a) \triangleq \exp\bigl(L_M(w\,|\,a)\bigr).
\end{equation}
It follows from Eqs. \eqref{mzdef} and \eqref{mgamma} that
$\Gamma_M(w\,|\,a)$ satisfies the fundamental functional equation
in Eq. \eqref{feq}.
The multiple gamma function in the modern normalization has the following properties, cf. \cite{Me14} for a detailed review.
The first property is scaling invariance.
Let \(\Re(w)>0,\) \(\kappa>0\) and \((\kappa\,a)_i\triangleq\kappa\,a_i,\;i=1\cdots M.\)
\begin{equation}\label{scale}
\Gamma_M(\kappa w\,|\,\kappa a) = \kappa^{-B_{M,M}(w\,|\,a)/M!}\,\Gamma_M(w\,|\,a).
\end{equation}
In the case of classical normalization, this result appears to be due to \cite{KataOhts}.
It was re-discovered in \cite{Kuz} in the special case of $M=2.$
The second property is Barnes multiplication.
Let $\Re(w)>0$ and $k=1,2,3,\cdots.$
\begin{equation}\label{multiplic}
\Gamma_M(kw\,|\,a) = k^{-B_{M,
M}(kw\,|\,a)/M!}\,\prod\limits_{p_1,\cdots,p_M=0}^{k-1}\Gamma_M\Bigl(w+\frac{\sum_{j=1}^M
p_j a_j}{k}\,\Big|\,a\Bigr).
\end{equation}
In the classical case, this result is due to \cite{mBarnes}. We will be particularly interested in the special case of $M=2$
and record the formula for $B_{2,2}(w\,|\,a)$ for future convenience.
\begin{equation}\label{B22}
B_{2,2}(w\,|\,a) =
\frac{w^2}{a_1a_2}-\frac{w(a_1+a_2)}{a_1a_2}+\frac{a_1^2+3a_1a_2+a_2^2}{6a_1a_2}.
\end{equation}
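Eq. \eqref{B22} can be confirmed against the generating-function definition in Eqs. \eqref{fdef} and \eqref{Bdefa} by truncated power series in exact rational arithmetic (our sketch): $B_{2,2}(w\,|\,a)$ is $2!$ times the $t^2$ coefficient of $f(t)e^{-wt},$ and $t/(1-e^{-at})=\tfrac{1}{a}+\tfrac{t}{2}+\tfrac{a t^2}{12}+O(t^3).$

```python
from fractions import Fraction as F

def mul(p, q, N=3):
    """Coefficient-wise product of two power series truncated at order t^{N-1}."""
    r = [F(0)] * N
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j < N:
                r[i + j] += a * b
    return r

def t_over_one_minus_exp(a):
    """Taylor coefficients of t/(1 - e^{-a t}) through t^2: 1/a + t/2 + a t^2 / 12."""
    a = F(a)
    return [1 / a, F(1, 2), a / 12]

a1, a2, w = F(1), F(2), F(3)
f = mul(t_over_one_minus_exp(a1), t_over_one_minus_exp(a2))  # f(t) = t^2 prod (1-e^{-a_j t})^{-1}
g = mul(f, [F(1), -w, w * w / 2])                            # f(t) e^{-w t}
B22_series = 2 * g[2]                                        # 2! times the t^2 coefficient
B22_closed = (w * w / (a1 * a2) - w * (a1 + a2) / (a1 * a2)
              + (a1 * a1 + 3 * a1 * a2 + a2 * a2) / (6 * a1 * a2))
print(B22_series, B22_closed)
```

For $a=(1,2)$ and $w=3$ both expressions evaluate to the rational number $11/12.$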
Finally, we state the general Shintani factorization in a slightly simplified way that is sufficient for our needs.
Given arbitrary $x>0,$ there exist functions \(\Psi_{M+1}(w,y\,|\,a)\) and
$\phi_{M+1}\bigl(w,y\,|\,a, a_{M+1}\bigr)$ such that
\begin{align}
\Gamma_{M+1}\bigl(w\,|\,a,a_{M+1}\bigr) = & \prod\limits_{k=1}^\infty \frac{\Gamma_M(w+ka_{M+1}\,|\,a)}{\Gamma_M(x+ka_{M+1}\,|\,a)}e^{\Psi_{M+1}(x,\,ka_{M+1}\,|\,a)-\Psi_{M+1}(w,\,ka_{M+1}\,|\,a)} \times \nonumber \\
&\times \exp{\bigl(\phi_{M+1}(w,x\,|\,a, a_{M+1})\bigr)}\, \Gamma_{M}(w\,|\,a). \label{generalfactorization}
\end{align}
\(\Psi_{M+1}(w,y\,|\,a)\) and \(\phi_{M+1}(w,y\,|\,a, a_{M+1})\) are \emph{polynomials} in $w$ of degree $M+1.$
Explicit formulas for these functions are given in \cite{Me14}. In the classical normalization this type of factorization, cf. Eq. \eqref{ShintaniGamma} above, was discovered in \cite{Shi} for $M=1$ and
extended to general $M$ in \cite{KataOhts}. We gave new proofs for $M=1$ using the classical normalization in \cite{MeIMRN} and for general $M$ in the modern normalization in \cite{Me14}.
For concreteness, in what follows we will write $\Gamma_M(z\,|\,a)$ to mean the multiple gamma function in the modern as opposed to the classical normalization, unless specifically stated to the contrary. As we will see, the key formulas are independent of the choice of normalization, as the
fundamental functional equation in Eq. \eqref{feq} is the same in both normalizations and our formulas involve \emph{ratios}. This is in particular true of the identity in Eq. \eqref{repeated}.
We conclude this section with a brief mention of the multiple sine function, which occurs in the context of ratios of Barnes beta distributions
as well as in the analytic continuation of the complex Selberg integral.
The multiple sine function \cite{KurKoya} is defined by
\begin{equation}\label{msinedef}
S_M(w\,|\,a) \triangleq \frac{\Gamma_M(|a|-w\,|\,a)^{(-1)^M}}{\Gamma_M(w\,|\,a)},
\end{equation}
where $M=0,1,2\cdots,$ $|a|=\sum_{i=1}^M a_i,$ and $a=(a_1,\cdots, a_M)$ are fixed positive
constants.
It satisfies the same functional equation as the multiple gamma function,
\begin{equation}\label{feqsine}
S_{M}(w\,|\,a) = S_{M-1}(w\,|\,\hat{a}_i)\,S_M\bigl(w+a_i\,|\,a\bigr),\,i=1\cdots
M,
\end{equation}
$\hat{a}_i = (a_1,\cdots, a_{i-1},\,a_{i+1},\cdots, a_{M}).$ In particular, when $M=1,$ we recover the classical sine function,
\begin{equation}
S_1(w\,|\,a) = 2\sin(\pi w/a).
\end{equation}
Let $a=(1, \, \tau).$ Given the functional equation, the analogue of Eq. \eqref{repeated} is the identity,
\begin{align}
\frac{S_2(w+1-k\,|\,1,\tau)}{S_2(w+1\,|\,1,\tau)} = & \prod\limits_{j=0}^{k-1}
S_1\bigl(w-j\,|\,\tau\bigr), \nonumber \\
= &
\prod\limits_{j=0}^{k-1} 2 \sin\pi\bigl(\frac{w}{\tau}-\frac{j}{\tau}\bigr). \label{Srepeated}
\end{align}
In most of the applications of the double gamma function below its second argument is $a=(1,\,\tau).$
In all such cases we will write $\Gamma_2(\cdot\,|\,\tau)$ as an abbreviation for $\Gamma_2(\cdot\,|\,1, \tau).$
\section{Morris Integral Distribution: Analytical Approach}\label{CirAnalytical}
\noindent
In this section we will construct a positive probability distribution having the properties that it matches the moments and intermittency
expansion of the total mass of the Bacry-Muzy GMC on the circle as specified in Eqs. \eqref{momlambda} and \eqref{cpncircle} in Section \ref{Problem}.
Our approach in this section is analytical and focused on the Mellin transform of what we call the Morris integral probability distribution.
Its moments match the full Morris integral in Eq. \eqref{morris2}.
The fine probabilistic structure of this distribution is described in Section \ref{Probabilistic} below. The proofs of all results
in this section are given in Section \ref{proofsanalytical}.
Define the function
\begin{align}
\mathfrak{M}(q\,|\tau,\,\lambda_1,\,\lambda_2)=&\frac{\tau^{\frac{q}{\tau}}}{\Gamma^q\bigl(1-\frac{1}{\tau}\bigr)}
\frac{\Gamma_2(\tau(\lambda_1+\lambda_2+1)+1-q\,|\,\tau)}{\Gamma_2(\tau(\lambda_1+\lambda_2+1)+1\,|\,\tau)}
\frac{\Gamma_2(-q+\tau\,|\,\tau)}{\Gamma_2(\tau\,|\,\tau)}\times \nonumber \\ & \times
\frac{\Gamma_2(\tau(1+\lambda_1)+1\,|\,\tau)}{\Gamma_2(\tau(1+\lambda_1)+1-q\,|\,\tau)}
\frac{\Gamma_2(\tau(1+\lambda_2)+1\,|\,\tau)}{\Gamma_2(\tau(1+\lambda_2)+1-q\,|\,\tau)}. \label{thefunctioncircle}
\end{align}
Equivalently, we have the identity in terms of the Alexeiewsky-Barnes $G-$function.
\begin{align}
\mathfrak{M}(q\,|\tau,\,\lambda_1,\,\lambda_2)=&\frac{1}{\Gamma^q\bigl(1-\frac{1}{\tau}\bigr)}
\frac{G(\tau(\lambda_1+\lambda_2+1)+1\,|\,\tau)}{G(\tau(\lambda_1+\lambda_2+1)+1-q\,|\,\tau)}
\frac{G(\tau\,|\,\tau)}{G(-q+\tau\,|\,\tau)}\times \nonumber \\ & \times
\frac{G(\tau(1+\lambda_1)+1-q\,|\,\tau)}{G(\tau(1+\lambda_1)+1\,|\,\tau)}
\frac{G(\tau(1+\lambda_2)+1-q\,|\,\tau)}{G(\tau(1+\lambda_2)+1\,|\,\tau)}. \label{MG}
\end{align}
Then, we have the following results.
\begin{theorem}\label{InfinFacCir}
The function $\mathfrak{M}(q\,|\tau,\,\lambda_1,\,\lambda_2)$ has the infinite product factorization,
\begin{align}
\mathfrak{M}(q\,|\tau,\,\lambda_1,\,\lambda_2) = & \frac{\Gamma(1-q/\tau)}{\Gamma^q(1-1/\tau)} \prod\limits_{m=1}^\infty \Bigl[
\frac{\Gamma(1-q+m\tau)}{\Gamma(1+m\tau)} \frac{\Gamma(1+\tau\lambda_1+m\tau)}{\Gamma(1-q+\tau\lambda_1+m\tau)}
\frac{\Gamma(1+\tau\lambda_2+m\tau)}{\Gamma(1-q+\tau\lambda_2+m\tau)} \nonumber \times \\ & \times \frac{\Gamma(1-q+\tau(\lambda_1+\lambda_2)+m\tau)}{\Gamma(1+\tau(\lambda_1+\lambda_2)+m\tau)}\Bigr].
\end{align}
\end{theorem}
\begin{theorem}\label{CirMoments}
The function $\mathfrak{M}(q\,|\tau,\,\lambda_1,\,\lambda_2)$ reproduces the product in Eq. \eqref{morris2} when $q=n<\tau.$
The values of $\mathfrak{M}(q\,|\tau,\,\lambda_1,\,\lambda_2)$ at the negative integers, $q=-n,$ $n\in\mathbb{N},$ are
\begin{equation}\label{CirNegMoments}
\mathfrak{M}(-n\,|\tau,\,\lambda_1,\,\lambda_2) = \prod\limits_{j=0}^{n-1} \frac{\Gamma(1+\lambda_1+\frac{(j+1)}{\tau}) \,\Gamma(1+\lambda_2+\frac{(j+1)}{\tau})\Gamma(1-\frac{1}{\tau})}{\Gamma(1+\lambda_1+\lambda_2+\frac{(j+1)}{\tau})\,\Gamma(1+\frac{j}{\tau})}.
\end{equation}
\end{theorem}
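Theorem \ref{InfinFacCir} and the closed form in Eq. \eqref{CirNegMoments} can be cross-checked numerically by truncating the infinite product; the following sketch (a sanity check only, not part of the proofs; the truncation length and tolerance are heuristic choices) confirms their agreement at $q=-1,\,-2.$

```python
import math

def M_product(q, tau, l1, l2, terms=20000):
    # Truncation of the infinite product of Theorem InfinFacCir,
    # accumulated in log scale with math.lgamma for stability.
    lg = math.lgamma
    logM = lg(1 - q / tau) - q * lg(1 - 1 / tau)
    for m in range(1, terms + 1):
        mt = m * tau
        logM += (lg(1 - q + mt) - lg(1 + mt)
                 + lg(1 + tau * l1 + mt) - lg(1 - q + tau * l1 + mt)
                 + lg(1 + tau * l2 + mt) - lg(1 - q + tau * l2 + mt)
                 + lg(1 - q + tau * (l1 + l2) + mt) - lg(1 + tau * (l1 + l2) + mt))
    return math.exp(logM)

def M_neg_moment(n, tau, l1, l2):
    # Closed form of the negative moments, Eq. (CirNegMoments), at q = -n.
    g = math.gamma
    val = 1.0
    for j in range(n):
        val *= (g(1 + l1 + (j + 1) / tau) * g(1 + l2 + (j + 1) / tau)
                * g(1 - 1 / tau)
                / (g(1 + l1 + l2 + (j + 1) / tau) * g(1 + j / tau)))
    return val

tau, l1, l2 = 1.5, 0.3, 0.5
for n in (1, 2):
    print(n, M_product(-n, tau, l1, l2), M_neg_moment(n, tau, l1, l2))
```

The factors of the product approach $1$ at the rate $O(m^{-2}),$ so the truncation error is small compared to the tolerance used below.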
\begin{theorem}[Morris Integral Probability Distribution]\label{CircleExist}
The function $\mathfrak{M}(iq\,|\tau,\,\lambda_1,\,\lambda_2),$ $q\in\mathbb{R},$ $\tau>1,$
is the Fourier transform of an
infinitely divisible, absolutely continuous probability distribution on $\mathbb{R}$
with the L\'evy-Khinchine decomposition
\begin{align}
\log \mathfrak{M}(iq\,|\tau,\,\lambda_1,\,\lambda_2) = & \int\limits_0^\infty
\frac{dx}{x} (e^{ixq}-1) \frac{e^{-\tau x}\bigl(1-e^{-(1+\tau\lambda_1)x}\bigr)\bigl(1-e^{-(1+\tau\lambda_2)x}\bigr)}{(1-e^{-x})(1-e^{-x\tau})}
+ \nonumber \\
& + \int\limits_0^\infty
\frac{dx}{x} (e^{ixq}-1-ixq) \frac{e^{-x\bigl(1+\tau(\lambda_1+\lambda_2)\bigr)}}{e^{x\tau}-1} + iq\,\mathrm{const}. \label{CirLKh}
\end{align}
\end{theorem}
\begin{corollary}\label{CirExistRV}
Denote the density of the probability distribution in Theorem \ref{CircleExist} by $f(x).$ The function
$q\rightarrow\mathfrak{M}(q\,|\,\tau,\lambda_1,\lambda_2)$ for
$\Re(q)<\tau$ and $\tau>1$ is the Mellin transform of the
probability density function $f(\log
y)/y,$ $y\in (0,\,\infty).$
\end{corollary}
Denote the probability distribution corresponding to $f(\log y)/y$ by $M_{(\tau, \lambda_1,
\lambda_2)}.$ We call it the Morris integral probability distribution. Thus, we have established the identity
\begin{equation}
{\bf E} [M_{(\tau, \lambda_1,
\lambda_2)}^q] = \mathfrak{M}(q\,|\,\tau,\lambda_1,\lambda_2), \; \Re(q)<\tau.
\end{equation}
\begin{theorem}\label{CirAsymp}
The function $\log\mathfrak{M}(q\,|\,\tau,\lambda_1,\lambda_2)$ has
the asymptotic expansion as $\tau\rightarrow +\infty,$
\begin{gather}
\log\mathfrak{M}(q\,|\tau,\,\lambda_1,\,\lambda_2)\thicksim q\Bigl(\log\Gamma(1+\lambda_1+\lambda_2)-\log\Gamma(1+\lambda_1)-\log\Gamma(1+\lambda_2)\Bigr) +
\nonumber \\ + \sum\limits_{p=1}^\infty
\frac{1}{p\tau^p} \Bigl[\bigl(\zeta(p,\,1+\lambda_1+\lambda_2)-\zeta(p, 1+\lambda_1)-\zeta(p, 1+\lambda_2)\bigr)\frac{B_{p+1}(q)-B_{p+1}}{p+1}+\nonumber \\ +
\zeta(p)\frac{B_{p+1}(q+1)-B_{p+1}}{p+1}-q\zeta(p)\Bigr].
\end{gather}
\end{theorem}
\begin{theorem}\label{CicInvolution}
The Mellin transform is involution invariant under
\begin{equation}\label{invtranscircle}
\tau\rightarrow \frac{1}{\tau},\; q\rightarrow \frac{q}{\tau}, \; \lambda_i\rightarrow \tau\lambda_i.
\end{equation}
\begin{equation}
\mathfrak{M}\bigl(\frac{q}{\tau}\,\Big|\,\frac{1}{\tau},\tau\lambda_1,\tau\lambda_2\bigr) \,\Gamma^{\frac{q}{\tau}}(1-\tau) \Gamma(1-\frac{q}{\tau}) =
\mathfrak{M}(q\,|\,\tau,\lambda_1,\lambda_2) \Gamma^{q}(1-\frac{1}{\tau}) \Gamma(1-q). \label{invcircle}
\end{equation}
\end{theorem}
It is worth pointing out that in the special case of $\lambda_1=\lambda_2=0$ every factor of the infinite product in Theorem \ref{InfinFacCir} equals 1, so that
\begin{equation}
\mathfrak{M}(q\,|\tau, 0, 0)=\frac{\Gamma\bigl(1-\frac{q}{\tau}\bigr)}{\Gamma^q\bigl(1-\frac{1}{\tau}\bigr)}.
\end{equation}
\begin{conjecture}[Law of Total Mass]\label{ourmainconjcircle}
Let $M_{(\tau,\lambda_1,\lambda_2)}$ be as constructed in Theorem \ref{CircleExist} and let
$\lambda_1=\lambda_2=\lambda.$\footnote{This restriction is necessary as $M_{(\tau,\lambda_1,\lambda_2)}$ is real-valued whereas
$\int_{-1/2}^{1/2} e^{2\pi i\psi \frac{\lambda_1-\lambda_2}{2}} \, |1+e^{2\pi i\psi}|^{\lambda_1+\lambda_2}\, M_{\beta}(d\psi)$ is not in general, unless $\lambda_1=\lambda_2.$ The problem of determining the law of $\int_{-1/2}^{1/2} e^{2\pi i\psi \frac{\lambda_1-\lambda_2}{2}} \, |1+e^{2\pi i\psi}|^{\lambda_1+\lambda_2}\, M_{\beta}(d\psi)$ for $\lambda_1\neq\lambda_2$ is left to future research.} Then,
\begin{equation}
M_{(\tau,\lambda,\lambda)} \overset{{\rm in \,law}}{=} \int_{-1/2}^{1/2} |1+e^{2\pi i\psi}|^{2\lambda}\, M_{\beta}(d\psi),\; \tau=1/\beta^2>1.
\end{equation}
\end{conjecture}
This conjecture for $\lambda=0$ is due to \cite{FyoBou}. It was recently verified in \cite{Remy}. The general case is due to
\cite{Me16}.
\section{Selberg Integral Distribution: Analytical Approach}\label{IntAnalytical}
\noindent
In this section we will construct a positive probability distribution having the properties that it matches the moments and intermittency
expansion of the total mass of the Bacry-Muzy GMC on the interval as specified in Eqs. \eqref{Selberg} and \eqref{cpninterval} in Section \ref{Problem}.
Our approach in this section is analytical and focused on the Mellin transform of what we call the Selberg integral probability distribution.
The fine probabilistic structure of this distribution is described in Section \ref{Probabilistic} below. The proofs of all results
in this section are given in Section \ref{proofsanalytical}.
Define the function
\begin{align}
\mathfrak{M}(q\,|\,\tau,\lambda_1,\lambda_2) \triangleq &
\Bigl(\frac{2\pi\,\tau^{\frac{1}{\tau}}}{\Gamma\bigl(1-1/\tau\bigr)}\Bigr)^q\;
\frac{\Gamma_2(1-q+\tau(1+\lambda_1)\,|\,\tau)}{\Gamma_2(1+\tau(1+\lambda_1)\,|\,\tau)}
\frac{\Gamma_2(1-q+\tau(1+\lambda_2)\,|\,\tau)}{\Gamma_2(1+\tau(1+\lambda_2)\,|\,\tau)}\times
\nonumber \\ & \times
\frac{\Gamma_2(-q+\tau\,|\,\tau)}{\Gamma_2(\tau\,|\,\tau)}
\frac{\Gamma_2(2-q+\tau(2+\lambda_1+\lambda_2)\,|\,\tau)}{\Gamma_2(2-2q+\tau(2+\lambda_1+\lambda_2)\,|\,\tau)}
\label{thefunctioninterval}
\end{align}
for $\Re(q)<\tau.$
Equivalently, we have the identity in terms of the Alexeiewsky-Barnes $G-$function.
\begin{gather}
\mathfrak{M}(q\,|\,\tau,\lambda_1,\lambda_2) =
\Gamma^{-q}\bigl(1-1/\tau\bigr)
\frac{G(1+\tau(1+\lambda_1)\,|\,\tau)}{G(1-q+\tau(1+\lambda_1)\,|\,\tau)}
\frac{G(1+\tau(1+\lambda_2)\,|\,\tau)}{G(1-q+\tau(1+\lambda_2)\,|\,\tau)}\times
\nonumber \\ \times \frac{G(1+\tau\,|\,\tau)}{G(-q+\tau\,|\,\tau)}
\frac{G(2-2q+\tau(2+\lambda_1+\lambda_2)\,|\,\tau)}{G(2-q+\tau(2+\lambda_1+\lambda_2)\,|\,\tau)}. \label{MGI}
\end{gather}
\begin{theorem}\label{InfinFacInt}
Given $\Re(q)<\tau,$
\begin{gather}
\mathfrak{M}(q\,|\,\tau,\lambda_1,\lambda_2) =
\frac{\tau^{q}\Gamma\bigl(1-q/\tau\bigr)\Gamma\bigl(2-2q+\tau(1+\lambda_1+\lambda_2)\bigr)}{
\Gamma^{q}\bigl(1-1/\tau\bigr)\Gamma\bigl(2-q+\tau(1+\lambda_1+\lambda_2)\bigr)}
\prod\limits_{m=1}^\infty \left(m\tau\right)^{2q}
\frac{\Gamma\bigl(1-q+m\tau\bigr)}{\Gamma\bigl(1+m\tau\bigr)}
\times \nonumber \\
\times
\frac{\Gamma\bigl(1-q+\tau\lambda_1+m\tau\bigr)}{\Gamma\bigl(1+\tau\lambda_1+m\tau\bigr)}
\frac{\Gamma\bigl(1-q+\tau\lambda_2+m\tau\bigr)}{\Gamma\bigl(1+\tau\lambda_2+m\tau\bigr)}
\frac{\Gamma\bigl(2-q+\tau(\lambda_1+\lambda_2)+m\tau\bigr)}{\Gamma\bigl(2-2q+\tau(\lambda_1+\lambda_2)+m\tau\bigr)}.\label{InfiniteSelberg}
\end{gather}
\end{theorem}
\begin{theorem}\label{IntMoments}
The function $\mathfrak{M}(q\,|\,\tau,\lambda_1,\lambda_2)$ reproduces the product in Eq. \eqref{Selberg} for $q=n<\tau.$
The values of $\mathfrak{M}(q\,|\,\tau,\lambda_1,\lambda_2)$ at the negative integers, $q=-n,$ $n\in\mathbb{N},$ are
\begin{equation}
\mathfrak{M}(-n\,|\,\tau,\lambda_1,\lambda_2) = \prod_{k=0}^{n-1}
\frac{\Gamma\bigl(2+\lambda_1+\lambda_2+(n+2+k)/\tau\bigr)
\Gamma\bigl(1-1/\tau\bigr) }{
\Gamma\bigl(1+\lambda_1+(k+1)/\tau\bigr)\Gamma\bigl(1+\lambda_2+(k+1)/\tau\bigr)
\Gamma\bigl(1+k/\tau\bigr) }. \label{IntNegMoments}
\end{equation}
\end{theorem}
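As for the circle, Theorem \ref{InfinFacInt} and the closed form in Eq. \eqref{IntNegMoments} can be cross-checked numerically by truncating the product in Eq. \eqref{InfiniteSelberg}; a sketch (a sanity check only, not part of the proofs; the truncation length and tolerance are heuristic choices):

```python
import math

def M_int_product(q, tau, l1, l2, terms=20000):
    # Truncation of the infinite product in Eq. (InfiniteSelberg), log scale.
    lg, s = math.lgamma, l1 + l2
    logM = (q * math.log(tau) + lg(1 - q / tau) - q * lg(1 - 1 / tau)
            + lg(2 - 2 * q + tau * (1 + s)) - lg(2 - q + tau * (1 + s)))
    for m in range(1, terms + 1):
        mt = m * tau
        logM += (2 * q * math.log(mt)
                 + lg(1 - q + mt) - lg(1 + mt)
                 + lg(1 - q + tau * l1 + mt) - lg(1 + tau * l1 + mt)
                 + lg(1 - q + tau * l2 + mt) - lg(1 + tau * l2 + mt)
                 + lg(2 - q + tau * s + mt) - lg(2 - 2 * q + tau * s + mt))
    return math.exp(logM)

def M_int_neg_moment(n, tau, l1, l2):
    # Closed form of Eq. (IntNegMoments) at q = -n.
    g = math.gamma
    val = 1.0
    for k in range(n):
        val *= (g(2 + l1 + l2 + (n + 2 + k) / tau) * g(1 - 1 / tau)
                / (g(1 + l1 + (k + 1) / tau) * g(1 + l2 + (k + 1) / tau)
                   * g(1 + k / tau)))
    return val

tau, l1, l2 = 1.5, 0.3, 0.5
print(M_int_product(-1, tau, l1, l2), M_int_neg_moment(1, tau, l1, l2))
```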
\begin{theorem}[Selberg integral probability distribution]\label{IntExist}
The function
$q\rightarrow\mathfrak{M}(iq\,|\,\tau,\lambda_1,\lambda_2),$
$q\in\mathbb{R},$ $\tau>1,$ is the Fourier transform of an
infinitely divisible probability distribution on $\mathbb{R}$
with the L\'evy-Khinchine decomposition
\begin{equation}
\log\mathfrak{M}(iq\,|\,\tau,\lambda_1,\lambda_2) =
iq\,\mathfrak{m}(\tau) - \frac{1}{2} q^2
\sigma^2(\tau)+\int_{\mathbb{R}\setminus \{0\}} \bigl(e^{iq u}-1-iq
u/(1+u^2)\bigr) d\mathcal{M}_{(\tau, \lambda_1, \lambda_2)}(u)
\end{equation}
for
some $\mathfrak{m}(\tau)\in\mathbb{R}$ and the following gaussian
component and spectral function
\begin{gather}
\sigma^2(\tau) = \frac{4\,\log 2}{\tau}, \\
\mathcal{M}_{(\tau, \lambda_1,
\lambda_2)}(u)\!\!=\!\!-\!\!\int\limits_u^\infty \!\!
\Bigl[\frac{\bigl(e^x+e^{-x\tau\lambda_1}+e^{-x\tau\lambda_2}+e^{-x(1+\tau(1+\lambda_1+\lambda_2))}\bigr)}{\bigl(e^x-1\bigr)(e^{x\tau}-1)}
\!-\!\frac{e^{-x(1+\tau(1+\lambda_1+\lambda_2))/2}}{\bigl(e^{x/2}-1\bigr)(e^{x\tau/2}-1)}\Bigr]\!\!
\frac{dx}{x} \label{IntLKh}
\end{gather}
for $u>0,$ and $\mathcal{M}_{(\tau, \lambda_1, \lambda_2)}(u)=0$ for
$u<0.$ $d\mathcal{M}_{(\tau, \lambda_1, \lambda_2)}(u)/du > 0$ for
$u>0.$
\end{theorem}
\begin{corollary}\label{IntExistRV}
The probability distribution in Theorem \ref{IntExist} has a bounded,
continuous, zero-free density function $f(x),$ $x\in\mathbb{R}.$ The function
$q\rightarrow\mathfrak{M}(q\,|\,\tau,\lambda_1,\lambda_2)$ for
$\Re(q)<\tau$ and $\tau>1$ is the Mellin transform of the
probability density function $f(\log
y)/y,$ $y\in\mathbb{R}_+.$
\end{corollary}
Denote the probability distribution corresponding to $f(\log y)/y$ by $M_{(\tau, \lambda_1,
\lambda_2)}.$ We call it the Selberg integral probability distribution.
Thus, we have established the identity
\begin{equation}
{\bf E} [M_{(\tau, \lambda_1,
\lambda_2)}^q] = \mathfrak{M}(q\,|\,\tau,\lambda_1,\lambda_2), \; \Re(q)<\tau.
\end{equation}
\begin{theorem}\label{IntAsymp}
The function $\log\mathfrak{M}(q\,|\,\tau,\lambda_1,\lambda_2)$ has
the asymptotic expansion as $\tau\rightarrow +\infty,$
\begin{gather}
\log\mathfrak{M}(q\,|\,\tau,\lambda_1,\lambda_2) \thicksim
q\log\Bigl(\frac{\Gamma(1+\lambda_1)\Gamma(1+\lambda_2)}{\Gamma(2+\lambda_1+\lambda_2)}\Bigr)+
\sum\limits_{p=1}^\infty \Bigl(\frac{1}{\tau}\Bigr)^{p}
\frac{1}{p}\Bigl[-\zeta(p)q+\nonumber
\\+\bigl(\zeta(p, 1+\lambda_1)+\zeta(p,
1+\lambda_2)\bigr)\Bigl(\frac{B_{p+1}(q)-B_{p+1}}{p+1}\Bigr)
+ \zeta(p) \times \nonumber \\
\times\Bigl(\frac{B_{p+1}(q+1)-B_{p+1}}{p+1}\Bigr) - \zeta(p,
2+\lambda_1+\lambda_2)
\Bigl(\frac{B_{p+1}(2q-1)-B_{p+1}(q-1)}{p+1}\Bigr)\Bigr].\label{logMellinAsympI}
\end{gather}
\end{theorem}
\begin{theorem}\label{Mtransforminvol}
The Mellin transform is involution invariant under
\begin{equation}
\tau\rightarrow \frac{1}{\tau},\; q\rightarrow \frac{q}{\tau}, \; \lambda_i\rightarrow \tau\lambda_i.
\end{equation}
\begin{align}
\mathfrak{M}\bigl(\frac{q}{\tau}\,\Big|\,\frac{1}{\tau},\tau\lambda_1,\tau\lambda_2\bigr) (2\pi)^{-\frac{q}{\tau}}\,\Gamma^{\frac{q}{\tau}}(1-\tau) \Gamma(1-\frac{q}{\tau}) = &
\mathfrak{M}(q\,|\,\tau,\lambda_1,\lambda_2) (2\pi)^{-q} \times \nonumber \\ & \times \Gamma^{q}(1-\frac{1}{\tau}) \Gamma(1-q).\label{involutionint}
\end{align}
\end{theorem}
This result can be proved by both analytic and probabilistic means. We prefer the latter, cf. Corollary \ref{mycorollary}, so that the proof of Theorem \ref{Mtransforminvol} is given in Section \ref{ProbProofs} below. The analytic proof is similar to the proofs of Theorem \ref{CicInvolution} in Section \ref{proofsanalytical} and Corollary \ref{MComplextransforminvol} in Section \ref{AnalyticalComplexSelberg}.
\begin{conjecture}[Law of Total Mass]\label{ourmainconjinterval}
Let $M_{(\tau,\lambda_1,\lambda_2)}$ be as constructed in Theorem \ref{IntExist}. Then,
\begin{equation}
M_{(\tau,\lambda_1,\lambda_2)} \overset{{\rm in \,law}}{=} \int_0^1 \,s^{\lambda_1}(1-s)^{\lambda_2} \,
M_\beta(ds),\; \tau=1/\beta^2>1.
\end{equation}
\end{conjecture}
The expression for the negative moments in Eq. \eqref{IntNegMoments} with $\lambda_1=\lambda_2=0$ first appeared in \cite{Me3}.
Theorem \ref{IntMoments} first appeared in \cite{FLDR}, who gave an equivalent expression for the right-hand
side of Eq. \eqref{MGI} and verified that it matches Eq. \eqref{Selberg} without proving analytically that their formula corresponds to the Mellin transform of a probability distribution. The special case of $\lambda_1=\lambda_2=0$ of Theorems \ref{InfinFacInt}, \ref{IntExist}, and
\ref{IntAsymp} first appeared in \cite{Me4} and the general case in \cite{MeIMRN}.
The involution invariance of the Mellin transform in the equivalent form of self-duality, see Eq. \eqref{selfdual} below, was first discovered in the special
case of $\lambda_1=\lambda_2=0$ in \cite{FLDR}. We extended it to the general case in the form of Eq. \eqref{involutionint} in \cite{Me14},
followed by the general form of self-duality in \cite{FLD}.
The interested reader can find additional information about the Selberg integral
distribution such as functional equations, an analytical approach to the moment problem based on Eq. \eqref{BillKingAsymp}, and the tails
in \cite{Me4} and \cite{MeIMRN}.
It is worth pointing out that in our original derivation of the Selberg integral probability distribution in \cite{Me4} in the
special case of $\lambda_1=\lambda_2=0$ and in \cite{MeIMRN} in general
we first established Eq. \eqref{MGI} by
summing the asymptotic series in Theorem \ref{IntAsymp} using Hardy's moment
constant method, cf. Sections 4.12 and 4.13 in \cite{Hardy},
and Ramanujan's generalization of Watson's lemma, cf. Lemma 10.2 in Chap. 38 of \cite{Berndt},
\emph{i.e.} we summed
the intermittency expansions in closed form as explained in Section \ref{Problem}. This asymptotic
series is divergent in general; however, it is convergent for finite ranges of positive and negative
integer $q,$ see \cite{Me4}.
\section{Proofs of Analytical Results}\label{proofsanalytical}
\noindent In this section we will give proofs of all
results that were stated in Sections \ref{CirAnalytical} and \ref{IntAnalytical}. All the proofs rely on various properties
of the Alexeiewsky-Barnes $G$-function and the double gamma function, which were reviewed in
Section \ref{BarnesReview}. Recall that $\tau=2/\mu$ as defined in Section \ref{Problem}.
\subsection{The Circle}
\noindent
We start by noting that the equivalence between the formulas for $\mathfrak{M}(q\,|\,\tau,\lambda_1,\lambda_2)$
in Eqs. \eqref{thefunctioncircle} and \eqref{MG} follows from Eq. \eqref{GfromG2}.
\begin{proof}[Proof of Theorem \ref{InfinFacCir}.]
Using the functional equation of the $G(z\,|\,\tau)$ function in Eq. \eqref{Gfunct1}, we can re-write the product in Eq. \eqref{MG} in the
form
\begin{align}
\mathfrak{M}(q\,|\tau,\,\lambda_1,\,\lambda_2)=&\frac{\Gamma(1-q/\tau)}{\Gamma^q\bigl(1-\frac{1}{\tau}\bigr)}
\frac{G(\tau(\lambda_1+\lambda_2+1)+1\,|\,\tau)}{G(\tau(\lambda_1+\lambda_2+1)+1-q\,|\,\tau)}
\frac{G(1+\tau\,|\,\tau)}{G(1-q+\tau\,|\,\tau)}\times \nonumber \\ & \times
\frac{G(\tau(1+\lambda_1)+1-q\,|\,\tau)}{G(\tau(1+\lambda_1)+1\,|\,\tau)}
\frac{G(\tau(1+\lambda_2)+1-q\,|\,\tau)}{G(\tau(1+\lambda_2)+1\,|\,\tau)}.
\end{align}
It now remains to apply the Shintani identity in Eq. \eqref{ShintaniG} to each of the $G-$factors and notice the cancellations of all
terms except for the $\Gamma-$factors. \qed
\end{proof}
\begin{proof}[Proof of Theorem \ref{CirMoments}.]
These formulas are immediate from Eq. \eqref{repeated} or equivalently Eq. \eqref{Grepeated}. \qed
\end{proof}
\begin{proof}[Proof of Theorem \ref{CircleExist}.]
The proof is based on the identities in Eqs. \eqref{Malmsten} and \eqref{Gratioidentity}. We start by re-writing Eq. \eqref{MG} by means of
Eq. \eqref{Gfunct1} in the form
\begin{align}
\mathfrak{M}(q\,|\tau,\,\lambda_1,\,\lambda_2)=&\frac{1}{\Gamma^q\bigl(1-\frac{1}{\tau}\bigr)}
\frac{\Gamma(\frac{ \tau(\lambda_1+\lambda_2+1)+1-q}{\tau})}{\Gamma(\frac{ \tau(\lambda_1+\lambda_2+1)+1}{\tau})}
\frac{G(\tau(\lambda_1+\lambda_2+1)+2\,|\,\tau)}{G(\tau(\lambda_1+\lambda_2+1)+2-q\,|\,\tau)}
\frac{G(\tau\,|\,\tau)}{G(-q+\tau\,|\,\tau)}\times \nonumber \\ & \times
\frac{G(\tau(1+\lambda_1)+1-q\,|\,\tau)}{G(\tau(1+\lambda_1)+1\,|\,\tau)}
\frac{G(\tau(1+\lambda_2)+1-q\,|\,\tau)}{G(\tau(1+\lambda_2)+1\,|\,\tau)}.
\end{align}
We now make two observations. The four ratios of $G-$factors are in the same functional form as in Eq. \eqref{Gratioidentity} with
$b=\tau,$ $c=1+\tau\lambda_1,$ and $d=1+\tau\lambda_2.$ The ratio of the $\Gamma-$ factors has the functional form
$\Gamma(1+(z-q)/\tau)/\Gamma(1+z/\tau)$ with $z=1+\tau(\lambda_1+\lambda_2)$ and can be represented by means of Eq. \eqref{Malmsten}.
We have the identity
\begin{equation}
\log\Gamma\bigl(1+\frac{z-q}{\tau}\bigr) - \log\Gamma\bigl(1+\frac{z}{\tau}\bigr) = \int\limits_{0}^\infty \frac{dt}{t}
\Bigl[ e^{-tz}\bigl(\frac{e^{tq}-1}{e^{t\tau}-1}\bigr) - \frac{q}{\tau} e^{-t\tau}\Bigr].
\end{equation}
The formula in Eq. \eqref{CirLKh} follows. Thus, we have reduced $\log \mathfrak{M}(iq\,|\tau,\,\lambda_1,\,\lambda_2)$ to
the L\'evy-Khinchine form. It follows from the general theory of infinitely divisible probability distributions on the real line
that $\mathfrak{M}(iq\,|\tau,\,\lambda_1,\,\lambda_2)$ is the Fourier transform of an infinitely divisible distribution that is supported
on the real line, cf. Proposition 8.2 in Chapter 4 of \cite{SteVHar}, and is absolutely continuous, cf. Theorem 4.23 in Chapter 4 of \cite{SteVHar}.
\qed
\end{proof}
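The Malmst\'en-type identity used in the proof of Theorem \ref{CircleExist} can be confirmed by numerical quadrature; a sketch with the mpmath library (the parameter values are arbitrary illustrative choices, and `expm1` is used to keep the removable singularity at $t=0$ numerically stable):

```python
from mpmath import mp

mp.dps = 30
tau, z, q = mp.mpf("1.5"), mp.mpf("1.2"), mp.mpf("0.4")

def integrand(t):
    # [ e^{-tz} (e^{tq}-1)/(e^{t*tau}-1) - (q/tau) e^{-t*tau} ] / t
    return (mp.exp(-t * z) * mp.expm1(t * q) / mp.expm1(t * tau)
            - (q / tau) * mp.exp(-t * tau)) / t

lhs = mp.loggamma(1 + (z - q) / tau) - mp.loggamma(1 + z / tau)
rhs = mp.quad(integrand, [0, mp.inf])
print(lhs, rhs)
```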
\begin{proof}[Proof of Corollary \ref{CirExistRV}.]
Denote the random variable corresponding
to the L\'evy-Khinchine decomposition
by $X$ and its probability density by $f(x),$ $x\in\mathbb{R}.$ The Fourier transform of
$f(x)$ is
$\mathfrak{M}(iq\,|\,\tau,\lambda_1,\lambda_2)$ by construction,
which is analytic as a function of $q$ in the strip $\Im(q)>-\tau.$
Then, by the fundamental theorem of analytic characteristic
functions, confer Theorem 7.1.1 in Chap. 7 of \cite{Lukacs}, we have
for all $q$ in the strip of analyticity $\Im(q)>-\tau$
\begin{equation}
\mathfrak{M}(iq\,|\,\tau,\lambda_1,\lambda_2) =
\int\limits_\mathbb{R} e^{iqx}\,f(x)\,dx.
\end{equation}
On the other hand, the random variable
$M_{(\tau,\lambda_1,\lambda_2)}\triangleq \exp(X)$ has the density $f(\log
y)/y,$ $y\in (0, \infty),$ so that the right-hand
side is precisely its Mellin transform for
$\Re(q)<\tau$
\begin{equation}
\mathfrak{M}(q\,|\,\tau,\lambda_1,\lambda_2) = \int\limits_0^\infty
y^q\,f(\log y)\,\frac{dy}{y} = {\bf
E}\bigl[M_{(\tau,\lambda_1,\lambda_2)}^q\bigr]
\end{equation}
as seen upon relabeling $iq\rightarrow q$ and changing variables $y
= e^x.$
\qed
\end{proof}
\begin{proof}[Proof of Theorem \ref{CirAsymp}.]
This is an immediate corollary of Eqs. \eqref{IfuncG} and \eqref{IfuncGAsymptotic}.
It remains to apply these equations to each of the four ratios of the $G-$factors in Eq. \eqref{MG}
and collect the terms. \qed
\end{proof}
\begin{proof}[Proof of Theorem \ref{CicInvolution}.]
To prove the involution invariance in Eq. \eqref{invcircle}, we need to recall the scaling property of the multiple gamma function, see Eq. \eqref{scale}.
The transformation in Eq. \eqref{invtranscircle} corresponds to $\kappa=1/\tau.$ We note first that
\begin{equation}
\frac{1}{\tau}(1,\,\tau)=(1, \,\frac{1}{\tau})
\end{equation}
in the sense of the parameters of the double gamma function. We apply Eq. \eqref{scale} to each of the double gamma factors in Eq. \eqref{thefunctioncircle}
under the transformation in Eq. \eqref{invtranscircle}. For example,
\begin{align}
\Gamma_2\bigl(\frac{1}{\tau}(\tau\lambda_1+\tau\lambda_2+1)+1-\frac{q}{\tau}\,\Big|\,\frac{1}{\tau}\bigr) = & \Gamma_2\Bigl(\frac{1}{\tau}\bigl(\tau(\lambda_1+\lambda_2+1)+1-q\bigr)\,\Big|\,\frac{1}{\tau}\Bigr), \nonumber \\
= & \bigl(\frac{1}{\tau}\bigr)^{-B_{2,2}(\tau(\lambda_1+\lambda_2+1)+1-q)/2} \times\nonumber \\ & \times \Gamma_2\Bigl(\tau(\lambda_1+\lambda_2+1)+1-q\,\Big|\,\tau\Bigr).
\end{align}
Using the formula for $B_{2,2}(x\,|\,a)$ in Eq. \eqref{B22}
with $(a_1=1,\,a_2=\tau),$ we collect all terms and simplify to obtain
\begin{equation}
\mathfrak{M}\bigl(\frac{q}{\tau}\,|\,\frac{1}{\tau},\tau\lambda_1,\tau\lambda_2\bigr)
\,\Gamma^{\frac{q}{\tau}}(1-\tau) =
\mathfrak{M}(q\,|\,\tau,\lambda_1,\lambda_2)\Bigl(\frac{\tau^{\frac{1}{\tau}}}{\Gamma(1-\frac{1}{\tau})}\Bigr)^{-q}
\frac{\Gamma_2(1-q\,|\tau)\Gamma_2(\tau\,|\tau)}{\Gamma_2(1\,|\tau)\Gamma_2(\tau-q\,|\tau)}.
\end{equation}
It remains to observe that the functional equation of the double gamma function implies the identity
\begin{equation}\label{mydoublegammaidentity}
\tau^{-\frac{q}{\tau}} \frac{\Gamma_2(1-q\,|\tau)\Gamma_2(\tau\,|\tau)}{\Gamma_2(1\,|\tau)\Gamma_2(\tau-q\,|\tau)} =
\frac{\Gamma(1-q)}{\Gamma(1-\frac{q}{\tau})},
\end{equation}
which gives the result. \qed
\end{proof}
\subsection{The Interval}
\noindent
We start by noting that the equivalence between the formulas for $\mathfrak{M}(q\,|\,\tau,\lambda_1,\lambda_2)$
in Eqs. \eqref{thefunctioninterval} and \eqref{MGI} follows from Eq. \eqref{GfromG2} and the fact
\begin{equation}
G(\tau\,|\,\tau) = G(1+\tau\,|\,\tau).
\end{equation}
\begin{proof}[Proof of Theorem \ref{InfinFacInt}.]
The starting point is the Shintani identity, cf. Eq. \eqref{ShintaniG}.
Using the functional equations of the Alexeiewsky-Barnes
$G-$function, we first reduce some of the $G-$factors in Eq. \eqref{MGI} as
follows.
\begin{align}
\frac{1}{G(-q+\tau\,|\,\tau)}
\frac{G(2-2q+\tau(2+\lambda_1+\lambda_2)\,|\,\tau)}{G(2-q+\tau(2+\lambda_1+\lambda_2)\,|\,\tau)}
= & \tau^q
\frac{\Gamma\bigl(2-2q+\tau(1+\lambda_1+\lambda_2)\bigr)}{\Gamma\bigl(2-q+\tau(1+\lambda_1+\lambda_2)\bigr)}
\times \nonumber \\ & \times
\frac{\Gamma\bigl(1-q/\tau\bigr) }{G(1-q+\tau\,|\,\tau)}
\frac{G(2-2q+\tau(1+\lambda_1+\lambda_2)\,|\,\tau)}{G(2-q+\tau(1+\lambda_1+\lambda_2)\,|\,\tau)}.
\end{align}
We now apply the Shintani identity to each of the resulting
$G-$factors. It is easy to see that the terms that are
quadratic in $q$ all cancel out resulting in Eq. \eqref{InfiniteSelberg}. \qed
\end{proof}
\begin{proof}[Proof of Theorem \ref{IntMoments}.]
These formulas are immediate from Eq. \eqref{repeated} or equivalently Eq. \eqref{Grepeated}.
We note that these formulas also follow directly from Eq. \eqref{InfiniteSelberg}, see \cite{MeIMRN} for details. \qed
\end{proof}
\begin{proof}[Proof of Theorem \ref{IntExist}.]
The proof is based on the
Malmst\'en-type formula for $\log G(z\,|\,\tau)$ in Eq. \eqref{LK}.
Let $\Re(q)<\tau,$ $\tau>1,$ and $\lambda_1,
\lambda_2>-1/\tau.$ Applying Eq. \eqref{LK} to each $G-$factor in Eq. \eqref{MGI}, we can write
\begin{gather}
\log\mathfrak{M}(q\,|\,\tau,\lambda_1,\lambda_2) =
\int\limits_0^\infty \frac{dt}{t}
\Bigl[\frac{1}{(e^t-1)(e^{t\tau}-1)}\Bigl(
e^{t(q-\lambda_1\tau)}+e^{t(q-\lambda_2\tau)}+e^{t(q+1)}+ \nonumber
\\ +
e^{t(q-1-\tau(1+\lambda_1+\lambda_2))}-e^{t(2q-1-\tau(1+\lambda_1+\lambda_2))}\Bigr)
+ A(t)+B(t)\,q+C(t)\,q^2\Bigr]
\end{gather}
for some functions $A(t),$ $B(t),$ and $C(t)$ depending also on $\tau,$ $\lambda_1,$ $\lambda_2.$
Note that $C(t)=0$ as it is proportional to $2^2-2-1-1=0.$
Then, we can rewrite this expression in the form
\begin{gather}
\log\mathfrak{M}(q\,|\,\tau,\lambda_1,\lambda_2) =
\int\limits_0^\infty \frac{dt}{t} \Bigl[
\frac{\bigl(e^{tq}-1-tq\bigr)}{(e^t-1)(e^{t\tau}-1)}\bigl(e^{-t\lambda_1\tau}+e^{-t\lambda_2\tau}+e^{t}+e^{-t(1+\tau(1+\lambda_1+\lambda_2))}\bigr)
- \nonumber \\ -\frac{1}{2}(2tq)^2
\frac{e^{-t(1+\tau(1+\lambda_1+\lambda_2))}}{{(e^t-1)(e^{t\tau}-1)}}+A(t)+B(t)\,q\Bigr]
- \int\limits_0^\infty
\frac{dt}{t}\Bigl[\frac{\bigl(e^{2tq}-1-2tq-(2tq)^2/2\bigr)}{(e^t-1)(e^{t\tau}-1)}
e^{-t(1+\tau(1+\lambda_1+\lambda_2))}\Bigr]
\end{gather}
for some appropriately modified $A(t)$ and $B(t).$ It follows that
we have
\begin{equation}
\log\mathfrak{M}(q=0\,|\,\tau,\lambda_1,\lambda_2)=\int\limits_0^\infty
\frac{dt}{t} \, A(t).
\end{equation}
On the other hand, $\mathfrak{M}(q=0\,|\,\tau,\lambda_1,\lambda_2)=1$
by construction. Hence this term vanishes. We now change variables
$t'=2t$ in the second
integral and then bring the two integrals back under the same integral sign.
\begin{gather}
\log\mathfrak{M}(q\,|\,\tau,\lambda_1,\lambda_2) =
\int\limits_0^\infty \frac{dt}{t} \Bigl[
\bigl(e^{tq}-1-tq\bigr)\Bigl(\frac{\bigl(e^{-t\lambda_1\tau}+e^{-t\lambda_2\tau}+e^{t}+e^{-t(1+\tau(1+\lambda_1+\lambda_2))}\bigr)}
{(e^t-1)(e^{t\tau}-1)} - \nonumber \\
-\frac{e^{-t/2(1+\tau(1+\lambda_1+\lambda_2))}}{(e^{t/2}-1)(e^{t\tau/2}-1)}\Bigr)
+
\frac{q^2t^2}{2}\Bigl(\frac{e^{-t/2(1+\tau(1+\lambda_1+\lambda_2))}}{(e^{t/2}-1)(e^{t\tau/2}-1)}
-
\frac{4\,e^{-t(1+\tau(1+\lambda_1+\lambda_2))}}{{(e^t-1)(e^{t\tau}-1)}}\Bigr)
+ B(t)\,q\Bigr].
\end{gather}
Denote
\begin{gather}
\mathfrak{f}(t)\triangleq \frac{\bigl(e^{-t\lambda_1\tau}+e^{-t\lambda_2\tau}+e^{t}+e^{-t(1+\tau(1+\lambda_1+\lambda_2))}\bigr)}{(e^t-1)(e^{t\tau}-1)} - \frac{e^{-t/2(1+\tau(1+\lambda_1+\lambda_2))}}{(e^{t/2}-1)(e^{t\tau/2}-1)}, \\
\mathfrak{g}(t)\triangleq \frac{t^2}{(e^{t/2}-1)(e^{t\tau/2}-1)} -
\frac{(2t)^2}{{(e^t-1)(e^{t\tau}-1)}}.
\end{gather}
The functions $\mathfrak{f}$ and $\mathfrak{g}$ have the property
$\mathfrak{f}=O\bigl(t^{-1}\bigr),$ $\mathfrak{g}=O(t)$ as
$t\rightarrow 0$ and $\mathfrak{f}, \,\mathfrak{g}$ are
exponentially small as $t\rightarrow +\infty.$ Noticing the
individual existence and equality of the integrals
\begin{equation}
\int\limits_0^\infty \frac{dt}{t}
\Bigl[\frac{t^2\,\bigl(e^{-t/2(1+\tau(1+\lambda_1+\lambda_2))}-1\bigr)}{(e^{t/2}-1)(e^{t\tau/2}-1)}\Bigr]
= \int\limits_0^\infty \frac{dt}{t}
\Bigl[\frac{(2t)^2\,\bigl(e^{-t(1+\tau(1+\lambda_1+\lambda_2))}-1\bigr)}{{(e^t-1)(e^{t\tau}-1)}}\Bigr],
\end{equation}
we can write
\begin{equation}
\log\mathfrak{M}(q\,|\,\tau,\lambda_1,\lambda_2) =
q\int\limits_0^\infty \frac{dt}{t} \,B(t) +\int\limits_0^\infty
\frac{dt}{t} \Bigl[\bigl(e^{tq}-1-tq\bigr)\,\mathfrak{f}(t)\Bigr]
+\frac{q^2}{2}\int\limits_0^\infty \frac{dt}{t} \,\mathfrak{g}(t).
\end{equation}
Denote
\begin{equation}
\sigma^2(\tau)\triangleq \int\limits_0^\infty \frac{dt}{t}
\,\mathfrak{g}(t).
\end{equation}
Define the function $\mathcal{M}_{(\tau, \lambda_1,
\lambda_2)}(u)\triangleq -\int_u^\infty \mathfrak{f}(t)\,dt/t$ for
$u>0$ and $\mathcal{M}_{(\tau, \lambda_1, \lambda_2)}(u)\triangleq 0$
for $u<0.$ We have thus established the decomposition for
$\Re(q)<\tau,$
\begin{equation}
\log\mathfrak{M}(q\,|\,\tau,\lambda_1,\lambda_2) =
q\int\limits_0^\infty \frac{dt}{t} B(t)+\frac{1}{2}q^2\sigma^2(\tau)+
\int\limits_{\mathbb{R}\setminus\{0\}} \bigl(e^{uq}-1-uq\bigr)
\,d\mathcal{M}_{(\tau, \lambda_1, \lambda_2)}(u). \label{Levydecom}
\end{equation}
Finally, since $\tau$ satisfies $\tau>1,$ we have the
inequalities
\begin{gather}
\mathfrak{f}(t)>0 \,\, \text{for} \,\,t>0, \label{fposit} \\
\sigma^2(\tau)>0.
\end{gather}
Their validity can be established as follows. Let
$a=e^{-t\lambda_1\tau/2},$ $b=e^{-t\lambda_2\tau/2},$ $c=e^{-t/2},$
and $d=e^{-t\tau/2}.$ Note that $a, b\geq 0$ and $0<c, d<1.$ Then,
Eq. \eqref{fposit} is equivalent to
\begin{equation}
a^2+b^2+c^{-2}+a^2b^2c^2d^2-ab(1+c)(1+d)>0.
\end{equation}
The latter inequality follows from
\begin{align}
a^2+b^2+c^{-2}+a^2b^2c^2d^2-ab(1+c)(1+d) & = (a-b)^2+c^{-2} +
ab(1-c)(1-d) + \nonumber
\\ & +(abcd-1)^2-1\geq c^{-2}-1>0.
\end{align}
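The algebraic identity underlying Eq. \eqref{fposit} can also be spot-checked numerically at random points of the admissible region $a, b> 0,$ $0<c, d<1$ (a sanity check only; the displayed algebra is the proof):

```python
import random

random.seed(0)
for _ in range(1000):
    a, b = random.uniform(0.0, 3.0), random.uniform(0.0, 3.0)  # a, b >= 0
    c, d = random.random(), random.random()                     # 0 < c, d < 1
    lhs = a**2 + b**2 + c**-2 + a**2 * b**2 * c**2 * d**2 - a * b * (1 + c) * (1 + d)
    rhs = (a - b)**2 + c**-2 + a * b * (1 - c) * (1 - d) + (a * b * c * d - 1)**2 - 1
    # the two sides are algebraically identical, and the left-hand side is positive
    assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(lhs))
    assert lhs > 0
print("identity and positivity hold on all samples")
```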
The integral for $\sigma^2(\tau)$ can be computed explicitly by means of Eq. \eqref{Frullani}.
\begin{align}
\int\limits_0^\infty \frac{dt}{t} \,\mathfrak{g}(t) & =
\int\limits_0^\infty \frac{dt}{t}
\,\Bigl[\frac{t^2}{(e^{t/2}-1)(e^{t\tau/2}-1)} -
\frac{4}{\tau}e^{-t}\Bigr] - \int\limits_0^\infty \frac{dt}{t}
\,\Bigl[\frac{(2t)^2}{{(e^t-1)(e^{t\tau}-1)}}-\frac{4}{\tau}e^{-t}\Bigr],
\nonumber
\\ & = \frac{4}{\tau}\,\int\limits_0^\infty \frac{dt}{t}
\,\Bigl[e^{-t}-e^{-2t}\Bigr] = \frac{4}{\tau}\log 2.
\end{align}
Hence $\sigma^2(\tau)>0$ and $\mathcal{M}_{(\tau, \lambda_1,
\lambda_2)}(u)$ is continuous and non-decreasing on $(-\infty,\,0)$
and $(0,\,\infty)$ and satisfies the integrability and limit
conditions $\int_{[-1,\,1]\setminus\{0\}} u^2\,d\mathcal{M}_{(\tau,
\lambda_1, \lambda_2)}(u)<\infty,$ $\lim\limits_{u\rightarrow
\pm\infty} \mathcal{M}_{(\tau, \lambda_1,
\lambda_2)}(u) =0$ so that
$\mathcal{M}_{(\tau, \lambda_1, \lambda_2)}(u)$ is a valid spectral
function.\footnote{Note that it is only the mean $\int_0^\infty B(t)
\,dt/t$ that necessitates $\tau>1.$ The gaussian component
$\sigma^2(\tau)$ and spectral function $\mathcal{M}_{(\tau, \lambda_1,
\lambda_2)}(u)$ satisfy the required properties for all $\tau>0.$ It is this property that allows us to define
the critical Selberg integral distribution, cf. Section \ref{DerivM}.}
The decomposition in Eq. \eqref{Levydecom} assumes the canonical form, confer
Theorem 4.4 in Chap. 4 of \cite{SteVHar}, in the case of purely
imaginary $q$ by a trivial change in the linear term.\qed
\end{proof}
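The Frullani evaluation $\sigma^2(\tau)=\frac{4}{\tau}\log 2$ in the proof above can be checked by direct numerical quadrature of $\mathfrak{g}(t)/t$; a sketch with mpmath (an illustration, not part of the proof; `expm1` keeps the small-$t$ behavior numerically stable):

```python
from mpmath import mp

mp.dps = 25
tau = mp.mpf("1.5")

def g_over_t(t):
    # frak{g}(t)/t with frak{g} as defined in the proof
    g = (t**2 / (mp.expm1(t / 2) * mp.expm1(t * tau / 2))
         - (2 * t)**2 / (mp.expm1(t) * mp.expm1(t * tau)))
    return g / t

sigma2 = mp.quad(g_over_t, [0, mp.inf])
print(sigma2, 4 * mp.log(2) / tau)
```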
\begin{proof}[Proof of Corollary \ref{IntExistRV}.]
This follows from Theorem \ref{IntExist}
by some general properties of analytic and infinitely divisible
characteristic functions. Denote the random variable corresponding
to the L\'evy-Khinchine decomposition in Theorem \ref{IntExist}
by $X.$ As it has a nonzero
gaussian component, it is absolutely continuous with a bounded,
continuous, and zero-free density by Theorem 8.4 and Corollary 8.8
in Chap. 4 of \cite{SteVHar}. Denote the probability density of
$X$ by $f(x),$ $x\in\mathbb{R}.$ The Fourier transform of
$f(x)$ is
$\mathfrak{M}(iq\,|\,\tau,\lambda_1,\lambda_2)$ by construction,
which is analytic as a function of $q$ in the strip $\Im(q)>-\tau.$
Then, by the fundamental theorem of analytic characteristic
functions, confer Theorem 7.1.1 in Chap. 7 of \cite{Lukacs}, we have
for all $q$ in the strip of analyticity $\Im(q)>-\tau$
\begin{equation}
\mathfrak{M}(iq\,|\,\tau,\lambda_1,\lambda_2) =
\int\limits_\mathbb{R} e^{iqx}\,f(x)\,dx.
\end{equation}
On the other hand, the random variable
$M_{(\tau,\lambda_1,\lambda_2)}\triangleq \exp(X)$ has the density $f(\log y)/y,$ $y\in (0, \,\infty),$ so that the right-hand
side of this equation is precisely the Mellin transform of $M_{(\tau,\lambda_1,\lambda_2)}$ for
$\Re(q)<\tau$,
\begin{equation}
\mathfrak{M}(q\,|\,\tau,\lambda_1,\lambda_2) = \int\limits_0^\infty
y^q\,f(\log y)\,\frac{dy}{y} = {\bf
E}\bigl[M_{(\tau,\lambda_1,\lambda_2)}^q\bigr]
\end{equation}
as seen upon relabeling $iq\rightarrow q$ and changing variables $y
= e^x.$
\end{proof}
\begin{proof}[Proof of Theorem \ref{IntAsymp}.]
The key element in the proof is the integral in Eq. \eqref{Iintegral} and its connection with the ratio of $G(z\,|\,\tau)$ functions in
Eq. \eqref{IfuncG} and its asymptotic expansion in Eq. \eqref{IfuncGAsymptotic}. As in the proof of Theorem \ref{CirAsymp}
it remains to apply these equations
to each of the ratios of the $G-$factors in Eq. \eqref{MGI} and collect the terms.
The only other term in Eq. \eqref{logMellinAsympI}, whose structure is
different from that of the series in Eq. \eqref{IfuncGAsymptotic}, can be treated using the
elementary identity
\begin{equation}
\sum\limits_{r=1}^\infty \frac{\zeta(r+1)}{(r+1)\,\tau^{r+1}} =
\log\Gamma\bigl(1-1/\tau\bigr)+\frac{\psi(1)}{\tau}, \quad |\tau|>1.
\end{equation}
The result now follows by a straightforward algebraic reduction.
\end{proof}
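As an aside, the elementary identity above is easy to confirm numerically. The following Python sketch (illustrative only, not part of the proof) uses $\psi(1)=-\gamma$ and approximates $\zeta$ by an Euler--Maclaurin-corrected partial sum:

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant; psi(1) = -GAMMA

def zeta(s, N=1000):
    """zeta(s) for s > 1: partial sum plus an Euler-Maclaurin tail correction."""
    head = sum(n ** -s for n in range(1, N))
    return head + N ** (1 - s) / (s - 1) + 0.5 * N ** -s

def lhs(tau, R=80):
    # sum_{r=1}^{R-1} zeta(r+1) / ((r+1) tau^{r+1}); the tail is O(tau^{-R})
    return sum(zeta(r + 1) / ((r + 1) * tau ** (r + 1)) for r in range(1, R))

def rhs(tau):
    # log Gamma(1 - 1/tau) + psi(1)/tau
    return math.lgamma(1.0 - 1.0 / tau) - GAMMA / tau

print(lhs(3.0), rhs(3.0))
```

For any $\tau>1$ the two values agree to within the truncation error of the two approximations.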
\section{Barnes Beta Distributions}\label{BarnesBeta}
\noindent
We now proceed to review the theory of Barnes beta probability distributions following \cite{Me13}, \cite{Me14}, and \cite{Me16}.
For the purposes of this paper we only need these distributions of types $M=N=2$ and $M=1, N=0.$
Nonetheless, we review the general case $M\leq N+1$ as it requires the same amount of effort as the special cases of interest.
The cases of $M<N,$ $M=N,$ and $M=N+1$ are all constructed in the same way but have somewhat different properties.
For this reason the first two cases, $M<N$ and $M=N,$ and the third case, $M=N+1,$ are treated separately. As we will see below,
Barnes beta distributions have the property that their moments are expressed as products of ratios of multiple
gamma functions, whereas their ratios can have moments in the form of products of ratios of multiple sine functions. The proofs of all results
in this section are given in the Appendix.
\subsection{$M\leq N$}
Define the action of the combinatorial operator $\mathcal{S}_N$ on a function $h(x)$ by
\begin{definition}\label{Soperator}
\begin{equation}\label{S}
(\mathcal{S}_Nh)(q\,|\,b) \triangleq \sum\limits_{p=0}^N (-1)^p
\sum\limits_{k_1<\cdots<k_p=1}^N
h\bigl(q+b_0+b_{k_1}+\cdots+b_{k_p}\bigr).
\end{equation}
\end{definition}
\noindent In other words, in Eq. \eqref{S} the action of $\mathcal{S}_N$
is defined as an alternating sum over all combinations of $p$
elements for every $p=0\cdots N.$
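In computational terms, the operator is a signed sum over the $2^N$ subsets of $\{b_1,\dots,b_N\}.$ The following Python sketch (illustrative; `h` stands for an arbitrary test function) implements it directly:

```python
from itertools import combinations

def S_N(h, q, b):
    """Action of the operator S_N on a function h, as in Eq. (S).

    b = (b_0, b_1, ..., b_N); the sum runs over all subsets
    {k_1 < ... < k_p} of {1, ..., N}, p = 0..N, with sign (-1)^p.
    """
    b0, rest = b[0], b[1:]
    total = 0.0
    for p in range(len(rest) + 1):
        for subset in combinations(rest, p):
            total += (-1) ** p * h(q + b0 + sum(subset))
    return total

# S_N is, up to the sign (-1)^N, an N-fold finite difference, so it
# annihilates polynomials of degree < N; on x^N it gives (-1)^N N! b_1...b_N.
print(S_N(lambda x: x, 0.0, (1.0, 2.0, 3.0)))      # 0.0
print(S_N(lambda x: x * x, 0.0, (1.0, 2.0, 3.0)))  # 12.0 = 2! * b_1 * b_2
```

The finite-difference interpretation explains why $\mathcal{S}_N$ kills low-degree polynomial contributions in the asymptotic expansions below.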
\begin{definition}\label{bdef}
Given $q\in\mathbb{C}-(-\infty, -b_0],$ $a=(a_1,\cdots, a_{M}),$ $b=(b_0, b_1,\cdots, b_{N}),$ let\footnote{
We will abbreviate $\bigl(\mathcal{S}_N \log\Gamma_M\bigr)(q\,|a,\,b)$ to mean the action of $\mathcal{S}_N$ on
$\log\Gamma_M(x|a),$ \emph{i.e.} $\bigl(\mathcal{S}_N \log\Gamma_M(x|a)\bigr)(q\,|\,b).$}
\begin{equation}\label{eta}
\eta_{M,N}(q\,|a,\,b) \triangleq \exp\Bigl(\bigl(\mathcal{S}_N
\log\Gamma_M\bigr)(q\,|a,\,b) - \bigl(\mathcal{S}_N \log\Gamma_M\bigr)(0\,|a,\,b)\Bigr).
\end{equation}
\end{definition}
The function $\eta_{M,N}(q\,|a,\,b)$ is holomorphic over
$q\in\mathbb{C}-(-\infty, -b_0]$ and equals a product of ratios of
multiple gamma functions by construction. Specifically,
\begin{align}
\eta_{M,N}(q\,|a, b)
= & \frac{\Gamma_M(q+b_0|a)}{\Gamma_M(b_0|a)}\prod\limits_{j_1=1}^N \frac{\Gamma_M(b_0+b_{j_1}|a)}{\Gamma_M(q+b_0+b_{j_1}|a)}
\prod\limits_{j_1<j_2}^N \frac{\Gamma_M(q+b_0+b_{j_1}+b_{j_2}|a)}{\Gamma_M(b_0+b_{j_1}+b_{j_2}|a)}\times \nonumber \\
&\times\prod\limits_{j_1<j_2<j_3}^N \frac{\Gamma_M(b_0+b_{j_1}+b_{j_2}+b_{j_3}|a)}{\Gamma_M(q+b_0+b_{j_1}+b_{j_2}+b_{j_3}|a)} \cdots,
\end{align}
until all the $N$ indices are exhausted. The function $\log\eta_{M,N}(q\,|a,\,b)$ has an important integral representation that follows from
that of $\log\Gamma_M(w|a)$ in Eq. \eqref{key}.
\begin{theorem}[Existence and Structure]\label{main}
Given $M, N\in\mathbb{N}$ such that $M\leq N,$ the function
$\eta_{M,N}(q\,|a,\,b)$ is the Mellin transform of a probability
distribution on $(0, 1].$ Denote it by $\beta_{M, N}(a,b).$ Then,
\begin{equation}
{\bf E}\bigl[\beta_{M, N}(a,b)^q\bigr] = \eta_{M, N}(q\,|a,\,b),\;
\Re(q)>-b_0.
\end{equation}
The distribution $-\log\beta_{M, N}(a,b)$ is infinitely divisible on
$[0, \infty)$ and has the L\'evy-Khinchine decomposition for $\Re(q)>-b_0,$
\begin{equation}\label{LKH}
{\bf E}\Bigl[\exp\bigl(q\log\beta_{M, N}(a,b)\bigr)\Bigr] =
\exp\Bigl(\int\limits_0^\infty (e^{-tq}-1) e^{-b_0
t} \frac{
\prod\limits_{j=1}^N (1-e^{-b_j t})}{\prod\limits_{i=1}^M (1-e^{-a_i t})}
\frac{dt}{t} \Bigr).
\end{equation}
$\log\beta_{M,N}(a,b)$ is absolutely continuous if and only if $M=N.$ If $M<N,$
$-\log\beta_{M,N}(a,b)$ is compound Poisson and
\begin{subequations}
\begin{align}
{\bf P}\bigl[\beta_{M,N}(a,b)=1\bigr] & =
\exp\Bigl(-\int\limits_0^\infty e^{-b_0 t}\frac{
\prod\limits_{j=1}^N (1-e^{-b_j t})}{\prod\limits_{i=1}^M (1-e^{-a_i t})}
\frac{dt}{t}\Bigr), \label{Pof1} \\
& = \exp\bigl(-(\mathcal{S}_N \log\Gamma_M)(0\,|a,\,b)\bigr). \label{Pof12}
\end{align}
\end{subequations}
\end{theorem}
\begin{corollary}\label{momentproblembarnes}
The Stieltjes moment problem for the positive integer moments of $\beta_{M,N}(a,b)$ is determinate.
\end{corollary}
It is worth emphasizing that the integral representation of $\log\eta_{M,N}(q\,|a,\,b)$ in Eq. \eqref{LKH} is the main result as it
automatically implies that $\beta_{M, N}(a,b)$ is a valid probability distribution having
infinitely divisible logarithm, see Chapter 3 of \cite{SteVHar} for background material
on infinitely divisible distributions on $[0, \infty).$
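For example, when $M=N=1$ and $a_1=1,$ the distribution $\beta_{1,1}$ reduces to the classical beta distribution, whose Mellin transform is $\Gamma(q+b_0)\Gamma(b_0+b_1)/\bigl(\Gamma(b_0)\Gamma(q+b_0+b_1)\bigr).$ The following Python sketch (a numerical illustration only; the quadrature parameters are ad hoc) checks the integral representation in Eq. \eqref{LKH} against this closed form:

```python
import math

def lk_integrand(t, q, b0, b1):
    # (e^{-tq} - 1) e^{-b0 t} (1 - e^{-b1 t}) / ((1 - e^{-t}) t)
    return (math.expm1(-q * t) * math.exp(-b0 * t)
            * (-math.expm1(-b1 * t)) / (-math.expm1(-t)) / t)

def lk_log_mellin(q, b0, b1, T=60.0, n=100000):
    """Composite Simpson's rule for the Levy-Khinchine integral, M = N = a_1 = 1."""
    eps = 1e-9          # the integrand extends continuously to t = 0
    h = (T - eps) / n
    s = lk_integrand(eps, q, b0, b1) + lk_integrand(T, q, b0, b1)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * lk_integrand(eps + k * h, q, b0, b1)
    return s * h / 3.0

def beta_mellin(q, b0, b1):
    # Gamma(q+b0) Gamma(b0+b1) / (Gamma(b0) Gamma(q+b0+b1))
    return math.exp(math.lgamma(q + b0) + math.lgamma(b0 + b1)
                    - math.lgamma(b0) - math.lgamma(q + b0 + b1))

# beta_{1,1} with b_0 = 1, b_1 = 2 is the classical Beta(1, 2) distribution;
# at q = 1 both sides equal E[Beta(1, 2)] = 1/3.
print(math.exp(lk_log_mellin(1.0, 1.0, 2.0)), beta_mellin(1.0, 1.0, 2.0))
```

At $q=1,$ $b_0=1,$ $b_1=2$ the integral can also be evaluated in closed form (a Frullani integral equal to $-\log 3$), so both sides are exactly $1/3.$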
\begin{theorem}[Asymptotics]\label{Asymptotics}
If $M<N$ and $|\arg(q)|<\pi,$
\begin{equation}\label{ourasym}
\lim\limits_{q\rightarrow\infty} \eta_{M, N}(q\,|\,a, b) =
\exp\bigl(-(\mathcal{S}_N \log\Gamma_M)(0\,|\,a, b)\bigr).
\end{equation}
If $M=N$ and $|\arg(q)|<\pi,$
\begin{equation}\label{ourasymN}
\eta_{N, N}(q\,|\,a, b) = \exp\bigl(-(b_1\cdots b_N/a_1\cdots a_M) \log(q) +
O(1)\bigr), \;q\rightarrow\infty.
\end{equation}
\end{theorem}
The Mellin transform of Barnes beta distributions satisfies a functional equation
and two remarkable factorizations
that are inherited from those of the multiple gamma function.
\begin{theorem}[Functional equation]\label{FunctEquat}
$1\leq M\leq N,$ $q\in\mathbb{C}-(-\infty, -b_0],$ $i=1\cdots M,$
\begin{equation}
\eta_{M, N}(q+a_i\,|\,a,\,b) =
\eta_{M, N}(q\,|\,a,\,b)\,\exp\bigl(-(\mathcal{S}_N
\log\Gamma_{M-1})(q\,|\,\hat{a}_i, b)\bigr). \label{fe1}
\end{equation}
\end{theorem}
\begin{corollary}[Symmetries]\label{FunctSymmetry}
$1\leq M\leq N,$ $q\in\mathbb{C}-(-\infty, -b_0],$ $i=1\cdots M,$
$j=1\cdots N,$
\begin{align}
\eta_{M, N}(q\,|\,a,\,b_0+x)\,\eta_{M, N}(x\,|\,a,\,b) & = \eta_{M, N}(q+x\,|\,a,\,b), \label{fe2}\\
\eta_{M, N}(q\,|\,a,\,b)\,\eta_{M, N-1}(q\,|\,a,\,b_0+b_j,
\hat{b}_j) & =
\eta_{M, N-1}(q\,|\,a,\,\hat{b}_j), \label{fe3}\\
\eta_{M, N}(q+a_i\,|\,a,\,b)\,\eta_{M-1, N}(q\,|\,\hat{a}_i,\,b) & =
\eta_{M, N}(q\,|\,a,\,b)\,\eta_{M, N}(a_i\,|\,a,\,b), \label{fe1eq}\\
\eta_{M, N}(q\,|\,a,\,b_j+a_i) \,\eta_{M-1,
N-1}(b_j\,|\,\hat{a}_i,\,\hat{b}_j) & = \eta_{M, N}(q\,|\,a,\,b)\,
\eta_{M-1, N-1}(q+b_j\,|\,\hat{a}_i,\,\hat{b}_j), \label{fe4} \\
\eta_{M, N}(q+a_i\,|\,a,\,b) \, \eta_{M-1,
N-1}(q\,|\,\hat{a}_i,\,\hat{b}_j) & = \eta_{M, N}(q\,|\,a,\,b) \,
\eta_{M-1, N-1}(q+b_j\,|\,\hat{a}_i,\,\hat{b}_j).
\label{funceqsymmetry}
\end{align}
\end{corollary}
\begin{corollary}[Factorizations]\label{FactorizBarnes}
Let $\Omega\triangleq \sum_{i=1}^M n_i \, a_i.$
\begin{align}
\eta_{M,N}(q\,|\,a, b) = & \prod\limits_{k=0}^\infty \frac{\eta_{M-1,N}(q+k
a_i\,|\,\hat{a}_i, b)}{\eta_{M-1,N}(k
a_i\,|\,\hat{a}_i, b)}, \label{infinprod2} \\
\eta_{M,N}(q\,|\,a, b) = & \prod\limits_{n_1,\cdots ,n_M=0}^\infty \Bigl[
\frac{b_0+\Omega}{q+b_0+\Omega}\prod\limits_{j_1=1}^N \frac{q+b_0+b_{j_1}+\Omega}
{b_0+b_{j_1}+\Omega} \prod\limits_{j_1<j_2}^N \frac{b_0+b_{j_1}+b_{j_2}+\Omega}
{q+b_0+b_{j_1}+b_{j_2}+\Omega} \times \nonumber \\
& \times
\prod\limits_{j_1<j_2<j_3}^N \frac{q+b_0+b_{j_1}+b_{j_2}+b_{j_3}+\Omega}{b_0+b_{j_1}+b_{j_2}+b_{j_3}+\Omega}\cdots\Bigr].
\label{infbarnesfac}
\end{align}
Probabilistically, these factorizations are equivalent to, respectively,
\begin{align}
\beta_{M, N}(a,\,b) \overset{{\rm in \,law}}{=}&
\prod\limits_{k=0}^\infty \beta_{M-1, N}(\hat{a}_i,\,b_0+ka_i, \, \,b_1,\cdots, b_N), \label{probshinfac} \\
\beta_{M, N}(a,\,b) \overset{{\rm in \,law}}{=}&
\prod\limits_{n_1,\cdots ,n_M=0}^\infty \beta_{0, N}(b_0+\Omega,\,\,b_1,\cdots, b_N). \label{probbarnesfac}
\end{align}
\end{corollary}
We note that the factorizations in Eqs. \eqref{infinprod2} and \eqref{infbarnesfac} correspond to the Shintani and Barnes factorizations of the multiple gamma function, see Eqs. \eqref{generalfactorization} and \eqref{barnes}, respectively.
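To see the second factorization at work in the simplest nondegenerate case, take $M=N=1$ and $a_1=1,$ so that $\eta_{1,1}\bigl(q\,|\,1,(b_0,b_1)\bigr) = \Gamma(q+b_0)\Gamma(b_0+b_1)/\bigl(\Gamma(b_0)\Gamma(q+b_0+b_1)\bigr)$ and Eq. \eqref{infbarnesfac} becomes the classical Euler product for this ratio. A truncated Python version (an illustrative sketch; the truncation point is arbitrary, and the tail of the log-product decays like $q\,b_1/K$):

```python
import math

def eta11_closed(q, b0, b1):
    # eta_{1,1}(q | 1, (b0,b1)) = Gamma(q+b0) Gamma(b0+b1) / (Gamma(b0) Gamma(q+b0+b1))
    return math.exp(math.lgamma(q + b0) + math.lgamma(b0 + b1)
                    - math.lgamma(b0) - math.lgamma(q + b0 + b1))

def eta11_product(q, b0, b1, K=400_000):
    # Truncated Barnes factorization:
    #   prod_{n=0}^{K-1} (b0+n)/(q+b0+n) * (q+b0+b1+n)/(b0+b1+n)
    log_p = 0.0
    for n in range(K):
        log_p += (math.log(b0 + n) - math.log(q + b0 + n)
                  + math.log(q + b0 + b1 + n) - math.log(b0 + b1 + n))
    return math.exp(log_p)

approx = eta11_product(1.0, 1.0, 2.0)
exact = eta11_closed(1.0, 1.0, 2.0)   # = 1/3
print(approx, exact)
```
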
The functional equation in Eq. \eqref{fe1} gives us the moments.
\begin{corollary}[Moments]\label{moments}
Assume $a_i=1$ for some $i.$
Let $k\in\mathbb{N}.$
\begin{align}
{\bf E}\bigl[\beta_{M, N}(a, b)^{k }\bigr] & =
\exp\Bigl(-\sum\limits_{l=0}^{k-1} \bigl(\mathcal{S}_N
\log\Gamma_{M-1}\bigr)(l \,|\,\hat{a}_i, b)\Bigr), \label{posmom} \\
{\bf E}\bigl[\beta_{M, N}(a, b)^{-k }\bigr] & =
\exp\Bigl(\sum\limits_{l=0}^{k-1} \bigl(\mathcal{S}_N
\log\Gamma_{M-1}\bigr)(-(l+1) \,|\,\hat{a}_i, b)\Bigr), \; k<b_0. \label{negmom}
\end{align}
\end{corollary}
The scaling property in Eq. \eqref{scale} gives us the scaling invariance.
\begin{theorem}[Scaling invariance]\label{barnesbetascaling}
Let $\kappa>0.$ Then,
\begin{equation}
\beta^{\kappa}_{M, N}(\kappa\,a, \kappa\,b) \overset{{\rm in \,law}}{=}\beta_{M,
N}(a,\,b).
\end{equation}
\end{theorem}
\begin{corollary}[Barnes factorization for $a_i=1$]\label{BarnesFactorSpecial} Let $0\leq M\leq N$ and $a_i=1$ for all $i=1\cdots M.$ Then,
\begin{align}
\eta_{M,N}(q\,|\,1, b) & = \prod\limits_{k=0}^\infty \Bigl[
\frac{b_0+k}{q+b_0+k}\prod\limits_{j_1=1}^N \frac{q+b_0+b_{j_1}+k}
{b_0+b_{j_1}+k} \prod\limits_{j_1<j_2}^N \frac{b_0+b_{j_1}+b_{j_2}+k}
{q+b_0+b_{j_1}+b_{j_2}+k} \times \nonumber \\
& \times \prod\limits_{j_1<j_2<j_3}^N \frac{q+b_0+b_{j_1}+b_{j_2}+b_{j_3}+k}{b_0+b_{j_1}+b_{j_2}+b_{j_3}+k}\cdots\Bigr]^{(k\,|\,M)},
\label{barnesfactorspecial} \\
(k\,|\,M) & \triangleq \sum\limits_{m=1}^M \binom{k-1}{m-1}\binom{M}{m}.
\end{align}
\end{corollary}
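For instance, when $M=2$ the multiplicity simplifies to $(k\,|\,2)=2\binom{k-1}{0}+\binom{k-1}{1}=k+1$ for $k\geq1,$ which is exactly the exponent appearing in Eq. \eqref{specialfactorization}. A short Python check:

```python
from math import comb

def multiplicity(k, M):
    # (k | M) = sum_{m=1}^{M} C(k-1, m-1) C(M, m), evaluated for k >= 1
    return sum(comb(k - 1, m - 1) * comb(M, m) for m in range(1, M + 1))

print([multiplicity(k, 2) for k in range(1, 6)])  # [2, 3, 4, 5, 6]
```

Similarly $(k\,|\,1)=1,$ recovering the exponent in the classical Weierstrass-type product for $M=1.$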
We will next present the special case of $M=N=2$ in order to illustrate the general
theory with a concrete yet quite non-trivial example. This case is also of particular interest in the probabilistic
theory of the Selberg and Morris integral probability distributions that we will review in Section \ref{Probabilistic}. Let
$a_1=1$ and $a_2=\tau>0$ and write $\beta_{2, 2}(\tau, b),$
$\eta_{2,2}(q\,|\,\tau, b),$ and
$\Gamma_2\bigl(w\,|\,(1,\tau)\bigr)=\Gamma_2(w\,|\,\tau)$ for
brevity. From Eq. \eqref{eta} and Theorem \ref{main} we have
${\bf E}\bigl[\beta_{2, 2}(\tau, b)^q\bigr] = \eta_{2,2}(q\,|\,\tau,
b)$ for $\Re(q)>-b_0$ and
\begin{equation}
\eta_{2,2}(q\,|\,\tau, b) =
\frac{\Gamma_2(q+b_0\,|\,\tau)}{\Gamma_2(b_0\,|\,\tau)}
\frac{\Gamma_2(b_0+b_1\,|\,\tau)}{\Gamma_2(q+b_0+b_1\,|\,\tau)}
\frac{\Gamma_2(b_0+b_2\,|\,\tau)}{\Gamma_2(q+b_0+b_2\,|\,\tau)}
\frac{\Gamma_2(q+b_0+b_1+b_2\,|\,\tau)}{\Gamma_2(b_0+b_1+b_2\,|\,\tau)}.
\end{equation}
The asymptotic behavior of $\eta_{2,2}(q\,|\,\tau, b)$
follows from Theorem \ref{Asymptotics}:
\begin{equation}
\eta_{2, 2}(q\,|\,\tau, b) = \exp\Bigl(-\frac{b_1 b_2}{\tau}\log(q)
+ O(1)\Bigr), \; q\rightarrow\infty,\;|\arg(q)|<\pi.
\end{equation}
Using Eq. \eqref{gamma1}, the functional equation in Theorem
\ref{FunctEquat} takes the form
\begin{align}
\eta_{2, 2}(q+1\,|\,\tau, b) & = \eta_{2, 2}(q\,|\,\tau, b)\,
\frac{\Gamma\bigl((q+b_0+b_1)/\tau\bigr)\Gamma\bigl((q+b_0+b_2)/\tau\bigr)}
{\Gamma\bigl((q+b_0)/\tau\bigr)\Gamma\bigl((q+b_0+b_1+b_2)/\tau\bigr)},
\\
\eta_{2, 2}(q+\tau\,|\,\tau, b) & = \eta_{2, 2}(q\,|\,\tau, b)\,
\frac{\Gamma(q+b_0+b_1)\Gamma(q+b_0+b_2)}
{\Gamma(q+b_0)\Gamma(q+b_0+b_1+b_2)}.
\end{align}
The positive moments in Corollary \ref{moments} for $k\in\mathbb{N}$
are
\begin{align}
{\bf E}\bigl[\beta_{2, 2}(\tau, b)^k\bigr] & =
\prod\limits_{l=0}^{k-1}
\Bigl[\frac{\Gamma\bigl((l+b_0+b_1)/\tau\bigr)\,\Gamma\bigl((l+b_0+b_2)/\tau\bigr)}{\Gamma\bigl((l
+b_0)/\tau\bigr)\,\Gamma\bigl((l+b_0+b_1+b_2)/\tau\bigr)}\Bigr].
\end{align}
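Since these expressions involve only the ordinary gamma function, the moment sequence is straightforward to tabulate. The sketch below (Python; the parameter values are chosen purely for illustration) confirms that the moments decrease and lie in $(0,1],$ as they must for a distribution supported on $(0,1]$:

```python
import math

def beta22_moment(k, tau, b0, b1, b2):
    """k-th positive integer moment of beta_{2,2}(tau, b), k in N."""
    log_m = 0.0
    for l in range(k):
        log_m += (math.lgamma((l + b0 + b1) / tau) + math.lgamma((l + b0 + b2) / tau)
                  - math.lgamma((l + b0) / tau) - math.lgamma((l + b0 + b1 + b2) / tau))
    return math.exp(log_m)

# E[beta_{2,2}(2, (1,1,1))^1] = Gamma(1)^2 / (Gamma(1/2) Gamma(3/2)) = 2/pi
moments = [beta22_moment(k, 2.0, 1.0, 1.0, 1.0) for k in range(1, 8)]
print(moments)
```

The monotonicity of the computed sequence reflects the log-convexity of the gamma function, which makes each factor in the product at most one.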
The negative moments are
\begin{align}
{\bf E}\bigl[\beta_{2, 2}(\tau, b)^{-k}\bigr] & =
\prod\limits_{l=0}^{k-1} \Bigl[\frac{\Gamma\bigl((-(l+1)
+b_0)/\tau\bigr)\,\Gamma\bigl((-(l+1)+b_0+b_1+b_2)/\tau\bigr)}{\Gamma\bigl((-(l+1)+b_0+b_1)/\tau\bigr)\,
\Gamma\bigl((-(l+1)+b_0+b_2)/\tau\bigr)}\Bigr], \; k<b_0.
\end{align}
The factorization equations in Corollary \ref{FactorizBarnes}
are
\begin{align}
\eta_{2,2}(q\,|\,\tau, b) & =
\prod\limits_{k=0}^\infty\Bigl[
\frac{\Gamma((q+k+b_0)/\tau) }{\Gamma((k+b_0)/\tau)}
\frac{\Gamma((k+b_0+b_1)/\tau)}{\Gamma((q+k+b_0+b_1)/\tau)}
\frac{\Gamma((k+b_0+b_2)/\tau)}{\Gamma((q+k+b_0+b_2)/\tau)} \times
\nonumber \\ & \times
\frac{\Gamma((q+k+b_0+b_1+b_2)/\tau)}{\Gamma((k+b_0+b_1+b_2)/\tau)}\Bigr],
\\
\eta_{2,2}(q\,|\,\tau, b) & = \prod\limits_{k=0}^\infty\Bigl[
\frac{\Gamma(q+k\tau+b_0)}{\Gamma(k\tau+b_0)}
\frac{\Gamma(k\tau+b_0+b_1)}{\Gamma(q+k\tau+b_0+b_1)}
\frac{\Gamma(k\tau+b_0+b_2)}{\Gamma(q+k\tau+b_0+b_2)} \times
\nonumber \\ & \times
\frac{\Gamma(q+k\tau+b_0+b_1+b_2)}{\Gamma(k\tau+b_0+b_1+b_2)}\Bigr], \\
\eta_{2,2}(q\,|\,\tau, b) & =
\prod\limits_{n_1, \,n_2=0}^\infty \Bigl[
\frac{b_0+n_1+n_2\tau}{q+b_0+n_1+n_2\tau} \frac{q+b_0+b_1+n_1+n_2\tau}
{b_0+b_1+n_1+n_2\tau}
\frac{q+b_0+b_2+n_1+n_2\tau}{b_0+b_2+n_1+n_2\tau}\times
\nonumber \\ & \times
\frac{b_0+b_1+b_2+n_1+n_2\tau}{q+b_0+b_1+b_2+n_1+n_2\tau} \Bigr]
.
\end{align}
Finally, in the special case of $\tau=1$ we get from Corollary \ref{BarnesFactorSpecial},
\begin{equation}
\eta_{2,2}(q\,|\,1, b) =
\prod\limits_{k=0}^\infty \Bigl[
\frac{b_0+k}{q+b_0+k} \frac{q+b_0+b_1+k}
{b_0+b_1+k}
\frac{q+b_0+b_2+k}{b_0+b_2+k}
\frac{b_0+b_1+b_2+k}{q+b_0+b_1+b_2+k} \Bigr]^{k+1}
.\label{specialfactorization}
\end{equation}
\begin{remark} The analytic structure of $\eta_{M,N}(q\,|\,a,b)$ is not fully understood, even in the case of $M=0.$ The latter with $b_i=i,$ $i=1\cdots N,$ is of particular interest as it occurs in the context of the Riemann xi function, cf. Section 7 in \cite{Me14}.
\end{remark}
\subsection{$N=M-1$}
Let $M\in\mathbb{N},$ $a=(a_1,\cdots, a_{M}),$ and $b=(b_0, b_1,\cdots,b_{M-1}),$ all assumed to be positive.
In other words, $N=M-1$ in the sense of the previous subsection.
Define
\begin{equation}
\eta_{M, M-1}(q|a, b) \triangleq \exp\Bigl( \bigl(\mathcal{S}_{M-1}
\log\Gamma_M\bigr)(q\,|a,\,b) - \bigl(\mathcal{S}_{M-1} \log\Gamma_M\bigr)(0\,|a,\,b) \Bigr). \label{etaL}
\end{equation}
For example, in the case of $M=2$ we have
\begin{equation}
\eta_{2, 1}(q|a, b) =
\frac{\Gamma_2(q+b_0\,|\,a)}{\Gamma_2(b_0\,|\,a)}
\frac{\Gamma_2(b_0+b_1\,|\,a)}{\Gamma_2(q+b_0+b_1\,|\,a)}.
\end{equation}
The case of $M=2$ was first treated in \cite{Kuz} and then studied in depth in \cite{LetSim} in the context of
the Mellin transform of certain functionals of the stable L\'evy process. We extended the theory to general $M$ in
\cite{Me16}.
\begin{theorem}[Existence and Structure]\label{mainsine}
Assume $\Re(q)>-b_0.$ Then $\eta_{M, M-1}(q\,|a,\,b)$ is the Mellin transform of a probability
distribution $\beta_{M, M-1}(a,b)$ on $(0, \infty),$
\begin{equation}
{\bf E}\bigl[\beta_{M, M-1}(a,b)^q\bigr] = \eta_{M, M-1}(q\,|a,\,b).
\end{equation}
The distribution $\log\beta_{M, M-1}(a,b)$ is infinitely divisible and absolutely continuous on
$\mathbb{R}$ and has the L\'evy-Khinchine decomposition
\begin{align}
{\bf E}\Bigl[\exp\bigl(q\log\beta_{M, M-1}(a,b)\bigr)\Bigr] =
\exp&\Bigl(
\int\limits_0^\infty (e^{-tq}-1+qt) e^{-b_0
t} \frac{
\prod\limits_{j=1}^{M-1} (1-e^{-b_j t})}{\prod\limits_{i=1}^M (1-e^{-a_i t})}
\frac{dt}{t} +\nonumber \\&+ q
\int\limits_0^\infty \Bigl[\frac{e^{-t}}{t} \frac{\prod\limits_{j=1}^{M-1} b_j}{\prod\limits_{i=1}^{M} a_i}
-e^{-b_0
t}\frac{
\prod\limits_{j=1}^{M-1} (1-e^{-b_j t})}{\prod\limits_{i=1}^M (1-e^{-a_i t})}\Bigr]
dt
\Bigr).\label{LKHsine}
\end{align}
$\eta_{M, M-1}(q\,|a,\,b)$ satisfies the functional equation in Eq. \eqref{fe1} and moment formulas in Eqs. \eqref{posmom}
and \eqref{negmom}. Given $\kappa>0,$ the scaling invariance property of $\beta_{M, M-1}(a,b)$ is
\begin{equation}\label{scalinvgen}
\beta^{\kappa}_{M, M-1}(\kappa\,a, \kappa\,b) \overset{{\rm in \,law}}{=}\kappa^{\prod\limits_{j=1}^{M-1} b_j/\prod\limits_{i=1}^M a_i} \;\beta_{M, M-1}(a,\,b).
\end{equation}
\end{theorem}
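In the simplest case $M=1,$ where ${\bf E}\bigl[\beta_{1,0}(a, b_0)^q\bigr] = a^{q/a}\,\Gamma\bigl((q+b_0)/a\bigr)/\Gamma(b_0/a)$ by Eq. \eqref{gamma1} and the product $\prod_{j=1}^{0}b_j$ is empty, Eq. \eqref{scalinvgen} reads ${\bf E}\bigl[\beta_{1,0}(\kappa a, \kappa b_0)^{\kappa q}\bigr] = \kappa^{q/a}\,{\bf E}\bigl[\beta_{1,0}(a, b_0)^{q}\bigr].$ A Python sketch verifying this identity numerically (the parameter values are arbitrary):

```python
import math

def mellin_beta10(q, a, b0):
    # E[beta_{1,0}(a, b0)^q] = a^{q/a} Gamma((q+b0)/a) / Gamma(b0/a)
    return a ** (q / a) * math.gamma((q + b0) / a) / math.gamma(b0 / a)

q, a, b0, kappa = 0.7, 1.5, 2.0, 3.0
lhs = mellin_beta10(kappa * q, kappa * a, kappa * b0)  # E[beta(ka, kb0)^{kq}]
rhs = kappa ** (q / a) * mellin_beta10(q, a, b0)       # kappa^{q/a} E[beta(a, b0)^q]
print(lhs, rhs)
```
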
\begin{theorem}[Asymptotics]\label{newetaasympt}
Given $|\arg(q)|<\pi,$
\begin{equation}\label{ourasymML}
\eta_{M, M-1}(q\,|\,a, b) = \exp\Bigl(\bigl(\prod\limits_{j=1}^{M-1} b_j/\prod\limits_{i=1}^M a_i\bigr) q\log(q) +
O(q)\Bigr), \;q\rightarrow\infty.
\end{equation}
\end{theorem}
It is expected that the Stieltjes moment problem for $\beta_{M, M-1}(a,b)$ is determinate (unique solution) iff
\begin{equation}\label{condition}
\prod\limits_{j=1}^{M-1} b_j \leq 2\prod\limits_{i=1}^M a_i .
\end{equation}
This is a classical result for $M=1.$ In general, the asymptotic behavior in Eq. \eqref{ourasymML} coincides with
the asymptotic behavior of generalized gamma distributions and we expect that Eq. \eqref{condition} should follow from
the known solution to the Stieltjes moment problem for these distributions, see \cite{Stoyanov}.
It is also interesting to look at the ratio of two independent Barnes beta distributions of type $(M, M-1)$ as this ratio
has remarkable factorization properties. Let
\begin{equation}\label{bdefine}
\bar{b} \triangleq \bigl(\bar{b}_0, b_1, \cdots b_{M-1}\bigr)
\end{equation}
for some fixed $\bar{b}_0>0,$ and define
\begin{equation}
\beta_{M, M-1}(a,b, \bar{b})\triangleq \beta_{M, M-1}(a,b)\,\beta^{-1}_{M, M-1}(a,\bar{b}),
\end{equation}
and denote its Mellin transform by $\eta_{M, M-1}(q\,|\,a, b, \bar{b}).$
Then, the Mellin transform of
$\beta_{M, M-1}(a,b, \bar{b})$
satisfies
two factorizations and scaling invariance that are similar to those of Barnes beta distributions for $M\leq N.$
\begin{theorem}[Properties]\label{FunctEquatSine}
Let $M\in\mathbb{N},$ $\bar{b}_0>\Re(q)>-b_0,$ $i=1\cdots M,$ $\Omega\triangleq \sum_{i=1}^M n_i \, a_i.$
\begin{align}
\eta_{M, M-1}(q\,|\,a, b, \bar{b}) = & \prod\limits_{k=0}^\infty \frac{\eta_{M-1,M-1}(q+k
a_i\,|\,\hat{a}_i, b)}{\eta_{M-1,M-1}(k
a_i\,|\,\hat{a}_i, b)} \,\frac{\eta_{M-1,M-1}(-q+k
a_i\,|\,\hat{a}_i, \bar{b})}{\eta_{M-1,M-1}(k
a_i\,|\,\hat{a}_i, \bar{b})}, \label{infinprod2L} \\
\eta_{M, M-1}(q\,|\,a, b, \bar{b}) = & \prod\limits_{n_1,\cdots ,n_M=0}^\infty \Bigl[
\frac{b_0+\Omega}{q+b_0+\Omega}\,\frac{\bar{b}_0+\Omega}{-q+\bar{b}_0+\Omega}
\times \nonumber \\
& \times
\prod\limits_{j_1=1}^{M-1} \frac{q+b_0+b_{j_1}+\Omega}
{b_0+b_{j_1}+\Omega} \,\frac{-q+\bar{b}_0+b_{j_1}+\Omega}
{\bar{b}_0+b_{j_1}+\Omega}
\times \nonumber \\
& \times\prod\limits_{j_1<j_2}^{M-1} \frac{b_0+b_{j_1}+b_{j_2}+\Omega}
{q+b_0+b_{j_1}+b_{j_2}+\Omega} \, \frac{\bar{b}_0+b_{j_1}+b_{j_2}+\Omega}
{-q+\bar{b}_0+b_{j_1}+b_{j_2}+\Omega} \cdots\Bigr].\label{infinprod1L}
\end{align}
Probabilistically, these factorizations are equivalent to, respectively,
\begin{align}
\beta_{M, M-1}(a,\,b, \bar{b}) \overset{{\rm in \,law}}{=}&
\prod\limits_{k=0}^\infty \beta_{M-1, M-1}(\hat{a}_i,\,b_0+ka_i,\,\,b_1,\cdots, b_{M-1})\times \nonumber \\ & \times \beta_{M-1, M-1}^{-1}(\hat{a}_i,\,\bar{b}_0+ka_i,\,b_1,\cdots, b_{M-1}), \label{probshin}\\
\beta_{M, M-1}(a,\,b, \bar{b}) \overset{{\rm in \,law}}{=}&
\prod\limits_{n_1,\cdots ,n_M=0}^\infty \beta_{0, M-1}(b_0+\Omega,\,b_1,\cdots, b_{M-1})\times \nonumber \\ & \times\beta_{0, M-1}^{-1}(\bar{b}_0+\Omega, \,\,b_1,\cdots, b_{M-1}). \label{probbarnes}
\end{align}
Let $\kappa>0.$ Then,
\begin{equation}\label{scalinvL}
\beta^{\kappa}_{M, M-1}(\kappa\,a, \kappa\,b, \kappa\,\bar{b}) \overset{{\rm in \,law}}{=}\beta_{M, M-1}(a,\,b, \bar{b}).
\end{equation}
\end{theorem}
We will illustrate the general theory with the special case of $\beta_{1,0}.$
\begin{align}
{\bf E}[\beta_{1,0}(a, b)^q] = & \frac{\Gamma_1(q+b_0\,|\,a)}{\Gamma_1(b_0\,|\,a)}, \\
= & a^{\frac{q}{a}} \frac{\Gamma(\frac{q+b_0}{a})}{\Gamma(\frac{b_0}{a})},
\end{align}
by Eq. \eqref{gamma1}, so that $\beta_{1,0}$ is a power of a gamma distribution and its reciprocal $\beta^{-1}_{1,0}$ is of Fr\'echet type. This distribution plays an important role
in the structure of both Selberg and Morris integral probability distributions. For example, the $Y$ distribution in Eq. \eqref{Ydist} is
\begin{equation}
Y = \tau^{1/\tau}\,\beta^{-1}_{1,0}(a=\tau, b_0=\tau).
\end{equation}
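Indeed, since ${\bf E}\bigl[\beta_{1,0}(\tau, b_0=\tau)^q\bigr]=\tau^{q/\tau}\,\Gamma(1+q/\tau),$ one checks from the Mellin transform that $\beta_{1,0}(\tau,\tau)$ has the law of $(\tau E)^{1/\tau}$ for a standard exponential $E,$ so that $Y\overset{{\rm in\,law}}{=}E^{-1/\tau}$ and ${\bf E}[Y^q]=\Gamma(1-q/\tau)$ for $\Re(q)<\tau.$ A seeded Monte Carlo sketch in Python (illustrative only; the sample size and tolerance are ad hoc):

```python
import math
import random

tau = 2.0
rng = random.Random(12345)  # fixed seed for reproducibility

# Y = E^{-1/tau} for a standard exponential E, so E[Y^q] = Gamma(1 - q/tau), q < tau.
# Check the q = -1 moment: E[Y^{-1}] = E[E^{1/tau}] = Gamma(1 + 1/tau).
n = 200_000
sample_mean = sum(rng.expovariate(1.0) ** (1.0 / tau) for _ in range(n)) / n
exact = math.gamma(1.0 + 1.0 / tau)
print(sample_mean, exact)
```
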
\subsection{Ratios and Multiple Sine Functions}
\noindent
Recall the definition of the multiple sine function in Eq. \eqref{msinedef}.
Let
\begin{equation}\label{b0spec}
\bar{b}_0 = |a|-\sum_{j=0}^{N} b_j>0.
\end{equation}
We then have the identity
\begin{equation}
(\mathcal{S}_N \log S_M)(q\,|\,a, b) = (-1)^{N+M} (\mathcal{S}_N \log\Gamma_M)(-q\,|\,a, \bar{b}) - (\mathcal{S}_N \log\Gamma_M)(q\,|\,a, b),
\end{equation}
where $\bar{b}$ is as in Eq. \eqref{bdefine}.
From this we obtain the identity
\begin{align}
\exp\Bigl(\bigl(\mathcal{S}_{N}
\log S_M\bigr)(0\,|a,\,b) - \bigl(\mathcal{S}_{N} \log S_M\bigr)(q\,|a,\,b)\Bigr) = & \frac{S_M(b_0|a)}{S_M(q+b_0|a)}\prod\limits_{j_1=1}^{N} \frac{S_M(q+b_0+b_{j_1}|a)}{S_M(b_0+b_{j_1}|a)} \times\nonumber \\
&\times
\prod\limits_{j_1<j_2}^{N} \frac{S_M(b_0+b_{j_1}+b_{j_2}|a)}{S_M(q+b_0+b_{j_1}+b_{j_2}|a)} \times\nonumber \\
&\times\prod\limits_{j_1<j_2<j_3}^{N} \frac{S_M(q+b_0+b_{j_1}+b_{j_2}+b_{j_3}|a)}{S_M(b_0+b_{j_1}+b_{j_2}+b_{j_3}|a)} \cdots,
\nonumber \\
= & \eta_{M,N}(q\,|\,a, b) \eta_{M,N}(-q\,|\,a, \bar{b})^{(-1)^{M+N+1}}.
\end{align}
In particular, when $N=M-1,$ this identity becomes
\begin{align}
\exp\Bigl(\bigl(\mathcal{S}_{M-1}
\log S_M\bigr)(0\,|a,\,b) - \bigl(\mathcal{S}_{M-1} \log S_M\bigr)(q\,|a,\,b)\Bigr) = & \eta_{M,M-1}(q\,|\,a, b) \eta_{M,M-1}(-q\,|\,a, \bar{b}),
\nonumber \\
= & \eta_{M, M-1}(q|a, b, \bar{b}),
\end{align}
and is therefore the Mellin transform of the ratio of two independent Barnes beta random variables of type $(M, M-1).$
We will illustrate this result with the special cases of $\beta_{1,0}$ and $\beta_{2,1}.$ Let $M=1$
and $\bar{b}_0=a-b_0$ as in Eq. \eqref{b0spec}. Then
\begin{equation}
{\bf E}[\beta_{1,0}(a, b, \bar{b})^q] = \frac{\sin(\frac{\pi b_0}{a})}{\sin(\frac{\pi (q+b_0)}{a})}.
\end{equation}
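The sine on the right-hand side is produced by Euler's reflection formula: with $\bar{b}_0=a-b_0,$ the product of the two $\beta_{1,0}$ Mellin transforms reduces to $\Gamma(x)\Gamma(1-x)/\bigl(\Gamma(y)\Gamma(1-y)\bigr)$ with $x=(q+b_0)/a$ and $y=b_0/a.$ A quick numerical confirmation in Python (parameter values arbitrary, chosen so that $0<x, y<1$):

```python
import math

def gamma_side(q, a, b0):
    # Gamma(x) Gamma(1-x) / (Gamma(y) Gamma(1-y)), x = (q+b0)/a, y = b0/a
    x, y = (q + b0) / a, b0 / a
    return (math.gamma(x) * math.gamma(1.0 - x)) / (math.gamma(y) * math.gamma(1.0 - y))

def sine_side(q, a, b0):
    # sin(pi b0/a) / sin(pi (q+b0)/a)
    return math.sin(math.pi * b0 / a) / math.sin(math.pi * (q + b0) / a)

q, a, b0 = 0.3, 2.0, 0.5
print(gamma_side(q, a, b0), sine_side(q, a, b0))
```
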
Now, let $M=2,$ $a=(a_1, a_2),$ $b=(b_0, b_1),$ $\bar{b}=(a_1+a_2-b_0-b_1, b_1).$
\begin{equation}
{\bf E}[\beta_{2,1}(a, b, \bar{b})^q] = \frac{S_2(b_0\,|\,a)}{S_2(q+b_0\,|\,a)}\frac{S_2(q+b_0+b_1\,|\,a)}{S_2(b_0+b_1\,|\,a)}.
\end{equation}
\section{Morris and Selberg Integral Distributions: Probabilistic Approach}\label{Probabilistic}
\noindent
In this section we reconsider the problem of finding positive probability distributions having the property
that their positive integer moments are given by the Morris and Selberg integrals, respectively. Throughout this section we let $\tau>1$
and restrict $\lambda_1, \lambda_2\geq 0$ for simplicity. As we did in Sections \ref{CirAnalytical} and \ref{IntAnalytical}, we write $\tau$ as an abbreviation of $a=(1,\tau)$ in the list of parameters of the double gamma function and $\beta_{2,2}(a, b).$ The proofs of all results
in this section are given in Section \ref{ProbProofs}.
\subsection{Morris Integral Probability Distribution}
\begin{theorem}[Existence and Properties]\label{theoremcircle}
The function
\begin{align}\label{thefunctioncopy}
\mathfrak{M}(q\,|\tau,\,\lambda_1,\,\lambda_2)=&\frac{(\tau^{\frac{1}{\tau}})^q}{\Gamma^q\bigl(1-\frac{1}{\tau}\bigr)}
\frac{\Gamma_2(\tau(\lambda_1+\lambda_2+1)+1-q\,|\,\tau)}{\Gamma_2(\tau(\lambda_1+\lambda_2+1)+1\,|\,\tau)}
\frac{\Gamma_2(-q+\tau\,|\,\tau)}{\Gamma_2(\tau\,|\,\tau)}\times \nonumber \\ & \times
\frac{\Gamma_2(\tau(1+\lambda_1)+1\,|\,\tau)}{\Gamma_2(\tau(1+\lambda_1)+1-q\,|\,\tau)}
\frac{\Gamma_2(\tau(1+\lambda_2)+1\,|\,\tau)}{\Gamma_2(\tau(1+\lambda_2)+1-q\,|\,\tau)}
\end{align}
is the Mellin transform of the distribution
\begin{align}
M_{(\tau, \lambda_1, \lambda_2)} = & \frac{ \tau^{1/\tau}}{\Gamma(1-1/\tau)} \,\beta^{-1}_{2,2}(\tau, b_0=\tau,\,b_1=1+\tau \lambda_1, \,b_2=1+\tau \lambda_2) \times \nonumber \\ & \times
\beta_{1,0}^{-1}(\tau, b_0=\tau(\lambda_1+\lambda_2+1)+1), \label{thedecompcircle}
\end{align}
where $\beta^{-1}_{2,2}(a, b)$ is the inverse Barnes beta distribution of type $(2,2)$ and $\beta^{-1}_{1,0}(a, b)$ is an independent inverse Barnes beta distribution of type $(1,0).$ In particular, $\log M_{(\tau, \lambda_1, \lambda_2)}$ is infinitely divisible and the Stieltjes moment problem for $M^{-1}_{(\tau, \lambda_1, \lambda_2)}$ is determinate (unique solution).
In the special case of $\lambda_1=\lambda_2=0,$ we have
\begin{gather}
\mathfrak{M}(q\,|\tau, 0, 0)=\frac{1}{\Gamma^q\bigl(1-\frac{1}{\tau}\bigr)}\Gamma\bigl(1-\frac{q}{\tau}\bigr), \\
M_{(\tau, 0, 0)} = \frac{ \tau^{1/\tau}}{\Gamma(1-1/\tau)}\beta_{1,0}^{-1}(\tau, b_0=\tau).
\end{gather}
\end{theorem}
The special case of $\lambda_1=\lambda_2=0$ was first treated in \cite{FyoBou} and corresponds to
the Dyson integral. The general case first appeared in \cite{Me16}.
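The two displays of the special case are consistent: using ${\bf E}\bigl[\beta_{1,0}(\tau, b_0=\tau)^q\bigr]=\tau^{q/\tau}\,\Gamma(1+q/\tau),$ the Mellin transform of the right-hand side of the second display reduces to the first. A Python sketch of this check (parameter values arbitrary, subject to $\Re(q)<\tau$):

```python
import math

def mellin_beta10(q, a, b0):
    # E[beta_{1,0}(a, b0)^q] = a^{q/a} Gamma((q+b0)/a) / Gamma(b0/a)
    return a ** (q / a) * math.gamma((q + b0) / a) / math.gamma(b0 / a)

def morris_mellin(q, tau):
    # M(q | tau, 0, 0) = Gamma(1 - q/tau) / Gamma(1 - 1/tau)^q
    return math.gamma(1.0 - q / tau) / math.gamma(1.0 - 1.0 / tau) ** q

tau, q = 2.5, 0.8   # any q < tau works
prefactor = tau ** (1.0 / tau) / math.gamma(1.0 - 1.0 / tau)
lhs = prefactor ** q * mellin_beta10(-q, tau, tau)  # Mellin transform of M_{(tau,0,0)}
print(lhs, morris_mellin(q, tau))
```
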
\subsection{Selberg Integral Probability Distribution}
We begin with a ``master'' theorem (Theorem \ref{general}), which, as we will see below, is a corollary of the Barnes multiplication formula in Eq. \eqref{multiplic},
followed by the main result in Theorem \ref{BSM} and a corollary pertaining to the Stieltjes moment problem for the Selberg integral distribution.
\begin{theorem}\label{general}
Let $a\triangleq(a_1,\,a_2),$ $a_i>0,$ and $x\triangleq
(x_1,\,x_2),$ $x_1,\,x_2>0,$ then
\begin{equation}\label{eqgeneral}
{\bf E}\big[M_{(a, x)}^q\bigr] \triangleq
\frac{\Gamma_2(x_1-q\,|\,a)}{\Gamma_2(x_1\,|\,a)}
\frac{\Gamma_2(x_2-q\,|\,a)}{\Gamma_2(x_2\,|\,a)}
\frac{\Gamma_2(a_1+a_2-q\,|\,a)}{\Gamma_2(a_1+a_2\,|\,a)}
\frac{\Gamma_2(x_1+x_2-q\,|\,a)}{\Gamma_2(x_1+x_2-2q\,|\,a)}
\end{equation}
is the Mellin transform of a probability distribution $M_{(a, x)}$
on $(0,\,\infty).$ Let $L$ be lognormal
\begin{equation}\label{Ldefin}
L \triangleq \exp\bigl(\mathcal{N}(0,\,4\log 2/a_1a_2)\bigr),
\end{equation}
and let $X_1,\,X_2,\,X_3$ have the $\beta^{-1}_{2, 2}(a, b)$
distribution with the parameters
\begin{align}
X_1 &\triangleq \beta_{2,2}^{-1}\Bigl(a,
b_0=x_1,\,b_1=b_2=(x_2-x_1)/2\Bigr), \label{X1}\\
X_2 & \triangleq \beta_{2,2}^{-1}\Bigl(a,
b_0=(x_1+x_2)/2,\,b_1=a_1/2,\,b_2=a_2/2\Bigr), \label{X2}\\
X_3 & \triangleq \beta_{2,2}^{-1}\Bigl(a, b_0=a_1+a_2,\,
b_1=b_2=(x_1+x_2-a_1-a_2)/2\Bigr). \label{X3}
\end{align}
Then, $M_{(a, x)}$ has the factorization
\begin{equation}\label{generaldecomp}
M_{(a, x)} \overset{{\rm in \,law}}{=}
2^{-\bigl(2(x_1+x_2)-(a_1+a_2)\bigr)/a_1a_2}\, L\,X_1\,X_2\,X_3.
\end{equation}
In particular, $\log M_{(a, x)}$ is absolutely continuous and
infinitely divisible.
\end{theorem}
\begin{theorem}[Selberg Integral Probability Distribution]\label{BSM}
Let $\mathfrak{M}(q\,|\,\tau,\lambda_1,\lambda_2)$ be as in Eq. \eqref{thefunctioninterval}
for $\Re(q)<\tau.$ Then, $\mathfrak{M}(q\,|\,\tau,\lambda_1,\lambda_2)$ is the Mellin transform of a
probability distribution $M_{(\tau, \lambda_1, \lambda_2)}$ on $(0,\infty),$
\begin{equation}\label{Mint}
\mathfrak{M}(q\,|\,\tau,\lambda_1,\lambda_2) = {\bf E}\bigl[M_{(\tau,\lambda_1,\lambda_2)}^q\bigr], \;\Re(q)<\tau,
\end{equation}
$\log M_{(\tau, \lambda_1, \lambda_2)}$ is absolutely continuous and
infinitely divisible.
Let
\begin{equation}\label{xidef}
x_i(\tau,\lambda_i) \triangleq 1+\tau(1+\lambda_i),\,i=1,2,\,a_1=1, \,a_2=\tau,
\end{equation}
and $L$ and $X_i$ be as in Theorem \ref{general}
corresponding to $a(\tau)=(1,\tau)$ and $x(\tau,\lambda)=\bigl(x_1(\tau,\lambda_1), x_2(\tau,\lambda_2)\bigr),$
\begin{align}
L(\tau) & \triangleq \exp\bigl(\mathcal{N}(0,\,4\log 2/\tau)\bigr), \\
X_1(\tau, \lambda) &\triangleq \beta_{2,2}^{-1}\Bigl(\tau,
b_0=1+\tau+\tau\lambda_1,\,b_1=\tau(\lambda_2-\lambda_1)/2, \,
b_2=\tau(\lambda_2-\lambda_1)/2\Bigr),\\
X_2(\tau, \lambda) & \triangleq \beta_{2,2}^{-1}\Bigl(\tau,
b_0=1+\tau+\tau(\lambda_1+\lambda_2)/2,\,b_1=1/2,\,b_2=\tau/2\Bigr),\\
X_3(\tau, \lambda) & \triangleq \beta_{2,2}^{-1}\Bigl(\tau, b_0=1+\tau,\,
b_1=\frac{1+\tau+\tau\lambda_1+\tau\lambda_2}{2}, \,
b_2=\frac{1+\tau+\tau\lambda_1+\tau\lambda_2}{2}\Bigr),
\end{align}
\emph{i.e.} $\log L$ is a zero-mean normal with variance $4\log
2/\tau$ and $X_1,\,X_2,\,X_3$ have the $\beta^{-1}_{2,
2}(\tau, b)$ distribution with the specified parameters.
Define also the distribution $Y$ to be a negative power of an exponential random variable, specified by its density
\begin{align}
{\bf P}\bigl[Y(\tau)\in dy\bigr] \triangleq &\tau\,y^{-1-\tau}\exp\bigl(-y^{-\tau}\bigr)\,dy,\; y>0.\label{Ydist}
\end{align}
Then,
\begin{equation}\label{Decomposition}
M_{(\tau, \lambda_1, \lambda_2)} \overset{{\rm in \,law}}{=} 2\pi\,
2^{-\bigl[3(1+\tau)+2\tau(\lambda_1+\lambda_2)\bigr]/\tau}\,\Gamma\bigl(1-1/\tau\bigr)^{-1}\,
L\,X_1\,X_2\,X_3\,Y.
\end{equation}
\end{theorem}
\begin{corollary}\label{Uniqueness}
The Stieltjes moment problem for $M_{(\tau, \lambda_1, \lambda_2)}^{-1}$ is indeterminate.
Let $\widetilde{M}_{(\tau, \lambda_1, \lambda_2)}$ be a probability distribution on $(0,\,\infty)$ such that
\begin{equation}
\widetilde{M}_{(\tau, \lambda_1, \lambda_2)} \overset{{\rm in \,law}}{=} L\,N,
\end{equation}
\begin{equation}\label{NYdef}
L \triangleq \exp\bigl(\mathcal{N}(0,\,4\log 2/\tau)\bigr),
\end{equation}
\emph{i.e.} $\log L$ is a zero-mean normal with variance $4\log
2/\tau,$
and $N$ is some distribution that is independent of $L.$ If the negative moments of
$\widetilde{M}_{(\tau, \lambda_1, \lambda_2)}$ equal those of $M_{(\tau, \lambda_1, \lambda_2)}$ in Eq. \eqref{IntNegMoments},
then $\widetilde{M}_{(\tau, \lambda_1, \lambda_2)}\overset{{\rm in \,law}}{=}M_{(\tau, \lambda_1, \lambda_2)}.$
\end{corollary}
The last result that we consider in this section concerns the scaling invariance
of the analytic extension of the Selberg integral in Theorem \ref{BSM}. Specifically, we are interested in the behavior of the decomposition in Eq. \eqref{Decomposition} under the involution $\tau\rightarrow 1/\tau.$ Let $a$ and $x$ be as in Eq. \eqref{xidef}.
\begin{theorem}\label{scalinvar}
The distributions $L(\tau),$ $X_i(\tau,\lambda),$ and $M_{(a, x)}$ are involution invariant under $\tau\rightarrow 1/\tau$ and
$\lambda\rightarrow \tau\lambda.$
\begin{align}
L^{1/\tau}\bigl(\frac{1}{\tau}\bigr) &\overset{{\rm in \,law}}{=} L(\tau), \label{Linvar} \\
X^{1/\tau}_i\bigl(\frac{1}{\tau}, \tau\lambda\bigr) &\overset{{\rm in \,law}}{=}
X_i(\tau, \lambda), \;i=1,2,3, \label{Xinvar} \\
M^{1/\tau}_{\bigl(a(1/\tau), \,x(1/\tau,\tau\lambda)\bigr)} &\overset{{\rm in \,law}}{=} M_{\bigl(a(\tau), \,x(\tau,\lambda)\bigr)}. \label{Minvar}
\end{align}
\end{theorem}
\begin{corollary}\label{mycorollary}
The Mellin transform of the Selberg integral distribution satisfies the identity in Theorem \ref{Mtransforminvol}.
\end{corollary}
Theorem \ref{BSM} and Corollary \ref{Uniqueness} first appeared in \cite{MeIMRN}, where we discovered the decomposition in Eq. \eqref{Decomposition}, which led to the development of the theory of Barnes beta distributions in \cite{Me13}.
The probabilistic approach based on Theorem \ref{general} first appeared in \cite{Me14}, where we gave a new, purely probabilistic proof of Eq. \eqref{Decomposition} and also established Theorem \ref{scalinvar}.
\begin{remark}
It is interesting to point out that the $L$ and $X_i,$ $i=1,2,3,$ distributions
on the one hand and $Y$ on the other play intrinsically different roles in the structure of $M_{(\tau,\lambda_1,\lambda_2)}.$ This can be seen on three levels. First, the proof of Theorem \ref{BSM} indicates that $LX_1X_2X_3$
appears as one block from Theorem \ref{general}, while $Y$ is only needed to match the moments given by Selberg's formula. Second, Theorem \ref{scalinvar} shows that $L$ and the $X_i$ are involution invariant, whereas the proof of Corollary \ref{mycorollary} shows that
$Y$ is not. Finally, the law of the total mass of the Bacry-Muzy measure on the circle was conjectured in \cite{FyoBou} and verified in \cite{Remy} to be precisely
the same as $Y,$ while our conjecture for the law of the total mass of this measure on the unit interval is $LX_1X_2X_3$ times $Y.$ It thus appears
that $Y$ comes from the circle and $LX_1X_2X_3$ is a superstructure that is needed to transform the law of the total mass from the circle to the unit interval.
\end{remark}
\section{Proofs of Probabilistic Results}\label{ProbProofs}
\begin{proof}[Proof of Theorem \ref{theoremcircle}]
The inverse Barnes beta distribution $\beta^{-1}_{2,2}(a, b)$ with parameters
$a=(1,\tau)$ and $b=(\tau, 1+\tau\lambda_1, 1+\tau\lambda_2)$ has the Mellin transform
\begin{align}
{\bf E}\bigl[\beta^{-q}_{2,2}(\tau, \tau, 1+\tau\lambda_1, 1+\tau\lambda_2)\bigr] = &\frac{\Gamma_2(-q+\tau\,|\,\tau)}{\Gamma_2(\tau\,|\,\tau)}
\frac{\Gamma_2(\tau(1+\lambda_1)+1\,|\,\tau)}{\Gamma_2(\tau(1+\lambda_1)+1-q\,|\,\tau)}\times \nonumber \\ & \times
\frac{\Gamma_2(\tau(1+\lambda_2)+1\,|\,\tau)}{\Gamma_2(\tau(1+\lambda_2)+1-q\,|\,\tau)}
\times \nonumber \\ & \times
\frac{\Gamma_2(\tau(\lambda_1+\lambda_2+1)+2-q\,|\,\tau)}{\Gamma_2(\tau(\lambda_1+\lambda_2+1)+2\,|\,\tau)}.
\end{align}
The difference between this expression and that in Eq. \eqref{thefunctioncopy} is in the last factor. Applying the functional
equation, we get
\begin{align}
\frac{\Gamma_2(\tau(\lambda_1+\lambda_2+1)+1-q\,|\,\tau)}{\Gamma_2(\tau(\lambda_1+\lambda_2+1)+1\,|\,\tau)}
=&\tau^{-\frac{q}{\tau}}
\frac{\Gamma(\lambda_1+\lambda_2+1+\frac{1-q}{\tau})}{\Gamma(\lambda_1+\lambda_2+1+\frac{1}{\tau})}\times \nonumber \\ & \times
\frac{\Gamma_2(\tau(\lambda_1+\lambda_2+1)+2-q\,|\,\tau)}{\Gamma_2(\tau(\lambda_1+\lambda_2+1)+2\,|\,\tau)}.
\end{align}
Recalling the definition of the Mellin transform of the inverse Barnes beta $\beta^{-1}_{1,0}(a, b)$ with parameters
$a=\tau$ and $b=1+\tau(1+\lambda_1+\lambda_2),$
\begin{equation}
{\bf E} \bigl[\beta^{-q}_{1,0}\bigl(\tau, 1+\tau(1+\lambda_1+\lambda_2)\bigr)\bigr] =
\tau^{-\frac{q}{\tau}} \frac{\Gamma(-\frac{q}{\tau}+1+\lambda_1+\lambda_2+\frac{1}{\tau})}{\Gamma(1+\lambda_1+\lambda_2+\frac{1}{\tau})},
\end{equation}
we see that the Mellin transform of the distribution $M_{(\tau, \lambda_1, \lambda_2)}$ in Eq. \eqref{thedecompcircle} coincides with the expression in Eq. \eqref{thefunctioncopy}.
The infinite divisibility of $\log M_{(\tau, \lambda_1, \lambda_2)}$ follows from that of $\log\beta^{-1}_{2,2}(a, b)$ and $\log\beta^{-1}_{1,0}(a, b).$
The determinacy of the Stieltjes moment problem for $M^{-1}_{(\tau, \lambda_1, \lambda_2)}$ follows from
the fact that $\beta_{2,2}(a, b)$ is compactly supported and so has a determinate moment problem, and that $\beta_{1,0}(a, b)$ is Fr\'echet and so
too has a determinate moment problem, cf. \cite{Char}, Sections 2.2 and 2.3. \qed
\end{proof}
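The Mellin transform above identifies $\beta^{-1}_{1,0}\bigl(\tau, 1+\tau(1+\lambda_1+\lambda_2)\bigr)$ in law with $(\tau G)^{-1/\tau},$ where $G$ is a standard gamma variable of shape $s=1+\lambda_1+\lambda_2+1/\tau,$ since ${\bf E}\bigl[(\tau G)^{-q/\tau}\bigr]=\tau^{-q/\tau}\,\Gamma(s-q/\tau)/\Gamma(s).$ A minimal Monte Carlo sketch of this identification (the gamma realization is inferred from the displayed formula, not taken from the construction of $\beta_{1,0}$ itself, and is not part of the formal argument):

```python
import math
import random

random.seed(0)

tau, lam1, lam2 = 2.0, 0.0, 0.0
s = 1.0 + lam1 + lam2 + 1.0 / tau   # shape parameter b_0 / tau
q = 1.0                             # moment order; needs q < tau * s

# Closed form: tau^(-q/tau) * Gamma(s - q/tau) / Gamma(s)
closed = tau ** (-q / tau) * math.gamma(s - q / tau) / math.gamma(s)

# Monte Carlo realization: X = (tau * G)^(-1/tau) with G ~ Gamma(s, 1)
n = 200_000
mc = sum((tau * random.gammavariate(s, 1.0)) ** (-q / tau)
         for _ in range(n)) / n

print(closed, mc)
```

With $\tau=2,$ $\lambda_1=\lambda_2=0,$ $q=1,$ the closed form is $2^{-1/2}/\Gamma(3/2)\approx 0.7979,$ and the sample mean agrees within Monte Carlo error.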
\begin{proof}[Proof of Theorem \ref{general}.]
Recalling the Mellin transform of $\beta_{2,2}(a, b)$ and
definition of $X_1, X_2, X_3$ in Eqs. \eqref{X1}--\eqref{X3}, we
can write for ${\bf E}\bigl[X_1^q\bigr]{\bf E}\bigl[X_2^q\bigr]{\bf
E}\bigl[X_3^q\bigr]$ after some simplification
\begin{gather}
\frac{\Gamma_2(x_1-q\,|\,a)}{\Gamma_2(x_1\,|\,a)}
\frac{\Gamma_2(x_2-q\,|\,a)}{\Gamma_2(x_2\,|\,a)}
\frac{\Gamma_2(a_1+a_2-q\,|\,a)}{\Gamma_2(a_1+a_2\,|\,a)}
\frac{\Gamma_2(x_1+x_2-q\,|\,a)}{\Gamma_2(x_1+x_2\,|\,a)} \times
\nonumber \\
\times
\frac{\Gamma_2((x_1+x_2)/2\,|\,a)}{\Gamma_2((x_1+x_2)/2-q\,|\,a)}
\frac{\Gamma_2((x_1+x_2+a_1)/2\,|\,a)}{\Gamma_2((x_1+x_2+a_1)/2-q\,|\,a)}
\frac{\Gamma_2((x_1+x_2+a_2)/2\,|\,a)}{\Gamma_2((x_1+x_2+a_2)/2-q\,|\,a)}\times
\nonumber \\
\times
\frac{\Gamma_2((x_1+x_2+a_1+a_2)/2\,|\,a)}{\Gamma_2((x_1+x_2+a_1+a_2)/2-q\,|\,a)}
.
\end{gather}
Eq. \eqref{multiplic} gives us
\begin{gather}
\Gamma_2(x_1+x_2-2q\,|\,a) = 2^{-B_{2, 2}(x_1+x_2-2q\,|\,a)/2}
\Gamma_2((x_1+x_2)/2-q\,|\,a)
\times\nonumber \\ \times
\Gamma_2((x_1+x_2+a_1)/2-q\,|\,a)\times\Gamma_2((x_1+x_2+a_2)/2-q\,|\,a)
\times\nonumber \\ \times
\Gamma_2((x_1+x_2+a_1+a_2)/2-q\,|\,a).
\end{gather}
Hence, we can write for ${\bf E}\bigl[X_1^q\bigr]{\bf
E}\bigl[X_2^q\bigr]{\bf E}\bigl[X_3^q\bigr]$
\begin{align}
{\bf E}\bigl[X_1^q\bigr]{\bf
E}\bigl[X_2^q\bigr]{\bf E}\bigl[X_3^q\bigr] & =
\frac{2^{B_{2, 2}(x_1+x_2\,|\,a)/2}}{2^{B_{2,
2}(x_1+x_2-2q\,|\,a)/2}}
\frac{\Gamma_2(x_1-q\,|\,a)}{\Gamma_2(x_1\,|\,a)}
\frac{\Gamma_2(x_2-q\,|\,a)}{\Gamma_2(x_2\,|\,a)}\times \nonumber \\ & \times
\frac{\Gamma_2(a_1+a_2-q\,|\,a)}{\Gamma_2(a_1+a_2\,|\,a)}
\frac{\Gamma_2(x_1+x_2-q\,|\,a)}{\Gamma_2(x_1+x_2-2q\,|\,a)}.
\end{align}
Now, using the formula for $B_{2,2}(x\,|\,a)$ in Eq. \eqref{B22},
and the definition of $L$ in Eq. \eqref{Ldefin}, we can write
\begin{align}
{\bf E}\bigl[L^q\bigr]{\bf E}\bigl[X_1^q\bigr]{\bf
E}\bigl[X_2^q\bigr]{\bf E}\bigl[X_3^q\bigr]= & 2^{\bigl(2(x_1+x_2)-(a_1+a_2)\bigr)q/a_1a_2}\frac{\Gamma_2(x_1-q\,|\,a)}{\Gamma_2(x_1\,|\,a)}
\frac{\Gamma_2(x_2-q\,|\,a)}{\Gamma_2(x_2\,|\,a)}
\times\nonumber \\ & \times
\frac{\Gamma_2(a_1+a_2-q\,|\,a)}{\Gamma_2(a_1+a_2\,|\,a)}
\frac{\Gamma_2(x_1+x_2-q\,|\,a)}{\Gamma_2(x_1+x_2-2q\,|\,a)}.
\end{align}
This proves that the expression on the right-hand side of
Eq. \eqref{eqgeneral} is in fact the Mellin transform of the probability
distribution on $(0,\,\infty)$ that is given by the right-hand side of Eq. \eqref{generaldecomp}.
Its properties follow from the known properties of the normal and $\beta_{2,2}(a, b)$ distributions.\qed
\end{proof}
We now proceed to give a probabilistic proof of the existence of the Selberg integral probability distribution that is based on Theorem \ref{general}.
\begin{proof}[Proof of Theorem \ref{BSM} and Corollary \ref{Uniqueness}]
Recall the definition of $a=(a_1, a_2)$ and $x_i(\tau,\lambda)$ in Eq. \eqref{xidef},
and let $L(\tau),$ $X_i(\tau,\lambda),$ and $Y(\tau)$ be as in the statement of Theorem \ref{BSM}.
By the functional equation of $\Gamma_2$ in Eq. \eqref{feq}
and the definition of $Y(\tau)$ in Eq. \eqref{Ydist}, we have
\begin{align}
{\bf E}[Y(\tau)^q] & =\Gamma(1-q/\tau), \label{YMellin} \\
\Gamma_{2}(\tau-q\,|\,\tau) & =
\frac{\tau^{(\tau-q)/\tau-1/2}}{\sqrt{2\pi}}{\bf E}[Y(\tau)^q]\,\Gamma_2\bigl(1+\tau-q\,|\,\tau\bigr). \label{Ytransform}
\end{align}
Then, given Theorem \ref{general}, the difference between Eqs. \eqref{thefunctioninterval} and \eqref{eqgeneral} is in the third double gamma ratio. Define
\begin{equation}
M_{(\tau, \lambda_1, \lambda_2)} \triangleq 2\pi\,
2^{-\bigl[3(1+\tau)+2\tau(\lambda_1+\lambda_2)\bigr]/\tau}\,
\frac{
L(\tau)\,X_1(\tau,\lambda)\,X_2(\tau,\lambda)\,X_3(\tau,\lambda)\,Y(\tau)}{\Gamma\bigl(1-1/\tau\bigr)}.
\end{equation}
It is now elementary to see that the Mellin transform of $M_{(\tau, \lambda_1, \lambda_2)}$ equals the expression in Eq. \eqref{thefunctioninterval}. The appearance of the $Y$ distribution in Eq. \eqref{Decomposition} is to account for this difference in the third gamma ratio.
The
remaining computation, which determines the overall constant in Eq. \eqref{Decomposition}, is straightforward.
The proof of Corollary \ref{Uniqueness} follows from Eq. \eqref{Decomposition} due to
the determinacy of the Stieltjes moment problems for $\beta_{2,2}$ (compactly supported) and $Y^{-1}$ (Carleman's criterion)
and its indeterminacy for $L^{-1}$ (lognormal), cf. \cite{Char}, Sections 2.2 and 2.3.
\qed
\end{proof}
\begin{proof}[Proof of Theorem \ref{scalinvar}.]
We have by Eq. \eqref{Ldefin},
\begin{align}
{\bf E}\Bigl[L^{q/\tau} \bigl(\frac{1}{\tau}\bigr)\Bigr] & = {\bf E}\Bigl[e^{(q/\tau)\mathcal{N}(0,\,4\tau\log 2)}\Bigr], \nonumber \\
& = e^{4q^2\log 2/2\tau} \equiv {\bf E}\bigl[L^{q} (\tau)\bigr].
\end{align}
To prove Eq. \eqref{Xinvar}, observe the identity
\begin{equation}\label{xinvar}
x_i(\tau, \lambda_i)/\tau = x_i(1/\tau, \tau\lambda_i), \;i=1,2.
\end{equation}
Hence, by the definition of $X_i(\tau,\lambda)$ and
Theorem \ref{barnesbetascaling}, we have the identity
\begin{align}
X^{1/\tau}_i(1/\tau, \tau\lambda) & \overset{{\rm in \,law}}{=} \beta_{2,2}^{-1/\tau}\bigl(a/\tau, b/\tau\bigr), \nonumber \\
& \overset{{\rm in \,law}}{=} \beta_{2,2}^{-1}\bigl(a, b\bigr),
\end{align}
where $a=(1,\tau)$ and $b=(b_0, b_1, b_2)$ is given in terms of
$a$ and $x_i, \,i=1, 2$ in Eqs. \eqref{X1}--\eqref{X3}.
The proof of Eq. \eqref{Minvar} follows from Eqs. \eqref{Linvar} and \eqref{Xinvar}
(or can be seen directly from Eq. \eqref{eqgeneral} by Eqs. \eqref{scale} and \eqref{B22}). \qed
\end{proof}
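Both sides of Eq. \eqref{Linvar} can also be compared at the level of closed-form Mellin transforms: each reduces to $\exp\bigl(2q^2\log 2/\tau\bigr).$ The following sketch assumes $L(\tau)=\exp\bigl(\mathcal{N}(0,\,4\log 2/\tau)\bigr),$ which is our reading of Eq. \eqref{Ldefin}; it is consistent with the computation above and with the $\tau=1$ definitions used for the critical distributions.

```python
import math

def mellin_L(q, tau):
    """E[L(tau)^q] for lognormal L(tau) = exp(N(0, 4*log(2)/tau)).
    The variance 4*log(2)/tau is our reading of Eq. (Ldefin); it matches
    the tau = 1 case, where log L has variance 4*log(2)."""
    return math.exp(q * q * (4.0 * math.log(2.0) / tau) / 2.0)

for q in (0.3, 1.0, 2.5):
    for tau in (1.5, 2.0, 5.0):
        lhs = mellin_L(q / tau, 1.0 / tau)   # E[ L(1/tau)^(q/tau) ]
        rhs = mellin_L(q, tau)               # E[ L(tau)^q ]
        assert abs(lhs - rhs) < 1e-12 * rhs
```

Both evaluations collapse to the same exponent $2q^2\log 2/\tau,$ which is the content of Eq. \eqref{Linvar} for the lognormal factor.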
\begin{proof}[Proof of Theorem \ref{Mtransforminvol} and Corollary \ref{mycorollary}.]
The $Y$ distribution in Eq. \eqref{Decomposition} is not involution invariant, confer Eq. \eqref{YMellin}. Instead,
we have by Eq. \eqref{YMellin} the identity
\begin{equation}
{\bf E}\bigl[Y^{q/\tau}(1/\tau)\bigr] = \frac{\Gamma(1-q)}{\Gamma(1-q/\tau)}
\,{\bf E}\bigl[Y^q(\tau)\bigr].
\end{equation}
The proof now follows from Eq. \eqref{Minvar}. \qed
\end{proof}
\section{Critical Morris and Selberg Integral Distributions}\label{DerivM}
\noindent
In this section we will consider conjectured laws of the derivative martingales of the Bacry-Muzy GMC on the circle and interval.
These laws are obtained from the sub-critical laws that we studied in previous sections in the
limit of $\tau\rightarrow 1,$ \emph{i.e.} in the limit of the so-called
critical temperature, and are of particular interest in the theory of critical GMC, confer \cite{barraletal} and \cite{dupluntieratal}.
\begin{definition}[Critical Morris Integral Distribution]
Let $M_{(\tau,\lambda_1,\lambda_2)}$ denote the Morris integral distribution as in Theorem \ref{theoremcircle}.
\begin{equation}
M_{(\tau=1,\lambda_1,\lambda_2)}\triangleq \lim\limits_{\tau\downarrow 1} \Gamma\bigl(1-1/\tau\bigr)\,M_{(\tau,\lambda_1,\lambda_2)}.
\end{equation}
\end{definition}
This limit exists by Eq. \eqref{thedecompcircle}. We will refer to $M_{(\tau=1,\lambda_1,\lambda_2)}$ as the critical Morris integral distribution.
\begin{definition}[Critical Selberg Integral Distribution]
Let $M_{(\tau,\lambda_1,\lambda_2)}$ denote the Selberg integral distribution as in Theorem \ref{BSM}.
\begin{equation}\label{Selbergcrit}
M_{(\tau=1,\lambda_1,\lambda_2)}\triangleq \lim\limits_{\tau\downarrow 1} \Gamma\bigl(1-1/\tau\bigr)\,M_{(\tau,\lambda_1,\lambda_2)}.
\end{equation}
\end{definition}
This limit exists by Eq. \eqref{Decomposition}. We will refer to $M_{(\tau=1,\lambda_1,\lambda_2)}$ as the critical Selberg integral distribution.
Recall the definition of the Barnes $G(z)$ function in Eq. \eqref{Gdef}. Throughout this section we assume $\Re(q)<1.$
\begin{theorem}[Critical Morris integral distribution]\label{criticalMorris}
\begin{gather}
{\bf E}\bigl[M_{(\tau=1,\lambda_1,\lambda_2)}^q\bigr] =
\frac{G(2-q+\lambda_1)}{G(2+\lambda_1)}
\frac{G(2-q+\lambda_2)}{G(2+\lambda_2)} \frac{G(1)}{G(-q+1)}
\frac{G(2+\lambda_1+\lambda_2)}{G(2-q+\lambda_1+\lambda_2)}. \label{MGcrit}
\end{gather}
\begin{align}
\mathfrak{M}(q\,|\tau=1,\,\lambda_1,\,\lambda_2) = & \Gamma(1-q) \prod\limits_{m=1}^\infty \Bigl[
\frac{\Gamma(1-q+m)}{\Gamma(1+m)} \frac{\Gamma(1+\lambda_1+m)}{\Gamma(1-q+\lambda_1+m)}
\frac{\Gamma(1+\lambda_2+m)}{\Gamma(1-q+\lambda_2+m)} \nonumber \times \\ & \times \frac{\Gamma(1-q+\lambda_1+\lambda_2+m)}{\Gamma(1+\lambda_1+\lambda_2+m)}\Bigr], \\
= & \frac{\Gamma(\lambda_1+\lambda_2+2-q)}{\Gamma(\lambda_1+\lambda_2+2)} \times \nonumber \\
& \times
\prod\limits_{k=0}^\infty \Bigl[
\frac{1+k}{1+k-q} \frac{2+\lambda_1+k-q}
{2+\lambda_1+k}
\frac{2+\lambda_2+k-q}{2+\lambda_2+k}
\frac{3+\lambda_1+\lambda_2+k}{3+\lambda_1+\lambda_2+k-q} \Bigr]^{k+1}
.\label{Morrisspecial}
\end{align}
The negative moments of the critical Morris integral distribution are
\begin{equation}
{\bf E}[M_{(\tau=1,\lambda_1,\lambda_2)}^{-n}] = \prod\limits_{j=0}^{n-1} \frac{\Gamma(2+\lambda_1+j) \,\Gamma(2+\lambda_2+j)}{\Gamma(2+\lambda_1+\lambda_2+j)\,\Gamma(1+j)}. \label{CirNegMomentscrit}
\end{equation}
The distribution is a product of two Barnes beta distributions,
\begin{align}
M_{(\tau=1, \lambda_1, \lambda_2)} = & \beta^{-1}_{2,2}(a=(1,1), b_0=1,\,b_1=1+ \lambda_1, \,b_2=1+\lambda_2) \times \nonumber \\ & \times
\beta_{1,0}^{-1}(a=1, b_0=\lambda_1+\lambda_2+2), \label{thedecompcirclecrit}
\end{align}
where $\beta^{-1}_{2,2}(a, b)$ is the inverse Barnes beta of type $(2,2)$ and $\beta^{-1}_{1,0}(a, b)$ is the independent inverse Barnes beta of type $(1,0).$ In particular, $\log M_{(\tau=1, \lambda_1, \lambda_2)}$ is infinitely divisible.
In the special case of $\lambda_1=\lambda_2=0,$ we have
\begin{equation}
{\bf E}\bigl[M_{(\tau=1, 0, 0)}^q\bigr] = \Gamma(1-q),
\end{equation}
\emph{i.e.} $M_{(\tau=1, 0, 0)}$ is Fr\'echet.
\end{theorem}
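For integer $\lambda_1, \lambda_2$ the equivalence of Eq. \eqref{MGcrit} at $q=-n$ and the product formula in Eq. \eqref{CirNegMomentscrit} can be checked mechanically, using the classical evaluation $G(m)=\prod_{k=1}^{m-2}k!$ of the Barnes $G$ function at positive integers together with $G(z+1)=\Gamma(z)G(z).$ A short numerical sketch (not part of the formal argument):

```python
import math

def barnes_G(m):
    # Barnes G-function at a positive integer: G(m) = prod_{k=1}^{m-2} k!
    out = 1
    for k in range(1, m - 1):
        out *= math.factorial(k)
    return out

def neg_moment_G(n, l1, l2):
    # E[M^{-n}] from Eq. (MGcrit) evaluated at q = -n, integer lambdas
    return (barnes_G(2 + n + l1) / barnes_G(2 + l1)
            * barnes_G(2 + n + l2) / barnes_G(2 + l2)
            * barnes_G(1) / barnes_G(1 + n)
            * barnes_G(2 + l1 + l2) / barnes_G(2 + n + l1 + l2))

def neg_moment_prod(n, l1, l2):
    # E[M^{-n}] from the product formula, Eq. (CirNegMomentscrit)
    out = 1.0
    for j in range(n):
        out *= (math.factorial(1 + l1 + j) * math.factorial(1 + l2 + j)
                / (math.factorial(1 + l1 + l2 + j) * math.factorial(j)))
    return out

for n in range(1, 5):
    for l1 in range(3):
        for l2 in range(3):
            assert abs(neg_moment_G(n, l1, l2) - neg_moment_prod(n, l1, l2)) \
                   < 1e-9 * neg_moment_prod(n, l1, l2)
```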
We will now describe the critical Selberg integral distribution.
\begin{theorem}[Critical Selberg integral distribution]\label{criticalSelberg}
\begin{gather}
{\bf E}\bigl[M_{(\tau=1,\lambda_1,\lambda_2)}^q\bigr] =
\frac{G(2+\lambda_1)}{G(2-q+\lambda_1)}
\frac{G(2+\lambda_2)}{G(2-q+\lambda_2)} \frac{G(1)}{G(-q+1)}
\frac{G(4-2q+\lambda_1+\lambda_2)}{G(4-q+\lambda_1+\lambda_2)}. \label{MGIcrit}
\end{gather}
\begin{gather}
{\bf E}[M_{(\tau=1,\lambda_1,\lambda_2)}^q] = \Gamma\bigl(1-q\bigr)
\frac{\Gamma\bigl(3-2q+\lambda_1+\lambda_2\bigr)}{
\Gamma\bigl(3-q+\lambda_1+\lambda_2\bigr)}
\prod\limits_{m=1}^\infty m^{2q}
\frac{\Gamma\bigl(1-q+m\bigr)}{\Gamma\bigl(1+m\bigr)}
\times \nonumber \\
\times
\frac{\Gamma\bigl(1-q+\lambda_1+m\bigr)}{\Gamma\bigl(1+\lambda_1+m\bigr)}
\frac{\Gamma\bigl(1-q+\lambda_2+m\bigr)}{\Gamma\bigl(1+\lambda_2+m\bigr)}
\frac{\Gamma\bigl(2-q+\lambda_1+\lambda_2+m\bigr)}{\Gamma\bigl(2-2q+\lambda_1+\lambda_2+m\bigr)}.\label{InfiniteSelbergcrit}
\end{gather}
\begin{equation}
{\bf E}[M_{(\tau=1,\lambda_1,\lambda_2)}^{-n}] = \prod_{k=0}^{n-1}
\frac{\Gamma\bigl(4+\lambda_1+\lambda_2+n+k\bigr)
}{
\Gamma\bigl(2+\lambda_1+k\bigr)\Gamma\bigl(2+\lambda_2+k\bigr)
\Gamma\bigl(1+k\bigr) }. \label{IntNegMomentscrit}
\end{equation}
Define the distributions
\begin{align}
L \triangleq &\exp\bigl(\mathcal{N}(0,\,4\log 2)\bigr), \\ Y
\triangleq &\,y^{-2}\exp\bigl(-y^{-1}\bigr)\,dy,\; y>0,\label{Ydistcrit}
\end{align}
\emph{i.e.} $\log L$ is a zero-mean normal with variance $4\log
2$ and $Y$ is Fr\'echet. Let $X_1,\,X_2,\,X_3$ have $\beta^{-1}_{2,2}(a=(1,1), b)$ distributions with the parameters
\begin{align}
X_1 &\triangleq \beta_{2,2}^{-1}\Bigl(a=(1,1),
b_0=2+\lambda_1,\,b_1=(\lambda_2-\lambda_1)/2, \,
b_2=(\lambda_2-\lambda_1)/2\Bigr),\\
X_2 & \triangleq \beta_{2,2}^{-1}\Bigl(a=(1,1),
b_0=2+(\lambda_1+\lambda_2)/2,\,b_1=1/2,\,b_2=1/2\Bigr),\\
X_3 & \triangleq \beta_{2,2}^{-1}\Bigl(a=(1,1), b_0=2,\,
b_1=1+(\lambda_1+\lambda_2)/2, \,
b_2=1+(\lambda_1+\lambda_2)/2\Bigr).
\end{align}
Then,
\begin{equation}\label{Decompositioncrit}
M_{(\tau=1, \lambda_1, \lambda_2)} \overset{{\rm in \,law}}{=} 2\pi\,
2^{-\bigl[6+2(\lambda_1+\lambda_2)\bigr]}\,
L\,X_1\,X_2\,X_3\,Y.
\end{equation}
In particular, $M_{(\tau=1,\lambda_1,\lambda_2)}$ is infinitely divisible and absolutely continuous.
\end{theorem}
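The Fr\'echet factor $Y$ in Eq. \eqref{Ydistcrit} is the reciprocal of a standard exponential variable, so its Mellin transform ${\bf E}[Y^q]=\Gamma(1-q),$ $\Re(q)<1,$ lends itself to a quick Monte Carlo sanity check (again, a sketch outside the formal argument):

```python
import math
import random

random.seed(1)

q = 0.25                      # any q < 1/2 keeps the Monte Carlo variance finite
target = math.gamma(1.0 - q)  # E[Y^q] = Gamma(1 - q)

# Y = 1/E with E ~ Exp(1): the density y^(-2) exp(-1/y) is the law of 1/Exp(1)
n = 500_000
mc = sum(random.expovariate(1.0) ** (-q) for _ in range(n)) / n

print(target, mc)
```

For $q=1/4$ the target is $\Gamma(3/4)\approx 1.2254,$ and the sample mean agrees within Monte Carlo error.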
Unlike the critical Morris integral distribution, which becomes Fr\'echet in the special case of $\lambda_1=\lambda_2=0,$
the critical Selberg integral distribution remains non-trivial.
\begin{theorem}\label{Deriv}
Let $\lambda_1=\lambda_2=0.$ Then, the critical Selberg integral distribution satisfies
\begin{equation}\label{Mderiv}
{\bf E}[M_{(\tau=1,0,0)}^q] =
\frac{G(4-2q)}{G(1-q)\, G^2(2-q)\,G(4-q)}
\end{equation}
for $\Re(q)<1.$ The Mellin transform satisfies the infinite product representation
\begin{equation}
{\bf E}\bigl[M_{(\tau=1,0,0)}^q\bigr] =
\frac{\Gamma\bigl(1-q\bigr)\Gamma\bigl(3-2q\bigr)}{
\Gamma\bigl(3-q\bigr)}
\prod\limits_{m=1}^\infty m^{2q}
\frac{\Gamma^3\bigl(1-q+m\bigr)}{\Gamma^3\bigl(1+m\bigr)}
\frac{\Gamma\bigl(2-q+m\bigr)}{\Gamma\bigl(2-2q+m\bigr)}.
\end{equation}
The negative moments for $l\in\mathbb{N}$ are
\begin{equation}
{\bf E}\bigl[M_{(\tau=1,0,0)}^{-l}\bigr] =
\prod_{k=0}^{l-1}
\frac{(3+l+k)!}{(k+1)!^2 \,k!}.
\end{equation}
$M_{(\tau=1,0,0)}$ has the factorization
\begin{equation}\label{McDecomposition}
M_{(\tau=1,0,0)} = \frac{\pi}{32}\,L\,X_2\,X_3\,Y,
\end{equation}
where
\begin{align}
L & = \exp\bigl(\mathcal{N}(0, 4\log 2)\bigr) \;(\text{Lognormal}), \\
X_2 & = \beta^{-1}_{2,2}\bigl(a=(1,1), \,b_0=2, \,b_1=b_2=1/2\bigr), \\
X_3 & = \frac{2}{y^3}\,dy, \;y>1 \;(\text{Pareto}),\\
Y & = \frac{1}{y^2}\,e^{-1/y}\,dy,\;y>0 \;(\text{Fr\'echet}).
\end{align}
\end{theorem}
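At negative integer arguments the Barnes $G$ ratios in Eq. \eqref{Mderiv} reduce to finite products of factorials via $G(m)=\prod_{k=1}^{m-2}k!,$ so the negative moments of Theorem \ref{Deriv} can be cross-checked directly against the closed form; a brief numerical sketch:

```python
import math

def barnes_G(m):
    # Barnes G-function at a positive integer: G(m) = prod_{k=1}^{m-2} k!
    out = 1
    for k in range(1, m - 1):
        out *= math.factorial(k)
    return out

def neg_moment_G(l):
    # E[M^{-l}] from Eq. (Mderiv) evaluated at q = -l
    return barnes_G(4 + 2 * l) / (
        barnes_G(1 + l) * barnes_G(2 + l) ** 2 * barnes_G(4 + l))

def neg_moment_prod(l):
    # E[M^{-l}] from the explicit factorial product
    out = 1.0
    for k in range(l):
        out *= math.factorial(3 + l + k) / (
            math.factorial(k + 1) ** 2 * math.factorial(k))
    return out

for l in range(1, 6):
    assert abs(neg_moment_G(l) - neg_moment_prod(l)) < 1e-9 * neg_moment_prod(l)
```

For instance, $l=1$ gives ${\bf E}[M^{-1}_{(\tau=1,0,0)}]=4!=24$ from both expressions.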
This result first appeared in \cite{Me14}.
Theorems \ref{criticalMorris}--\ref{Deriv} are straightforward corollaries of the corresponding results for the Morris and Selberg
integral distributions in Sections \ref{CirAnalytical}, \ref{IntAnalytical}, and \ref{Probabilistic}.
We note that Eq. \eqref{Morrisspecial} follows from Eq. \eqref{thedecompcirclecrit} by means of Eq. \eqref{specialfactorization}. \qed
\begin{remark}
It is clear from Theorem \ref{Deriv} that the most nontrivial component of
the critical Selberg distribution is $X_2.$ Consider more generally a family of $\beta_{2,2}$ distributions
that is parameterized by $\delta>0.$
\begin{equation}
\beta_{2,2}(\delta) \triangleq \beta_{2,2}\bigl(a=(1,1), \,b_0=\delta, \,b_1=b_2=1/2\bigr).
\end{equation}
Its Mellin transform satisfies,
\begin{align}
{\bf E}\bigl[\beta_{2,2}^q(\delta)\bigr] & = \frac{G(\delta)}{G(q+\delta)}
\frac{G^2(q+\delta+1/2)}{G^2(\delta+1/2)}
\frac{G(\delta+1)}{G(q+\delta+1)}, \\
& = \prod\limits_{k=0}^\infty \Bigl[
\frac{\delta+k}{q+\delta+k} \frac{(q+\delta+1/2+k)^2}
{(\delta+1/2+k)^2} \frac{\delta+1+k}
{q+\delta+1+k}\Bigr]^{k+1}
\end{align}
by Corollary \ref{BarnesFactorSpecial}.
Remarkably, this distribution is intrinsically related to the Riemann xi function, cf. Section 6 in \cite{Me14}, which suggests that
the critical Selberg integral distribution is also related to the xi function.
\end{remark}
\section{Analytic Continuation of the Complex Selberg Integral}\label{AnalyticalComplexSelberg}
\noindent The complex Selberg integral, also known as the Dotsenko-Fateev integral, was computed independently by Aomoto \cite{Aom} and
Dotsenko and Fateev \cite{DF}.
It is a two-dimensional generalization of the classical Selberg integral. Unlike the Morris and Selberg integrals, it does not correspond
to the moments of total mass of a GMC measure and so its analytic continuation is not the Mellin transform of a probability distribution. It can however be
naturally interpreted as a rescaled limit of the moments of total mass of the two-dimensional GMC measure with the kernel
$-\log|\vec{r}_1-\vec{r}_2|$ in the limit of the
domain, over which it is defined, going to infinity, cf. \cite{cao17} for details. Then, the analytic continuation of the
complex Selberg integral becomes the mod-Gaussian limit of the REM (random energy model) that is associated with the
GMC measure. This observation explains the interest in the complex Selberg integral, for its analytic continuation provides a
glimpse at the behavior of the total mass in two-dimensions. Our goal in this section is to write down the analytic continuation and
prove its involution invariance. We note that the physicists' approach to this problem can be found in \cite{cao17},
which leads to interesting conjectures about the maximum of the underlying gaussian field.
Let $\vec{u}$ be an arbitrary unit vector. Following \cite{ForresterWarnaar}, define the complex Selberg integral,
\begin{gather}
\int\limits_{(\mathbb{R}^2)^n} \prod_{i=1}^n |\vec{r}_i|^{2\lambda_1} |\vec{u}-\vec{r}_i|^{2\lambda_2}\, \prod\limits_{i<j}^n |\vec{r}_i-\vec{r}_j|^{-4/\tau} d\vec{r}_1\cdots d\vec{r}_n , \nonumber \\
= \frac{1}{n!} \Bigl[\prod_{k=0}^{n-1}\frac{\Gamma(1-(k+1)/\tau)
\Gamma(1+\lambda_1-k/\tau)\Gamma(1+\lambda_2-k/\tau)}
{\Gamma(1-1/\tau)\Gamma(2+\lambda_1+\lambda_2-(n+k-1)/\tau)}\Bigr]^2\times \nonumber \\
\times \prod_{k=0}^{n-1} \frac{\sin\pi((k+1)/\tau)
\sin\pi(1+\lambda_1-k/\tau)\sin\pi(1+\lambda_2-k/\tau)}
{\sin(\pi/\tau)\sin\pi(2+\lambda_1+\lambda_2-(n+k-1)/\tau)}.\label{ComplexSelberg}
\end{gather}
\begin{theorem}[Analytic continuation of the complex Selberg integral]\label{analcomplexS}
Let $\lambda_1,$ $\lambda_2,$ and $\tau$ satisfy
\begin{gather}
\tau>1,\; \lambda_1,\lambda_2<0, \label{bound1} \\
1+\tau(1+\lambda_i) >0, \, i=1,2,\label{bound2}\\
1+\tau(\lambda_1+\lambda_2)< 0. \label{bound3}
\end{gather}
Let $q$ belong to the strip
\begin{equation}\label{qcomplexstrip}
\max\Big\{\frac{1}{2}\bigl(1+\tau(1+\lambda_1+\lambda_2)\bigr),\, 1+\tau(1+\lambda_1+\lambda_2)\Big\}<\Re(q)<\min\Big\{\tau, 1+\tau(1+\lambda_1), 1+\tau(1+\lambda_2)\Big\}.
\end{equation}
Let $\mathfrak{M}(q\,|\,\tau,\lambda_1,\lambda_2)$ denote the analytic continuation of the Selberg integral in Eq. \eqref{thefunctioninterval}
and $S_2(z\,|\,\tau)$ denote the double sine function as in Eq. \eqref{msinedef} ($M=2,\; a=(1,\tau)).$
Then, the function
\begin{align}
\mathfrak{CM}(q\,|\,\tau,\lambda_1,\lambda_2)=& \frac{ \mathfrak{M}(q\,|\,\tau,\lambda_1,\lambda_2)^2}{\Gamma(1+q) \, \bigl(4 \sin(\pi/\tau)\bigr)^q} \,
\frac{S_2(1-q+\tau(1+\lambda_1)\,|\,\tau)}{S_2(1+\tau(1+\lambda_1)\,|\,\tau)}
\frac{S_2(1-q+\tau(1+\lambda_2)\,|\,\tau)}{S_2(1+\tau(1+\lambda_2)\,|\,\tau)}\times
\nonumber \\ & \times
\frac{S_2(\tau-q\,|\,\tau)}{S_2(\tau\,|\,\tau)}
\frac{S_2(2-q+\tau(2+\lambda_1+\lambda_2)\,|\,\tau)}{S_2(2-2q+\tau(2+\lambda_1+\lambda_2)\,|\,\tau)} \label{complexS1}
\end{align}
recovers the expression in Eq. \eqref{ComplexSelberg} when $q=n\in\mathbb{N}$ satisfies Eq. \eqref{qcomplexstrip}. The function
$\mathfrak{CM}(q\,|\,\tau,\lambda_1,\lambda_2)$ satisfies
\begin{align}
\mathfrak{CM}(q\,|\,\tau,\lambda_1,\lambda_2) = &
\Bigl(\frac{\Gamma(1/\tau)\,\pi\,\tau^{\frac{1}{\tau}}}{\Gamma\bigl(1-1/\tau\bigr)}\Bigr)^q\, \Gamma\bigl(1-\frac{q}{\tau}\bigr)\,
\frac{\Gamma_2(1-q+\tau(1+\lambda_1)\,|\,\tau)}{\Gamma_2(1+\tau(1+\lambda_1)\,|\,\tau)}
\frac{\Gamma_2(1-q+\tau(1+\lambda_2)\,|\,\tau)}{\Gamma_2(1+\tau(1+\lambda_2)\,|\,\tau)}\times
\nonumber \\ & \times
\frac{\Gamma_2(1-q+\tau\,|\,\tau)}{\Gamma_2(1+\tau\,|\,\tau)}
\frac{\Gamma_2(2-q+\tau(2+\lambda_1+\lambda_2)\,|\,\tau)}{\Gamma_2(2-2q+\tau(2+\lambda_1+\lambda_2)\,|\,\tau)}
\frac{\Gamma_2(q+1+\tau\,|\,\tau)}{\Gamma_2(1+\tau\,|\,\tau)}
\times
\nonumber \\ & \times
\frac{\Gamma_2(q-\tau\lambda_1\,|\,\tau)}{\Gamma_2(-\tau\lambda_1\,|\,\tau)}
\frac{\Gamma_2(q-\tau\lambda_2\,|\,\tau)}{\Gamma_2(-\tau\lambda_2\,|\,\tau)}
\frac{\Gamma_2(q-1-\tau(1+\lambda_1+\lambda_2)\,|\,\tau)}{\Gamma_2(2q-1-\tau(1+\lambda_1+\lambda_2)\,|\,\tau)}.\label{complexS2}
\end{align}
\end{theorem}
\begin{corollary}\label{MComplextransforminvol}
The function
$\mathfrak{CM}(q\,|\,\tau,\lambda_1,\lambda_2)$ satisfies the following involution invariance under
\begin{equation}\label{involutiondef}
\tau\rightarrow \frac{1}{\tau},\; q\rightarrow \frac{q}{\tau}, \; \lambda_i\rightarrow \tau\lambda_i.
\end{equation}
\begin{align}
\mathfrak{CM}\bigl(\frac{q}{\tau}\,|\,\frac{1}{\tau},\tau\lambda_1,\tau\lambda_2\bigr) \,
\Bigl(\frac{\Gamma(1-\tau)}{\pi\,\Gamma(\tau)}\Bigr)^{\frac{q}{\tau}} \Gamma(1-\frac{q}{\tau}) = &
\mathfrak{CM}(q\,|\,\tau,\lambda_1,\lambda_2) \Bigl(\frac{\Gamma(1-\frac{1}{\tau})}{\pi\,\Gamma(1/\tau)}\Bigr)^{q}
\Gamma(1-q).\label{involutionintcomplex}
\end{align}
\end{corollary}
\begin{corollary}
In the critical case $\tau \downarrow 1,$ the function $\mathfrak{CM}(q\,|\, \tau,\lambda_1,\lambda_2)$ has the limit,
\begin{align}
\lim\limits_{\tau\downarrow 1} \Bigl[\Gamma(1-1/\tau)^q\,\mathfrak{CM}(q\,|\,\tau,\lambda_1,\lambda_2)\Bigr] = &
\pi^q\, \Gamma\bigl(1-q\bigr)\,
\frac{G(2+\lambda_1)}{G(2-q+\lambda_1)}
\frac{G(2+\lambda_2)}{G(2-q+\lambda_2)}\times
\nonumber \\ & \times
\frac{G(2)}{G(2-q)}
\frac{G(4-2q+\lambda_1+\lambda_2)}{G(4-q+\lambda_1+\lambda_2)}
\frac{G(2)}{G(q+2)}
\times
\nonumber \\ & \times
\frac{G(-\lambda_1)}{G(q-\lambda_1)}
\frac{G(-\lambda_2)}{G(q-\lambda_2)}
\frac{G(2q-2-(\lambda_1+\lambda_2))}{G(q-2-(\lambda_1+\lambda_2))}.\label{critcomplexS}
\end{align}
\end{corollary}
\begin{remark}
While the analytic continuation of the complex Selberg integral is not the Mellin transform of a random variable,
it is nonetheless possible to decompose it into the product of the Mellin transform of a random variable and an extra factor.
Let $M_{(a, x)}=L\,X_1\,X_2\,X_3$ denote the random variable that is described in Theorem \ref{general} and $Y$ be the Fr\'echet factor
as in Eq. \eqref{Ydist}. Let
\begin{align}
M_1 = & M_{(a, x)} \; \text{with}\; a=(1,\tau), \; x=\bigl(1+\tau(1+\lambda_1), \, 1+\tau(1+\lambda_2)\bigr), \\
M_2 = & M_{(a, x)} \; \text{with}\; a=(1,\tau), \; x=(-\tau\lambda_1, \,-\tau\lambda_2).
\end{align}
Then, up to a constant $C,$ Eq. \eqref{complexS2}
is equivalent to the identity
\begin{align}
\mathfrak{CM}(q\,|\,\tau,\lambda_1,\lambda_2) = & e^{Cq} {\bf E}[Y^q]\,
{\bf E}[M_1^q]\,{\bf E}[M_2^{-q}]\,\frac{\Gamma_2(q-1-\tau(1+\lambda_1+\lambda_2)\,|\,\tau)}{\Gamma_2(2q-1-\tau(1+\lambda_1+\lambda_2)\,|\,\tau)}\,\frac{\Gamma_2(2q-\tau(\lambda_1+\lambda_2)\,|\,\tau)}{\Gamma_2(q-\tau(\lambda_1+\lambda_2)\,|\,\tau)}, \nonumber \\
= & e^{Cq} {\bf E}[Y^q]\,
{\bf E}[M_1^q]\,{\bf E}[M_2^{-q}]\,\frac{\Gamma\bigl(\frac{q-1-\tau(1+\lambda_1+\lambda_2)}{\tau}\bigr)}{\Gamma\bigl(\frac{2q-1-\tau(1+\lambda_1+\lambda_2)}{\tau}\bigr)}\,\frac{\Gamma(q-\tau(1+\lambda_1+\lambda_2))}{\Gamma(2q-\tau(1+\lambda_1+\lambda_2))}.
\end{align}
Thus, the random variable factor is the product of a Selberg integral distribution $Y\,M_1,$ cf. Theorem \ref{BSM}, and an independent $M_2^{-1}.$
\end{remark}
\begin{proof}[Proof of Theorem \ref{analcomplexS}.]
It is elementary to see that the bounds in Eqs. \eqref{bound1}--\eqref{bound3} guarantee that the strip in Eq. \eqref{qcomplexstrip} is non-empty. Now, we observe that the structure of the product of sines in Eq. \eqref{ComplexSelberg} is essentially the same as the product
of gamma factors in the Selberg integral. Moreover, the functional equation of the double sine function in Eq. \eqref{feqsine} is the same
as that of the double gamma function in Eq. \eqref{feq}, except $\Gamma_1$ is replaced with $S_1.$ It follows that
the analytic continuation in Eq. \eqref{complexS1} follows from Eq. \eqref{Srepeated} in the same way as Eq. \eqref{thefunctioninterval}
followed from Eq. \eqref{repeated}. It remains to verify that the expression in Eq. \eqref{complexS1} is well-defined under the
conditions in Eq. \eqref{qcomplexstrip}, \emph{i.e.} that the arguments of $\Gamma_2$ factors satisfy $\Re(z)>0$ and those of
$S_2$ factors satisfy $0<\Re(z)<1+\tau.$ For $\Re(z)>0$ this is true by the upper bound in Eq. \eqref{qcomplexstrip}.
To satisfy $\Re(z)<1+\tau, $ in addition to the lower bound in Eq. \eqref{qcomplexstrip} we also need $\Re(q)>\tau\lambda_i$ and $\Re(q)>-(1+\tau).$\footnote{$\Re(q)=-1$ is a removable singularity as both $S_2(\tau-q\,|\,\tau)$ and $\Gamma(1+q)$ have simple poles there, see Eq. \eqref{simplepole}.} However, these are automatically satisfied due to $\max\{\tau\lambda_i, -(1+\tau)\} < 1+\tau(1+\lambda_1
+\lambda_2)$ by Eq. \eqref{bound2}.
The proof of Eq. \eqref{complexS2} follows from the definition of the double sine function
in Eq. \eqref{msinedef} and the identity in Eq. \eqref{mydoublegammaidentity}. One obtains,
\begin{align}
\frac{1}{\Gamma(1+q)} \frac{S_2(\tau-q\,|\,\tau)}{S_2(\tau\,|\,\tau)} = & \frac{\Gamma_2(1+q+\tau\,|\,\tau)}{\Gamma_2(1-q+\tau\,|\,\tau)} \,
\frac{\tau^{q/\tau}}{\Gamma(1-q/\tau)}, \label{simplepole} \\
\tau^{q/\tau} \frac{\Gamma_2(\tau-q\,|\,\tau)}{\Gamma_2(\tau\,|\,\tau)} =& \Gamma(1-\frac{q}{\tau})\,\frac{\Gamma_2(1-q+\tau\,|\,\tau)}{\Gamma_2(1+\tau\,|\,\tau)}.
\end{align}
Finally, we need the identity
\begin{equation}
\Gamma(1-\frac{1}{\tau})\sin\pi/\tau = \frac{\pi}{\Gamma(1/\tau)}.
\end{equation}
The result now follows by substituting these identities into Eq. \eqref{complexS1} and using the definition of the double sine function. \qed
\end{proof}
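The last identity in the proof above is Euler's reflection formula $\Gamma(z)\Gamma(1-z)=\pi/\sin\pi z$ evaluated at $z=1/\tau$; a one-line numerical check:

```python
import math

# Euler's reflection formula at z = 1/tau:
# Gamma(1 - 1/tau) * sin(pi/tau) = pi / Gamma(1/tau)
for tau in (1.5, 2.0, 3.0, 7.5):
    z = 1.0 / tau
    lhs = math.gamma(1.0 - z) * math.sin(math.pi * z)
    rhs = math.pi / math.gamma(z)
    assert abs(lhs - rhs) < 1e-12 * abs(rhs)
```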
\begin{proof}[Proof of Corollary \ref{MComplextransforminvol}.]
The proof is very similar to that of Theorem \ref{CicInvolution}. One starts with Eq. \eqref{complexS2} and observes that
all the double gamma
terms are invariant under the involution in Eq. \eqref{involutiondef} due to
the scaling property of the double gamma function in Eq. \eqref{scale}.
In our case $\kappa=1/\tau.$ It remains to collect the power of $\kappa$ that comes from the pre-factor
in Eq. \eqref{scale}. A direct calculation shows that it is
\begin{equation}
\bigl(\frac{1}{\tau}\bigr)^{-(q+\frac{q}{\tau})}.
\end{equation}
The result follows. \qed
\end{proof}
\section{Applications}\label{SomeApplications}
\noindent In this section we will consider three conjectured applications of our results.
\subsection{Maximum Distribution of the Gaussian Free Field on the Interval and Circle}
In this section we will formulate precise conjectures about the distribution of the maximum of the discrete 2D Gaussian
Free Field (GFF) with a non-random logarithmic potential restricted to the unit interval and circle. We will not attempt to review the GFF here but rather refer the reader to \cite{FyoBou} for the circle case and to \cite{FLDR} and Section 3 of \cite{Menon} for the interval case. Suffice it to say that our approach to the
GFF construction is essentially based on the construction of Bacry-Muzy, cf. \cite{MRW}, \cite{BM1}, \cite{BM}. Our results in this section first appeared in \cite{Me16}.
Let
\begin{equation}
N=1/\varepsilon.
\end{equation}
Let the gaussian field $V_\varepsilon(x)$ be as in Eq. \eqref{covk}.
The limit $\lim\limits_{\varepsilon\rightarrow 0} V_{\varepsilon}(x)$ is
called the continuous GFF on the interval $x\in[0, \,1]$ and its discretized version $V_\varepsilon(x_i),$
$x_i=i\varepsilon,$ $i=0\cdots N,$ is the discrete GFF on the interval.
We note that in applications, see \cite{Menon} and subsection \ref{modG} below, for example, the GFF construction arises in a slightly more general form of
\begin{align}
{\bf{Cov}}\left[V_{\varepsilon}(u), \,V_{\varepsilon}
(v)\right] = &
\begin{cases}\label{covkk}
-
2 \, \log|u-v|, \, \varepsilon\ll|u-v|\leq 1, \\
2
\left(\kappa-\log\varepsilon\right),\, u=v,
\end{cases} \nonumber \\
& + O(\varepsilon),
\end{align}
where $\kappa\geq0$ is some fixed constant and the details of regularization are relegated to the $O(\varepsilon)$ term.
It is worth emphasizing that the choice of covariance regularization for $|u-v|\leq \varepsilon$ has no effect on the
distribution of the maximum, so long as the variance behaves as in Eq. \eqref{covk}, due to Theorem 6 in \cite{BM1}, as explained below.
The same remark applies to the GFF on the circle.
Let the gaussian field $V_\varepsilon(\psi)$ be as in Eq. \eqref{covkc} or more generally satisfy
\begin{align}
{\bf{Cov}}\left[V_{\varepsilon}(\psi), \,V_{\varepsilon}
(\xi)\right] = &
\begin{cases}\label{covkcir}
-
2 \, \log|e^{2\pi i\psi}-e^{2\pi i\xi}|, \, |\xi-\psi|\gg \varepsilon, \\
2
\left(\kappa-\log\varepsilon\right), \psi=\xi,
\end{cases} \nonumber \\
& + O(\varepsilon),
\end{align}
where $\kappa\geq 0$ is some fixed constant. The limit $\lim\limits_{\varepsilon\rightarrow 0} V_{\varepsilon}(\psi)$ is
called the GFF on the circle $\psi\in[-\frac{1}{2}, \frac{1}{2})$ and its discretized version $V_\varepsilon(\psi_j),$
$\psi_j=j\varepsilon,$ $j=-N/2\cdots N/2,$ is the discrete GFF on the circle.
The existence of such objects follows from the general theory of \cite{RajRos} as shown in \cite{BM1} and \cite{BM}.
\begin{definition}[Problem formulation for the interval]
Let $\lambda_1,\,\lambda_2\geq 0.$
What is the distribution of
\begin{equation}
V_N\triangleq\max\Big\{V_{\varepsilon}(x_i)+\lambda_1\,\log(x_i)+\lambda_2\,\log(1-x_i),\,i=1\cdots N\Big\}
\end{equation}
in the form of an asymptotic expansion in $N$ in the limit $N\rightarrow \infty?$
\end{definition}
\begin{definition}[Problem formulation for the circle]
Let $\lambda\geq 0.$
What is the distribution of
\begin{equation}
V_N\triangleq\max\Big\{V_{\varepsilon}(\psi_j)+2\lambda\log|1+e^{2\pi i\psi_j}|,\,j=-N/2\cdots N/2\Big\}
\end{equation}
in the form of an asymptotic expansion in $N$ in the limit $N\rightarrow \infty?$
\end{definition}
We will consider the case of the GFF on the interval first. Recall the critical Selberg integral probability
distribution $M_{(\tau=1,\lambda_1,\lambda_2)}$ that we defined in Section \ref{DerivM}.
\begin{conjecture}[Maximum of the GFF on the Interval]\label{maxint}
The leading asymptotic term in the expansion of the Laplace transform of $V_N$ in $N$ is
\begin{equation}
{\bf E}[e^{q\,V_N}] \approx e^{q(2\log N-(3/2)\log\log N+{\rm const})}\,{\bf E}\bigl[M^q_{(\tau=1,\lambda_1,\lambda_2)}
\bigr],\;N\rightarrow\infty.
\end{equation}
Probabilistically, let $X_i,$ $i=1,2,3$ and $Y$ be as in Theorem \ref{criticalSelberg}.
Then, as $N\rightarrow \infty,$
\begin{align}
V_N = & 2\log N-\frac{3}{2}\log\log N+{\rm const}+\mathcal{N}(0,\,4\log 2) + \log X_1+\log X_2+\log X_3+\nonumber \\ & +
\log Y+\log Y'+o(1),
\end{align}
where $Y'$ is an independent copy of $Y.$
\end{conjecture}
This conjecture at the level of the Mellin transform is due to \cite{FLDR} in the case of $\lambda_1=\lambda_2=0$ and \cite{FLD} for
general $\lambda_1$ and $\lambda_2.$ Our probabilistic re-formulation of their conjecture was first given in \cite{Me16}.
Similarly, to formulate our conjecture for the maximum of the GFF on the circle, we need to recall the critical Morris integral
probability distribution $M_{(\tau=1,\lambda,\lambda)}$ that we considered in Theorem \ref{criticalMorris}.
\begin{conjecture}[Maximum of the GFF on the Circle]\label{maxintcircle}
The leading asymptotic term in the expansion of the Laplace transform of $V_N$ in $N$ is
\begin{equation}
{\bf E}[e^{q\,V_N}] \approx e^{q(2\log N-(3/2)\log\log N+{\rm const})}\,{\bf E}\bigl[M^q_{(\tau=1,\lambda,\lambda)}
\bigr],\;N\rightarrow\infty.
\end{equation}
Probabilistically, let $X \triangleq \beta_{2,2}^{-1}\bigl(\tau=1, b_0=1,\,b_1=1+\lambda,\,b_2=1+\lambda\bigr)$ and
$Y \triangleq \beta_{1,0}^{-1}(\tau=1, b_0=2\lambda+2)$ be as in Theorem \ref{criticalMorris}
and $Y' \triangleq \beta_{1,0}^{-1}\bigl(\tau=1, b_0=1\bigr).$
Then,
\begin{equation}
V_N = 2\log N-\frac{3}{2}\log\log N+{\rm const}+\log X+ \log Y+\log Y'+o(1).
\end{equation}
If $\lambda=0,$
\begin{equation}
V_N = 2\log N-\frac{3}{2}\log\log N+{\rm const}+ \log Y+\log Y'+o(1),
\end{equation}
where
\begin{equation}
Y\overset{{\rm in \,law}}{=}Y'=\beta_{1,0}^{-1}\bigl(\tau=1, b_0=1\bigr).
\end{equation}
\end{conjecture}
This conjecture is due to \cite{FyoBou} in the case of $\lambda=0.$ The extension to general $\lambda$ was first given in \cite{Me16}.
In the rest of this section we will give a heuristic derivation of our conjectures.
Our approach is based on the freezing hypothesis and calculations of \cite{FyoBou} and \cite{FLDR} as well as our Conjectures
\ref{ourmainconjcircle} and \ref{ourmainconjinterval} and the involution invariance of the Morris and Selberg integral distributions.
Let $0\leq \beta<1$ and $\tau=1/\beta^2>1.$
Following \cite{FLDR}, define the exponential functional
\begin{equation}\label{Zdef}
Z_{\lambda_1,\lambda_2,\varepsilon}(\beta) \triangleq \sum\limits_{i=1}^N x_i^{\beta\lambda_1}(1-x_i)^{\beta\lambda_2} e^{\beta V_\varepsilon(x_i)}.
\end{equation}
Using the identity
\begin{equation}\label{keymax}
{\bf P} \bigl(V_N < s \bigr) = \lim\limits_{\beta\rightarrow \infty} {\bf E}\Bigl[
\exp\Bigl(-e^{-\beta s}\,Z_{\lambda_1,\lambda_2,\varepsilon}(\beta)/C\Bigr)\Bigr],
\end{equation}
which is applicable to any sequence of random variables and an arbitrary \emph{$\beta$-independent} constant $C,$
the distribution of the maximum is reduced to the Laplace transform of the exponential functional in the limit $\beta\rightarrow \infty.$
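The mechanism behind Eq. \eqref{keymax} — the double-exponential factor converging to the indicator of the event $\{\max < s\}$ — can be checked numerically. The following Python sketch (an illustration added here, not part of the argument; the helper name, the sample values, and the choice $C=1$ are all arbitrary) evaluates the functional in log-space for stability, for a fixed finite sample in place of the random sequence.

```python
import math

def laplace_functional(xs, s, beta):
    """exp(-e^{-beta*s} * sum_i e^{beta*x_i}), with C = 1,
    computed via log-sum-exp for numerical stability."""
    m = max(xs)
    log_z = beta * m + math.log(sum(math.exp(beta * (x - m)) for x in xs))
    exponent = log_z - beta * s            # log of e^{-beta*s} * Z
    if exponent > 700:                     # exp would overflow; the limit is exactly 0
        return 0.0
    return math.exp(-math.exp(exponent))

xs = [0.3, 0.7, 1.1, 0.9]                  # max is 1.1
for beta in (1.0, 10.0, 100.0):
    # s above the max -> tends to 1; s below the max -> tends to 0
    print(beta, laplace_functional(xs, 1.5, beta), laplace_functional(xs, 0.8, beta))
```

As $\beta$ grows, the functional sharpens into the indicator $1\{\max x_i < s\}$, which is exactly what reduces the law of the maximum to the Laplace transform of the exponential functional.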
Now, by Eq. \eqref{chaosinterval}, it is known that for $0<\beta<1$ the exponential functional converges\footnote{
Theorem 6 in \cite{BM1} shows that the laws of the total mass of the continuous and discrete multiplicative chaos measures are the same
provided the $\varepsilon$ parameter coincides with the discretization step.} as
$N\rightarrow \infty$ to the total mass of the Bacry-Muzy measure on the unit interval with a logarithmic potential,
which is conjectured to be given by the Selberg integral distribution, resulting
in the approximation\footnote{The validity of approximating the finite $N$ quantity with the $N\rightarrow\infty$
limit is discussed in \cite{FyoBou}.}
\begin{equation}\label{keyapprox}
Z_{\lambda_1,\lambda_2,\varepsilon}(\beta) \approx N^{1+\beta^2}\, e^{\beta^2\kappa}\,M_{(\tau,\beta\lambda_1,\beta\lambda_2)},\;N\rightarrow\infty.
\end{equation}
Next, we recall the involution invariance of the Mellin transform of the Selberg integral distribution, see Eq. \eqref{involutionint}.
Denoting the Mellin transform as in Theorem \ref{BSM} by $\mathfrak{M}(q\,|\,\tau,\lambda_1,\lambda_2)$
and introducing the function
\begin{equation}\label{Func}
F(q\,|\,\beta, \lambda_1,\lambda_2) \triangleq \mathfrak{M}\bigl(\frac{q}{\beta}\,|\,\frac{1}{\beta^2},\beta\lambda_1,\beta\lambda_2\bigr) (2\pi)^{-\frac{q}{\beta}}\,\Gamma^{\frac{q}{\beta}}(1-\beta^2) \Gamma(1-\frac{q}{\beta}),
\end{equation}
one observes that by the involution invariance in Eq. \eqref{involutionint} this function satisfies the identity
\begin{equation}\label{selfdual}
F(q\,|\,\beta, \lambda_1,\lambda_2) = F(q\,\big|\,\frac{1}{\beta}, \lambda_1,\lambda_2),
\end{equation}
first discovered in \cite{FLDR} in the special case of $\lambda_1=\lambda_2=0,$ then formulated in general in the form of Eq. \eqref{involutionint} in
\cite{Me14}, and later in this form in \cite{FLD}.
On the other hand, as shown in \cite{FLDR}, one has the general identity
\begin{equation}\label{generalidentity}
\int_\mathbb{R} e^{yq} \frac{d}{dy}\exp\Bigl(-e^{-\beta y}X\Bigr) dy = X^{\frac{q}{\beta}} \,\Gamma(1-\frac{q}{\beta}),\;\Re(q)<0, \,X>0.
\end{equation}
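Eq. \eqref{generalidentity} follows from the substitution $t=Xe^{-\beta y},$ which reduces the left-hand side to $X^{q/\beta}\int_0^\infty t^{-q/\beta}e^{-t}\,dt = X^{q/\beta}\,\Gamma(1-q/\beta).$ As an independent check (a numerical illustration, not part of the argument; the function name and the sample values $q=-0.7,$ $\beta=1,$ $X=2$ are arbitrary), the following Python sketch compares a direct trapezoid quadrature of the $y$-integral with the closed form.

```python
import math

def lhs_quadrature(q, beta, X, y_min=-20.0, y_max=40.0, n=24000):
    """Trapezoid rule for the left-hand side, using
    d/dy exp(-e^{-beta*y} X) = beta * X * e^{-beta*y} * exp(-X e^{-beta*y})."""
    h = (y_max - y_min) / n
    total = 0.0
    for i in range(n + 1):
        y = y_min + i * h
        t = X * math.exp(-beta * y)
        g = math.exp(q * y) * beta * t * math.exp(-t)
        total += g if 0 < i < n else g / 2.0
    return total * h

q, beta, X = -0.7, 1.0, 2.0
numeric = lhs_quadrature(q, beta, X)
exact = X ** (q / beta) * math.gamma(1.0 - q / beta)
print(numeric, exact)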
Letting
\begin{equation}\label{X}
X = Z_{\lambda_1,\lambda_2,\varepsilon}(\beta) \frac{e^{\kappa}\,\Gamma(1-\beta^2)}{2\pi}
\end{equation}
one sees by means of Eq. \eqref{keyapprox} that for $0<\beta<1,$
\begin{equation}
{\bf E}\bigl[X^{\frac{q}{\beta}}\bigr]\approx e^{q\kappa(\beta+\frac{1}{\beta})} N^{q(\beta+\frac{1}{\beta})}\,\mathfrak{M}\bigl(\frac{q}{\beta}\,|\,\frac{1}{\beta^2},\beta\lambda_1,\beta\lambda_2\bigr) (2\pi)^{-\frac{q}{\beta}}\,\Gamma^{\frac{q}{\beta}}(1-\beta^2),\; N\rightarrow\infty,
\end{equation}
so that
\begin{equation}\label{rhs}
{\bf E}\bigl[X^{\frac{q}{\beta}}\bigr]\, \Gamma(1-\frac{q}{\beta}) \approx e^{q\kappa(\beta+\frac{1}{\beta})} N^{q(\beta+\frac{1}{\beta})}\,F(q\,|\,\beta, \lambda_1,\lambda_2),\;
N\rightarrow\infty.
\end{equation}
On the other hand, by choosing the constant $C$ in Eq. \eqref{keymax} to be
\begin{equation}
C = \frac{2\pi}{e^{\kappa}\Gamma(1-\beta^2)},
\end{equation}
and combining Eq. \eqref{keymax} with Eq. \eqref{generalidentity}, one obtains
\begin{equation}
{\bf E}[e^{q V_N}] = \lim_{\beta\rightarrow \infty} \Bigl[{\bf E}\bigl[X^{\frac{q}{\beta}}\bigr] \Gamma(1-\frac{q}{\beta})\Bigr].
\end{equation}
The right-hand side of this equation has only been determined for $0<\beta<1,$ see Eq. \eqref{rhs}.
Due to the self-duality of the right-hand side, one assumes that it gets frozen at $\beta=1,$ as first formulated in \cite{FLDR}.
\begin{conjecture}[The Freezing Hypothesis]
Let $\beta>1.$
\begin{equation}
{\bf E}\bigl[X^{\frac{q}{\beta}}\bigr] \Gamma(1-\frac{q}{\beta}) =
{\bf E}\bigl[X^{\frac{q}{\beta}}\bigr] \Gamma(1-\frac{q}{\beta})\Big|_{\beta=1}.
\end{equation}
\end{conjecture}
One must note, however, that $C$ is not \emph{$\beta$-independent}. It is argued in \cite{YO} and \cite{FLDR2}
that the $\Gamma(1-\beta^2)$ term
shifts the maximum by $-(3/2) \log \log N,$ see also \cite{Ding}.
Overall, we then obtain by Eq. \eqref{rhs},
\begin{equation}
{\bf E}[e^{q\,V_N}] \approx e^{q(2\log N-(3/2)\log\log N+\,{\rm const})}\,F(q\,|\,\beta=1, \lambda_1,\lambda_2),\;
N\rightarrow\infty
\end{equation}
for some constant.\footnote{As remarked in \cite{FyodSimm}, this procedure only determines the distribution of the maximum up
to a constant term.}
Finally, recalling definitions of the critical Selberg integral distribution in Eq. \eqref{Selbergcrit} and of $F(q\,|\,\beta, \lambda_1,\lambda_2)$
in Eq. \eqref{Func}, and appropriately adjusting the constant,
\begin{equation}
{\bf E}[e^{q\,V_N}] \approx e^{q(2\log N-(3/2)\log\log N+\,{\rm const})}\,{\bf E}\Bigl[M^q_{(\tau=1,\lambda_1,\lambda_2)}\Bigr]\Gamma(1-q),
\end{equation}
so that $Y'$ comes from the $\Gamma(1-q)$ factor and has the same law as $Y.$
The argument for the GFF on the circle goes through verbatim, so we will only point out the key steps and omit redundant details.
Define the exponential functional
\begin{equation}\label{Zdefc}
Z_{\lambda,\varepsilon}(\beta) = \sum\limits_{j=-N/2}^{N/2} |1+e^{2\pi i\psi_j}|^{2\lambda\,\beta} e^{\beta V_\varepsilon(\psi_j)}.
\end{equation}
To describe its limit as $N\rightarrow \infty$ we need to compute the distribution of the total mass of the Bacry-Muzy measure on the circle
with a logarithmic potential that was defined in Eq. \eqref{chaoscircle}. Assuming Conjecture \ref{ourmainconjcircle},
we have
\begin{equation}\label{keyapproxc}
Z_{\lambda,\varepsilon}(\beta) \approx N^{1+\beta^2}\, e^{\beta^2\kappa}\,M_{(\tau,\beta\lambda,\beta\lambda)},\;N\rightarrow\infty.
\end{equation}
The rest of the argument is the same as for the GFF on the interval, with Eq. \eqref{invcircle} replacing Eq. \eqref{involutionint}.
\subsection{Inverse Participation Ratios of the Fyodorov-Bouchaud Model}\label{IPRsection}
In this section we will compute inverse participation ratios (IPR) of the Fyodorov-Bouchaud model based on
Conjecture \ref{ourmainconjcircle}.
Let the gaussian field $V_\varepsilon(\psi)$ be as in Eq. \eqref{covkc}, $\beta\in(0,1)$ denote the inverse temperature, and
$\tau=1/\beta^2.$
Consider the associated partition function corresponding to the field with a non-random logarithmic
potential,
\begin{equation}\label{Zdefc2}
Z_{\lambda,\varepsilon}(\beta) = \sum\limits_{j=-N/2}^{N/2} |1+e^{2\pi i\psi_j}|^{2\lambda} e^{-\beta V_\varepsilon(\psi_j)}.
\end{equation}
When $\lambda=0,$ we will simply write
\begin{equation}\label{Zdef2}
Z_{\varepsilon}(\beta) = \sum\limits_{j=-N/2}^{N/2} e^{-\beta V_\varepsilon(\psi_j)}.
\end{equation}
The problem of computing the IPR of the Fyodorov-Bouchaud model
is that of computing
\begin{equation}
{\bf E}\Bigl[\frac{Z_{\varepsilon}(n\beta)}{Z^n_{\varepsilon}(\beta)}\Bigr],\;N\rightarrow\infty,
\end{equation}
for positive integer $n,$ see \cite{Fyo09}.
We will consider here a more general problem of computing the analytic
continuation in the form
\begin{equation}
{\bf E} \Bigl[Z_{\varepsilon}(q\beta)
Z^s_{\varepsilon}(\beta)\Bigr],\;N\rightarrow\infty
\end{equation}
for real $q$ and generally complex $s.$
Our results are as follows. Let $\mathfrak{M}(q\,|\,\tau,\lambda)$ denote the Mellin transform of the Morris integral probability distribution
with $\lambda_1=\lambda_2=\lambda,$ see Eq. \eqref{thefunctioncircle}. Then,
\begin{equation}
{\bf E} \Bigl[Z_{\varepsilon}(q\beta)
Z^s_{\varepsilon}(\beta)\Bigr] \approx N^{1+q^2\beta^2+(1+\beta^2)s}\,
\mathfrak{M}(s\,|\,\tau, \lambda=-q\beta^2), \, N\rightarrow\infty.
\end{equation}
Explicitly,
\begin{align}\label{qsfunction}
{\bf E} \Bigl[Z_{\varepsilon}(q\beta)
Z^s_{\varepsilon}(\beta)\Bigr] \approx N^{1+q^2\beta^2+(1+\beta^2)s}
&\frac{\tau^{\frac{s}{\tau}}}{\Gamma^s\bigl(1-\beta^2\bigr)}
\frac{\Gamma_2((-2q\beta^2+1)/\beta^2+1-s\,|\,\tau)}{\Gamma_2((-2q\beta^2+1)/\beta^2+1\,|\,\tau)}
\times \nonumber \\ & \times
\frac{\Gamma_2(-s+1/\beta^2\,|\,\tau)}{\Gamma_2(\tau\,|\,\tau)}
\frac{\Gamma_2((1-q\beta^2)/\beta^2+1\,|\,\tau)^2}{\Gamma_2((1-q\beta^2)/\beta^2+1-s\,|\,\tau)^2}.
\end{align}
In particular, when $s=-n$ and $n\in\mathbb{N},$ we can use Eq. \eqref{CirNegMoments} to simplify this expression to
\begin{equation}\label{qn}
{\bf E} \Bigl[Z_{\varepsilon}(q\beta)
Z^{-n}_{\varepsilon}(\beta)\Bigr] \approx N^{1+q^2\beta^2-(1+\beta^2)n}
\prod\limits_{j=0}^{n-1} \frac{\Gamma(1-q\beta^2+(j+1)\beta^2)^2 \,\Gamma(1-\beta^2)}{\Gamma(1-2q\beta^2+(j+1)\beta^2)\,\Gamma(1+j\beta^2)}.
\end{equation}
Finally, when $q=n,$ we obtain the expression for the IPR,
\begin{equation}\label{IPRformula}
{\bf E}\Bigl[\frac{Z_{\varepsilon}(n\beta)}{Z^n_{\varepsilon}(\beta)}\Bigr]
\approx
N^{1+n^2\beta^2-(1+\beta^2)n}
\prod\limits_{j=0}^{n-1} \frac{\Gamma(1-n\beta^2+(j+1)\beta^2)^2 \,\Gamma(1-\beta^2)}{\Gamma(1-2n\beta^2+(j+1)\beta^2)\,\Gamma(1+j\beta^2)}.
\end{equation}
For example, when $n=2,$ we recover the result of \cite{Fyo09},
\begin{equation}
{\bf E}\Bigl[\frac{Z_{\varepsilon}(2\beta)}{Z^2_{\varepsilon}(\beta)}\Bigr]
\approx N^{2\beta^2-1}\,
\frac{\Gamma(1-\beta^2)^4}{\Gamma(1-3\beta^2)
\Gamma(1-2\beta^2) \, \Gamma(1+\beta^2)}.
\end{equation}
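Indeed, for $n=2$ the product in Eq. \eqref{IPRformula} collapses as stated: the $j=0$ factor equals $\Gamma(1-\beta^2)^3/\Gamma(1-3\beta^2)$ and the $j=1$ factor equals $\Gamma(1-\beta^2)/\bigl(\Gamma(1-2\beta^2)\Gamma(1+\beta^2)\bigr).$ The following Python sketch (an added numerical illustration; the function name and the value $\beta=0.2$ are arbitrary, chosen inside the allowed range $n<(1+\beta^2)/2\beta^2$) verifies the reduction.

```python
import math

def ipr_gamma_product(n, b2):
    """Gamma-product in the conjectured IPR formula; b2 stands for beta^2."""
    p = 1.0
    for j in range(n):
        p *= (math.gamma(1 - n * b2 + (j + 1) * b2) ** 2 * math.gamma(1 - b2)
              / (math.gamma(1 - 2 * n * b2 + (j + 1) * b2) * math.gamma(1 + j * b2)))
    return p

b2 = 0.2 ** 2                       # beta = 0.2
general = ipr_gamma_product(2, b2)  # n = 2 case of the product formula
explicit = (math.gamma(1 - b2) ** 4
            / (math.gamma(1 - 3 * b2) * math.gamma(1 - 2 * b2) * math.gamma(1 + b2)))
print(general, explicit)
```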
The conditions of validity of our approximation are as follows. It is clear from Eq. \eqref{qsfunction} that
the condition on $q$ is
\begin{equation}
1-2q\beta^2+\beta^2 >0.
\end{equation}
Thus, the range of allowed values of $q$ is
\begin{equation}
q< \frac{1+\beta^2}{2\beta^2}.
\end{equation}
The range of $s$ is determined by
\begin{equation}
\Re(s) < \min\Bigl(\frac{1}{\beta^2}, \frac{1}{\beta^2} + 1 - 2q, \frac{1}{\beta^2} + 1 - q\Bigr).
\end{equation}
In particular, for $q\in \mathbb{N},$
\begin{equation}\label{rescondition}
\Re(s) < \frac{1}{\beta^2} + 1 - 2q.
\end{equation}
For example, the expression in Eq. \eqref{qsfunction} holds for all $q<0$ and $\Re(s)<1/\beta^2$ and
the expression in Eq. \eqref{qn} holds for all $q<0$ and $n\in \mathbb{N}.$ In the specific case of IPR,
$s=-n,$ $q=n,$ so that Eq. \eqref{IPRformula} holds for the range
\begin{equation}
n < \frac{1+\beta^2}{2\beta^2}
\end{equation}
as the condition in Eq. \eqref{rescondition} is automatically satisfied.
We now proceed to explain our calculations. Following \cite{FyoBou}, we observe
\begin{equation}\label{keyapproxc2}
Z_{\lambda,\varepsilon}(\beta) \approx N^{1+\beta^2}\,\int_{-\frac{1}{2}}^{\frac{1}{2}} |1+e^{2\pi i s}|^{2\lambda} \,M_\beta(ds),\;
N\rightarrow\infty,
\end{equation}
where $M_\beta(ds)$ denotes the Bacry-Muzy GMC on the circle. On the other hand,
we can use the following elementary application of the Girsanov theorem for gaussian fields.
Consider the change of measure
\begin{equation}
\frac{d\mathcal{Q}}{d\mathcal{P}} = e^{q^2\beta^2\log\varepsilon} e^{-q\beta V_\varepsilon(\phi)},
\end{equation}
where $\phi$ is some fixed angle. Then, we have the identity of gaussian processes in law
viewed as functions of $\psi,$
\begin{equation}
V_\varepsilon(\psi)-q\beta {\bf Cov}(V_\varepsilon(\psi), V_\varepsilon(\phi))\big|_\mathcal{P} =
V_\varepsilon(\psi)\big|_\mathcal{Q},
\end{equation}
which is verified by a straightforward calculation of their characteristic functions.
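The content of this Girsanov identity — reweighting by the exponential of a gaussian shifts the mean by the tilt parameter times the covariance — can be illustrated on a single pair of jointly gaussian variables. The Python sketch below is a Monte Carlo illustration only (the parameters $t=-0.8,$ $\rho=0.6,$ the seed, and the sample size are arbitrary choices, not part of the derivation): after reweighting by $e^{tB},$ the mean of $A$ becomes $t\,{\bf Cov}(A,B).$

```python
import math, random

random.seed(7)
t, rho = -0.8, 0.6        # tilt parameter and Cov(A, B)
n = 200_000
num = den = 0.0
for _ in range(n):
    z1, z2 = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    a = z1                                           # Var(A) = 1
    b = rho * z1 + math.sqrt(1.0 - rho * rho) * z2   # Var(B) = 1, Cov(A, B) = rho
    w = math.exp(t * b)                              # unnormalized Radon-Nikodym weight
    num += a * w
    den += w
tilted_mean = num / den
print(tilted_mean, t * rho)          # tilted mean vs. t * Cov(A, B)
```

In the text this is applied with $t=-q\beta$ and $B=V_\varepsilon(1/2),$ producing the logarithmic potential $2q\beta\log|e^{2\pi i\psi}+1|.$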
We now choose $\phi=1/2,$ and write by the rotational invariance of the field,\footnote{The idea
of using rotational invariance in this context is due to \cite{Fyo09}.}
\begin{align}
{\bf E} \Bigl[Z_{\varepsilon}(q\beta)
Z^s_{\varepsilon}(\beta)\Bigr] = & N {\bf E} \Bigl[e^{-q\beta V_\varepsilon(1/2)}\,
Z^s_{\varepsilon}(\beta)\Bigr], \nonumber \\
= & N e^{-q^2\beta^2\log\varepsilon} {\bf E} \Bigl[\Bigl( \sum_j e^{-\beta(V_\varepsilon(\psi_j)
+2q\beta \log |e^{2\pi i\psi_j}+1|)}\Bigr)^s\Bigr], \nonumber \\
= & N^{1+q^2\beta^2} {\bf E} \Bigl[Z^s_{\lambda=-q\beta^2, \varepsilon}(\beta)\Bigr],
\end{align}
and the result follows from Conjecture \ref{ourmainconjcircle}.
\subsection{Mod-Gaussian Limit Theorems}\label{modG}
\noindent
The idea of reformulating the law of total mass of the Bacry-Muzy measure on the interval as a mod-Gaussian limit
was first introduced in \cite{Menon}. It is based on the observation that a linear statistic
that converges to the $\mathcal{H}^{1/2}$ gaussian noise, when applied to smoothed indicator functions, converges to the \emph{centered} GFF on the interval, \emph{i.e.}
the process $V_\varepsilon(x)-V_\varepsilon(0),$ where $V_\varepsilon(x)$ is as in Eq. \eqref{covkk}, also known
as Fractional Brownian motion with $H=0,$ cf. \cite{FKS}. There are many known examples of linear statistics
that converge to the $\mathcal{H}^{1/2}$ gaussian noise: counting statistics of Riemann zeroes \cite{BK}, \cite{Rodg},
the CLT of Soshnikov for the CUE ensemble \cite{Sosh} and the recent work on log-absolute
value of the characteristic polynomial of suitably scaled GUE matrices \cite{FKS}, see \cite{joint} for more examples.
Hence, the results that are conjectured below are expected to be highly universal, \emph{i.e.} independent of the origin
of the linear statistic. For concreteness, we will assume in this section that the statistic comes from the Riemann zeroes following \cite{Menon}.
Consider the class of $\mathcal{H}^{1/2}$ test functions that was considered in \cite{BK}, \emph{i.e.} functions having finite norm with respect to the scalar product
\begin{align}
\langle f,\,g\rangle \triangleq & \Re\int |w| \hat{f}(w)\overline{\hat{g}(w)}\,dw, \label{scalarf} \\
= & -\frac{1}{2\pi^2} \int f'(x) g'(y)\log|x-y|dx\,dy \label{scalar}
\end{align}
plus some mild conditions on the growth of $f(x)$ and its Fourier transform
$\hat{f}(w) \triangleq 1/2\pi \int f(x) e^{-iwx} \, dx$
at infinity.
Assuming the Riemann hypothesis, we write non-trivial zeroes of the Riemann zeta function in the form $\{1/2+i\gamma\},$
$\gamma\in\mathbb{R}.$ Let $\lambda(t)$ be a function of $t>0$ that satisfies
the asymptotic condition
\begin{equation}\label{lt}
1\ll \lambda(t) \ll \log t
\end{equation}
in the limit $t\rightarrow \infty,$ where the number theoretic notation $a(t)\ll b(t)$ means $a(t)=o\bigl(b(t)\bigr).$
Let $\omega$ denote a uniform random variable over $(1, 2),$ $\gamma(t) \triangleq \lambda(t) (\gamma-\omega t),$
and define the statistic
\begin{equation}\label{St}
S_t(f) \triangleq \sum\limits_{\gamma} f\bigl(\gamma(t)\bigr) -\frac{\log t}{2\pi \lambda(t)} \int f(u) du
\end{equation}
given a test function $f(x)$ in the $\mathcal{H}^{1/2}$ class. We note that $S_t(f)$ is centered
in the limit $t\rightarrow \infty$ as it is well known that the number of Riemann zeroes in the interval $[t,\,2t]$ is asymptotic to $t\log t/2\pi$ in this limit. The principal result of \cite{BK} and \cite{Rodg}
and the starting point of our construction is the following theorem, which is the number theoretic equivalent of Soshnikov's CLT for CUE.
\begin{theorem}[Convergence to a gaussian vector]\label{strong}
Given test functions $f_1,\cdots, f_k$ in $\mathcal{H}^{1/2},$
the random vector $\bigl(S_t(f_1),\cdots, S_t(f_k)\bigr)$ converges in law
in the limit $t\rightarrow\infty$ to
a centered gaussian vector $\bigl(S(f_1),\cdots, S(f_k)\bigr)$ having the covariance
\begin{equation}
{\bf Cov} \bigl(S(f_i), \,S(f_j)\bigr) = \langle f_i,\,f_j\rangle.
\end{equation}
\end{theorem}
The significance of the condition $\lambda(t)\ll \log t$ is that the number of zeroes that are visited by $f$ as $t\rightarrow \infty$ goes to infinity, \emph{i.e.} Theorem \ref{strong} is a mesoscopic central limit theorem.
We can now summarize our results. Let $0<u<1$ and $\chi_u(x)$ denote the indicator function of the interval $[0, \,u].$
Let $\phi(x)$ be a smooth bump function supported on $(-1/2, \,1/2),$
and denote
\begin{equation}\label{kappa}
\kappa\triangleq -\int
\phi(x)\phi(y)\log|x-y|\,dxdy.
\end{equation}
Define the $\varepsilon$-rescaled bump function by $\phi_\varepsilon(x)\triangleq 1/\varepsilon\phi(x/\varepsilon),$
and let $f_{\varepsilon, u}(x)$ be the smoothed indicator function of
the interval $[0, \,u]$ given by
the convolution of $\chi_u(x)$ with $\phi_\varepsilon(x),$
\begin{equation}\label{fu}
f_{\varepsilon, u}(x) \triangleq (\chi_u\star\phi_\varepsilon)(x) = \frac{1}{\varepsilon}\int \chi_u(x-y)\phi(y/\varepsilon)\,dy.
\end{equation}
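Concretely, $f_{\varepsilon, u}$ equals $1$ at distance more than $\varepsilon/2$ inside $[0,\,u]$ and $0$ at distance more than $\varepsilon/2$ outside it. The following Python sketch (an added illustration; the particular bump $\phi(x)\propto e^{-1/(1-4x^2)}$, the helper names, and the values $u=0.6,$ $\varepsilon=0.01$ are assumed choices) evaluates the convolution in Eq. \eqref{fu} by a midpoint rule.

```python
import math

def bump(x):
    """Smooth bump supported on (-1/2, 1/2), unnormalized."""
    return math.exp(-1.0 / (1.0 - 4.0 * x * x)) if abs(x) < 0.5 else 0.0

# normalize so that the integral of phi equals 1 (midpoint rule)
M = 10000
mass = sum(bump(-0.5 + (i + 0.5) / M) for i in range(M)) / M

def f_smoothed(x, u, eps, n=2000):
    """(chi_[0,u] * phi_eps)(x) by a midpoint rule over the support of phi_eps."""
    total = 0.0
    for i in range(n):
        y = eps * (-0.5 + (i + 0.5) / n)   # y ranges over (-eps/2, eps/2)
        if 0.0 <= x - y <= u:
            total += bump(y / eps) / mass
    return total / n                       # the 1/eps in phi_eps cancels against dy

u, eps = 0.6, 0.01
print(f_smoothed(0.3, u, eps), f_smoothed(0.9, u, eps))  # deep inside vs. outside
```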
Theorem \ref{strong} applies to $f_{\varepsilon, u}(x)$ for all $u>\varepsilon>0.$ Fix $\varepsilon>0$ and define the statistic
$S_t(\mu,u,\varepsilon),$
\begin{equation}\label{Su}
S_t(\mu, u, \varepsilon)\triangleq \pi\sqrt{2\mu}\Bigl[\sum\limits_{\gamma} f_{\varepsilon, u}\bigl(\gamma(t)\bigr) - \frac{\log t}{2\pi \lambda(t)} \int f_{\varepsilon, u}(x) dx\Bigr].
\end{equation}
Then, we have the following key result, cf. \cite{Menon}. By Theorem \ref{strong},
the process $u\mapsto S_t(\mu, u, \varepsilon),$ $u\in (0, 1),$
converges in law in the limit $t\rightarrow\infty$ to the centered gaussian field having the asymptotic covariance
\begin{align}\label{limcov1}
\begin{cases}
& -\mu
\bigl(\log\varepsilon - \kappa + \log|u-v| - \log|u| - \log|v| \bigr) + O(\varepsilon), \; \text{if $|u-v|\gg \varepsilon$},
\\
& -2\mu
\bigl(\log\varepsilon - \kappa - \log|u| \bigr) + O(\varepsilon),
\; \text{if $u=v.$}
\end{cases}
\end{align}
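The covariance in Eq. \eqref{limcov1} follows from bilinearity: ${\bf Cov}\bigl(V(u)-V(0),\,V(v)-V(0)\bigr)={\bf Cov}(V(u),V(v))-{\bf Cov}(V(u),V(0))-{\bf Cov}(V(v),V(0))+{\bf Var}(V(0)),$ scaled by $\mu/2.$ The following Python sketch (a trivial arithmetic check added for illustration; the sample values of $u,$ $v,$ $\varepsilon$ are arbitrary, and the $O(\varepsilon)$ terms are dropped) confirms that both entries of the display agree with this expansion.

```python
import math

kappa, eps, mu = 0.0, 1e-3, 1.0

def cov(u, v):
    """Regularized covariance of V_eps: -2 log|u-v| off the diagonal."""
    return 2.0 * (kappa - math.log(eps)) if u == v else -2.0 * math.log(abs(u - v))

u, v = 0.3, 0.7
# off-diagonal entry via bilinearity vs. the displayed formula
lhs = mu / 2.0 * (cov(u, v) - cov(u, 0.0) - cov(v, 0.0) + cov(0.0, 0.0))
rhs = -mu * (math.log(eps) - kappa + math.log(abs(u - v))
             - math.log(u) - math.log(v))
# diagonal entry
lhs_d = mu / 2.0 * (cov(u, u) - 2.0 * cov(u, 0.0) + cov(0.0, 0.0))
rhs_d = -2.0 * mu * (math.log(eps) - kappa - math.log(u))
print(lhs, rhs, lhs_d, rhs_d)
```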
Thus, recalling the process $V_\varepsilon(u)$ in Eq. \eqref{covkk}, we have shown that the smoothed counting statistic $S_t(\mu,u,\varepsilon)$ converges to
the \emph{centered} GFF on the interval,
\begin{equation}
S_t(\mu,u,\varepsilon) \rightarrow \sqrt{\mu/2}\,\bigl(V_\varepsilon(u)-V_\varepsilon(0)\bigr).
\end{equation}
Moreover, we also showed in \cite{Menon} that in the case of $\varepsilon$ varying with $t$ the
$\mathcal{H}^{1/2}$ covariance in Theorem \ref{strong}
is preserved under the following natural slow decay condition,
\begin{equation}
\frac{\lambda(t)}{\log t}\ll\varepsilon(t) \ll 1, \label{vart}
\end{equation}
so that under this condition we have the approximation,
\begin{equation}
S_t\bigl(\mu,u,\varepsilon(t)\bigr) \approx \sqrt{\mu/2}\,\bigl(V_{\varepsilon(t)}(u)-V_{\varepsilon(t)}(0)\bigr), \; t\rightarrow \infty.
\end{equation}
We emphasize that these results are believed to be universal in that they only require $\mathcal{H}^{1/2}-$Gaussianity of the limiting statistic
and refer the reader to \cite{joint} for the corresponding CUE calculations.
We will now formulate some conjectured mod-Gaussian limit theorems for the centered GFF on the interval and the associated smoothed
counting statistic $S_t(\mu,u,\varepsilon)$ in the limit $t\rightarrow \infty.$
\begin{conjecture}[Weak version]\label{modcentered}
Let $-(\tau+1)/2<\Re(q)<\tau,$ $\tau=2/\mu,$ $0<\mu<2.$ Then
\begin{align}
& \lim\limits_{\varepsilon\rightarrow 0} e^{\mu(\log \varepsilon-\kappa)\frac{q(q+1)}{2}} \Big[\lim\limits_{t\rightarrow\infty}
{\bf E} \Bigl[\Bigl(\int_0^1 e^{S_t(\mu,u,\varepsilon)} du\Bigr)^q\Bigr]\Bigr], \label{M1transf}\\
& = \lim\limits_{\varepsilon\rightarrow 0} e^{\mu(\log \varepsilon-\kappa)\frac{q(q+1)}{2}} \Big[
{\bf E} \Bigl[\Bigl(\int_0^1 e^{\sqrt{\mu/2} \bigl(V_{\varepsilon}(u)-V_{\varepsilon}(0)\bigr)
} du\Bigr)^q\Bigr]\Bigr], \label{M1transfV}\\
& = \Bigl(\frac{2\pi\tau^{1/\tau}}{\Gamma\bigl(1-\frac{1}{\tau}\bigr)}\Bigr)^q
\frac{\Gamma_2(1+q+\tau\,|\,\tau)}{\Gamma_2(1+2q+\tau\,|\,\tau)}
\frac{\Gamma_2(1-q+\tau\,|\,\tau)}{\Gamma_2(1+\tau\,|\,\tau)}
\frac{\Gamma_2(-q+\tau\,|\,\tau)}{\Gamma_2(\tau\,|\,\tau)}
\frac{\Gamma_2(2+q+2\tau\,|\,\tau)}{\Gamma_2(2+2\tau\,|\,\tau)}.
\label{M1}
\end{align}
\end{conjecture}
\begin{conjecture}[Strong version]
Let $-(\tau+1)/2<\Re(q)<\tau,$ $\tau=2/\mu,$ $0<\mu<2,$ and $\varepsilon(t)$ satisfy Eq. \eqref{vart}. Then
\begin{align}
\lim\limits_{t\rightarrow\infty} e^{\mu(\log \varepsilon(t)-\kappa)\frac{q(q+1)}{2}} \Big[
{\bf E} \Bigl[\Bigl(\int_0^1 e^{S_t(\mu,u,\varepsilon(t))} du\Bigr)^q\Bigr]\Bigr] =
\lim\limits_{\varepsilon\rightarrow 0} e^{\mu(\log \varepsilon-\kappa)\frac{q(q+1)}{2}} \Big[\lim\limits_{t\rightarrow\infty}
{\bf E} \Bigl[\Bigl(\int_0^1 e^{S_t(\mu,u,\varepsilon)} du\Bigr)^q\Bigr]\Bigr].
\end{align}
\end{conjecture}
The calculations behind these conjectures are all based on applications of the Girsanov theorem similar to those in Subsection \ref{IPRsection} and
Conjecture \ref{ourmainconjinterval}. In particular, the expression in Eq. \eqref{M1} corresponds to the expression for $\mathfrak{M}(q\,|\,\tau,
\lambda_1, \lambda_2)$ in Eq. \eqref{thefunctioninterval} with $\lambda_1=\mu\,q$ and $\lambda_2=0.$
The reader can find details of these calculations in Theorem 4.5 and Lemma 4.6 of \cite{Menon}.
The interest in the strong version of the conjecture is that it contains information about the statistical distribution of the zeroes at large but finite $t,$
whereas the weak version only describes the distribution at $t=\infty.$ Moreover,
as the strong conjecture fits into the general framework of mod-Gaussian convergence, cf. Jacod \emph{et al.} \cite{Jacod},
the results of \cite{Feray} and \cite{Meliot} and the explicit knowledge of the limiting function make it possible to quantify the normality zone,
\emph{i.e.} the scale up to which the tails of our exponential functionals are normal,
and the breaking of symmetry near the edges of the normality zone thereby quantifying precise deviations at large $t.$
We refer the interested reader to \cite{joint} for some rigorous results for positive integer $q$ that partially verify
our conjectures when the underlying statistic comes from CUE.
The same type of result can be formulated for the GFF on the circle.
\begin{conjecture}
Let $V_\varepsilon(\psi)$ be the GFF on the circle as in Eq. \eqref{covkcir}.
\begin{align}
\lim\limits_{\varepsilon\rightarrow 0} e^{\mu(\log \varepsilon-\kappa)\frac{q(q+1)}{2}} \Big[
{\bf E} \Bigl[\Bigl(\int_0^1 e^{\sqrt{\mu/2} \bigl(V_{\varepsilon}(\psi)-V_{\varepsilon}(1/2)\bigr)
} d\psi\Bigr)^q\Bigr]\Bigr] = &
\frac{\tau^{\frac{q}{\tau}}}{\Gamma^q\bigl(1-\frac{1}{\tau}\bigr)}
\frac{\Gamma^3_2(q+1+\tau\,|\,\tau)}{\Gamma^2_2(\tau+1\,|\,\tau)\Gamma_2(2q+1+\tau\,|\,\tau)} \times \nonumber \\
& \times
\frac{\Gamma_2(-q+\tau\,|\,\tau)}{\Gamma_2(\tau\,|\,\tau)}.\label{M1cir}
\end{align}
\end{conjecture}
The limiting function in this case corresponds to the Mellin transform of the Morris integral distribution in Eq. \eqref{thefunctioncircle}
with $\lambda=\mu q/2.$
We end this section with another conjecture, which combines Conjectures \ref{maxint} and \ref{maxintcircle} with Conjecture \ref{modcentered}, Eq. \eqref{M1transfV}. It
is a mod-Gaussian statement about the maximum of the centered GFF on the circle and interval. For simplicity, we let $\kappa=0.$
\begin{conjecture}
Let the gaussian field $V_\varepsilon(u)$ be as in Eq. \eqref{covkk} and let $V_\varepsilon(\psi)$ be the corresponding field on the circle.
Let $N=1/\varepsilon$ and consider the discretizations as in Conjectures \ref{maxint} and \ref{maxintcircle}. Let $-1<\Re(q)<1.$
\begin{align}
\lim\limits_{N\rightarrow \infty} N^{-q^2-2q} (\log N)^{3q/2} {\bf E} \Bigl[e^{q \max\big\{V_\varepsilon(x_j)-V_\varepsilon(0), \,j=1\cdots N\big\}}\Bigr]
= & e^{q\,\text{const}}\,\Gamma(1-q)
\frac{G(2+2q)}{G(2+q)}
\frac{G(2)}{G(2-q)} \times\nonumber \\ & \times
\frac{G(1)}{G(1-q)}
\frac{G(4)}{G(4+q)}, \label{M1crit}
\\
\lim\limits_{N\rightarrow \infty} N^{-q^2-2q} (\log N)^{3q/2} {\bf E} \Bigl[e^{q \max\big\{V_\varepsilon(\psi_j)-V_\varepsilon(1/2), \,j=-N/2\cdots N/2\big\}}\Bigr] = & e^{q\,\text{const}}\,\Gamma(1-q)
\frac{G(2+2q)\,G^2(2)}{G^3(2+q)}
\times
\nonumber \\ & \times
\frac{G(1)}{G(1-q)}. \label{M1critcir}
\end{align}
\end{conjecture}
It should be emphasized that the expressions on the right-hand side of Eqs. \eqref{M1}, \eqref{M1cir}, \eqref{M1crit}, and \eqref{M1critcir} are
not Mellin transforms of probability distributions. We refer the interested reader to \cite{cao17} and \cite{cao18} for deep results on the subtle nature of the distribution of the maximum of the centered GFF fields. In addition, as shown in \cite{cao17}, the distribution of the maximum of the two-dimensional gaussian field with the covariance $-\log|\vec{r}_1-\vec{r}_2|$ can be similarly quantified
in terms of the critical analytic continuation of the complex Selberg integral in Eq. \eqref{critcomplexS}.
\section{Conclusions}
\noindent We have reviewed conjectured laws of the Bacry-Muzy GMC measures on the circle and interval with logarithmic potentials.
We have described both the analytical and probabilistic approaches to the Morris and Selberg integral
probability distributions that are believed to give these laws. The building blocks of the Morris and Selberg integral distributions are
the so-called Barnes beta distributions, whose theory we have reviewed in detail. We have also described critical Morris and Selberg integral
distributions, which are conjectured to be the distributions of derivative martingales of the Bacry-Muzy GMC measures on the circle and interval.
Our analytical methods are not limited to the Morris and Selberg integrals. We have given the analytic continuation of the complex
Selberg integral and established its involution invariance property.
We have considered three applications of our conjectures. The first application is the calculation of
the maximum of the discrete Gaussian Free Field restricted to the circle and interval in terms of the critical Morris and Selberg integral
distributions, respectively. The second application is the calculation of inverse participation ratios of the Fyodorov-Bouchaud model.
In the third application we have conjectured two kinds of mod-Gaussian limit theorems. The first kind relates linear statistics
that converge to the $\mathcal{H}^{1/2}-$Gaussian noise to the Selberg integral distribution. The second kind relates
the distribution of the maximum of the centered Gaussian Free Field restricted to the circle and interval to the
critical Morris and Selberg integral distributions.
\section*{Acknowledgments}
\noindent The author wishes to thank Y. V. Fyodorov for bringing refs. \cite{cao17} and \cite{Fyo09} to the author's attention.
\section{Introduction}
I came across a certain set of Legendrian links while searching for
examples to illustrate the main theorem of my thesis \cite{en}, and they
served that purpose very well. Since then I kept returning to them because
I could always discover something pretty. This paper is a collection of
those findings.
The links in question (see Figure \ref{fig:lagpic}), which I call
\emph{Legendrian closures of positive braids}, denote by $L_\beta$, and represent by
front diagrams $f_\beta$, are
Legendrian representatives of braid-positive links, i.e.\ link types that
can be obtained as the closure of a positive braid $\beta$. (These are not
to be confused with the more general notion of positive link, i.e.\ link
types that can be represented with diagrams whose geometric and algebraic
crossing numbers agree.) In fact I conjecture that $L_\beta$ is essentially
the only Legendrian representative of such a link type, in the following
sense.
\begin{sejt} Any braid-positive Legendrian link is a stabilization of the
corresponding Legendrian closure shown in Figure \ref{fig:lagpic}. In
particular, braid-positive links are Legendrian simple. \end{sejt}
This paper however is not about compiling evidence for this conjecture. Let
us only mention that Etnyre and Honda \cite{EH1} proved it for positive
torus knots, that the set of links treated by Ding and Geiges \cite{geig}
includes many two-component braid-positive links (for example, positive $(2k,2)$ torus links), and that Chekanov's example
\cite{chek} of a non-Legendrian simple knot type is $5_2$, which is the
smallest positive, but not braid-positive knot. Also, by Rutherford's work
\cite{rulpoly}, the Thurston--Bennequin number of $L_\beta$ is maximal in its
smooth isotopy class because the front diagram $f_\beta$ is easily seen to admit
rulings (see section \ref{sec:rul}). Because some (actually, all) of those rulings
are $2$--graded, the maximal Thurston--Bennequin number is only attained along with
rotation number $0$.
Instead, we will concentrate on Legendrian isotopy invariants of
$L_\beta$. Some of these have been evaluated in \cite{en}, of which the present paper
is a continuation. It is thus assumed that the reader is familiar with sections 2 (basic
notions) and 6 (Legendrian closures of positive braids and their relative contact homology)
of \cite{en}. In this paper, we will re-formulate some of those computations, and get new
results also, by using what we call the path matrix of a positive braid. This construction
is very similar to that of Lindstr\"om \cite{lind}
and Gessel--Viennot \cite{gv}, which is also included in the volume
\cite{book}.
The paper is organized as follows. We review some results of \cite{en} in section
\ref{sec:prelim}, and discuss elementary properties of the path matrix in section
\ref{sec:matrix}.
Then, the main results are Theorem \ref{thm:ideal}, where we compute a new generating
set for the (abelianized) image $I$ of the contact homology differential and its
consequence, Theorem \ref{thm:gauss}, which gives a quick test to decide whether a given
set of crossings is an augmentation. The latter is in terms of an $LU$--decomposition,
i.e.\ Gaussian elimination of the path matrix. In Theorem \ref{thm:grob}, we point out
that the old generators, i.e.\ the ones read off the knot diagram, automatically form
a Gr\"obner basis for $I$. In Theorem \ref{thm:szimultan}, we construct a subset of the
crossings of $\beta$ which is simultaneously an augmentation and a ruling of $L_\beta$. This
strengthens the well known relationship between augmentations and rulings. We close the
paper with a few examples.
Acknowledgements: Part of the research was carried out in the summer of 2005 when I visited
the Alfr\'ed
R\'enyi Mathematical Institute in Budapest. It is a great pleasure to thank the Institute
for their hospitality and Andr\'as N\'emethi, Endre Szab\'o, and Andr\'as Sz\H{u}cs for
stimulating discussions. I am grateful to Alexander Stoimenow for providing me with a list
of braid-positive knots up to $16$ crossings. My conversations with Mikhail Kotchetov were
invaluable to the discovery of Theorem \ref{thm:grob}. Last but not least, many thanks to
Supap Kirtsaeng for writing the computer implementation of Theorem \ref{thm:gauss}.
\section{Preliminaries}\label{sec:prelim}
The goal of this section is to recall some results from \cite{en} relevant
to this paper. We will work in the standard contact $3$--space
$\ensuremath{\mathbf R}^3_{xyz}$ with the kernel field of the $1$--form $\mathrm d z - y \mathrm d x$. We
will use the basic notions of Legendrian knot, Legendrian isotopy, front
($xz$) diagram, Maslov potential, Lagrangian ($xy$) diagram, resolution
\cite{computable}, Thurston--Bennequin ($tb$) and rotation ($r$) numbers,
admissible disc and contact homology\footnote{Because absolute contact
homology doesn't appear in the paper, we'll use this shorter term for what
may be better known as relative, Legendrian, or Chekanov--Eliashberg
contact homology.} etc.\ without reviewing their definitions. We will also
assume that the reader is familiar with section 6 of \cite{en}, of which
this paper is in a sense an extension. For a complete introduction to
Legendrian knots and their contact homology, see \cite{etn}.
We would like to stress only a few points whose treatment may be somewhat
non-standard. Crossings $a$ of both front and Lagrangian diagrams are assigned
an \emph{index}, denoted by $|a|$, which is an element of $\ensuremath{\mathbf Z}_{2r}$, with the entire
assignment known as a \emph{grading}. This is easiest to define for fronts
of single-component knots as the difference of the Maslov potentials
($\text{upper}-\text{lower}$) of the two intersecting strands. If a
Lagrangian diagram is the result of resolution, the old crossings keep their indices and
the crossings replacing the right cusps are assigned
the index $1$. In the multi-component case, the Maslov potential difference
becomes ambiguous for crossings between different components. This gives
rise to an infinite set of so-called admissible gradings. We consider
these as introduced in \cite[section 2.5]{computable} and not the larger class of
gradings described in \cite[section 9.1]{chek}.
Let $\beta$ denote an arbitrary positive braid word. The Legendrian
isotopy class $L_\beta$ is a natural Legendrian representative of the link
which is the closure of $\beta$. (All braids and braid words in this paper
are positive. The same symbol $\beta$ may sometimes refer to the braid
represented by the braid word $\beta$.) $L_\beta$, in turn, is represented by the
front diagram
$f_\beta$ and its resolution, the Lagrangian diagram $\gamma_\beta$ (see
Figure \ref{fig:lagpic}). Considering $\beta$ drawn horizontally, label
the left and right endpoints of the strands from top to bottom with the
first $q$ whole numbers ($q$ is the number of strands in $\beta$). The
crossings of $\beta$, labeled from left to right by the symbols
$b_1,\ldots,b_w$, are the only crossings of $f_\beta$. Due to resolution,
$\gamma_\beta$ also has the crossings $a_1,\ldots,a_q$.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{newlagpic.eps}
\caption{Front ($f_\beta$) and Lagrangian ($\gamma_\beta$) diagrams of the
closure ($L_\beta$) of the positive braid $\beta$}
\label{fig:lagpic}
\end{figure}
The differential graded algebra (DGA) $\mathscr A$ which is the chain
complex for the contact homology of $L_\beta$ is generated (as a
non-commutative algebra with unit) freely over $\ensuremath{\mathbf Z}_2$ by these $q+w$
symbols ($w$ is the word length or exponent sum of $\beta$). It's assigned
a $\ensuremath{\mathbf Z}$--grading\footnote{All components of $L_\beta$ have $r=0$. If there
are multiple components, what we describe here is only one of the
admissible gradings.} which takes the value $0$ on the $b_k$ and the value
$1$ on the $a_n$ (extended by the rule $|uv|=|u|+|v|$). By Theorem
6.7 of \cite{en}, the differential $\partial$ is given on the generators by the
formulas \begin{equation}\label{eq:perem}
\partial(b_k)=0\quad\text{and}\quad\partial(a_n)=1+C_{n,n}. \end{equation}
(It is extended to $\mathscr A$ by linearity and the Leibniz rule.) Here, for any $n$,
\begin{equation}\label{eq:Cii} C_{n,n}
=\sum_{\{\,i_1,\ldots,i_c\,\}\in D_n}
B_{n,i_1}B_{i_1,i_2}B_{i_2,i_3}\ldots B_{i_{c-1},i_c}B_{i_c,n},
\end{equation}
where two more terms require explanation.
\begin{Def}\label{def:sorozat}
A finite sequence of positive integers is called \emph{admissible}
if for all $s\ge 1$, between any two appearances of $s$ in the
sequence there appears a number greater than $s$.
For $n\ge 1$, we denote by $D_n$ the set of all admissible
sequences that are composed of the numbers $1,2,\ldots,n-1$.
\end{Def}
Note that non-empty admissible sequences have a unique highest
element.
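Definition \ref{def:sorozat} is easy to experiment with by machine. In the Python sketch below (the helper names are ours, not the paper's), the enumeration of $D_n$ terminates because the unique highest element splits an admissible sequence into two shorter admissible ones, so the longest admissible sequence composed of the numbers $1,\ldots,m$ has length $2^m-1$.

```python
from itertools import product

def is_admissible(seq):
    # Between any two appearances of a value s, a strictly larger value
    # must appear (the definition of an admissible sequence).
    return not any(seq[a] == seq[b] and max(seq[a:b + 1]) == seq[a]
                   for a in range(len(seq))
                   for b in range(a + 1, len(seq)))

def D(n):
    # All admissible sequences composed of the numbers 1, ..., n-1.
    # The longest one has length 2**(n-1) - 1 (a "ruler" pattern such as
    # 1,2,1,3,1,2,1), which bounds the search.
    m = n - 1
    out = [()]
    for length in range(1, 2 ** m if m > 0 else 1):
        out += [s for s in product(range(1, m + 1), repeat=length)
                if is_admissible(s)]
    return out
```

For instance, $D_2=\{\varnothing,(1)\}$, matching the sets used in the trefoil example of the next section.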
\begin{Def}\label{def:B} Let $1\le i,j\le q$. The element $B_{i,j}$ of the
DGA of $\gamma_\beta$ is the sum of the following products. For each path
composed of parts of the strands of the braid (word) $\beta$ that connects
the left endpoint labeled $i$ to the right endpoint labeled $j$ so that it
only turns around quadrants facing up, take the product of the labels of
the crossings from left to right that it turns at. (We will refer to the
paths contributing to $B_{i,j}$ as \emph{paths in the braid}.) \end{Def}
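Definition \ref{def:B} translates into code almost verbatim: sweep the braid word from left to right and keep, for each height, the $\ensuremath{\mathbf Z}_2$--sum of label-words of partial paths currently at that height. At a crossing $\sigma_i$, a path at height $i$ either turns at the up-facing quadrant (staying at height $i$ and recording the label) or follows its strand to height $i+1$, while a path at height $i+1$ must follow its strand to height $i$. The following Python sketch (the encoding of a $\ensuremath{\mathbf Z}_2$--sum of non-commutative monomials as a set of label-tuples is ours) is offered only as an aid to experimentation.

```python
def path_matrix(q, word):
    # word lists the generator indices of a positive braid word on q
    # strands, e.g. [1, 1, 1] for sigma_1^3; crossings are labeled
    # 'b1', 'b2', ... from left to right.  Entry (i, j) of the result is
    # the Z_2-sum of label-words of paths from left endpoint i to right
    # endpoint j that only turn at quadrants facing up.
    B = [[None] * q for _ in range(q)]
    for start in range(1, q + 1):
        state = {h: set() for h in range(1, q + 1)}
        state[start] = {()}
        for k, i in enumerate(word, 1):
            label = 'b%d' % k
            turned = {w + (label,) for w in state[i]}  # turn: stay at height i
            through = state[i]                         # follow strand: i -> i+1
            state[i] = turned ^ state[i + 1]           # plus arrivals from i+1
            state[i + 1] = through
        for end in range(1, q + 1):
            B[start - 1][end - 1] = state[end]
    return B
```

For the trefoil braid $\sigma_1^3$ of Figure \ref{fig:111}, this reproduces the matrix computed in the example below.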
We will also use the following notation: for any $i<j$, let
\begin{equation}\label{eq:Cij}
C_{i,j}=\sum_{\{\,i,i_1,\ldots,i_c\,\}\in D_j} B_{i,i_1}B_{i_1,i_2}B_{i_2,i_3}\ldots
B_{i_{c-1},i_c}B_{i_c,j}. \end{equation}
The expressions $B_{i,j}$ and $C_{i,j}$ are elements of the DGA
$\mathscr A$. Even though $\mathscr A$ is non-commutative, we will
refer to them, as well as to similar expressions and even matrices
with such entries, as polynomials.
\section{The path matrix}\label{sec:matrix}
The polynomials $B_{i,j}$ are naturally arranged in a $q\times q$
matrix $B_\beta$ (with entries in $\mathscr A$), which we will call
the \emph{path matrix of $\beta$}.
If we substitute $0$ for each crossing label of $\beta$, then $B_\beta$
reduces to the matrix of the underlying permutation $\pi$ of $\beta$:
\[B_\beta(0,0,\ldots,0)=\left[\delta_{\pi(i),j}\right]=:P_\pi,\] where
$\delta$ is the Kronecker delta. Note that $B_\beta$ depends on the braid
\emph{word}, whereas $P_\pi$ only on the braid itself.
\begin{megj}\label{rem:csuszas}
When the braid group relation $\sigma_i\sigma_j=\sigma_j\sigma_i$, $|i-j|>1$
is applied to change $\beta$, the diagram $\gamma_\beta$ only changes by an
isotopy of the plane and the path matrix $B_\beta$ hardly changes at all. In
fact if we don't insist on increasing label indices and re-label the braid as
on the right side of Figure \ref{fig:haromszog}, then $B_\beta$ remains the same.
Therefore such changes in braid words will be largely ignored in the paper.
\end{megj}
\subsection{Multiplicativity}
The path matrix behaves multiplicatively in the following sense: If
two positive braid words $\beta_1$ and $\beta_2$ on $q$ strands are
multiplied as in the braid group (placing $\beta_2$ to the right of
$\beta_1$) to form the braid word $\beta_1*\beta_2$, then
\begin{equation}\label{eq:multip}
B_{\beta_1*\beta_2}=B_{\beta_1}\cdot B_{\beta_2}.
\end{equation}
Note that for this to hold true, $\beta_1$ and $\beta_2$ have to
carry their own individual crossing labels, which $\beta_1*\beta_2$
inherits. With that convention, the observation is immediate: we may
group together the paths from left endpoint $i$ to right endpoint $j$ in
$\beta_1*\beta_2$ according to the position where they cross over from
$\beta_1$ to $\beta_2$.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{haromszog.eps}
\caption{Labels before and after an isotopy and a Reidemeister III move.
The signs on the right are the so called Reeb signs.}
\label{fig:haromszog}
\end{figure}
\begin{megj}
Apart from the technicality of having to view $B_{\beta_1}$ and
$B_{\beta_2}$ as polynomials of separate sets of indeterminates, there are
other problems that have so far prevented the author from defining a
representation of the positive braid semigroup based on \eqref{eq:multip}.
Namely, when we represent the same positive braid by a different braid
word, the path matrix changes. This can be somewhat controlled by
requiring, as another departure from our convention of increasing label
subscripts, that whenever the braid group relation
$\sigma_i\sigma_{i+1}\sigma_i=\sigma_{i+1}\sigma_i\sigma_{i+1}$ is applied to
change $\beta$, the two
sets of labels are related as on the right side of Figure \ref{fig:haromszog}.
Then the path
matrix changes \begin{equation}\label{eq:haromszog} \text{from
}\begin{bmatrix}b_2&b_3&1\\b_1&1&0\\1&0&0\end{bmatrix}\text{ to
}\begin{bmatrix}b_2+b_3b_1&b_3&1\\b_1&1&0\\1&0&0\end{bmatrix}.\end{equation}
Notice that this is just an application of Chekanov's chain map
\cite{chek} relating the DGA's of the diagrams before and after a
Reidemeister III move (and the same happens if the triangle is part of a
larger braid). Therefore we may hope that the path matrix of a positive
braid $\beta$, with its \emph{entries viewed as elements of the relative
contact homology $H(L_\beta)$}, is independent of the braid word
representing $\beta$. This is indeed the case because the set of equivalent
positive geometric braids (with the endpoints of strands
fixed\footnote{I.e., conjugation is not allowed here; if it was, the space
in question would not be contractible any more, as demonstrated in \cite{en}.})
is contractible,
thus it is possible
to canonically identify the contact homologies coming from different
diagrams. But because there isn't any known relation between the contact
homologies of $L_{\beta_1}$, $L_{\beta_2}$, and $L_{\beta_1*\beta_2}$,
this doesn't help us.
\end{megj}
The path matrix of the braid group generator $\sigma_i$, with its
single crossing labeled $b$ is block-diagonal with only two
off-diagonal entries:
\begin{equation}\label{eq:elemi}
B_{\sigma_i}=\left[\begin{array}{rccl}
I_{i-1}&&&\\
&b&1&\\
&1&0&\\
&&&I_{q-i-1}
\end{array}\right].
\end{equation}
By \eqref{eq:multip}, all path matrices are products of such
elementary matrices.
\begin{pelda}
Consider the braid $\beta$ shown in Figure \ref{fig:111}. Its path matrix is
\[B_\beta=
\begin{bmatrix}B_{1,1}&B_{1,2}\\B_{2,1}&B_{2,2}\end{bmatrix}=
\begin{bmatrix}b_1&1\\1&0\end{bmatrix}
\begin{bmatrix}b_2&1\\1&0\end{bmatrix}
\begin{bmatrix}b_3&1\\1&0\end{bmatrix}=
\begin{bmatrix}b_1+b_3+b_1b_2b_3&1+b_1b_2\\1+b_2b_3&b_2\end{bmatrix}.\]
(The path contributing $b_1b_2$ to $B_{1,2}$ is shown.)
As $D_1=\{\:\varnothing\:\}$ and
$D_2=\{\:\varnothing,\{\,1\,\}\:\}$, we have
$C_{1,1}=B_{1,1}=b_1+b_3+b_1b_2b_3$ and
$C_{2,2}=B_{2,2}+B_{2,1}B_{1,2}=b_2+(1+b_2b_3)(1+b_1b_2)$. Thus in the DGA of
$\gamma_\beta$, the relations
$\partial a_1=1+b_1+b_3+b_1b_2b_3$ and $\partial
a_2=1+b_2+(1+b_2b_3)(1+b_1b_2)=b_2+b_2b_3+b_1b_2+b_2b_3b_1b_2$ hold.
\end{pelda}
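The example is easy to check by machine using the factorization into elementary matrices. In the Python sketch below (the encoding is ours: a set of tuples of labels stands for a $\ensuremath{\mathbf Z}_2$--sum of non-commutative monomials), products are accumulated one monomial at a time via symmetric difference, which implements the mod-$2$ cancellation; the snippet also rebuilds the two boundary polynomials $1+C_{1,1}$ and $1+C_{2,2}$ computed above.

```python
def z2_mul(X, Y):
    # product of two Z_2-sums of non-commutative monomials;
    # symmetric difference implements the mod-2 cancellation
    out = set()
    for u in X:
        for v in Y:
            out ^= {u + v}
    return out

def z2_matmul(A, B):
    C = [[set() for _ in B[0]] for _ in A]
    for i in range(len(A)):
        for j in range(len(B[0])):
            for k in range(len(B)):
                C[i][j] ^= z2_mul(A[i][k], B[k][j])
    return C

def elementary(q, i, label):
    # the elementary path matrix B_{sigma_i} displayed above:
    # identity with the block [[label, 1], [1, 0]] in rows/columns i, i+1
    E = [[{()} if r == c else set() for c in range(q)] for r in range(q)]
    E[i - 1][i - 1] = {(label,)}
    E[i - 1][i] = {()}
    E[i][i - 1] = {()}
    E[i][i] = set()
    return E

# the trefoil braid sigma_1^3 of Figure fig:111
B = elementary(2, 1, 'b1')
for lab in ('b2', 'b3'):
    B = z2_matmul(B, elementary(2, 1, lab))

# the boundaries of a_1 and a_2: 1 + C_{1,1} and 1 + C_{2,2}
da1 = {()} ^ B[0][0]
da2 = {()} ^ B[1][1] ^ z2_mul(B[1][0], B[0][1])
```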
\begin{figure}
\centering
\begin{minipage}[c]{.4\textwidth}
\centering
\includegraphics[width=\textwidth]{111.eps}
\end{minipage}
\begin{minipage}[c]{.4\textwidth}
\centering
\caption{Trefoil braid}
\label{fig:111}
\end{minipage}
\end{figure}
\subsection{Inverse matrix}
The inverse of the elementary matrix $B_{\sigma_i}$ is
\[B^{-1}_{\sigma_i}=\left[\begin{array}{rccl}
I_{i-1}&&&\\
&0&1&\\
&1&b&\\
&&&I_{q-i-1}
\end{array}\right].\]
Therefore, writing
$\beta=\sigma_{i_1}\sigma_{i_2}\cdots\sigma_{i_w}$, from
$B^{-1}_\beta=\left(B_{\sigma_{i_1}}B_{\sigma_{i_2}}\cdots
B_{\sigma_{i_w}}\right)^{-1} = B^{-1}_{\sigma_{i_w}}\cdots
B^{-1}_{\sigma_{i_1}}$ we see that $B^{-1}_\beta$ is also a path
matrix of the same braid word $\beta$, but in a different sense. This
time, the $(i,j)$--entry is a sum of the following products: For
each path composed of parts of the strands of $\beta$ that connects
the \emph{right} endpoint labeled $i$ to the \emph{left} endpoint
labeled $j$ so that it only turns at quadrants facing \emph{down},
take the product of the crossings from right to left that it turns
at. So it's as if we turned $\beta$ upside down by a $180^\circ$ rotation
while keeping the original labels of the crossings
and of the endpoints of the strands.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{felszalad.eps}
\caption{First half of a conjugation move (sequence of two Reidemeister II
moves)}
\label{fig:felszalad}
\end{figure}
That operation on the braid word produces a Legendrian isotopic closure
(where by closure we mean adding strands above the braid, as in Figure
\ref{fig:lagpic}). This is seen by a two-step process. First, apply
`half-way' the conjugation move of \cite{en} (as in Figure
\ref{fig:felszalad}) successively to each crossing of $\beta$ from left to
right. This turns $\beta$ upside down, but now the closing strands are
underneath.
Then, repeat $q$ times the procedure shown in Figure \ref{fig:felfordul},
which we borrow from \cite{tab}. The box may contain any front diagram.
Before the move represented by the third arrow, we make the undercrossing
strand on the left steeper than all slopes that occur inside the box, so
that it slides underneath the entire diagram without a self-tangency
moment. (In $3$--space, increasing the slope results in a huge
$y$--coordinate. Recall that fronts appear on the $xz$--plane, in
particular the $y$--axis points away from the observer. So the motion of
the strand happens far away, way behind any other piece of the knot.)
\begin{figure}
\centering
\includegraphics[width=\linewidth]{felfordul.eps}
\caption{Moving a strand to the other side of a front diagram.}
\label{fig:felfordul}
\end{figure}
\begin{pelda}
The inverse of the matrix from the previous example is
\[B^{-1}_\beta=
\begin{bmatrix}0&1\\1&b_3\end{bmatrix}
\begin{bmatrix}0&1\\1&b_2\end{bmatrix}
\begin{bmatrix}0&1\\1&b_1\end{bmatrix}=
\begin{bmatrix}b_2&1+b_2b_1\\1+b_3b_2&b_3+b_1+b_3b_2b_1\end{bmatrix}.\]
In Figure \ref{fig:111}, the path contributing $b_3b_2b_1$ to the $(2,2)$ entry is shown.
\end{pelda}
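The reader may again confirm this by machine: with the same set-of-label-tuples encoding as before (ours, not the paper's), the product of inverse elementary matrices reproduces the matrix above, and $B_\beta B^{-1}_\beta=B^{-1}_\beta B_\beta=I$ holds over the non-commutative ground ring.

```python
def z2_mul(X, Y):
    # product of two Z_2-sums of non-commutative monomials
    out = set()
    for u in X:
        for v in Y:
            out ^= {u + v}
    return out

def z2_matmul(A, B):
    C = [[set() for _ in B[0]] for _ in A]
    for i in range(len(A)):
        for j in range(len(B[0])):
            for k in range(len(B)):
                C[i][j] ^= z2_mul(A[i][k], B[k][j])
    return C

def elementary(q, i, label):
    # the path matrix of sigma_i: block [[label, 1], [1, 0]] at rows i, i+1
    E = [[{()} if r == c else set() for c in range(q)] for r in range(q)]
    E[i - 1][i - 1] = {(label,)}
    E[i - 1][i] = {()}
    E[i][i - 1] = {()}
    E[i][i] = set()
    return E

def elementary_inv(q, i, label):
    # the inverse elementary path matrix: block [[0, 1], [1, label]]
    E = [[{()} if r == c else set() for c in range(q)] for r in range(q)]
    E[i - 1][i - 1] = set()
    E[i - 1][i] = {()}
    E[i][i - 1] = {()}
    E[i][i] = {(label,)}
    return E

B = elementary(2, 1, 'b1')
for lab in ('b2', 'b3'):
    B = z2_matmul(B, elementary(2, 1, lab))
Binv = elementary_inv(2, 1, 'b3')
for lab in ('b2', 'b1'):
    Binv = z2_matmul(Binv, elementary_inv(2, 1, lab))
identity = [[{()}, set()], [set(), {()}]]
```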
\subsection{Permutation braids}
As an illustration, we examine the path matrices of permutation
braids, which are positive braids in which every pair
of strands crosses at most once. They are in a one-to-one
correspondence with elements of the symmetric group $S_q$ and they play a crucial role
in Garside's solution \cite{garside} of the word and conjugacy problems in the braid group $B_q$.
It is always possible to represent a braid with a braid word in which the
product $\sigma_i\sigma_{i+1}\sigma_i$ doesn't appear for any $i$. (That
is, all possible triangle moves in which the ``middle strand is pushed
down,'' as in Figure \ref{fig:haromszog} viewed from the right to the
left, have been performed.) Such \emph{reduced braid words} for
permutation braids (up to the relation $\sigma_i\sigma_j=\sigma_j\sigma_i$,
$|i-j|>1$; see Remark \ref{rem:csuszas}) are unique.
\begin{all}
Let $\pi\in S_q$. The path matrix $B_\pi$ associated to its reduced
permutation braid word is obtained from the permutation matrix
$P_\pi$ as follows. Changes are only made to entries that are above
the $1$ in their column and to the left of the $1$ in their row. At
each such position, a single crossing label appears in $B_\pi$.
\end{all}
In particular, the positions that carry different entries in $P_\pi$
and $B_\pi$ are in a one-to-one correspondence with the inversions
of $\pi$.
\begin{proof} Starting at the left endpoint labeled $i$, our first
``intended destination'' (on the right side of the braid) is $\pi(i)$.
Whenever we turn along a path in the braid, the intended destination
becomes a smaller number because the two strands don't meet again. This
shows that entries in $B_\pi$ that are to the right of the $1$ in their
row are $0$. Traversing the braid from right to left, we see that entries
under the $1$ in their column are $0$, too. Either one of the two
arguments shows that the $1$'s of $P_\pi$ are left unchanged in $B_\pi$.
(This part of the proof is valid for any positive braid word representing
a permutation braid; cf.\ Figure \ref{fig:haromszog} and equation
\eqref{eq:haromszog}.)
We claim that any path in the braid contributing to any $B_{i,j}$ can
contain at most one turn. Assume the opposite: then a strand $s$ crosses
under the strand $t_1$ and then over the strand $t_2$, which are different
and which have to cross each other as well. This contradicts our
assumption that the braid word is reduced, for it is easy to argue that
(in a permutation braid) the triangle $s,t_1,t_2$ that we have just found
must contain an elementary triangle as on the right side of Figure
\ref{fig:haromszog}.
So the
paths we have not yet enumerated are those with exactly one turn.
Because strands cross at most once, these contribute to different
matrix entries. Finally, if $(i,j)$ is a position as described in
the Proposition, then $\pi(i)>j$ and $\pi^{-1}(j)>i$. This means
that the strand starting at $i$ has to meet the strand ending at
$j$, so that the label of that crossing becomes $B_{i,j}$.
\end{proof}
\begin{figure}
\centering
\includegraphics[width=.9\linewidth]{permbraids.eps}
\caption{Permutation braids of $(14)$ and of $(14)(23)$
(the latter also known as the Garside braid $\Delta_4$).}
\label{fig:permbraids}
\end{figure}
\begin{pelda}
The transposition $(14)$ of $S_4$ is represented by the reduced braid word shown
in Figure \ref{fig:permbraids}. It contains $5$ inversions, corresponding to the
$5$ crossings of the braid. Its path matrix is
$B_{(14)}=\begin{bmatrix}b_3&b_4&b_5&1\\b_2&1&0&0\\b_1&0&1&0\\1&0&0&0
\end{bmatrix}$.
The path matrix of the Garside braid (half-twist) $\Delta_4$, also
shown in Figure \ref{fig:permbraids}, is
$\begin{bmatrix}b_3&b_5&b_6&1\\b_2&b_4&1&0\\b_1&1&0&0\\1&0&0&0\end{bmatrix}$.
\end{pelda}
The latter pattern obviously generalizes to $\Delta_n$ for any $n$.
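Both matrices can be recovered by multiplying elementary path matrices. The braid words used in the Python sketch below, $\sigma_3\sigma_2\sigma_1\sigma_2\sigma_3$ for $(14)$ and $\sigma_3\sigma_2\sigma_1\sigma_3\sigma_2\sigma_3$ for $\Delta_4$, are an assumption of ours: each is a reduced word which, with crossings labeled from left to right, reproduces the printed matrix, but it is not necessarily the exact word drawn in Figure \ref{fig:permbraids}.

```python
def z2_mul(X, Y):
    # product of two Z_2-sums of non-commutative monomials
    out = set()
    for u in X:
        for v in Y:
            out ^= {u + v}
    return out

def z2_matmul(A, B):
    C = [[set() for _ in B[0]] for _ in A]
    for i in range(len(A)):
        for j in range(len(B[0])):
            for k in range(len(B)):
                C[i][j] ^= z2_mul(A[i][k], B[k][j])
    return C

def elementary(q, i, label):
    # the path matrix of sigma_i: block [[label, 1], [1, 0]] at rows i, i+1
    E = [[{()} if r == c else set() for c in range(q)] for r in range(q)]
    E[i - 1][i - 1] = {(label,)}
    E[i - 1][i] = {()}
    E[i][i - 1] = {()}
    E[i][i] = set()
    return E

def path_matrix(q, word):
    B = [[{()} if r == c else set() for c in range(q)] for r in range(q)]
    for k, i in enumerate(word, 1):
        B = z2_matmul(B, elementary(q, i, 'b%d' % k))
    return B

# reduced words consistent with the printed matrices
# (crossings labeled b1, b2, ... from left to right)
B14 = path_matrix(4, [3, 2, 1, 2, 3])        # permutation braid of (14)
Bdelta = path_matrix(4, [3, 2, 1, 3, 2, 3])  # Garside braid Delta_4
```

The number of single-label entries, $5$ and $6$ respectively, matches the number of inversions of the underlying permutations, as the Proposition predicts.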
\subsection{Row reduction}
There is yet another way to factorize the path matrix. Let
$\tau_i\in S_q$ denote the underlying permutation (transposition) of
the elementary braid $\sigma_i\in B_q$.
\begin{lemma}\label{lem:atmegy}
Let $\lambda\in S_q$ be an arbitrary permutation. Then for all $i$,
\begin{equation}\label{eq:atmegy}\left[\begin{array}{c}
\text{matrix}\\
\text{of }\lambda
\end{array}\right]\cdot
\left[\begin{array}{rccl}
I_{i-1}&&&\\
&b&1&\\
&1&0&\\
&&&I_{q-i-1}
\end{array}\right]=
\left[\begin{array}{rccl}1&&&\\&1&&\\&b&\ddots&\\&&&1\end{array}\right]\cdot
\left[\begin{array}{c}
\text{matrix}\\
\text{of }\tau_i\circ\lambda\end{array}\right],\end{equation}
where in the first
term of the right hand side, the single non-zero off-diagonal entry
$b$ appears in the position $\lambda^{-1}(i),\lambda^{-1}(i+1)$.
\end{lemma}
\begin{figure}
\centering
\includegraphics[width=4in]{bizony.eps}
\caption{Braids so decorated that they have the same path matrix}
\label{fig:bizony}
\end{figure}
\begin{proof}
The essence of the proof is in Figure
\ref{fig:bizony}. It will be crucial that the path matrix depends on how
the braid is decorated with labels. On the other hand, for the purposes of
the argument, over- and
undercrossing information in the braids is irrelevant. In fact, although
we will not change our terminology, we will actually think of them (in
particular, when we take an inverse) as words written in the generators
$\tau_1,\ldots,\tau_{q-1}$ of $S_q$.
Take the permutation braid for $\lambda$ (or choose any other positive braid word
with this underlying permutation) and label its crossings with zeros. (In
Figure \ref{fig:bizony} we used $\lambda=(1342)\in S_4$ as an example.)
Add a single generator $\sigma_i$, with its crossing labeled $b$ to it
(Figure \ref{fig:bizony} shows $i=2$). The left hand side of
\eqref{eq:atmegy} is the path matrix of this braid $\beta$.
Next, choose any positive braid word $\mu$ in which the strands with right
endpoints $\lambda^{-1}(i)$, $\lambda^{-1}(i+1)$ cross (say exactly once)
and form the product $\mu^{-1}*\mu*\beta$. Label the crossings of
$\mu^{-1}$ and $\mu$ with zeros, as in the middle of Figure
\ref{fig:bizony}. This way, the path matrix does not change.
Now, it does not matter for the path matrix where exactly the single
non-zero label $b$ appears in the braid as long as that crossing
establishes a path between the same two endpoints. In other words, we may
move the label from the first (from the right) to the third, fifth etc.\
crossing of the same two strands. By construction, one of those crossings
is either in $\mu$ (if $\lambda^{-1}(i)>\lambda^{-1}(i+1)$, as is the case
in Figure \ref{fig:bizony}) or in $\mu^{-1}$, and we move the label there
(bottom of Figure \ref{fig:bizony}). When we read off the path matrix from
this form, we obtain the right hand side of \eqref{eq:atmegy}: The path
matrix of $\mu^{-1}*\mu$ is $I_q$ except for the single $b$ that
establishes a path from $\lambda^{-1}(i)$ to $\lambda^{-1}(i+1)$, and the
path matrix of $\beta$, now labeled with only zeros, is
$P_{\tau_i\circ\lambda}$.
\end{proof}
Next, for the positive braid word
$\beta=\sigma_{i_1}\sigma_{i_2}\cdots\sigma_{i_w}$ with crossings labeled
$b_1,b_2,\ldots,b_w$, we'll introduce a sequence of elementary matrices.
The underlying permutation is $\pi=\tau_{i_w}\ldots\tau_{i_2}\tau_{i_1}$.
Let us denote the ``permutation up to the $k$'th crossing'' by
$\pi_k=\tau_{i_k}\ldots\tau_{i_1}$, so that $\pi_0=\id$ and
$\pi_w=\pi$. Let $A_k$ be the $q\times q$ identity matrix with a single
non-zero off-diagonal entry of $b_k$ added in the position
$\pi_{k-1}^{-1}(i_k),\pi_{k-1}^{-1}(i_k+1)$. Note that because we work over
$\ensuremath{\mathbf Z}_2$, $A_k^2=I_q$ for all $k$.
\begin{all}
For the positive braid word
$\beta=\sigma_{i_1}\sigma_{i_2}\cdots\sigma_{i_w}$ with underlying
permutation $\pi$, we have $B_\beta=A_1A_2\ldots A_wP_\pi$, where
$P_\pi$ is the permutation matrix.
\end{all}
\begin{proof} If $\lambda=\pi_{k-1}$, $i=i_k$, and $b=b_k$, then equation
\eqref{eq:atmegy} reads $P_{\pi_{k-1}} B_{\sigma_{i_k}} = A_k
P_{\pi_k}$. Starting from $B_\beta=(P_{\pi_0}
B_{\sigma_{i_1}})B_{\sigma_{i_2}}\ldots B_{\sigma_{i_w}}$, we apply Lemma
\ref{lem:atmegy} $w$ times.
\end{proof}
Read in another way, this result shows that $B_\beta$ reduces to $P_\pi$ by
applying a particular sequence of elementary row operations:
$A_w\ldots A_1B_\beta=P_\pi$. This works in the non-commutative sense.
\begin{pelda} For the braid $\beta$ of Figure \ref{fig:111}, we have
\[B_\beta=\begin{bmatrix}1&b_1\\0&1\end{bmatrix}
\begin{bmatrix}1&0\\b_2&1\end{bmatrix}
\begin{bmatrix}1&b_3\\0&1\end{bmatrix}
\begin{bmatrix}0&1\\1&0\end{bmatrix},\]
that is
\[\begin{bmatrix}1&b_3\\0&1\end{bmatrix}
\begin{bmatrix}1&0\\b_2&1\end{bmatrix}
\begin{bmatrix}1&b_1\\0&1\end{bmatrix}
\begin{bmatrix}b_1+b_3+b_1b_2b_3&1+b_1b_2\\1+b_2b_3&b_2\end{bmatrix}
=\begin{bmatrix}0&1\\1&0\end{bmatrix}.\]
\end{pelda}
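The Proposition, and the example above, can be verified mechanically. In the Python sketch below (the function names and the set-of-label-tuples encoding are ours), the matrices $A_k$ are built from the permutations $\pi_{k-1}$, and the identity $A_w\cdots A_1B_\beta=P_\pi$ is checked over $\ensuremath{\mathbf Z}_2$.

```python
def z2_mul(X, Y):
    # product of two Z_2-sums of non-commutative monomials
    out = set()
    for u in X:
        for v in Y:
            out ^= {u + v}
    return out

def z2_matmul(A, B):
    C = [[set() for _ in B[0]] for _ in A]
    for i in range(len(A)):
        for j in range(len(B[0])):
            for k in range(len(B)):
                C[i][j] ^= z2_mul(A[i][k], B[k][j])
    return C

def elementary(q, i, label):
    # the path matrix of sigma_i: block [[label, 1], [1, 0]] at rows i, i+1
    E = [[{()} if r == c else set() for c in range(q)] for r in range(q)]
    E[i - 1][i - 1] = {(label,)}
    E[i - 1][i] = {()}
    E[i][i - 1] = {()}
    E[i][i] = set()
    return E

def path_matrix(q, word):
    B = [[{()} if r == c else set() for c in range(q)] for r in range(q)]
    for k, i in enumerate(word, 1):
        B = z2_matmul(B, elementary(q, i, 'b%d' % k))
    return B

def A_matrices(q, word):
    # A_k: identity plus b_k at (pi_{k-1}^{-1}(i_k), pi_{k-1}^{-1}(i_k + 1))
    perm = list(range(1, q + 1))              # perm[x-1] = pi_{k-1}(x)
    mats = []
    for k, i in enumerate(word, 1):
        inv = {perm[x]: x + 1 for x in range(q)}   # pi_{k-1}^{-1}
        A = [[{()} if r == c else set() for c in range(q)] for r in range(q)]
        A[inv[i] - 1][inv[i + 1] - 1] = {('b%d' % k,)}
        mats.append(A)
        # pi_k = tau_{i_k} o pi_{k-1}: swap the values i and i+1
        perm = [i + 1 if v == i else (i if v == i + 1 else v) for v in perm]
    return mats, perm

def row_reduces(q, word):
    mats, perm = A_matrices(q, word)
    M = path_matrix(q, word)
    for A in mats:                            # apply A_1, then A_2, ..., A_w
        M = z2_matmul(A, M)
    P = [[set() for _ in range(q)] for _ in range(q)]
    for x in range(q):
        P[x][perm[x] - 1] = {()}
    return M == P
```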
\section{Algebraic results}
In this section, we treat (re-define, if you like) the symbols
$B_{i,j}$ as independent variables. Instead of $\ensuremath{\mathbf Z}_2$--coefficients,
we will work in the free non-commutative unital ring generated by
these symbols (where $1\le i,j\le q$) \emph{over $\ensuremath{\mathbf Z}$}.
After the first set
of statements, we will abelianize so that we can consider
determinants.
Note that the $C_{i,j}$ (equation \eqref{eq:Cij}) are polynomials in the
$B_{i,j}$. To state our results, we will need a similar family of
polynomials whose definition is based on the notion of admissible sequence
(Definition \ref{def:sorozat}).
\begin{Def} For any $1\le i,j\le q$, let
\begin{equation*}
M_{i,j}=\sum_{\{\,i_1,\ldots,i_c\,\}\in D_{\min\{i,j\}}}
B_{i,i_1}B_{i_1,i_2}B_{i_2,i_3}\ldots B_{i_{c-1},i_c}B_{i_c,j}.
\end{equation*}
\end{Def}
Note that $M_{1,j}=C_{1,j}=B_{1,j}$, $M_{i,1}=B_{i,1}$, $M_{n,n}=C_{n,n}$, and
$M_{i-1,i}=C_{i-1,i}$,
whenever these expressions are defined.
\begin{lemma}
\begin{multline*}\left[\begin{array}{ccccc}
1&C_{1,2}&C_{1,3}&\cdots&C_{1,q}\\
&1&C_{2,3}&\cdots&C_{2,q}\\
&&1&\cdots&C_{3,q}\\
&&&\ddots&\vdots\\
&&&&1
\end{array}\right]\cdot
\left[\begin{array}{ccccc}
1&-M_{1,2}&-M_{1,3}&\cdots&-M_{1,q}\\
&1&-M_{2,3}&\cdots&-M_{2,q}\\
&&1&\cdots&-M_{3,q}\\
&&&\ddots&\vdots\\
&&&&1
\end{array}\right]\\
= \left[\begin{array}{ccccc}
1&-M_{1,2}&-M_{1,3}&\cdots&-M_{1,q}\\
&1&-M_{2,3}&\cdots&-M_{2,q}\\
&&1&\cdots&-M_{3,q}\\
&&&\ddots&\vdots\\
&&&&1
\end{array}\right]\cdot
\left[\begin{array}{ccccc}
1&C_{1,2}&C_{1,3}&\cdots&C_{1,q}\\
&1&C_{2,3}&\cdots&C_{2,q}\\
&&1&\cdots&C_{3,q}\\
&&&\ddots&\vdots\\
&&&&1
\end{array}\right]
=I_q,\end{multline*} and a similar statement can be formulated for lower
triangular matrices.
\end{lemma}
Note that the two claims don't imply each other because we work
over a non-commutative ring.
\begin{proof}
We need that for all $1\le i<j\le q$,
\[-M_{i,j}-C_{i,i+1}M_{i+1,j}-C_{i,i+2}M_{i+2,j}-\ldots
-C_{i,j-1}M_{j-1,j}+C_{i,j}=0\]
and that
\[C_{i,j}-M_{i,i+1}C_{i+1,j}-M_{i,i+2}C_{i+2,j}-\ldots
-M_{i,j-1}C_{j-1,j}-M_{i,j}=0.\]
We may view both of these equalities as identities for $C_{i,j}$.
The first one groups the terms of $C_{i,j}$ according to the highest
element of the admissible sequence. The second groups them according
to the first element which is greater than $i$. The lower triangular
version is analogous.
\end{proof}
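Identities of this kind are easy to test by computer over the free non-commutative ring. In the Python sketch below (the encoding of a $\ensuremath{\mathbf Z}$--linear combination of words as a dictionary, and the function names, are ours), the two triangular products are checked for $q=4$.

```python
from itertools import product
from collections import defaultdict

def is_admissible(seq):
    return not any(seq[a] == seq[b] and max(seq[a:b + 1]) == seq[a]
                   for a in range(len(seq)) for b in range(a + 1, len(seq)))

def D(n):
    # admissible sequences composed of 1, ..., n-1 (maximal length 2**(n-1) - 1)
    m = n - 1
    out = [()]
    for length in range(1, 2 ** m if m > 0 else 1):
        out += [s for s in product(range(1, m + 1), repeat=length)
                if is_admissible(s)]
    return out

def chain(nodes):
    # the word (non-commutative monomial) B_{n0,n1} B_{n1,n2} ...
    return tuple('B%d%d' % (nodes[t], nodes[t + 1])
                 for t in range(len(nodes) - 1))

def M(i, j):
    # M_{i,j} as a Z-linear combination of words: {word: coefficient}
    p = defaultdict(int)
    for s in D(min(i, j)):
        p[chain((i,) + s + (j,))] += 1
    return dict(p)

def C(i, j):
    # C_{i,j} for i < j: admissible sequences in D_j starting with i
    p = defaultdict(int)
    for s in D(j):
        if s and s[0] == i:
            p[chain(s + (j,))] += 1
    return dict(p)

def matmul(A, B):
    out = [[defaultdict(int) for _ in B[0]] for _ in A]
    for i in range(len(A)):
        for j in range(len(B[0])):
            for k in range(len(B)):
                for u, cu in A[i][k].items():
                    for v, cv in B[k][j].items():
                        out[i][j][u + v] += cu * cv   # order-preserving product
    return [[{w: c for w, c in e.items() if c} for e in row] for row in out]

q = 4
one, zero = {(): 1}, {}
UC = [[one if i == j else (C(i + 1, j + 1) if i < j else zero)
       for j in range(q)] for i in range(q)]
UM = [[one if i == j else ({w: -c for w, c in M(i + 1, j + 1).items()}
                           if i < j else zero)
       for j in range(q)] for i in range(q)]
I_q = [[one if i == j else zero for j in range(q)] for i in range(q)]
```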
\begin{lemma}\label{lem:MMB}
For all $1\le n\le q$,
\begin{multline}\label{eq:MMB}
\left[\begin{array}{ccccc}
-1&&&&\\
M_{2,1}&-1&&&\\
M_{3,1}&M_{3,2}&-1&&\\
\vdots&\vdots&\vdots&\ddots&\\
M_{n,1}&M_{n,2}&M_{n,3}&\cdots&-1
\end{array}\right]\cdot
\left[\begin{array}{ccccc}
1&-M_{1,2}&-M_{1,3}&\cdots&-M_{1,n}\\
&1&-M_{2,3}&\cdots&-M_{2,n}\\
&&1&\cdots&-M_{3,n}\\
&&&\ddots&\vdots\\
&&&&1
\end{array}\right]\\
=\left[\begin{array}{cccccc}
-1&B_{1,2}&\cdots&B_{1,i}&\cdots&B_{1,n}\\
B_{2,1}&-1-B_{2,1}B_{1,2}&\cdots&B_{2,i}&\cdots&B_{2,n}\\
\vdots&\vdots&\ddots&\vdots&&\vdots\\
B_{i,1}&B_{i,2}&\cdots&B_{i,i}-C_{i,i}-1&\cdots&B_{i,n}\\
\vdots&\vdots&&\vdots&\ddots&\vdots\\
B_{n,1}&B_{n,2}&\cdots&B_{n,i}&\cdots&B_{n,n}-C_{n,n}-1
\end{array}\right].
\end{multline}
\end{lemma}
\begin{proof}
For entries above the diagonal ($i<j$), the claim is that
\[B_{i,j}=-M_{i,1}M_{1,j}-M_{i,2}M_{2,j}-\ldots-M_{i,i-1}M_{i-1,j}+M_{i,j}.\]
Viewing this as an identity for $M_{i,j}$, we see that it holds
because terms are grouped with respect to the highest element in the
admissible sequence. The reasoning is the same for positions below
the diagonal. For the diagonal entries, we need to show that
\[B_{i,i}-C_{i,i}-1=-M_{i,1}M_{1,i}-M_{i,2}M_{2,i}-\ldots
-M_{i,i-1}M_{i-1,i}-1.\]
Isolating $C_{i,i}$ this time, we again see a separation of its
terms according to the highest element of the admissible sequence.
\end{proof}
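This lemma, too, can be confirmed mechanically over the free non-commutative ring; below we check equation \eqref{eq:MMB} for $n\le 4$, with the same dictionary encoding of $\ensuremath{\mathbf Z}$--linear combinations of words as before (the encoding and function names are ours, and we use that $C_{i,i}=M_{i,i}$).

```python
from itertools import product
from collections import defaultdict

def is_admissible(seq):
    return not any(seq[a] == seq[b] and max(seq[a:b + 1]) == seq[a]
                   for a in range(len(seq)) for b in range(a + 1, len(seq)))

def D(n):
    m = n - 1
    out = [()]
    for length in range(1, 2 ** m if m > 0 else 1):
        out += [s for s in product(range(1, m + 1), repeat=length)
                if is_admissible(s)]
    return out

def chain(nodes):
    return tuple('B%d%d' % (nodes[t], nodes[t + 1])
                 for t in range(len(nodes) - 1))

def M(i, j):
    p = defaultdict(int)
    for s in D(min(i, j)):
        p[chain((i,) + s + (j,))] += 1
    return dict(p)

def matmul(A, B):
    out = [[defaultdict(int) for _ in B[0]] for _ in A]
    for i in range(len(A)):
        for j in range(len(B[0])):
            for k in range(len(B)):
                for u, cu in A[i][k].items():
                    for v, cv in B[k][j].items():
                        out[i][j][u + v] += cu * cv
    return [[{w: c for w, c in e.items() if c} for e in row] for row in out]

def check_MMB(n):
    one, zero = {(): 1}, {}
    L = [[{(): -1} if i == j else (M(i + 1, j + 1) if i > j else zero)
          for j in range(n)] for i in range(n)]
    U = [[one if i == j else ({w: -c for w, c in M(i + 1, j + 1).items()}
                              if i < j else zero)
          for j in range(n)] for i in range(n)]
    rhs = []
    for i in range(1, n + 1):
        row = []
        for j in range(1, n + 1):
            if i != j:
                row.append({('B%d%d' % (i, j),): 1})
            else:
                # B_{i,i} - C_{i,i} - 1, using C_{i,i} = M_{i,i}
                d = defaultdict(int, {('B%d%d' % (i, i),): 1, (): -1})
                for w, c in M(i, i).items():
                    d[w] -= c
                row.append({w: c for w, c in d.items() if c})
        rhs.append(row)
    return matmul(L, U) == rhs
```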
For the rest of the section, we will work in the \emph{commutative}
polynomial ring generated over $\ensuremath{\mathbf Z}$ by the $B_{i,j}$, so that we can
talk about determinants.
\begin{tetel}\label{thm:ideal}
The ideal $I'$ generated by the polynomials
\[1+C_{1,1},\quad 1+C_{2,2},\quad\ldots,\quad 1+C_{q,q}\]
agrees with the ideal $I$
generated by the polynomials
\[L_1=B_{1,1}+1,\enskip L_2=\begin{vmatrix}B_{1,1}&B_{1,2}\\B_{2,1}&B_{2,2}
\end{vmatrix}-1,\enskip \ldots,\enskip
L_q=\begin{vmatrix}B_{1,1}&\cdots&B_{1,q}\\
\vdots&\ddots&\vdots\\
B_{q,1}&\cdots&B_{q,q}\end{vmatrix}-(-1)^q.\]
\end{tetel}
\begin{proof}
Let $n\le q$ and take determinants of both sides of equation
\eqref{eq:MMB}: $(-1)^n$, on the left hand side, agrees with
$\begin{vmatrix}B_{1,1}&\cdots&B_{1,n}\\
\vdots&\ddots&\vdots\\
B_{n,1}&\cdots&B_{n,n}\end{vmatrix}$ plus an element of $I'$ on the
right hand side. Thus, $L_n\in I'$ for all $n$.
The proof of the other containment relation is also based on
equation \eqref{eq:MMB} and goes by induction on $n$. Note that
$1+C_{1,1}=L_1$ and assume that
$1+C_{1,1},1+C_{2,2},\ldots,1+C_{n-1,n-1}$ are all in $I$ (actually,
they are in the ideal generated by $L_1,L_2,\ldots,L_{n-1}$).
Re-writing the determinant of the matrix on the right hand side of
\eqref{eq:MMB}, we find that
\begin{multline*}\hspace{-8pt}(-1)^n=\left|\begin{array}{cccccc}
-1&B_{1,2}&\cdots&B_{1,n-1}&B_{1,n}\\
B_{2,1}&-1-B_{2,1}B_{1,2}&\cdots&B_{2,n-1}&B_{2,n}\\
\vdots&\vdots&\ddots&\vdots&\vdots\\
B_{n-1,1}&B_{n-1,2}&\cdots&B_{n-1,n-1}-C_{n-1,n-1}-1&B_{n-1,n}\\
B_{n,1}&B_{n,2}&\cdots&B_{n,n-1}&B_{n,n}
\end{array}\right|\\
-\left|\begin{array}{cccccc}
-1&B_{1,2}&\cdots&B_{1,n-1}&B_{1,n}\\
B_{2,1}&-1-B_{2,1}B_{1,2}&\cdots&B_{2,n-1}&B_{2,n}\\
\vdots&\vdots&\ddots&\vdots&\vdots\\
B_{n-1,1}&B_{n-1,2}&\cdots&B_{n-1,n-1}-C_{n-1,n-1}-1&B_{n-1,n}\\
0&0&\cdots&0&1+C_{n,n}
\end{array}\right|.
\end{multline*}
Notice that the second determinant is $(-1)^{n-1}(1+C_{n,n})$ (by
Lemma \ref{lem:MMB}), while the first is
$\begin{vmatrix}B_{1,1}&\cdots&B_{1,n}\\
\vdots&\ddots&\vdots\\
B_{n,1}&\cdots&B_{n,n}\end{vmatrix}$ plus an element of $I'$, but
the latter, by the inductive hypothesis, is also in $I$. Isolating
$1+C_{n,n}$, we are done.
\end{proof}
So we see that the ideal $I$, defined in terms of the upper left
corner subdeterminants of the generic matrix $[B_{i,j}]$, is also
generated by the polynomials $1+C_{n,n}$, which arise from contact
homology (counting holomorphic discs).
In fact much more is true: the $1+C_{n,n}$ form the reduced
Gr\"obner basis for $I$. Of course this can only be true for certain
term orders that we'll describe now.
In the (commutative) polynomial ring $\ensuremath{\mathbf Z}[B_{i,j}]$, take any order $\prec$
of the indeterminates where any diagonal entry $B_{i,i}$ is larger than
any off-diagonal one. Extend this order to the monomials
lexicographically. (But not degree lexicographically! For example,
$B_{2,2}\succ B_{2,1}B_{1,2}$.) This is a multiplicative term order.
\begin{tetel}\label{thm:grob} The polynomials $1+C_{n,n}$, $n=1,\ldots,q$ (defined in
equation \eqref{eq:Cii}), form the reduced Gr\"obner basis for the ideal
\[I=\left\langle
B_{1,1}+1,\quad\begin{vmatrix}B_{1,1}&B_{1,2}\\B_{2,1}&B_{2,2}\end{vmatrix}-1,
\quad\ldots,\quad \begin{vmatrix}B_{1,1}&\cdots&B_{1,q}\\
\vdots&\ddots&\vdots\\
B_{q,1}&\cdots&B_{q,q}\end{vmatrix}-(-1)^q\right\rangle\] under any of the
term orders $\prec$ described above. \end{tetel}
\begin{proof}
This is obvious from the definitions (see for example \cite{bernd}),
after noting that the initial term of $1+C_{n,n}$ is $B_{n,n}$ and
that by the definition of an admissible sequence, no other term in
$1+C_{n,n}$ contains any $B_{i,i}$. (The initial ideal of $I$ is that
generated by the $B_{n,n}$.)
\end{proof}
\section{Augmentations}
\begin{Def}\label{def:aug} Let $\gamma$ be a Lagrangian diagram of a
Legendrian link $L$. If $L$ has more than one components, we assume that
an admissible grading of the DGA of $\gamma$ has been chosen, too. An
\emph{augmentation} is a subset $X$ of the crossings (the \emph{augmented
crossings}) of $\gamma$ with the following properties.
\begin{itemize}
\item The index of each element of $X$ is $0$.
\item For each generator
$a$ of index $1$, the number of admissible discs with positive corner $a$
and all negative corners in $X$ is even.
\end{itemize}
\end{Def}
Here, an admissible disc is the central object of Chekanov--Eliashberg
theory: These discs determine the differential $\partial$ of the DGA
$\mathscr A$, and thus contact homology $H(L)$. Unlike most of the
literature, we expand the notion of augmentation here (in the
multi-component case) by allowing `mixed' crossings between different
components to be augmented, as long as they have index $0$ in the one
grading we have chosen. Such sets of crossings would typically not be
augmentations for other admissible gradings because it's exactly the index
of a mixed crossing that is ambiguous. Our motivation is that
$\gamma_\beta$, even if it is of multiple components, has the natural
admissible grading introduced in section \ref{sec:prelim}.
The evaluation homomorphism (which is defined on the link DGA, and which
is also called an augmentation) $\varepsilon_X\colon\mathscr A\to\ensuremath{\mathbf Z}_2$
that sends elements of $X$ to $1$ and other generators to $0$, gives rise
to an algebra homomorphism $(\varepsilon_X)_*\colon H(L)\to\ensuremath{\mathbf Z}_2$. In fact,
the second requirement of Definition \ref{def:aug} is just an elementary
way of saying that $\varepsilon_X$ vanishes on
$\partial(a)$ for each generator $a$ of index $1$, while for other indices
this is already automatic by the first point and the fact that $\partial$
lowers the index by $1$.
\begin{megj} As a preview of a forthcoming paper, let us mention that
augmentations do define a Legendrian isotopy invariant in the following
sense: the set of all induced maps $(\varepsilon_X)_*\colon H(L)\to\ensuremath{\mathbf Z}_2$
depends only on $L$. (The correspondence between augmentations of
different diagrams of $L$ is established using pull-backs by the
isomorphisms constructed in Chekanov's proof of the invariance of $H(L)$.)
The number of augmentations in the sense of Definition \ref{def:aug} may however
change by a factor of $2$ when a Reidemeister II move or its inverse,
involving crossings of index $0$ and $-1$, is performed. \end{megj}
In practice, finding an augmentation means solving a system of polynomial equations
(one equation provided by each index $1$ crossing) over $\ensuremath{\mathbf Z}_2$. In this sense,
augmentations form a variety. In this section we prove a few statements about the
variety associated to $\gamma_\beta$.
The main result is the following
theorem, which allows for an enumeration of all
augmentations of $\gamma_\beta$. The author is greatly indebted to Supap Kirtsaeng,
who wrote a computer program based on this criterion. It may at first seem inefficient
to check all subsets of the crossings of $\beta$, but it turns out that a significant
portion of them are augmentations (see section \ref{sec:ex}).
Let $Y$ be a subset of the crossings of $\beta$. Let $\varepsilon_Y\colon\mathscr A\to\ensuremath{\mathbf Z}_2$
be the evaluation homomorphism that sends elements of $Y$ to $1$ and other generators to $0$.
In particular, we may talk of the $0$-$1$--matrix $\varepsilon_Y(B_\beta)$. (This could also
have been denoted by $B_\beta(\chi_Y)$, where the $0$-$1$--sequence $\chi_Y$ is the
characteristic function of $Y$.)
\begin{tetel}\label{thm:gauss}
Let $Y$ be a subset of the crossings of the positive braid word $\beta$. $Y$ is an
augmentation of $\gamma_\beta$ if and only if the
$0$-$1$--matrix $\varepsilon_Y(B_\beta)$ is such that every upper left
corner square submatrix of it has determinant $1$.
\end{tetel}
It is then a classical theorem of linear algebra that the condition
on $\varepsilon_Y(B_\beta)$ is equivalent to the requirement that it
possess an $LU$--decomposition and also to the requirement that
Gaussian elimination can be completed on it without permuting rows.
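For the reader who wants to experiment, the criterion of Theorem \ref{thm:gauss} is easy to mechanize. The following minimal Python sketch (function name hypothetical; this is not the program mentioned above) tests whether every upper left corner square submatrix of a $0$-$1$--matrix has determinant $1$ over $\ensuremath{\mathbf Z}_2$, by running Gaussian elimination mod $2$ and checking that no row permutation is ever needed.

```python
def is_augmentation_matrix(M):
    """Test whether every upper left corner square submatrix of the
    0-1 matrix M has determinant 1 over Z_2.  Equivalently, Gaussian
    elimination mod 2 must run to completion without permuting rows."""
    n = len(M)
    A = [[entry % 2 for entry in row] for row in M]
    for k in range(n):
        if A[k][k] == 0:
            # the (k+1) x (k+1) corner minor vanishes mod 2
            return False
        for i in range(k + 1, n):
            if A[i][k]:
                # subtract (= add, over Z_2) the pivot row
                A[i] = [a ^ b for a, b in zip(A[i], A[k])]
    return True
```

For instance, the identity matrix passes, while any matrix with a $0$ in the upper left corner fails immediately.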
\begin{proof} In our admissible grading, each crossing of $\beta$ has
index $0$. Therefore $Y$ is an augmentation if and only if $\varepsilon_Y$
vanishes on $\partial(a)$ for each index $1$ DGA generator $a$. This in
turn is clearly equivalent to saying that $\varepsilon_Y$ vanishes on the
two-sided ideal generated by these polynomials. In fact because
$\varepsilon_Y$ maps to a commutative ring ($\ensuremath{\mathbf Z}_2$), we may abelianize
$\mathscr A$ and say that the condition for $Y$ to be an augmentation is
that $\varepsilon_Y$ vanishes on the ideal generated by the expressions
$\partial(a_1),\ldots,\partial(a_q)$, which are now viewed as honest
polynomials in the commuting indeterminates $b_1,\ldots,b_w$.
In \cite{en}, section 6, we computed these polynomials and found that they
really were polynomials of the polynomials $B_{i,j}$, as stated in
equation \eqref{eq:perem}. Now by (the modulo $2$ reduction of) Theorem
\ref{thm:ideal}, the ideal generated by the $\partial(a_n)$ is also
generated by the polynomials
\[B_{1,1}+1,\quad\begin{vmatrix}B_{1,1}&B_{1,2}\\B_{2,1}&B_{2,2}\end{vmatrix}
+1,\quad\ldots,\quad
\begin{vmatrix}B_{1,1}&\cdots&B_{1,q}\\ \vdots&\ddots&\vdots\\
B_{q,1}&\cdots&B_{q,q}\end{vmatrix}+1,\] which implies the Theorem
directly.
\end{proof}
\begin{megj}\label{rem:utso} Notice that for a path matrix $B_\beta$, a
quick look at \eqref{eq:elemi} with formula \eqref{eq:multip} implies that
we always have $\det(B_\beta)=1$. Therefore the condition on the $q\times
q$ subdeterminant is vacuous: if a subset of the crossings of $\beta$
``works as an augmentation'' for $a_1,\ldots,a_{q-1}$, then it
automatically works for $a_q$ as well. \end{megj}
Let us give a geometric explanation of the appearance of
$LU$--de\-com\-po\-si\-tions. Figure \ref{fig:plat} shows another Lagrangian
diagram of $L_\beta$ that is obtained from the front diagram $f_\beta$ by
pushing all the right cusps to the extreme right and then applying resolution.
This has the advantage that all admissible discs are embedded. Label the
$q(q-1)$ new crossings as in Figure \ref{fig:plat}. Our preferred grading is
extended to the new crossings by assigning $0$ to the $c_{i,j}$ and $1$ to the
$s_{i,j}$. This implies $\partial(c_{i,j})=0$, while the index $1$ generators
are mapped as follows: \[\partial(a_n) = 1 + c_{n,1}B_{1,n} +\ldots
+c_{n,n-1}B_{n-1,n} + B_{n,n}\] and \[\partial(s_{i,j}) = c_{i,1}B_{1,j}
+\ldots+ c_{i,i-1}B_{i-1,j} + B_{i,j}.\] Setting the latter $q+{q\choose 2}$
expressions equal to $0$ is equivalent to saying that the matrix product
\[\left[\begin{array}{ccccc}
1&0&0&\cdots&0\\
c_{2,1}&1&0&\cdots&0\\
c_{3,1}&c_{3,2}&1&\cdots&0\\
\vdots&\vdots&\vdots&\ddots&\vdots\\
c_{q,1}&c_{q,2}&c_{q,3}&\cdots&1
\end{array}\right]
\left[\begin{array}{ccccc}
B_{1,1}&B_{1,2}&B_{1,3}&\cdots&B_{1,q}\\
B_{2,1}&B_{2,2}&B_{2,3}&\cdots&B_{2,q}\\
B_{3,1}&B_{3,2}&B_{3,3}&\cdots&B_{3,q}\\
\vdots&\vdots&\vdots&\ddots&\vdots\\
B_{q,1}&B_{q,2}&B_{q,3}&\cdots&B_{q,q}
\end{array}\right]\]
is unit upper triangular. Thus an augmentation evaluates $B_\beta$ to an
$LU$--decomposable $0$-$1$--matrix and the converse is not hard to prove either.
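The forward substitution implicit in the equations $\partial(s_{i,j})=0$ can also be sketched in a few lines of Python (function name hypothetical): it computes, over $\ensuremath{\mathbf Z}_2$, the unit lower triangular matrix of $c_{i,j}$'s when it exists, by applying to the identity the same row operations that eliminate $B_\beta$.

```python
def left_factor_mod2(B):
    """Over Z_2, find the unit lower triangular 0-1 matrix C with
    C B unit upper triangular (the c_{i,j} of the text), or return
    None when no such factorization exists."""
    q = len(B)
    A = [[entry % 2 for entry in row] for row in B]
    C = [[int(i == j) for j in range(q)] for i in range(q)]
    for k in range(q):
        if A[k][k] == 0:
            return None           # pivot vanishes: no LU decomposition
        for i in range(k + 1, q):
            if A[i][k]:
                # record the row operation in C while eliminating in A
                A[i] = [a ^ b for a, b in zip(A[i], A[k])]
                C[i] = [a ^ b for a, b in zip(C[i], C[k])]
    return C
```

When the input is $\varepsilon_Y(B_\beta)$ for an augmentation $Y$, the returned matrix is exactly the left factor displayed above.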
\begin{figure}
\centering
\includegraphics[width=4in]{plat.eps}
\caption{Another Lagrangian diagram of $L_\beta$}
\label{fig:plat}
\end{figure}
\section{Rulings}\label{sec:rul}
\begin{Def}\label{def:rul} An \emph{ungraded ruling} is a partial splicing
of a front diagram where certain crossings, called \emph{switches}, are
replaced by a pair of arcs as in Figure \ref{fig:switch} so that the
diagram becomes a (not necessarily disjoint) union of standard unknot diagrams, called \emph{eyes}.
(An eye is a pair of arcs connecting the same two cusps that contain no
other cusps and that otherwise do not meet, not even at switches.) It is
assumed that in the vertical ($x=\text{const.}$) slice of the diagram
through each switch, the two eyes that meet at the switch follow one of
the three configurations in the middle of Figure \ref{fig:switch}.
Let us denote the set of all ungraded rulings of a front diagram $f$ of a Legendrian
link by $\Gamma_1(f)$. We get $2$--graded rulings, forming the set $\Gamma_2(f)$, if
we require that the index of each switch be
even. $\ensuremath{\mathbf Z}$--graded rulings (set $\Gamma_0(f)$) are those where each switch has index $0$.
\end{Def}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{switch.eps}
\caption{Allowed and disallowed configurations for switches of rulings}
\label{fig:switch}
\end{figure}
$\Gamma_1$ is of course grading-independent. For multi-component oriented link diagrams,
$\Gamma_2$ doesn't depend on the chosen grading, but $\Gamma_0$ might.
Rulings can also be classified by the value
\[\theta=\text{number of eyes}-\text{number of switches}.\] The counts of
ungraded, $2$--graded, and $\ensuremath{\mathbf Z}$--graded rulings with a given $\theta$ are
all Legendrian isotopy invariants\footnote{In the $\ensuremath{\mathbf Z}$--graded case, we may have to assume that the Legendrian has a single component.} \cite{chp,fuchs}. (In particular, the sizes of the
sets $\Gamma_i(f)$, $i=0,1,2$, don't depend on $f$, only on the Legendrian isotopy class.)
We may arrange these numbers as coefficients in the ruling polynomials\footnote{These are
honest polynomials for knots, but for multi-component links, they may contain negative
powers of $z$. It may seem unnatural at first to write them the way we do, but there are two
good reasons to do so: One is Rutherford's pair of theorems below, and the other is that
rulings can also be thought of as surfaces, in which case $\theta$ becomes their Euler
characteristic and (in the one-component and $2$--graded case) $1-\theta$ is twice their genus.}
\[R_i(z)=\sum_{\rho\in\Gamma_i}z^{1-\theta(\rho)}.\]
Fuchs
notes that the existence of a $2$--graded ruling
implies $r=0$. Let us add that if we treat the eyes as discs and
join them by twisted bands at the switches, then a $2$--graded
ruling becomes an orientable surface. The number $\theta$ is its
Euler characteristic and thus $\theta+\mu$, where $\mu$ is the
number of the components of the Legendrian, is even. In particular,
$\theta$ is odd for any $2$--graded ruling of a Legendrian knot.
There is a marked difference between $\ensuremath{\mathbf Z}$--graded rulings and the two less restrictive
cases. $R_1$ and $R_2$ only depend on the smooth type of the Legendrian and its
Thurston--Bennequin number. In fact, Rutherford \cite{rulpoly} proved that for any link,
$R_1(z)$ is the coefficient of $a^{-tb-1}$ in the Dubrovnik version of the Kauffman
polynomial, and that $R_2(z)$ is the coefficient of $v^{tb+1}$ in the Homfly polynomial.
On the other hand, $R_0$ is more sensitive: Chekanov \cite{chek2} constructed two
Legendrian knots of type $5_2$, both with $tb=1$ and $r=0$, so that one has $R_0(z)=1+z^2$
and the other has $R_0(z)=1$.
Because $f_\beta$ only contains crossings of index $0$, any ungraded
ruling is automatically $2$--graded and $\ensuremath{\mathbf Z}$--graded in this case. Thus we may talk about
a single ruling polynomial. By Rutherford's theorems, this implies that the coefficients
of the terms with minimum $v$--degree in the Homfly and Kauffman
polynomials (for the latter, replace $a$ with $v^{-1}$ in its Dubrovnik version) of a
braid-positive link agree. In fact, using Tanaka's results \cite{tanaka}, the same can be said about arbitrary positive links. (See \cite{meginten} for more.) This, without any reference to Legendrians yet with essentially the same proof, has been first observed by Yokota \cite{yok}.
\begin{pelda} The positive trefoil knot that is the closure of the braid
in Figure \ref{fig:111} has one ruling with $\theta=-1$ and two with
$\theta=1$, shown in Figure \ref{fig:rulings}. The numbers $1$ and $2$ (i.e., the ruling polynomial $R(z)=2+z^2$)
appear as the leftmost coefficients in the Homfly polynomial
\[\begin{array}{ccc}z^2v^2&&\\&&\\2v^2&&-v^4\end{array}\] and also in the Kauffman polynomial
\[\begin{array}{cccc}z^2v^2&&-z^2v^4&\\&-zv^3&&+zv^5\\2v^2&&-v^4.\end{array}\]
\begin{figure}
\centering
\includegraphics[width=\linewidth]{rulings.eps}
\caption{The Seifert ruling and the other two rulings of the positive trefoil}
\label{fig:rulings}
\end{figure}
\end{pelda}
The diagram $f_\beta$ admits many rulings. The one that is easiest to see
is what we will call the \emph{Seifert ruling}, in which the set of
switches agrees with the set of crossings in $\beta$. This is the only
ruling with the minimal value $\theta=q-w$. Another ruling, the one with the maximum
value $\theta=\mu$, will be constructed in Theorem \ref{thm:szimultan}.
The second lowest possible value of $\theta$ for a ruling of $f_\beta$ is
$q-w+2=-tb(L_\beta)+2$. It is easy to see that in such a ruling, the two crossings
of $\beta$ that are not switches have to be `on the same level' (represented by the
same braid group generator) without any other crossing between them on that level, and
also that any such arrangement works. Thus, assuming that each generator occurs in the
braid word $\beta$ (i.e., that $f_\beta$ is connected), the number of such rulings is
$w-(q-1)=tb(L_\beta)+1$. In all of the examples known to the author, the next value of
$\theta$, that is $\theta=q-w+4=-tb+4$, is realized by exactly
${{w-q}\choose{2}}={{tb}\choose{2}}$ rulings (but I don't know how to prove this).
At $\theta=-tb+6$ and higher, dependence on the braid occurs (see section \ref{sec:ex}).
It would be very interesting to have a test, similar to Theorem \ref{thm:gauss}, that
decides from the path matrix whether a given crossing set of $f_\beta$ is a ruling.
From work of Fuchs, Ishkhanov \cite{fuchs, masikirany}, and Sabloff \cite{josh}, we
know that $\ensuremath{\mathbf Z}$--graded
rulings for a Legendrian exist if and only if augmentations do.
Ng and Sabloff also worked out a
surjective correspondence \cite{manytoone} that assigns a $\ensuremath{\mathbf Z}$--graded ruling to each
augmentation. In that correspondence, the size of the preimage of each $\ensuremath{\mathbf Z}$--graded
ruling $\rho$ of the front diagram $f$ is the number $2^{(\theta(\rho)+\chi^*(f))/2}$, where
\begin{align*}
\chi^*(f)={}&-\sum_{\substack{\text{crossings } a \text{ of } f\\ \text{with } |a|<0}}(-1)^{|a|}
\;+\sum_{\substack{\text{crossings } a \text{ of } f\\ \text{with } |a|\ge 0}}(-1)^{|a|}\\
&-\ \text{number of right cusps}.
\end{align*}
In particular, the number of augmentations belonging to $\rho$ depends on $\theta(\rho)$ and
the diagram only. (Note that $\chi^*$ has the same parity as $tb$, and because $r=0$ is
even, it also has the same parity as $\mu$.) Thus the total number of augmentations is
\begin{equation}\label{eq:aug}
R_0(z)\cdot z^{-1-\chi^*}\bigg|_{z=2^{-1/2}}.
\end{equation}
For the diagram $f_\beta$, which is without negatively graded crossings, we have
$\chi^*(f_\beta)=tb(L_\beta)=w-q$. Thus among the rulings of
$f_\beta$, the zeroth power of $2$ corresponds only to the Seifert ruling.
Therefore the number of augmentations of $f_\beta$ is odd.
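As a sanity check on formula \eqref{eq:aug}, one can evaluate it for the positive trefoil of the example above: there $R_0(z)=R(z)=2+z^2$ and $\chi^*=tb=1$, and the formula yields $5$ augmentations, in agreement with the $p=3$ case of Proposition \ref{pro:ketszal} below. A minimal Python sketch (hypothetical function name; the ruling polynomial is passed as a dictionary of exponents and coefficients):

```python
def augmentation_count(ruling_poly, chi_star):
    """Evaluate formula (eq:aug): R_0(z) * z^(-1 - chi*) at z = 2^(-1/2).
    ruling_poly is a dict {exponent of z: coefficient}."""
    z = 2 ** (-0.5)
    return sum(c * z ** e for e, c in ruling_poly.items()) * z ** (-1 - chi_star)

# positive trefoil: R(z) = 2 + z^2, chi* = tb = 1  ->  5 augmentations
print(round(augmentation_count({0: 2, 2: 1}, 1)))
```

Since $\chi^*$ has the same parity as the exponents appearing in $R_0$, the value is always an integer (up to floating point error).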
The next theorem may further illuminate the relationship between augmentations and rulings.
\begin{tetel}\label{thm:szimultan}
For any positive braid word $\beta$, there exists a subset of
its crossings which is (the set of switches in) a ruling of $f_\beta$ and
an augmentation of $\gamma_\beta$ at the same time.
\end{tetel}
The set we will construct is not, however, fixed by Ng and Sabloff's
many-to-one correspondence.
\begin{proof} The set $X$ is constructed as follows: In $\beta$, the
strands starting at the left endpoint $1$ and ending at the right endpoint
$1$ either agree or intersect for an elementary geometric reason. In the latter case,
splice/augment their
first crossing from the left, $d_1$. In either case, remove the path $s_1$
connecting $1$ to $1$ from the braid. (If splicing was necessary to create $s_1$, then leave
a marker $1$ on the lower strand as shown in Figure \ref{fig:marker}.) Proceed by induction
to find the
paths $s_2,\ldots,s_q$ and for those $s_i$ that were the result of splicing, leave a marker
$i$ and place the spliced crossing $d_i$ in $X$.
\begin{figure}
\centering
\begin{minipage}[c]{.3\textwidth}
\centering
\includegraphics[width=\textwidth]{marker.eps}
\end{minipage}
\hfill
\begin{minipage}[c]{.6\textwidth}
\centering
\caption{Splicing a crossing to create the path $s_i$; after removing $s_i$,
a marker $i$ is left on the remaining diagram.}
\label{fig:marker}
\end{minipage}
\end{figure}
The components of $L_\beta$ are enumerated by the cycles in the permutation $\pi$ that
underlies $\beta$ and the construction treats these components independently of one another.
The number of elements in $X$ is $q$ minus the number $\mu$ of these cycles/components: it
is exactly for the largest element $i$ of each cycle of $\pi$ that the corresponding path
$s_i$ `exists automatically,' without splicing. A way to see this is the following. Suppose $\pi$ contains
a single cycle. Unless $q=1$, $d_1$ exists. When $s_1$ is removed from the braid, $1$ is
`cut out' of $\pi$: in the next, smaller braid, the underlying permutation takes
$\pi^{-1}(1)$ to $\pi(1)$. In particular, we still have a single cycle. Unless $2$ is its
largest (and only) element, $d_2$ will exist and the removal of $s_2$ cuts $2$ out of the
permutation. This goes on until we reach $q$, at which stage the braid is a single strand
and more splicing is neither possible nor necessary.
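The count $|X|=q-\mu$ is easy to check by machine; here is a minimal Python sketch (names hypothetical) computing the permutation underlying a positive braid word and its number of cycles.

```python
def braid_cycles(word, q):
    """Number mu of components of the closure of a positive braid word
    on q strands (generators encoded as integers k for sigma_k,
    1 <= k <= q-1), i.e. the number of cycles of the underlying
    permutation; the construction above then augments q - mu crossings."""
    strand = list(range(q))        # which strand sits at each position
    for k in word:
        # the crossing sigma_k swaps the strands at positions k, k+1
        strand[k - 1], strand[k] = strand[k], strand[k - 1]
    seen, mu = [False] * q, 0
    for i in range(q):             # count cycles of the permutation
        if not seen[i]:
            mu += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = strand[j]
    return mu

# trefoil sigma_1^3: a knot, so mu = 1 and |X| = 2 - 1 = 1
print(braid_cycles([1, 1, 1], 2))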
We define an oriented graph $G_\beta$ on the vertex set $\{\,1,2,\ldots,q\,\}$ by the rule
that an oriented edge connects $i$ to $j$ if $s_j$ contains the marker $i$. Note that $i<j$
is necessary for this and that each $i$ can be the starting vertex of at most one edge. For
that reason, $G_\beta$ doesn't even have unoriented cycles (consider the smallest number in
a supposed cycle). Thus, $G_\beta$ is a $\mu$--component forest. The largest element of each
tree is its only sink.
$X$ is a ruling with the $i$th eye partially bounded by the path $s_i$. These are easily
seen to satisfy Definition \ref{def:rul}: if the $i$th and $j$th eyes meet at the switch
$d_i$, then an edge connects $i$ to $j$ in $G_\beta$, thus $i<j$ and we see that in the
vertical slice through $d_i$, we have the second of the admissible configurations of Figure
\ref{fig:switch}. The value of $\theta$ for this ruling is $\mu$.
To prove that $X$ is also an augmentation, we'll check it directly using
the analysis of admissible discs in $\gamma_\beta$ from p.\
2056 of \cite{en}.
Note that each of $a_1,\ldots,a_q$ (Figure
\ref{fig:lagpic}) has a trivial admissible disc contributing $1$ to its
differential, so it suffices to show that for each $j$, there is exactly
one more admissible disc with positive corner at $a_j$ and all negative
corners at crossings in $X$. In fact we will use induction to prove the following:
\begin{itemize}
\item For each $j$, this second disc $\Pi_j$
will have either no negative corner or, if $d_j$ exists, then exactly one
negative corner at $d_j$.
\item In the admissible sequence corresponding to $\Pi_j$, $i$ appears if and only
if $G_\beta$ contains an oriented path from $i$ to $j$, and each such $i$ shows up
exactly once.
\end{itemize}
The path $s_1$ completes the boundary of an admissible disc with positive
corner at $a_1$. Because $s_1$ is removed in the first stage, no crossing
along $s_1$ other than $d_1$ will be in $X$.
Now, assume that for each $j<n$, a unique disc $\Pi_j$ exists with the
said properties. Building a non-trivial admissible disc with positive corner at $a_n$,
we start along the path $s_n$. (We will concentrate on the boundary of the admissible
disc. Proposition 6.4 of \cite{en} classifies, in terms of admissible sequences, which
of the possible paths correspond to admissible discs.) When we reach a marker $j$, we
are forced to enter $\partial\Pi_j$. Then by the inductive hypothesis, we have no other
choice but to follow $\partial\Pi_j$ until we reach $a_j$. There, we travel around the
$j$th trivial disc and continue along $\partial\Pi_j$, back to $d_j$ and $s_n$. By the
hypothesis, each $a_i$ is visited at most once, so their sequence is admissible. At the
next marker along $s_n$, a similar thing happens but using another, disjoint branch of
$G_\beta$, so the sequence stays admissible.
If $d_n$ exists, then upon reaching it, we seemingly get a choice of turning or not. If
we do turn, i.e.\ continue along $s_n$, then after a few more markers, we successfully
complete the construction of $\Pi_n$. Because all markers along $s_n$ were visited, it
has both of the required properties.
We still have to rule out the option of not turning at $d_n$. Suppose that's what we do.
Then we end up on a path $s_m$, where $m$ is the endpoint of the edge of $G_\beta$ starting
at $n$; in particular, $m>n$. We may encounter markers along $s_m$, but the previous
analysis applies to them and eventually we always return to $s_m$ and exit the braid at
the right endpoint $m$ (or at an even higher number, in case we left $s_m$ at $d_m$). But
this is impossible by Lemma 6.2 of \cite{en}.
\end{proof}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{regipelda.eps}
\caption{A braid $\beta$ with an augmentation $X$ (marked crossings) which is also a ruling.
The forest graph is $G_\beta$ and the other two graph components constitute the
`graph realized by $X$,' as in \cite{en}.}
\label{fig:fonat}
\end{figure}
\begin{megj}
In \cite{en}, we used a two-component link of the braid-positive knots $8_{21}$ and
$16n_{184868}$ to illustrate a different construction of an augmentation.
Comparing Figure \ref{fig:fonat} to Figure 15 of \cite{en}, we see that the set $X$
constructed in the above proof is indeed different from that of Proposition 7.11 of
that paper. Also, the graph realized by this `new' $X$ (in the sense of Definition
7.9 in \cite{en}) is different from what we called the augmented graph of the underlying
permutation of $\beta$ there. In the example, these are both due to the fact that the
position of the augmented crossing `$3$' has changed.
\end{megj}
\section{Examples}\label{sec:ex}
The following proposition is easy to prove, either using skein relations of the Homfly and/or
Kauffman polynomials, or by straightforward induction:
\begin{all}\label{pro:ketszal}
The ruling polynomial of the $(p,2)$ torus link is $R(z)=$
\[z^{p-1}+(p-1)z^{p-3}+{p-2\choose 2}z^{p-5}+{p-3\choose 3}z^{p-7}
+\ldots+{p-\lfloor p/2\rfloor\choose\lfloor p/2\rfloor}z^{p-2\lfloor p/2\rfloor -1}.\]
The total number of rulings is $R(1)=f_p$, the $p$th Fibonacci number. The total number
of augmentations is $R(2^{-1/2})2^{(\chi^*+1)/2}=(2^{p+1}-(-1)^{p+1})/3$.
\end{all}
In particular, these ruling polynomials can be easily read off Pascal's triangle, as
shown in Figure \ref{fig:pascal}. For example, for $p=11$, we get the ruling polynomial
$R(z)=z^{10}+10z^8+36z^6+56z^4+35z^2+6$.
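The claims of Proposition \ref{pro:ketszal} and the Pascal triangle reading are straightforward to verify numerically; a short Python sketch (hypothetical names) for $p=11$:

```python
from math import comb

def torus_p2_ruling_coeffs(p):
    """Coefficients of the ruling polynomial of the (p,2) torus link,
    as pairs (exponent of z, coefficient), read off Pascal's triangle
    as in the proposition above."""
    return [(p - 2 * k - 1, comb(p - k, k)) for k in range(p // 2 + 1)]

def fib(n):            # Fibonacci with f_1 = 1, f_2 = 2, f_3 = 3, ...
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a

p = 11
coeffs = torus_p2_ruling_coeffs(p)
# R(z) = z^10 + 10 z^8 + 36 z^6 + 56 z^4 + 35 z^2 + 6, as in the text
assert coeffs == [(10, 1), (8, 10), (6, 36), (4, 56), (2, 35), (0, 6)]
# total number of rulings is the p-th Fibonacci number
assert sum(c for _, c in coeffs) == fib(p) == 144
# total number of augmentations, with chi* = tb = p - 2
z = 2 ** (-0.5)
count = sum(c * z ** e for e, c in coeffs) * 2 ** ((p - 2 + 1) / 2)
assert round(count) == (2 ** (p + 1) - (-1) ** (p + 1)) // 3 == 1365
```

The same checks go through for every $p$ the author has tried.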
It seems likely that among Legendrian closures of positive braids with a given value
of $tb$, the $(p,2)$ torus link with $p=tb+2$ has the least number of rulings for all
values of $\theta$. For $tb=9$, the braid-positive knots with the largest number of
rulings (for each $\theta$) are the mutants
$13n_{981}$ and
$13n_{1104}$. These have $R(z)=z^{10}+10z^8+36z^6+60z^4+47z^2+14$.
\begin{figure}
\centering
\begin{minipage}[c]{.5\textwidth}
\centering
\includegraphics[width=\textwidth]{pascal.eps}
\end{minipage}
\hfill
\begin{minipage}[c]{.4\textwidth}
\centering
\caption{Ruling invariants of the $(3,2)$ and $(11,2)$ torus knots in Pascal's triangle.}
\label{fig:pascal}
\end{minipage}
\end{figure}
Mutant knots share the same Kauffman and Homfly polynomials, thus mutant braid-positive
knots cannot be distinguished by their ruling polynomials.
The braid-positive knots
$12n_{679}$ and
$13n_{1176}$ are not mutants yet they share the same ruling polynomial
$R(z)=z^{10}+10z^8+36z^6+58z^4+42z^2+11$ (their Kauffman and Homfly polynomials are
actually different, but they agree in the coefficients that mean numbers of rulings).
Proposition \ref{pro:ketszal} shows that for the $(p,2)$ torus link, roughly two thirds of
the $2^p$ subsets of its crossings are augmentations. This ratio depends above all on the
number of strands in the braid and goes down approximately by a factor of two every time
the latter increases by one. When the number of strands is low, the ratio is quite
significant\footnote{Thus the relatively complicated nature of the proof of Theorem \ref{thm:szimultan} and of the construction in section 7 of \cite{en} is somewhat misleading.}. This phenomenon seems to be unique to braid-positive links. (It may be
worthwhile to compare to Chekanov's $5_2$ diagrams, where out of the $64$ subsets, only $3$,
respectively $2$, are augmentations.)
\begin{pelda}
The following were computed using a computer program written by Supap Kirtsaeng, based on
Theorem \ref{thm:gauss}. (Note that mere numbers of augmentations can also be determined
from the Homfly or Kauffman polynomials using formula \eqref{eq:aug}.) The braid word
$(\sigma_1\sigma_2)^6$, corresponding to the $(3,6)$ torus link, yields 1597 augmentations
(about $39$\% of all subsets of its crossings). The knot
$12n_{679}$ (braid word $\sigma_1^3\sigma_2^2\sigma_1^2\sigma_2^5$) has $1653$ augmentations
(approximately $40$\%).
$13n_{1176}$ also has $1653$ augmentations, but its braid index is $4$; for the braid word
$\sigma_1\sigma_2^2\sigma_3\sigma_1^2\sigma_2^2\sigma_3^2\sigma_2\sigma_1\sigma_2$, the
augmentations account for only $20$\% of all subsets of crossings. The knots
$13n_{981}$ (closure of $\sigma_1\sigma_2^3\sigma_3\sigma_1\sigma_3\sigma_2^3\sigma_3^3$) and
$13n_{1104}$ ($\sigma_1\sigma_2^2\sigma_3\sigma_1\sigma_3\sigma_1^2\sigma_2^3\sigma_3\sigma_1$)
both have $1845$ augmentations (i.e., $23$\% of all possibilities work). About the following
two knots, Stoimenow \cite{stoi} found that their braid index is $4$, but in order to obtain
them as closures of positive braids, we need $5$ strands.
$16n_{92582}$ (braid word
$\sigma_1\sigma_2^2\sigma_3\sigma_4\sigma_3\sigma_1^2\sigma_2^2\sigma_3^2
\sigma_2\sigma_4\sigma_3^2$) has 7269 augmentations, which is only about $11$\% of all
possibilities.
$16n_{29507}$
($\sigma_1\sigma_2^2\sigma_3\sigma_1\sigma_3\sigma_4\sigma_1\sigma_2\sigma_4\sigma_2
\sigma_3^3\sigma_4\sigma_2$) has $8109$ ($12$\%).
\end{pelda}
\section{Introduction}
Non-stationary time series have been investigated through several
approaches. In particular, the characterization of fluctuations and
their scaling behavior have been the focus of many studies, since
they reveal the nature of the dynamics. In this context, financial
time series have attracted considerable attention \cite{Mante}. The
large length of the available data of various stock market indices
make them ideal candidates for analysis. Furthermore, the complex
dynamics of the variations in stock prices yield fluctuations which
can show correlations, as well as scaling behavior. The goal of the
present paper is to study the self-similar and correlation properties
of the NASDAQ and BSE indices, which belong to two different economic
environments. These stock indices, belonging to a developed and a
developing country respectively, may show characteristic differences,
possibly arising from differences in their underlying dynamics. We
concentrate on the nature of the correlations and the fractal behavior,
for which the wavelet transform \cite{daub,mall} is used as a tool for
extracting the fluctuations at different scales.
A number of methods have been devised to find scaling behavior in
time series. The well-known structure function method \cite{ba} and
the recently developed wavelet transform modulus maxima (WTMM)
method \cite{arn1}, relying on continuous wavelet transforms, are
widely used for the analysis of stationary data. The fact that most
of the time series arising in real systems are non-stationary in
nature introduces complications in estimating the scaling behavior,
while using the above two approaches, which are global in nature.
Hence, in recent times, local approaches, like detrended fluctuation
analysis (DFA) and its generalization MF-DFA
\cite{gopi,muzy,pen,khu,jan} have been developed to handle
non-stationary data. In this case, one uses windows of various sizes
to separate the fluctuations from the trend; the data can also be
shuffled to remove any correlations. To isolate the average, or trend,
of the data points in a given window, the DFA approach takes recourse
to a linear or quadratic fit. We have
introduced a new method based on discrete wavelets
\cite{mani1,mani2,ran} to characterize the scaling behavior of
non-stationary time series. The present procedure is similar to
those in MF-DFA \cite{jan}, except that in order to detrend, we use
wavelets and MF-DFA uses local polynomial fits. Recently, this
method has been used by Brodu, to analyze in real time, fractal
behavior of dynamic time series \cite{nico}. The relative merits of
MF-DFA and a variety of other approaches to characterize
fluctuations have been carried out in Ref.\cite{jaro}. It is worth
emphasizing that fluctuation analysis and characterization have been
earlier attempted using Haar wavelets, in the context of bio-medical
applications, without the study of scaling behavior
\cite{pkp1,pkp2}.
Wavelets from the Daubechies family are used for extracting the trend
from the given data set. The fluctuations are captured by the
high-pass coefficients and the trend by the low-pass coefficients of
the wavelet transform. The discrete wavelet transform provides a handy
tool for isolating the trend in a non-stationary data set, because
of its built-in ability to analyze data in variable window sizes. In
this note, we analyze the returns of stock index values through our
new wavelet based method. Multi-fractal properties are also
investigated using the multi-fractal spectrum. We analyze the daily
prices of the NASDAQ composite index for a period of 20 years, from
11-Oct-1984 to 24-Nov-2004, and of the BSE sensex index over a period
of 15 years, from 2-Jan-1991 to 12-May-2005.
\section{Data analysis}
It is found that the nature of the correlations is quite different
between these two financial time series, and that significant
non-statistical correlation exists in both of them. Removing it
reveals that the BSE index is primarily mono-fractal, with
fluctuations having the character of Gaussian random noise. On the
other hand, the NASDAQ index shows a weak multifractal behavior with
long-range statistical correlation.
\begin{figure}
\centering
\includegraphics[width=3in]{chap21.eps}
\caption{[a] NASDAQ daily (close) composite index for a period of 20
years, starting from 11-Oct-1984 to 24-Nov-2004, [b] daily returns
show more clusters of small and large fluctuations and [c] the
returns after shuffling show disappearance of clustering behavior.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=3in]{chap22.eps}
\caption{[a] BSE sensex daily (close) index value for a period of 15
years, starting from 2-Jan-1991 to 12-May-2005, [b] daily returns
show much less (compared to Fig. 1 (b)) clustering of fluctuations
and [c] the returns after shuffling show no significant difference
in appearance relative to Fig. 2 (b).}
\end{figure}
From the financial (NASDAQ composite index and BSE sensex) time
series $x(t)$, we first compute the scaled returns defined as,
\begin{equation}
G(t)\equiv \left[\ln x(t+1)- \ln x(t)\right]/\sigma,\qquad t=1,2,\ldots,N-1;
\end{equation}
here $\sigma$ is the standard deviation of $x(t)$. From the returns,
the signal profile is estimated as the cumulative,
\begin{equation}
Y(i) = \sum_{t=1}^i G(t), \qquad i=1,\ldots,N-1.
\end{equation}
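These two preprocessing steps can be sketched as follows (Python/NumPy, hypothetical function name; note that, per the definition above, $\sigma$ is the standard deviation of the series $x(t)$ itself, not of the returns):

```python
import numpy as np

def returns_and_profile(x):
    """Scaled log-returns G(t) and cumulative signal profile Y(i),
    following the two equations above."""
    x = np.asarray(x, dtype=float)
    G = np.diff(np.log(x)) / x.std()   # scaled log-returns
    Y = np.cumsum(G)                   # signal profile
    return G, Y
```

For an index series of length $N$, both outputs have length $N-1$.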
Next, we carry out wavelet transform on the profile $Y(i)$ to
separate the fluctuations from the trend by considering precise
values of window sizes $W$ corresponding to different levels of
wavelet decomposition. We obtain the trend by discarding the
high-pass coefficients and reconstructing the trend using inverse
wavelet transform. The fluctuations are then extracted at each
level by subtracting the obtained time series from the original
data. Though the Daubechies wavelets extract the fluctuations
nicely, their asymmetric nature and the wrap-around problem affect
the precision of the extracted values. We correct for this by
applying the wavelet transform to the reversed profile to extract a
second set of fluctuations, which are then reversed and averaged with
the first set. These averaged fluctuations (at a particular level)
are the ones we consider for analysis. In Figs. 1 and 2, we give the
time series for the two index data sets and the corresponding
returns. We also show the shuffled returns for the two series to
examine the correlation as well as the ``bursty'' (clustering)
behaviour.
The extracted fluctuations are subdivided into $M_s =
\mathrm{int}(N/s)$ non-overlapping segments, where $s=2^{L-1}W$ is
the wavelet window size at level $L$ for the chosen wavelet and $W$
is the number of filter coefficients of the discrete wavelet
transform basis under consideration. For example, with Db-4 wavelets,
$s=4$ at level 1, $s=8$ at level 2, and so on. Some data points must
be discarded when $N/s$ is not an integer, which introduces
statistical errors in calculating the local variance; to compensate,
we repeat the above procedure starting from the end of the series and
moving towards the beginning. The detrending and the extracted
fluctuations are depicted in Figs. 3 and 4.
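The forward-plus-backward segmentation can be written compactly; the sketch below (names ours) takes the already-extracted fluctuation series as input and returns the $2M_s$ local mean-square fluctuations $F^2(b,s)$:

```python
import numpy as np

def local_msq(fluct, s):
    """F^2(b, s): mean-square fluctuation in non-overlapping windows of size s,
    computed from the start and again from the end so no points are lost."""
    n = len(fluct)
    m_s = n // s                                   # M_s = int(N/s)
    fwd = fluct[:m_s * s].reshape(m_s, s)          # segments taken from the front
    bwd = fluct[n - m_s * s:].reshape(m_s, s)      # segments taken from the back
    return (np.concatenate([fwd, bwd]) ** 2).mean(axis=1)   # 2*M_s values
```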
\begin{figure}
\centering
\includegraphics[width=3in]{rnasL3.eps}
\caption{[a] Detrending the integrated returns of NASDAQ composite
index at the scale level-3, through Db-8, wavelet, and [b] the
extracted fluctuations at the scale level-3 (window size 32).}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=3in]{rnasL4.eps}
\caption{[a] Detrending the integrated returns of NASDAQ composite
index at the scale level-4, through Db-8, wavelet, and [b] the
extracted fluctuations at the scale level-4 (window size 64).}
\end{figure}
The $q^{th}$ order fluctuation function $F_q(s)$ is obtained by
squaring and averaging fluctuations over all segments:
\begin{equation}
F_q(s) \equiv \{ \frac {1}{2 M_s} \sum_{b=1}^{2 M_s} [
F^2(b,s)]^{q/2}\}^{1/q}.
\end{equation}
Here $q$ is the order of the moments and takes real values. The above
procedure is repeated for varying window sizes and for different
values of $q$ (except $q=0$). The scaling behaviour is obtained by
analyzing the fluctuation function,
\begin{equation}
F_q(s) \sim s^{h(q)},
\end{equation}
in a logarithmic scale for each value of $q$. If the order $q = 0$,
direct evaluation through Eq. (3) leads to divergence of the scaling
exponent. In that case, logarithmic averaging has to be employed to
find the fluctuation function:
\begin{equation}
F_0(s) \equiv \exp \left\{ \frac {1}{4 M_s} \sum_{b=1}^{2 M_s} \ln
F^2(b,s) \right\}.
\end{equation}
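Given the local mean-square fluctuations $F^2(b,s)$ at several scales, Eqs. (3)-(5) reduce to a moment average followed by a log-log fit for $h(q)$ (Eq. 4). A numpy sketch (names ours; the input is a dict mapping each scale $s$ to its array of $2M_s$ values of $F^2(b,s)$):

```python
import numpy as np

def fq(f2, q):
    """q-th order fluctuation function from the 2*M_s values of F^2(b,s):
    Eq. (3) for q != 0, logarithmic averaging for q = 0 (Eq. 5)."""
    if q == 0:
        return np.exp(0.5 * np.mean(np.log(f2)))
    return np.mean(f2 ** (q / 2.0)) ** (1.0 / q)

def hq(f2_by_scale, q):
    """Generalised Hurst exponent h(q): slope of log F_q(s) versus log s."""
    scales = np.array(sorted(f2_by_scale))
    logf = [np.log(fq(f2_by_scale[s], q)) for s in scales]
    return np.polyfit(np.log(scales), logf, 1)[0]
```

For a monofractal series $h(q)$ comes out independent of $q$, which is the diagnostic used in the next section.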
\section{Results and Discussion}
As is well-known, if the time series is monofractal, the $h(q)$
values are independent of $q$. For multifractal time series the
$h(q)$ values depend on $q$. The correlation behaviour is
characterized by the Hurst exponent ($H=h(q=2)$), which lies in the
range $0 < H < 1$: $H > 0.5$ indicates long-range correlation,
$H=0.5$ an uncorrelated series, and $H < 0.5$ long-range
anti-correlation.
The scaling exponent is calculated for various values of $q$ for
both stock indices. Figs. 5 and 6 show how $h(q)$ and
$\tau(q)$ vary with $q$ for the returns and the shuffled returns of the
two time series. The non-linear behaviour of $h(q)$ as a function of
$q$ is a measure of multifractality.
\begin{figure}
\centering
\includegraphics[width=3in]{hqnas81.eps}
\caption{[a](NASDAQ composite index) Scaling exponents h(q) values
for various $q$ values and [b] $\tau(q)$ representation of h(q)
values for various $q$ values, where $\tau(q) = qh(q)$.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=3in]{hqbse81.eps}
\caption{[a] (BSE sensex) Scaling exponents $h(q)$ values for
various $q$ values and [b] $\tau(q)$ representation of h(q) values
for various $q$ values, where $\tau(q) = qh(q)$.}
\end{figure}
The scaling behaviour of the observed data sets can also be studied
by evaluating the $f(\alpha)$ spectrum. The $f(\alpha)$ values are
obtained from the Legendre transform of $\tau(q)$: $ f(\alpha) \equiv q \alpha -
\tau(q)$, where $\alpha \equiv \frac{d\tau(q)}{dq}$. For a monofractal
time series $\alpha$ is constant, whereas for a multifractal time
series there is a distribution of $\alpha$ values. The $f(\alpha)$
spectra for the two time series are shown in Figs. 7 and 8. For the
unshuffled returns one observes a broader spectrum, whereas for the
shuffled returns, where the correlation is lost, the spectrum is
narrower.
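Numerically, the Legendre transform is just a finite-difference derivative of $\tau(q)=q\,h(q)$; a short numpy sketch (names ours):

```python
import numpy as np

def singularity_spectrum(q, h):
    """alpha = d tau / d q (finite differences) and f(alpha) = q*alpha - tau(q)."""
    tau = q * h
    alpha = np.gradient(tau, q)        # numerical derivative of tau(q)
    return alpha, q * alpha - tau
```

The width of the resulting $\alpha$ range is the usual measure of the strength of multifractality.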
\begin{figure}
\centering
\includegraphics[width=3in]{fhnas.eps}
\caption{From the integrated returns of NASDAQ composite index
values, the calculated multifractal spectrum is broader than the
spectrum of shuffled returns.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=3in]{fhbse.eps}
\caption{From the integrated returns of BSE sensex values, the
calculated multifractal spectrum is much broader than the spectrum
of shuffled returns, when compared to the NASDAQ composite index
values.}
\end{figure}
The semi-log plot of the distribution of logarithmic returns of the
NASDAQ composite index is shown in Fig. 9. It exhibits fat tails and
non-Gaussian features. For the BSE sensex, the corresponding
distribution, shown in Fig. 10, has fat tails that are less
prominent; it is quite similar to Gaussian white noise, revealing a
distinct difference between the NASDAQ composite index and the BSE
sensex. Although correlation is present in both time series, they
reveal distinct probability distributions once the correlation is
removed. Furthermore, we observe that the multifractal spectrum
calculated for the BSE sensex is much broader than the spectrum of
its shuffled returns, when compared to the NASDAQ composite index.
\section{Conclusion}
In conclusion, the wavelet based method presented here is found to
be quite efficient in extracting fluctuations from the trend. It
reveals distinct differences in the long-range correlation, as well
as in the fractal behavior, of the two stock indices. Strong
non-statistical correlation is observed in the BSE index, whereas the
NASDAQ index shows multifractal behavior with long-range
statistical correlations. In the case of the BSE index, the removal of
correlation reveals the Gaussian random noise character of the
fluctuations. It is interesting to note that the effect of country
specific parameters like corruption on economic development and
investment has recently been quantified through scaling analysis
\cite{shao}. In a similar manner, the differences observed above
between the two stock indices, belonging to two different economic
environments, are probably due to local dynamics.
\begin{figure}
\centering
\includegraphics[width=3in]{levynas.eps}
\caption{The semi-log plot of distribution of logarithmic returns of
NASDAQ composite index values compared with Gaussian distribution.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=3in]{levybse.eps}
\caption{The semi-log plot of distribution of logarithmic returns
BSE sensex values compared with Gaussian distribution.}
\end{figure}
\section*{Abstract}
Synaptic connections are known to change dynamically.
High-frequency presynaptic inputs induce decrease of synaptic weights.
This process is known as short-term synaptic depression.
The synaptic depression controls a gain for presynaptic inputs.
However, what the functional roles of this gain control are remains controversial.
We propose a new hypothesis that one of these roles is to enlarge the basins of attraction.
To verify this hypothesis, we employ a binary discrete-time associative memory model which consists of excitatory and inhibitory neurons.
It is known that the excitatory-inhibitory balance controls an overall activity of the network.
The synaptic depression might incorporate an activity control mechanism.
Using a mean-field theory and computer simulations, we find that the basins of attraction are enlarged whereas the storage capacity does not change.
Furthermore, the excitatory-inhibitory balance and the synaptic depression work cooperatively.
This result suggests that the synaptic depression works to improve an error-correcting ability in cortical circuits.
\newpage
\section{Introduction}
Cortical neurons receive thousands of synaptic inputs.
The neurons might receive high-frequency presynaptic inputs.
Extracting intrinsic signals from the random fluctuations in such high-frequency inputs is a severe challenge.
Some mechanism that reduces the gain for presynaptic inputs presumably works at the synaptic sites.
Neurophysiological experiments show that high-frequency inputs induce the decrease of synaptic weights \cite[]{Thomson94}.
This process is known as short-term synaptic depression.
The synaptic depression is known to control the gain for presynaptic inputs \cite[]{Abbott97,Tsodyks97}.
This property might influence not only the activity of a single neuron but also that of the overall activity \cite[]{Bressloff99,Kistler99,Tsodyks00}.
However, what the functional roles of the gain control are is still a controversial issue.
To elucidate the functional roles, some information must be embedded in the synaptic connections.
We employ an associative memory model that stores memory patterns in synaptic connections.
Only a few works have investigated how the synaptic depression affects the performance of the associative memory model \cite[]{Bibitchkov02,Pantic02,Torres02}.
The memory patterns embedded by Hebb rule \cite[]{Hebb49} become fixed points, i.e., attractors \cite[]{Amit89}.
Bibitchkov et al. found that the synaptic depression reduced the storage capacity \cite[]{Bibitchkov02}.
Torres et al. found that the storage capacity decreased with the degree of depression in the thermodynamic limit \cite[]{Torres02}.
However, the main targets of these works were the steady states of the models.
It is necessary to investigate the dynamical properties of the model because the synapses change dynamically.
One of the important dynamical properties is basins of attraction which express the regions where the system converges to stored patterns.
In the view of information processing, the basins of attraction reflect an error-correcting ability.
Bibitchkov et al. found that the synaptic depression \textit{shrank} the basins of attraction at a large loading rate and enlarged them only slightly at a small loading rate \cite[]{Bibitchkov02}.
In contrast, we propose a new hypothesis that the synaptic depression \textit{enlarges} the basins of attraction not only at small loading rates but also at large ones.
We employ a binary discrete-time associative memory model which consists of excitatory and inhibitory neurons.
The memory patterns with a small firing rate, i.e., sparse patterns, are embedded in the synaptic connections among the excitatory neurons.
This coding scheme is known as sparse coding.
An activity control is requisite for the stable retrieval of the sparse patterns \cite[]{Okada96}.
The excitatory-inhibitory balance controls an overall activity.
The synaptic depression might incorporate the activity control mechanism.
Specifically, when an overall activity is high, the synaptic depression decreases the gain for presynaptic inputs and maintains the overall activity at a constant level.
We also investigate whether the excitatory-inhibitory balance and the synaptic depression work cooperatively or not.
This paper consists of seven sections.
Section 2 describes the model employed in this paper.
In section 3, mean-field equations describing the steady state of the model are derived.
In sections 4 and 5, we investigate how the synaptic depression influences the performance of a model consisting of only excitatory neurons.
In section 6, we investigate the relationship between the synaptic depression and the excitatory-inhibitory balance.
In section 7, we summarize the results and discuss the model.
\section{Model}
The model used in this study consists of excitatory and inhibitory neurons \cite[]{Amit94,Vreeswijk96,Matsumoto05}.
The $i$-th excitatory neuron ($i=1,\cdots,N$) is characterized by its binary state $s_i(t)=\{0,1\}$ and discrete time $t$.
If the excitatory neuron fires at time $t$, its state is $s_i(t)=1$; otherwise, $s_i(t)=0$.
The thermodynamic limit, $N \rightarrow \infty$, is considered.
The excitatory neurons are all-to-all connected.
The synaptic weight from presynaptic excitatory neuron $j$ to postsynaptic excitatory neuron $i$, $J_{ij}(t)$, changes dynamically; its specific form will be discussed later.
The synaptic weights between the excitatory neurons and the inhibitory neurons are uniform.
There are no connections among the inhibitory neurons.
Therefore, the population of the inhibitory neurons can be regarded as a single inhibitory neuron.
The state of the $i$-th excitatory neuron is updated by the synchronous rule:
\begin{eqnarray}
s_i(t+1) &=& \Theta\Big( h_i(t) - g\big(\bar{s}(t) -f \big) - \hat{\theta} \Big) \label{eq.model} \\
&=& \Theta\Big( \sum^N_{j \ne i} J_{ij}(t)s_j(t) - g\big(\bar{s}(t) -f \big) - \hat{\theta} \Big) \\
&=& \Theta\Big( \sum^N_{j \ne i} J_{ij}(t)s_j(t) - g I(t) - \hat{\theta} \Big) \\
&=& \Theta\Big( \sum^N_{j \ne i} J_{ij}(t)s_j(t) - \bar{\theta}(t) \Big)
\end{eqnarray}
where $h_i(t)=\sum^N_{j \ne i} J_{ij}(t)s_j(t)$ denotes an internal potential, $\bar{s}(t)=\frac{1}{N}\sum^N_{j=1} s_j(t)$ denotes a mean firing rate of the excitatory neurons, $f$ denotes a firing rate of memory patterns, $g$ denotes the strength of the inhibition, $\hat{\theta}$ is a uniform threshold, and $\bar{\theta}(t)=g I(t) + \hat{\theta}$ is an effective threshold.
The inhibitory neuron receives the mean output of the excitatory neurons, $\bar{s}(t)$, and it sends output to the excitatory neurons as $I(t)=\bar{s}(t)-f$.
$g=0$ means that the model consists of only excitatory neurons.
If the mean firing rate of the excitatory neurons, $\bar{s}(t)$, is higher than $f$, the inhibition increases an effective threshold $\bar{\theta}(t)$.
Then the excitatory neurons tend to be silent.
On the contrary, if $\bar{s}(t)$ is lower than $f$, the inhibition decreases the effective threshold $\bar{\theta}(t)$.
Then the excitatory neurons tend to activate.
The output function $\Theta(\cdot)$ of the excitatory neurons is a step function:
\begin{eqnarray}
\Theta(u) &=&
\begin{cases}
1, &(u \ge 0)\\
0, &(u<0)
\end{cases}
\end{eqnarray}
Memory patterns $\bm{\xi}^{\mu}$ ($\mu=1,2,\cdots,p$) are stored in the synaptic connections among the excitatory neurons.
Each element $\xi_i^{\mu}$ of the memory pattern $\bm{\xi}^{\mu}=(\xi^{\mu}_1,\xi^{\mu}_2,\cdots,\xi^{\mu}_N)$ is generated independently by
\begin{equation}
\mbox{Prob}[\xi_i^{\mu}= 1]=1-\mbox{Prob}[\xi_i^{\mu}= 0]=f.
\end{equation}
The expectation of $\bm{\xi}^{\mu}$ is $\mbox{E}[\xi_i^{\mu}]=f$.
We consider a small firing rate $f$.
A value $\alpha=p/N$ is defined as a loading rate.
When the loading rate $\alpha$ is larger than a critical value $\alpha_C$, the retrieval of memory patterns becomes unstable.
The critical value $\alpha_C$ is known as a storage capacity.
The closeness between $\bm{s}(t)$ and $\bm{\xi}^{\mu}$ at time $t$ is characterized by an overlap
\begin{equation}
m^{\mu}(t)=\frac{1}{Nf(1-f)}\sum_{i=1}^N(\xi^{\mu}_i-f)s_i(t).
\end{equation}
If the overlap is close to $1$, i.e., $m^{\mu}(t) \approx 1$, the model succeeds to retrieve the memory pattern $\bm{\xi}^{\mu}$.
Hereafter, the target pattern for the retrieval is the first memory pattern $\bm{\xi}^1$.
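Pattern generation (Eq. 6) and the overlap (Eq. 7) translate directly into code; a minimal numpy sketch (names ours):

```python
import numpy as np

def make_pattern(n, f, rng):
    """A sparse memory pattern: each bit is 1 with probability f (Eq. 6)."""
    return (rng.random(n) < f).astype(int)

def overlap(xi, s, f):
    """Overlap m^mu between the state s and the pattern xi (Eq. 7)."""
    n = len(xi)
    return ((xi - f) * s).sum() / (n * f * (1.0 - f))
```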
The synaptic weight $J_{ij}(t)$ incorporating the synaptic depression is determined by a phenomenological model of synapses \cite[]{Abbott97,Tsodyks97}.
When the synapses transmit input signals, they exhaust a finite amount of resources, e.g., neuromodulators.
A dynamical amplitude factor $x_j(t)$ denotes the fraction of available resources.
After each spike, the resources decrease by a fraction $U_{SE}$ and recover with a time constant $\tau$.
If the recovery lags behind the arrival of high-frequency presynaptic inputs, the amount of available resources decreases and the synapses are depressed.
The factor $x_j(t)$ is updated by the synaptic dynamics \cite[]{Tsodyks97,Pantic02}:
\begin{equation}
x_j(t+1)=x_j(t)+\frac{1-x_j(t)}{\tau}-U_{SE}x_j(t)s_j(t). \label{eq.depression}
\end{equation}
$x_j(t)=1$ implies that the synapses are not depressed.
The synaptic weight $J_{ij}(t)$ incorporating the synaptic depression is obtained by multiplying a fixed synaptic weight $\tilde{J}_{ij}$ and the dynamic amplitude factor $x_j(t)$ ($0<x_j(t)\leq1$):
\begin{equation}
J_{ij}(t)=\tilde{J}_{ij}x_j(t).
\end{equation}
The synaptic weight $\tilde{J}_{ij}$ is determined by a Hebbian-like rule, i.e., a covariance rule:
\begin{equation}
\tilde{J}_{ij} = \frac{1}{Nf(1-f)}\sum^p_{\mu =1}(\xi^{\mu}_i-f)(\xi^{\mu}_j-f).
\end{equation}
A self-connection $\tilde{J}_{ii}$ is assumed to be nonexistent.
For simplicity, the synaptic depression is incorporated into only excitatory-excitatory connections.
The excitatory-inhibitory connections are fixed.
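Putting equation (1) and equation (\ref{eq.depression}) together, one synchronous step of the network can be sketched as follows (numpy; names ours; the diagonal of $\tilde{J}$ is assumed to be zero so self-connections drop out):

```python
import numpy as np

def step(s, x, J_tilde, f, g, theta_hat, tau, u_se):
    """One synchronous update: depressed weights J_ij(t) = J~_ij * x_j(t),
    inhibitory feedback g*(s_bar - f), then the resource update of Eq. (8)."""
    h = J_tilde @ (x * s)                              # internal potentials
    s_new = (h - g * (s.mean() - f) - theta_hat >= 0).astype(float)
    x_new = x + (1.0 - x) / tau - u_se * x * s         # driven by the *old* state
    return s_new, x_new
```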
\section{Mean-field Equations at a Steady State}
In this section, we derive mean-field equations at a steady state, i.e., $t \rightarrow \infty$.
For simplicity, individual values at the steady state are written by $x_j(\infty)=x_j$, $h_i(\infty)=h_i$, $s_i(\infty)=s_i$, and $m^{\mu}(\infty) = m^{\mu}$.
The factor $x_j(t)$ reaches its steady-state value by $t \rightarrow \infty$ in equation (\ref{eq.depression}) \cite[]{Bibitchkov02}:
\begin{equation}
x_j=\frac{1}{1+\gamma s_j},
\end{equation}
where $\gamma=\tau U_{SE}$.
The value $\gamma$ indicates the level of the synaptic depression.
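As a quick numerical check, iterating the update rule of equation (\ref{eq.depression}) under a constant presynaptic state indeed settles onto this fixed point; a pure-Python sketch (name ours):

```python
def steady_x(s_j, tau=2.0, u_se=0.5, n_steps=200):
    """Iterate x <- x + (1-x)/tau - U_SE*x*s_j (Eq. 8) with s_j held constant;
    the iterate converges to 1/(1 + gamma*s_j) with gamma = tau*U_SE."""
    x = 1.0
    for _ in range(n_steps):
        x = x + (1.0 - x) / tau - u_se * x * s_j
    return x
```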
Since $s_j$ takes a binary value, i.e., $s_j = \{0,1\}$, the value $x_j s_j$ can be written as
\begin{equation}
x_j s_j=\frac{s_j}{1+\gamma s_j}=\frac{1}{1+\gamma}s_j.
\end{equation}
By using this relationship, the internal potential $h_i$ at the steady state is written as
\begin{equation}
h_i = \sum_{j \ne i}^N \tilde{J}_{ij}x_j s_j = \sum_{j \ne i}^N \tilde{J}_{ij}\frac{s_j}{1+\gamma} = \frac{1}{1+\gamma}\sum_{j \ne i}^N \tilde{J}_{ij}s_j.
\end{equation}
Then, the neuronal state $s_i$ is written as
\begin{equation}
s_i = \Theta( h_i - g(\bar{s} -f ) - \hat{\theta} ) = \Theta\Big( \frac{1}{1+\gamma}\sum_{j \ne i}^N \tilde{J}_{ij}s_j - g(\bar{s} -f ) - \hat{\theta}\Big).
\end{equation}
By using equations (7), (9), and (12), the internal potential $h_i$ is represented as
\begin{eqnarray}
h_i \!\!\!\!&=&\!\!\!\! \frac{1}{1+\gamma}\sum_{j \ne i}^N\tilde{J}_{ij}s_j
= \frac{1}{1+\gamma}\frac{1}{Nf(1-f)}\sum^p_{\mu =1}\sum^N_{j \ne i}(\xi^{\mu}_i-f)(\xi^{\mu}_j-f)s_j\\
\!\!\!\!&=&\!\!\!\! \frac{1}{1+\gamma}\bigg\{ \frac{1}{Nf(1-f)}\sum^N_{j=1}(\xi^1_i-f)(\xi^1_j-f)s_j\nonumber \\
\!\!\!\!& &\!\!\!\! +\frac{1}{Nf(1-f)}\sum^p_{\mu =2}\sum^N_{j=1}(\xi^{\mu}_i-f)(\xi^{\mu}_j-f)s_j
-\frac{1}{Nf(1-f)} \sum^p_{\mu=1} (\xi^{\mu}_i-f)^2s_i \bigg\} \nonumber\\ \\
\!\!\!\!&=&\!\!\!\! \frac{1}{1+\gamma} \bigg\{ (\xi^1_i-f)m^1+z_i\bigg\},\\
z_i \!\!\!\!&=&\!\!\!\! \sum_{\mu=2}^p(\xi^{\mu}_i-f)m^{\mu} -\alpha s_i.
\end{eqnarray}
The first term of equation (17) is the signal term for the retrieval of the target pattern $\bm{\xi}^1$.
The second term is a cross-talk noise term which represents contributions from non-target patterns and prevents the target pattern $\bm{\xi}^1$ from being retrieved.
According to a mean-field theory \cite[]{Okada96,Shiino92}, the cross-talk noise obeys a Gaussian distribution with mean $\Gamma s_i$ and variance $\sigma^2$.
By using this theory, the neuronal state is written as
\begin{equation}
s_i = \Theta \Big(\frac{1}{1+\gamma}\left((\xi^1-f)m^1 + \sigma z_i + \Gamma s_i \right) - g(\bar{s} - f) - \hat{\theta} \Big).
\end{equation}
By applying Maxwell rule to equation (19), the solution $s_i$ is obtained as
\begin{equation}
s_i = \Theta \Big(\frac{1}{1+\gamma}\left((\xi^1-f)m^1 + \sigma z_i + \frac{\Gamma}{2} \right) - g(\bar{s} - f) - \hat{\theta} \Big).
\end{equation}
The mean-field equations describing the steady state of the model are obtained by the following equations.
For simplicity, the overlap $m^1$ is written as $m$.
\begin{eqnarray}
m &=& -\frac{1}{2} \mathrm{erf}(\phi_1) + \frac{1}{2} \mathrm{erf}(\phi_2),\\
U &=& \frac{f}{\sqrt{2\pi} \sigma}\exp(-\phi_1^2) + \frac{1-f}{\sqrt{2\pi} \sigma}\exp(-\phi_2^2),\\
q &=& \frac{1}{2}-\frac{f}{2}\mathrm{erf}(\phi_1) - \frac{1-f}{2}\mathrm{erf}(\phi_2),\\
\bar{s} &=& \frac{1}{2}-\frac{f}{2}\mathrm{erf}(\phi_1) - \frac{1-f}{2}\mathrm{erf}(\phi_2),
\end{eqnarray}
where $\phi_1=\frac{-(1-f)m+(1+\gamma)g(\bar{s}-f)+(1+\gamma)\hat{\theta}-\frac{\Gamma}{2}}{\sqrt{2 \alpha \sigma^2}}$, $\phi_2=\frac{fm+(1+\gamma)g(\bar{s}-f)+(1+\gamma)\hat{\theta}-\frac{\Gamma}{2}}{\sqrt{2 \alpha \sigma^2}}$, $\sigma^2 = \frac{\alpha q}{(1-U)^2}$, $\Gamma=\frac{\alpha U}{1-U}$, $\mathrm{erf}(y) = \frac{2}{\sqrt{\pi}}\int_0^y\mathrm{exp}(-u^2)du$.
Solving these equations numerically, we discuss the macroscopic state of the model.
The detailed derivation of the mean-field equations is shown in appendix.
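For reference, the right-hand sides of equations (21)-(24) are easy to evaluate with the error function from the standard library; iterating $(m, U, q, \bar{s})$ to a fixed point then solves the system. The sketch below (names ours) follows the printed formulas verbatim:

```python
from math import erf, exp, sqrt, pi

def mean_field_rhs(m, U, q, sbar, alpha, f, gamma, g, theta_hat):
    """One evaluation of the right-hand sides of Eqs. (21)-(24)."""
    sigma = sqrt(alpha * q) / (1.0 - U)                # sigma^2 = alpha*q/(1-U)^2
    Gamma = alpha * U / (1.0 - U)
    d = sqrt(2.0 * alpha) * sigma                      # sqrt(2*alpha*sigma^2)
    base = (1.0 + gamma) * (g * (sbar - f) + theta_hat) - 0.5 * Gamma
    p1 = (-(1.0 - f) * m + base) / d                   # phi_1
    p2 = (f * m + base) / d                            # phi_2
    m_new = 0.5 * (erf(p2) - erf(p1))
    U_new = (f * exp(-p1 * p1) + (1.0 - f) * exp(-p2 * p2)) / (sqrt(2.0 * pi) * sigma)
    q_new = 0.5 - 0.5 * f * erf(p1) - 0.5 * (1.0 - f) * erf(p2)
    return m_new, U_new, q_new, q_new                  # Eqs. (23) and (24) coincide
```

In the limit $\alpha \rightarrow 0$ with $m=1$ the equations reproduce $m=1$ and $\bar{s}=f$, which is a convenient sanity check.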
The mean-field equations derived in this paper are different from the equations in the previous works \cite[]{Bibitchkov02,Torres02}.
The equations that Bibitchkov et al. derived dropped out the equation of $U$ (equation (22) in this paper).
Therefore, the cross-talk noise was not estimated accurately.
Torres et al. assumed that $x_j$ was independent of $s_j$.
However, the equation (\ref{eq.depression}) apparently shows that $x_j$ depends on $s_j$.
Therefore, this assumption was invalid.
Thus, the mean-field equations derived in this paper can describe the steady state of the model more accurately than in the previous works.
\section{Storage Capacity}
We investigate how the synaptic depression influences the steady state of the model.
In this section, we consider excitatory neurons, i.e., $g=0$.
Since the output function in equation (\ref{eq.model}) is a step function, the neuronal state is determined by the sign of an argument.
Then, the neuronal state at the steady state is written as
\begin{align}
s_i &= \Theta\Big( \frac{1}{1+\gamma}\sum_{j \ne i}^N \tilde{J}_{ij}s_j -\hat{\theta}\Big)
= \Theta\Big(\frac{1}{1+\gamma}\big(\sum_{j \ne i}^N \tilde{J}_{ij}s_j -(1+\gamma)\hat{\theta}\big)\Big)\\
&= \Theta\Big( \sum_{j \ne i}^N \tilde{J}_{ij}s_j -(1+\gamma)\hat{\theta}\Big).
\end{align}
If the threshold $\hat{\theta}$ is set at $\hat{\theta}=\theta/(1+\gamma)$, the neuronal state at the steady state is written as
\begin{equation}
s_i = \Theta\Big( \sum_{j \ne i}^N \tilde{J}_{ij}s_j -\theta\Big),
\end{equation}
where $\theta$ is a threshold when the synaptic depression is not incorporated into the model.
This equation implies that the steady state of the model incorporating the synaptic depression is equivalent to the one without the synaptic depression when the threshold is set at $\hat{\theta}=\theta/(1+\gamma)$.
Hereafter, the threshold with the synaptic depression is written as $\hat{\theta}$ while the threshold without the synaptic depression as $\theta$.
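The equivalence in equations (25)-(27) is simple to verify numerically: scaling both the input sum and the threshold by $1/(1+\gamma)$ leaves the step-function decision unchanged. A trivial sketch (name ours):

```python
def fires(weighted_sum, theta, gamma=0.0):
    """Threshold decision of Eqs. (25)-(27): with depression, input and
    threshold are both scaled by 1/(1+gamma), so the output is unchanged."""
    return 1 if weighted_sum / (1.0 + gamma) - theta / (1.0 + gamma) >= 0 else 0
```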
Here, we show the mechanisms where the memory pattern $\bm{\xi}^1$ is retrieved in the model with the synaptic depression.
From the equation (20), the internal potential at the steady state is given by
\begin{equation}
h_i = \frac{1}{1+\gamma}\left\{(\xi^1-f)m^1 + \sigma z_i + \frac{\Gamma}{2} \right\}.
\end{equation}
At first, we consider the case where the number of memory patterns is small, i.e., $p \sim O(1)$ and the synapses are not depressed, i.e., $\gamma = 0$.
The cross-talk noise does not exist in this case, and the second and third terms in the equation (28) vanish because these terms come from the cross-talk noise.
Let the neuronal state at the steady state be equivalent to the first memory pattern: $\bm{s}=\bm{\xi}^1$.
The probability distribution of $h_i$ is shown in Figure \ref{fig1}(a).
When the threshold $\theta$ is set between $-f$ and $1-f$, the neuronal state $s_i$ takes $1$ with probability $f$ and $0$ with $1-f$.
The retrieval of the memory pattern $\bm{\xi}^1$ is successful.
Next, we consider the case where the number of memory patterns is order $N$, i.e., $p \sim O(N)$ and the synapses are not depressed, i.e., $\gamma = 0$.
The probability distribution of $h_i$ is shown in Figure \ref{fig1}(b).
Setting the threshold $\theta$ at an appropriate value is essential for the stable retrieval \cite[]{Okada96,Matsumoto02}.
Finally, we consider the case where the number of memory patterns is order $N$, i.e., $p \sim O(N)$ and the synapses are depressed, i.e., $\gamma>0$.
Since the signal and the variance of the cross-talk noise are scaled by $1/(1+\gamma)$ (see equation (28)), the probability distribution of $h_i$ is shown in Figure \ref{fig1}(c).
When the threshold $\hat{\theta}$ is set at $\hat{\theta}=\theta/(1+\gamma)$, the retrieval of the memory pattern $\bm{\xi}^1$ succeeds.
Solving the mean-field equations (equations (21-24)) numerically, the steady state of the model can be analyzed.
Figure \ref{fig2}(a) shows the dependency of the overlap $m$ on the loading rate $\alpha$ without the synaptic depression, i.e., $\gamma=0$ at $f=0.1$.
The dashed lines are obtained by solving the mean-field equations (equations (21-24)) numerically while the error bars indicate medians and quartile deviations of $m(100)$ obtained by computer simulations in $11$ trials at $N=5000$.
The initial state is set at the first memory pattern, i.e., $\bm{s}(0)=\bm{\xi}^1$, and the threshold is fixed at $\theta=0.51$ which is optimized to maximize the storage capacity.
The storage capacity $\alpha_C$ is $0.44$.
Figure \ref{fig2}(b) shows the dependency of the overlap $m$ on the loading rate $\alpha$ with the synaptic depression, i.e., $\gamma=1.0$, $\tau=2.0$, $U_{SE}=0.5$, $x_j(0)=0.5$, and $\hat{\theta}=\frac{\theta}{1+\gamma}=0.255$.
The factor $x_j(t)$ obeys equation (\ref{eq.depression}) in computer simulations.
The storage capacity $\alpha_C$ is $0.44$.
These results show that the synaptic depression does not change the steady states.
\section{Basins of Attraction}
We investigate how the synaptic depression influences basins of attraction.
When a loading rate $\alpha$ is less than a storage capacity $\alpha_C$, a critical overlap $m_C$ exists \cite[]{Amari88}.
When an initial overlap $m^1(0)$ is larger than the critical overlap $m_C$, the retrieval of the memory pattern $\bm{\xi}^1$ succeeds.
In other words, the system converges to the pattern $\bm{\xi}^1$.
Therefore, the region of $m^1(0)>m_C$ is known as the basins of attraction.
The basins of attraction express an error-correcting ability of the model.
If the basins of attraction are enlarged, it means that the error-correcting ability of the associative memory model is improved.
In this section, as in section 4, we consider only excitatory neurons, i.e., $g=0$.
Here, we investigate the basins of attraction without the synaptic depression, i.e., $\gamma=0$.
At first, we consider the case where the number of memory patterns is small, i.e., $p \sim O(1)$.
Let us consider the probability distribution of the internal potential $h_i(0)$ at time $t=0$.
The peak of the distribution of $\xi_i^1=1$ is located at $(1-f)m^1(0)$.
If the peak of the distribution of $\xi_i^1=1$ is smaller than the threshold $\theta$, the states of all neurons become $0$ at time $t=1$ and the retrieval fails (Figure \ref{fig3}(a)).
Therefore, for successful retrieval the peak must be larger than the threshold $\theta$, i.e., $(1-f)m^1(0) > \theta$ (Figure \ref{fig3}(b)).
When the loading rate $\alpha$ is small, the critical overlap $m_C$ therefore satisfies $m_C=\frac{\theta}{1-f}$.
Next, we consider the case where the number of memory patterns is order $N$, i.e., $p \sim O(N)$.
If the threshold is set at a small value at $t=0$, the initial overlap $m^1(0)$ can be a small value.
If the threshold is fixed at a small value, the threshold crosses the distribution of $\xi^1_i=0$ because of the cross-talk noise (Figure \ref{fig4}(a)).
At the next time $t=1$, the neuronal state whose internal potential $h_i(0)$ is larger than the threshold $\theta$ becomes $1$ even though the neuron codes $\xi^1_i=0$ (shadow part of Figure \ref{fig4}(a)).
The mean firing rate of the model increases, and the overlap decreases.
Then the distribution of $\xi^1_i=1$ is smaller than the threshold, and the retrieval fails.
If the threshold increases at $t=1$ to keep the mean firing rate at a constant level, the overlap increases at the next time $t=2$.
As the threshold increases to maintain the mean firing rate at a constant level, the overlap increases (Figure \ref{fig4}(b)).
This implies that if the threshold increases gradually with the signal as the retrieval progresses, the retrieval succeeds even when the initial overlap $m^1(0)$ is small.
In other words, the basins of attraction are enlarged.
This is an activity control mechanism \cite[]{Okada96,Matsumoto02}.
Here, we investigate the basins of attraction with the synaptic depression, i.e., $\gamma>0$.
We consider the case where the number of memory patterns is order $N$, i.e., $p \sim O(N)$.
We set $x_j(0)=1$, which implies that the synaptic depression does not work at time $t=0$.
Let the threshold $\hat{\theta}$ be fixed at a small value.
The distribution of the internal potential $h_i(0)$ at time $t=0$ is shown in Figure \ref{fig5}(a).
In the progress of the retrieval, the overlap increases.
This is similar to the case of Figure \ref{fig4}(a).
By using the synaptic depression, the internal potential is written as
\begin{equation}
h_i(t) = \sum_{j=1}^N \tilde{J}_{ij} x_j(t) s_j(t).
\end{equation}
As the retrieval progresses, $x_j(t)$, which follows equation (\ref{eq.depression}), decreases.
The signal can remain nearly fixed because the effect of its increase is canceled out by the decrease of $x_j(t)$.
At the steady state, the distribution of $h_i$ is as shown in Figure \ref{fig4}(b).
The relative relationship between the fixed threshold and the signal thus does not change in the progress of the retrieval.
In the activity control mechanism, the relative relationship between the threshold and the signal likewise does not change in the progress of the retrieval.
Thus, the synaptic depression might provide qualitatively the same mechanism as the activity control.
In order to check the qualitative consideration, we calculate the basins of attraction by computer simulations.
Figure \ref{fig6}(a) shows the basins of attraction without the synaptic depression, i.e., $\gamma=0$.
The region in which $m^1(0)$ is larger than $m_C$ represents the basin of attraction for the retrieval of the target pattern $\bm{\xi}^1$.
The value $m_C$ is obtained by setting the initial state of the network at $\bm{\xi}^1$ with additional noise.
We employ the following method to add noise.
$100y\%$ of the minority components ($s_i(0)=1$) are flipped, while the same number of majority components ($s_i(0)=0$) are also flipped.
The initial overlap $m^1(0)$ is given as $1-\frac{2y}{1-f}$.
Then the mean firing rate of the model is kept equal to the firing rate of the memory pattern, $f$.
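The flipping procedure can be sketched as follows (numpy; names ours); a fraction $y$ of the active bits and the same number of silent bits are flipped, so the mean rate stays at $f$:

```python
import numpy as np

def flip_noise(xi, y, rng):
    """Initial state: flip 100y% of the active bits of xi and equally many
    silent bits, preserving the mean firing rate f."""
    s = xi.copy()
    ones = np.flatnonzero(xi == 1)
    zeros = np.flatnonzero(xi == 0)
    k = int(round(y * len(ones)))
    s[rng.choice(ones, k, replace=False)] = 0      # silence k active bits
    s[rng.choice(zeros, k, replace=False)] = 1     # activate k silent bits
    return s
```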
When the threshold $\theta$ is fixed at the optimal value which maximizes the storage capacity $\alpha_C$, the basins of attraction are small ($-$ in Figure \ref{fig6}(a)).
When the threshold $\theta$ is small, the basins of attraction are enlarged, but the storage capacity decreases ($\square$, $\times$, $*$ in Figure \ref{fig6}(a)).
Figure \ref{fig6}(b) shows the basins of attraction with the synaptic depression.
As discussed in section 4, the storage capacity $\alpha_C$ takes the value $0.44$ because the threshold is set at $\hat{\theta}=\theta/(1+\gamma)$.
Since the optimal threshold is $\theta=0.51$ without the synaptic depression, the threshold $\hat{\theta}$ with the synaptic depression is set at $\hat{\theta}=\theta/(1+\gamma)=0.425$ ($\gamma=0.2$, $\square$ in Figure \ref{fig6}(b)), $\hat{\theta}=0.340$ ($\gamma=0.5$, $\times$ in Figure \ref{fig6}(b)), and $\hat{\theta}=0.255$ ($\gamma=1.0$, $*$ in Figure \ref{fig6}(b)).
As the value of $\gamma$ increases, the basins of attraction are more enlarged.
To check the qualitative consideration that the synaptic depression might have the same mechanism as the activity control mechanism, we compare the temporal change of the overlap between the cases using the synaptic depression and the activity control mechanism.
Figure \ref{fig7}(a) and (b) show the temporal change of the overlap $m^1(t)$ and the factor $x_j(t)$, respectively.
The parameters used in Figure \ref{fig7}(a) and (b) are optimized to enlarge the basins of attraction.
Figure \ref{fig7}(a) indicates that the system converges to the target pattern $\bm{\xi}^1$ within $4$ time steps.
Figure \ref{fig7}(b) indicates that the value $x_j(t)$ converges at $5$ time steps.
This result indicates that it is crucial that the time constant of the convergence to $\bm{\xi}^1$ is close to that of the convergence of $x_j(t)$ in order to enlarge the basins of attraction.
To compare the effect of the activity control mechanism with that of the synaptic depression, the temporal change of $m^1(t)$ with the activity control mechanism is shown in Figure \ref{fig7}(c).
The threshold is set to maintain the mean firing rate of the model at $f$.
Figure \ref{fig7}(c) indicates that the system converges to $\bm{\xi}^1$ within $5$ time steps.
The dynamics of the overlap are similar to those when the synaptic depression is incorporated (Figure \ref{fig7}(a)).
The basin of attraction in this case is larger than when the synaptic depression is incorporated.
The temporal change of the threshold is shown in Figure \ref{fig7}(d).
The threshold converges at $5$ time steps.
These results support that the synaptic depression might have qualitatively the same mechanism as the activity control mechanism.
Bibitchkov et al. reported that the synaptic depression enlarged the basins of attraction a little at a small loading rate but it made the basins of attraction shrink at a large loading rate \cite[]{Bibitchkov02}.
Setting a threshold at an appropriate value is critical in terms of increasing the storage capacity and enlarging the basins of attraction \cite[]{Okada96}.
However, Bibitchkov et al. did not set a threshold at an appropriate value.
Thus, they could not separate the effect of the synaptic depression and that of a threshold.
In contrast, we set a suitable threshold to avoid the effect of a threshold, and we can discuss only the effect of the synaptic depression.
Thus, we obtain the results that the synaptic depression enlarges the basins of attraction not only at a small loading rate but also at a large loading rate.
The results obtained in this study are qualitatively different from the results obtained by \cite{Bibitchkov02}.
\section{Excitatory-Inhibitory Balanced Network}
In sections 4 and 5, we considered the model which consisted of excitatory neurons, i.e., $g=0$.
Here, we consider the excitatory-inhibitory balanced network, i.e., $g>0$.
It is known that inhibitory neurons regulate the overall activity of excitatory neurons.
If the overall activity of the excitatory neurons goes up, the inhibitory neurons send strong inhibition to the excitatory neurons to suppress the overall activity of the excitatory neurons.
If the overall activity of the excitatory neurons goes down, the inhibitory neurons become silent.
This excitatory-inhibitory balanced network must play the role of an activity control in cortical circuits.
In other words, the inhibitory neurons control the effective threshold of the excitatory neurons.
In section 5 we found that the synaptic depression might have qualitatively the same mechanism as the activity control mechanism, even though the synaptic depression is a local phenomenon and the activity control is a global phenomenon.
In this section, we investigate whether the excitatory-inhibitory balance and the synaptic depression work cooperatively or not.
To do this, we consider the excitatory-inhibitory balanced network which does not incorporate the synaptic depression, i.e., $\gamma=0$.
The state of the $i$-th excitatory neuron is determined by
\begin{eqnarray}
s_i(t+1) &=& \Theta\Big( \sum^N_{j \ne i} \tilde{J}_{ij}s_j(t) - g\big(\bar{s}(t) -f \big) - \theta \Big)\\
&=& \Theta\Big( \sum^N_{j \ne i} \tilde{J}_{ij}s_j(t) - \bar{\theta}(t) \Big),
\end{eqnarray}
where $\bar{s}(t)=\frac{1}{N}\sum^N_{j \ne i} s_j(t)$ and $\bar{\theta}(t)=g\big(\bar{s}(t) -f \big)+\theta$.
When the retrieval of memory patterns succeeds, the mean firing rate of the excitatory neurons, $\bar{s}(t)$, is close to $f$.
Therefore, the effect of the inhibition disappears:
\begin{equation}
g(\bar{s}(t) -f) \approx 0.
\end{equation}
This means that the storage capacity does not change.
We investigate whether the balanced network enlarges the basins of attraction.
At first, we consider the case where the number of memory patterns is order $1$, i.e., $p \sim O(1)$.
The cross-talk noise does not exist in this case.
When the initial overlap $m^1(0)$ is smaller than $\frac{\theta}{1-f}$, no neurons fire at the next time $t=1$.
Even if the effective threshold $\bar{\theta}(t)$ decreases at time $t=1$, the internal potential of the neurons with $\xi_i^1=1$ also decreases because the overlap takes a very small value.
Therefore, the retrieval fails.
In other words, the basins of attraction are enlarged only a little when $p$ is a small value.
Next, we consider the case where the number of memory patterns is order $N$, i.e., $p \sim O(N)$.
The distribution of an internal potential is shown in Figure \ref{fig8}.
When the initial overlap $m^1(0)$ is smaller than $\frac{\theta}{1-f}$, almost all neurons do not fire at the next time $t=1$.
However, at $p \sim O(N)$ each distribution becomes broader because of the cross-talk noise.
This enables some neurons to activate at time $t=1$.
Since the mean firing rate of the excitatory neurons, $\bar{s}(t)$, is smaller than $f$, the effective threshold $\bar{\theta}(t)$ decreases, as shown in Figure \ref{fig8}(b).
In the progress of the retrieval, the effective threshold changes over time, following the value of $\bar{s}(t)$.
If the model retrieves the target pattern $\bm{\xi}^1$, $\bar{s}(t)$ is close to $f$.
In other words, the inhibition disappears at the steady state (Figure \ref{fig8}(c)).
Here, the synaptic depression is incorporated into the excitatory-inhibitory balanced network, i.e., $\gamma>0$.
For simplicity, we assume that the synaptic depression occurs at the synaptic sites among the excitatory neurons.
Then the state of the $i$-th excitatory neuron can be written as
\begin{equation}
s_i(t+1) = \Theta\Big( \sum^N_{j \ne i} \tilde{J}_{ij}x_j(t)s_j(t) - g\big(\bar{s}(t) -f \big) - \hat{\theta} \Big).
\end{equation}
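A single parallel update of this balanced network can be sketched in code (an illustrative sketch; the covariance storage rule and its normalization below are our assumptions, chosen so that a stored sparse pattern is a fixed point):

```python
import numpy as np

rng = np.random.default_rng(1)
N, p, f = 1000, 3, 0.1

# Sparse memory patterns with firing rate exactly f, stored with a
# covariance rule (normalization chosen so the retrieval signal is O(1)).
xi = np.zeros((p, N))
for mu in range(p):
    xi[mu, rng.choice(N, size=int(f * N), replace=False)] = 1.0
J = (xi - f).T @ (xi - f) / (N * f * (1 - f))
np.fill_diagonal(J, 0.0)

def step(s, x, g, f, theta_hat):
    """One parallel update:
    s_i(t+1) = Theta(sum_{j!=i} J_ij x_j(t) s_j(t) - g*(s_bar(t) - f) - theta_hat)."""
    s_bar = s.mean()
    return (J @ (x * s) - g * (s_bar - f) - theta_hat > 0).astype(float)

# Without depression (x_j = 1) a stored pattern is a fixed point for an
# intermediate threshold; the inhibitory term vanishes since s_bar = f.
s_next = step(xi[0], np.ones(N), g=2.0, f=f, theta_hat=0.5)
print(np.array_equal(s_next, xi[0]))  # True
```

With depression, one would additionally update the factors $x_j(t)$ between calls to `step`; the sketch only illustrates the thresholding and the global inhibitory term $g(\bar{s}(t)-f)$.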
We consider the case where the number of memory patterns is order $1$, i.e., $p \sim O(1)$.
The retrieval fails at an initial overlap $m^1(0)$ without the synaptic depression when $(1-f)m^1(0)$ is lower than the threshold $\theta$ (Figure \ref{fig9}(a)).
On the other hand, the retrieval succeeds at the same initial overlap $m^1(0)$ with the synaptic depression (Figure \ref{fig9}(b)) because $(1-f)m^1(0)$ is higher than the threshold $\hat{\theta}=\frac{\theta}{1+\gamma}$.
Therefore, the basins of attraction are enlarged at a small loading rate.
At a large loading rate ($p \sim O(N)$), the effect of the synaptic depression is smaller than that of the inhibition because of the cross-talk noise.
Thus, at a large loading rate, the synaptic depression does not change the basins of attraction in the excitatory-inhibitory balanced network.
In order to check the qualitative consideration, we calculate the basins of attraction by computer simulations.
Figure \ref{fig10}(a) shows the basins of attraction without the synaptic depression.
As the value of $g$ increases, the basins of attraction are enlarged, although at a small loading rate they are enlarged only a little.
Thus, the excitatory-inhibitory balanced network enlarges the basins of attraction.
Figure \ref{fig10}(b) shows the basins of attraction with the synaptic depression.
To compare the effect of the synaptic depression, the case of the balanced network without the synaptic depression ($\square$) is shown.
In the balanced network with the synaptic depression, the basins of attraction are the largest ($*$).
Even at a small loading rate the basins are enlarged.
Thus, the excitatory-inhibitory balance and the synaptic depression work cooperatively.
\section{Discussion}
In this paper, we investigated how the synaptic depression influenced the performance of the associative memory model in terms of the storage capacity and the basins of attraction.
Using the mean-field theory and the computer simulations, we found that the basins of attraction were enlarged whereas the storage capacity did not decrease.
In other words, the synaptic depression had a mechanism by which the neuron threshold effectively increased in the progress of the retrieval.
Furthermore, the excitatory-inhibitory balance and the synaptic depression worked cooperatively in the excitatory-inhibitory balanced network.
This result suggests that the short-term synaptic depression might improve an error-correcting ability in cortical circuits.
In our model the synaptic depression was assumed to occur only at the synaptic sites among excitatory neurons.
In the cortical circuits the synaptic depression occurs not only at excitatory synaptic sites but also at inhibitory synaptic sites.
Recently, Galarreta and Hestrin found that long-term firing induced a much stronger depression of excitatory synapses than that of inhibitory synapses although the initial rates of depression were similar at both excitatory and inhibitory synapses \cite[]{Galarreta98}.
The result implies that while the synaptic depression induces strong depression at excitatory synapses, it induces only weak depression at inhibitory synapses.
If the weak depression at inhibitory synapses is regarded as no depression in our model, our assumption that the synaptic depression occurs among the excitatory neurons might be valid.
\newpage
\bibliographystyle{apalike}
\section{Introduction}
Versions of the maximum principle for complex-valued functions defined on a domain in $\CC$ have been
of interest since the development of the classical maximum modulus theorem and
Phragm\'en--Lindel\"of principle for holomorphic functions (see, e.g. \cite[Chap. V]{titchmarsh}).
It is important to distinguish between two types of result here.
First, there is the {\em weak maximum principle\/}
asserting that under certain circumstances a nonconstant function $f: \Omega \to \CC$ cannot attain a local maximum in its domain
$\Omega$: thus if $\Omega$ is bounded and $f$ is continuous on $\overline\Omega$ we have
\begin{equation}\label{eq:maxom}
\sup_{z \in \Omega} |f(z)| = \sup_{z \in \partial\Omega} |f(z)|.
\end{equation}
Second -- and this will be our main concern in this paper -- there is the
{\em strong maximum principle\/} or {\em Phragm\'en--Lindel\"of principle}. This generally applies to unbounded
domains, and generally a supplementary hypothesis on $f$ is required for the conclusion (\ref{eq:maxom}) to hold.
For example, if $f: \Omega \to \CC$ is analytic, where $\Omega=\CC_+$, the right-hand half-plane $\{z \in \CC: \re z > 0\}$,
then if $f$ is known to be bounded we may conclude that (\ref{eq:maxom}) holds, whereas the example
$f(z)=\exp(z)$ shows that it does not hold in general.
\\
We shall use the following standard notation:\\
\[
\partial f= \dfrac{\partial f}{\partial z}=\frac12 (f_x-if_y) \quad \hbox{ and} \quad \overline\partial f=\dfrac{\partial f}{\partial \overline z}
=\frac12 (f_x+if_y).
\]
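As a quick numerical sanity check of these operators (an illustrative sketch, not part of the text), central finite differences recover $\partial f$ and $\overline\partial f$, with $\overline\partial f=0$ for a holomorphic $f$:

```python
import numpy as np

def wirtinger(f, z, h=1e-6):
    """Finite-difference Wirtinger derivatives of f at z:
    df = (f_x - i*f_y)/2 and dbar_f = (f_x + i*f_y)/2."""
    fx = (f(z + h) - f(z - h)) / (2 * h)
    fy = (f(z + 1j * h) - f(z - 1j * h)) / (2 * h)
    return (fx - 1j * fy) / 2, (fx + 1j * fy) / 2

z0 = 1.0 + 2.0j
print(wirtinger(lambda z: z**2, z0))  # (~2*z0, ~0): holomorphic, dbar vanishes
print(wirtinger(np.conj, z0))         # (~0, ~1): anti-holomorphic, d vanishes
```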
For quasi-conformal mappings $f$, that is, those satisfying the Beltrami equation $\overline\partial f = \nu \partial f$ with
$|\nu| \le \kappa < 1$, the weak maximum principle holds (see, for example \cite{chen}). This fact was used in \cite[Prop. 4.3.1]{BLRR}
to deduce a weak maximum principle for functions solving the conjugate Beltrami equation
\beq\label{eq:conjbelt}
\overline\partial f = \nu \overline{\partial f}.
\eeq
Their argument is based on the fact that if $f$ is a solution to (\ref{eq:conjbelt}), then
it also satisfies a classical Beltrami equation
$\overline\partial f = \nu_f \partial f$,
where $\nu_f(z)=\nu(z) \overline{\partial f(z)}/\partial f(z)$, and hence $f=G \circ h$ where $G$ is holomorphic and $h$ is a quasi-conformal
mapping (cf. \cite[Thm. 11.1.2]{IM01}).
Carl \cite{carl} considered functions $w$ satisfying equations of the form
\beq\label{eq:carl}
\overline\partial w(z)+A(z)w(z)+B(z)\overline{w(z)} = 0
\eeq
and deduced a weak maximum principle for such functions,
analogous to (\ref{eq:maxom}), under certain hypotheses on the functions $A$ and $B$.
We shall take this as our starting point.\\
For general background on generalized analytic functions (pseudo-analytic functions) we refer to
the books \cite{bers, kravchenko,vekua}.
The following definitions are taken from the recent paper \cite{BLRR}.
\begin{defn}
Let
$1 \le p < \infty$.
For $\nu \in W^{1,\infty}(\DD)$ (i.e., a Lipschitz function with bounded
partial derivatives), the class $H^p_\nu$ consists of
all measurable functions $f: \DD \to \CC$ satisfying the conjugate Beltrami equation
(\ref{eq:conjbelt})
in a distributional sense, such that the norm
\[
\|f\|_{H^p_\nu}=\left( \esssup_{0<r<1} \frac{1}{2\pi} \int_0^{2\pi} |f(re^{it})|^p \, dt \right)^{1/p}
\]
is finite. Clearly for $\nu=0$ we obtain the classical Hardy space $H^p(\DD)$.
If instead $\nu$ is defined on an arbitrary subdomain $\Omega \subset \CC$,
we may define the class $H^\infty_\nu(\Omega)$ as the space of all bounded measurable functions satisfying
(\ref{eq:conjbelt}), equipped with the supremum norm.\\
We may analogously define spaces $G^p_\alpha(\DD)$, where $\alpha \in L^\infty(\DD)$, and in general $G^\infty_\alpha(\Omega)$,
where now, for a function $w$ we replace (\ref{eq:conjbelt}) by
\beq\label{eq:dbar}
\overline\partial w = \alpha \overline w.
\eeq
Once again, the case $\alpha=0$ is classical.
\end{defn}
When $\nu$ is real (the most commonly-encountered situation),
there is a link between the two notions: suppose that $\nu \in W^{1,\infty}_\RR(\Omega)$ with $\|\nu\|_\infty \le \kappa < 1$, and
set $\sigma=\dfrac{1-\nu}{1+\nu}$ and $\alpha= \frac{\overline\partial \sigma}{2\sigma}$, so that $\sigma \in W^{1,\infty}_\RR(\Omega)$.
Then $f \in L^p(\DD)$ satisfies (\ref{eq:conjbelt}) if and only if
$w:= \dfrac{f-\nu \overline f}{\sqrt{1-\nu^2}}$ satisfies (\ref{eq:dbar}).\\
We shall mainly be considering the class $G^\infty_\alpha$, for which it is possible to prove a strong maximum
principle and a generalization of the Hadamard three-lines theorem under mild hypotheses on $\alpha$, which are
satisfied in standard examples. The referee has suggested that there may be a link between these assumptions and the
strict ellipticity of $\sigma$, although we have not been able to show this.
\section{Functions defined on unbounded domains}
The following result is an immediate consequence of \cite[Thm. 1]{carl}, taking
$A=0$ and $B(z)=-\alpha(z)$ in (\ref{eq:carl}) in order to obtain (\ref{eq:dbar}).
\begin{prop}\label{prop:carl}
Suppose that $\Omega$ is a bounded domain in $\CC$ and that $w$ is a continuous function on $\overline \Omega$
such that (\ref{eq:dbar}) holds in $\Omega$, where $\alpha$ satisfies $2|\alpha|^2 \ge | \partial \alpha|$.
Then
$|w(z)| \le \sup_{\zeta \in \partial \Omega}|w(\zeta)|$ for all $z \in \Omega$.
\end{prop}
\beginpf
Taking $k=2$ in \cite[Thm. 1]{carl}, we require that the matrix $M=(m_{ij})_{i,j=1}^2$ be negative semi-definite,
where, with $a=-2|\alpha|^2 $ and $b=-\partial \alpha$, we have
\[
M=\begin{pmatrix}a + \re b & \im b \\ \im b & a-\re b\end{pmatrix}.
\]
On calculating $m_{11}$, $m_{22}$ (which must be non-positive) and $\det M$ (which must be non-negative) we obtain the
sufficient conditions
$-2|\alpha|^2 \pm \re \partial \alpha \le 0$ and $2|\alpha|^2 \ge | \partial \alpha|$: clearly the second condition
implies the first.
\endpf
\begin{exam} {\rm
In the example $\sigma=1/x$, occurring in the study of the tokamak reactor
\cite{fl,flps}, we have $\alpha(x)=-\frac{1}{4x}$ and $\partial\alpha=\frac{1}{8x^2}$; thus the inequality
$2|\alpha|^2 \ge | \partial \alpha|$ is always an equality.
Note that by rescaling $z$ we may transform the equation (\ref{eq:dbar}) to
one with $\alpha= -\frac{1}{\lambda x}$ for any $\lambda>0$ (with the domain also changing); then the inequality
requires that $2/\lambda^2 \ge 1/2\lambda$, so that if we take $0<\lambda<4$ the inequality is strict.
}
\end{exam}
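The equality $2|\alpha|^2=|\partial\alpha|$ for $\alpha(x)=-\frac{1}{4x}$ can also be confirmed numerically (an illustrative sketch):

```python
import numpy as np

x = np.linspace(0.1, 10.0, 1000)
alpha = -1.0 / (4.0 * x)
# For alpha depending on x alone, d(alpha) = alpha'(x)/2 = 1/(8 x^2).
d_alpha = 1.0 / (8.0 * x**2)
print(np.max(np.abs(2.0 * alpha**2 - d_alpha)))  # ~0: the inequality is an equality
```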
Now for $\eps>0$ we write $h_\eps(z)=1/(1+\eps z)$, and note that
whenever $\Omega \subset \CC_+$ is a domain, we have that
the functions $h_\eps$ satisfy
\begin{enumerate}[(i)]
\item\label{en1} For all $\eps>0$, $h_\eps \in \Hol(\Omega) \cap C(\overline\Omega)$.
\item\label{en2} For all $\eps>0$, $\lim_{|z| \to \infty, z \in \overline\Omega} h_\eps(z)=0$.
\item\label{en3} For all $z \in \Omega$, $\lim_{\eps \to 0} |h_\eps(z)|=1$.
\item\label{en4} For all $\eps>0$, for all $z \in \partial\Omega$, $|h_\eps(z)| \le 1$.
\end{enumerate}
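These properties are elementary to check; the following brief numerical illustration (ours) covers properties (\ref{en2}) and (\ref{en4}), using $|1+\eps z| \ge 1$ on the closed right half-plane:

```python
import numpy as np

rng = np.random.default_rng(3)
eps = 0.3
z = rng.uniform(0.0, 50.0, 2000) + 1j * rng.uniform(-50.0, 50.0, 2000)  # Re z >= 0
h = 1.0 / (1.0 + eps * z)

print(np.max(np.abs(h)) <= 1.0)          # True: |1 + eps*z| >= 1 when Re z >= 0
print(np.abs(1.0 / (1.0 + eps * 1e9)))   # ~3e-9: h_eps(z) -> 0 as |z| -> infinity
```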
Suppose that $\overline\partial w=\alpha \overline w$ and that $h$ is holomorphic; then
$\overline\partial(hw)=\beta \overline{hw}$, where $\beta=\alpha h/\overline h$. Moreover,
\[
\partial \beta = \partial(\alpha h)/\overline h= (\partial \alpha)(h/\overline h) + \alpha (\partial h)/\overline h.
\]
That is, with $h=h_\eps$, we have $|\beta|=|\alpha|$ and $|\partial \beta| \le |\partial \alpha| + |\alpha| |\partial h_\eps|/|h_\eps|$.
\begin{thm}\label{thm:halfplane}
Suppose that $\Omega \subset \CC_+$ (not necessarily bounded) and that
$w$ is a continuous bounded function on $\overline \Omega$
such that (\ref{eq:dbar}) holds in $\Omega$ where $\alpha$ is a $C^1$ function satisfying $2|\alpha|^2 \ge | \partial \alpha|+
|\alpha| |\partial h_\eps|/|h_\eps|$ for all $\eps>0$.
Then
$|w(z)| \le \sup_{\zeta \in \partial \Omega}|w(\zeta)|$ for all $z \in \Omega$.
\end{thm}
\beginpf
Fix $\eps>0$ and set $M=\sup_{\zeta \in \partial \Omega}|w(\zeta)|$. Suppose that $M>0$. Then by property (\ref{en2}) there is an $\eta>0$ such that
for all $z \in \overline\Omega$ with $|z| \ge \eta$ we have $|w(z) h_\eps(z)| \le M$.
Now, by property (\ref{en1}) and Proposition \ref{prop:carl} we have
\[
\sup_{z \in \Omega \cap D(0,\eta)} |w(z)h_\eps(z)| = \sup_{z \in \partial (\Omega \cap D(0,\eta))}|w(z)h_\eps(z)|,
\]
at least if $2|\alpha|^2 \ge | \partial \alpha|+ |\alpha||\partial h_\eps|/|h_\eps| $.\\
Now $\partial (\Omega \cap D(0,\eta)) \subset (\partial\Omega \cap \overline{D(0,\eta)}) \cup (\partial D(0,\eta) \cap \overline\Omega)$.
By hypothesis, $|w(z)| \le M$ if $z \in \partial\Omega$, and by property (\ref{en4}), $|h_\eps(z)| \le 1 $ for $z \in \partial \Omega$.
So $\sup_{z \in \partial\Omega \cap \overline{D(0,\eta)}} |w(z) h_\eps(z)| \le M$.
By the definition of $\eta$ we also have $|w(z) h_\eps(z)| \le M$ if $|z| \ge \eta$ with $z \in \overline \Omega$, and in
particular for $z \in \overline\Omega \cap \partial D(0,\eta)$.
We conclude that $\sup_{z \in \Omega \cap D(0,\eta)} |w(z) h_\eps(z)| \le M$. Moreover, $|w(z) h_\eps(z)| \le M$ whenever
$z \in \overline\Omega$ with $|z| \ge \eta$, and hence
$\sup_{z \in \Omega} |w(z) h_\eps(z)| \le M$. Now, letting $\eps$ tend to $0$, and using property (\ref{en3}), we have
the result in the case $M>0$.\\
If $M=0$, then $\sup_{z \in \partial \Omega} |w(z)| \le \gamma$ for every $\gamma>0$, and by the argument above the same bound holds on $\Omega$. Letting $\gamma \to 0$ we conclude that $w$ is identically $0$ on $\Omega$.
\endpf
\begin{exam}\label{ex:2.2}
{\rm
Consider the case $\alpha= -\frac{1}{\lambda x}$ and $\partial \alpha = \frac{1}{2\lambda x^2}$. For the
hypotheses of the theorem to be valid we require
\[
\frac{2}{\lambda^2 x^2} \ge \frac{1}{2\lambda x^2}+\frac{1}{\lambda x}\frac{\eps}{|1+\eps z|}.
\]
If $\lambda=1$ (and by rescaling the domain we can assume this) then
this always holds, since $|1+\eps z| \ge \eps x$.
}
\end{exam}
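For $\lambda=1$ the inequality can be checked numerically over a grid of the half-plane and a range of $\eps$ (an illustrative sketch; the grid bounds are arbitrary):

```python
import numpy as np

# lambda = 1: check 2/x^2 >= 1/(2 x^2) + (1/x) * eps / |1 + eps*z|
# on a grid of the right half-plane and a range of eps.
x = np.linspace(0.05, 20.0, 200)[:, None, None]
y = np.linspace(-50.0, 50.0, 101)[None, :, None]
eps = np.logspace(-3, 3, 25)[None, None, :]
z = x + 1j * y
lhs = 2.0 / x**2
rhs = 1.0 / (2.0 * x**2) + (1.0 / x) * eps / np.abs(1.0 + eps * z)
print(bool(np.all(lhs >= rhs)))  # True, since |1 + eps*z| >= eps*x
```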
In the following theorem, it will be helpful to note that we shall
be considering composite mappings as follows:
\[
\Lambda \xrightarrow{h} \Omega \xrightarrow{w} \CC \qquad \hbox{and}
\qquad
\Lambda \xrightarrow{h} \Omega \xrightarrow{\alpha} \CC.
\]
\begin{thm}\label{thm:notdense}
Suppose that $\Omega \subset \CC$ is simply-connected and that the disc $D(a,r)$ is contained in $ \CC \setminus \overline{\Omega}$.
Let $h: \CC \to \CC$ be defined by $h(z)=re^z+a$, and let
$\Lambda$ be a component of $h^{-1}(\Omega)$.
Set $g_\eps(z)=1/(1+\eps g(z))$, where $g(z)=\log\left (\dfrac{z-a}{r}\right)$ is a single-valued inverse to $h$ defined on $\Omega$.
Suppose that
$w$ is a continuous bounded function on $\overline \Omega$
such that (\ref{eq:dbar}) holds in $\Omega$ with $\alpha$ a $C^1$ function satisfying
\beq\label{eq:alpha}
2|\alpha|^2 \ge | \partial \alpha|+
|\alpha| |\partial g_\eps|/|g_\eps|
\eeq
for all $\eps>0$.
Then
$|w(z)| \le \sup_{\zeta \in \partial \Omega}|w(\zeta)|$ for all $z \in \Omega$.
\end{thm}
\beginpf
First we identify the equation satisfied by $v=w \circ h$, where $h$ is holomorphic. Namely,
\begin{eqnarray*}
\overline\partial v &=& \overline\partial (w \circ h )= \overline{\partial(\overline w \circ h)}=\overline{(\partial \overline w \circ h)(\partial h)}=(\overline\partial w \circ h)(\overline{\partial h})\\
&=& ((\alpha\overline w)\circ h)(\overline{\partial h})
= (\alpha \circ h)(\overline w \circ h) (\overline{\partial h})= \beta \overline v,
\end{eqnarray*}
where $\beta=(\alpha \circ h)(\overline{\partial h})$. Note that $\partial\beta = (\partial\alpha \circ h)|\partial h|^2$, since $\partial (\overline{\partial h})=0$.
The condition
\beq
\label{eq:beta}
2|\beta|^2 \ge |\partial\beta| +|\beta| |\partial h_\eps|/|h_\eps|
\eeq
at a point of $\Lambda$ can be rewritten
\[
2|\alpha \circ h|^2 |\partial h|^2 \ge |\partial\alpha \circ h|\,|\partial h|^2 + |\alpha \circ h| \, |\partial h| |\partial h_\eps|/|h_\eps|.
\]
Now $g_\eps=h_\eps \circ g$; thus $\partial h_\eps=(\partial g_\eps \circ h)(\partial h)$.
That is, (\ref{eq:beta}) is equivalent to
\[
2|\alpha \circ h|^2 |\partial h|^2 \ge |\partial\alpha \circ h|\,|\partial h|^2 + |\alpha \circ h| \, |\partial h|^2 |\partial g_\eps \circ h|/|g_\eps \circ h|,
\]
or
\[
2|\alpha \circ h|^2 \ge |\partial\alpha \circ h| + |\alpha \circ h| |\partial g_\eps \circ h|/|g_\eps \circ h|.
\]
The set $\Lambda$ is open, and thus $\partial\Lambda \cap \Lambda = \emptyset$ and also $h(\partial\Lambda) \cap \Omega = \emptyset$.
Moreover, since $h(\partial\Lambda) \subset h(\overline\Lambda) \subset \ol{h(\Lambda)}$, we get
$h(\partial\Lambda) \subset \ol\Omega \setminus \Omega = \pa \Omega$.
Since $w$ is bounded on $\Omega$, the function $v=w \circ h$ is bounded on $\Lambda$, and using the calculations above and
Theorem \ref{thm:halfplane} with condition (\ref{eq:beta}), we see that
\[
\sup_{z \in \Lambda} |v(z)| = \sup_{z \in \pa\Lambda} |v(z)|.
\]
Since $h(\Lambda)=\Omega$, $\sup_{z \in \Lambda} |v(z)|=\sup_{z \in \Omega}|w(z)|$. Moreover, since
$h(\pa\Lambda) \subset \pa\Omega$, we have also
\[
\sup_{z \in \pa\Lambda} |v(z)| \le \sup_{z \in \pa\Omega} |w(z)|.
\]
It follows that $\sup_{z \in \Omega}|w(z)| \le \sup_{z \in \pa\Omega}|w(z)|$ and we obtain equality.
\endpf
We now provide a generalization of the three-lines theorem of Hadamard (see, for example \cite[Thm. 9.4.8]{krantz}
for the classical formulation with $\alpha=0$).
\begin{thm}
Suppose that $a$ and $b$ are real numbers with $0<a<b$, and let $\Omega =\{z \in \CC: a < \re z < b\}$.
Suppose that
$w$ is a continuous bounded function on $\overline \Omega$
such that (\ref{eq:dbar}) holds in $\Omega$ where $\alpha$ is a $C^1$ function
satisfying
\beq\label{eq:3linescon}
2|\alpha|^2 \ge |\partial\alpha|+\frac{|\alpha| |\log(M(a)/M(b))|}{b-a}+|\alpha||\partial h_\eps|/|h_\eps|
\eeq
for each $\eps>0$.
Then the function $M$ defined on $[a,b]$ by
\[
M(x)=\sup_{y \in \RR} |w(x+iy)|
\]
satisfies, for all $x \in (a,b)$,
\[
M(x)^{b-a} \le M(a)^{b-x} M(b)^{x-a}.
\]
That is, $\log M$ is convex on $(a,b)$.
\end{thm}
\beginpf
Consider the function $h$ defined on $\overline\Omega$ by
\[
h(z)=M(a)^{(z-b)/(b-a)}M(b)^{(a-z)/(b-a)},
\]
where quantities of the form $M^\omega$ are defined for $M>0$ and $\omega \in \CC$ as $\exp(\omega \log M)$,
taking the principal value of the logarithm.
Now $v:= hw$ satisfies $|v(z)| \le 1$ for $z \in \partial\Omega$, since $|h(a+iy)|=1/M(a)$ and
$|h(b+iy)|=1/M(b)$.
Given that $\overline\partial w=\alpha \overline w$ and that $h$ is holomorphic, then, as we have seen,
$\overline\partial(hw)=\beta \overline{hw}$, where $\beta=\alpha h/\overline h$. Moreover,
$\partial \beta = \partial(\alpha h)/\overline h= (\partial \alpha)(h/\overline h) + \alpha (\partial h)/\overline h$.
Now $\log h= \frac{z-b}{b-a} \log M(a) + \frac{a-z}{b-a}\log M(b)$, and so
\[
\left| \frac{\partial h}{h} \right| = \frac{|\log (M(a)/M(b))|}{b-a}.
\]
Thus the condition (\ref{eq:3linescon}) on $\alpha$ implies that
$\beta$ satisfies $2|\beta|^2 \ge | \partial \beta|+
|\beta| |\partial h_\eps|/|h_\eps|$. Hence we can apply Theorem \ref{thm:halfplane} to $v$, and the result follows.
\endpf
\begin{rem}
{\rm As in Example~\ref{ex:2.2}, rescaling $z$ is helpful here, since if $z$ is reparametrized as $\lambda z$, then
$\partial \alpha$ is divided by $\lambda$ and $b-a$ is also divided by $\lambda$: thus the inequality
(\ref{eq:3linescon})
becomes easier to satisfy.
}
\end{rem}
\section{Weights depending on one variable}
We look at two cases here, for functions defined on a subdomain of $\CC_+$, namely weights $\alpha=\alpha(x)$ and
radial weights $\alpha=\alpha(r)$. We revisit Theorem~\ref{thm:halfplane}.
Since we now have $\partial \alpha=\alpha'/2$, we obtain the following corollary.
\begin{cor}\label{cor:halfplane-x}
Suppose that $\Omega \subset \CC_+$ (not necessarily bounded) and that
$w$ is a continuous bounded function on $\overline \Omega$
such that (\ref{eq:dbar}) holds in $\Omega$ where $\alpha=\alpha(x)$ is a $C^1$ function satisfying $2|\alpha|^2 \ge | \alpha'|/2+
|\alpha| |\partial h_\eps|/|h_\eps|$ for all $\eps>0$.
Then
$|w(z)| \le \sup_{\zeta \in \partial \Omega}|w(\zeta)|$ for all $z \in \Omega$.
\end{cor}
Likewise, in polar coordinates $(r,\theta)$ we have
\[
\partial = \frac{1}{2} \left( e^{-i\theta}\partial_r-\frac{i e^{-i\theta}}{r}\partial_\theta\right),
\]
giving the following result.
\begin{cor}\label{cor:halfplane-r}
Suppose that $\Omega \subset \CC_+$ (not necessarily bounded) and that
$w$ is a continuous bounded function on $\overline \Omega$
such that (\ref{eq:dbar}) holds in $\Omega$ where $\alpha=\alpha(r)$ is a $C^1$ function satisfying $2|\alpha|^2 \ge | \alpha'|/2+
|\alpha| |\partial h_\eps|/|h_\eps|$ for all $\eps>0$.
Then
$|w(z)| \le \sup_{\zeta \in \partial \Omega}|w(\zeta)|$ for all $z \in \Omega$.
\end{cor}
Suppose now that $\alpha(x)=ax^\mu$. The condition we require is then
\[
2|a|^2x^{2\mu} \ge |a\mu| x^{\mu-1}/2 + |a|x^\mu \frac{\eps}{|1+\eps z|},
\]
which is only possible for $\mu=-1$. However, it is easy to write down polynomials in $x$ that do not
vanish at $0$ but which satisfy the conditions of Corollary~\ref{cor:halfplane-x}.
\subsection*{Acknowledgments.} The authors are grateful to Joseph Burrier for his assistance. They also thank the referee for some
useful comments.
\section{Introduction}%
\label{sec:introduction}
One of the central paradigms of neuroscience is that computational function determines
connectivity structure: if a neural network is involved in a given task, its connectivity
must be related to this task. However, a given circuit's connectivity
also depends on development and the learning of a multitude of
tasks \cite{rigotti2013importance, yang2019task}.
Accordingly, connectivity has often been depicted as containing a sum of random and structured
components \cite{rivkind2017local,mastrogiuseppe2018linking,tirozzi1991chaos,roudi2007balanced,%
ahmadian2015properties}.
Given that structure emerges through adaptive processes on top of
existing random connectivity, one would intuitively expect correlations between the two
components. Nevertheless, the functional effects of the interplay between the random and the
structured components have not been fully elucidated.
Networks designed to solve specific tasks often use purely structured
connectivity \cite{ben1995theory,hopfield1982neural,wang2002probabilistic} that has been analytically
dissected \cite{amit1985spin}.
The dynamics of networks with purely random connectivity were also thoroughly explored,
charting the transitions between chaotic and ordered activity
regimes \cite{sompolinsky1988chaos,rajan2010stimulus,brunel2000dynamics,wainrib2013topological,%
van1996chaos,huang2019circuit}.
Adding \textit{uncorrelated} random connectivity to a structured one was shown to generate the
activity statistics originating from the random component while retaining the functional
aspects of the structured one \cite{mastrogiuseppe2018linking,tirozzi1991chaos,roudi2007balanced,%
renart2007mean}.
A specific setting in which correlations between random and structured components arise is the
training of initially random networks to perform tasks. One class of training algorithms,
reservoir computing, only modifies a feedback loop on top of the initial random connectivity
\cite{maass2002real,jaeger2004harnessing,sussillo2009generating}.
These algorithms can be used to obtain a wide range of computations
\cite{enel2016reservoir,barak2013fixed}.
Recently, a specific instance of a network trained to exhibit multiple fixed points was analytically examined \cite{rivkind2017local}. It was shown that
the dependence between the feedback loop and the initial connectivity is essential to obtain the desired functionality, but the explicit form of the correlations and the manner in which they determine functionality remained elusive.
Thus there is no general theory linking the correlations between random and structured components to network dynamics.
Here we address this issue by examining the nonlinear dynamics of networks with such correlations. Because the dynamics of nonlinear systems vary between different areas of phase space, we focus on linearized dynamics around different fixed points. To facilitate the analysis, we consider low-rank structured components
which were shown to allow for a wide range of functionalities \cite{mastrogiuseppe2018linking}.
We develop a mean field theory that takes into account correlations between the
random connectivity and the low-rank part. Our theory directly links these correlations to the spectrum of the
connectivity matrix.
We show how a correlated rank-one perturbation can lead to multiple spectral outliers and fixed points, a phenomenon that requires high-rank perturbations in the uncorrelated case \cite{mastrogiuseppe2018linking}. We analytically study dynamics around non-trivial fixed points, revealing a surprising connection between the spectrum, the fixed
points and their stability.
Taken together, we show how correlations between the low-rank structure and the
random connectivity extend the computations of the joint network beyond the sum
of its parts.
\section{Network model}%
\label{sec:network_model}
We examine the dynamics of recurrent neural networks with correlated random and structured components in their connectivity. The structured component $P$ is a low-rank matrix and the random component $J$
is a full-rank matrix.
Network dynamics with such a connectivity structure have been analyzed
for $P$ being independent of the random connectivity \cite{mastrogiuseppe2018linking}.
The learning frameworks of
echo state networks and FORCE
also have such connectivity structure
\cite{jaeger2004harnessing,sussillo2009generating}.
There, however, the structure $P$ is trained
such that the full network performs a desired computation,
possibly correlating $P$ to $J$.
For most of this study, we set the rank of $P$ to one and write it as the outer
product
\begin{equation}
P = \mathbf{mn}^T
\end{equation}
of the two structure vectors $\mathbf{m}$ and $\mathbf{n}$.
The matrix $J$ and vector $\mathbf{m}$ are drawn independently from normal distributions,
$J_{ij} \sim \mathcal{N}(0, g^2 / N)$ and $m_i \sim \mathcal{N}(0, 1)$, where $N$ is the network size and $g$ controls the strength of the random part
\cite{sompolinsky1988chaos}. The second vector $\mathbf{n}$ is
defined in terms of $J$ and $\mathbf{m}$. In this sense, $\mathbf{n}$ carries
the correlation between $J$ and $P$.
This is in line with the echo state and FORCE models, where $\mathbf{n}$ corresponds to
the readout vector which is trained and therefore becomes correlated to $J$ and $\mathbf{m}$.
In contrast to these models, however, we constrain the statistics of
$\mathbf{n}$ to be Gaussian. This allows for an analytical
treatment and thus for a transparent understanding of how the correlations
affect the network dynamics.
The details of the construction of $\mathbf{n}$ are described later on.
At this point we merely state that the
entries of $\mathbf{n}$ scale with the network size as $1 / N$.
The structure $P$ is hence considered as a perturbation
to the random connectivity $J$ whose entries scale as $1 / \sqrt{N}$.
All our results are valid in the limit of infinitely large networks,
$N \to \infty$. Throughout the work, we compare the theoretical
predictions with samples from finite networks.
The network dynamics are given by standard rate equations. Neurons are characterized by their internal
states $x_i$ and interact with each other via firing rates $\phi(x_i)$.
The nonlinear transformation from state to firing rate is taken to be
the hyperbolic tangent, $\phi = \mathrm{tanh}$.
The entire network dynamics are written as
\begin{equation}
\label{eq:dot_x}
\dot{\mathbf{x}}(t) = -\mathbf{x}(t) +
\left(J + P
\right)
\phi(\mathbf{x}(t))\,,
\end{equation}
with the state vector $\mathbf{x} \in \mathbb{R}^N$ and the
nonlinearity applied element-wise.
The derivation of our results, Appendix \ref{sub:mean_field_theory_with_correlations},
further includes a constant external input $\mathbf{I}$.
The results in the main text, however, only consider the autonomous
network.
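As a minimal numerical sketch of these dynamics (the network size, the value of $g$, and the weak perturbation $\mathbf{n} \propto \mathbf{m}/N$ are illustrative assumptions, not values from the text), \cref{eq:dot_x} can be integrated with a simple forward-Euler scheme:

```python
import numpy as np

rng = np.random.default_rng(0)
N, g = 300, 0.8
J = rng.normal(0.0, g / np.sqrt(N), (N, N))  # random part, J_ij ~ N(0, g^2/N)
m = rng.normal(0.0, 1.0, N)                  # structure vector m
n = (0.5 / N) * m                            # illustrative weak n, entries scaling as 1/N

def simulate(x0, T=100.0, dt=0.05):
    """Forward-Euler integration of dx/dt = -x + (J + m n^T) phi(x)."""
    W = J + np.outer(m, n)
    x = x0.copy()
    for _ in range(int(T / dt)):
        x = x + dt * (-x + W @ np.tanh(x))
    return x

x_final = simulate(rng.normal(0.0, 1.0, N))
# with g < 1 and a sub-critical perturbation, the origin is stable
print(np.linalg.norm(x_final))
```

For $g < 1$ and a perturbation this weak, all trajectories relax to the origin; the following sections characterize when and how this stability is lost.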
\section{Linear dynamics around the origin}%
\label{sec:spectral_properties_of_j}
\begin{figure*}[tb]
\includegraphics[width=1.\linewidth]{fig1}
\caption{
Spectral outliers via low-rank perturbations.
Spectrum of $J + \mathbf{mn}^T$ with
(a) no correlations,
(b) exponential overlaps,
and
(c) truncated overlaps, \cref{eq:n_k_2}.
See \cref{eq:construct_n}
for details on the construction of $\mathbf{n}$.
The values of non-zero $\hat{\theta}_k$ are displayed in
each plot.
Orange circles and stars indicate the theoretical prediction,
dots refer to the spectra of the finite-size
connectivity matrices, computed numerically.
(d) Overlaps $\theta_k = \mathbf{n}^T\! J^k \mathbf{m}$
for the cases above.
The dashed line are the target overlaps $\hat{\theta}_k$
for the exponential correlation.
Parameters: $N = 2000$, $g = 0.8$.
}
\label{fig:outliers_g_08}
\end{figure*}
The origin $\mathbf{x} = \mathbf{0}$ is a fixed point, since $\phi(0) = 0$.
It is stable if the real parts of all the eigenvalues of the Jacobian are smaller than one.
Since $\phi'(0) = 1$, the Jacobian is simply the connectivity matrix $J + \mathbf{mn}^T$
itself. Here we examine the spectral properties of this matrix.
\subsection{Eigenvalues}%
\label{sub:eigenvalues}
The spectrum of the Gaussian random matrix $J$ converges to a uniform distribution on
a disk with radius $g$ and centered at the origin for $N \to \infty$ \cite{ginibre1965statistical}.
Previous studies have explored the effect of independent low-rank perturbations
like in our model \cite{rajan2006eigenvalue, tao2013outliers}.
They found that the limiting distribution of the remaining eigenvalues,
referred to as the bulk, does not change. Additionally, the spectrum contains
outliers corresponding to the eigenvalues of the low-rank perturbation itself.
In this sense, the spectra of the random matrix $J$ and the low-rank perturbation decouple
(although the precise location of each eigenvalue is affected by the perturbation).
To our knowledge, the effect of correlated low-rank perturbations, which we explore below,
has not been considered before.
To determine the spectrum, we apply the matrix determinant lemma \cite{harville1998matrix}:
\begin{equation}
\det \left(A + \mathbf{mn}^T \right) =
\left(1 + \mathbf{n}^T\! A^{-1} \mathbf{m}\right) \det(A) \,,
\end{equation}
where $A \in \mathbb{C}^{N \times N}$
is an invertible matrix. For a complex number $z$ that is not an eigenvalue of $J$,
the matrix $J - \mathds{1} z$ is invertible, resulting in
\begin{equation}
\label{eq:matrix_det_lemma}
\begin{split}
&\det \left((J + \mathbf{mn}^T) - \mathds{1} z \right)
\\&\qquad=
\left(1
+ \mathbf{n}^T\! (J - \mathds{1} z)^{-1} \mathbf{m}\right) \det(J - \mathds{1} z)
\,.
\end{split}
\end{equation}
The roots of this equation are the eigenvalues of $J + \mathbf{mn}^T$.
Since the determinant on the right-hand side is nonzero, we get the scalar equation
\begin{equation}
\label{eq:eigval_eq}
z =
\mathbf{n}^T\! \left(\mathds{1} - \frac{J}{z}\right)^{-1} \mathbf{m} \,.
\end{equation}
As long as the entire spectrum is affected by the rank~1 perturbation,
this equation determines all eigenvalues of $J + \mathbf{mn}^T$.
We are interested in outliers of the spectrum: eigenvalues of
$J + \mathbf{mn}^T$ larger than the spectral radius of $J$ (which in the limit of
$N \to \infty$ is given by $g$).
For such an outlier, denoted by $\lambda$, the inverse in \cref{eq:eigval_eq} can be
written as a series, and we have
\begin{equation}
\label{eq:eigval_eq_series}
\lambda =
\sum_{k=0}^\infty \frac{\theta_k}{\lambda^k}
\,,
\end{equation}
with the overlaps
\begin{equation}
\label{eq:theta_k}
\theta_k = \mathbf{n}^T\!J^k\mathbf{m} \,.
\end{equation}
Although this equation is a polynomial of infinite degree, there can be at most $N$ proper solutions (those outside of the bulk; see Appendix \ref{sub:finite_solutions}).
The series representation \cref{eq:eigval_eq_series} is the main result of this section.
It indicates that the overlaps $\theta_k$ between $\mathbf{m}$ and $\mathbf{n}$
after passing through $J$ for $k$ times determine the eigenvalues of
the perturbed matrix. It is hence useful to characterize the correlations between
$J$ and the rank-one perturbation in terms of these overlaps.
The description up to this point is general and does not depend on details of the matrix $J$.
For our model, where $J$ is a random matrix, the scalar products
$\theta_k$ over $N$ entries are self-averaging: for $N \to \infty$, $\theta_k$ converges to its ensemble average $\mathbb{E}[\theta_k]$, with variance decaying as $1 / N$.
We rely on this property and compute quantities for a single realization of a large network instead of ensemble averages.
A random matrix $J$ has the effect of decorrelating independent vectors:
if the vectors $\mathbf{m}$ and $\mathbf{n}$ are
uncorrelated to $J$, a single pass through the network already annihilates any
overlap between $\mathbf{n}$ and $J\mathbf{m}$.
In Appendix \ref{sub:construction_of_the_vector_n}, we formally show that the self-averaging indeed
yields $\mathbf{n}^T\!J^k\mathbf{m}=0$ for $k \ge 1$.
We can apply this to
\cref{eq:eigval_eq_series}: If $\theta_k = 0$ for all $k \ge 1$, then
\begin{equation}
\lambda = \theta_0 = \mathbf{n}^T\!\mathbf{m}\,.
\end{equation}
Thus an independent rank-one perturbation yields a single outlier positioned at the
eigenvalue of the rank-one matrix itself [\cref{fig:outliers_g_08}(a)], in accordance with
known results \cite{rajan2006eigenvalue, tao2013outliers}.
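This prediction is easy to check numerically. The sketch below (size, $g$, and the target value $1.6$ are illustrative assumptions) draws $J$ at random, chooses $\mathbf{n} \propto \mathbf{m}$ so that the perturbation is independent of $J$, and compares the unique outlier of $J + \mathbf{mn}^T$ with $\mathbf{n}^T\mathbf{m}$:

```python
import numpy as np

rng = np.random.default_rng(2)
N, g = 1000, 0.8
J = rng.normal(0.0, g / np.sqrt(N), (N, N))
m = rng.normal(0.0, 1.0, N)
n = (1.6 / N) * m                  # independent of J; n^T m self-averages to 1.6

eigs = np.linalg.eigvals(J + np.outer(m, n))
outliers = eigs[eigs.real > 1.0]   # eigenvalues clearly outside the bulk (radius g = 0.8)
print(len(outliers), outliers)
```

Up to finite-size fluctuations, a single outlier appears at $\mathbf{n}^T\mathbf{m}$, while the bulk stays confined to the disk of radius $g$.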
If $\mathbf{mn}^T$ is correlated to $J$, the $\theta_k$ will not vanish for nonzero $k$.
We analyze two special cases:
\begin{enumerate}[label=(\roman*)]
\item
If $\theta_k = 0$ for all $k \ge 2$, then
there are two outliers
\begin{equation}
\label{eq:lambda_pm}
\lambda_\pm =
\frac{\theta_0}{2} \pm
\sqrt{
\left(\frac{\theta_0}{2}\right)^2
+
\theta_1
} \,.
\end{equation}
This can give rise to complex conjugate outliers, as
displayed in \cref{fig:outliers_g_08}(b).
More generally, $K$ nonzero overlaps lead to $K$ outliers
via a polynomial equation
[\cref{eq:lambda_poly_trunc}].
\item
A second case is one of a converging series in \cref{eq:eigval_eq_series}.
The simplest assumption is an exponential scaling, $\theta_k = \theta_0 b^k$
with base $b$. Inserting into the eigenvalue
equation \eqref{eq:eigval_eq_series}
yields a single solution
\begin{equation}
\lambda = \theta_0 + b \,.
\end{equation}
Remarkably, we see that correlation between the random matrix $J$
and the rank-one perturbation does not necessarily lead to more than one
outlier.
This is shown in \cref{fig:outliers_g_08}(c).
The observation generalizes to correlations expressible as a sum of $K$
exponentially decaying terms, leading to $K$ outliers
[\cref{eq:lambda_poly_exp}].
\end{enumerate}
We can apply this understanding to construct a network with a set of outliers
and either one of the underlying correlation structures.
One way is to define the vector $\mathbf{n}$ explicitly in terms of $\mathbf{m}$
and $J$.
For example, if we set
\begin{equation}
\label{eq:n_k_2}
\mathbf{n} =
\frac{1}{N} \left(
\hat{\theta}_0 \, \mathbf{m} +
\frac{\hat{\theta}_1}{g^2} J \mathbf{m}
\right) \,,
\end{equation}
then the overlaps will self-average to $\mathbb{E}[\theta_k] = \hat{\theta}_k$
for $k \in \{0, 1\}$ and $\mathbb{E}[\theta_k] = 0$ for all $k \ge 2$, with variance
scaling as $1 / N$.
This is shown formally and generalized to higher $\theta_k$
in Appendix \ref{sub:construction_of_the_vector_n}. The construction
for a given set of target outliers is detailed in Appendix \ref{sub:construction_of_outliers}.
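A quick numerical sanity check of this construction (the parameter values are illustrative assumptions): target overlaps $\hat\theta_0 = 2.4$, $\hat\theta_1 = -1.6$ in \cref{eq:n_k_2} should, by \cref{eq:lambda_pm}, produce the complex-conjugate pair $\lambda_\pm = 1.2 \pm 0.4\,i$:

```python
import cmath
import numpy as np

rng = np.random.default_rng(3)
N, g = 1000, 0.8
J = rng.normal(0.0, g / np.sqrt(N), (N, N))
m = rng.normal(0.0, 1.0, N)
th0, th1 = 2.4, -1.6
n = (th0 * m + (th1 / g**2) * (J @ m)) / N       # construction of n, eq. (n_k_2)

lam_p = th0 / 2 + cmath.sqrt((th0 / 2)**2 + th1)  # predicted outlier lambda_+
eigs = np.linalg.eigvals(J + np.outer(m, n))
top = eigs[np.argsort(-eigs.real)[:2]]            # two eigenvalues with largest real part
print(lam_p, top)
```

The two leading eigenvalues match $\lambda_\pm$ up to finite-size deviations of order $1/\sqrt{N}$.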
The discrepancy between numerical and target outliers in Figure \ref{fig:outliers_g_08} is due to finite size effects, which decay with $1 / \sqrt{N}$ (verified numerically, and in accordance with Ref. \cite{mastrogiuseppe2018linking}).
The simulations further show that the remaining eigenvalues span the same circle
as without the perturbation: while the precise position of each eigenvalue changes,
visual inspection does not reveal any change in their statistics.
\subsection{Implementation of multiple outliers}
\label{sub:implementation_of_multiple_outliers}
\begin{figure}[tb]
\includegraphics[width=1.\linewidth]{fig2}
\caption{Scaling of the norm of the rank-one perturbation with number of induced outliers.
The vector $\mathbf{n}$ is the least square solution to implementing
a set of outliers $\Lambda = \{\lambda_1, \dots,\lambda_K\}$,
with
$\lambda_k = 1.25 + 0.25 k$,
see Appendix \ref{sub:least_square_vector_n}.
(a) Log-linear plot of the Frobenius norm of $J$ and $\mathbf{mn}^T$
as a function of the number of outliers.
The dashed line is the theoretical prediction.
(b) Spectrum of $J + \mathbf{mn}^T$ for $K=9$ outliers.
Parameters: $N = 1000$, $g = 0.8$.
}
\label{fig:scaling_with_outliers_N_1000_g_08}
\end{figure}
So far we analyzed the outliers for given correlations between $J$
and $\mathbf{mn}^T$ as quantified by the overlaps $\theta_k$.
We now change the perspective and ask about the properties of the
rank-one perturbation given a set of outliers.
We saw that in principle a given set of outliers may have multiple
underlying correlation structures -- e.g. through a truncated set
of non-zero overlaps or a combination of exponentially decaying terms.
Regardless of the correlation structure, however, we observe that
the norm of $\mathbf{n}$ grows fast with the number of outliers introduced,
implying that strong perturbations are needed to generate a large number of
outliers.
To understand analytically the origin of this phenomenon, we focus on a
method to determine the least square $\mathbf{n}$
given $J$, $\mathbf{m}$ and the
set of target outliers $\Lambda$.
The resulting $\mathbf{n}$
can be formulated using the pseudoinverse, as detailed in
Appendix \ref{sub:least_square_vector_n}.
The main result of this analysis is the scaling of the Frobenius norm of
the rank-one matrix $\mathbf{mn}^T$ with the number of outliers.
The asymptotic behavior is given by
\begin{equation}
\label{eq:scaling_mn}
||\mathbf{mn}^T||
\sim g \prod_{\lambda \in \Lambda}
\frac{|\lambda|}{g} \,,
\end{equation}
that is, exponentially growing with the number of outliers.
In comparison, the Frobenius norm of $J$ is given by $||J|| = g \sqrt{N}$.
This means that if one aims to place more than a handful of outliers,
the perturbation $\mathbf{mn}^T$ becomes
the dominating term (for a fixed network size $N$). We illustrate this in
\cref{fig:scaling_with_outliers_N_1000_g_08} by plotting $||\mathbf{mn}^T||$
for sets of outliers $\Lambda_K = \{\lambda_1, \dots, \lambda_K\}$
with growing number $K$. The outliers $\lambda_k$ were placed on the real line.
Further tests including complex eigenvalues gave similar
results (not shown).
A similar method of deriving $\mathbf{n}$ from the pseudoinverse has been described in
Ref. \cite{logiaco2019model}.
The scaling \eqref{eq:scaling_mn} shows another important point: the bulk radius
$g$ critically determines the norm of the rank-one perturbation. Indeed, the
contribution of each outlier $\lambda_k$ is relative to the radius.
Even for a single outlier, where
\begin{equation}
||\mathbf{mn}^T|| = \sqrt{\lambda^2 - g^2} \,,
\end{equation}
an increase in $g$ leads to a decreasing norm.
This observation suggests that a large random connectivity
facilitates the control of the spectrum by a rank-one perturbation.
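The single-outlier case admits a direct numerical cross-check. In the sketch below (network size and the target $\lambda = 2$ are illustrative; this is not the general pseudoinverse method of the appendix), $\mathbf{n}$ is taken as the standard minimum-norm solution of the single scalar constraint $\mathbf{n}^T(\mathds{1} - J/\lambda)^{-1}\mathbf{m} = \lambda$ from \cref{eq:eigval_eq}:

```python
import numpy as np

rng = np.random.default_rng(4)
N, g = 1000, 0.8
J = rng.normal(0.0, g / np.sqrt(N), (N, N))
m = rng.normal(0.0, 1.0, N)
lam = 2.0                                    # target outlier (illustrative)

a = np.linalg.solve(np.eye(N) - J / lam, m)  # a = (1 - J/lam)^{-1} m
n = lam * a / (a @ a)                        # minimum-norm n satisfying n^T a = lam

frob = np.linalg.norm(m) * np.linalg.norm(n) # Frobenius norm of the rank-one matrix m n^T
outlier = np.linalg.eigvals(J + np.outer(m, n)).real.max()
print(frob, np.sqrt(lam**2 - g**2), outlier)
```

By construction the outlier sits exactly at $\lambda$, and the measured Frobenius norm agrees with $\sqrt{\lambda^2 - g^2}$ up to fluctuations of order $1/\sqrt{N}$.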
\section{Non-trivial fixed points}%
\label{sec:non_trivial_fixed_points}
We now turn to the non-trivial fixed points of the network.
At these, the internal states $\mathbf{x}$ obey the equation
\begin{equation}
\label{eq:fixedpoint_rank1}
\mathbf{x} = J \bm{\phi} + \kappa \mathbf{m}\,.
\end{equation}
Here we defined the scalar feedback strength $\kappa = \mathbf{n}^T\! \bm{\phi}$,
using the vector notation $\bm{\phi} = \phi(\mathbf{x})$.
The fixed points of related models have been analyzed in previous
works. For infinitely large networks, the unperturbed system ($P = 0$)
has a single fixed point at the origin if $g < 1$ \cite{wainrib2013topological}.
For $g > 1$, the system exhibits chaotic dynamics \cite{sompolinsky1988chaos}.
In this regime, the number of (unstable) fixed points scales exponentially with the network size
$N$ \cite{wainrib2013topological}.
Here we only focus on networks in the non-chaotic regime, where either $g < 1$
or the perturbation $P$ suppresses chaos
\cite{mastrogiuseppe2018linking}.
\subsection{Fixed point manifold}%
\label{sub:fixed_point_manifold}
\begin{figure*}[tb]
\includegraphics[width=1.\linewidth]{fig3}
\caption{Manifold $\mathcal{M}$, \cref{eq:M} constraining fixed points,
\cref{eq:x_fp_ol}.
(a)
Projection of $\mathcal{M}$ for three networks with
different strength of randomness $g$ (see main text for the three-dimensional basis).
The negative side for $\hat{\kappa} < 0$ is symmetric and not shown.
The squares on the manifolds indicate the inputs $
\hat{\kappa}= (1, 2)$.
The straight lines in the plane $y_\mathbf{a} = 0$ are the asymptotic directions
$\hat{\mathbf{x}}_\sim$
for the manifolds.
(b-d)
Correlation $\rho_{12}$ between two points $\hat{\mathbf{x}}(\hat{\kappa}_i)$
on $\mathcal{M}$ for two different inputs $\hat{\kappa}_1, \hat{\kappa}_2 \in [0, 3]$.
(b, c, d) correspond to the random connectivity
strengths $g = 0.1, 0.5, 0.9$, respectively.
Note the different scales on the color bars.
}
\label{fig:open_loop_fp_manifold}
\end{figure*}
Following \citet{rivkind2017local}, the perturbed system with fixed points
\eqref{eq:fixedpoint_rank1} can be understood using a surrogate system in which
the feedback $\kappa$ is replaced with a fixed scalar $\hat{\kappa}$.
For $g < 1$, every such value $\hat{\kappa}$ corresponds to a unique
fixed point
\begin{equation}
\label{eq:x_fp_ol}
\hat{\mathbf{x}} = J \phi(\hat{\mathbf{x}}) + \hat{\kappa} \mathbf{m}\,.
\end{equation}
This equation defines the one-dimensional nonlinear manifold
\begin{equation}
\label{eq:M}
\mathcal{M} = \{\hat{\mathbf{x}} \,|\, \hat{\kappa} \in \mathbb{R}\} \,.
\end{equation}
The manifold $\mathcal{M}$ can be understood by looking at the
asymptotics.
For large input $\hat{\kappa}$, the nonlinearity saturates and the manifold becomes
approximately linear:
\begin{equation}
\label{eq:x_fp_ol_asymp}
\hat{\mathbf{x}}_\sim = \mathbf{c} + \hat{\kappa} \mathbf{m} \,,
\end{equation}
with $\mathbf{c} = J\, \mathrm{sign}(\mathbf{m})$.
Around the origin, we linearize and obtain
\begin{equation}
\label{eq:x_fp_ol_orig}
\hat{\mathbf{x}} = \hat{\kappa} \,\mathbf{a} + \mathcal{O}(\hat{\kappa}^2)\,,
\end{equation}
with $\mathbf{a} = \left( \mathds{1} - J\right)^{-1} \mathbf{m}$.
Applying orthonormalization to the triplet
$(\mathbf{m, c, a})$, we obtain a three-dimensional basis.
We observe that, for $N \to \infty$, the vectors $\mathbf{m}$ and $\mathbf{c}$ are
orthogonal and that the vector $\mathbf{a}$ becomes orthogonal to the other two
in the limit $g \to 1$. Accordingly, we name the coefficients of the basis
$(y_\mathbf{m}, y_\mathbf{c}, y_\mathbf{a})$.
The projection of the manifold $\mathcal{M}$ on this basis is shown
in \cref{fig:open_loop_fp_manifold}(a) for three different values of $g$.
Numerical evaluation of the reconstruction error shows that these three
dimensions reconstruct the manifold very well
albeit with decreasing accuracy for increasing $g$ (not shown).
Fixed points of the full system are obtained by determining $\kappa$
self-consistently. They necessarily
lie on the manifold $\mathcal{M}$. One consequence is a strong correlation
between pairs of fixed points, especially if both lie close to the origin or
in the saturating regime.
In \cref{fig:open_loop_fp_manifold}(b-d), we numerically evaluate
this correlation for three different randomness strengths $g$.
One can observe that for $g \le 0.5$, the correlation does not drop below 90\%.
Even for $g = 0.9$, the correlation is low only if one fixed point is very close to the origin
and the other one is far out.
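These correlations can be probed directly. The sketch below (size and the inputs $\hat\kappa = 1, 2$ are illustrative assumptions) obtains two points on $\mathcal{M}$ by relaxing the surrogate dynamics $\dot{\mathbf{x}} = -\mathbf{x} + J\phi(\mathbf{x}) + \hat\kappa\,\mathbf{m}$ and measures their correlation:

```python
import numpy as np

rng = np.random.default_rng(5)
N, g = 500, 0.5
J = rng.normal(0.0, g / np.sqrt(N), (N, N))
m = rng.normal(0.0, 1.0, N)

def surrogate_fp(kappa_hat, T=100.0, dt=0.05):
    """Relax dx/dt = -x + J phi(x) + kappa_hat * m to the unique fixed point (g < 1)."""
    x = np.zeros(N)
    for _ in range(int(T / dt)):
        x = x + dt * (-x + J @ np.tanh(x) + kappa_hat * m)
    return x

x1, x2 = surrogate_fp(1.0), surrogate_fp(2.0)
rho12 = (x1 @ x2) / (np.linalg.norm(x1) * np.linalg.norm(x2))
print(rho12)  # close to 1 for moderate g
```

For $g = 0.5$, the correlation between the two points remains high, consistent with the confinement to the one-dimensional manifold.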
So far we only considered the case $g < 1$.
For $g > 1$, there is a minimal $\hat{\kappa}_\mathrm{min}$ for which
the dynamics are stabilized and a unique stable fixed point emerges
\cite{mastrogiuseppe2018linking}.
Here, the manifold $\mathcal{M}$ is disconnected and now reads
$\mathcal{M} = \{\hat{\mathbf{x}} \,|\,
\hat{\kappa} \in \mathbb{R} \setminus (-\hat{\kappa}_\mathrm{min}, \hat{\kappa}_\mathrm{min}) \}$.%
Finally we note that the constraints of a one-dimensional manifold are general
and do not depend on the details of the vector $\mathbf{n}$, especially not on its Gaussian statistics.
This is particularly important for learning algorithms
like the echo state framework or FORCE, which by construction only allow for the adaptation of
the vector $\mathbf{n}$
\cite{jaeger2004harnessing, sussillo2009generating}.
Accordingly, fixed points in these cases are also
strongly correlated, which may lead to catastrophic forgetting when trying
to learn multiple fixed points sequentially \cite{beer2019one}.
\subsection{Mean field theory}%
\label{sub:mean_field_theory}
\begin{figure*}[tb]
\includegraphics[width=1.\linewidth]{fig4}
\caption{Two fixed points induced by rank-one perturbation correlated to
the random connectivity $J$.
(a) Spectrum of $J + \mathbf{mn}^T$. The squares indicate the corresponding
averaged slopes at the fixed points, as predicted by \cref{eq:eigval_dphi}.
(b-c) Eigenvalues of the Jacobian at the two fixed points $\mathbf{x}^{(1)}$ (b)
and $\mathbf{x}^{(2)}$ (c).
Stars indicate the theoretical predictions for exceptional
stability eigenvalues (only meaningful outside the bulk).
(d)
Fixed points and manifold $\mathcal{M}$.
The colored lines indicate trajectories starting from the two fixed points
(blue and orange), a point on the manifold $\mathcal{M}$ (green) and the origin.
All trajectories converge on $\mathbf{x}^{(1)}$ or its negative counterpart.
At each point, 50 different initial conditions are obtained by adding
Gaussian noise (SD = 0.5).
The fixed point correlation is indicated by $\rho_{12}$.
Parameters: $g = 0.8$, $N = 1000$.
The rank-one perturbation is obtained by
the least-squared $\mathbf{n}$, see Appendix \ref{sub:least_square_vector_n}.
}
\label{fig:dynamics_2fps_rank1_corr_g_08}
\end{figure*}
For non-trivial fixed points of the full network, \cref{eq:fixedpoint_rank1},
the scalar feedback $\kappa$
needs to be consistent with the firing rates $\phi(\mathbf{x})$.
Similar to prior works, we compute $\kappa$ using a mean field theory
\cite{mastrogiuseppe2018linking}. The central idea of the mean field theory
is to replace the input to each variable $x_i$ by a stochastic variable with statistics
matching the original system. The statistics of the resulting stochastic processes
$x_i$ are then computed self-consistently.
Because our model includes correlations between the random part $J$ and the
low-rank structure $P$, the correlations in the activity do not vanish as dynamics unfold.
This phenomenon prevents the application of previous
theories \cite{mastrogiuseppe2018linking}. We hence develop a new theory.
The details are elaborated in
Appendix \ref{sub:mean_field_theory_with_correlations}.
Here we give an outline of the analysis.
The starting point is the scalar
feedback $\kappa$. The Gaussian statistics of
$\mathbf{n}$ and the fixed point $\mathbf{x}$ allow to factor out the effect
of the nonlinearity via partial integration. We have
\begin{equation}
\label{eq:kappa_partial}
\kappa
= \mathbf{n}^T\!\bm{\phi}
= \langle \phi' \rangle \,\mathbf{n}^T\! \mathbf{x} \,,
\end{equation}
with the average slope $\langle\phi'\rangle$ evaluated at the fixed point:
\begin{equation}
\langle \phi' \rangle
= \int \mathcal{D}z \,\phi'(\sqrt{\Delta^0} z) \,,
\end{equation}
where $\mathcal{D}z$ is the standard Gaussian measure. $\Delta^0$ is the variance of $\mathbf{x}$, which from the fixed point equation
\eqref{eq:fixedpoint_rank1} is given by
\begin{equation}
\Delta^0
= g^2 \langle \phi^2 \rangle
+ \kappa^2
\,.
\end{equation}
The quantities $\langle\phi'\rangle$, $\Delta^0$, and $\kappa$ are determined self-consistently.
To that end, we further evaluate $\kappa$ in \cref{eq:kappa_partial}.
Inserting the fixed point equation
\eqref{eq:fixedpoint_rank1}
yields
\begin{equation}
\label{eq:insert_x}
\mathbf{n}^T\! \mathbf{x}
=
\mathbf{n}^T\! J \bm{\phi}
+ \kappa \mathbf{n}^T\! \mathbf{m} \,.
\end{equation}
The first term on the right-hand side vanished in previous
studies with no correlation between $P$ and $J$ \cite{mastrogiuseppe2018linking}. In our case, there are correlations, and we proceed to analyze this term. We first interpret $J^T\mathbf{n}$ as
a Gaussian vector and use partial integration to replace $\bm{\phi}$ with $\mathbf{x}$:
\begin{equation}
\mathbf{n}^T\! \mathbf{x}
= \langle \phi' \rangle \mathbf{n}^T\! J \mathbf{x}
+ \kappa \mathbf{n}^T\! \mathbf{m} \,.
\end{equation}
We now insert the fixed point equation \eqref{eq:fixedpoint_rank1} into the new first term on the right-hand side.
We can apply this scheme recursively and arrive at an equation
linear in $\kappa$ on both sides:
\begin{equation}
\kappa
= \kappa \langle\phi'\rangle \,
\sum_{k=0}^\infty \langle \phi' \rangle^k \,\theta_k
\,,
\end{equation}
with overlaps as defined above, \cref{eq:theta_k}.
We are looking at a non-trivial fixed point, so we can divide by the
nonzero $\kappa$ to obtain
\begin{equation}
\label{eq:dphi}
\frac{1}{\langle \phi' \rangle} =
\sum_{k=0}^\infty \langle \phi' \rangle^k \,\theta_k
\,.
\end{equation}
A comparison with \cref{eq:eigval_eq_series} shows that the two
equations are identical if
\begin{equation}
\label{eq:eigval_dphi}
\lambda = 1 / \langle \phi' \rangle \,.
\end{equation}
This is a remarkable relationship between the outliers and
autonomously generated fixed points:
each non-trivial fixed point $\mathbf{x}^{(i)}$ must be associated with
a real eigenvalue $\lambda_i$ such that the average over the derivative
of firing rates at this fixed point, $\langle \phi'\rangle_i$,
fulfills the above condition \eqref{eq:eigval_dphi}.
In the special case of $\phi = \mathrm{tanh}$,
the $\langle \phi'\rangle_i$ are confined to the interval $(0, 1]$,
so the corresponding eigenvalues must be real and larger than one.
One may hence look at the spectrum of the connectivity matrix alone and determine
how many non-trivial fixed points there are.
An instance of this phenomenon is illustrated in
\cref{fig:dynamics_2fps_rank1_corr_g_08}. The spectrum at the origin
contains two outliers $\lambda_i$, $i=1, 2$, each real and larger than one.
The dynamics have two corresponding fixed points $\mathbf{x}^{(i)}$
located on the manifold $\mathcal{M}$.
In accordance with \cref{eq:eigval_dphi}, the average slopes at these fixed points,
$1 / \langle\phi'\rangle_i$, agree with the outliers up to deviations due to the finite network size.
\subsection{Stability of fixed points}%
\label{sub:stability_of_fixed_points}
\begin{figure*}[tb]
\includegraphics[width=1.\linewidth]{fig5}
\caption{Limit cycle induced by oscillatory instability.
(a) Spectrum of the connectivity matrix. The outlier $\lambda_3$
is real-valued and larger than one, so there is a corresponding fixed point
$\mathbf{x}_3$.
(b) The spectrum at the fixed point. The
predicted stability eigenvalue $\gamma_-$ lies inside the bulk and is not labeled.
(c) Fixed point and dynamics
as in \cref{fig:dynamics_2fps_rank1_corr_g_08}.
Trajectories start at the fixed point (blue), a point on the
manifold $\mathcal{M}$ (orange) and the origin (green).
(d) Scalar feedback $\kappa(t)$ for the different initial
conditions.
Parameters: $g = 0.6$, $N = 1000$.
The rank-one perturbation is obtained by
the least-squared $\mathbf{n}$, see Appendix \ref{sub:least_square_vector_n}.
}
\label{fig:oscillations_rank1_corr_g_06}
\end{figure*}
The stability of each fixed point is determined by the spectrum of its Jacobian.
The associated stability matrix (the Jacobian without the leak term $-\mathds{1}$, so that stability requires $\Re\gamma < 1$)
is
\begin{equation}
S = \left( J + \mathbf{m}\mathbf{n}^T \right) R' \,,
\end{equation}
with the diagonal matrix of slopes $R_{ij}' = \delta_{ij} \phi_i'$.
Previous work \cite{mastrogiuseppe2018linking} found that the spectrum of $S$, too,
consists of a bulk and a small number of exceptional eigenvalues:
in the case of an uncorrelated rank-one perturbation,
there are two nonzero eigenvalues obtained via mean field theory, only one of which
has been found outside the random bulk.
The radius of the bulk shrinks to $g \sqrt{\langle \phi'^2\rangle}$
due to the saturation of the nonlinearity \cite{mastrogiuseppe2018linking}.
We find numerically that the bulk behaves alike in our model, too. For the rest of this section,
however, we focus on exceptional eigenvalues of the stability matrix $S$, denoted by $\gamma$.
Similar to the trivial fixed point, one can apply the matrix determinant lemma to
derive an equation for the stability eigenvalues:
\begin{equation}
\label{eq:eigval_eq_gamma}
\gamma =
\mathbf{n}^T\! R' \left(\mathds{1} - \frac{J R'}{\gamma}\right)^{-1} \mathbf{m}
\,.
\end{equation}
We can apply the mean field theory introduced above to evaluate the right hand side.
The details of this calculation are deferred to Appendix
\ref{sub:stability_eigenvalues}. It turns out that
the resulting $\gamma$ are surprisingly compact. We now describe these
stability eigenvalues.
Consider the fixed point $\mathbf{x}^{(i)}$.
According to \cref{eq:eigval_dphi},
this fixed point corresponds to the eigenvalue $\lambda_i$.
\cref{eq:eigval_eq_gamma} always has two solutions
$\gamma_\pm$ determined by a quadratic equation.
These are only dependent on the outlier $\lambda_i$ and the
statistics of the fixed point $\mathbf{x}^{(i)}$, but entirely
independent of the remaining spectrum or other fixed points.
Their precise values are detailed in
\cref{eq:gamma_pm}. It turns out that $\gamma_+$ and $\gamma_-$ always
have a real part smaller than one. They hence do not destabilize the fixed point.
Additionally, at least one of the two is always hidden within the bulk of the
eigenvalues, as observed before for the case of no correlation
\cite{mastrogiuseppe2018linking}.
In \cref{fig:dynamics_2fps_rank1_corr_g_08}(b-c), the spectra of the Jacobian
at two fixed points are compared with the theoretical predictions. In both
cases, $\gamma_\pm$ correspond to the two stars within the bulk.
In \cref{fig:oscillations_rank1_corr_g_06}(b), the bulk is smaller ($g = 0.6$) and
$\gamma_+$ is visible.
If $\lambda_i = \lambda_1$ is the only outlier, then $\gamma_\pm$ are the only two
solutions of \cref{eq:eigval_eq_gamma}, and the fixed point
$\mathbf{x}^{(1)}$ as well as its mirror $-\mathbf{x}^{(1)}$ will be stable.
However, if there are $K \ge 2$ outliers $\{\lambda_1, \dots, \lambda_K\}$,
we find an additional set of $K - 1$ stability eigenvalues
\begin{equation}
\label{eq:gamma_j}
\gamma_j = \frac{\lambda_j}{\lambda_i} \quad \text{for all} \quad
j \in \{1, \dots, K\}, \, j \ne i \,.
\end{equation}
This expression indicates a remarkable relationship between different fixed points:
the existence of a fixed point $\mathbf{x}^{(j)}$
with outlier $\lambda_j > \lambda_i$ will always
destabilize the fixed point $\mathbf{x}^{(i)}$ corresponding to $\lambda_i$.
Conversely, if there are no outliers with real part larger than
that of $\lambda_i$, then $\mathbf{x}^{(i)}$ will be stable.
Since
$\lambda = 1 / \langle\phi'\rangle$ implies that a larger $\lambda$ corresponds
to a larger fixed point variance $\Delta^0$, one can say that
only the largest fixed point can be stable.
Such an interaction between two fixed points is illustrated in
\cref{fig:dynamics_2fps_rank1_corr_g_08}(b-c). The stars outside of the bulk
correspond
to the predicted eigenvalue $\gamma_2$ or $\gamma_1$. Comparison between the
theoretical prediction \eqref{eq:gamma_j} and numerical calculation for a
sampled network shows good agreement for both fixed points.
Furthermore, a simulation of the dynamics in \cref{fig:dynamics_2fps_rank1_corr_g_08}(d)
shows that indeed all trajectories converge to the larger fixed point $\mathbf{x}^{(1)}$
or its negative counterpart.
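The destabilization rule \eqref{eq:gamma_j} can also be checked numerically. The sketch below (again with illustrative outliers $\lambda_1 = 2$, $\lambda_2 = 1.3$ implanted via \cref{eq:n_k_2}) relaxes the network to the stable fixed point associated with $\lambda_1$ and inspects the spectrum of the stability matrix $S$ there, which should contain an exceptional eigenvalue near $\gamma_2 = \lambda_2/\lambda_1$ and no eigenvalue with real part above one:

```python
import numpy as np

rng = np.random.default_rng(7)
N, g = 1000, 0.8
J = rng.normal(0.0, g / np.sqrt(N), (N, N))
m = rng.normal(0.0, 1.0, N)
lam1, lam2 = 2.0, 1.3
th0, th1 = lam1 + lam2, -lam1 * lam2
n = (th0 * m + (th1 / g**2) * (J @ m)) / N  # construction of n, eq. (n_k_2)

W = J + np.outer(m, n)
x = 0.1 * m
for _ in range(2000):                       # relax to the stable fixed point x^(1)
    x = x + 0.05 * (-x + W @ np.tanh(x))

S = W * (1.0 - np.tanh(x)**2)               # stability matrix S = (J + m n^T) R'
gammas = np.linalg.eigvals(S)
print(gammas.real.max(), np.min(np.abs(gammas - lam2 / lam1)))
```

An eigenvalue of $S$ appears near $\gamma_2 = 0.65$, outside the shrunken bulk, and all real parts remain below one, confirming the stability of the largest fixed point.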
Finally, note that a complex outlier $\lambda_j$
also destabilizes a fixed point $\mathbf{x}^{(i)}$ if the real part of $\lambda_j$
is larger than that of $\lambda_i$.
Complex outliers do not have corresponding fixed points, since \cref{eq:eigval_dphi} is real.
An example of such a case is shown in \cref{fig:oscillations_rank1_corr_g_06}.
There is only one real eigenvalue larger than one,
and hence only a single non-trivial fixed point $\mathbf{x}^{(3)}$.
Nonetheless, the two complex outliers $\lambda_1 = \lambda_2^*$ destabilize the fixed point
by virtue of \cref{eq:gamma_j}, since the real parts are larger than the outlier
corresponding to the fixed point, $\Re\lambda_1 = \Re\lambda_2 > \lambda_3$.
Numerical simulations indicate that in such a case, the dynamics converge on a limit cycle.
\section{Rank-two perturbations}%
\label{sec:rank_2}
\begin{figure*}[tb]
\includegraphics[width=1.\linewidth]{fig6}
\caption{Fixed points and dynamics for a rank-two perturbation with
structures $\mathbf{mn}^T$ and $\mathbf{uv}^T$ drawn independently of each other
as well as of $J$.
(a-c) Spectra of the Jacobian at the origin (a)
and at the two fixed points $\mathbf{x}^{(1)}$ (b)
and $\mathbf{x}^{(2)}$ (c).
Stars denote the predictions for infinite size networks.
(d) Projection of fixed points on vectors $\mathbf{m}$ and $\mathbf{u}$,
and trajectories starting around $\mathbf{x}^{(1)}$ (blue),
$\mathbf{x}^{(2)}$ (orange), $\mathbf{x}^{(1)} + \mathbf{x}^{(2)}$ (green) and $\mathbf{0}$ (red).
The correlation between the two fixed points is indicated by $\rho_{12}$.
Other parameters as in \cref{fig:dynamics_2fps_rank1_corr_g_08}.
}
\label{fig:dynamics_2fps_uncorr_g_08}
\end{figure*}
The previous section demonstrated two properties of networks with multiple non-trivial fixed points: they are highly correlated due to the confinement on the manifold
$\mathcal{M}$ [\cref{fig:open_loop_fp_manifold}(b-d) and
\cref{fig:dynamics_2fps_rank1_corr_g_08}(d)], and their stability properties interact [\cref{eq:gamma_j}].
We asked whether the latter is a result of the former.
To approach this question, we extend the model to a rank-two perturbation
which allows for uncorrelated fixed points.
The rank-two connectivity structure is defined by
\begin{equation}
\label{eq:P_rank2}
P = \mathbf{m} \mathbf{n}^T + \mathbf{u} \mathbf{v}^T \,.
\end{equation}
We assume $J$, $\mathbf{m}$ and $\mathbf{u}$ to be drawn independently.
Similar to the rank-one case, the entries of both $\mathbf{m}$ and $\mathbf{u}$ are drawn from
standard normal distributions
while $\mathbf{n}$ and $\mathbf{v}$ are Gaussian and dependent on $J, \mathbf{m}$ and $\mathbf{u}$.
The outliers $\lambda$ of the perturbed matrix
$J + \mathbf{mn}^T\! + \mathbf{uv}^T$ are calculated similarly to the rank-one case.
Applying the matrix determinant lemma twice, we arrive at an equation
of quadratic form:
\begin{equation}
\label{eq:eigvals_r2}
0 = \lambda^2 - \lambda \mathrm{Tr} Q_\lambda + \det(Q_\lambda) \,.
\end{equation}
In other words, $\lambda$ is an eigenvalue of the matrix
\begin{equation}
Q_\lambda =
\begin{bmatrix}
\mathbf{n}^T\! M_\lambda \mathbf{m} &
\mathbf{n}^T\! M_\lambda \mathbf{u} \\
\mathbf{v}^T\! M_\lambda \mathbf{m} &
\mathbf{v}^T\! M_\lambda \mathbf{u}
\end{bmatrix} \,,
\end{equation}
which depends on $\lambda$ through
\begin{equation}
\label{eq:M_lam}
M_\lambda =
\left(\mathds{1} - \frac{J}{\lambda}\right)^{-1} \,.
\end{equation}
In general there are more than two solutions, but
if the rank-two perturbation is uncorrelated with $J$, the matrix $M_\lambda$
in $Q_\lambda$ effectively reduces to the identity. The solution is then
in agreement with previous results
\cite{mastrogiuseppe2018linking}.
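As a sanity check, the outlier condition \eqref{eq:eigvals_r2} can be verified numerically for a sampled network. In the sketch below, the structure vectors are chosen ad hoc ($\mathbf{n}\propto\mathbf{m}$ and $\mathbf{v}\propto\mathbf{u}$, independent of $J$) purely to place two real outliers outside the bulk; this is not the correlated ensemble analyzed in the text.

```python
import numpy as np

# Check that each outlier lambda of J + m n^T + u v^T solves
# lambda^2 - lambda Tr(Q_lambda) + det(Q_lambda) = 0, an exact
# finite-N identity following from the matrix determinant lemma.
rng = np.random.default_rng(1)
N, g = 400, 0.5
J = rng.normal(0.0, g / np.sqrt(N), (N, N))
m, u = rng.normal(size=N), rng.normal(size=N)
n = 2.0 * m / (m @ m)    # places an outlier near 2.0
v = 1.5 * u / (u @ u)    # places an outlier near 1.5

A = J + np.outer(m, n) + np.outer(u, v)
eigs = np.linalg.eigvals(A)
# the two largest real eigenvalues are the outliers (bulk radius is g = 0.5)
outliers = np.sort(eigs.real[np.abs(eigs.imag) < 1e-10])[-2:]

residuals = []
for lam in outliers:
    M = np.linalg.inv(np.eye(N) - J / lam)        # M_lambda
    Q = np.array([[n @ M @ m, n @ M @ u],
                  [v @ M @ m, v @ M @ u]])        # Q_lambda
    residuals.append(lam**2 - lam * np.trace(Q) + np.linalg.det(Q))
```

The residuals vanish to numerical precision, confirming that both outliers satisfy the quadratic condition.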
Non-trivial fixed points of the network dynamics \eqref{eq:dot_x}
with a rank-two perturbation \eqref{eq:P_rank2}
obey the equation
\begin{equation}
\label{eq:fixedpoint_rank2}
\mathbf{x} = J \bm{\phi}
+ \kappa_1 \mathbf{m}
+ \kappa_2 \mathbf{u}
\,,
\end{equation}
with $\kappa_1 = \mathbf{n}^T\! \bm{\phi}$
and $\kappa_2 = \mathbf{v}^T\! \bm{\phi}$.
Similar to the rank-one case, we can apply the recursive insertion of the
fixed point and partial integration, \cref{eq:kappa_partial,eq:insert_x},
to compute the two-component vector $\bm{\kappa} = (\kappa_1, \kappa_2)$.
We arrive at
\begin{equation}
\label{eq:kappa_ev}
Q_\lambda \bm{\kappa}
=
\frac{1}{\langle\phi'\rangle}
\bm{\kappa} \,.
\end{equation}
This equation has two consequences:
First, we find that $\lambda = 1 / \langle \phi' \rangle$,
because both quantities are eigenvalues of $Q_\lambda$, see \cref{eq:eigvals_r2}.
Second, the feedback vector $\bm{\kappa}$ is the corresponding eigenvector.
This gives rise to three situations:
\begin{enumerate}[label=(\roman*)]
\item If $Q_\lambda$ has two distinct eigenvalues,
one of them is equal to $\lambda$. The corresponding eigenvector
determines the direction of $\bm{\kappa}$.
\end{enumerate}
In the case of degeneracy, the geometric multiplicity,
that is, the number of eigenvectors, determines the situation.
\begin{enumerate}[label=(\roman*)]
\setcounter{enumi}{1}
\item
If there is only one eigenvector, the direction of $\bm{\kappa}$ is determined
uniquely.
\item If $\lambda$ has two corresponding eigenvectors, any
direction is a solution. The length of
$\bm{\kappa}$ is determined below, \cref{eq:delta_circle},
and we obtain a ring attractor \cite{mastrogiuseppe2018linking}.
This situation arises in the case of precise symmetry, $Q_\lambda = \lambda \mathds{1}$.
\end{enumerate}
Finally, the length of $\bm{\kappa}$ is determined by
the variance $\Delta^0 = \mathbf{x}^T\!\mathbf{x} / N$ of the fixed point, which obeys
\begin{equation}
\label{eq:delta_circle}
\Delta^0 = g^2 \langle \phi^2 \rangle + \kappa_1^2 + \kappa_2^2 \,.
\end{equation}
The fixed point stability is calculated based on the techniques introduced above;
details can be found in Appendix \ref{sub:stability_for_rank_2_perturbation}.
The result is the same as that in the rank-one case:
the stability eigenvalues obey the same equations as before. Namely,
if the spectrum of $J + \mathbf{mn}^T +\mathbf{uv}^T$
has the outliers $\{\lambda_1, \dots, \lambda_K\}$,
there are always two stability eigenvalues $\gamma_\pm$,
both with real parts smaller than one. At a fixed point $\mathbf{x}^{(i)}$,
there are $K - 1$ additional outliers
$\gamma_j = \lambda_j / \lambda_i$ for $j \ne i$.
This implies that the linearized dynamics around a
fixed point are completely determined by its statistics and the spectrum
of the connectivity matrix: as long as the outliers are the same,
the stability eigenvalues are independent of the rank of the perturbation $P$
or its correlations to $J$.
This also answers our question about whether the correlation between
fixed points is responsible
for their strong influence on each other.
The rank-two case, too, can be analyzed by replacing the feedback
$\kappa_1, \kappa_2$ with two constant scalars. The corresponding
manifold is now two-dimensional, and fixed points can be
arbitrarily uncorrelated. In \cref{fig:dynamics_2fps_uncorr_g_08},
we show an example: plotting the projection of the fixed points along
the vectors $\mathbf{m}$ and $\mathbf{u}$ shows that the fixed points are
almost orthogonal. Yet, the spectra at the origin and at each fixed point
are identical to the corresponding rank-one case
(compare with \cref{fig:dynamics_2fps_rank1_corr_g_08}).
The correlation between fixed points is hence not important for the mutual influence of different
fixed points.
\section{Discussion}
\label{sec:discussion}
Given a network with connectivity consisting of a random and a structured part,
we examined the effects of correlations between the two.
We found that such correlations enrich the functional repertoire of the network.
This is reflected in the number of non-trivial fixed points and the spectrum of the connectivity matrix.
We analyzed precisely which aspects of the correlations determine the fixed points and eigenvalues.
In our model, the overlaps $\theta_k = \mathbf{n}^T J^k \mathbf{n}$ quantify the correlations
between random connectivity $J$ and structured, low-rank connectivity $\mathbf{mn}^T$.
For uncorrelated networks, only $\theta_0$ is nonzero, and the spectrum of the
joint connectivity matrix has only a single outlier \cite{rajan2006eigenvalue,tao2013outliers}.
We showed that in correlated networks with $\theta_k$ nonzero for higher $k$,
multiple outliers can exist, and that with them multiple fixed points
induced by a random plus rank-one connectivity structure become possible.
The correlations between random part and rank-one structure hence enrich the
dynamical repertoire in contrast to networks with uncorrelated rank-one structures,
which can only induce a single fixed point \cite{mastrogiuseppe2018linking}.
Note, however, that our assumption of Gaussian connectivity limits the resulting dynamics to a single stable fixed point (discussed below).
Apart from multiple fixed points, the correlated rank-one structure can also lead to a pair of complex conjugate outliers, which in turn yield oscillatory dynamics on a limit cycle.
In the absence of correlations, such dynamics require the perturbation to be at least of rank two
\cite{mastrogiuseppe2018linking}.
Finally, we found that correlations amplify the perturbation due to the structured components:
the norm of a correlated rank-one structure inducing a fixed point
decreases with increasing variance of the
random part, pointing towards possible benefits of large initial random connectivity.
Constraining the model to Gaussian connectivity allowed us to analytically understand the
mechanisms of correlations in a nonlinear network.
We established a remarkable one-to-one correspondence between the outliers of the connectivity matrix and
fixed points of the nonlinear dynamics: each real outlier larger than one induces a single fixed point.
Surprisingly, the stability of the fixed points is governed by a simple set of equations
and likewise depends only on the outliers of the spectrum at the origin.
Through these results, we were able to look at the system at one point in phase space
(the origin)
and determine its dynamics in a different part of the phase space.
It remains an open question to which degree these insights extend to non-Gaussian
connectivity.
Other interesting connectivity models might include sparse connectivity
\cite{neri2012spectra,neri2016eigenvalue,metz2019spectral},
different neuron types \cite{aljadeff2015transition},
or networks of binary neurons such as the Hopfield model \cite{hopfield1982neural}.
Our approach allows us to gain mechanistic insight into the computations underlying
echo state and FORCE learning models which have the same connectivity structure
as our model \cite{jaeger2004harnessing,sussillo2009generating}.
Here, the readout vector $\mathbf{n}$ is trained, which leads to correlations to
the random part $J$ \cite{rivkind2017local,mastrogiuseppe2019geometrical}.
Our results on multiple fixed points and oscillations show that
these correlations are crucial for the rich functional repertoire.
However, constraining our theory to Gaussian connectivity limits the insights,
since the learning frameworks do not have this constraint.
One study analyzing such non-Gaussian rank-one connectivity in the
echo state framework shows that, as in our study, each fixed point has one corresponding outlier
in the connectivity matrix \cite{rivkind2017local}.
However, multiple stable fixed points were observed, which is
in contrast to our model where the Gaussian connectivity only permits the fixed
point with largest variance to be stable.
It would thus be interesting to extend our model beyond the Gaussian statistics.
We pointed out a general limitation of networks with random plus rank-one connectivity:
the restriction of fixed points to a one-dimensional manifold.
This insight is independent of the Gaussian assumption and leads to high correlations between fixed points.
Such correlations have been found to impede sequential learning of multiple fixed points \cite{beer2019one}.
An extension to rank-two structures allows for uncorrelated fixed points.
Surprisingly, however, the strong influence of the largest
outliers on the stability of fixed points still exists for Gaussian rank-two
connectivity.
Indeed, the fixed point statistics and their stability are determined solely by
the spectral outliers of the connectivity matrix, independently of how
these outliers were generated.
Since these relations do not hold in the non-Gaussian case \cite{rivkind2017local},
we conclude that the Gaussian assumption poses a severe limitation to the space of solutions.
Further in accordance with the echo state and FORCE learning frameworks
\cite{jaeger2004harnessing,sussillo2009generating},
we model the correlations to be induced by one of the two vectors forming the rank-one structure.
Some of the results, such as the overlaps $\theta_k$, are symmetric under
the exchange of the two vectors and should hence be unaffected.
The result on the strongly increasing norm of the perturbation when placing
multiple outliers, on the other hand, may depend on this assumption
\cite{logiaco2019model}.
To which degree our results or the capabilities of trained networks are limited by this constraint is not clear.
Our choice to model the structured part as a low-rank matrix was in part motivated by the computational models discussed above.
Besides these, the existence of such structures may also be inspired by a biological perspective.
Any feedback loop from a high-dimensional network
through an effector with a small number of degrees of freedom may be considered
as a low-rank perturbation to the high-dimensional network.
Similarly, feedback loops from cortex through basal ganglia have been modeled
as low-rank connectivity \cite{logiaco2019model}.
Even without such explicit loops, networks may effectively have such structure if
their connectivity is scale-free or contains hubs \cite{rivkind2019scale}.
Finally, low-rank connectivity also appears outside of neuroscience,
for example in an evolutionary setting \cite{furusawa2018formation}.
Whether low-rank matrices arise in general in learning networks, and to which
degree such structure is correlated with the initially present connectivity
are interesting future questions to be approached with the theory we developed here.
\begin{acknowledgments}
This work was supported in part by the Israeli Science Foundation (grant number 346/16, OB).
The project was further supported by the Programme Emergences of the City of Paris, ANR project MORSE (ANR-16-CE37-0016), the program “Ecoles Universitaires de Recherche” launched by the French Government and implemented by the ANR, with the reference ANR-17-EURE-0017.
F.S. acknowledges the Max Planck Society for a Minerva Fellowship.
\end{acknowledgments}
\section{Introduction}
The cosmological evolution of supermassive black holes (SMBH) is a
vibrant topic in modern astrophysics.
Its importance has been recognized ever since the discovery that virtually all massive galaxies in the local Universe host a central SMBH with a
mass proportional to that of the galaxy spheroid \citep[e.g.][]{Kormendy95,Magorrian98,Ferrarese00,Gebhardt00,Tremaine02,Marconi03,Gultekin09,Kormendy09,Zubovas12}. This tight relation indicates that SMBHs and
their host galaxies co-evolve, but the
physical processes that lead to this relation are still debated.
SMBHs grow primarily by accreting surrounding mass, which leads to emission through various physical processes
and to the appearance of an active galactic nucleus (AGN). An accurate
census of the AGN population is essential for understanding the cosmic history of
accretion onto SMBHs and its relation to the host
galaxy. Theoretical models have proposed AGN-driven feedback,
which can expel gas from galaxies, to explain
this co-evolution \citep[e.g.][]{Granato04,Monaco05,Springel05,Croton06,Hopkins06,Schawinski06,Cen11}. In addition, over the past decade or so
both semi-analytical models of galaxy formation and full cosmological hydrodynamical simulations faced the problem of
an excessively large number of bright galaxies formed in massive
haloes \citep[cooling crisis, e.g.][]{Balogh01}. These results
pointed towards the necessary inclusion of AGN feedback in order to
suppress the star formation and produce the observed luminosity
functions.
AGN demographics can provide an assessment of the cosmic SMBH growth history. The AGN luminosity function (LF) is an especially powerful tool when studied over a wide range of redshift and wavelength \citep[e.g.][]{Maccacaro83,Maccacaro84,Maccacaro91,Boyle93,Boyle94,Boyle00,Page96,Ueda03,Ueda14,Wolf03,Barger05, Hasinger05,LaFranca05,Richards06,Bongiorno07,Silverman08,Croom09,Aird10,Aird15,Buchner15,Assef11,Fiore12,Ranalli16,Fotopoulou16}. Arguably, the most effective way to detect active galaxies is through X-ray observations \citep[e.g.][]{Brandt15}. The majority of the detected extragalactic X-ray sources are AGN, while their unresolved integrated contribution essentially builds up the X-ray cosmic background \citep[][]{Setti89,Comastri95}. Although several methods and models have been explored over the years, there are still uncertainties in the evolution of the LF at high redshift and the amount of nuclear obscuration. Further progress in such studies will require larger AGN samples and
knowledge of the joint ($N_{\rm H},z$) distribution \citep[][]{Ueda14}.
Producing a realistic simulated X-ray AGN population that originates directly from the SMBH population can provide an invaluable tool in the study of structure and SMBH evolution. Used in conjunction with the underlying large-scale structure, it could hint at the physical mechanisms that lead to the observed properties of AGN populations, for example the correlation function of AGN, the halo occupation distribution (HOD), the environmental differences of obscured and unobscured AGN. It is also of great importance for X-ray cluster surveys, especially of the high-$z$ universe ($z>1$) where the level of contamination of the X-ray cluster emission by a powerful AGN is largely unknown. Therefore, such a catalog can be of unprecedented value in the era of precision cosmology.
However, difficulties arise from the many uncertainties regarding the observed X-ray properties of AGN. Firstly, there is no established consensus on the ratio of X-ray to bolometric luminosity. Although several X-ray samples have been used over the years \citep[e.g.][]{Marconi04,Hopkins07,Vasudevan10,Lusso12,Shankar13} to produce a reliable bolometric correction function, the results remain discrepant. Secondly, the column density distribution of the AGN torus is a highly disputed topic. All X-ray background and unabsorbed X-ray luminosity function (XLF) studies \citep[e.g.][]{Ueda14,Buchner15,Ranalli16, Fotopoulou16} had to address the issue but the adopted approaches differ from study to study. Most of the results do indicate, however, a strong luminosity dependence \citep[e.g.][]{Ueda03,Ueda14,Simpson05} and an evolution of the column density \citep[e.g.][]{LaFranca05,Hasinger05,Ueda14}.
In the current paper, we have used the output AGN catalogs from the cosmo-OWLS suite of cosmological hydrodynamical simulations \citep[][]{Lebrun14} to produce a simulated population of X-ray AGN up to redshift 3, and we have compared with observations. As shown by \citet{Lebrun14}, models which include AGN feedback perform significantly better than those that do not with regards to reproducing the observed properties of local galaxy groups and clusters. Assessing the realism of the predicted AGN population in the simulations is therefore a powerful independent test of the simulations that invoke AGN feedback.
In Sect. 2 we present the simulations and the SMBH modeling. We also present the XXL survey, which we used to compare the projected correlation function of the simulated AGN with observations, and within the framework of which the X-ray AGN modeling was undertaken. In Sect. 3 we describe the applied methodology and in Sect. 4 we present the results and compare the properties of the simulated AGN catalog with observations. Finally, in Sect. 5 we summarize our results and discuss the possible future applications of the catalog.
We call (a.) the X-ray AGN catalog after applying the bolometric corrections ``the unabsorbed X-ray AGN catalog'', (b.) the one derived after applying obscuration ``the absorbed X-ray AGN catalog'', and (c.) the one after applying the simulated {\it XMM-Newton} observational features and the detection pipeline ``the detected X-ray AGN catalog''. An outline of the procedure and of the products is presented in Table 1. When referring to the soft and the hard band we always mean the 0.5-2 keV and the 2-10 keV bands, respectively.
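For orientation, the two bands can be related analytically for an unabsorbed power-law spectrum. The short sketch below assumes a photon index of $\Gamma=1.9$, a typical AGN value that is an illustrative assumption, not a value prescribed at this point in the text.

```python
import numpy as np

# Energy flux ratio between the hard (2-10 keV) and soft (0.5-2 keV)
# bands for a power-law photon spectrum dN/dE ~ E^-Gamma.
def band_flux(e_lo, e_hi, gamma):
    """Band-integrated energy flux, up to a common normalization."""
    if np.isclose(gamma, 2.0):
        return np.log(e_hi / e_lo)
    return (e_hi**(2.0 - gamma) - e_lo**(2.0 - gamma)) / (2.0 - gamma)

gamma = 1.9  # assumed photon index
hard_over_soft = band_flux(2.0, 10.0, gamma) / band_flux(0.5, 2.0, gamma)
```

For $\Gamma=1.9$ the hard-band flux exceeds the soft-band flux by roughly a third; the ratio is sensitive to the assumed photon index.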
\section{Data description}
\subsection{The cosmological hydrodynamical simulations}
The cosmo-OWLS simulations were carried out with a version of the Lagrangian
TreePM-SPH code GADGET3 \citep[last described in][]{Springel05}, which
has been modified to include additional sub-grid physics. The simulation volume is a periodic box of 400 $h^{-1}$ (comoving) Mpc on a side.
The initial conditions were based either on the maximum-likelihood cosmological parameters derived from the 7-year WMAP \citep[][]{Komatsu11} or the Planck data \citep[][]{PlanckXVI}. The number of particles was 2$\times1024^3$, yielding dark matter and (initial) baryon particle masses of $\sim 3.75\times10^9 h^{-1} M_{\sun}$ and $\sim 7.54\times10^8 h^{-1} M_{\sun}$ for the WMAP7 cosmology. In the current work, we have used the WMAP7 runs by default, since the WMAP-predicted cluster density is consistent with the observed number count in the XXL survey, in contrast to the Planck cosmology predictions \citep{Pacaud16}.
Nevertheless, the Planck runs were also tested and a comparison is presented. Further details about the way radiative cooling rates, reionization, star formation, stellar evolution and SN feedback were implemented in the cosmo-OWLS can be found in \citet[][]{Schaye10} and references therein.
For each simulation, ten different light cones were produced, each of 25 deg$^2$, thus matching the area of one XXL survey field (see Sect. 2.3). The interested reader can refer to \citet{McCarthy14} for further details of the light-cone making method. X-ray maps for the hot diffuse gas were produced for each light cone by summing the X-ray emission of each gas particle along the line of sight in pixels of $2.5''$, matching the real XXL pixel scale. A description of how the X-ray emission of gas particles was computed can be found in \citet[][]{Lebrun14}. X-ray AGN were then added to the maps, using the actual locations of accreting SMBHs in the simulations (i.e. we create light cones for the SMBHs as well) and their predicted X-ray emission, which is described below.
\subsection{SMBH modeling in the cosmo-OWLS}
Three of the cosmo-OWLS runs included AGN feedback as the result of accretion onto SMBHs. This was incorporated
using the sub-grid prescription of \citet[][]{Booth09}, where the interested reader can find all the details of the modeling; here we summarize the ingredients essential for the present study.
During each simulation, an on-the-fly friends-of-friends (FoF) algorithm is applied on the dark matter particles. All haloes
with more than 100 particles (corresponding to a mass of $\log_{10}[M_{\rm FoF}\,(M_{\sun}h^{-1})]\approx11.6$) are seeded with SMBH sink particles.
The initial SMBH mass is 0.001 times the (initial) gas particle mass ($\sim10^5\,M_{\sun}h^{-1}$).
The simulated SMBHs grow via Eddington-limited, modified Bondi-Hoyle-Lyttleton accretion \citep[][]{Bondi44,Hoyle39} and by merging with other SMBHs. The accretion rate is given by:
\begin{equation}
\dot m_{acc}=\alpha\frac{4\pi G^2M^2_{\rm SMBH}\rho}{(c^2_s+u^2)^{3/2}},
\end{equation}
where $M_{\rm SMBH}$ is the mass of the black hole, $\rho$ and $c_s$ are the gas density and the sound speed of the local medium, and $u$ is the relative velocity of the black hole with respect to the ambient medium. The relation is modified with respect to the standard Bondi accretion rate through the inclusion of the multiplicative factor $\alpha$, which was originally introduced by \citet[][]{Springel05b} to correct for the limitations of the simulations. Specifically, in typical cosmological hydro simulations, the numerical resolution is too low to resolve the Bondi radius, and therefore the estimated accretion rate will be an underestimate of the true rate. Furthermore, and more importantly, many cosmological hydro simulations (such as OWLS, Illustris, EAGLE, etc.) do not include an explicit modeling of the cold interstellar medium (ISM), but instead invoke an equation of state for dense gas, in order to avoid numerical fragmentation. The use of an equation of state, which adds pressure to the gas (to mimic
turbulence in the ISM), can also lead to a significant underestimate of the gas density near the SMBH, and therefore an underestimate of the accretion rate onto the SMBH.
In order to overcome these problems, \citet[][]{Springel05b}, and most subsequent studies that used this model, adopted a constant $\alpha$=100. OWLS and cosmo-OWLS adopted a somewhat different strategy, following \citet[][]{Booth09}. In particular, in \citet{Booth09}, $\alpha$ depends on the local gas density, as $\alpha \propto \rho^2$. However, at low densities, which can be resolved by the simulations, the accretion rate reverts back to the standard Bondi rate (i.e. with $\alpha=1$).
The black hole mass grows following the relation:
\begin{equation}
\dot M_{\rm SMBH}=\dot m_{acc}(1-\epsilon_r),
\end{equation}
where $\epsilon_r$ is the radiative efficiency of the black hole, fixed at 10\% here. In addition, 15\% of the radiated energy is coupled to the surrounding medium (i.e. feedback), while the remaining 85\% is allowed to escape.
The accretion rate is always limited by the Eddington rate:
\begin{equation}
\dot m_{\rm Edd}=\frac{4\pi G M_{\rm SMBH}m_p}{\epsilon_r\sigma_{T} c},
\end{equation}
where $m_p$ is the proton mass, $\sigma_T$ is the Thomson cross-section and c the speed of light.
The Eddington ratio $\lambda$ is defined as
\begin{equation}
\lambda=L_{bol}/L_{Edd},
\end{equation}
where $L_{Edd}=(M_{\rm SMBH}/M_{\sun})\times1.3\times10^{38}$erg sec$^{-1}$.
Finally, SMBH-SMBH mergers take place when two black holes are within a distance $h_{BH}$ (the smoothing length) and their relative velocity $\upsilon$ is less than the circular velocity ($\upsilon<\sqrt{Gm_{BH}/h_{BH}}$, where $m_{BH}$ is the mass of the most massive SMBH). When these conditions are met, the merger takes place instantaneously.
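The accretion model of Eqs. (1)-(4) can be sketched numerically. All parameter values below (gas density, sound speed, black hole mass) are illustrative assumptions, not values taken from the simulations.

```python
import numpy as np

# Physical constants in cgs units
G = 6.674e-8          # gravitational constant, cm^3 g^-1 s^-2
m_p = 1.673e-24       # proton mass, g
sigma_T = 6.652e-25   # Thomson cross-section, cm^2
c = 2.998e10          # speed of light, cm s^-1
M_sun = 1.989e33      # solar mass, g
eps_r = 0.1           # radiative efficiency

def bondi_rate(M_bh, rho, c_s, u, alpha=1.0):
    """Modified Bondi-Hoyle-Lyttleton accretion rate (Eq. 1), g s^-1."""
    return alpha * 4.0 * np.pi * G**2 * M_bh**2 * rho / (c_s**2 + u**2)**1.5

def eddington_rate(M_bh):
    """Eddington accretion rate, g s^-1."""
    return 4.0 * np.pi * G * M_bh * m_p / (eps_r * sigma_T * c)

# Illustrative case: a 1e7 M_sun SMBH sitting in dense, cold gas
M_bh = 1e7 * M_sun
mdot = min(bondi_rate(M_bh, rho=1e-24, c_s=1e6, u=0.0), eddington_rate(M_bh))
L_bol = eps_r * mdot * c**2                   # bolometric luminosity, erg s^-1
edd_ratio = L_bol / (M_bh / M_sun * 1.3e38)   # Eddington ratio (Eq. 4)
```

Under these (dense, cold) conditions the Bondi rate exceeds the Eddington limit, so the accretion is capped and the Eddington ratio comes out close to unity.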
\subsubsection{AGN feedback}
As discussed earlier, AGN feedback is an important ingredient of the simulations which is necessary to suppress star formation and avoid the excessive formation of very massive galaxies. \citet[][]{Lebrun14} showed that the inclusion of AGN feedback leads to good agreement between the stellar masses of real and simulated brightest cluster galaxies (BCGs). The feedback also regulates the accretion onto the black holes themselves. Therefore, we anticipate that different feedback models will directly affect the predicted AGN demographics (e.g. the XLF). We note that SN feedback is also modeled in the simulations. In this section we summarize briefly the AGN feedback modeling.
cosmo-OWLS transforms a fraction of the rest-mass energy of the accreted gas into heating of the neighbouring gas particles, by increasing their
temperature. An advantage of the \citet[][]{Booth09} model is that it overcomes the problem of numerical overcooling (i.e. the problem that feedback energy can be rapidly radiated away due purely to low mass resolution). This is accomplished by raising the temperature of only a small number $n$ of surrounding gas particles by a predefined amount $\Delta T$. To this end, a fraction $\epsilon$ of the accreted energy is stored in the SMBH until it reaches the required value. $\Delta T$ and $n$ are chosen such that the cooling time of the heated gas is sufficiently long, while the time needed to accumulate the energy for a feedback event remains shorter than the Salpeter time for Eddington-limited accretion. It is shown that $\Delta T = 10^8$K and $n=1$ satisfy the two constraints (AGN 8.0 model). However, in \citet[][]{Lebrun14} two more values of $\Delta T$ were tested, that is $3\times10^8$K (model 8.5) and $5\times10^8$K (model 8.7). The AGN 8.0 model proved more suitable for the purposes of that paper with Planck cosmology, while with WMAP7 the
observational data tends to be bracketed by the AGN 8.0 and AGN 8.5 models. In the current work we have
tested both models.
Note that when $\Delta T$ is set to a higher value, more time is needed to accumulate the energy
to heat the gas particle and we actually simulate more energetic bursts. As already noted, the net efficiency $\epsilon$ is set to 0.015, which results in a good match to the normalization of the $z=0$ relations between SMBH mass and stellar mass and velocity dispersion, as well as to the observed cosmic SMBH density, as demonstrated by \citet[][]{Booth09} and \citet[][]{Lebrun14}.
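To give a feel for the energy scales involved, the sketch below estimates how much mass must be accreted before a single heating event with $\Delta T=10^8$ K and $n=1$ can fire. The mean molecular weight $\mu=0.6$, the monatomic adiabatic index, and $h=0.7$ are assumptions made here for illustration, not values from the text.

```python
# Energy bookkeeping for one AGN heating event in a Booth & Schaye-style
# feedback model. Assumed (not from the text): mu = 0.6, gamma = 5/3, h = 0.7.
k_B = 1.381e-16                  # Boltzmann constant, erg K^-1
m_p = 1.673e-24                  # proton mass, g
c = 2.998e10                     # speed of light, cm s^-1
M_sun = 1.989e33                 # solar mass, g
eps = 0.015                      # net feedback efficiency (0.1 * 0.15)
mu, gamma_ad = 0.6, 5.0 / 3.0
dT = 1e8                         # heating increment, K (AGN 8.0 model)
m_gas = 7.54e8 / 0.7 * M_sun     # (initial) gas particle mass for h = 0.7

# thermal energy needed to raise one particle's temperature by dT
E_heat = m_gas * k_B * dT / ((gamma_ad - 1.0) * mu * m_p)
# accreted mass required before the event can fire
dm_acc = E_heat / (eps * c**2)
```

With these assumptions, of order $10^6\,M_{\sun}$ must be accreted between successive heating events, illustrating why higher $\Delta T$ values produce rarer but more energetic bursts.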
Finally, the cosmo-OWLS output SMBH catalog, which is the input SMBH catalog in the current study, provides the position, the redshift, the mass and the bolometric luminosity $L_{bol}$ of all SMBHs in the 25 deg$^2$ light cones up to redshift $z=3$.
\subsection{The XXL survey}
The XXL Survey is the largest {\it XMM-Newton} project approved to date ($>$6 Msec),
surveying two $\sim$ 25 deg$^2$ fields with a median exposure of 10.4 ks
and at a depth of $\sim5\times10^{-15}$ erg sec$^{-1}$ cm$^{-2}$ in the [0.5-2] keV soft X-ray band
(completeness limit for the point-like sources). The two fields have extensive multi-wavelength coverage
from X-ray to radio. A general description of the survey and its goals was published by \citet[][]{Pierre16}.
To date some 450 new galaxy clusters have been detected out to redshift $z\sim2$, as well as more than 20000 AGN
out to $z\sim4$. The main goal of the project is to constrain the dark energy equation
of state parameter, $w$, using clusters of galaxies. This survey will also have lasting legacy value for cluster scaling laws
and studies of galaxy clusters, AGN, and X-ray background. The XXL-S (Southern) field, which we use in the current study, is one of two XXL fields, centered at RA=23$^{h}$30 and DEC=-55$^{d}$00.
\section{Methodology}
In the following sections we describe the procedure used to convert the output black hole catalog of the simulations to the final X-ray AGN catalog.
We preselected our sample so that only active black holes were included. To this end, we set an absolute accretion rate threshold of $10^{-6} M_\odot/$year \citep[][]{Ho08}, which corresponds to a bolometric luminosity cut of $\sim 5\times10^{39}$ erg s$^{-1}$. This cut eliminated almost one-third of the SMBH sample, but we note that SMBHs with luminosities below this threshold would not be detected with current surveys. Therefore, our cut was a conservative one. We further assumed that all AGN with luminosities exceeding this threshold were X-ray emitters and therefore potentially detectable in X-ray surveys. This was a reasonable assumption because almost all AGN identified by optical, infrared, and radio techniques show X-ray AGN signatures \citep[see review on AGN demographics by][and references within]{Brandt15}. Therefore, X-ray emission seems to be almost universal, at least for the luminous AGN. Nevertheless, it appears that a small number of intrinsically X-ray weak but luminous AGN
does exist \citep[e.g.][]{Wu11,Luo14}. However, current studies indicate that they are so rare that their impact on demographic studies should be very small \citep[e.g.][]{Gibson08,Wu11,Luo14}.
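The correspondence between the accretion-rate floor and the quoted bolometric cut can be checked directly, assuming the radiative efficiency $\epsilon_r=0.1$ adopted in Sect. 2.2:

```python
# Check that an accretion-rate floor of 1e-6 M_sun/yr translates into the
# quoted bolometric cut of ~5e39 erg/s, for eps_r = 0.1 (Sect. 2.2).
c = 2.998e10                       # speed of light, cm s^-1
M_sun = 1.989e33                   # solar mass, g
yr = 3.156e7                       # one year, s
eps_r = 0.1

mdot_cut = 1e-6 * M_sun / yr       # accretion-rate threshold, g s^-1
L_cut = eps_r * mdot_cut * c**2    # bolometric luminosity cut, erg s^-1
```

This yields $L_{\rm cut}\approx5.7\times10^{39}$ erg s$^{-1}$, consistent with the quoted value.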
An alternative strategy, which has been adopted in some previous theoretical studies \citep[e.g.][]{Rosas16}, would be to select which AGN will be X-ray emitters based on the predicted Eddington ratio. The motivation for this comes from the fact that there is a known empirical correlation between the Eddington ratio and the predominant emission wavelength \citep[e.g.][]{Dai04,Saez08}. Without an Eddington ratio cut, there is the potential that we will include low-Eddington rate sources (e.g. radio AGN) in our sample. However, as we will show, recent observations suggest that X-ray AGN actually span a relatively wide range of Eddington ratios (which we will compare to; see Fig. 5 and Fig. 6), which means that there would be a strong possibility of excluding genuine X-ray emitters by adopting a fixed Eddington threshold (e.g. 0.01, as adopted in some previous studies). This argues against adopting a fixed Eddington rate threshold. Furthermore, we will show that, with our adopted luminosity cut, only a negligibly small
fraction of our selected simulated AGN have very low Eddington accretion rates of $\lambda < 10^{-4}$, which are typical of radio AGN.
Below we describe the (inverse) bolometric corrections (i.e. to convert the simulated bolometric luminosity into an observable X-ray luminosity) and the application of AGN obscuration to produce our final X-ray AGN sample.
\begin{table}
\begin{minipage}{87mm}
\centering
\caption{Methodology outline}
\tabcolsep 3 pt
\renewcommand{\arraystretch}{1.8}
\begin{tabular}{|l|c|c|}
\hline
Tool or methodology & output & results\vspace{-7pt} \\
{\em (1)}&{\em (2)}&{\em (3)}\\
\hline
cosmo-OWLS (Sect. 2) & SMBH catalog & \\
\hline
\multirow{2}{85pt}{bolometric corrections (Sect. 3.1)} & \multirow{2}{65pt}{\centering unabsorbed X-ray AGN catalog}& \multirow{2}{65pt}{\centering unabsorbed X-ray LF (Sect. 4.1)} \\
& & \\
\hline
\multirow{3}{85pt}{absorption function (Sect. 3.2)}& \multirow{3}{65pt}{\centering absorbed X-ray AGN catalog} & \multirow{3}{70pt}{\centering Eddington ratio distribution \& black hole mass function (Sect. 4.2)}\\
&&\vspace{-2pt}\\
&&\\
\hline
\multirow{3}{85pt}{{\it XMM-Newton} instrumental effects (Sect. 3.3)}& \multirow{3}{65pt}{\centering detected X-ray AGN catalog} & \multirow{3}{77pt}{\centering projected correlation function (Sect. 4.3)}\\
&&\vspace{-7pt}\\
&&\\
\hline
\end{tabular}
\tablefoot{{\em (1)} The applied tool or methodology (and the section where it is described), {\em (2)} the name of the output catalog,
{\em (3)} the results (and the sections where they are described).}
\end{minipage}
\end{table}
\subsection{Bolometric correction}
Despite concerted efforts to combine various X-ray and optical surveys (e.g. XMM-COSMOS, CDF-N, CDF-S, ROSAT, SDSS, 2dF), exploiting the area of shallow surveys and the depth of pencil-beam surveys, there is still no general consensus among different studies
on the fraction of the total bolometric luminosity ${\rm L_{bol}}$ that is emitted at X-ray wavelengths \citep[for a comparison between different studies see][L12 hereafter]{Lusso12}.
Nevertheless, most studies do agree that the correction depends on the luminosity itself, in the sense that it becomes increasingly large with increasing bolometric luminosity. However, the scatter in published relations is relatively large. In addition, a number of studies \citep[e.g.][]{Vasudevan07,Vasudevan09b,Vasudevan10} presented evidence that the bolometric corrections of their low-$z$ AGN samples depend primarily on the Eddington ratio and not on the luminosity. \citet[][]{Shankar13} studied this relation thoroughly using semi-empirical models of AGN, but concluded that their modeling, although very elaborate, cannot reproduce the observational constraints well. We note, however, that L12 reported a clear correlation of increasing Eddington ratio with increasing luminosity up to redshift 2.3, which implies that both are probably correlated with the bolometric corrections in a similar way.
In the current study we have implemented the simple approach of adopting luminosity-dependent bolometric corrections only, of which we tried several. As we will show, with recently determined bolometric corrections from either L12 or \citet[][M04 hereafter]{Marconi04}, the simulations predict a hard XLF that is consistent with the observations of \citet[][]{Ranalli16}, \citet[][]{Aird15}, \citet{Miyaji15}, and \citet{Buchner15} (see Sect. 4.1).
It is worth noting that we also explored using the bolometric corrections proposed by \citet[][]{Hopkins07}, but found significantly worse agreement with the observed XLF. To estimate the bolometric corrections, \citet[][]{Hopkins07} combined a large number of optical, soft and hard X-ray, and mid-IR catalogs, and they provide the corrections for a wide range of bolometric luminosities. However, we found that the level of the proposed corrections is very high, producing an under-luminous simulated X-ray AGN population that fails to reproduce the hard-band unabsorbed XLF. This may be attributed to the inclusion of reprocessed emission in their calculations (although we cannot rule out that the discrepancy is also due in part to inadequacies in the underlying predicted bolometric LF). M04, by contrast, constructed a template spectrum to study the local black hole properties of optical QSOs, and they explicitly removed the IR bump in order to estimate the bolometric corrections without the reprocessed radiation. However, they assumed that the template spectrum, and thus the derived bolometric corrections, is redshift independent. L12, on the other hand, derived empirical bolometric corrections using {\it XMM-COSMOS} hard X-ray selected AGN. Their corrections are generally smaller than those proposed by M04, but consistent within the scatter. The sample used in L12 spans the full redshift range up to $z=3$ but, as expected, undersamples the AGN population at low redshifts. Therefore, a mild evolution of the bolometric corrections could reconcile the differences between the corrections proposed by L12 and M04. In any case, we explore both sets of corrections in Sect. 4.1, showing that adopting either leads to reasonable agreement with the observed XLF. In both cases the functions are approximated by third-degree polynomials:
\begin{equation}
y=\alpha_1x+\alpha_2x^2+\alpha_3x^3+\beta,
\end{equation}
where $y={\rm log_{10}}[L_{bol}/L_{band}]$, and $x={\rm log_{10}}[L_{bol}/L_{\sun}]-12$. The set of parameters ($\alpha_1, \alpha_2, \alpha_3, \beta $) are given by (L12: 0.217, 0.009, -0.010, 1.399) and (M04: 0.22, 0.012, -0.0015, 1.65) for $L_{band}=L_{[0.5-2 keV]}$, and by (L12: 0.230, 0.050, 0.001, 1.256) and (M04: 0.24, 0.012, -0.0015, 1.54) for $L_{band}=L_{[2-10 keV]}$.
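For clarity, eq. (1) and the coefficient sets above can be transcribed directly into a small helper (the function and dictionary names are ours, for illustration only):

```python
# Third-degree polynomial bolometric corrections:
# y = log10(L_bol/L_band) as a function of x = log10(L_bol/L_sun) - 12,
# with coefficients (alpha1, alpha2, alpha3, beta) as listed in the text.
COEFFS = {
    ("L12", "soft"): (0.217, 0.009, -0.010, 1.399),
    ("M04", "soft"): (0.22, 0.012, -0.0015, 1.65),
    ("L12", "hard"): (0.230, 0.050, 0.001, 1.256),
    ("M04", "hard"): (0.24, 0.012, -0.0015, 1.54),
}

def band_luminosity(log_lbol_lsun, source="L12", band="hard"):
    """Return log10(L_band/L_sun) given log10(L_bol/L_sun)."""
    a1, a2, a3, b = COEFFS[(source, band)]
    x = log_lbol_lsun - 12.0
    y = a1 * x + a2 * x**2 + a3 * x**3 + b   # y = log10(L_bol / L_band)
    return log_lbol_lsun - y
```

As required by the text, the correction $y$ grows with bolometric luminosity, so the fraction of the bolometric output emitted in each X-ray band decreases for brighter AGN.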
\subsection{Obscuration}
Obscuration was implemented for our X-ray catalog following the absorption function $f(L_X,z;N_{\rm H})$ introduced by \citet[][]{Ueda14}. To derive this function they used a highly complete sample compiled from several surveys with {\it Swift/BAT, MAXI, ASCA, XMM-Newton, Chandra}, and {\it ROSAT}. The function also takes Compton-thick AGN (log $N_{\rm H}>24$) into account. The level of absorption is strongly luminosity dependent and evolves with redshift. In particular, the fraction of absorbed AGN (log$N_{\rm H}>22$) increases steeply with decreasing AGN luminosity, from $\sim$20\% for high-luminosity AGN ($L_X>10^{45}$ erg sec$^{-1}$) to more than 80\% for low-luminosity sources. The function also includes a positive evolution of the absorbed fraction with redshift, as reported by several studies \citep[e.g.][]{LaFranca05,Ballantyne06,Treister06,Hasinger08}. We note that there are large uncertainties involved in these calculations, as clearly stated in \citet[][]{Ueda14},
especially for the faint AGN.
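Purely as a schematic illustration of this luminosity dependence (the actual implementation uses the full \citet[][]{Ueda14} absorption function, including its redshift evolution and Compton-thick fraction), one could sketch the absorbed fraction as a simple interpolation between the quoted limits:

```python
import random

# Toy sketch only: the absorbed fraction (log N_H > 22) falls from
# ~0.8 at low luminosity to ~0.2 above L_X ~ 1e45 erg/s.  The linear
# interpolation in log L_X and the pivot luminosities are illustrative
# assumptions; they are NOT the Ueda et al. (2014) function used in
# the paper, which also evolves with redshift.
def absorbed_fraction(log_lx, lo=42.0, hi=45.0):
    if log_lx <= lo:
        return 0.8
    if log_lx >= hi:
        return 0.2
    return 0.8 - 0.6 * (log_lx - lo) / (hi - lo)

def is_absorbed(log_lx, rng=random):
    """Randomly flag a source as absorbed with the above probability."""
    return rng.random() < absorbed_fraction(log_lx)
```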
\begin{figure}[h!t]
\centering
\resizebox{7.5cm}{20cm}{\includegraphics[angle=0, origin=c]{Sims.eps}}
\caption{Simulated {\it XMM-Newton} images and source detection. Top: X-ray photon map from cosmo-OWLS overplotted with red (cyan) circles that mark the position and redshift of the input dark matter haloes (secondary haloes). The radius of each circle represents the $r_{500}$ radius. Middle: same as top overplotted with the X-ray contours (10 ks exposure) and the position of the input simulated AGN (black squares). Bottom: same as top after including {\it XMM-Newton} instrumental effects and background. Green squares (circles) mark significant detections of point-like (extended) sources by the detection algorithm.}
\end{figure}
\begin{figure*}[t]
\centering
\resizebox{19cm}{15cm}{\includegraphics[angle=270, origin=c]{Aird.eps}}
\caption{2-10 keV unabsorbed X-ray luminosity functions of synthetic and observed AGN. The eleven panels correspond to redshift bins up to redshift 3. The results of our modeling with cosmo-OWLS data are marked with red lines (continuous for bolometric corrections based on L12, dashed for M04). Black circles (with 1-$\sigma$ errors) denote the intrinsic hard XLF by \citet[][]{Aird15}. In the last redshift bin we plot data from \citet[][orange circles]{Miyaji15}, which span a more pertinent redshift range. The grey bands indicate the 90\% confidence interval of a non-parametric fit of observational data by \citet[][]{Buchner15}. For comparison, we also plot data points (blue circles) by \citet[][]{Ranalli16} (with 1-$\sigma$ errors) at the bright-end of the XLF (the 11 deg$^2$ of the XMM-LSS survey were used).}
\end{figure*}
\begin{figure*}[t]
\centering
\resizebox{19cm}{15cm}{\includegraphics[angle=270, origin=c]{models.eps}}
\caption{2-10 keV unabsorbed luminosity functions obtained using various cosmologies and AGN feedback models (see Sect. 2.2.1). We also overplot the results of similar analyses with the EAGLE (blue continuous lines) and the Magneticum Pathfinder simulations (blue dashed lines). Planck cosmology results (red continuous line) use the AGN8.0 model. Observed XLFs are plotted as in Fig. 2.}
\end{figure*}
\begin{figure*}[t]
\centering
\resizebox{19cm}{7cm}{\includegraphics[angle=270, origin=c]{schulze.eps}}
\caption{ERDF (left panel) and BHMF (right panel) of broad-line AGN (type 1, $N_{\rm H}<10^{22}$ cm$^{-2}$) between redshift 1 and 2 (black continuous line). We overplot relevant X-ray selected data from SXDS (red circles) and optically selected data from VVDS (green triangles) and zCOSMOS (blue squares), corrected for incompleteness with the 1/$V_{\rm max}$ method. A luminosity limit of $L_{bol}>10^{44}$erg sec$^{-1}$ was imposed on the simulation data according to the respective limitations of the above surveys. When the limit is relaxed (red line), the number of sources continuously increases towards low $\lambda$ and low $M_{\rm SMBH}$. We also present the respective distributions of sources with $z<1$ (dashed line) and $z>2$ (dotted line).}
\end{figure*}
We did not implement any further criteria that may play a role in the obscuration of black holes, for example interactions and mergers of the host galaxies. This could in principle have an impact on the correlation function of obscured AGN compared to the unabsorbed population. However, studies using X-ray selected samples \citep[e.g.][]{Coil09,Ebrero09,Mountrichas12} did not find significant differences, although \citet[][]{Elyiv12} reported different clustering for hard and soft X-ray sources.
Obscured fluxes in the soft and the hard X-ray bands were calculated with NASA's HEASARC tool PIMMS\footnote{https://heasarc.gsfc.nasa.gov/docs/software/tools/pimms.html} (Portable, Interactive Multi-Mission Simulator), where the k-correction was applied assuming a power-law spectrum with photon index $\Gamma=1.9$ \citep[e.g.][]{Nandra94}.
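For a power-law spectrum, the conversions involved reduce to simple closed forms; the sketch below reproduces only this underlying arithmetic (it is not a substitute for PIMMS, and the band-ratio helper is our own illustrative function):

```python
# For a power-law energy spectrum F_E ∝ E^(1-Gamma) with photon index
# Gamma = 1.9, the k-correction is (1+z)^(Gamma-2) and band fluxes
# follow from integrating E^(1-Gamma) over the band.
GAMMA = 1.9

def k_correction(z, gamma=GAMMA):
    """Multiplicative k-correction for a power-law spectrum."""
    return (1.0 + z) ** (gamma - 2.0)

def band_ratio(e1, e2, e3, e4, gamma=GAMMA):
    """Energy-flux ratio F[e3,e4]/F[e1,e2] (band edges in keV)."""
    p = 2.0 - gamma
    return (e4**p - e3**p) / (e2**p - e1**p)

# e.g. hard (2-10 keV) flux relative to soft (0.5-2 keV) flux
hard_over_soft = band_ratio(0.5, 2.0, 2.0, 10.0)
```

For $\Gamma=1.9$ the k-correction is very mild, $(1+z)^{-0.1}$, which is why a single photon index is a reasonable working assumption over the full redshift range.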
\subsection{Simulated {\it XMM-Newton} images and source detection}
Synthetic X-ray images were created from the perfect-sky X-ray photon-maps and
the input X-ray AGN catalog. We also added a realistic background, which
included X-ray photons (vignetted), solar soft protons (vignetted),
and particles (not vignetted). We modeled the photon background, following \citet[][]{Snowden08},
as the sum of a Galactic and an extragalactic contribution.
The Galactic contribution was computed by the superposition
of two absorbed MEKAL components \citep[][]{Mewe85} at $0.1$ keV and $0.25$ keV
from the galactic halo and another, unabsorbed, MEKAL component at $0.1$ keV
from the Local Hot Bubble; the extragalactic contribution (from unresolved AGN)
was modeled as a power law with index 1.46. The solar soft proton background was modeled after \citet[][]{Snowden08}, as a power law
with index 0.9; particle background was computed from 200ks {\it XMM-Newton} exposures with
closed filter wheel and we chose not to include flares.
Finally, an ideal event list was created by merging the above contributing photons. It was then blurred
to simulate the {\it XMM-Newton} instrumental effects: PSF blurring (assuming a King profile PSF), energy blurring,
vignetting; particle background was also added.
In all cases we assumed a 10 ks exposure time, as in the XXL survey. Photons were reshuffled in position and energy, or were discarded according to the simulated
local effective area, exposure time, vignetting factor, detector (MOS1, MOS2, PN) or filter (THIN).
Therefore, we obtained three event lists (one for each EPIC detector) that included instrumental effects and
which were converted to images in the 0.5-2 keV and 2-10 keV bands at $2.5''$ per pixel. We also produced the corresponding exposure maps.
Source extraction was performed on these images for the soft
and the hard band separately, in the same way as for the XXL survey images, via the
XAmin pipeline \citep[][]{Pacaud06}. In more detail,
first a preliminary list of source candidates was selected by running
SEXtractor \citep[][]{Bertin96} on a wavelet smoothed combined (MOS1, MOS2, PN)
X-ray image. Then, on each candidate source, a series of fits was performed on the three raw X-ray images:
a point source model (assuming a position-dependent {\it XMM-Newton} PSF), an extended source model (assuming a $\beta=2/3$ profile),
a double point source model (two {\it XMM-Newton} PSFs close on the image), and an extended$+$point source model
($\beta=2/3$ profile with central {\it XMM-Newton} PSF). In \citet[][]{Pacaud06} the threshold level for a significant detection was chosen such that any detection compatible with a non-extended source has a $\sim$99\% probability of being a real source and not a background fluctuation.
An example of the resulting images and pipeline detections of the above procedure is presented in Fig. 1. The detected AGN usually have more than 10 counts, while the remaining input sources are either detected at low significance or not detected at all.
\section{Results}
In the following sections we present the comparison of the synthetic AGN catalogs with observational results. Obtaining a good agreement is essential for any further application of the simulated catalogs.
\subsection{Unabsorbed hard X-ray luminosity function}
After implementing the bolometric corrections described in Sect. 3.1, we produced catalogs of X-ray AGN and their respective intrinsic X-ray luminosities (before obscuration). To assess how closely these catalogs relate to the observed X-ray AGN population, we compare our results to the unabsorbed (de-obscured) hard-band XLFs of \citet[][]{Ranalli16}, \citet[][]{Aird15}, \citet{Miyaji15}, and \citet{Buchner15}. The differential luminosity function $\Phi$ is defined as the number of objects $N$ per comoving volume $V$ and per logarithmic interval of unabsorbed luminosity $L$ as follows:
\begin{equation}\label{eq:lf}
\Phi(L,z)=\frac{d^2N(L,z)}{dV\,d{\rm log}L}.
\end{equation}
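As an illustration of how this definition is applied to a synthetic catalog, a minimal binned estimator can be sketched as follows (the bin width, luminosity range, and precomputed comoving volume are illustrative assumptions, not the actual analysis configuration):

```python
from collections import Counter

# Minimal binned estimator of the differential XLF for one redshift
# slice: Phi(bin) = N_bin / (V * dlogL).  The comoving volume of the
# slice (volume_mpc3) is assumed to be computed elsewhere with a
# cosmology code; all names here are illustrative.
def binned_xlf(log_lx_values, volume_mpc3, lmin=42.0, lmax=46.0, dlogl=0.5):
    nbins = int(round((lmax - lmin) / dlogl))
    counts = Counter()
    for logl in log_lx_values:
        if lmin <= logl < lmax:
            counts[int((logl - lmin) / dlogl)] += 1
    # map bin centers to Phi values (zero where the bin is empty)
    return {lmin + (i + 0.5) * dlogl: counts[i] / (volume_mpc3 * dlogl)
            for i in range(nbins)}
```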
The comparison within ten redshift bins up to $z=3$ is illustrated in Fig. 2. For clarity we mainly plot data points from \citet[][]{Aird15}, except in the $z=2.5-3$ range where \citet{Miyaji15} data are more pertinent. We also plot the 90\% confidence interval of the non-parametric fit by \citet{Buchner15}. This is an important addition since their analysis, which takes all uncertainties and the contribution of Compton-thick AGN into account, does not predict the sharp flattening of the XLF towards low-luminosity high-redshift bins, a common behaviour of previous parametric fits.
Using the empirical bolometric corrections of L12, the simulations reproduce the observed XLF in all redshift bins, although there is possibly a slight overestimate for the local population at $z<0.5$ (Fig. 2, top panels), according to the XLF of \citet[][]{Aird15}; the results are, however, more consistent with \citet{Buchner15}. Using the template-spectrum corrections of M04, the simulations reproduce the XLF up to roughly $z\sim0.5$, but somewhat underestimate it at higher redshifts. Recall that the M04 corrections are probably more accurate for the low-redshift population, since they were computed from a template spectrum at $z=0$, while the L12 corrections are based on X-ray observations that cover the full redshift range but undersample the local population. Therefore, assuming a mild evolution of the bolometric corrections, one can use the M04 functions for the low-redshift sources ($z<0.5$) and L12 for high-$z$ sources. Alternatively, L12 can be used exclusively, bearing in mind the probable overestimation of bright low-$z$ sources, although all points are consistent within 2-$\sigma$.
We note that applying a mild evolution to the M04 relation, so that it gradually reaches the L12 level by $z\sim0.5$, does not alter the results considerably. We therefore use the results based exclusively on the L12 estimations for the rest of the paper, although we thoroughly tested all alternatives and found no qualitative differences.
In general, it is apparent that simulations are in good agreement with observations within all redshift and luminosity bins.
Nevertheless, above redshift 1.5 the simulated points in the low-luminosity bins start to deviate, showing a tendency to overestimate the number of faint AGN. This discrepancy, which evolves with redshift, could be due to the limitations of the simulations, the applied bolometric corrections, or the completeness of the observational surveys. However, we note that the simulations are fully consistent with the non-parametric results of \citet{Buchner15}, which do not support the sharp flattening of the XLF. For relatively shallow surveys like the XXL (10 ks average exposure time), this area of the XLF is mostly unprobed, since such faint sources at such high redshifts would not be detected; it becomes more relevant for deeper surveys. At the bright end, our results agree very well with the XLF of \citet[][]{Aird15}, but they are located at the lower limit of the fit by \citet{Buchner15}. The plotted points of the XLF by \citet[][]{Ranalli16}, who also use the 11 deg$^2$ of the XMM-LSS field, show that we may indeed underestimate the bright population at high redshifts, but not greatly.
Finally, in Fig. 3, we present the X-ray luminosity functions that we obtain using a different cosmology (Planck, as opposed to WMAP7) and the AGN8.5 feedback model from cosmo-OWLS (as opposed to our default choice, the AGN8.0 model), using the L12 bolometric corrections. It is apparent that changing the cosmology does not affect the results, since they are extremely similar to those obtained with WMAP7 (Fig. 2). On the other hand, as expected, the AGN feedback plays an important role. The relatively low level of the XLF for the AGN8.5 model, compared to the AGN8.0 model, shows that adopting a more powerful feedback results in less effective accretion and therefore in a less luminous AGN population.
We also compare our results with those of other recent simulations, including the EAGLE \citep{Rosas16} and the Magneticum Pathfinder simulations \citep{Hirschmann14} in Fig.~3. In terms of the comparison to EAGLE, the predicted XLFs agree relatively well at the faint end of the XLF, while they tend to underpredict the bright end. This difference may be due to the limited volume of the EAGLE simulations, the use of the M04 bolometric corrections, the exclusion of low-$\lambda$ sources (they omit log$_{10}\lambda<-2$ sources), and/or differences in the modeling of SMBH accretion rates. By contrast, the Magneticum Pathfinder simulations, which also use the M04 corrections, tend to overpredict the XLF at most luminosities and the discrepancy tends to grow with redshift. The steep drop of the predicted XLF at high redshifts and low luminosities may be due to the adoption of an inefficient mode of accretion for all log$_{10}\lambda<-1$ sources.
\subsection{Eddington ratio and SMBH mass distribution}
\begin{figure*}[t]
\centering
\resizebox{18cm}{13cm}{\includegraphics[angle=270, origin=c]{lusso.ps}}
\caption{Eddington ratio vs. bolometric luminosity (left panels) and black hole mass (right panels). The simulated AGN sample was divided into two redshift bins, $z<1.2$ (top panels) and $1.2<z<2.3$ (bottom panels), and into unobscured (type-1, $N_{\rm H}<10^{22}$ cm$^{-2}$) and obscured (type-2, $N_{\rm H}>10^{22}$ cm$^{-2}$) sources, to match the L12 samples (circles and squares, mean values with 1-$\sigma$ errors). X-ray luminosity lower limits (as marked on the plots) were also imposed for the same reason. When the lower luminosity limits are relaxed, the results are shown with dotted (type-1) and dashed lines (type-2). Obscuration does not affect the results plotted on the left; therefore only one line is drawn for the two samples. The gray lines are the respective results of the type-1 sample in the high-$z$ range, plotted to demonstrate the evolution. We also plot the results of the AGN8.5 model (dash-dotted lines) and the mean $\lambda$ values (not corrected for incompleteness)
of previous studies at the mean luminosities of their samples. See Sect. 4.2 for more discussion on the observed trends.}
\end{figure*}
\begin{figure*}[t]
\centering
\resizebox{15cm}{15cm}{\includegraphics[angle=270, origin=c]{stats.ps}}
\caption{Eddington ratio distribution of the X-ray AGN within three redshift bins. On the left panels we plot the perfect-sky distribution and on the right panels the distribution after all observational and {\it XMM-Newton} instrumental effects were simulated (10 ks exposures, see Sect. 3.3). To illustrate observational selection effects, we overplot deep observational data on the perfect-sky distribution, and shallow on the 10 ks exposures (see Sect. 4.2 for more discussion).}
\end{figure*}
In this section, we study the differential Eddington ratio distribution function ($\Phi_\lambda$) and the differential SMBH mass function ($\Phi_\bullet$) of the synthetic X-ray population. The two functions follow the formalism of eq. (\ref{eq:lf}) replacing $L$ with $\lambda$ and $M_{\rm SMBH}$, respectively.
The Eddington ratio, being the ratio of the bolometric luminosity to the Eddington luminosity, is a clear indicator of activity, although there is no explicit threshold that characterizes a turning point, and it is apparently redshift dependent. In the local Universe, the majority of AGN have $\lambda$ between $10^{-6}$ and $10^{-3}$ \citep[see review by][and references therein]{Alexander12}. The same review also argues that optically detected AGN have an Eddington ratio distribution that peaks at $10^{-2}$. On the other hand, X-ray AGN from $z=0.3$ to $\sim$2.5 have typical Eddington ratios between $10^{-4}$ and $10^{-1}$ \citep[e.g.][]{Babic07,Hickox09,Raimundo10,Lusso12}. However, at higher redshift the uncertainties are very large.
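For concreteness, the Eddington ratio used throughout this section follows the standard definition, sketched below (the numerical coefficient assumes a pure-hydrogen Eddington luminosity; variable names are illustrative):

```python
# Eddington ratio: lambda = L_bol / L_Edd, with the standard
# L_Edd ≈ 1.26e38 (M/M_sun) erg/s for hydrogen gas.
L_EDD_COEFF = 1.26e38  # erg/s per solar mass

def eddington_ratio(lbol_erg_s, m_smbh_msun):
    return lbol_erg_s / (L_EDD_COEFF * m_smbh_msun)

# e.g. an AGN with L_bol = 1e44 erg/s around a 1e8 M_sun black hole
lam = eddington_ratio(1e44, 1e8)   # ~8e-3
```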
First, we compare the intrinsic unobscured AGN population (type 1, $N_{\rm H}<10^{22}$ cm$^{-2}$) between redshift 1 and 2 to the results of relevant studies: X-ray selected sources ($z$=1.18-1.68) from the {\it SUBARU XMM-NEWTON Deep Field} \citep[SXDS,][]{Ueda08} described in \citet[][]{Nobuta12}, and optically selected sources from the VVDS \citep{LeFevre13} and zCOSMOS \citep{Lilly07} surveys ($z$=1.0-1.9 and $z$=1.1-2.1, respectively) described in \citet{Schulze15}. To determine the unobscured simulated sample we apply the torus obscuration as described in Sect. 3.2. Comparing the unobscured population is the optimal choice, since obscuration corrections are minimal, especially in the hard X-ray band. In addition, a significant part of the accretion growth probably takes place within this redshift range.
In Fig. 4, we plot the Eddington ratio distribution function (ERDF) and the black hole mass function (BHMF) of the above observational data and of our results (limited to $L_{bol}>10^{44}$ erg sec$^{-1}$). The observational data are corrected for incompleteness with the 1/$V_{max}$ method. We find good agreement between simulations and observations in both cases, although the shape of the VVDS ERDF is discrepant. We also plot the distribution of our data in the low-$z$ ($z<1$) and the high-$z$ ($z>2$) ranges. There is a clear evolution of the two functions, namely a significant increase of low-$\lambda$ and high-mass sources toward lower redshifts. Owing to the luminosity limit, the number of sources increases only down to approximately ${\rm log}_{10}\lambda=-2$ in the low-$z$ range and then rapidly decreases. However, if we relax the imposed luminosity limit, the number of sources increases continuously towards low $\lambda$ and low $M_{\rm SMBH}$, in agreement with the modeling of \citet[][]{Schulze15}, which takes the low-flux sources below the survey limits into account.
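The 1/$V_{\rm max}$ correction applied to the observational points can be sketched generically as follows (this assumes the $V_{{\rm max},i}$ volumes, i.e. the maximum comoving volumes within which each source would still exceed the survey flux limit, have already been computed with a cosmology code):

```python
# Sketch of the 1/V_max incompleteness correction: each detected
# source contributes 1/V_max_i to its bin of lambda (or M_SMBH),
# and the sum is divided by the bin width to give a differential
# number density.  Uniform bin edges are assumed for simplicity.
def vmax_estimate(bin_values, vmax_values, edges):
    nb = len(edges) - 1
    phi = [0.0] * nb
    for x, vmax in zip(bin_values, vmax_values):
        for i in range(nb):
            if edges[i] <= x < edges[i + 1]:
                phi[i] += 1.0 / vmax
                break
    width = edges[1] - edges[0]
    return [p / width for p in phi]
```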
Second, to reproduce the observational results presented in L12, we divide our sample into two redshift bins, $z<1.2$ and $1.2<z<2.3$, and into unobscured (type-1, $N_{\rm H}<10^{22}$ cm$^{-2}$) and obscured (type-2, $N_{\rm H}>10^{22}$ cm$^{-2}$) sources. X-ray luminosity lower limits were also imposed for the same reason. In Fig. 5 we plot the Eddington ratio vs. bolometric luminosity (left panels) and black hole mass (right panels). There is an excellent agreement between simulations and observations within 1-$\sigma$; the AGN8.5 results are more discrepant, especially in the high-$z$ range. We note that the axes are not independent and the trends need to be interpreted carefully. As expected, obscured and unobscured sources with the same intrinsic luminosities have the same Eddington ratio distributions (the same lines represent both samples in the left panels). The differences between the two types in the low-$z$ range, reported in L12, are not observed. Nevertheless, we find a clear evolution toward higher Eddington ratios at higher redshifts. On the other hand, when $\lambda$ is plotted versus mass, the two AGN types differ. We argue that this difference results from the shift of the type-2 sample toward lower luminosities: if we select subsamples with the same luminosity distribution, the differences disappear. Nevertheless, the evolution is again apparent. Finally, if we relax the luminosity lower limits, the simulated SMBH distributions flatten significantly, as expected from the shape of the BHMF in Fig. 4.
In Fig. 6, we plot the Eddington ratio distribution of the simulated X-ray AGN catalog divided into three redshift bins. There is a clear increase of the high-$\lambda$ fraction with increasing redshift, both before (left panels) and after (right panels) introducing the observational and instrumental effects described in Sect. 3.3 (10 ks exposures). The low-$z$ AGN sample exhibits the lowest Eddington ratio values, peaking roughly at $10^{-3}$ in both cases, while the majority of sources above $z$=2 have $\lambda$ values above 10$^{-2}$. Evidently, the steep evolution found for the detected sources is partly due to selection effects, since deeper surveys probe more low-$\lambda$ AGN at higher redshifts than shallow ones. This is demonstrated by overplotting data from the {\it Chandra} deep fields \citep[][]{Raimundo10,Babic07} and from the AGN and Galaxy Evolution Survey (AGES) \citep[][]{Hickox09}: the deep surveys trace the perfect-sky distribution, while the shallow ones match our 10 ks exposures. This agrees with the strong positive correlation of $\lambda$ with luminosity found in previous studies and presented in Fig. 5.
Considering the above results, we conclude that our final AGN catalog follows the observed trends rather well.
\subsection{Projected correlation function and comparison with observations}
The final assessment of the simulated X-ray AGN catalog is the comparison of the predicted large-scale spatial distribution, as quantified by the projected two-point correlation function, with that of the real XXL data. This is of great importance since large-scale structure is a powerful diagnostic for
tracing the cosmic evolution of the AGN (and galaxy) populations. We note that X-ray, IR and radio-selected AGN display different clustering properties, a fact which implies that specific modes of SMBH accretion may be related to the host dark matter halo \citep[e.g.][]{Hickox09,Melnyk13}, although selection effects cannot be ruled out.
The soft band projected correlation function of the southern XXL sample of
spectroscopically confirmed point-like sources
and its possible systematics will be presented in detail in a forthcoming paper.
The southern field has been chosen for this study due to the
homogeneity of its spectroscopic follow-up data, which is based uniquely on the
multifiber AAOmega facility on AAT, as compared to the
northern field which is based on a compilation of different surveys with
different instruments, limiting magnitudes, selection biases and solid angles.
The XXL-S spectroscopic sample contains $\sim 3740$ of the $\sim4100$ total X-ray point sources (a $\magcir$90\% completeness) with $r$-band magnitude $\lesssim 21.8$ (the instrument detection limit), obtained during two AAT observing runs. About $\sim$10\% of the sources are stars, and our final AGN spectroscopic sample therefore consists of 3355 unique sources, of which 3106 are detected in the soft X-ray band and 1893 in the hard band.
To compare the simulation with the XXL-S AGN projected correlation function, which is based only on confirmed sources, we need to exclude the spurious pipeline detections in the simulations. To this end, we correlated the resulting catalog of significant pipeline detections with the true simulated X-ray AGN input catalog (constructed before the creation of the XMM images). This resulted in $\sim7000$ soft-band X-ray sources, a number consistent with that of the real XXL data but a factor of $\sim$2 larger than that of the XXL-S sources with spectroscopy, an unavoidable consequence of the limiting magnitude of the AAOmega spectroscopic facility.
To avoid redshift-space distortion effects we used the projected correlation function, $w_p(r_p)$ \citep[][]{Davis83}, which is based on decomposing the redshift-based comoving distance, $s$, into components parallel and perpendicular to the line of sight, $\pi$ and $r_p$ respectively, as $s^2=r_p^2+\pi^2$. The projected correlation function is then found by integrating $\xi(r_p,\pi)$ along the $\pi$ direction:
\begin{equation}\label{eq:wp}
w_p(r_p)=2\int_{0}^{\infty}\xi(r_p,\pi) \mathrm{d}\pi \;.
\end{equation}
The real-space correlation function can be recovered following \citet[][]{Davis83}:
\begin{equation}\label{eq:wp_real}
w_p(r_p)=2\int_{0}^{\pi_{\rm max}}\xi\left(\sqrt{r_p^2+\pi^2}\right)
{\rm d}\pi =2\int_{r_p}^{\infty}
\frac{x \xi(x)\mathrm{d}x}{\sqrt{x^2-r_p^2}}\;.
\end{equation}
Modelling $\xi(x)$ as a power law one obtains:
\begin{equation}\label{eq:wp_model}
w_p(r_p)=A(\gamma) r_p \left(\frac{x_{0}}{r_p}\right)^{\gamma},
\end{equation}
with $x_{0}$ the projected comoving clustering length at the effective
redshift of the sample, and
\begin{equation}
A(\gamma)=\Gamma\left(\frac{1}{2}\right)
\Gamma\left(\frac{\gamma-1}{2}\right)/\Gamma\left(\frac{\gamma}{2}\right),
\end{equation}
with $\Gamma$ the usual gamma function.
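The power-law model above can be evaluated directly; the short sketch below transcribes $w_p(r_p)$ and the $A(\gamma)$ prefactor (the function names are ours):

```python
import math

# Power-law model of the projected correlation function:
# w_p(r_p) = A(gamma) * r_p * (x0/r_p)^gamma, with
# A(gamma) = Gamma(1/2) * Gamma((gamma-1)/2) / Gamma(gamma/2).
def a_gamma(gamma):
    return (math.gamma(0.5) * math.gamma((gamma - 1.0) / 2.0)
            / math.gamma(gamma / 2.0))

def wp_power_law(rp, x0, gamma):
    """Projected correlation function for a power-law xi(x)=(x0/x)^gamma."""
    return a_gamma(gamma) * rp * (x0 / rp) ** gamma
```

As a quick sanity check, $A(2)=\Gamma(1/2)^2/\Gamma(1)=\pi$.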
We note that eq. (\ref{eq:wp}) holds strictly for $\pi_{\rm max}=\infty$,
while in order to avoid redshift-space distortions
the integral is performed up to a finite value of
$\pi_{\rm max}$, which in turn produces an underestimation of the underlying
projected correlation function.
However, for the aim of comparing the clustering of the real XXL-S
sources to that of the simulated AGN we do not recover the true projected
comoving correlation length, $x_0$, but we just compare directly the $w_p(r_p)$
representation of the correlation function for the same value of
$\pi_{\rm max}$.
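The size of this underestimation can be illustrated numerically for a power-law $\xi(x)=(x_0/x)^\gamma$ by comparing the truncated integral with the analytic ($\pi_{\rm max}=\infty$) result; the parameter values below are illustrative only, not fitted to the XXL data:

```python
import math

# Numerically illustrate the underestimation of w_p(r_p) caused by a
# finite integration limit pi_max, for a power law xi(x) = (x0/x)^gamma.
def a_gamma(gamma):
    return (math.gamma(0.5) * math.gamma((gamma - 1.0) / 2.0)
            / math.gamma(gamma / 2.0))

def wp_truncated(rp, x0, gamma, pi_max, n=40000):
    """Midpoint-rule estimate of 2 * int_0^pi_max xi(sqrt(rp^2+pi^2)) dpi."""
    dpi = pi_max / n
    total = 0.0
    for i in range(n):
        pi_ = (i + 0.5) * dpi
        x = math.hypot(rp, pi_)
        total += (x0 / x) ** gamma * dpi
    return 2.0 * total

# Illustrative parameters (h^-1 Mpc units implied)
rp, x0, gamma = 1.0, 6.0, 1.8
wp_full = a_gamma(gamma) * rp * (x0 / rp) ** gamma   # analytic, pi_max = inf
ratio = wp_truncated(rp, x0, gamma, pi_max=40.0) / wp_full   # < 1: truncation bias
```

The truncated estimate always falls below the analytic value, and the deficit grows as $\pi_{\rm max}$ shrinks, which is why both data and simulations must be compared at the same $\pi_{\rm max}$.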
\begin{figure}[t]
\centering
\resizebox{8cm}{8cm}{\includegraphics{final_9_2017.ps}}
\caption{Projected correlation function of the XXL-S
point-like sources (in red) and of the simulated X-ray AGN, detected
through the XXL pipeline (black lines, ten realizations).}
\end{figure}
In Fig. 7, we present the projected correlation function of the ten
realizations of the simulated XXL point sources together with that of
the XXL-S spectroscopic sample. In both cases we have limited the
sources to those with $L_X>10^{41}$ erg sec$^{-1}$. It is evident that
there is good consistency between data and simulations for
$r_p\gtrsim 3$ $h^{-1}$ Mpc, although at small separations the XXL-S
correlation function falls below that of the simulations (a fact that
could possibly be attributed to the spectroscopic targeting strategy,
which will be discussed in a forthcoming paper).
\section{Summary and discussion}
We presented the methodology used to produce a simulated population of
X-ray AGN from the SMBH population of the cosmo-OWLS
hydrodynamical simulations. The resulting AGN catalogs were compared
with observations to assess if they follow the
observed trends. We used ten light-cones of 25 deg$^2$ each, up to redshift 3.
Black holes in cosmo-OWLS grow through accretion of
surrounding gas and merging with other black holes. Stellar
disruption is neglected. Some previous studies argue, however, that it may play an
important role for AGN demographics \citep[e.g.][]{Milosavljevic06},
i.e. that many low-luminosity AGN may be due to the accretion of disrupted stellar mass. The rates of these events, as reported from X-ray surveys, are rather low ($10^{-4}$--$10^{-5}$ yr$^{-1}$ galaxy$^{-1}$) and agree well with theory and
simulations \citep[see][]{Komossa12}.
Simulations however, showed that these rates are independent of the
SMBH mass and thus only the growth of the intermediate or least massive
SMBHs may be dominated by stellar disruptions \citep[e.g.][]{Brockamp11},
while white dwarfs (extremely common) can only be observed
in X-rays when they are disrupted by intermediate-mass black holes,
$M_{\rm SMBH}<10^5 M_{\sun}$ \citep[e.g.][]{Luminet89,Rosswog09}. In addition,
observations show that the X-ray LF for
moderate-luminosity active galactic nuclei is not due to tidal
disruptions \citep[][]{Luo08}. We note that there are many
uncertainties affecting these results and other effects which might
reduce the fraction of stellar matter that is finally accreted by the
black hole.
In the present study we have used the bolometric corrections calculated in
L12 from X-ray AGN in the XMM-COSMOS survey. M04 bolometric corrections, derived from template
spectra, can also be used at the low-$z$ range. We argue that the two approaches are complementary (see Sect. 3.1).
Probably the most interesting result is how well the
simulated catalog reproduces the intrinsic luminosity
function \citep[][]{Aird15,Miyaji15} in almost all redshift bins and
luminosities. A small discrepancy only appears at low-luminosities
above redshift 1.5, which increases with redshift. This discrepancy is
also present in other hydrodynamical simulations like the EAGLE
\citep[][]{Rosas16} and the Magneticum Pathfinder simulations
\citep[][]{Hirschmann14}.
However, we note that our results are in good agreement with the non-parametric XLF of \citet[][]{Buchner15}.
To produce obscured and unobscured AGN catalogs, we
applied obscuration to all our sources following the obscuration function
by \citet[][]{Ueda14}. Following the observational trends, the function is luminosity-dependent and it evolves with redshift.
Additional induced obscuration during galaxy merging was
not considered. However, a correlation of AGN obscuration with merging
may exist, since galaxy interactions and merging may lead to the triggering of SMBH activity
\citep[e.g.][]{Hopkins08,Koulouridis06b,Koulouridis06,Koulouridis13,Villarroel14},
and to an enhancement of obscuration during the initial stage of AGN
evolution \citep[e.g.][]{Koulouridis14,Villarroel17}.
We compared our AGN catalog properties with observational results (Eddington ratio distribution, black hole mass function) and we concluded that the
simulated AGN population comprises sources that reproduce well the observed tendencies and the
evolution of the Eddington ratio, meaning that at higher redshift AGN accrete
more efficiently. Selection effects were also discussed.
We also compared the projected two-point correlation function of the
simulated AGN catalog with the corresponding one from the $\sim$25 deg$^2$
southern XXL field.
The relatively good reproduction of the X-ray AGN large-scale
structure, both in observations and the simulation, has important consequences for
cosmology as it is related to the initial fluctuation spectrum and its
evolution. It further implies that the dark
matter haloes, hosting X-ray selected AGN, correspond directly to the
simulated ones, and thus the simulation provides a test-bed for
understanding the physical processes shaping the triggering and
evolution of the SMBHs in the Universe. We caution that the selection of the sources is not exactly the same,
with the XXL-S data sources being a magnitude-limited sample defined by the AAOmega limit of $r\simeq 21.8$.
Nevertheless, another interesting aspect of the general agreement is that an
optical host-galaxy magnitude-limited AGN sample agrees quite well with the underlying X-ray AGN sample,
represented by the simulation data. In a forthcoming paper (Plionis et al. in prep.),
which studies the AGN clustering in much greater detail,
we perform a thorough and consistent comparison of the simulations and the XXL point-source redshift data.
On the X-ray cluster side, this sample can give valuable insight
for the high redshift ($z>1$) X-ray cluster population. X-ray clusters
are indeed detected in the redshift range between $z$=1 and 2, but the
level of AGN contamination and their selection function are completely
unknown. Very little is also known for the AGN which reside in
clusters (not the BCG) at these redshifts. There are indications of a
turn-over point at $z=1$ where not only AGN \citep[e.g.][]{Martini13}
but also star-forming galaxies behave differently in their preference for
dense environments. Our catalogs are well suited to exploring such
questions in a statistical sense.
On the other hand, a successful synthetic AGN population should reproduce not only the observed AGN demographics, but also the detailed scaling relations of SMBHs, including their slope, amplitude, intrinsic scatter, and evolution. Recent studies demonstrated the essential role of the velocity dispersion in the relation between SMBHs and their host galaxies \citep[e.g.][]{Bluck16,Shankar16}. In addition, there is evidence of significant bias in the Magorrian relation \citep[e.g.][]{Lasker16,Reines16,Shankar16}, which introduces further complications for realistic AGN modeling. Unfortunately, the relatively low resolution of the current simulations (a spatial resolution of 4 $h^{-1}$ kpc, owing to the fact that we are simulating huge volumes of the universe in order to model the galaxy cluster population) prevents us from making meaningful comparisons of this sort at present. Measurements of the line-of-sight velocity dispersions at small scales would therefore be unreliable.
Furthermore, we note that these simulations, like most cosmological simulations, do not reproduce in detail the observed galaxy stellar mass function, therefore we do not expect some of the scaling relations to be realistic.
In the present study, we have focused on the quasar demographics first, as this is crucial to our modeling and interpretation of the XXL survey. Going forward, however, the models must continue to be improved and challenged.
Given the limitations of the simulations and the uncertainties of the models
used in the current work, we were able to produce synthetic X-ray AGN catalogs
which perform well when compared with observations.
The advantage of these catalogs is that the properties of the X-ray sources
are directly linked to that of their host dark matter haloes and thus
they can be used in conjunction with the underlying large scale structure distribution
provided by the simulations.
In brief,
to produce a realistic synthetic AGN population:
\begin{itemize}
\item we used the SMBH list of the cosmo-OWLS simulations \citep[][AGN8.0 feedback model, WMAP7 cosmology]{Lebrun14},
\item we used the empirical assessment of the bolometric corrections by \citet{Lusso12} to convert the simulated AGN bolometric luminosities to X-ray emission,
\item we applied the obscuration function by \citet{Ueda14} to compute the column density of the AGN torus and the observed X-ray flux,
\item we modeled the X-ray background by adding (a) the X-ray photon and solar proton contribution following Snowden et al. (2008),
and (b) the particle background from 200 ks closed filter wheel {\it XMM-Newton} exposures, and
\item we simulated all instrumental and survey-dependent signatures.
\end{itemize}
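As a schematic illustration of how these ingredients chain together, consider the following sketch of a single catalog entry. All numerical choices below (the constant bolometric correction, the luminosity-independent obscured fraction, the cosmological parameters) are placeholders for illustration only; they do not reproduce the actual L12 bolometric corrections or the \citet{Ueda14} obscuration function:

```python
import math
import random

# Placeholder ingredients: the real pipeline uses the L12 bolometric
# corrections and the luminosity- and redshift-dependent Ueda et al. (2014)
# obscuration function, neither of which is reproduced here.
K_BOL = 20.0              # toy constant L_bol / L_X ratio (placeholder)
OBSCURED_FRACTION = 0.5   # toy luminosity-independent obscured fraction (placeholder)
MPC_IN_CM = 3.086e24

def luminosity_distance_cm(z, H0=70.4, Om=0.272, n=1000):
    """Flat-LCDM luminosity distance via a trapezoid integral of 1/E(z)."""
    E = lambda zz: math.sqrt(Om * (1 + zz) ** 3 + (1 - Om))
    h = z / n
    s = 0.5 * (1 / E(0.0) + 1 / E(z)) + sum(1 / E(i * h) for i in range(1, n))
    dc_mpc = (299792.458 / H0) * h * s      # comoving distance in Mpc
    return (1 + z) * dc_mpc * MPC_IN_CM

def synthetic_agn(l_bol, z, rng):
    """One toy catalog row: bolometric luminosity -> X-ray luminosity and flux."""
    l_x = l_bol / K_BOL                          # bolometric correction step
    obscured = rng.random() < OBSCURED_FRACTION  # obscuration draw step
    flux = l_x / (4 * math.pi * luminosity_distance_cm(z) ** 2)
    return {"L_X": l_x, "obscured": obscured, "flux_cgs": flux}

rng = random.Random(0)
row = synthetic_agn(l_bol=1e45, z=1.0, rng=rng)
print(row)
```

The instrumental steps (background components, survey-dependent signatures) act on the fluxes produced at this stage and are not sketched here.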
We argue that the described methodology can be applied on
the output of next generation hydrodynamical simulations \citep[e.g.
BAHAMAS:][]{Mccarthy17}, while, by adjusting the instrumental and the survey-dependent parameters,
the produced synthetic AGN catalogs can provide predictions for
future X-ray missions.
\acknowledgements
We would like to thank the anonymous referee for constructive comments that have helped us to improve the quality of this paper. We would like to thank James Aird, Johannes Buchner and Yetli Rosas-Guevara for providing their data and Joop Schaye for helpful discussions. XXL is an international project based around an {\it XMM-Newton}
Very Large Programme surveying two 25 deg$^2$
extragalactic fields at a depth of $5\times10^{-15}$ erg s$^{-1}$ cm$^{-2}$ in [0.5-2] keV at the 90\% completeness level (see XXL paper I).
The XXL website is
http://irfu.cea.fr/xxl. Multiband information and spectroscopic follow-up of the
X-ray sources are obtained through a number of survey programmes, summarized at http://xxlmultiwave.pbworks.com/.
EK acknowledges the Centre National d’Etudes Spatiales
(CNES) and CNRS for support of post-doctoral research. FP acknowledges support by the German Aerospace Agency (DLR) with funds
from the Ministry of Economy and Technology (BMWi) through grant 50 OR 1514 and grant 50 OR 1608.
\bibliographystyle{aa}
\section{Sentence Meaning in Vector Spaces}
\label{sec:intro}
While for decades sentence meaning has been represented in terms of complex formal structures, the most recent trend in computational semantics is to model semantic representations with dense distributional vectors (aka \emph{embeddings}). As a matter of fact, distributional semantics has become one of the most influential approaches to lexical meaning, because of the important theoretical and computational advantages of representing words with continuous vectors, such as automatically learning lexical representations from natural language corpora and multimodal data, assessing semantic similarity in terms of the distance between the vectors, and dealing with the inherently gradient and fuzzy nature of meaning \citep{Erk:2012,Lenci:2018a}.
Over the years, intense research has tried to address the question of how to project the strengths of vector models of meaning beyond word level, to phrases and sentences. The mainstream approach in distributional semantics assumes the representation of sentence meaning to be a vector, exactly like lexical items.
Early approaches simply used pointwise vector operations (such as addition or multiplication) to combine word vectors to form phrase or sentence vectors \citep{mitchell2010composition}, and in several tasks they still represent a non-trivial baseline to beat \citep{rimell2016relpron}.
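These pointwise baselines are trivial to implement. The sketch below uses toy four-dimensional vectors invented for illustration (real embeddings come from a trained model) and compares the composed phrase vectors with the cosine:

```python
import math

def add(u, v):
    """Additive composition: elementwise sum of the word vectors."""
    return [a + b for a, b in zip(u, v)]

def mult(u, v):
    """Multiplicative composition: elementwise (Hadamard) product."""
    return [a * b for a, b in zip(u, v)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy embeddings (invented values, for illustration only).
black = [0.9, 0.1, 0.3, 0.5]
cat   = [0.2, 0.8, 0.4, 0.1]
dog   = [0.3, 0.7, 0.5, 0.2]

black_cat = add(black, cat)
black_dog = add(black, dog)
print(cosine(black_cat, black_dog))   # similar phrases get similar vectors
```

Note that both operations are symmetric in their arguments, so the composed vector for \emph{Lilly loves Imogen} would be identical to that for \emph{Imogen loves Lilly}: role-sensitive information is lost by construction.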
More recent contributions can be essentially divided into two separate trends. The former attempts to model `Fregean compositionality' in vector space, and aims at finding progressively more sophisticated compositional operations to derive sentence representations from the vectors of the words composing them \citep{baroni2013frege,paperno2014practical}.
In the latter trend, dense vectors for sentences are learned as a whole, in a similar way to neural word embeddings \citep{mikolov2013distributed,levy2014neural}: for example, the encoder-decoder models of works like \citet{kiros2015skip} and \citet{hill2016learning} are trained to predict, given a sentence vector, the vectors of the surrounding sentences.
Representing sentences with vectors appears to be unrivaled from the applicative point of view, and has indeed important advantages such as the possibility of measuring similarity between sentences with their embeddings, as is customary at the lexical level, which is then exploited in tasks like automatic paraphrasing and captioning, question-answering, etc. Recently, probing tasks have been proposed to test what kind of syntactic and semantic information is encoded in sentence embeddings \citep{ettinger2016probing,adi2016fine,conneau2018you,zhu2018exploring}. In particular, \citet{zhu2018exploring} show that current models are not able to discriminate between different syntactic realizations of semantic roles, and fail to recognize that \emph{Lilly loves Imogen} is more similar to
its passive counterpart than to \emph{Imogen loves Lilly}. Moreover, it is difficult to recover information about the component words from sentence embeddings \citep{adi2016fine,conneau2018you}.
An exception is the question-answering system by \citet{palangi2018question}, whose tensor-product semantic representations have been claimed to be grammatically interpretable.
However, the complexity of the semantic information brought by sentences and the difficulty to interpret the embeddings raise doubts about the general theoretical and empirical validity of the ``sentence-meaning-as-vector'' approach.
In this paper, we propose a \textbf{Structured Distributional Model} (SDM) of sentence meaning that combines word embeddings with formal semantics and is based on the assumption that sentences represent events and situations. These are regarded as inherently complex semantic objects, involving multiple entities that interact with different roles (e.g., agents, patients, locations etc.). The semantic representation of a sentence is a formal structure inspired by Discourse Representation Theory (DRT) \citep{Kamp:2013} and containing distributional vectors. This structure is dynamically and incrementally built by integrating knowledge about events and their typical participants, as they are activated by lexical items. Event knowledge is modeled as a graph extracted from parsed corpora and encoding roles and relationships between participants that are represented as distributional vectors. The semantic representations of SDM retain the advantages of embeddings (e.g., learnability, gradability, etc.), but also contain directly interpretable formal structures, differently from classical vector-based approaches.
SDM is grounded on extensive psycholinguistic research showing that generalized knowledge about events stored in semantic memory plays a key role in sentence comprehension \citep{mcrae2009people}. On the other hand, it is also close to recent attempts to look for a ``division of labour'' between formal and vector semantics, representing sentences with logical forms enriched with distributional representations of lexical items \citep{Beltagy:etal:2016,Boleda:Herbelot:2016,McNally:2017}. Like SDM, \cite{McNally:Boleda:2017} propose to introduce embeddings within DRT semantic representations. At the same time, differently from these other approaches, SDM consists of formal structures that integrate word embeddings with a distributional representation of activated event knowledge, which is then dynamically integrated during semantic composition.
The contribution of this paper is twofold. First, we introduce SDM as a cognitively-inspired distributional model of sentence meaning based on a structured formalization of semantic representations and contextual event knowledge (Section \ref{sec:Model}). Secondly, we show that the event knowledge used by SDM in the construction of sentence meaning representations leads to improvements over other state-of-the-art models in compositionality tasks. In Section \ref{sec:Exp}, SDM is tested on two different benchmarks: the first is RELPRON \citep{rimell2016relpron}, a popular dataset for the similarity estimation between compositional distributional representations; the second is DTFit \citep{vassallo2018event}, a dataset created to model an important aspect of sentence meaning, that is the typicality of the described event or situation, which has been shown to have important processing consequences for language comprehension.
\section{Dynamic Composition with Embeddings and Event Knowledge}
\label{sec:Model}
SDM rests on the assumption that natural language comprehension involves the \emph{dynamic construction of semantic representations, as mental characterization of the events or situations described in sentences}. We use the term `dynamic' in the sense of dynamic semantic frameworks like DRT, to refer to a bidirectional relationship between linguistic meaning and context \citep[see also][]{Heim:1983}:
\begin{quote}
\noindent{}The meaning of an expression depends on the context in which it is used, and its content is itself defined as a \emph{context-change potential}, which affects and determines the interpretation of the following expressions.
\end{quote}
\noindent{}The content of an expression $E$ used in a context $C$ depends on $C$, but -- once the content has been determined -- it will contribute to update $C$ to a new context $C'$, which will help fixing the content of the next expression.
Similarly to DRT, SDM integrates word embeddings in a dynamic process to construct the semantic representations of sentences. Contextual knowledge is represented in distributional terms and affects the interpretation of following expressions, which in turn cue new information that updates the current context.\footnote{An early work on a distributional model of lexical expectations in context is \citet{washtell2010expectation}, but its focus was more on word sense disambiguation than on representing sentence meaning.}
Context is a highly multifaceted notion that includes several types of factors guiding and influencing language comprehension: information about the communicative settings, preceding discourse, general presuppositions and knowledge about the world, etc. In DRT, \cite{Kamp:2016} has introduced the notion of \emph{articulated context} to model different sources of contextual information that intervene in the dynamic construction of semantic representations. In this paper, we focus on the contribution of a specific type of contextual information, which we refer to as \emph{Generalized Event Knowledge} (\textsc{gek}). This is knowledge about events and situations that we have experienced under different modalities, including the linguistic input \citep{mcrae2009people}, and is generalized because it contains information about prototypical event structures.
In linguistics, the Generative Lexicon theory \citep{Pustejovsky:1995} argues that the lexical entries of nouns also contain information about events that are crucial to define their meaning (e.g., \emph{read} for \emph{book}).
Psycholinguistic studies in the last two decades have brought extensive evidence that the array of event knowledge activated during sentence processing is extremely rich: verbs (e.g. \textit{arrest}) activate expectations about typical arguments (e.g. \textit{cop, thief}) and vice versa \citep{mcrae1998modeling,Ferretti2001516,mcrae2005basis}, and similarly nouns activate other nouns typically co-occurring as participants in the same events (\textit{key, door}) \citep{hare2009activating}.
The influence of argument structure relations on how words are neurally processed is also an important field of study in cognitive neuroscience \citep{thompson2014neurocognitive,meltzer2015brain,williams2017early}.
Stored event knowledge has relevant processing consequences. Neurocognitive research showed that the brain is constantly engaged in making predictions to anticipate future events \citep{Bar:2009,Clark:2013}. Language comprehension, in turn, has been characterized as a largely predictive process \citep{Kuperberg:Jaeger:2015}. Predictions are memory-based, and experiences about events and their participants are used to generate expectations about the upcoming linguistic input, thereby minimizing the processing effort \citep{elman20145,mcrae2009people}. For instance, argument combinations that are more `coherent' with the event scenarios activated by the previous words are read faster in self-paced reading tasks and elicit smaller N400 amplitudes in ERP experiments \citep{bicknell2010effects,matsuki2011event,Paczynski:Kuperberg:2012,metusalem2012generalized}.\footnote{Event-related potentials are the electrophysiological response of the brain to a stimulus. In the sentence processing literature, the ERPs are recorded for each stimulus word and the N400, one of the most studied ones, is a negative-going deflection appearing 400ms after the presentation of the word.
A common interpretation of the N400 assumes that the wave amplitude is proportional to the difficulty of semantic unification \citep{baggio2011balance}.}
\cite{elman2009meaning,elman20145} has proposed a general interpretation of these experimental results in the light of the Words-as-Cues framework. According to this theory, words are arranged in the mental lexicon as a sort of network of mutual expectations, and listeners rely on pre-stored representations of events and common situations to try to identify the one that a speaker is more likely to communicate. As new input words are processed, they are quickly integrated in a data structure containing a dynamic representation of the sentence content, until some events are recognized as the `best candidates' for explaining the cues (i.e., the words) observed in the linguistic input. It is important to stress that, in such a view, the meaning of complex units such as phrases and sentences is not always built by composing lexical meanings, as the representation of typical events might be already stored and retrieved as a whole in semantic memory. Participants often occurring together become active when the representation of one of them is activated (see also Bar et al., 2007 on the relation between associative processing and predictions).
SDM aims at integrating the core aspects of dynamic formal semantics and the evidence on the role of event knowledge for language processing into a general model for compositional semantic representations that relies on two major assumptions:
\begin{itemize}
\item lexical items are represented as embeddings within a network of relations encoding knowledge about events and typical participants, which corresponds to what we have termed above \textsc{gek};
\item the \emph{semantic representation} (\textsc{sr}) of a sentence (or even larger stretches of linguistic input, such as discourse) is a formal structure that dynamically combines the information cued by lexical items.
\end{itemize}
\noindent{}Like in \citet{chersoni2017logical}, the model is inspired by Memory, Unification and Control (MUC), proposed by Hagoort \citep{Hagoort:2013,Hagoort:2016} as a general model for the neurobiology of language. MUC incorporates three main functional components: i.) \emph{Memory} corresponds to knowledge stored in long-term memory; ii.) \emph{Unification} refers to the process of combining the units stored in \emph{Memory} to create larger structures, with contributions from the context; and iii.) \emph{Control} is responsible for relating language to joint action and social interaction. Similarly, our model
distinguishes between a component storing event knowledge, in the form of a \textbf{Distributional Event Graph} (\textsc{deg}, Section \ref{sec:DEG}), and a \textbf{meaning composition function} that integrates information activated from lexical items and incrementally builds the \textsc{sr} (Section \ref{sec:MCF}).
\subsection{The Distributional Event Graph}
\label{sec:DEG}
The Distributional Event Graph represents the event knowledge stored in long-term memory with information extracted from parsed corpora. We assume a very broad notion of \emph{event}, as an $n$-ary relation between entities. Accordingly, an event can be a complex situation involving multiple participants, such as \emph{The student reads a book in the library}, but also the association between an entity and a property expressed by the noun phrase \emph{heavy book}. This notion of event corresponds to what psychologist call \emph{situation knowledge} or \emph{thematic associations} \citep{Binder:2016}. As \cite{mcrae2009people} argue, \textsc{gek} is acquired from both sensorimotor experience (e.g., watching or playing football matches) and linguistic experience (e.g., reading about football matches). \textsc{deg} can thus be regarded as a model of the \textsc{gek} derived from the linguistic input.
\begin{figure}
\includegraphics[width=0.7\textwidth]{samplesentence-Page-1.png}
\caption{Reduced version of the dependency parse of the sentence \textit{The student is reading the book about Shakespeare in the university library}. Three events are identified, each represented with a dotted box.}
\label{deps}
\end{figure}
Events are extracted from parsed sentences, using syntactic relations as an approximation of deeper semantic roles (e.g., the subject relation for the agent, the direct object relation for the patient, etc.). In the present paper, we use dependency parses, as it is customary in distributional semantics, but nothing in SDM hinges on the choice of the syntactic representation. Given a verb or a noun head, all its syntactic dependents are grouped together.\footnote{The extracted graphs are similar to the syntactic joint contexts for verb representation that were proposed by \citet{chersoni2016representing}.} More schematic events are also generated by abstracting from one or more event participants for every recorded instance. Since we expect each participant to be able to trigger the event and consequently any of the other participants, a relation can be created and added to the graph from every subset of each group extracted from a sentence (cf. Figure \ref{deps}).
The resulting \textsc{deg} structure is a \textit{weighted hypergraph}, as it contains weighted relations holding between nodes pairs, and a \textit{labeled multigraph}, since the edges are labeled in order to represent specific syntactic relations. The weights $\sigma$ are derived from co-occurrence statistics and measure the association strengths between event nodes. They are intended as salience scores that identify the most prototypical events associated with an entity (e.g., the typical actions performed by a student).
Crucially, the graph nodes are represented as word embeddings. Thus, given a lexical cue $w$, the information in \textsc{deg} can be activated along two dimensions during processing (cf. Table \ref{tab:DEG}):
\begin{enumerate}
\item by retrieving the most similar nodes to $w$ (the paradigmatic neighbors), on the basis of the cosine similarity between their vectors and the vector of $w$;
\item by retrieving the closest associates of $w$ (the syntagmatic neighbors), using the edge weights.
\end{enumerate}
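A minimal sketch of these two retrieval modes, with toy embeddings and toy weighted edges invented for illustration (the actual \textsc{deg} is extracted from parsed corpora, and its hypergraph structure is flattened here to simple triples for brevity):

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy node embeddings (invented values).
emb = {
    "book":    [0.9, 0.2, 0.1],
    "novel":   [0.8, 0.3, 0.2],
    "library": [0.1, 0.9, 0.3],
    "read":    [0.4, 0.3, 0.9],
}

# Toy weighted, labelled edges: (head, relation, dependent) -> salience sigma.
edges = {
    ("read", "dobj", "book"):       0.9,
    ("read", "dobj", "novel"):      0.6,
    ("read", "obl:loc", "library"): 0.5,
}

def paradigmatic_neighbors(w, k=2):
    """Nearest nodes by cosine similarity between embeddings."""
    others = [(cosine(emb[w], emb[x]), x) for x in emb if x != w]
    return [x for _, x in sorted(others, reverse=True)[:k]]

def syntagmatic_neighbors(w, k=2):
    """Closest associates by edge weight (any role, either direction)."""
    assoc = [(s, h if d == w else d) for (h, r, d), s in edges.items() if w in (h, d)]
    return [x for _, x in sorted(assoc, reverse=True)[:k]]

print(paradigmatic_neighbors("book"))   # -> ['novel', 'read']
print(syntagmatic_neighbors("read"))    # -> ['book', 'novel']
```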
\begin{figure}
\includegraphics[scale=0.63]{DEG.jpg}
\caption{Toy sample of \textsc{deg} showing several instances of events, each represented by a set of co-indexed edges $e_i$. The $\sigma$ are the event salience weights.}
\label{fig:DEG}
\end{figure}
\noindent{}Figure \ref{fig:DEG} shows a toy example of \textsc{deg}. The little boxes with circles in them represent the embedding associated with each node. Edges are labeled with syntactic relations (as a surface approximation of event roles) and weighted with salience scores $\sigma$. Each event is a set of co-indexed edges. For example, $e_2$ corresponds to the event of students reading books in libraries, while $e_1$ represents a schematic event of students performing some generic action on books (e.g., reading, consulting, studying, etc.).
\begin{table}
\begin{tabular}{cc}
\textbf{Paradigmatic Neighbors} & \textbf{Syntagmatic Neighbors} \\ \hline
essay, story, novel, author,
biography & publish, write, read, child, series \\
\hline
\end{tabular}
\caption{The five nearest paradigmatic and syntagmatic neighbors for the lexical item \textnormal{book}, extracted from \textsc{deg}.}
\label{tab:DEG}
\end{table}
\subsection{The Meaning Composition Function}
\label{sec:MCF}
We assume that during sentence comprehension lexical items activate fragments of event knowledge stored in \textsc{deg} (like in Elman's Words-as-Cues model), which are then dynamically integrated in a semantic representation \textsc{sr}. This is a formal structure directly inspired by DRT and consisting of three different yet interacting information tiers:
\begin{enumerate}[i]
\item \textit{universe} (\textsc{U}) - this tier, which we do not discuss further in the present paper, includes the entities mentioned in the sentence (corresponding to the \emph{discourse referents} in DRT). They are typically introduced by noun phrases and provide the targets of anaphoric links;
\item \textit{linguistic conditions} (\textsc{lc}) - a context-independent tier of meaning that accumulates the embeddings associated with the lexical items. This corresponds to the conditions that in DRT content words add to the discourse referents. The crucial difference is that now such conditions are embeddings;
\item \textit{active context} (\textsc{ac}) - similarly to the notion of \emph{articulated context} in \cite{Kamp:2016}, this component consists of several types of contextual information available during sentence processing or activated by lexical items (e.g., information from the current communication setting, general world knowledge, etc.). More specifically, we assume that \textsc{ac} contains the embeddings activated from \textsc{deg} by the single lexemes (or by other contextual elements) and integrated into a semantically coherent structure contributing to the sentence interpretation.
\end{enumerate}
\begin{figure}
\includegraphics[scale=0.45]{SR1.png}
\caption{Sample \textsc{sr} for the sentence \emph{The student drinks the coffee}. The sentence activates typical locations and times in which the event could take place.}
\label{fig:SR1}
\end{figure}
\noindent{}Figure \ref{fig:SR1} shows an example of \textsc{sr} built from the sentence \emph{The student drinks the coffee} (ignoring the specific contribution of determiners and tense). The universe \textsc{U} contains the discourse referents introduced by the noun phrases, while \textsc{lc} includes the embeddings of the lexical items in the sentence, each linked to the relevant referent (e.g., $\overrightarrow{student}:u$ means that the embedding introduced by \emph{student} is linked to the discourse referent $u$). \textsc{ac} consists of the embeddings activated from \textsc{deg} and ranked by their salience with respect to the current content in the \textsc{sr}. The elements in \textsc{ac} are grouped by their syntactic relation in \textsc{deg}, which again we regard here just as a surface approximation of their semantic role (e.g., the items listed under ``obl:\emph{loc}'' are a set of possible locations of the event expressed by the sentence). \textsc{ac} makes it possible to enrich the semantic content of the sentence with contextual information, predict other elements of the event, and generate expectations about incoming input. For instance, given the \textsc{ac} in Figure \ref{fig:SR1}, we can predict that the student is most likely to be drinking a coffee at the cafeteria and that he/she is drinking it for breakfast or in the morning. The ranking of each element in \textsc{ac} depends on two factors: i.) its degree of activation by the lexical items, ii.) its overall coherence with respect to the information already available in the \textsc{ac}.
\begin{figure}[t]
\includegraphics[scale=0.40]{SR2.png}
\caption{On the left, the \textsc{sr} for \emph{The student}. On the right, the embedding and \textsc{deg} portion activated by the verb \emph{drink}.}
\label{fig:SR2}
\end{figure}
\begin{figure}[t]
\includegraphics[scale=0.40]{SR3.png}
\caption{The original semantic representation \textsc{sr} for \emph{The student $\dots$} is updated with the information activated by the verb, producing the \textsc{sr} for \emph{The student drinks $\dots$} The new event knowledge is re-ranked with respect to the previous content of \textsc{ac}.}
\label{fig:SR3}
\end{figure}
A crucial feature of each \textsc{sr} is that \textsc{lc} and \textsc{ac} are also represented with vectors that are incrementally updated with the information activated by lexical items. Let \textsc{sr}$_{i-1}$ be the semantic representation built for the linguistic input $w_1,\dots, w_{i-1}$. When we process a new pair $\langle w_i,r_i \rangle$ with a lexeme $w_i$ and syntactic role $r_i$:
\begin{enumerate}[i.)]
\item \textsc{lc} in \textsc{sr}$_{i-1}$ is updated with the embedding $\overrightarrow{w_i}$;
\item \textsc{ac} in \textsc{sr}$_{i-1}$ is updated with the embeddings of the syntagmatic neighbors of $w_i$ extracted from \textsc{deg}.
\end{enumerate}
Figures \ref{fig:SR2} and \ref{fig:SR3} exemplify the update of the \textsc{sr} for the subject \emph{The student} with the information activated by the verb \emph{drink}. The update process is defined as follows:
\begin{enumerate}
\item \textsc{lc} is represented with the vector $\overrightarrow{LC}$ obtained from the linear combination of the embeddings of the words contained in the sentence. Therefore, when $\langle w_i,r_i\rangle$ is processed, the embedding $\overrightarrow{w_i}$ is simply added to $\overrightarrow{LC}$;\footnote{At the same time, the embedding is linked either to a new discourse referent added to \textsc{U}, or to an already available one.}
\item for each syntactic role $r_i$, \textsc{ac} contains a set of ranked lists (one for each processed pair) of embeddings corresponding to the most likely words expected to fill that role. For instance, the \textsc{ac} for the fragment \emph{The student} in Figure \ref{fig:SR2} contains a list of the embeddings of the most expected direct objects associated with \emph{student}, a list of the embeddings of the most expected locations, etc. Each list of expected role fillers is itself represented with the weighted centroid vector (e.g., $\overrightarrow{dobj}$) of its $k$ most prominent items (with $k$ a model hyperparameter). For instance, setting $k=2$, the $\overrightarrow{dobj}$ centroid in the \textsc{ac} in Figure \ref{fig:SR2} is built just from $\overrightarrow{book}$ and $\overrightarrow{research}$; less salient elements (the gray areas in Figures \ref{fig:SR1}, \ref{fig:SR2} and \ref{fig:SR3}) are kept in the list of likely direct objects, but at this stage do not contribute to the centroid representing the expected fillers for that role. \textsc{ac} is then updated with the \textsc{deg} fragment activated by the new lexeme $w_i$ (e.g., the verb \textit{drink}):
\begin{itemize}
\item the event knowledge activated by $w_i$ for a given role $r_i$ is ranked according to cosine similarity with the vector $\overrightarrow{r_i}$ available in \textsc{ac}: in our example, the direct objects activated by the verb \textit{drink} (e.g., $\overrightarrow{beer}$, $\overrightarrow{coffee}$, etc.) are ranked according to their cosine similarity to the $\overrightarrow{dobj}$ vector of the \textsc{ac};
\item the ranking process works also in the opposite direction: the newly retrieved information is used to update the centroids in \textsc{ac}. For example, the direct objects activated by the verb \textit{drink} are aggregated into centroids and the corresponding weighted lists in \textsc{ac} are re-ranked according to the cosine similarity with the new centroids, in order to maximize the semantic coherence of the representation. At this point, $\overrightarrow{book}$ and $\overrightarrow{research}$, which are not as salient as $\overrightarrow{coffee}$ and $\overrightarrow{beer}$ in the \textit{drinking} context, are downgraded in the ranked list and are therefore less likely to become part of the $\overrightarrow{dobj}$ centroid at the next step.
\end{itemize}
The newly retrieved information is now added to the \textsc{ac}: as shown in Figure \ref{fig:SR3}, once the pair $\langle drink, root \rangle$ has been fully processed, the \textsc{ac} contains two ranked lists for the \textit{dobj} role and two ranked lists for the \textit{obl:loc} role; the top $k$ elements of each list will be part of the centroid for their relation in the next step. Finally, the whole \textsc{ac} is represented with the centroid vector $\overrightarrow{AC}$ built out of the role vectors $\overrightarrow{r_1},\dots,\overrightarrow{r_n}$ available in \textsc{ac}. The vector $\overrightarrow{AC}$ encodes the integrated event knowledge activated by the linguistic input.
\end{enumerate}
As an example of \textsc{gek} re-ranking, assume that after processing the subject noun phrase \emph{The student}, the \textsc{ac} of the corresponding \textsc{sr} predicts that the most expected verbs are \emph{read, study, drink}, etc., the most expected associated direct objects are \emph{book, research, beer}, etc., and the most expected locations are \emph{library, cafeteria, university}, etc. (Figure \ref{fig:SR2}). When the main verb \emph{drink} is processed, the corresponding role list is removed from the \textsc{ac}, because that syntactic slot is now overtly filled by this lexeme, whose embedding is then added to the \textsc{lc}. The verb \emph{drink} cues its own event knowledge, for instance that the most typical objects of drinking are \emph{tea, coffee, beer}, etc., and the most typical locations are \emph{cafeteria, pub, bar}, etc. The information cued by \emph{drink} is re-ranked to promote those items that are most compatible and coherent with the current content of \textsc{ac} (i.e., direct objects and locations that are likely to interact with students). Analogously, the information in the \textsc{ac} is re-ranked to make it more compatible with the \textsc{gek} cued by \emph{drink} (e.g., the salience of \emph{book} and \emph{research} gets decreased, because they are not similar to the typical direct objects and locations of \emph{drink}). The output of the \textsc{sr} update is shown in Figure \ref{fig:SR3}, whose \textsc{ac} now contains the \textsc{gek} associated with an event of drinking by a student.
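The forward re-ranking step illustrated by the \emph{student}/\emph{drink} example can be sketched in a few lines of code. The toy 2-dimensional embeddings, the \texttt{rerank\_role} helper and the setting $k=2$ below are all illustrative assumptions for exposition, not the actual SDM implementation:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def centroid(vectors):
    """Centroid of a list of vectors."""
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def rerank_role(ac_role_list, new_fillers, k=2):
    """Forward re-ranking: order the fillers newly activated from DEG by
    their cosine similarity to the centroid of the top-k items already
    stored in the AC list for this role (hypothetical simplification)."""
    role_centroid = centroid([vec for _, vec in ac_role_list[:k]])
    return sorted(new_fillers,
                  key=lambda item: cosine(item[1], role_centroid),
                  reverse=True)

# Toy 2-d embeddings: the AC for "The student" expects dobj = book, research;
# the verb "drink" then activates beer and coffee as candidate objects.
ac_dobj = [("book", [1.0, 0.2]), ("research", [0.9, 0.1])]
drink_dobj = [("beer", [0.1, 1.0]), ("coffee", [0.8, 0.4])]
print(rerank_role(ac_dobj, drink_dobj))  # coffee outranks beer here
```

In this toy setting, \emph{coffee} is promoted over \emph{beer} because its vector is closer to the centroid of the direct objects already expected for \emph{student}, mirroring the coherence-driven ranking described above.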
A crucial feature of \textsc{sr} is that it is a much richer representation than the bare linguistic input: the overtly realized arguments in fact activate a broader array of roles than the ones actually appearing in the sentence. As an example of how these unexpressed arguments contribute to the semantic representation of the event, consider a situation in which three different sentences are represented by means of \textsc{ac}, namely \textit{The student writes the thesis}, \textit{The headmaster writes the review} and \textit{The teacher writes the assignment}. Although \textit{teacher} could be judged as closer to \textit{headmaster} than to \textit{student}, and \textit{thesis} as closer to \textit{assignment} than to \textit{review}, taking into account also the typical locations (e.g., a \textit{library} for the first two sentences, a \textit{classroom} for the last one) and writing supports (e.g., a \textit{laptop} in the first two cases, a \textit{blackboard} in the last one) would lead to the first two events being judged as the most similar ones.
In the case of unexpected continuations, the \textsc{ac} will be updated with the new information, though in this case the re-ranking process would probably not change the \textsc{gek} prominence. Consider the case of an input fragment like \textit{The student plows...}: \textit{student} activates event knowledge as it is shown in Figure \ref{fig:SR1}, but the verb does not belong to the set of expected events given \emph{student}. The verb triggers different direct objects from those already in the \textsc{ac} (e.g., typical objects of \textit{plow} such as \textit{furrow}, \emph{field}, etc.). Since the similarity of their centroid with the elements of the direct object list in the \textsc{ac} will be very low, the relative ordering of the ranked list will roughly stay the same, and direct objects pertaining to the plowing situation will coexist with direct objects triggered by \textit{student}.
Depending on the continuation of the sentence, then, the elements triggered by \textit{plow} might gain centrality in the representation or remain peripheral.
It is worth noting that the incremental process of the \textsc{sr} update is consistent with the main principles of formal dynamic semantic frameworks like DRT. As we said above, dynamic semantics assumes the meaning of an expression to be a context-change potential that affects the interpretation of the following expressions. Similarly, in our distributional model of sentence representation the \textsc{ac} in \textsc{sr}$_{i-1}$ affects the interpretation of the incoming input $w_i$, via the \textsc{gek} re-ranking process.\footnote{For a more comprehensive analysis of the relationship between distributional semantics and dynamic semantics, see \cite{Lenci:2018}.}
\section{Experiments}
\label{sec:Exp}
\subsection{Datasets and Tasks}
Our goal is to test SDM in compositionality-related tasks, with a particular focus on the contribution of event knowledge. For the present study, we selected two different datasets: the development set of the RELPRON dataset \citep{rimell2016relpron}\footnote{We used the development set of RELPRON in order to compare our results with those published by \citet{rimell2016relpron}.} and the DTFit dataset \citep{vassallo2018event}.
\begin{figure}[t]
\includegraphics[width=0.5\textwidth]{relpron-exe.png}
\caption{Image from \cite{rimell2016relpron}, showing the terminology for terms and properties in RELPRON: subject relative clause top, object relative clause bottom.}
\label{fig:relpron-exe}
\end{figure}
\textbf{RELPRON} consists of 518 target-property pairs, where the target is a noun labeled with a syntactic function (either subject or direct object) and the property is a subject or object relative clause providing the definition of the target (Figure \ref{fig:relpron-exe}). Given a model, we produce a compositional representation for each of the properties. In each definition, the \textit{verb}, the \textit{head noun} and the \textit{argument} are composed to obtain a representation of the property. Following the original evaluation in \cite{rimell2016relpron}, we tested six different combinations for each composition model: the verb only, the argument only, the head noun and the verb, the head noun and the argument, the verb and the argument and all three of them.
For each target, the 518 composed vectors are ranked according to their cosine similarity to the target. Like \citet{rimell2016relpron}, we use Mean Average Precision (henceforth MAP) to evaluate our models on RELPRON. Formally, MAP is defined as
\begin{equation}
MAP = \frac{1}{N}\sum_{i=1}^{N}AP(t_i)
\end{equation}
where $N$ is the number of terms in RELPRON, and $AP(t)$ is the Average Precision for
term $t$, defined as:
\begin{equation}
AP(t) = \frac{1}{P_t}\sum_{k=1}^{M}Prec(k) \times rel(k)
\end{equation}
where $P_t$ is the number of correct properties for term $t$ in the dataset, $M$ is the total
number of properties in the dataset, $Prec(k)$ is the precision at rank $k$, and $rel(k)$ is a function equal to one if the property at rank $k$ is a correct property for $t$, and zero otherwise. Intuitively, $AP(t)$ will be $1$ if, for the term $t$, all the correct properties associated to the term are ranked in the top positions, and the value becomes lower when the correct items are ranked farther from the head of the list.
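The MAP evaluation can be transcribed directly from the formulas above; the property strings below are invented toy data, used only to show the two boundary behaviors of $AP(t)$:

```python
def average_precision(ranked_props, gold_props):
    """AP for one term: ranked_props is the full ranked list of properties,
    gold_props the set of correct properties for that term."""
    hits, ap = 0, 0.0
    for k, prop in enumerate(ranked_props, start=1):
        if prop in gold_props:      # rel(k) = 1
            hits += 1
            ap += hits / k          # Prec(k) at a relevant rank
    return ap / len(gold_props)     # divide by P_t

def mean_average_precision(rankings, gold):
    """MAP over all terms; rankings and gold are dicts keyed by term."""
    return sum(average_precision(rankings[t], gold[t]) for t in gold) / len(gold)

# Toy example: both correct properties ranked first -> AP = 1.0
ranked = ["device that detects planets", "device that observatory has", "other"]
gold = {"device that detects planets", "device that observatory has"}
print(average_precision(ranked, gold))  # 1.0
```

When a correct property slips down the ranking, the score degrades smoothly: a single correct property ranked second instead of first yields $AP = 0.5$.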
Our second evaluation dataset, \textbf{DTFit}, has been introduced with the goal of building a new gold standard for the \emph{thematic fit} estimation task \citep{vassallo2018event}. Thematic fit is a psycholinguistic notion similar to selectional preferences, the main difference being that the latter involve the satisfaction of constraints on discrete semantic features of the arguments, while thematic fit is a continuous value expressing the degree of compatibility between an argument and a semantic role \citep{mcrae1998modeling}. Distributional models for thematic fit estimation have been proposed by several authors \citep{Erk2007ASS,Baroni:2010:DMG:1945043.1945049,Erk2010AFC,lenci2011composing,Sayeed2015AnEO,Greenberg2015ImprovingUV,santus2017measuring,Tilk2016EventPM,hong2018learning}.
While thematic fit datasets typically include human-elicited typicality scores for argument-filler pairs taken in isolation, DTFit includes tuples of arguments of different length, so that the typicality value of an argument depends on its interaction with the other arguments in the tuple. This makes it possible to model the dynamic aspect of argument typicality, since the expectations on an argument are dynamically updated as the other roles in the sentence are filled. The argument combinations in DTFit describe events associated with crowdsourced scores ranging from 1 (very atypical) to 7 (very typical). The dataset items are grouped into typical and atypical pairs that differ only for one argument, and divided into three subsets:
\begin{itemize}
\item 795 triplets, each differing only for the \textbf{Patient} role:
\begin{itemize}
\item \emph{sergeant}\_N \emph{assign}\_V \emph{mission}\_N (typical)
\item \emph{sergeant}\_N \emph{assign}\_V \emph{homework}\_N (atypical)
\end{itemize}
\item 300 quadruples, each differing only for the \textbf{Location} role:
\begin{itemize}
\item \emph{policeman}\_N \emph{check}\_V \emph{bag}\_N \emph{airport}\_N (typical)
\item \emph{policeman}\_N \emph{check}\_V \emph{bag}\_N \emph{kitchen}\_N (atypical)
\end{itemize}
\item 200 quadruples, each differing only for the \textbf{Instrument} role:
\begin{itemize}
\item \emph{painter}\_N \emph{decorate}\_V \emph{wall}\_N \emph{brush}\_N (typical)
\item \emph{painter}\_N \emph{decorate}\_V \emph{wall}\_N \emph{scalpel}\_N (atypical)
\end{itemize}
\end{itemize}
\noindent{}However, the Instrument subset of DTFit was excluded from our current evaluation. After applying the threshold of $5$ for storing events in the \textsc{deg} (cf. Section \ref{sec:expGEK}), we found that the SDM coverage on this subset was too low.
For each tuple in the DTFit dataset, the task for our models is to predict the upcoming argument on the basis of the previous ones. Given a model, we build a compositional vector representation for each dataset item by excluding the last argument in the tuple, and then we measured the cosine similarity between the resulting vector and the argument vector. Models are evaluated in terms of the Spearman correlation between the similarity scores and the human ratings.
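The DTFit evaluation protocol can be sketched as follows. The model scores and human ratings below are invented toy values, and the pure-Python Spearman implementation is only a stand-in for a standard statistics library:

```python
import math

def rank(xs):
    """Average-rank transform (handles ties), 1-based, for Spearman's rho."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                      # extend over the tie group
        avg = (i + j) / 2 + 1           # average of the tied positions
        for idx in order[i:j + 1]:
            ranks[idx] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman correlation as the Pearson correlation of the ranks."""
    rx, ry = rank(xs), rank(ys)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

# Hypothetical cosines between the composed vector (last argument excluded)
# and the embedding of the last argument, vs. DTFit typicality ratings:
model_scores = [0.81, 0.35, 0.77, 0.20]
human_ratings = [6.5, 2.1, 5.9, 1.8]
print(round(spearman(model_scores, human_ratings), 2))
```

A model whose similarity scores reproduce the human ordering of typical vs. atypical tuples obtains a correlation close to 1.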
As suggested by the experimental results of \cite{bicknell2010effects} and \cite{matsuki2011event}, the typicality of the described events has important processing consequences: atypical events lead to longer reading times and stronger N400 components, while typical ones are easier to process thanks to the contribution of \textsc{gek}. Thus, the task of modeling typicality judgements can be seen as closely related to modeling semantic processing complexity.
\subsection{Models Settings}
In this study, we compare the performance of SDM with three baselines: the simple additive model formulated in \cite{mitchell2010composition}, a smoothed additive model, and a multi-layer Long Short-Term Memory (LSTM) neural language model trained against one-hot targets \citep{zaremba2014recurrent}.
The additive models \citep{mitchell2010composition} have been evaluated on different types of word embeddings. We compared their performances with SDM.\footnote{We also tested pointwise multiplicative models, but in our tasks the performances were extremely low, so they were omitted.} Despite their simplicity, previous evaluation studies on several benchmarks showed that such models can be difficult to beat, even for sophisticated compositionality frameworks \citep{rimell2016relpron,Arora:etal:2017,Tian:etal:2017}.
The embeddings we used in our tests are the \textsc{word2vec} models by \citet{mikolov2013distributed}, that is the Skip-Gram with Negative Sampling (\textbf{SG}) and the Continuous-Bag-of-Words (\textbf{CBOW}), and the \textbf{C-Phrase} model by \citet{kruszewski2015jointly}. The latter model incorporates information about syntactic constituents, as the principles of the model training are i.) to group the words together according to the syntactic structure of the sentences and ii.) to optimize simultaneously the context predictions at different levels of the syntactic hierarchy (e.g., given the training sentence \textit{A sad dog is howling in the park}, the context prediction will be optimized for \textit{dog, a dog, a sad dog} etc., that is for all the words that form a syntactic constituent). The performance of C-Phrase is particularly useful to assess the benefits of using vectors that encode directly structural/syntactic information.
We used the same corpora both for training the embeddings and for extracting the syntactic relations for \textsc{deg}. The training data come from the concatenation of three dependency-parsed corpora: the BNC \citep{leech1992100}, the Ukwac \citep{Baroni2009} and a 2018 dump of the English Wikipedia, for a combined size of approximately 4 billion tokens. The corpora were parsed with Stanford CoreNLP \citep{manning2014stanford}.
The hyperparameters of the embeddings were the following for all models: 400 dimensions, a context window of size 10, 10 negative samples, 100 as the minimum word frequency.\footnote{We tested different values for the dimension hyperparameter, and we noticed that vectors with higher dimensionality lead to constant improvements on the thematic fit datasets. The best results were obtained with 400 dimensions.}
\subsubsection{Simple Additive Models}
Our additive models, corresponding to a \textsc{sr} consisting of the $\overrightarrow{LC}$ component only, represent the meaning of a sentence $sent$ by summing the embeddings of its words:
\begin{equation}
\overrightarrow{sent} = \sum_{w \in sent}{\vec{w}}
\end{equation}
\noindent{}The similarity with the targets is measured with the cosine between the target vector and the sentence vector.
\subsubsection{Smoothed Additive Models}
\label{new_base}
These models are a smoothed version of the additive baseline, in which the final representation is simply the sum of the vectors of the words in the sentence, plus the vectors of the top $k=5$ nearest neighbors of each word in the sentence.\footnote{We experimented with $k=2,5,10$ and, although the scores do not significantly differ, this baseline model reports slightly better scores for $k=5$.}
Therefore, the meaning of a sentence $sent$ is obtained by:
\begin{equation}
\overrightarrow{sent} = \sum_{w \in sent}{\left(\vec{w} + \sum_{x \in N_5(w)} \vec{x}\right)}
\end{equation}
\noindent{}where $N_k(w)$ is the set of the $k$ nearest neighbors of $w$.
Compared to the \textsc{gek} models, the smoothed additive baseline modifies the sentence vector by adding the vectors of related words. Thus, it represents a useful comparison term for understanding the actual added value of the structural aspects of SDM.\footnote{We would like to thank one of the anonymous reviewers for the suggestion.}
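The smoothed additive baseline can be sketched as below, assuming a toy 2-dimensional embedding space; \texttt{nearest\_neighbors} is a hypothetical brute-force cosine lookup, not necessarily the retrieval procedure used in the experiments:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def nearest_neighbors(word, space, k):
    """The k nearest neighbors of `word` in the embedding space (by cosine)."""
    others = [(w, cosine(space[word], v)) for w, v in space.items() if w != word]
    return [w for w, _ in sorted(others, key=lambda p: p[1], reverse=True)[:k]]

def smoothed_additive(sentence, space, k=5):
    """Sum of each word's vector plus the vectors of its k nearest neighbors."""
    dim = len(next(iter(space.values())))
    out = [0.0] * dim
    for w in sentence:
        vectors = [space[w]] + [space[n] for n in nearest_neighbors(w, space, k)]
        for vec in vectors:
            out = [a + b for a, b in zip(out, vec)]
    return out

# Toy 2-d space: "pupil" neighbors "student", "sip" neighbors "drink".
space = {"student": [1.0, 0.1], "pupil": [0.9, 0.2],
         "drink": [0.2, 1.0], "sip": [0.3, 0.9]}
sent_vec = smoothed_additive(["student", "drink"], space, k=1)
print(sent_vec)  # student + pupil + drink + sip
```

Unlike SDM, the smoothing here is purely paradigmatic: the added neighbors carry no information about syntactic roles or event structure.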
\subsubsection{The Structured Distributional Models}
\label{sec:expGEK}
The SDM introduced in Section \ref{sec:Model} consists of a full \textsc{sr} including the linguistic conditions vector $\overrightarrow{LC}$ and the event knowledge vector $\overrightarrow{AC}$. In this section, we detail the hyperparameter setting for the actual implementation of the model.
\paragraph{\textbf{Distributional Event Graph}}
We included in the graph only events with a minimum frequency of 5 in the training corpora. The edges of the graph were weighted with \emph{Smoothed LMI}.
Given a triple composed of the words $w_1$ and $w_2$, and a syntactic relation $s$ linking them, we computed its weight using a smoothed version of the Local Mutual Information \citep{Evert2004TheSO}:
\begin{equation}
LMI_\alpha(w_1, w_2, s) = f(w_1, w_2, s) \cdot \log\left(\frac{P(w_1, w_2, s)}{P(w_1) \cdot P_\alpha(w_2) \cdot P(s)}\right)
\end{equation}\\
\noindent{}where the smoothed probabilities are defined as follows:
\begin{equation}
P_\alpha(x) = \frac{f(x)^\alpha}{\sum_{x'}{f(x')^\alpha}}
\end{equation}\\
\noindent{}This type of smoothing, with $\alpha=0.75$, was chosen to mitigate the bias of MI statistical association measures towards rare events \citep{levy2015improving}. While this formula only involves pairs (as only pairs were employed in the experiments), it is easily extensible to more complex tuples of elements.
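The smoothed LMI weighting can be illustrated with toy corpus counts. The frequency dictionaries below are invented, and the marginal probabilities are estimated from them by simple relative frequency; this is only a sketch of the weighting scheme, not the extraction pipeline itself:

```python
import math

def smoothed_lmi(triples, unigrams, relations, w1, w2, s, alpha=0.75):
    """Smoothed Local Mutual Information LMI_alpha(w1, w2, s).
    `triples`, `unigrams` and `relations` are toy frequency dictionaries
    standing in for corpus counts."""
    total = sum(triples.values())
    f = triples[(w1, w2, s)]
    p_joint = f / total
    p_w1 = unigrams[w1] / sum(unigrams.values())
    denom = sum(c ** alpha for c in unigrams.values())
    p_w2 = unigrams[w2] ** alpha / denom        # alpha-smoothed marginal
    p_s = relations[s] / sum(relations.values())
    return f * math.log(p_joint / (p_w1 * p_w2 * p_s))

# Invented counts: "drink coffee" is frequent, "drink book" is rare.
triples = {("drink", "coffee", "dobj"): 50, ("drink", "book", "dobj"): 1,
           ("read", "book", "dobj"): 60, ("read", "coffee", "dobj"): 1}
unigrams = {"drink": 60, "read": 70, "coffee": 55, "book": 65}
relations = {"dobj": 112}
print(smoothed_lmi(triples, unigrams, relations, "drink", "coffee", "dobj"))
```

As expected, the strongly associated triple receives a large positive weight, while the rare, unassociated one is pushed toward (or below) zero.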
\paragraph{\textbf{Re-ranking settings}}
For each word in the dataset items, the top 50 associated words were retrieved from \textsc{deg}. Both for the re-ranking phase and for the construction of the final representation, the event knowledge vectors (i.e., the role vectors $\overrightarrow{r}$ and the \textsc{ac} vector $\overrightarrow{AC}$) are built from the top 20 elements of each weighted list. As detailed in Section \ref{sec:MCF}, the ranking process in SDM can be performed in the forward direction and in the backward direction at the same time (i.e., the \textsc{ac} can be used to re-rank newly retrieved information and vice versa, respectively), but for simplicity we only implemented the forward ranking.
\paragraph{\textbf{Scoring}}
Since the similarity computations with the target words in SDM involve two separate vectors, we combined the similarity scores by addition. Thus, given a $target$ word in a sentence $sent$, the score for SDM is computed as:
\begin{equation}
score(target, sent) = cos(\overrightarrow{target}, \overrightarrow{LC}(sent)) + cos(\overrightarrow{target}, \overrightarrow{AC}(sent))
\end{equation}\\
In all settings, we assume the model to be aware of the syntactic parse of the test items. In DTFit, word order fully determines the syntactic constituents, as the sentences are always in the \textit{subject verb object [location-obl|instrument-obl]} order. In RELPRON, on the other hand, the item contains information about the relation that is being tested: in the \textit{subject} relative clauses, the properties always show the \textit{verb} followed by the \textit{argument} (e.g., \textit{telescope: device that detects planets}), while in the \textit{object} relative clauses the properties always present the opposite situation (e.g., \textit{telescope: device that observatory has}). In the present experiments, we did not use the predictions on non-expressed arguments to compute $\overrightarrow{AC}$, and we restricted the evaluation to the representation of the target argument. For example, in the DTFit Patients set, $\overrightarrow{AC}(sent)$ only contains the $\overrightarrow{dobj}$ centroid.
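The scoring function above can be sketched directly; the 2-dimensional vectors are invented toy values chosen so that a target congruent with both the linguistic condition and the activated event knowledge scores higher:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def sdm_score(target, lc_vec, ac_vec):
    """SDM score: the target is compared separately against the linguistic
    condition vector LC and the active context vector AC, and the two
    cosine similarities are summed."""
    return cosine(target, lc_vec) + cosine(target, ac_vec)

# Toy 2-d vectors (illustrative only):
lc = [1.0, 0.2]   # sum of the sentence's word embeddings
ac = [0.8, 0.4]   # centroid of the activated event knowledge
congruent, incongruent = [0.9, 0.3], [0.1, 1.0]
print(sdm_score(congruent, lc, ac) > sdm_score(incongruent, lc, ac))  # True
```

Summing the two cosines keeps the contributions of overt lexical content ($\overrightarrow{LC}$) and activated event knowledge ($\overrightarrow{AC}$) separate but equally weighted.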
\subsubsection{LSTM Neural Language Model.}
We also compared the additive vector baselines and SDM with an LSTM neural network, taking as input \textsc{word2Vec} embeddings. For every task, we trained the LSTM on syntactically-labeled tuples (extracted from the same training corpora used for the other models), with the objective of predicting the relevant target. In DTFit, for example, for the Location task, in the tuple \textit{student learn history library}, the network is trained to predict the argument \textit{library} given the tuple \textit{student learn history}. Similarly, in RELPRON, for the tuple \textit{engineer patent design}, the LSTM is trained to predict \textit{engineer} in the subject task and \textit{design} in the object task, given \textit{patent design} and \textit{engineer patent} respectively.
In both DTFit and RELPRON, for each input tuple, we took the top $N$ network predictions (we tested $N \in \{3, 5, 10\}$, and we always obtained the best results with $N = 10$), averaged their respective word embeddings, and computed the vector cosine between the resulting vector and the embedding of the target reported in the gold standard.
The LSTM is composed of: i.) an input layer of the same size as the \textsc{word2Vec} embeddings (400 dimensions, with dropout=0.1); ii.) a unidirectional LSTM with $l$ hidden layers (where $l=2$ when predicting Patients and $l=3$ when predicting Locations) of the same size as the embeddings; iii.) a linear layer (again with dropout=0.1) of the same size as the embeddings, which takes as input the average of the hidden layers of the LSTM; iv.) and finally a softmax layer that projects the filler probability distribution over the vocabulary.
\section{Results and Discussion}
\label{sec:Res}
\subsection{RELPRON}
\label{sec:REL}
Given the targets and the composed vectors of all the definitions in RELPRON, we assessed the cosine similarity of each pair and computed the Mean Average Precision scores shown in Table \ref{addgek}. First of all, the Skip-Gram based models always turn out to be the best performing ones, with rare exceptions, closely followed by the C-Phrase ones. The scores of the additive models are slightly inferior, but very close to those reported by \citet{rimell2016relpron}, while the LSTM model lags behind vector addition, improving only when the parameter $N$ is increased. Results seem to confirm the original findings: even with very complex models (in that case, the Lexical Function Model by Paperno et al. 2014), it is difficult to outperform simple vector addition in compositionality tasks.
\begin{table}[t]
\begin{tabular}{l|llcccc}
\multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{l}{\textbf{Word combination}} & \textbf{R\&al.} & \textbf{SG} & \textbf{CBOW} & \textbf{C-Phrase} \\ \hline
\multirow{6}{*}{{\centering \textbf{Additive}}}
& & verb & 0.18 & 0.16 & 0.16 & 0.13 \\
& & arg & 0.35 & 0.33 & 0.32 & 0.37 \\
& & head noun+verb & 0.26 & 0.26 & 0.25 & 0.21 \\
& & head noun+arg & 0.45 & 0.44 & 0.46 & 0.45 \\
& & verb+arg & 0.40 & 0.43 & 0.36 & 0.41 \\
& & head noun+verb+arg & \textbf{0.50} & \textbf{0.50} & 0.47 & 0.47 \\ \hline
\multirow{6}{*}{{\centering \textbf{Smoothed}}}
& & verb & - & 0.15 & 0.16 & 0.14 \\
& & arg & - & 0.35 & 0.33 & 0.40 \\
& & head noun+verb & - & 0.24 & 0.23 & 0.22 \\
& & head noun+arg & - & 0.45 & 0.46 & 0.49 \\
& & verb+arg & - & 0.41 & 0.36 & 0.41 \\
& & head noun+verb+arg & - & \textbf{0.49} & 0.46 & 0.47 \\ \hline
\multirow{1}{*}{{\centering \textbf{LSTM}}}
& & LSTM\_10 & - & 0.10 & 0.32 & - \\
\hline
\multirow{6}{*}{\textbf{SDM}}
& & verb & - & 0.21 & 0.20 & 0.19 \\
& & arg & - & 0.38 & 0.36 & 0.41 \\
& & head noun+verb & - & 0.27 & 0.28 & 0.26 \\
& & head noun+arg & - & 0.50 & 0.50 & 0.50 \\
& & verb + arg & - & 0.41 & 0.36 & 0.41 \\
& & head noun + verb + arg & - & \textbf{0.54} & 0.52 & \textbf{0.54} \\ \hline
\end{tabular}
\caption{Results for the Vector Addition Baseline, Smoothed Vector Addition Baseline, LSTM and the Structured Distributional Model (SDM) on the RELPRON development set (Mean Average Precision scores). Rows refer to the different word combinations tested in Rimell et al. 2016 (R\&al.).}
\label{addgek}
\end{table}
Interestingly, SDM shows a constant improvement over the simple vector addition equivalents (Table \ref{addgek}), with the only exception of the composition of the verb and the argument. All the results for the \emph{head noun + verb + arg} composition are, to the best of our knowledge, the best scores reported so far on the dataset.
Unfortunately, given the relatively small size of RELPRON, the improvement of the \textsc{gek} models fails to reach significance ($p > 0.1$ for all comparisons between a basic additive model and its respective augmentation with \textsc{deg}; $p$-values computed with the Wilcoxon rank sum test). Compared to SDM, the Smoothed Vector Addition baseline appears far less consistent (Table \ref{addgek}): for some combinations and for some vector types, adding the nearest neighbors is detrimental. We take these results as supporting the added value of the structured event knowledge and the \textsc{sr} update process in SDM, over the simple enrichment of vector addition with nearest neighbors.
Finally, we can notice that the Skip-Gram vectors have again an edge over the competitors, even over the syntactically-informed C-Phrase vectors.
\begin{table}[t]
\begin{tabular}{l|lcccc}
\multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \textbf{Dataset} & \textbf{SG} & \textbf{CBOW} & \textbf{C-Phrase} \\ \hline
\multirow{2}{*}{{\centering \textbf{Additive}}}
& & Patients & \textbf{0.63} & 0.52 & 0.60 \\
& & Locations & \textbf{0.74} & 0.70 & 0.74 \\ \hline
\multirow{2}{*}{{\centering \textbf{Smoothed}}}
& & Patients & \textbf{0.58} & 0.51 & \textbf{0.58} \\
& & Locations & 0.74 & 0.71 & \textbf{0.76} \\ \hline
\multirow{2}{*}{{\centering \textbf{LSTM}}}
& & Patients & ns & 0.42 & - \\
& & Locations & 0.58 & \textbf{0.60} & - \\ \hline
\multirow{2}{*}{{\centering \textbf{SDM}}}
& & Patients & 0.65 & 0.62** & \textbf{0.66}* \\
& & Locations & 0.75 & 0.74 & \textbf{0.76} \\ \hline
\end{tabular}
\caption{Results for the Vector Addition Baseline, Smoothed Vector Addition Baseline, LSTM and the Structured Distributional Model (SDM) on the Patients and Locations subsets of DTFit. The scores are expressed in terms of Spearman correlation with the gold standard ratings. The LSTM scores refer to the best configuration, with $N = 10$ and vectors of size 400. The statistical significance of the improvements over the additive baseline is reported as follows: * $p < 0.05$, ** $p < 0.01$ (p-values computed with Fisher's r-to-z transformation, one-tailed test). ns = non significant correlation.}
\label{tab:dtfit}
\end{table}
\subsection{DTFit}
\label{sec:dtfit}
At first glance, the results on DTFit follow a similar pattern (Table \ref{tab:dtfit}): the three embedding types perform similarly, although in this case the CBOW vectors perform much worse than the others on the Patients dataset. The LSTM also largely lags behind all the additive models, showing that thematic fit modeling is not a trivial task for language models, and that more complex neural architectures are required in order to obtain state-of-the-art results \citep{Tilk2016EventPM}.\footnote{It should also be noticed that our LSTM baseline has been trained on simple syntactic dependencies, while state-of-the-art neural models rely simultaneously on dependencies and semantic role labels \citep{Tilk2016EventPM,hong2018learning}.}
The results for SDM again show that including the \textsc{deg} information leads to improvements in the performances (Table \ref{tab:dtfit}). While on the Locations the difference is only marginal, also due to the smaller number of test items, two models out of three showed significantly higher correlations than their respective additive baselines. The increase is particularly noticeable for the CBOW vectors that, in their augmented version, manage to fill the gap with the other models and to achieve a competitive performance. However, it should also be noticed that there is a striking difference between the two subsets of DTFit: while on Patients the advantage of the \textsc{gek} models over both baselines is clear, on Locations the results are almost indistinguishable from those of the smoothed additive baseline, which simply adds the nearest neighbors to the vectors of the words in the sentence. This complies with previous studies on thematic fit modeling with dependency-based distributional models \citep{Sayeed2015AnEO,santus2017measuring}. Because of the ambiguous nature of the prepositions used to identify potential locations, the role vectors used by SDM can be very noisy. Moreover, since most locative complements are optional adjuncts, it is likely that the event knowledge extracted from corpora contains a much smaller number of locations.
Therefore, the structural information about locations in \textsc{deg} is probably less reliable and does not provide any clear advantage compared to additive models.
Concerning the comparison between the different types of embeddings, Skip-Gram still retains an advantage over C-Phrase in its basic version, while it is outperformed when the latter vectors are used in SDM. However, the differences are clearly minimal,
suggesting that the structured knowledge encoded in the C-Phrase embeddings is not an asset for the thematic fit task. On this point, it must be mentioned that most current models for thematic fit estimation rely on vectors based either on syntactic information \citep{Baroni:2010:DMG:1945043.1945049,Greenberg2015ImprovingUV,santus2017measuring,chersoni2017structure} or on semantic roles \citep{Sayeed2015AnEO,Tilk2016EventPM}. On the other hand, our results are in line with studies like \citet{lapesa2017large}, who reported comparable performance for bag-of-words and dependency-based models on several semantic modeling tasks, thus questioning whether the injection of linguistic structure into the word vectors is actually worth its processing cost.
However, this is the first time that such a comparison is carried out on the basis of the DTFit dataset, while previous studies proposed slightly different versions of the task and evaluated their systems on different benchmarks.\footnote{In datasets such as \citet{mcrae1998modeling} and \citet{pado2007integration}, the verb-filler compatibility is modeled without taking into account the influence of the other fillers. On the other hand, studies on the composition and update of argument expectations generally propose evaluations in terms of classification tasks \citep{lenci2011composing,chersoni2017structure} instead of assessing directly the correlation with human judgements.} A more extensive and in-depth study is required in order to formulate more conclusive arguments on this issue.
Another constant finding of previous studies on thematic fit modeling was that high-dimensional, count-based vector representations generally perform better than dense word embeddings, to the point that \citet{Sayeed2016ThematicFE} stressed the sensitivity of this task to linguistic detail and to the interpretability of the vector space. We therefore tested whether vector dimensionality had an impact on task performance (Table \ref{models}). Although the observed differences are generally small, we noticed that higher-dimensional vectors are generally better in the DTFit evaluation and, in one case, the difference reaches marginal significance (i.e., the difference between the 100-dimensional and the 400-dimensional basic Skip-Gram model is marginally significant at $p<0.1$). This point will also deserve future investigation, but it seems plausible that for this task embeddings benefit from higher dimensionality for encoding more information, as suggested by Sayeed and colleagues. However, these advantages do not seem to be related to the injection of linguistic structure directly into the embeddings (i.e., to the direct use of syntactic contexts for training the vectors), as bag-of-words models perform similarly to, if not better than, a syntax-based model like C-Phrase. We leave to future research a systematic comparison with sparse count-based models to assess whether interpretable dimensions are advantageous for modeling context-sensitive thematic fit.
\subsection{Error Analysis}
One of our basic assumptions about \textsc{gek} is that semantic memory stores representations of typical events and their participants. Therefore, we expect that integrating \textsc{gek} into our models might lead to an improvement especially on the typical items of the DTFit dataset. A quick test with the correlations revealed that this is actually the case (Table \ref{typ}): all models showed increased Spearman correlations on the tuples in the typical condition (and in the larger Patients subset of DTFit, the increase is significant at $p < 0.05$ for the CBOW model), while they remain unchanged or even decrease for the tuples in the atypical conditions. Notice that this is true only for SDM, which is enriched with \textsc{gek}. On the other hand, the simple addition of the nearest neighbors never leads to improvements, as proved by the low correlation scores of the smoothed additive baseline. As new and larger datasets for compositionality tasks are currently under construction \citep{vassallo2018event}, it will be interesting to assess the consistency of these results.
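All the scores discussed in this section are Spearman correlations between model scores and human ratings. As a minimal, self-contained illustration of the metric, the coefficient can be computed as below; the ratings and model scores in the snippet are made up for illustration, not actual DTFit items.

```python
# Spearman rank correlation, computed from scratch for transparency.
# The ratings and model scores are illustrative, not taken from DTFit.

def ranks(xs):
    """1-based average ranks, with ties receiving the mean of their ranks."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Pearson correlation of the rank-transformed sequences."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

human = [6.8, 2.1, 5.9, 1.5, 6.2]       # hypothetical typicality judgements
model = [0.81, 0.22, 0.64, 0.35, 0.70]  # hypothetical model scores
rho = spearman(human, model)
print(f"Spearman rho = {rho:.2f}")
```

For sequences without ties, this is equivalent to the familiar closed form $\rho = 1 - 6\sum d_i^2 / (n(n^2-1))$ over the rank differences $d_i$.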
\begin{table}[t]
\begin{tabular}{ccc}
\textbf{Dimensions} & \textbf{Additive} & \textbf{SDM} \\ \hline
100 & 0.58 & 0.63 \\
200 & 0.58 & 0.63 \\
300 & 0.60 & 0.64 \\
400 & 0.64 & \textbf{0.65}\\ \hline
\end{tabular}
\caption{Spearman correlations of the Vector Addition Baseline and the Structured Distributional Model (SDM) based on Skip-Gram on the DTFit patients subset.}
\label{models}
\end{table}
\begin{table}[t]
\begin{tabular}{lcccl}
\textbf{Model} & \textbf{Additive} & \textbf{Smoothed} & \textbf{SDM} & \textbf{$\Delta$} \\
\hline
CBOW & 0.18 & 0.18 & 0.30 & + 0.12 * \\
SG & 0.29 & 0.24 & 0.33 & + 0.04 \\
C-Phrase & 0.30 & 0.29 & \textbf{0.37} & + 0.07 \\\hline
\end{tabular}
\caption{Comparison of the performance of the Vector Addition Baseline, Smoothed Vector Addition Baseline, and the Structured Distributional Model (SDM) on the typical items of DTFit Patients (Spearman correlations). $\Delta$ reports the SDM improvements over the basic additive models. Significance is noted with the following notation: * $p < 0.05$.}
\label{typ}
\end{table}
Turning to the RELPRON dataset, we noticed that the difference between subject and object relative clauses is particularly relevant for SDM, which generally performs better on the latter. Table \ref{tab:verbarg} summarizes the scores of each model component on the two subsets. While relying on syntactic dependencies, SDM also processes properties in linear order: the \textit{verb}$+$\textit{arg} model, therefore, works differently when applied to \textit{subject} clauses than to \textit{object} clauses. In the \textit{subject} case, the verb is found first, and then its expectations are used to re-rank those of the object. In the \textit{object} case, things proceed the other way around: the subject is found first, and then its expectations are used to re-rank those of the verb. Therefore, the event knowledge triggered by the verb seems not only less informative than the one triggered by the argument, but also often detrimental to the composition process.
\begin{table}[t]
\begin{tabular}{lcc|cc|cc}
\textbf{SDM} & \multicolumn{2}{c}{\textbf{SG}} & \multicolumn{2}{c}{\textbf{CBOW}} & \multicolumn{2}{c}{\textbf{C-Phrase}} \\ \hline
\textbf{Subset} & \textit{sbj} & \textit{obj} & \textit{sbj} & \textit{obj}& \textit{sbj}& \textit{obj} \\ \hline
head noun+verb & 0.29 & 0.31 & 0.32 & 0.32 & 0.29 & 0.28 \\
head noun+arg & 0.54 & 0.57 & 0.54 & 0.56 & 0.56 & 0.57 \\
verb+arg & 0.45 & 0.47 & 0.40 & 0.43 & 0.47 & 0.47 \\
head noun+verb+arg & 0.56 & 0.61 & 0.58 & 0.57 & 0.60 & 0.58 \\ \hline
\end{tabular}
\caption{Comparison of the Structured Distributional Model (SDM) performance (MAP) on the subject and object relative clauses in RELPRON.}
\label{tab:verbarg}
\end{table}
\section{Conclusion}
\label{sec:conc}
In this contribution, we introduced a Structured Distributional Model (SDM) that represents sentence meaning with formal structures derived from DRT and including embeddings enriched with event knowledge. This is modeled with a Distributional Event Graph that represents events and their prototypical participants with distributional vectors linked in a network of syntagmatic relations extracted from parsed corpora. The compositional construction of sentence meaning in SDM is directly inspired by the principles of dynamic semantics. Word embeddings are integrated in a dynamic process to construct the semantic representations of sentences: contextual event knowledge affects the interpretation of following expressions, which cue new information that updates the current context.
Current methods for representing sentence meaning generally lack information about typical events and situations, while SDM rests on the assumption that such information can lead to better compositional representations and to an increased capacity for modeling typicality, which is a striking capacity of the human processing system. This corresponds to the hypothesis by \citet{baggio2011balance} that semantic compositionality actually results from a balance between storage and computation: on the one hand, language speakers rely on a wide amount of stored events and scenarios for common, familiar situations; on the other hand, a compositional mechanism is needed to account for our understanding of new and unheard sentences. Processing complexity, as revealed by effects such as the reduced amplitude of the N400 component in ERP experiments, is inversely proportional to the typicality of the described events and situations: the more typical they are, the more coherent they will be with already-stored representations.
We evaluated SDM on two tasks, namely a classical similarity estimation task on the target-definition pairs of the RELPRON dataset \citep{rimell2016relpron} and a thematic fit modeling task on the event tuples of the DTFit dataset \citep{vassallo2018event}. Our results confirmed that additive models are quite efficient for compositionality tasks, and that integrating the event information activated by lexical items improves the performance on both evaluation datasets. Particularly interesting for our evaluation was the performance on the DTFit dataset, since this dataset has been created especially for the purpose of testing computational models on their capacity to account for human typicality judgments about event participants.
The reported scores on the latter dataset showed that SDM not only improves over simple and smoothed additive models, but also that the increase in correlation concerns the dataset items rated as most typical by human subjects, fulfilling our initial predictions.
Differently from other distributional semantic models tested on the thematic fit task, `structure' is now encoded externally in a graph, whose nodes are embeddings, and not directly in the dimensions of the embeddings themselves. The fact that the best performing word embeddings in our framework are the Skip-Gram ones is somewhat surprising, and runs counter to the findings of previous literature, in which bag-of-words models were always described as struggling on this task \citep{baroni2014don,Sayeed2016ThematicFE}. Given our results, we also suggested that the dimensionality of the embeddings could be an important factor, much more so than the choice of training them on syntactic contexts.
\bibliographystyle{chicago-nle}
\section{Introduction}
In recent years, we have witnessed a substantial paradigm shift in
the sciences: relationships emerge from vast amounts of collected data
(sometimes dubbed \textit{big data}), and not just from a few measurements.
Notable examples are customer recommendation systems of Amazon, eBay
and similar retailers, or the data streams of the Large Hadron Collider
or the Square \foreignlanguage{british}{Kilometre} Array. This trend
is expected to continue in the foreseeable future, especially, with
the advent of \foreignlanguage{british}{always-online} mobile devices
capable of continuously collecting and transmitting all kinds of data.
However, it also seems that science education does not keep up with
the pace of progress in data collection and analysis capabilities,
and that many introductory or even advanced level laboratory experiments
are still conducted with hand-held stopwatches, weights, mercury thermometers,
and microscopes with engraved scales. This approach has at least three
inherent problems. The first is that it teaches students how experiments
were conducted two centuries ago, but does not tell them how to do
them now. Second, the amount of data that can be collected in this
way is limited in both quantity and accuracy. This also means that statistical evaluation
of the results is constrained to a handful of data points. Finally,
by the very nature of the required specialized setups, these experiments
are expensive, and students have to operate within the spatial and
temporal confines of the laboratory course. We are convinced that
one cannot overstate the pedagogical benefits of pursuing science
at the kitchen sink: when one can accurately measure something relevant
(such as a fundamental constant) with easily available and cheap
everyday items, and without reference to a dedicated laboratory. It
is a very fortunate coincidence that in our times, everyday items
are digital gadgets capable of measuring all kinds of physical quantities,
e.g., distance, temperature, acceleration, magnetic fields, light
intensity, frequency, time etc., and that the demand for high quality
in user experience makes it possible to deliver unprecedented accuracy.
At the same time, we also recognize the pedagogical value of discussing
how experiments were conducted in the past and that it would be an
irreparable loss not to show what could be achieved with devices that
we would now consider rudimentary. What we would like to demonstrate
in this paper is that it is not necessary to regard the above-mentioned
two subjects, historical perspective, and progress in measurement
capabilities as disjoint. There are ways of showing the beauty and
ingenuity of past experiments, while reaping the many benefits of
modern technologies in terms of measurement time, accuracy, cost,
or the volume of data.
The example that we take is the measurement of the speed of light,
which is one of the fundamental physical constants. Strictly speaking,
our example is pathological in the sense that the international meter
is defined with the help of the speed of light and the international
standard of time, and not the other way around. However, first, until
1983 (and thus in Foucault's lifetime), length and time were the defined
quantities and the speed of light was the derived one; second, this fact does
not reduce the didactic value of the experiment itself. We would like
to emphasize that while the evaluation of the measurements requires
some data processing, this fact definitely does not qualify it as
a \emph{big data} exercise.
The paper is organized as follows. In the next two sections, we outline
the historical and theoretical background and derive the expression
for $c$. In Section IV., we introduce our experimental setup and
the critical components, Section V. contains a detailed discussion
of our results, while Section VI. is devoted to a thorough analysis
of various systematic errors. In the appendix, we present a couple
of MATLAB (The Mathworks, Inc.) snippets that can be used to evaluate
measurement data.
\section{Historical background}
That the speed of light, $c$, is finite was already conjectured by
Galileo in the 17th century, though his experimental apparatus at
the time prevented him from giving even an order-of-magnitude estimate
for the value. Since then, various methods have been developed.
Huygens was the first who, based on the 1676 astronomical measurements
of Rømer on the entry of Jupiter's moons into, and their exit from, eclipse,
could provide a lower bound of about 200 000\,km/s. The same
measurements, repeated with higher accuracy by Delambre in 1809, yielded
304 000\,km/s, astonishingly close to the true value. Another astronomical
method, the aberration of light, was discovered by Bradley in 1729,
with the result of about 296 000\,km/s.
Later, it was realized that in Maxwell's theory, the speed of light
is linked to fundamental electromagnetic constants through the relation
$\epsilon_{0}\mu_{0}=c^{-2}$, and therefore, by measuring the vacuum
permittivity $\epsilon_{0}$, and the vacuum permeability $\mu_{0}$,
it is possible to indirectly infer the value of $c$ \cite{Clark1956,Clark2001}.
It is also to be noted that, if the frequency $f$ of electromagnetic
radiation is known, and the wavelength $\lambda$ can be measured,
then by dint of the relation $c=f\lambda$, $c$ can be indirectly
determined. This is the basis of interferometric methods
\cite{Belich1996,Lahaye2012}, and of cavity resonance methods \cite{DOrazio2010}.
Finally, there are several methods that measure the time of flight
in terrestrial settings. One of them is the Foucault method that we
discuss in more detail in the next section \cite{Dillman1964,Feagin1979,Morrison1980,Brody2010},
while with the advent of high-speed electronics, it is now possible
to directly measure the delay in the arrival of short optical pulses
as the distance between the emitter and receiver is increased \cite{Rogers1969,Deblaquiere1991,Aoki2008,Ronzani2008}.
This latter method is the simplest of all, but it definitely lacks
the elegance of the others.
The interested reader can find a more detailed survey of various measurement
methods and their significance in \cite{Bates1988}.
\section{Theoretical background}
Foucault's is one of the simplest methods of measuring the speed of
light on Earth, and it falls into the category of time of flight measurements.
It is based on the observation that, if a light beam bounces off a
moving mirror twice, the mirror will have moved by a small amount
by the time it is hit by the beam the second time, and this movement
results in a small displacement of the reflected beam. It is this
displacement that is to be measured, and from which the speed of light
is to be inferred. In this particular instance, the mirror is rotating,
and the rotation angle between the two events can simply be related
to the time that was required for the round trip. The speed of light
can be obtained from the measured displacement, and the length of
the round-trip path. Due to its conceptual simplicity, this is perhaps
the most popular method in student laboratories.
To be more specific, let us take the simplified experimental setup
shown in Fig.\,\ref{fig:concept}. A point source emitting light
is located at point $S$, at a distance $d_{1}$ from the lens $L$.
The source's light is reflected by the rotating mirror $RM$ (at a
distance of $d_{2}$ from the lens, and at this point, stationary)
and its image is created at the position of the end mirror $M$, which
is at a distance of $d{}_{3}$ from the rotating mirror, and is normal
to the in-coming light. This also means that the light reflected by
$M$ is focused on $S$ again. In the absence of the rotating mirror,
the image of $S$ would be at $V$.
\begin{figure}[h]
\includegraphics[width=0.98\columnwidth]{concept}
\caption{The concept of the experiment. $RM$ is the rotating mirror, $M$
is the end reflector, and $L$ is a lens. $S,S'$ are the light source,
and its image, respectively, while $V$, and $V'$ are the virtual
images of $S$, and $S'$.}
\label{fig:concept}
\end{figure}
Now, let us note that in the time $\Delta t$ that the light needs to traverse
the distance between $RM$ and $M$ in both directions, the rotating
mirror turns by an amount $\omega\Delta t$, where $\omega$ is the
angular velocity, and $\Delta t=2d_{3}/c$. This rotation displaces
the virtual image $V$ to $V'$, where the distance between these
two points is simply $\Delta s'=2\omega\Delta t\cdot d{}_{3}$. The
factor of $2$ is a result of the reflection on $RM$: upon reflection,
all angles change by a factor of $2$. The image of the virtual point
$V'$ is mapped by the lens to the point $S'$, and using the two
similar triangles formed by $V,V'$, and the lens, and $S,S'$, and
the lens, respectively, we conclude that the distance between $S$
and $S'$ is
\[
\Delta s=\frac{d_{1}}{d_{2}+d_{3}}\Delta s'=2\omega\Delta t\frac{d_{1}d_{3}}{d_{2}+d_{3}}=\frac{4d_{1}d_{3}^{2}}{d_{2}+d_{3}}\frac{\omega}{c}\ ,
\]
i.e., the speed of light is
\begin{equation}
c=\frac{4d_{1}d_{3}^{2}}{d_{2}+d_{3}}\frac{\omega}{\Delta s}\ .\label{eq:c_final}
\end{equation}
Given $d_{1,2,3}$, the speed of light can be obtained by measuring
the displacement $\Delta s$ for a given angular speed. In principle,
to determine $c$, a single measurement point is enough, but as we
will see later, by measuring $\Delta s$ as a function of $\omega$,
and taking the slope of the linear dependence, it is not necessary
to find the reference position at $\omega=0$. Re-arranging Eq.(\ref{eq:c_final})
yields
\begin{equation}
c_{0}=\frac{4d_{1}d_{3}^{2}}{d_{2}+d_{3}}\left(\frac{d\Delta s}{d\omega}\right)^{-1}\ .\label{eq:c_diffs}
\end{equation}
In Section VI., we will show that the errors are negligible, if the
lens is not positioned perfectly, and the image of $S$ is not formed
at $M$. In the formula above, $c_{0}$ indicates that these errors
are not yet taken into account.
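To get a feel for the orders of magnitude involved, Eq.\,(\ref{eq:c_final}) can be evaluated numerically. In the short sketch below, $d_{2}$ and $d_{3}$ roughly match the distances of Setup 1 in the next section, while $d_{1}=0.5$\,m is an assumed value, since the source--lens distance depends on the particular arrangement:

```python
import math

# Forward model of Eq. (1): Delta_s = 4 d1 d3^2 / (d2 + d3) * omega / c.
# d2, d3 roughly follow Setup 1; d1 = 0.5 m is an assumption for illustration.
c = 2.998e8                      # speed of light, m/s
d1, d2, d3 = 0.50, 1.63, 4.83    # distances in metres

def displacement(f_rot):
    """Image displacement (m) for a mirror rotation frequency f_rot (Hz)."""
    omega = 2 * math.pi * f_rot
    return 4 * d1 * d3**2 / (d2 + d3) * omega / c

for f in (20, 50, 100):
    print(f"f_rot = {f:3d} Hz  ->  Delta_s = {displacement(f) * 1e6:6.2f} um")
```

Even at a rotation frequency of 100\,Hz, the displacement is only of the order of ten micrometres, i.e., a handful of pixels on a typical camera sensor; this is precisely why a sub-pixel readout (or, classically, a microscope) is needed.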
\section{Experimental setup}
Fig.\,\ref{fig:experimental_setup} displays our first experimental
setup. Laser light from a standard fibre fault locator (OZ Optics,
FODL-43S-635-1) emitting at a wavelength of 635\,nm is transmitted
through a single-mode fibre, and collimated by a short focal length
fibre collimator (Thorlabs F240FC-B). While it is not absolutely necessary,
by passing the light through the single-mode fibre patch cable (Thorlabs
P1-630A-FC-1), we begin with a perfect Gaussian beam. It is worth
noting that the fault locator can be replaced by an inexpensive laser
pointer.
\begin{figure}[h]
\includegraphics[width=0.98\columnwidth]{2014jul10_set2_setup_simplified}
\caption{Experimental setup, Setup 1. $RM$, $FM$, $SM$ are the rotating,
folding, and back reflector mirrors, respectively, $L_{1}$, $L_{2}$
are lenses of focal length 75 and 400\,mm, respectively, $FP$ is
the focal point of $L_{1}$, $FC$ is the fibre collimator, $BS$
is the beamsplitter, and $TS$ are translation stages. Dimensions
are given in the text.}
\label{fig:experimental_setup}
\end{figure}
The collimated beam is then led through a telescope consisting of
two lenses of focal lengths 75\,mm ($L_{1}$), and 400\,mm ($L_{2}$),
respectively. The telescope is misaligned slightly in the longitudinal
direction (the distance between the two lenses is larger than 475\,mm),
so that the beam leaving it is no longer collimated, but, after being
reflected on the rotating mirror $RM$, is focused on a spherical
mirror $SM$, which acts as the back reflector. The rotating mirror
is located at a distance of 1630\,mm from the 400-mm lens, while
the back reflector with a radius of curvature of 4000\,mm is positioned
at a distance of 4830\,mm from the rotating mirror. Distances were
measured with a tape measure. In order to reduce the overall size
of the setup, the 4830-mm path was folded by the insertion of a flat
mirror, $FM$ (not shown), between $RM$, and $SM$. We should also
note that since the spherical mirror is not involved in the imaging,
it can be replaced by a flat mirror.
For monitoring the rotation, we also placed a standard silicon photodiode
(Thorlabs PD136A) close to the rotating mirror: when rotating, the
mirror diverts the laser light to the diode 8 times per revolution,
thereby, producing a well-defined potential spike that can conveniently
be recorded on an oscilloscope.
The light reflected by the spherical mirror travels along the same
path, except that it is diverted to a \foreignlanguage{british}{webcam}
(Logitech C310) by a pellicle beam splitter ($BS$, Thorlabs BP150)
positioned to the left of $FP$, which is the focal point of $L_{1}$.
The small lens of the webcam has to be removed before use, so that
no extra imaging element is introduced. In the original version of
the experiment, instead of a camera, a microscope is used to measure
the displacement of the beam. However, given the finite size of the
focal spot, this also entails that large distances have to be employed
in order to realize measurable displacements. The application of the
camera not only makes data collection more convenient, but it also
implies that the physical size of the setup can considerably be reduced.
We would like to point out that the use of a webcam in the context
of speed of light measurements was discussed in an interferometric
setting in \cite{Lahaye2012}.
Our second setup, Setup 2, is shown in Fig.\,\ref{fig:experimental_setup2}.
The light of the fault locator is focused by a lens of focal length
$400\,\mathrm{mm}$ onto $FP$, from where it reaches the spherical
mirror $SM$ with a focal length of 2\,m. The spherical mirror is
$4060\,\mathrm{mm}$ away from $FP$, and is tilted slightly, so that
the light is reflected off the rotating mirror $RM$, located $730\,\mathrm{mm}$
away from $SM$, and finally $FM$, located $3260\,\mathrm{mm}$ away
from $RM$. The lengths in the setup are chosen in such a way that
the light is focused on the flat end reflector, $FM$, although, as
will be discussed in Section VI., small longitudinal misalignments
do not influence the results in any significant way. As in the first
setup, folding mirrors were used between $SM$, and $RM$, and between
$RM$, and $FM$.
\begin{figure}[h]
\includegraphics[width=0.98\columnwidth]{2014sep10_setup_set3_simplified_b}
\caption{Experimental setup, Setup 2. $RM$, $FM$, and $SM$ are the rotating,
flat, and spherical mirrors, respectively, $L$ is a lens of focal
length 400\,mm, $FC$ is the fibre collimator, $BS$ is the beamsplitter,
and $TS$ are translation stages. Dimensions as indicated in the text.}
\label{fig:experimental_setup2}
\end{figure}
The two setups are conceptually the same: the only difference between
them is that the imaging element in the first case is a lens, while
in the other case, it is a spherical mirror.
As the rotating reflector, we employed an octagonal printer mirror
scavenged from a faulty printer, shown in Fig.\,\ref{fig:8mirror}.
(This part can also be purchased separately. A possible alternative
is a barcode reader with a revolving mirror.) Laser printers utilize
a focused laser beam to locally discharge a positively pre-charged
cylindrical drum, that is itself rotating around its own axis. The
octagonal (sometimes square or hexagonal) mirror is used for scanning
the laser beam along the axis of the drum, thereby creating an accurate
time-to-two-dimensional mapping on the drum's surface. In order to
achieve high spatial accuracy, both the drum and the rotating mirror
have to revolve at a constant speed. Stabilization of the rotation
frequency is achieved by means of phase-locked loops (PLL), in which
an external clock signal is locked to the signal of a magnetic field
transducer measuring the temporal variations of the field of a constant
magnet moving with the axle of the motor. This also means that, within
limits, the rotation speed can be set by adjusting the clock signal
that is fed into the PLL loop. Fig.\,\ref{fig:8mirror} also indicates
the connections of the mirror assembly: $PWR$ (pin 1) is the power
line, whose potential can be anything between +18, and +36\,V, $GND$
(pin 2) is ground, $ENAB$ (pin 3) is the active-low motor enable
pin (this should be tied to ground), while $CLK$ (pin 5) is the clock
line, which takes TTL pulses with frequencies between around 300,
and 6000\,Hz. Pin 4 is an output connected to the magnetic field
transducer, and can be used for monitoring the rotation.
The advantages of the mirror assembly are that first, the mirror is
monolithic, therefore, it is safe to operate: no pieces can break
off at high speeds. Second, the control electronics makes it possible
to adjust the speed by setting the frequency of the clock signal from
a simple function generator, and that there is a well-defined linear
relationship between the rotation speed and the clock frequency.
\begin{figure}[h]
\includegraphics[width=0.98\columnwidth]{8mirror}
\caption{Rotating printer mirror, side view (top), and top view (bottom). The
30-pin integrated circuit contains the motor driver (BD6792FM from
Rohm Semiconductors) with the built-in PLL. Control pins are labeled
in blue.}
\label{fig:8mirror}
\end{figure}
Initial alignment of the setup is performed when the mirror is stopped
(the enable line is high). First, all mirrors are placed to their
respective positions, and $FC$ is aligned such that the collimated
laser beam can travel to the end mirror, $SM$. Then $L_{1}$ is inserted
in such a way that the diverging laser light still reaches both $RM$,
and $SM$. After this, $L_{2}$ is inserted in the path, and is moved
along the optical axis till the size of the light spot reaches its
minimum on $SM$. When this is achieved, the beamsplitter, $BS$,
has to be placed on the left hand side of the focal point of $L_{1}$,
at a distance of about 5-7\,cm from the focal spot, $FP$. With the
tip-tilt control knobs of the mirror holder, $SM$ has now to be aligned
so that the light is reflected back to the laser. At this point, the
reflected beam should be focused on $FP$. Finally, the camera has
to be placed in the diverted focus of the back-reflected beam. Great
care has to be taken to make sure that the camera's plane is as perpendicular
to the laser beam as possible: failure to do so will result in a
systematic error, which leads to higher speeds of light. For a thorough
discussion on this, see Section VI.
\section{Experimental results}
As can be inferred from Eq.(\ref{eq:c_diffs}), in order to determine
the speed of light, one has to measure $d_{1,2,3}$, the angular frequency
$\omega$, and the displacement $\Delta s$. The measurement can be
done in the same way in both setups, and the steps are as follows.
First, one has to determine the rotation speed as a function of the
clock frequency. Next, the pixel size of the camera has to be measured.
This step amounts to calibrating a ruler. Then the displacement of
the image on the camera has to be measured at various clock frequencies
(this step involves fitting to the camera images), and by using the
pixel size, this displacement has to be converted to physical units.
Finally, the slope of the displacement-frequency relationship has
to be determined, and inserted in Eq.(\ref{eq:c_diffs}).
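The steps above can be sketched end-to-end as follows. The distances and ``measurements'' in this snippet are synthetic ($d_{1}$, in particular, is an assumed value), but the evaluation, fitting a straight line to $\Delta s(\omega)$ and inverting Eq.\,(\ref{eq:c_diffs}), is the one described in the text; note that taking the slope makes the unknown $\omega=0$ reference position irrelevant:

```python
import math

# Sketch of the final evaluation step: fit Delta_s(omega) with a straight line
# and invert Eq. (2): c0 = 4 d1 d3^2 / (d2 + d3) / slope.
# Distances and "measurements" are illustrative (d1 is an assumed value).
d1, d2, d3 = 0.50, 1.63, 4.83            # metres
geom = 4 * d1 * d3**2 / (d2 + d3)        # geometric prefactor of Eq. (2)

# Synthetic displacements (m) at a few angular velocities (rad/s), generated
# from c = 2.998e8 m/s plus an arbitrary zero offset of the image position:
omegas = [2 * math.pi * f for f in (20, 40, 60, 80, 100)]
offset = 1.0e-5
deltas = [geom * w / 2.998e8 + offset for w in omegas]

def linfit(x, y):
    """Least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
            sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

slope, _ = linfit(omegas, deltas)
c0 = geom / slope                        # the offset drops out of the slope
print(f"c0 = {c0:.4g} m/s")
```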
The rotation speed can be deduced from the time traces of the photodiode,
either by simply measuring the time difference between an integer
number of maxima, or recording the potential values, taking the Fourier
transform, and identifying the strongest frequency component. Given
a high enough number of samples, the two methods deliver the same
results. In Fig.\,\ref{fig:rpm_calibration}, we show the measured
rotation speed as a function of the clock frequency, with a typical
time trace of the detector signal on an oscilloscope, and its Fourier
transform. The period can clearly be resolved from either the signal,
or its Fourier spectrum. Note that at high clock rates, the rotation
speed saturates. For this reason, we excluded the last 3 points from
the linear fit, from which we deduced the relationship $f_{\mathrm{rot}}\,(\mathrm{Hz})=(0.167\pm0.00054)\cdot f_{\mathrm{clk}}\,(\mathrm{Hz})-0.649$.
The error of the fit is approximately 0.3\%. Given the precision (in
the ppm range) of frequency standards used in modern pulse generators,
and the stability of phase-locked loops used in laser printers, we
ascribe the error to our way of determining the frequency from the
Fourier transform of the time trace. Also note that, since the rotating
mirror has 8 facets, the actual rotation speed is only 1/8 of what
the detector signal indicates.
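The Fourier-transform variant of the frequency readout can be sketched as follows. The photodiode trace is synthetic (a noisy 800-Hz spike train is assumed), but the procedure, subtracting the mean, taking the FFT, picking the strongest component, and dividing by the 8 facets, is the one described above:

```python
import numpy as np

fs = 50_000                               # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)             # one second of data
f_spike = 800.0                           # assumed spike rate on the detector, Hz

# Narrow periodic spikes plus detector noise, mimicking the oscilloscope trace:
trace = (np.cos(2 * np.pi * f_spike * t) > 0.9).astype(float)
trace += 0.05 * np.random.default_rng(0).normal(size=t.size)

spectrum = np.abs(np.fft.rfft(trace - trace.mean()))   # remove DC before the FFT
freqs = np.fft.rfftfreq(t.size, 1 / fs)
f_peak = freqs[np.argmax(spectrum)]
print(f"spike rate: {f_peak:.1f} Hz  ->  rotation: {f_peak / 8:.1f} Hz")
```

With one second of data the frequency resolution is 1\,Hz, which is ample for a fit like the one in Fig.\,\ref{fig:rpm_calibration}.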
It is worth pointing out that, given the order of magnitude of the
rotation speed, in the absence of an oscilloscope, these frequencies
can easily be measured by means of a smart phone. All one has to do
is to convert the electrical signal of the photodiode to sound by
amplifying it, and connecting it to a speaker, and then record the
sound through the microphone. There are countless applications that
can take and display the Fourier transform of the microphone input.
Likewise, the clock signal can be generated by a suitable waveform
applied to the phone's speaker.
\begin{figure}[h]
\includegraphics[width=1\columnwidth]{rpm_calibration}
\caption{Rotation speed as a function of clock frequency. The inset in the
lower right corner shows a typical time trace on the detector with
its Fourier spectrum in the upper left. The clock frequency for this
trace is 300 Hz. The parameters of a linear fit are displayed in the
figure. The last three data points (blue squares) were excluded. The
error in the slope is 0.00054. }
\label{fig:rpm_calibration}
\end{figure}
In order to convert the pixel positions into physical distance, we
have to calibrate the CCD camera. In other words, we have to measure
the pixel size. For this procedure, we stopped the rotating mirror,
and shifted the camera by an amount indicated by the micrometer screw
on the translation stage. The data points are plotted in Fig.\,\ref{fig:ccd_calibration}
in conjunction with a linear fit, which gives a pixel size of $2.75\,\mu\mathrm{m}$.
This is also the value given by the manufacturer. By the help of this
measurement, one can also ascertain that the translation axis is parallel
to the camera's plane, because if that is not the case, then the width
of the profiles changes as the camera is shifted. As shown below (see
e.g., Fig.\,\ref{fig:profiles}), the centre of the nearly Gaussian
profiles can be obtained with sub-pixel accuracy. If we take half
of the smallest micrometer division ($10\,\mu\mathrm{m}$) as the
error in position, this procedure incurs an overall error of less
than one fifth of a per cent.
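The pixel-size calibration is a simple linear fit of peak pixel against stage shift; a minimal sketch with hypothetical readings (the numbers below are invented, chosen only to reproduce a $2.75\,\mu\mathrm{m}$ pixel size) is:

```python
def linear_fit(xs, ys):
    # Ordinary least-squares slope and intercept.
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

# Hypothetical readings: micrometer-screw shift (um) vs. peak pixel.
shifts_um = [0, 100, 200, 300, 400]
peak_px = [10.0, 46.4, 82.7, 119.1, 155.5]
px_per_um, _ = linear_fit(shifts_um, peak_px)
pixel_size_um = 1.0 / px_per_um   # ~2.75 um per pixel
```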
\begin{figure}[h]
\includegraphics[width=1\columnwidth]{ccd_calibration}
\caption{Measurement of the pixel size. The statistical errors in both the
dependent and independent variables are too small to be visible. The
parameters of the linear fit are indicated in the figure. }
\label{fig:ccd_calibration}
\end{figure}
Having determined the calibration for both the camera, and the rotation
speeds, we now turn to the measurement of the displacements. First
we discuss the results obtained from Setup 1. Typical images of the
reflected beam at three different rotation speeds (183, 417, and 885\,Hz)
are shown in Fig.\,\ref{fig:camera_image} (only a small part of
the otherwise 720-by-1280 chip is displayed). The movement of the
beam is clearly visible. Note that, while we begin with a circularly
symmetric Gaussian beam (this is what leaves the single-mode fiber),
the camera image is elongated along the vertical direction, which
is perpendicular to the direction of the displacement. The reason
for this is that the mirrors are only 2\,mm thick, but 15\,mm wide,
while the beam at the mirror's position is still about 10\,mm in
diameter. This means that diffraction will stretch the beam in the
direction of the smallest dimension of the mirror.
\begin{figure}[h]
\includegraphics[width=1\columnwidth]{camera_images}
\caption{Typical camera images in Setup 1 at rotation frequencies 183, 417,
and 885\,Hz, respectively. The profiles below were obtained by integrating
over the region indicated on the right hand side by the small white
arrows. }
\label{fig:camera_image}
\end{figure}
The images in Fig.\,\ref{fig:camera_image} are turned into nearly
Gaussian profiles by vertically integrating over a range of $\pm25$
pixels around the maximum, as indicated by the small white arrows
in the figure. Such profiles for three different rotation speeds (183,
417, and 885\,Hz) are shown in Fig.\,\ref{fig:profiles}. In order
to accurately determine the centre positions of these profiles, we
fit a Gaussian with an offset to the data points in a range of $\pm15$
pixels around the pixel with the highest intensity, as shown by the
shaded gray domains in the figure. The centre of these fits is then
accepted as the true position of the reflected beam. The error in
the fit is less than $0.15$ pixels for all measurements.
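A full Gaussian-with-offset fit requires a nonlinear least-squares routine; as a simpler stand-in that still illustrates sub-pixel localization, the centre can be estimated from an offset-subtracted centroid (the profile below is synthetic, with arbitrary width and centre):

```python
import math

def subpixel_centre(profile):
    # Offset-subtracted centroid: a cruder stand-in for the
    # Gaussian-with-offset fit, but still sub-pixel accurate for
    # clean, nearly symmetric peaks.
    offset = min(profile)
    weights = [v - offset for v in profile]
    total = sum(weights)
    return sum(i * w for i, w in enumerate(weights)) / total

# Synthetic profile: Gaussian of width 4 px on a constant background.
true_centre = 15.3
profile = [math.exp(-((i - true_centre) ** 2) / (2.0 * 4.0 ** 2)) + 0.05
           for i in range(31)]
centre = subpixel_centre(profile)
```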
\begin{figure}[h]
\includegraphics[width=1\columnwidth]{profile_jul10_set2}
\caption{Typical profiles taken in Setup 1 at a rotation frequency of 183 (solid
red circle), 417 (empty green triangle), and 885\,Hz (solid blue
square), respectively. The domain of Gaussian fits is indicated by
the shaded gray regions, while the solid black lines are the fits.}
\label{fig:profiles}
\end{figure}
Figure\,\ref{fig:shift_vs_frequency} contains measurement data on
the beam displacement as a function of the rotation speed. On the
right vertical axis, the positions are given in terms of the CCD pixels,
as taken from images similar to Fig.\,\ref{fig:camera_image}. The
left axis displays the positions in physical units, after the CCD
pixels were converted using the fit from Fig.\,\ref{fig:ccd_calibration}.
The linear fit to these data yields a slope of $(0.130\pm0.00047)\,\mathrm{\mu m/Hz}$.
Given that, with the nomenclature of Eq.\,(\ref{eq:c_diffs}), $d_{1}=425\pm1\,\mathrm{mm}$,
$d_{2}=1630\pm1\,\mathrm{mm}$, and $d_{3}=4830\pm1\,\mathrm{mm}$,
and taking all above-mentioned error sources into account, we calculate
a speed of light of $c=(2.97\pm0.03)\cdot10^{8}\,\mathrm{m/s}$. This
is within 1\% of the defined value of $2.99792458\cdot10^{8}\,\mathrm{m/s}$,
and overall, the statistical errors are within 1\%.
\begin{figure}[h]
\includegraphics[width=1\columnwidth]{shift_vs_frequency}
\caption{Position of the reflected beam as a function of the rotation speed
in Setup 1. On the right vertical axis, the same data are shown in
units of the CCD pixels. The peak position can be obtained from $P_{\mathrm{peak}}(\mathrm{\mu m})=(0.130\pm0.00047)\cdot f_{\mathrm{rot}}(\mathrm{Hz})+1247.6\,\mathrm{\mu m}$.
Error on the data points is not visible.}
\label{fig:shift_vs_frequency}
\end{figure}
We now discuss measurements in Setup 2. Typical camera images at frequencies
50, 400, and 751\,Hz, respectively are shown in Fig.\,\ref{fig:camera_image_setup2}.
As opposed to the other setup, the laser spot is stretched vertically
over the whole length of the camera (720 pixels). Also note that as
the frequency increases, so does the width of the images. We speculate
that this might be related to turbulence generated by the fast rotating
mirrors: while the average speed of the motor is determined by the
clock frequency, vortices detaching from the vertices of the octagonal
mirror can lead to fluctuations in the instantaneous speed.
\begin{figure}[h]
\includegraphics[width=1\columnwidth]{camera_images_setup2}
\caption{Camera images in Setup 2 at frequencies 50, 400, and 751\,Hz.}
\label{fig:camera_image_setup2}
\end{figure}
This change in the width can also be seen in Fig.\,\ref{fig:sep10_set3_profiles},
where we plot the vertically integrated camera images for 17 rotation
frequencies as indicated. However, despite the broadening of the profiles,
the displacement is clearly visible as the frequency changes.
\begin{figure}[h]
\includegraphics[width=1\columnwidth]{sep10_set3_profiles}
\caption{Vertically integrated camera profiles in Setup 2 as a function of
the frequency.}
\label{fig:sep10_set3_profiles}
\end{figure}
\begin{figure}[h]
\includegraphics[width=0.98\columnwidth]{shift_vs_frequency_sep10_set3}
\caption{Position of the reflected beam as a function of the rotation speed
in Setup 2. On the right vertical axis, the same data are shown in
units of the CCD pixels. The linear fit to the peak position is $P_{\mathrm{peak}}(\mathrm{\mu m})=(0.899\pm0.0059)\cdot f_{\mathrm{rot}}(\mathrm{Hz})+383.3\,\mathrm{\mu m}$.
Error on the data points is not visible.}
\label{fig:shift_vs_freq_sep10}
\end{figure}
In Fig.\,\ref{fig:shift_vs_freq_sep10} we plot the beam displacement
as a function of the rotation speed, similar to Fig.\ref{fig:shift_vs_frequency}.
The linear fit to these data yields a slope of $(0.899\pm0.0059)\,\mathrm{\mu m/Hz}$.
Given that $d_{1}=4060\pm1\,\mathrm{mm}$, $d_{2}=730\pm1\,\mathrm{mm}$,
and $d_{3}=3260\pm1\,\mathrm{mm}$, and considering all error sources,
we calculate a speed of light of $c=(3.02\pm0.03)\cdot10^{8}\,\mathrm{m/s}$.
Our experimental conditions and results are summarized in Table\,\ref{table:experimental_conditions}.
\begin{table}[h]
\begin{tabular}{|c|c|c|c|c|}
\hline
& $d_{1}$ & $d_{2}$ & $d_{3}$ & c (m/s)\tabularnewline
\hline
\hline
\multirow{2}{*}{Setup 1} & $\overline{FP-L_{2}}$ & $\overline{L_{2}-RM}$ & $\overline{RM-SM}$ & \multirow{2}{*}{$2.97\cdot10^{8}$}\tabularnewline
\cline{2-4}
& 425\,mm & 1630\,mm & 4830\,mm & \tabularnewline
\hline
\multirow{2}{*}{Setup 2} & $\overline{FP-SM}$ & $\overline{SM-RM}$ & $\overline{RM-FM}$ & \multirow{2}{*}{$3.02\cdot10^{8}$}\tabularnewline
\cline{2-4}
& 4060\,mm & 730\,mm & 3260\,mm & \tabularnewline
\hline
\end{tabular}
\caption{Summary of experimental conditions, and results. Overlines denote
distances between designated elements.}
\label{table:experimental_conditions}
\end{table}
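Both rows of Table\,\ref{table:experimental_conditions} can be reproduced directly from Eq.(\ref{eq:c_diffs}) with perfect focusing ($x=0$); the sketch below assumes the measured slope is given per hertz of rotation frequency, so it is divided by $2\pi$ to obtain $d\Delta s/d\omega$:

```python
import math

def speed_of_light(d1, d2, d3, slope_um_per_hz):
    # c = 4 d1 d3^2 / (d2 + d3) * (d Delta_s / d omega)^{-1},
    # where d Delta_s / d omega = slope / (2 pi) for a slope per Hz.
    slope_m_per_rad = slope_um_per_hz * 1e-6 / (2.0 * math.pi)
    return 4.0 * d1 * d3 ** 2 / (d2 + d3) / slope_m_per_rad

c1 = speed_of_light(0.425, 1.630, 4.830, 0.130)   # Setup 1 -> ~2.97e8 m/s
c2 = speed_of_light(4.060, 0.730, 3.260, 0.899)   # Setup 2 -> ~3.02e8 m/s
```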
\section{Systematic errors}
We have already indicated the magnitude of statistical errors: the
calibration of the CCD is about 0.2\%, the rotation frequency's is
about 0.3\%, the length measurement's is less than 0.1\%, while the
Gaussian fits to the profiles contain an error of about 0.2\%. However,
in addition to these, there are a number of systematical errors that
one has to consider.
One we have already pointed out, namely, if the camera is not perpendicular
to the laser beam, all displacements will be measured \emph{shorter}
and this will lead to a seemingly \emph{larger }speed of light. One
way of removing this error source is to slightly rotate the camera
without moving it, and repeat the measurements multiple times. The
smallest value of $c$ should correspond to the perpendicular configuration.
However, since this correction is proportional to the cosine of the
angle of deviation from the normal, errors are of second order.
Second, if the camera's plane is not parallel to the axis of the translation
stage, the pixel size will be inferred incorrectly, and this, again,
will lead to a seemingly higher light speed. As mentioned above, a
trivial test for this is the beam profile measured at various positions
of the translation stage: all other conditions being identical, a
simple translation should result in identical profiles. If this is
not the case, then the camera has to be rotated slightly with respect
to the translation stage till all measured profiles are identical.
As with the systematic error discussed above, corrections are quadratic
in the angle.
Third, the measurement of independent quantities, in this case, the
frequency (time) and distance might contain errors that result from
the particular method used to measure them. Given the accuracy of
frequency measurements, it is reasonable to expect that only the value
of distance would be affected, and one can safely neglect systematic
errors in frequency.
Fourth, imperfections in the focusing lead to small errors. In order
to estimate the order of magnitude of these, let us assume that the
image of $S$ is formed not at the end mirror, but at $P$, which
is at a distance of $x$ from $M$, as shown in Fig.\,\ref{fig:concept_imperfect}.
The virtual image of $P$ will also be shifted by the same amount,
and following the derivation in Section III., we arrive at
\begin{equation}
c=\frac{4d_{1}d_{3}(d_{3}-x)}{d_{2}+d_{3}-x}\left(\frac{d\Delta s}{d\omega}\right)^{-1}\approx c_{0}\left[1-\frac{xd_{2}}{d_{3}(d_{2}+d_{3})}\right]\ ,\label{eq:c_diffs_imperfect}
\end{equation}
if $x\ll d_{3}$. Note that $d_{1}$ does not necessarily indicate
the distance at which the image is formed: it simply designates the
position of the measurement (webcam). If $d_{1}$ is chosen such that
the imaging condition is not satisfied, it does not mean that the
derivation is incorrect, it only means that the image will not be
sharp at that point, but Eq.(\ref{eq:c_diffs_imperfect}) is still
valid.
\begin{figure}[h]
\includegraphics[width=0.98\columnwidth]{concept_imperfect}
\caption{The concept of the measurements, with focusing errors. Notation as
in Fig.\,\ref{fig:concept}. $P$ is the image of the source $S$.}
\label{fig:concept_imperfect}
\end{figure}
The magnitude of the correction will depend on two parameters of the
setup, $d_{2},d_{3}$, and the inaccuracy in the focusing, $x$. Note
that for $d_{2}=0$, i.e., when the rotating mirror is next to the
imaging element, the first-order correction is zero. In the first setup
$d_{2}/d_{3}\approx1/3$, while in the second case, $d_{2}/d_{3}\approx1/4$.
Therefore, an upper bound for the correction in Eq.(\ref{eq:c_diffs_imperfect})
is $x/(4d_{3})$. Given that $d_{3}\geq3\,\mathrm{m}$, we incur an
error of 1\%, if $x\approx0.12\,\mathrm{m}$. It is reasonable to
assume that the focus can be determined with $10\,\mathrm{cm}$ accuracy,
even if the imaging elements have such long focal length. Therefore,
we can conclude that the error related to imperfect focusing is less
than 1\%.
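The first-order estimate from Eq.(\ref{eq:c_diffs_imperfect}) is easy to evaluate numerically for both setups; assuming a $10\,\mathrm{cm}$ focusing error:

```python
def focus_correction(d2, d3, x):
    # First-order relative error from the imperfect-focusing formula:
    # c ~ c0 * [1 - x d2 / (d3 (d2 + d3))]
    return x * d2 / (d3 * (d2 + d3))

err1 = focus_correction(1.630, 4.830, 0.10)   # Setup 1, x = 10 cm
err2 = focus_correction(0.730, 3.260, 0.10)   # Setup 2, x = 10 cm
```

Both evaluate to roughly half a per cent, consistent with the sub-1\% bound argued above.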
Finally, the lens, the only glass element in the first setup, has
finite width with a refractive index larger than one, and this adds
to the total length between the focal point and the end mirror. This
extra optical length can be measured and added to the path, provided
the refractive index of the glass is known. Of course, the second
setup does not suffer from this kind of error.
\section{Conclusion}
In conclusion, we presented a simple version of the Foucault method
for the measurement of the speed of light. We demonstrated that with
readily available and inexpensive optics, and a bit of data processing,
acceptable accuracy (results within 1\% of the defined value) can
be achieved. We also discussed a range of systematic errors, and pointed
out several possible improvements. The experiment teaches students
the historically important Foucault method, and modern data evaluation
concepts at the same time.
\bibliographystyle{apsrev}
\section{The Algorithm}
\label{sec:algo}
In this section, we will discuss the details of the routing-by-agreement scheme in the dynamic routing algorithm
based on which, an improved routing algorithm is proposed that targets directly at the discriminative power of class capsules.
\subsection{Dynamic Routing-by-Agreement}
Routing-by-agreement aims to couple the lower level capsules to higher level capsules when they agree with each other.
Here we will discuss the coupling procedure from primary capsules to output capsules in \cite{CAP_NIPS2017_Hinton}.
Each primary capsule $\vec{u}_i$ is first projected to the space of digital capsules in the follow-up layer by
\begin{align}
\label{eq:p2d}
\hat{\vec{u}}_{j|i}=\vec{W}_{ij} \vec{u}_i,
\end{align}
and the digital capsules are then derived from the weighted summation of all $\hat{\vec{u}}_{j|i}$s, as in \Cref{eq:dcap},
\begin{align}
\label{eq:dcap}
\vec{v}_j = \sum_{i} c_{ij}\hat{\vec{u}}_{j|i},\qquad \vec{s}_j = \text{squash}(\vec{v}_j),
\end{align}
where $\text{squash}$ brings nonlinearity to digital capsules and scales capsule length to between 0 and 1,
\begin{align}
\label{eq:squash}
\text{squash}(\vec{v}) = \dfrac{||\vec{v}||^2_2}{1+||\vec{v}||^2_2} \cdot \dfrac{\vec{v}}{||\vec{v}||_2},
\end{align}
and the $c_{ij}$s are softmaxed coupling coefficients computed from the logits $b_{ij}$ (see \Cref{eq:rsftmx}), which determine the probability that primary capsule $\vec{u}_i$ contributes to activating digital capsule $\vec{v}_j$.
\begin{align}
\label{eq:rsftmx}
c_{ij}=\dfrac{\exp(b_{ij})}{\sum_{k} \exp(b_{ik})}.
\end{align}
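A minimal pure-Python sketch of \Cref{eq:squash} and \Cref{eq:rsftmx}, for a single capsule vector and one row of logits, is:

```python
import math

def squash(v):
    # Eq. (squash): scales the capsule length into [0, 1) while
    # preserving the vector's orientation.
    norm_sq = sum(x * x for x in v)
    norm = math.sqrt(norm_sq)
    scale = norm_sq / (1.0 + norm_sq) / norm if norm > 0 else 0.0
    return [scale * x for x in v]

def coupling(b_row):
    # Eq. (rsftmx): softmax of the logits b_ij over digital capsules.
    m = max(b_row)
    exps = [math.exp(b - m) for b in b_row]
    s = sum(exps)
    return [e / s for e in exps]

v = squash([3.0, 4.0])          # input length 5 -> output length 25/26
c = coupling([0.0, 0.0, 0.0])   # uniform coupling: 1/3 each
```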
In each routing iteration, $c_{ij}$ will be amplified if capsule $i$ agrees with capsule $j$ the most.
There are two underlying assumptions in the dynamic routing algorithm.
\begin{myassumption}
\label{asm:1}
Primary capsules do not have negative impact on the activation of digital capsules.
\end{myassumption}
\begin{myassumption}
\label{asm:2}
All digital capsules are activated correctly.
\end{myassumption}
Assumption~\ref{asm:1} comes from the fact that $c_{ij}$s are always positive due to \Cref{eq:rsftmx}, which guarantees each primary capsule will more or less contribute to higher level capsules.
Such a design cannot efficiently represent the case where one or more specific entities/features can never exist in certain objects.
The dynamic routing algorithm is coherent with Assumption~\ref{asm:2}, because in each routing iteration, $c_{ij}$ will always be increased if $\hat{\vec{u}}_{j|i}$ has largest inner product with $\vec{v}_j$.
Here are some potential drawbacks when accepting these assumptions.
Assumption~\ref{asm:1} may limit the solution space, since the $c_{ij}$'s are always nonnegative.
More importantly, Assumption~\ref{asm:2} does not necessarily hold during training, especially at very early epochs.
Unconditionally coupling with a digital capsule based solely on inner-product agreement, even when that capsule is incorrectly activated, will hold back the whole training procedure.
According to the observations above, we will introduce an improved routing algorithm that is expected to achieve better classification results and faster convergence.
\subsection{Routing Towards Discriminative Quality}
The length of a capsule is originally designed as indicators of the existence of corresponding features.
Features with larger capsule vector length are more likely to exist in a given instance.
For simplicity, we will use the architecture in \cite{CAP_NIPS2017_Hinton} in the following discussion.
Digital capsules, as the output layer of capsule networks, make the final decision of prediction tasks.
Therefore, the activation error should be considered when determining routing coefficients.
According to the prediction mechanism of capsule networks, intuitively, the length of capsules that are supposed to be activated should be maximized,
while the length of inactivated capsules should be minimized.
This objective can be written in a unified form as shown in \Cref{eq:obj-1}.
\begin{subequations}
\label{eq:obj-1}
\begin{align}
\max_{\vec{b}_j}~~ &\delta_{ij} ||\vec{v}_j||_2^2,\\
\mathrm{s.t.~} &\delta_{ij}= \begin{cases}
1, & \text{if } i=j, \\
-1,& \text{otherwise},
\end{cases}
\end{align}
\end{subequations}
where $i$ corresponds to the labels of given observations and the indicator function $\delta_{ij}$ keeps \Cref{eq:obj-1} consistent with the discrimination mechanism of digital capsules.
We denote $\vec{b}_j$ as the routing coefficients corresponding to the $j^{th}$ output capsule before going into softmax function.
To additionally enlarge the representation space of digital capsules, we also discard the softmax of routing coefficients such that each digital capsule is calculated directly through
\begin{align}
\vec{v}_j &= \sum_{i} b_{ij}\hat{\vec{u}}_{j|i} =\hat{\vec{U}}^\top\vec{b}_j,
\end{align}
where rows of $\hat{\vec{U}}$ are the primary capsules projected into digital capsule space with $\hat{\vec{u}}_{j|i}$,
and the objective of \Cref{eq:obj-1} becomes
\begin{align}
\label{eq:obj-1-1}
\max_{\vec{b}_j}~~ \delta_{ij,k} \vec{b}_j^\top \hat{\vec{U}}_k\hat{\vec{U}}_k^\top \vec{b}_j.
\end{align}
Note that \Cref{eq:obj-1-1} cannot fit each individual observation in the training dataset, because it will always give the trivial optimal solution
\begin{align}
\vec{b}_j^\ast=
\begin{cases}
\vec{0}, & \text{if } \delta_{ij,k}=-1, \\
\inf, & \text{if } \delta_{ij,k}=1.
\end{cases}
\end{align}
Rewriting \Cref{eq:obj-1-1} in batch mode, we obtain a slightly better formulation:
\begin{align}
\label{eq:obj-1-b}
\max_{\vec{b}_j}~~ \sum_{k} \delta_{ij,k} \vec{b}_j^\top \hat{\vec{U}}_k\hat{\vec{U}}_k^\top \vec{b}_j,
\end{align}
where $j$ corresponds to the $j^{th}$ digital capsule and $k$ is the $k^{th}$ observation in the training batch.
We are able to obtain a local optimal of \Cref{eq:obj-1-b} as long as $\sum_{k}\delta_{ij,k} \hat{\vec{U}}_k\hat{\vec{U}}_k^\top\nsucceq \vec{0}$.
\subsubsection{$l_2$-Regularization}
Observe that each capsule contains only a small number of neurons, so $\hat{\vec{U}}_k \in \mathbb{R}^{m\times n}$ has very few columns and, as a result, $\hat{\vec{U}}_k\hat{\vec{U}}_k^\top$ has very low rank.
This makes it natural to turn \Cref{eq:obj-1-b} into a ridge regression-like \cite{LEARN_TM1970_Hoerl} problem with a small regularization on $\vec{b}_j$, as in \Cref{eq:obj-2},
\begin{align}
\label{eq:obj-2}
\max_{\vec{b}_j}~~ \sum_{k=1}^p \delta_{ij,k} \vec{b}_j^\top \hat{\vec{U}}_k\hat{\vec{U}}_k^\top \vec{b}_j
-\lambda ||\vec{b}_j||_2^2,
\end{align}
where $\lambda$ is the regularization coefficient which is usually set small, e.g.~$0.001$.
Note that \Cref{eq:obj-2} is no longer convex, which makes the maximization reasonable, by the fact that
\begin{equation}
\begin{aligned}
&\sum_{k=1}^p \delta_{ij,k}(\vec{b}_j^\top \hat{\vec{U}}_k\hat{\vec{U}}_k^\top \vec{b}_j) - \lambda ||\vec{b}_j||_2^2 \\
={}&\vec{b}_j^\top (\sum_{k=1}^p \delta_{ij,k}\hat{\vec{U}}_k\hat{\vec{U}}_k^\top -\lambda \vec{I}) \vec{b}_j \\
={}&\vec{b}_j^\top \vec{Q} (\vec{\Lambda} -\lambda \vec{I}) \vec{Q}^\top \vec{b}_j \nsucceq \vec{0},
\end{aligned}
\end{equation}
where $\lambda>0$ and $\vec{\Lambda}$ is a diagonal matrix with at least $m-pn$ zeros in its diagonal, which ensures $\vec{\Lambda} -\lambda \vec{I}$ is indefinite as long as the batch size $p$ is not extremely large.
The regularization term also avoids $\vec{b}_j$ going too large or too small that resembles momentum in classical neural network training algorithms \cite{LEARN_NN1999_Ning}.
Because the capsule coupling coefficients are designed to approximate a large set of observations,
we solve the problem greedily by ascending the gradient of the objective on a batch of observations, as in \Cref{eq:gaobj2}.
Let
\begin{align}
r=\sum_{k=1}^p \delta_{ij,k}(\vec{b}_j^\top \hat{\vec{U}}_k\hat{\vec{U}}_k^\top \vec{b}_j)
-\lambda ||\vec{b}_j||_2^2,
\end{align}
and $\vec{b}_j$ can be updated as follows:
\begin{align}
\label{eq:gaobj2}
\vec{b}_j
={}&\vec{b}_j+\gamma \dfrac{\partial r}{\partial \vec{b}_j} \nonumber \\
={}&\vec{b}_j+\gamma \sum_{k=1}^p \dfrac{\partial \vec{b}_j^\top (\delta_{ij,k} \hat{\vec{U}}_k\hat{\vec{U}}_k^\top -\dfrac{\lambda}{p} \vec{I}) \vec{b}_j}{\partial \vec{b}_j} \nonumber \\
={}&\vec{b}_j+2\gamma \sum_{k=1}^p (\delta_{ij,k} \hat{\vec{U}}_k\hat{\vec{U}}_k^\top -\dfrac{\lambda}{p} \vec{I})\vec{b}_j,
\end{align}
where $p$ denotes the observation batch size.
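One update step of \Cref{eq:gaobj2} can be sketched in pure Python as follows (each $\hat{\vec{U}}_k$ is stored as a list of $m$ rows of length $n$, one projected primary capsule per row; the toy shapes are illustrative only):

```python
def update_b_l2(b, U_hats, deltas, lam, gamma):
    # One ascent step of Eq. (gaobj2):
    #   b <- b + 2*gamma * sum_k (delta_k * U_k U_k^T - (lam/p) I) b
    # Rows of each U_k are primary capsules projected into the digital
    # capsule space, so b has one entry per primary capsule.
    p, m = len(U_hats), len(b)
    grad = [0.0] * m
    for U, delta in zip(U_hats, deltas):
        n = len(U[0])
        Ub = [sum(U[i][c] * b[i] for i in range(m)) for c in range(n)]    # U^T b
        UUb = [sum(U[i][c] * Ub[c] for c in range(n)) for i in range(m)]  # U U^T b
        for i in range(m):
            grad[i] += delta * UUb[i] - (lam / p) * b[i]
    return [b[i] + 2.0 * gamma * grad[i] for i in range(m)]

# Toy example: one observation, two primary capsules, 1-D projections.
b_new = update_b_l2([1.0, 1.0], [[[1.0], [0.0]]], [1.0], lam=0.0, gamma=0.5)
```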
\subsubsection{$l_1$-Regularization}
In the original capsule networks design, primary capsules and digital capsules are densely connected.
It has been shown in previous works that such densely connected structures easily suffer from overfitting \cite{DL_JMLR2014_Nitish,DL_ARXIV2012_Hinton}.
Enforcing weight sharing in CNNs and dropping neurons when training densely connected nets (also known as dropout) are the two major solutions in the deep learning scope.
Weight sharing is similarly applied in the capsule network implementation of \cite{CAP_ICLR2018_Hinton}.
Instead of predetermining the neuron/capsule connectivity or randomly dropping connections, we propose an alternative that can automatically learn how capsules in different layers are linked with each other.
The routing objectives can be found in \Cref{eq:obj-3},
\begin{align}
\label{eq:obj-3}
\max_{\vec{b}_j}~~ \sum_{k=1}^p \delta_{ij,k} \vec{b}_j^\top \hat{\vec{U}}_k\hat{\vec{U}}_k^\top \vec{b}_j
-\lambda ||\vec{b}_j||_1,
\end{align}
where an $l_1$ penalty term is applied on $\vec{b}_j$ that admits a sparse solution \cite{LEARN_JRSS1996_Robert}.
Because solving \Cref{eq:obj-3} requires the gradient of $|x|$ at $x=0$, we define $\left.\frac{\partial |x|}{\partial x}\right \vert_{x=0}=0$.
Routing coefficients can then be similarly updated as follows:
\begin{align}
\label{eq:gaobj3}
\vec{b}_j=\vec{b}_j + 2\gamma \Big(\sum_{k=1}^p \delta_{ij,k} \hat{\vec{U}}_k\hat{\vec{U}}_k^\top \vec{b}_j - \lambda \dfrac{\partial ||\vec{b}_j||_1}{\partial \vec{b}_j}\Big).
\end{align}
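The $l_1$ variant differs from the $l_2$ one only in the penalty term, whose (sub)gradient is the elementwise sign; a sketch of one ascent step (with the $2\gamma$ factor applied to the whole bracket, treating any constant factors as absorbed into the learning rate) is:

```python
def sign0(x):
    # Subgradient of |x|, with d|x|/dx defined as 0 at x = 0.
    return (x > 0) - (x < 0)

def update_b_l1(b, U_hats, deltas, lam, gamma):
    # One ascent step of the l1-regularized routing objective; the
    # quadratic term matches the l2 variant, the penalty gradient
    # is the elementwise sign of b.
    m = len(b)
    grad = [0.0] * m
    for U, delta in zip(U_hats, deltas):
        n = len(U[0])
        Ub = [sum(U[i][c] * b[i] for i in range(m)) for c in range(n)]
        UUb = [sum(U[i][c] * Ub[c] for c in range(n)) for i in range(m)]
        for i in range(m):
            grad[i] += delta * UUb[i]
    return [b[i] + 2.0 * gamma * (grad[i] - lam * sign0(b[i]))
            for i in range(m)]

# Toy example: the zero entry of b receives no penalty gradient.
b_new = update_b_l1([1.0, 0.0], [[[1.0], [1.0]]], [1.0], lam=0.5, gamma=0.5)
```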
\subsection{Training Capsule Networks}
Note that both update strategies, \Cref{eq:gaobj2} and \Cref{eq:gaobj3}, are compatible with networks that contain more than two capsule layers, where the routing coefficients can be updated accordingly through the chain rule.
When training other neuron weights, we adopt the margin loss as in \Cref{eq:ml} \cite{CAP_NIPS2017_Hinton}.
\begin{equation}
\begin{aligned}
\label{eq:ml}
L_k ={}& T_k \max(0, m^{+}-||\vec{v}_k||_2)^2 \\
&+ \lambda^\prime (1-T_k) \max(0, ||\vec{v}_k||_2-m^{-})^2,
\end{aligned}
\end{equation}
where $T_k=1$ if and only if class $k$ is present, and $\lambda^\prime$ down-weights the loss from absent classes.
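The margin loss of \Cref{eq:ml} is straightforward to sketch; the defaults $m^{+}=0.9$, $m^{-}=0.1$, and $\lambda^\prime=0.5$ below are the common choices from \cite{CAP_NIPS2017_Hinton} and are assumptions here:

```python
def margin_loss(lengths, target, m_pos=0.9, m_neg=0.1, lam=0.5):
    # Eq. (ml): per-class margin loss on output-capsule lengths,
    # summed over all classes for one observation.
    total = 0.0
    for k, v in enumerate(lengths):
        t = 1.0 if k == target else 0.0
        total += t * max(0.0, m_pos - v) ** 2 \
               + lam * (1.0 - t) * max(0.0, v - m_neg) ** 2
    return total

# Only class 1 exceeds m_neg among the non-target capsules.
loss = margin_loss([0.95, 0.2, 0.05], target=0)
```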
As shown in \Cref{alg:trcap}, the routing coefficients and other neuron weights are updated alternatively in each training step,
where $n_d$, $n_b$ and $n_r$ are the number of output capsules, the number of iterations to update regular weights and the number of iterations for routing, respectively.
In each training iteration, we first sample a minibatch of observations from the training set (lines 1--2),
we then update the regular neuron weights for $n_b$ steps with routing coefficients fixed (lines 3--6),
and finally the routing coefficients are updated according to the formulation in \Cref{eq:obj-2} or \Cref{eq:obj-3} (lines 7--9).
\begin{algorithm}[h]
\small
\caption{\small Training Capsule Networks.
Routing coefficients $\vec{b}_j, j=1,2,...,n_d$ and regular neuron weights $\vec{W}$ are updated alternatively.
In each iteration, $n_r$ steps routing and $n_b$ steps back-propagation are conducted respectively. We pick $n_r=1$ and $n_b=1$ in all the experiments.
}
\label{alg:trcap}
\begin{algorithmic}[1]
\For{number of training iterations}
\State Sample a minibatch of $p$ observations \{$\vec{x}_i| i=1,2,...,p$\} from the training dataset;
\For{$n_b$ steps}
\State Update $\vec{W}$ by descending its gradient;
\State $\vec{W} \gets \vec{W}-\dfrac{1}{p} \sum_{i=1}^{p} \sum_{j=1}^{n_d} \dfrac{\partial L_k}{\partial \vec{W}}$;
\EndFor
\For{$n_r$ steps}
\State Update $\vec{b}_j$s by ascending its gradient as in \Cref{eq:gaobj2} or \Cref{eq:gaobj3};
\EndFor
\EndFor
\end{algorithmic}
\end{algorithm}
\section{Conclusion}
\label{sec:conclu}
The basis of capsule neural networks and the associated routing algorithms are studied in this paper,
based on which a new objective for determining the routing coefficients between capsules is established.
An algorithm targeting the new routing objective is proposed to achieve faster model convergence and competitive classification results,
compared to the baseline results achieved by the dynamic routing algorithm on the same capsule network architecture.
We also discuss the effectiveness of fully connected reconstruction networks in support of the classification results and visualize counterexamples.
Additional research on developing efficient capsule network architectures and exploring hyper-parameters to compete with state-of-the-art solutions on larger datasets (e.g.~ImageNet \cite{BENCHMARK_IMGNET_jia}) will be conducted in our future work.
\section{Introduction}
\label{sec:intro}
Convolutional neural networks have been deeply studied in recent years.
Their variations are successfully and widely applied in different tasks including classification \cite{DL_NIPS2012_Krizhevsky}, generation \cite{DL_NIPS2014_Ian}, segmentation \cite{DL_CVPR2015_Long}, and so on.
Convolution layers abstract common features hierarchically by scanning the object with shared kernels,
decomposing the original images into small and simple instances that are then used for the classification task.
However, this process to some extent violates the nature of object recognition: human vision builds parse tree-like \cite{CAP_NIPS2000_Hinton} structures on fixation points.
In each layer of a parse tree, neurons are grouped together representing certain objects, which are known as capsules.
Capsule networks have recently been proposed as an alternative to modern convolutional neural network architectures; they change the way neural networks are trained and features are represented,
and, as a result, bring robustness to adversarial attacks and overlapped objects \cite{CAP_NIPS2017_Hinton}.
A capsule is a group of neurons that represent a feature or an entity.
The capsule length reflects how strongly the capsule is activated, or the probability that a corresponding entity exists in a given image.
Capsules in adjacent layers are densely connected via traditional neuron links with their weights learned through routing-by-agreement algorithms, as shown in Figure~\ref{fig:capnet}.
Another characteristic of capsule networks is that lower level features construct higher level entities as layers go deeper,
compared to convolutional neural networks that perform feature abstraction layer by layer.
\begin{figure}[tb!]
\centering
\includegraphics[width=1.0\linewidth]{capnet}
\caption{Visualization of capsule layers with 10 capsules in the output layer that represent the existence of 10 classes.}
\label{fig:capnet}
\end{figure}
Two recent capsule routing algorithms are dynamic routing \cite{CAP_NIPS2017_Hinton} and EM routing \cite{CAP_ICLR2018_Hinton}.
Dynamic routing quantifies the agreement between capsules via their inner-product.
A greater inner-product value indicates that two capsules agree more with each other, and dynamic routing aims to amplify that agreement.
EM routing models each higher level capsule as a Gaussian, and the posterior probabilities of previous-layer capsules determine how strongly they are connected to higher level capsules.
In both routing algorithms, capsules are coupled to higher level capsules according to certain agreement metrics without considering the prediction results.
In this work, we discuss and analyze the routing-by-agreement mechanisms in capsule networks and propose a routing algorithm
that can achieve faster convergence and better discrimination results in classification tasks.
The algorithm is inspired by two observations: (1) the ultimate objective of training capsule networks is to make the correctly activated output capsules have the largest lengths, and
(2) it is reasonable for feature capsules (capsules before the output layer) to have negative effects on capsules in the following layers.
We also propose several training tricks to enlarge the solution space, which can result in higher classification accuracy on several datasets.
We pick the capsule network architecture used in \cite{CAP_NIPS2017_Hinton} as a case study to show how our methods benefit the training of capsule networks.
\iffalse
The main contributions of this paper are summarized as follows:
\begin{enumerate}
\item We revisit the dynamic routing algorithm in \cite{CAP_NIPS2017_Hinton} and discuss two beneath assumptions in the algorithm, which might not be necessarily hold in some cases.
\item We establish a new objective on determining routing coefficients between capsules in adjacent capsule layers according to the classification mechanism of capsule networks.
\item Two popular regularization methods are discussed and embedded in our routing algorithm that can be solved efficiently within the neural network training procedure.
\item We conduct experiments on three widely used benchmarks and compare the results with dynamic routing algorithm which shows the effectiveness of the proposed methods.
\end{enumerate}
\fi
\section{Related Works}
\label{sec:prelim}
\cite{LEARN_NIPS2000_Hinton} developed credibility networks where images are interpreted as a parse tree or a collection of parse trees with the leaf nodes being image pixels.
Each node, with its associated latent variables, represents an object and its pose information, and these objects combine into higher level objects.
The concept of a capsule comes from transforming auto-encoders \cite{CAP_ICANN2011_Hinton}, where each capsule holds the instantiation of a certain entity.
All the entities together reconstruct images, with transformations applied on the instantiation vectors with certain probabilities.
Although the capsule in \cite{CAP_ICANN2011_Hinton} aims at reconstruction, it already carries features similar to the capsule discussed in this paper, i.e., associating with a higher level object in a feedforward style.
\cite{CAP_NIPS2017_Hinton} and \cite{CAP_ICLR2018_Hinton} are the two latest implementations of capsule networks for object classification; their routing algorithms have been discussed in the previous section.
\cite{CAP_ICLR2018_Wang} rephrased the dynamic routing algorithm as a KL divergence regularized clustering problem that inspires
an improved solution resembling agglomerative fuzzy k-means, which can be solved by coordinate descent.
\section{Experiments}
\label{sec:exp}
\begin{table*}[tb!]
\caption{Neural network configuration for each benchmark.}
\label{tab:arch}
\renewcommand{\arraystretch}{1.3}
\setlength{\tabcolsep}{1pt}
\centering
\small
\begin{tabular}{c|c|c|ccc}
\toprule
\multirow{2}{*}{Layer} & \multirow{2}{*}{Filter/Capsule Size} & \multirow{2}{*}{Activation} & \multicolumn{3}{c}{Filter/Capsule/Neuron Number} \\
& & & MNIST &Fashion-MNIST & CIFAR-10 \\ \midrule
Conv1 & 9$\times$9 & ReLU & 256 & 256 & 256 \\
Cap1 & 8 & Squash & 32 & 32 & 64 \\
Cap2 & 16 & Squash & 10 & 10 & 10 \\
FC1 & - & ReLU & 512 & 512 & - \\
FC2 & - & ReLU & 1024 & 1024 & - \\
FC3 & - & Sigmoid & 784 & 784 & - \\ \bottomrule
\end{tabular}
\end{table*}
\begin{figure*}[bt!]
\centering
\subfloat[input]{\includegraphics[width=.25\textwidth]{input}}
\hspace{0.5cm}
\subfloat[dynamic routing]{\includegraphics[width=.25\textwidth]{dyrecon}}
\hspace{0.5cm}
\subfloat[$l_2$-regularized routing]{\includegraphics[width=.25\textwidth]{bprecon}}
\caption{Visualization of the reconstructed images on Fashion-MNIST dataset.
(a) 100 image samples from the Fashion-MNIST dataset that can be correctly classified by the capsule network that is trained with our algorithm;
(b) The corresponding images reconstructed from the capsule networks and the reconstruction networks trained with the dynamic routing algorithm;
(c) The corresponding images reconstructed from the reconstruction networks trained with dynamic routing
where the input capsules are obtained from our $l_2$-regularized routing algorithm without reconstruction networks.}
\label{fig:reconstruction}
\end{figure*}
To verify the proposed methods, in this paper, we adopt the simplest capsule neural network architecture in \cite{CAP_NIPS2017_Hinton},
which is implemented with \texttt{tensorflow} \cite{DL_OSDI2016_TensorFlow}.
We conduct experiments on three datasets that include MNIST \cite{BM_MNIST_LeCun}, Fashion-MNIST \cite{BM_MNISTF_Han} and CIFAR-10 \cite{BM_CIFAR10_Alex}.
Notations ``DR'', ``L1'' and ``L2'' correspond to the original dynamic routing \cite{CAP_NIPS2017_Hinton} and the proposed algorithm with $l_1$ regularization and $l_2$ regularization, respectively.
``x/FC'' denotes that no fully connected reconstruction network is applied.
\iffalse
\subsection{The Datasets}
Benchmark details are introduced below.
\subsubsection{MNIST} MNIST consists of 70,000 hand-written digit images with each digit centered in a 28$\times$28 field.
60,000 samples are used for training and the rest for testing.
To show how \Cref{alg:trcap} behaves on segmenting overlapped digits, we also include the MultiMNIST dataset from \cite{CAP_NIPS2017_Hinton},
which consists of 60M overlapped digits for training and 10M for testing.
\subsubsection{Fashion-MNIST} Fashion-MNIST is a dataset of fashion images with 60,000 samples for training and 10,000 samples for testing.
Fashion-MNIST comes with the exact same configurations as MNIST except the contents that are categorized into more challenging objects including ``T-shirt'', ``Trouser'', ``Pullover'',
``Dress'', ``Coat'', ``Sandal'', ``Shirt'', ``Sneaker'', ``Bag'' and ``Ankle boot''.
\subsubsection{CIFAR-10} CIFAR-10 consists of 60,000 32$\times$32 color images categorized into 10 classes including ``Airplane'', ``Automobile'', ``Bird'', ``Cat'',
``Deer'', ``Dog'', ``Frog'', ``Horse'', ``Ship'' and ``Truck''.
The training set contains 50,000 images with 5000 images in each class and the test set contains 10,000 images with 1000 images in each class.
\fi
\subsection{Neural Network Architecture}
In all the experiments, we adopt the simplest 3-layer capsule networks as used in \cite{CAP_NIPS2017_Hinton}, with one convolutional layer, one primary capsule layer and one output layer.
Specifications are listed in \Cref{tab:arch}.
The first convolution layer is defined by 256 9$\times$9 kernels followed by two capsule layers with capsule vector dimensions of 8 and 16 respectively.
We use 32 primary capsules and 10 output capsules for the MNIST and Fashion-MNIST dataset and the primary capsule number is doubled when we are conducting experiments on CIFAR-10.
The reconstruction networks for the MNIST and Fashion-MNIST datasets have three fully connected layers with 512, 1024 and 784 neurons, respectively.
Reconstruction is not applied when training the network on CIFAR-10.
Each capsule layer is followed by the squash activation as in \Cref{eq:squash}.
We apply ReLU on the rest of the layers except the last layer in the reconstruction networks, which is equipped with sigmoid.
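The squash nonlinearity referred to above is the one of \cite{CAP_NIPS2017_Hinton}, $v = \frac{\|s\|^2}{1+\|s\|^2}\frac{s}{\|s\|}$. A minimal NumPy sketch (the function name and tensor layout are our illustrative choices, not the paper's implementation):

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # v = (|s|^2 / (1 + |s|^2)) * s / |s| : shrinks short capsule
    # vectors toward zero and long ones toward (but below) unit
    # length, while preserving their orientation.
    sq_norm = np.sum(s * s, axis=axis, keepdims=True)
    scale = sq_norm / (1.0 + sq_norm)
    return scale * s / np.sqrt(sq_norm + eps)

# 32 primary capsules with 8-dimensional pose vectors (cf. Cap1 in Table 1)
caps = np.random.randn(32, 8)
out = squash(caps)
assert np.all(np.linalg.norm(out, axis=-1) < 1.0)
```

The output length lies in $[0,1)$ and is interpreted as the activation probability of the capsule.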
\subsection{Image Classification}
\begin{table}[tb!]
\caption{Classification results of three benchmarks in terms of error rate (\%).}
\label{tab:class}
\centering
\small
\begin{tabular}{c|ccccc}
\toprule
Benchmarks & DR \cite{CAP_NIPS2017_Hinton} & L2 & L1 & L2/FC & L1/FC \\ \hline
MNIST & 0.34 & 0.35 & \textbf{0.32} & 0.35 &0.44 \\
Fashion-MNIST & 7.21 & 7.01 & 6.76 & \textbf{6.75} &6.77 \\
CIFAR-10 &15.3 & - & - & 14.52 & \textbf{14.04} \\ \hline
Average &7.62 & - & - &7.21 &\textbf{7.08} \\ \bottomrule
\end{tabular}
\end{table}
\begin{figure*}[tb!]
\small
\centering
\subfloat[MNIST]{\input{pgfplot/mnist}} \hspace{.3in}
\subfloat[Fashion-MNIST]{\input{pgfplot/fashion}} \\
\subfloat[CIFAR-10]{\input{pgfplot/cifar}} \hspace{.3in}
\subfloat[MultiMNIST]{\input{pgfplot/multi}}
\caption{Visualization of the convergence on different routing algorithms.
(a)--(d) are regular training curves on MNIST, Fashion-MNIST, CIFAR-10 and MultiMNIST, respectively.}
\label{fig:train}
\end{figure*}
In the first experiment, we compare our classification results with \cite{CAP_NIPS2017_Hinton} on the three benchmarks discussed above, as shown in \Cref{tab:class}.
For the MNIST dataset,
we observe that the margin loss drops fast even at the early training stage when the fully connected reconstruction networks are removed.
Therefore we pick a smaller batch size (i.e.~32) when training the capsule networks without reconstruction networks.
The learning rate decays by 0.5 every 1000 iterations.
Because deep learning models can easily achieve above 99.0\% accuracy on MNIST, it is hard to obtain further significant improvements.
Here we only show classification results similar to those of dynamic routing.
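The margin loss mentioned above is the classification loss of \cite{CAP_NIPS2017_Hinton}; a sketch with the standard constants $m^+=0.9$, $m^-=0.1$, $\lambda=0.5$ taken from that paper (the function signature is ours):

```python
import numpy as np

def margin_loss(lengths, targets, m_plus=0.9, m_minus=0.1, lam=0.5):
    # lengths: (batch, 10) norms of the output capsule vectors,
    # targets: (batch, 10) one-hot labels.
    # Present classes are pushed above m_plus, absent ones below m_minus;
    # lam down-weights the absent term to stabilize early training.
    present = targets * np.maximum(0.0, m_plus - lengths) ** 2
    absent = lam * (1.0 - targets) * np.maximum(0.0, lengths - m_minus) ** 2
    return float(np.sum(present + absent, axis=-1).mean())

t = np.eye(10)[[3]]                  # a single sample of class 3
good = 0.05 + 0.9 * t                # true capsule long, all others short
assert margin_loss(good, t) == 0.0   # confident correct prediction: zero loss
```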
We train the Fashion-MNIST dataset with a batch size of 128 and a starting learning rate of 0.001 that decays by 0.96 every 1000 iterations.
We also train the network without reconstruction layers and keep everything else unchanged.
The test error rate drops from 7.21\% to 7.01\% and 6.76\% when using our routing algorithms with $l_2$ and $l_1$ regularization respectively.
It can also be seen that the classification error further drops when the reconstruction networks are removed especially on the $l_2$ regularized algorithm.
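The staircase schedules quoted in this section (an initial rate multiplied by a fixed factor every fixed number of iterations, cf. TensorFlow's \texttt{exponential\_decay} with \texttt{staircase=True}) amount to:

```python
def staircase_lr(step, base_lr=0.001, decay=0.96, every=1000):
    # Learning rate after `step` training iterations: the base rate is
    # multiplied by `decay` once per completed block of `every` steps.
    return base_lr * decay ** (step // every)

assert staircase_lr(999) == 0.001          # still in the first block
assert staircase_lr(1000) == 0.001 * 0.96  # decayed once
```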
The CIFAR-10 model is trained as a single capsule network (without any model ensemble) using the architecture specified in \Cref{tab:arch}, where the number of primary capsules is doubled and the reconstruction networks are removed as in \cite{CAP_NIPS2017_Hinton}.
We also replace the classic ReLU with LeakyReLU \cite{DL_ICML2013_Maas} during training, which shows better performance.
As shown in \Cref{tab:class}, our routing methods with $l_2$ and $l_1$ regularization reduce the single model classification error rate by 0.78\% and 1.26\% respectively compared to dynamic routing.
\subsubsection{Discussion of FC-Regularization}
\Cref{tab:class} shows smaller classification error when the neural network is trained without fully-connected reconstruction nets.
One explanation is that the training set does not necessarily cover the whole data space.
In this experiment, we evaluate the trained neural networks on Fashion-MNIST with and without reconstruction regularization.
We feed correctly activated output capsule vectors associated with the model without reconstruction networks into the reconstruction networks and obtain the reconstructed images shown in \Cref{fig:reconstruction}.
It can be seen that a fraction of those images are correctly classified but not faithfully reconstructed,
which agrees with \Cref{tab:class}.
For classification purposes, removing the reconstruction networks also improves the training efficiency by dropping redundant trainable variables.
\subsubsection{Segmenting Overlapped Digits}
We also conduct experiments to show that our routing solution is able to recognize overlapped digits.
In this experiment, we adopt the MultiMNIST dataset as used in \cite{CAP_NIPS2017_Hinton},
where two digits from different classes are overlapped together with at most 4 pixels shift in each direction that forms into 36$\times$36 images.
The MultiMNIST dataset contains 60 million training samples and 10 million testing samples.
We train the capsule nets using $l_2$ and $l_1$ regularized routing algorithms respectively.
The initial learning rate is set to 0.001 and decays by 0.96 every 20000 iterations.
We also set a larger regularization coefficient, $\lambda=0.001$, to avoid over-fitting.
Because the MultiMNIST training set is extremely large, we train the neural networks for 200000 steps for both ``DR'' and our routing algorithms,
achieving an evaluation error of 7.54\% compared to 7.47\% for dynamic routing.
It should be noted that although the evaluation errors are similar for both methods, our training speed is faster than that of dynamic routing, as discussed in the following section.
\subsection{Convergence of the Routing Algorithm}
In support of the proposed routing algorithm, we visualize the evaluation performance of the capsule networks during training in \Cref{fig:train}.
All models show similar convergence curves on the MNIST dataset, reaching an evaluation error under 0.4\%.
For the more challenging Fashion-MNIST and CIFAR-10, all regularized routing algorithms discussed in this work exhibit faster and better convergence in terms of evaluation error, demonstrating the effectiveness and efficiency of our methods.
Because the MultiMNIST dataset is extremely large, we only visualize the training behavior over the first 20000 steps.
We can observe that the evaluation error drops much faster than dynamic routing at early training steps, which is consistent with the discussion about Assumption~\ref{asm:2}.
Our algorithm also shows an advantage in training runtime: each step saves at least 20\% compared to dynamic routing (DR), as shown in \Cref{fig:time}.
\begin{figure}[tb!]
\small
\centering
\input{pgfplot/time}
\caption{Approximate training time per 100 steps.}
\label{fig:time}
\end{figure}
\section*{Introduction}
One can find basic concepts of tropical algebra in \cite{MS}.
Given a tropical univariate polynomial
\begin{eqnarray}\label{1}
f=\min_{0\le k\le n} \{x_k+kY\}
\end{eqnarray}
its tropical zero $y\in \RR$ is such that the minimum in (\ref{1}) is attained for at least two different values of $0\le k\le n$. In this paper we treat the coefficients $X=(x_0,\dots,x_n)$ as parameters and find the zeroes of $f$ as functions of $x_0,\dots,x_n$.
We show that $f$ has exactly $n$ such parametric zeroes $g_1,\dots,g_n$. Each $g_k,\, 1\le k\le n$ is a {\it tropical Newton-Puiseux polynomial}, i.~e. a piece-wise linear function with rational coefficients at the variables. One can represent any tropical Newton-Puiseux polynomial in the form
$$\min_I \{a_I+(I,\, X)\}-\min_J\{b_J+(J,\, X)\}$$
\noindent of a difference (so, the tropical quotient) of two concave piece-wise linear functions, where $I,\, J\in \QQ^{n+1};\, a_I,\, b_J\in \RR$, and $(I,\, X)$ denotes the inner product (cf. \cite{O}).
Tropical Newton-Puiseux polynomials play the role of algebraic functions in the tropical setting. Similarly to Newton-Puiseux series in classical algebra, $g_k(x_0,\dots,x_n)$ provides a tropical zero of $f$ for any point $(x_0,\dots,x_n)\in \RR^{n+1}$. We note that in classical algebra one considers Newton-Puiseux series in just a single variable, while an advantage of tropical algebra is that one can consider tropical Newton-Puiseux polynomials in several variables.
Observe that if one considers a univariate tropical polynomial whose coefficients are tropical Newton-Puiseux polynomials, then, in turn, the tropical zeroes of this tropical polynomial are again tropical Newton-Puiseux polynomials. Thus, one can view the semi-field of tropical Newton-Puiseux polynomials as a tropical algebraic closure of the semi-ring of tropical polynomials.
\section{Tropical Newton-Puiseux polynomials as tropical zeroes}\label{one}
We say that a tropical Newton-Puiseux polynomial $g:=g(x_0,\dots,x_n)$ is a {\it (tropical) zero} of $f$ (\ref{1}) if for any $(x_0,\dots,x_n)\in \RR^{n+1}$ the value $y=g(x_0,\dots,x_n)$ is a tropical zero of the tropical polynomial $f$.
First, we describe the tropical Newton-Puiseux zeroes of $f$ geometrically and show that there are exactly $n$ of them. In the next section we provide for them the explicit formulas.
For a point $x:=(x_0,\dots,x_n)\in \RR^{n+1}$ its Newton polygon $N_x\subset \RR^2$ is the convex hull of the vertical rays $\{(k,\, c)\, :\, c\ge x_k\},\, 0\le k\le n$. Note that the slopes of the edges of $N_x$ are just the tropical zeroes of $f$.
For a subset $S\subset \{1,\dots,n-1\}$ consider a convex polyhedron $P_S\subset \RR^{n+1}$ (of dimension $n+1$) consisting of points $x=(x_0,\dots,x_n)$ such that its Newton polygon $N_x$ has the vertices $(0,\, x_0),\, (n,\, x_n),\, \{(s,\, x_s)\, :\, s\in S\}$. Thus, $\{P_S\, :\, S\subset \{1,\dots,n-1\}\}$ constitute a partition of $\RR^{n+1}$ into $2^{n-1}$ polyhedra.
Take the (open) polyhedron $P:=P_{\{1,\dots,n-1\}}$ consisting of points $x$ such that the Newton polygon $N_x$ has $n+1$ vertices. Then there are exactly $n$ continuous piece-wise linear functions $g_1,\dots,g_n$ on $P$ being tropical zeroes of $f$ (\ref{1}). Namely, $g_k(x_0,\dots,x_n)=x_{k-1}-x_k,\, 1\le k\le n$.
Observe that each $g_k,\, 1\le k\le n$ has a unique (continuous) continuation on every polyhedron $P_S$. Namely, take the unique pair $0\le i\le k-1,\, k\le j\le n$ such that $i,\, j\in S\cup \{0,\, n\}$, and there are no $s\in S\cup \{0,\, n\}$ satisfying inequalities $i<s<j$.
\begin{lemma}\label{slope}
The unique continuation of $g_k$ on $P_S$ coincides with $\frac{x_i-x_j}{j-i}$.
\end{lemma}
{\bf Proof}. For any point $(x_0,\dots,x_n)$ that belongs to the boundaries of both $P$ and $P_S$, it holds that $x_s-x_{s+1}=x_{k-1}-x_k$ for $i\le s<j$, hence $x_{k-1}-x_k=\frac{x_i-x_j}{j-i}$. $\Box$ \vspace{2mm}
Note that $\frac{x_i-x_j}{j-i}$ is the slope of the edge with the end-points $(i,\, x_i),\, (j,\, x_j)$. Thus, we have shown that there are exactly $n$ tropical Newton-Puiseux polynomials on $\RR^{n+1}$ being tropical zeroes of $f$ (\ref{1}).
\section{Explicit formulas for tropical zeroes}
\begin{theorem}
A tropical polynomial $f=\min_{0\le k\le n} \{x_k+kY\}$ with parametric coefficients $(x_0,\dots,x_n)$ has exactly $n$ tropical zeroes $g_1,\dots,g_n$ being tropical Newton-Puiseux polynomials in $(x_0,\dots,x_n)$. For each $1\le k\le n$ one can represent $g_k$ as follows. For every $0\le p<k$ consider a tropical Newton-Puiseux polynomial
$$t_p:=\max_{k\le q\le n} \left\{\frac{x_p-x_q}{q-p}\right\}.$$
\noindent Then $g_k=\min_{0\le p<k} \{t_p\}$.
\end{theorem}
{\bf Proof}. Fix for the time being a polyhedron $P_S$ and follow the notations from Lemma~\ref{slope}. For any point $x:=(x_0,\dots,x_n)\in P_S$ its Newton polygon $N_x$ has an edge with the end-points $(i,\, x_i),\, (j,\, x_j)$. Therefore, for every $0\le p<k$ the following inequality for the slopes holds:
$$\frac{x_p-x_j}{j-p} \ge \frac{x_i-x_j}{j-i}.$$
\noindent Hence $t_p \ge \frac{x_i-x_j}{j-i}$.
On the other hand, $t_i=\frac{x_i-x_j}{j-i}$ since for every $k\le q\le n$ the following inequality for the slopes holds:
$$\frac{x_i-x_q}{q-i} \le \frac{x_i-x_j}{j-i}.$$
\noindent Thus, $g_k$ coincides with $\frac{x_i-x_j}{j-i}$ on $P_S$, which completes the proof by Lemma~\ref{slope}. $\Box$
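The theorem admits a direct numerical sanity check: evaluate $g_k$ by the min--max formula at integer coefficient vectors and verify that the minimum in (\ref{1}) is attained at least twice. The code below is our illustration (using exact rational arithmetic), not part of the proof:

```python
from fractions import Fraction
import random

def g(k, x):
    # g_k = min_{0 <= p < k} max_{k <= q <= n} (x_p - x_q)/(q - p)
    n = len(x) - 1
    return min(max(Fraction(x[p] - x[q], q - p) for q in range(k, n + 1))
               for p in range(k))

def is_tropical_zero(y, x):
    # y is a tropical zero of f = min_k {x_k + k*Y} iff the minimum
    # is attained for at least two values of k
    vals = [x[j] + j * y for j in range(len(x))]
    return vals.count(min(vals)) >= 2

# On P (all points of the Newton polygon are vertices), g_k = x_{k-1} - x_k:
assert g(1, [0, -2, -3]) == 2 and g(2, [0, -2, -3]) == 1

random.seed(0)
for _ in range(200):
    n = random.randint(1, 6)
    x = [random.randint(-10, 10) for _ in range(n + 1)]
    assert all(is_tropical_zero(g(k, x), x) for k in range(1, n + 1))
```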
\begin{remark}
In the formula for $g_k$ in the theorem, each tropical Newton-Puiseux polynomial $t_p$ fixes the left end-point $(p,\, x_p)$ of the intervals. In a dual way one can define $r_q:=\min_{0\le p<k} \{\frac{x_p-x_q}{q-p}\}$ by fixing a right end-point $(q,\, x_q)$. Then, similarly to the theorem, we get $g_k=\max_{k\le q\le n} \{r_q\}$.
\end{remark}
\vspace{2mm}
{\bf Acknowledgements}. The author is grateful for the support of grant RSF 21-11-00283 and
to MCCME for the inspiring atmosphere.
\section{Introduction}
Free groups are ubiquitous in mathematics and their representation
theory has been widely studied. However, since a (finitely generated)
free group $\Gamma$ is not type I, the usual program of representation
theory in its na\"\i ve form, decomposing unitary representations into
irreducible ones, is almost meaningless. In fact, to construct a
unitary representation of $\Gamma$ it is only necessary to fix a Hilbert
space $H$ and to choose a unitary operator for each
generator.
A ``random'' choice will yield an irreducible representation.
If we restrict our
attention to those representations that are weakly contained in the regular
representation the situation drastically changes. For brevity we shall say
that a representation is {\em tempered} if it is weakly contained in the regular representation.
Using the fact that the reduced $C^\ast$-algebra of $\Gamma$ is simple (\cite{Pow}),
one can prove \cite{K-S2} that a tempered representation $(\pi,H)$ can always be realized
as a {\em boundary representation} (see \S\ref{sec:boundary-rep} for the definition).
This implies in particular that the Hilbert space $H$ can be chosen
to be a direct integral of a measurable field of Hilbert spaces
$H=\int_{\partial\Gamma}^\oplus H_xd\mu(x)$ over the
boundary $\partial\Gamma$ of $\Gamma$ for a suitable quasi-invariant measure $\mu$
which depends on the representation.
In 2004,
a large family of irreducible unitary tempered representations of the free group,
the so-called {\it multiplicative} representations, was introduced \cite{K-S3}.
Although these representations have a very concrete and seemingly elementary definition,
this family is large enough to include all known specific irreducible
tempered representations constructed using the action of $\Gamma$ on its Cayley graph.
In \cite{Iozzi_Kuhn_Steger_stab}
we extended the class in \cite{K-S3} to include also representations
that are obtained with a similar procedure as in \cite{K-S3}
but are only {\it finitely reducible}.
This has the advantage that this enlarged class of representations,
called the class $\mathbf{Mult}(\Gamma)$, is now stable under many natural operations,
such as the restriction to a subgroup and the induction to a free supergroup
(see \cite{Iozzi_Kuhn_Steger_stab}).
Moreover,
although the construction presented in \cite{K-S3}
seems to depend on the choice of generators, the class $\mathbf{Mult}(\Gamma)$
is independent of that choice and in fact
it is invariant under the action of $\Aut(\Gamma)$.
This is not the case, for example, for the restriction to the free group
on two generators of the spherical series
of the group of automorphisms of the homogeneous tree of valency four.
(See Remark~\ref{rem:facts}(2) for more
on the irreducibility of these representations.)
\medskip
In this paper, in analogy with the case of the free group, we define
a new class of representations for virtually free groups.
These groups include for example $\mathrm{PSL}(2,\mathbb Z)\cong\mathbb Z_2\ast \mathbb Z_3$,
whose commutator subgroup is a torsion-free surface group and whose abelianization
$\mathrm{PSL}(2,\mathbb Z)_\mathrm{ab}\cong\mathbb Z_2\times\mathbb Z_3$ has order six.
Furthermore, virtually free groups are Gromov hyperbolic and can be realized
as fundamental groups of finite graph of finite groups, \cite{Karrass_Pietrowski_Solitar}.
We define a class $\mathbf{Mult}(\Lambda)$ of unitary representations
of a finitely generated virtually free group $\Lambda$ by inducing a
representation of the class $\mathbf{Mult}(\Gamma)$ from a (in fact,
any) free subgroup $\Gamma$ of finite index (see \S~\ref{sec:lambda}).
For these classes of representations we prove the following
\begin{theorem_intro} Let $\Lambda$ be a virtually free group.
\begin{enumerate}
\item The classes $\mathbf{Mult}(\Lambda)$ and $\mathbf{Mult}_{\mathrm irr}(\Lambda)$
are non-empty and $\Aut(\Lambda)$-invariant (Corollary~\ref{cor:invariance}).
\item The representations in the class $\mathbf{Mult}(\Lambda)$ are weakly contained in the
regular representation (Corollary~\ref{cor:bdry-rep-Gamma}).
\item The representations in the class $\mathbf{Mult}(\Lambda)$ are cocycle representations of $\Lambda$,
that is, representations of the crossed product
$\Lambda\ltimes\mathcal{C}(\partial\Lambda)$ (Corollary~\ref{cor:bdry}).
\end{enumerate}
\end{theorem_intro}
As we mentioned earlier, the representations of the class
$\mathbf{Mult}(\Gamma)$ encompass all tempered representations of the
free group $\Gamma$ that arise from the embedding of $\Gamma$ into
the group of automorphisms of its Cayley graph (with respect to some
set of generators). On the other hand, to the authors' knowledge, there
are no other realizations of any of the representations in
the class $\mathbf{Mult}_\mathrm{irr}(\Lambda)$ of a virtually free
group $\Lambda$. Constructions similar to ours (in the cocompact case)
can be found for example in \cite{Bader_Muchnik}, where the authors
show the irreducibility of the quasi-regular representation of a
compact surface group $\pi_1(\Sigma)$ on the geodesic
boundary\footnote{One word of warning for the reader: what the authors
in \cite{Bader_Muchnik} call "boundary representation" is not what
is referred to with the same terminology in this paper, but what we call
``quasi-regular'' representation in Theorem~\ref{prop:herz}.}
$\partial\Sigma$ with respect to the Patterson--Sullivan measure.
Likewise, in \cite{Burger_delaHarpe}, the authors show that if
$H<L$ are discrete groups such that
$H=\mathrm{Comm}_L(H)$, then the induction to $L$ of
any finite dimensional irreducible representation of $H$ remains
irreducible. None of these results seems to have a nonempty
intersection with our construction.
\medskip
The last item in the above theorem follows from a result that was already known for free groups and
that we record here for a general Gromov hyperbolic group (see Theorem~\ref{general}), namely:
\begin{theorem_intro}
Let $ G $ be a torsion free Gromov hyperbolic group which is not almost cyclic.
Then every tempered representation of $ G $ is a cocycle representation with respect to
some quasi-invariant measure.
If the representation is irreducible, the measure
can be taken to be ergodic.
\end{theorem_intro}
As a consequence of this result we prove the following analogue of the Herz
majorization principle:
\begin{theorem_intro}
Let $(\pi,H)$ be a tempered
representation of a torsion free Gromov hyperbolic group $ G $
which is not almost cyclic
and let $v$ be any vector in $H$.
Then there exists a positive measure $\mu$ on $\partial G $ such that
\begin{equation*}
|\langle\pi(x)v,v\rangle|\leq \norm{v}^2|\langle\rho(x)\mathbf{1}_{\partial G },\mathbf{1}_{\partial G }\rangle|
\end{equation*}
where $\rho$ is the quasi-regular representation on $L^2(\partial G ,d\mu)$ and
$\mathbf{1}_{\partial G }$ is the constant function on $\partial G $.
\end{theorem_intro}
The measure on $\partial G$ must however depend on the tempered
representation, thus implying that a Harish-Chandra function cannot
exist (see Remark~\ref{rem:4.8}) and exhibiting one more instance of
the fact that Gromov hyperbolic groups behave morally as rank one
groups.
\medskip
We remark that the above construction relies not only upon the
stability properties of the class $\mathbf{Mult}(\Gamma)$ of a free
group $\Gamma$ (which were proven in \cite{Iozzi_Kuhn_Steger_stab}),
but also of the non-obvious corresponding properties of the extension
of multiplicative representations to boundary representations (see for
example Theorem~\ref{thm:ind}).
\bigskip The structure of the paper is as follows. In
\S~\ref{sec:prelim} we recall the definition of boundary
representation of a free group -- and, more generally, of a Gromov
hyperbolic group -- and the construction of the boundary
multiplicative representations of the free group; we recall moreover
from \cite{Iozzi_Kuhn_Steger_stab} the stability properties of the
class of representations of the free group obtained from matrix
systems with an inner product. In \S~\ref{sec:lambda} we define the
classes $\mathbf{Mult}(\Lambda)$ and $\mathbf{Mult}_\mathrm{irr}(\Lambda)$ of
representations of a finitely generated virtually free group $\Lambda$
obtained by induction from any free subgroup of finite index and we show
both that $\mathbf{Mult}(\Lambda)$ and
$\mathbf{Mult}_\mathrm{irr}(\Lambda)$ are $\Aut(\Lambda)$-invariant and
that these representations are tempered. In
\S~\ref{sec:result-hyperbolic} we prove that (irreducible) tempered
representations of a Gromov hyperbolic group $G$ are cocycle
representations with respect to an (ergodic) measure and we deduce the
analogue of Herz majorization
principle (Theorem~\ref{prop:herz}). In the Appendix~\ref{app:2}
we prove the essential stability results for multiplicative boundary
representations that are not proven in \cite{Iozzi_Kuhn_Steger_stab}.
\section{Preliminaries}\label{sec:prelim}
\subsection{Boundary Representations}\label{sec:boundary-rep}
\begin{definition}
Let $G$ be any discrete group, $\mathcal{A}$ be a commutative $C^\ast$-algebra and
$\lambda:G\to \Aut(\mathcal{A})$ a homomorphism of $G$
into the group of isometric automorphisms of $\mathcal{A}$.
A \defn{covariant representation} of $(G,\mathcal{A})$ on a Hilbert space $H$ is a triple
$(\pi,\alpha,\mathcal{H})$ where
\begin{itemize}
\item
$\pi:G\to\mathcal{U}(H)$ is a unitary representation of $G$;
\item $\alpha:\mathcal{A}\to\mathcal{L}(H)$ is a $C^\ast$-representation of $\mathcal{A}$
in the space of bounded linear operators;
\item for all $\gamma\in G$ and $A\in\mathcal{A}$
\begin{equation*}
\pi(\gamma)\alpha(A)\pi(\inv\gamma)=\alpha\big(\lambda(\gamma)A\big)\,.
\end{equation*}
\end{itemize}
Two covariant representations $(\pi,\alpha,\mathcal{H})$ and $(\rho,\beta,\mathcal{L})$ of $G$ and $\mathcal{A}$
are {\em equivalent} if there exists a unitary operator $J:\mathcal{H}\to\mathcal{L}$,
such that for all $\gamma\in G$ and all $A\in\mathcal{A}$,
\begin{equation*}
\rho(\gamma)\,J=J\,\pi(\gamma)\qquad\text{ and }\qquad\beta(A)\,J=J\,\alpha(A)\,.
\end{equation*}
\end{definition}
\medskip
If $K$ is any compact metrizable space on which $G$ acts continuously and
by isometries, the space of complex valued functions $\mathcal{C}(K)$ is a $C^\ast$-algebra
naturally endowed with a continuous isometric action of $G$,
$\lambda:G\to\Aut\big(\mathcal{C}(K)\big)$,
defined by
\begin{equation*}
\lambda(\gamma)F(k):=F(\inv\gamma k)\,,
\end{equation*}
for all $F\in \mathcal{C}(K)$, $\gamma\in G$ and $k\in K$.
In the case in which $G$ is a Gromov
hyperbolic group, the space $K$ can be taken to be the boundary
of the Cayley graph associated to a fixed generating system, which we
denote by $\partial G$. For the sake of the reader, we recall the definition of
$\partial G$ in the appendix. We
mention here only that $\partial G$ is a compact metrizable space with the
$G$-action defined by $(\gamma,\omega)\mapsto\inv\gamma\omega$
and that different generating sets correspond to homeomorphic
boundaries.
\begin{definition} A {\em boundary representation} of a hyperbolic
group $G$ on $H$ is a covariant representation $(\pi,\alpha,H)$
of $\big(G,\mathcal{C}(\partial G)\big)$.
\end{definition}
The reader who is familiar with crossed-product $C^\ast$-algebras will
recognize that a boundary representation is nothing but a
representation of the crossed product $C^\ast$-algebra $G\ltimes\mathcal{C}(\partial G)$
(see \S~\ref{sec:result-hyperbolic}).
General Gromov hyperbolic groups will be considered again in their
full generality in \S~\ref{sec:result-hyperbolic}, while in the rest
of this section we will consider only free groups.
\subsection{Boundary Multiplicative Representations of the Free Group}
We begin with the definition of {\it multiplicative representation}
in the context of finitely generated free groups, referring to
\cite{K-S3} for details and proofs.
Let $\mathbb{F}_A$ be a free group with a finite symmetric set of free
generators $A$. A {\em matrix system} $(V_a,H_{ba})$ is an
assignment of a vector space $a~\mapsto~V_a$ for every generator $a\in
A$ and a linear map $H_{ba}:V_a\to V_b$, for every $a,b~\in~A$, such
that $H_{ba}=0$ whenever $ba=e$. An {\em invariant subsystem}
$(W_a,H_{ba})$ of the matrix system $(V_a,H_{ba})$ is an assignment of
vector subspaces $a\mapsto W_a\subseteq V_a$ such that
$H_{ba}W_a\subset W_b$ for all $a,b\in A$. If $(W_a,H_{ba})$ is an
invariant subsystem of $(V_a,H_{ba})$, the {\em quotient system}
$(\widetilde{V}_a,\widetilde{H}_{ba})$ is the assignment $a\mapsto
\widetilde{V}_a:=V_a/W_a$ such that
$\widetilde{H}_{ba}\widetilde{v}_{a}=H_{ba}v_a$, for any
representative $v_a$ of $\widetilde{v}_a\in\widetilde{V}_a$. A system
$(V_a,H_{ba})$ is called {\em irreducible} if it is nonzero and admits
no nontrivial invariant subsystems.
We endow $\mathbb{F}_A$ with the word metric $d(x,e):=|x|$
with respect to the generating set $A$.
We say that a function
\begin{equation*}
f:\mathbb{F}_A\to\bigsqcup_{a\in A} V_a
\end{equation*}
is {\em multiplicative} if there exists $N\geq0$, depending only on $f$,
such that for all $x$ with $|x|\geq N$
\begin{equation}\label{1.1}
\begin{alignedat}{3}
&f(x)\in V_a &\quad&\text{ if }x=x'a\text{ is reduced}\\
&f(xb)=H_{ba}f(x)&\quad&\text{ if }x=x'a\text{ is reduced and }ba\neq e\,.
\end{alignedat}
\end{equation}
We denote by $\mathcal{H}^\infty(V_a,H_{ba})$ (or by $\mathcal{H}^\infty$ if no
confusion arises) the quotient of the space of multiplicative
functions with respect to the equivalence relation according to which
two multiplicative functions are equivalent if they differ only on
finitely many words.
If for every $a\in A$ the $V_a$'s are equipped with a positive
definite sesquilinear form $B_a$ and if these forms satisfy the {\em
compatibility condition}
\begin{equation}\label{E-cond-B}
B_a(v_a,v_a)=\sum_{b\in A}B_b(H_{ba}v_a,H_{ba}v_a)
\end{equation}
for all $v_a\in V_a$,
then
\begin{equation}\label{1.2}
\langle f_1,f_2\rangle
:=\sum_{|x|=N}\;\;\sum_{ \substack{ \;a\\ |xa|=|x|+1}}
B_a\big(f_1(xa),f_2(xa)\big)
\end{equation}
defines an inner product on $\mathcal{H}^\infty$,
where $N$ should be taken to be large enough that both $f_1$ and $f_2$ satisfy \eqref{1.1}
outside the ball of radius $N$. We remark that, up to a normalization,
every matrix system $(V_a,H_{ba})$ admits a compatible tuple $(B_a)_{a\in A}$
of positive semidefinite forms.
When the matrix system is irreducible,
each $B_a$ is strictly positive definite and, up to scalar multiples,
it is also unique. Whether the system is irreducible or not,
the triple $(V_a,H_{ba},B_a)$ will be called
a {\em matrix system with inner product}.
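As a toy illustration of why the compatibility condition \eqref{E-cond-B} matters, the following sketch builds a one-dimensional matrix system with inner product on the free group of rank two and checks numerically that the sum in \eqref{1.2} does not depend on the truncation radius $N$. The constant system $H_{ba}=1/\sqrt{3}$ and all code names are our illustrative choices:

```python
import math
import random

GENS = "aAbB"                       # A = a^{-1}, B = b^{-1}
INV = {"a": "A", "A": "a", "b": "B", "B": "b"}
c = 1.0 / math.sqrt(3.0)            # H_{ba} = c for ba != e; then
                                    # B_a(v, v) = 3 * (c * v)**2 = v**2,
                                    # which is the compatibility condition

def sphere(n):
    # all reduced words of length n in the free group on {a, b}
    words = [""]
    for _ in range(n):
        words = [w + g for w in words for g in GENS
                 if not w or INV[w[-1]] != g]
    return words

random.seed(1)
f = {w: random.uniform(-1.0, 1.0) for w in sphere(1)}   # free values on |x| = 1
for w in sphere(1) + sphere(2):                         # multiplicative extension
    for g in GENS:
        if INV[w[-1]] != g:
            f[w + g] = c * f[w]                         # f(xb) = H_{ba} f(x)

def inner(N):
    # <f, f> computed with truncation radius N as in (1.2):
    # a sum of B_a(f(xa), f(xa)) over the sphere of radius N + 1
    return sum(f[w] ** 2 for w in sphere(N + 1))

assert abs(inner(1) - inner(2)) < 1e-12                 # independent of N
```

The last assertion is exactly the well-definedness of \eqref{1.2}: each word of length $N+1$ contributes the same squared mass as its three extensions of length $N+2$.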
We can hence define a representation of $\mathbb{F}_A$ on $\mathcal{H}^\infty(V_a,H_{ba})$ by
\begin{equation*}\label{repr}
\big(\pi(x)f\big)(y):=f(\inv x y)\,,
\end{equation*}
which can be proved to be unitary.
If $\mathcal{H}(V_a,H_{ba},B_a)$ is the completion of
$\mathcal{H}^\infty(V_a,H_{ba})$ with respect to the inner product in
\eqref{1.2}, then $\pi$ extends to a unitary representation on
$\mathcal{H}(V_a,H_{ba},B_a)$, which we call {\em multiplicative}.
\medskip
The next step is to show that multiplicative representations are
in fact boundary representations of the free group.
The boundary $\partial\mathbb{F}_A$ of a free group $\mathbb{F}_A$ consists of the set of infinite
reduced words, with the topology defined by the basis
\begin{equation*}
\partial\mathbb{F}_A(x):=\{\omega\in\partial\mathbb{F}_A:\,\text{the reduced word for }\omega\text{ starts with }x\}\,,
\end{equation*}
for all $x\in\mathbb{F}_A$, $x\neq e$. The sets $\partial\mathbb{F}_A(x)$ are both open and
closed in $\partial\mathbb{F}_A$ and $\partial\mathbb{F}_A$ is a compact (as well as Hausdorff, perfect,
separable, and totally disconnected) space. For every $x\in\mathbb{F}_A$,
$x\neq e$, let $\mathbf{1}_{\partial\mathbb{F}_A(x)}$ denote the characteristic function of
$\partial\mathbb{F}_A(x)$. In order to show that a given unitary representation
$(\pi,\mathcal{H})$ of $\mathbb{F}_A$ is a boundary representation we need to define
a $C^\ast$-algebra homomorphism $\alpha:\mathcal{C}(\partial\mathbb{F}_A)\to\mathcal{L}(\mathcal{H})$
satisfying
\begin{equation}\label{prodinc}
\pi(x)\alpha(F)\pi(x^{-1})=\alpha\big(\lambda(x)F\big)\,,
\end{equation}
for any $x\in\mathbb{F}_A$ and~$F\in \mathcal{C}(\partial\mathbb{F}_A)$.
Since the subalgebra spanned by the
functions $\{\mathbf{1}_{\partial\mathbb{F}_A(x)}\}_{x\in\mathbb{F}_A}$ is a dense $\ast$-subalgebra of $\mathcal{C}(\partial\mathbb{F}_A)$,
it is sufficient to define
$\alpha_\pi(\mathbf{1}_{\partial\mathbb{F}_A(x)})$ for every $x$, and in fact on the dense subspace $\mathcal{H}^\infty\subset\mathcal{H}$.
Denote by $\mathbf{1}_{\mathbb{F}_A(x)}$ the characteristic function
of the cone
\begin{equation}\label{eq:fx}
\mathbb{F}_A(x):=\{y\in\mathbb{F}_A:\,\text{the reduced word for }y\text{ starts with }x\}
\end{equation}
and define $\alpha_\pi(\mathbf{1}_{\partial\mathbb{F}_A(x)}):\mathcal{H}^\infty\to
\mathcal{H}^\infty$
by setting
\begin{equation}\label{eq:bdryrep}
\big(\alpha_{\pi}(\mathbf{1}_{\partial\mathbb{F}_A(x)})f\big)(y):=\mathbf{1}_{\mathbb{F}_A(x)}(y)f(y)=
\begin{cases}
f(y) &\text{if $y\in \mathbb{F}_A(x)$}\\
0 &\text{otherwise}\,.
\end{cases}
\end{equation}
A routine calculation shows that \eqref{prodinc} is verified and hence every
multiplicative representation $(\pi,\mathcal{H})$
is a boundary representation
$(\pi,\alpha_\pi,\mathcal{H})$ of $\mathbb{F}_A$.
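For the reader's convenience, the routine calculation can be sketched as follows; it suffices to verify \eqref{prodinc} on the generators $\mathbf{1}_{\partial\mathbb{F}_A(z)}$ and on the dense subspace $\mathcal{H}^\infty$:

```latex
For $f\in\mathcal{H}^\infty$ and $x,z,y\in\mathbb{F}_A$ one has
\begin{equation*}
\begin{aligned}
\big(\pi(x)\,\alpha_\pi(\mathbf{1}_{\partial\mathbb{F}_A(z)})\,\pi(\inv x)f\big)(y)
&=\big(\alpha_\pi(\mathbf{1}_{\partial\mathbb{F}_A(z)})\,\pi(\inv x)f\big)(\inv x y)\\
&=\mathbf{1}_{\mathbb{F}_A(z)}(\inv x y)\,\big(\pi(\inv x)f\big)(\inv x y)\\
&=\big(\lambda(x)\mathbf{1}_{\mathbb{F}_A(z)}\big)(y)\,f(y)\,,
\end{aligned}
\end{equation*}
and the multiplier $\lambda(x)\mathbf{1}_{\mathbb{F}_A(z)}$ agrees with the one
defining $\alpha_\pi\big(\lambda(x)\mathbf{1}_{\partial\mathbb{F}_A(z)}\big)$
except possibly on finitely many words, which is immaterial in $\mathcal{H}^\infty$.
```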
\begin{remark}\label{rem:facts}
\begin{enumerate}
\item When a boundary representation is considered as a representation
of $\mathbb{F}_A$ it is always weakly contained in the regular
representation. This follows from general considerations
since $\mathbb{F}_A$ acts amenably on $\partial\mathbb{F}_A$; a two-page proof specifically for the
case of the free group can be found in \cite[\S~2]{K-S1}.
\item In \cite{K-S3} it is shown that multiplicative representations built
from an irreducible system are irreducible as boundary representations (that is,
as representations of the crossed product $\mathbb{F}_A\ltimes \mathcal{C}(\partial\mathbb{F}_A)$, see \S~\ref{sec:result-hyperbolic}),
while, as representations of $\mathbb{F}_A$, they are either irreducible or, in some
special cases, the sum of two inequivalent irreducible representations.
\end{enumerate}
\end{remark}
\subsection{Stability Properties of Boundary Multiplicative Representations}\label{sec:stability}
The definition of multiplicative representation
seems to depend on the generating set $A$ that we have fixed. We shall see that
the dependence is only apparent, as soon as we allow general (not only irreducible)
matrix systems. The advantage of considering general matrix systems is that
the new class of representations so obtained is closed under change of generators,
restriction and induction. The price to pay is not so high, as the following result shows:
\begin{theorem}[\cite{Iozzi_Kuhn_Steger_stab}]\label{thm:decomposition}
If $\pi$ is a representation constructed from a matrix system with inner product $(V_a,H_{ba},B_a)$,
then $\pi$ decomposes as an orthogonal direct sum with respect to $(B_a)_{a\in A}$
of a finite number of representations defined from irreducible matrix
systems and the same is true when $\pi$ is considered as a boundary representation.
\end{theorem}
We proceed now to infer further properties of multiplicative
representations of $\mathbb{F}_A$.
\begin{theorem}[\cite{Iozzi_Kuhn_Steger_stab}]
\label{thm:stab}
Let $\mathbb{F}_A$ be a group freely generated by the symmetric set $A$
and let $\big(\pi,\alpha_\pi,\mathcal{H}(V_a,H_{ba},B_a)\big)$
be a multiplicative boundary representation constructed from a matrix
system with inner products $(V_a,H_{ba},B_a)_{a\in A}$.
If $A'$ is another symmetric set of free generators
such that $\mathbb{F}_A\cong\mathbb{F}_{A'}$,
then there exists a multiplicative boundary representation
$\big(\pi',\alpha_{\pi'},\mathcal{H}(V_s,H_{ts},B_s)\big)$ constructed from
a matrix system with inner products $(V_s,H_{ts},B_s)_{s\in A'}$, such that
$\big(\pi,\alpha_\pi,\mathcal{H}(V_a,H_{ba},B_a)\big)$
appears either as a subrepresentation or as a quotient
of $\big(\pi',\alpha_{\pi'},\mathcal{H}(V_s,H_{ts},B_s)\big)$.
\end{theorem}
We can therefore denote a free group by $\Gamma$ without any explicit dependence
on a free generating set.
We warn the reader that there is no guarantee that changing generators will
preserve the irreducibility of the system:
in \cite{Iozzi_Kuhn_Steger_stab} it is shown
that a representation of the principal series for the free group can be realized
as a multiplicative representation from an irreducible matrix system, but,
once the simplest nontrivial change of generators is performed,
it arises from a quotient of a reducible matrix system.
\begin{theorem}[\cite{Iozzi_Kuhn_Steger_stab}]\label{thm:4.4}
Let $\Gamma_0 \leq \Gamma$ be a subgroup of finite index in the free group
$\Gamma$.
Then:
\begin{enumerate}
\item the restriction to $\Gamma_0$ of a multiplicative boundary representation
$(\pi,\alpha_\pi,\mathcal{H})$ of $\Gamma$
is a boundary multiplicative representation of $\Gamma_0$;
\item if $(\pi',\alpha_{\pi'},\mathcal{H})$ is a boundary multiplicative representation of $\Gamma_0$,
then the induced representation $\ind{\pi'}$ is a boundary multiplicative representation
of $\Gamma$.
\end{enumerate}
\end{theorem}
Strictly speaking, the theorems stated in this section are proved in \cite{Iozzi_Kuhn_Steger_stab}
when all the representations involved are considered only as representations of the free group
rather than as boundary representations.
The extension of these results to the case of boundary representations is,
in most of the cases, a straightforward verification.
The one that is a bit more involved is the proof of Theorem~\ref{thm:4.4}(2):
since it relies heavily on the notation and the techniques of \cite{Iozzi_Kuhn_Steger_stab},
we defer it to the appendix of this paper.
The above theorems lead to the following:
\begin{definition} Let $\Gamma$ be a finitely generated free group.
A representation $\rho:\Gamma\to\mathcal{U}(H)$ is in the class $\mathbf{Mult}(\Gamma)$
if there exist a symmetric set $A$ of free generators,
a matrix system with inner product $(V_a,H_{ba},B_a)$,
a dense subspace $M\subset H$
and a unitary operator $J:H\to\mathcal{H}(V_a,H_{ba},B_a)$ such that
\begin{enumerate}
\item $J$ is an isomorphism between $M$ and $\mathcal{H}^\infty(V_a,H_{ba})$, and
\item for all $m\in M$ and $x\in\Gamma$, $J\big(\rho(x)m\big)=\pi(x)(Jm)$,
where $\pi$ is the multiplicative representation constructed from
$(V_a,H_{ba},B_a)$.
\end{enumerate}
\end{definition}
\section{The Classes $\mathbf{Mult}(\Lambda)$ and $\mathbf{Mult}_\mathrm{irr}(\Lambda)$}\label{sec:lambda}
Let $\Lambda$ be a finitely generated virtually free group.
\begin{definition}
We say that a representation $\pi$
of $\Lambda$ belongs to the class ${\mathbf{Mult}_0(\Lambda)}$
if it is contained in a representation obtained by inducing a
representation of the class $\mathbf{Mult}(\Gamma_0)$, where $\Gamma_0$ is
a free subgroup of finite index in $\Lambda$. In other
words,
\begin{equation*}
{\mathbf{Mult}_0(\Lambda)}:=\big\{\pi\in\Lambda\st \exists\,
\pi'\in\mathbf{Mult}(\Gamma_0)\st
\pi\leq \operatorname{Ind}_{\Gamma_0}^\Lambda(\pi')\big\}\,.
\end{equation*}
\end{definition}
\begin{proposition}\label{prop:independence} The class ${\mathbf{Mult}_0(\Lambda)}$ does not depend on $\Gamma_0$.
\end{proposition}
\begin{proof} Let $\Gamma_1$ be another free subgroup of finite index in $\Lambda$ and let
${\mathbf{Mult}_1(\Lambda)}$ be the corresponding class of induced representations.
The stabilizer of the pair $(\Gamma_0,\Gamma_1)\in \Lambda/{\Gamma_0}\times \Lambda/{\Gamma_1}$
for the diagonal action of $\Lambda$ is $\Gamma_0\cap \Gamma_1$.
Hence $\Lambda/(\Gamma_0\cap \Gamma_1)$,
as well as $\Gamma_0/(\Gamma_0\cap \Gamma_1)$ and $\Gamma_1/(\Gamma_0\cap \Gamma_1)$, are finite.
Assume now that $\pi\in{\mathbf{Mult}_0(\Lambda)}$. By definition there exists a representation
$\pi_0$ of $\Gamma_0$ such that $\pi$ is a component of
$\operatorname{Ind}_{\Gamma_0}^\Lambda (\pi_0)$. By general properties of induction
(see for example \cite{Mackey}), we have that
\begin{equation*}
\begin{aligned}
\pi
\leq &\operatorname{Ind}_{\Gamma_0}^\Lambda(\pi_0)
\leq \operatorname{Ind}_{\Gamma_0}^\Lambda
\big(\operatorname{Ind}_{\Gamma_0\cap \Gamma_1}^{\Gamma_0}(\pi_0|_{\Gamma_0\cap \Gamma_1})\big)\\
=& \operatorname{Ind}_{\Gamma_0\cap \Gamma_1}^\Lambda(\pi_0|_{\Gamma_0\cap \Gamma_1})
= \operatorname{Ind}_{\Gamma_1}^\Lambda
\big(\operatorname{Ind}_{\Gamma_0\cap \Gamma_1}^{\Gamma_1}(\pi_0|_{\Gamma_0\cap \Gamma_1})\big)\,.
\end{aligned}
\end{equation*}
By Theorem~\ref{thm:4.4}(1) we know that
$\pi_0|_{\Gamma_0\cap \Gamma_1}\in\mathbf{Mult}(\Gamma_0\cap \Gamma_1)$
and hence, by Theorem~\ref{thm:4.4}(2),
$\operatorname{Ind}_{\Gamma_0\cap \Gamma_1}^{\Gamma_1}(\pi_0|_{\Gamma_0\cap \Gamma_1})\in\mathbf{Mult}(\Gamma_1)$,
so that
$\pi\in{\mathbf{Mult}_1(\Lambda)}$ and, by symmetry, ${\mathbf{Mult}_0(\Lambda)}={\mathbf{Mult}_1(\Lambda)}$.
\end{proof}
The above result justifies the following
\begin{definition}
We say that a representation $\pi$ of a virtually free group $\Lambda$ belongs
to the class ${\mathbf{Mult}(\Lambda)}$ if there exists a finite index
free subgroup $\Gamma\leq\Lambda$ and a representation
$\pi'$ in the class $\mathbf {Mult}(\Gamma)$ such that
$\pi$ is a component of $\operatorname{Ind}_{\Gamma}^\Lambda (\pi')$,
\begin{equation*}
\begin{aligned}
{\mathbf{Mult}(\Lambda)}:=\big\{\pi\in\Lambda\st \exists\, \pi'\in\mathbf{Mult}(\Gamma)
\text{ for some free subgroup }\Gamma\leq\Lambda&\\
\text{ of finite index such that }
\pi\leq \operatorname{Ind}_{\Gamma}^\Lambda (\pi')
\big\}&\,.
\end{aligned}
\end{equation*}
\end{definition}
The representations in the class ${\mathbf{Mult}(\Lambda)}$ are not necessarily irreducible,
but they are finitely reducible, as the following proposition shows:
\begin{proposition}\label{prop:finite-reducibility} Let $\Lambda_0$ be a subgroup of
finite index of a group $\Lambda$ and let $\pi:\Lambda_0\to\mathcal{U}(\mathcal{H})$ be an irreducible representation.
Then $\big(\operatorname{Ind}_{\Lambda_0}^\Lambda(\pi),\operatorname{Ind}_{\Lambda_0}^\Lambda(\mathcal{H})\big)$
is a finite sum of irreducible representations.
\end{proposition}
\begin{proof} Let us denote $\rho:=\operatorname{Ind}_{\Lambda_0}^\Lambda(\pi)$ and
$\mathcal{L}:=\operatorname{Ind}_{\Lambda_0}^\Lambda(\mathcal{H})$. Recall that
\begin{equation*}
\mathcal{L}
:=\big\{f:\Lambda\to\mathcal{H}:\,\pi({\gamma_0})f(\gamma)=f(\gamma\inv{{\gamma_0}}),\text{ for all }{\gamma_0}\in{\Lambda_0},\gamma\in\Lambda\big\}
\end{equation*}
on which $\Lambda$ acts by
\begin{equation*}
\big(\rho(\gamma)f\big)(\eta):=f(\inv{\gamma}\eta)
\end{equation*}
for all $\eta,\gamma\in\Lambda$.
The fact that ${\Lambda_0}$ is of finite index in $\Lambda$, namely $\Lambda=\sqcup_{u\in D}u{\Lambda_0}$,
where $D$ is a finite set of representatives, induces a finite decomposition
\begin{equation}\label{eq:dsd}
\mathcal{L}=\bigoplus_{u\in D}\mathcal{L}_u\,,
\end{equation}
where
\begin{equation*}
\mathcal{L}_u:=\big\{f\in\mathcal{L}:\,\operatorname{supp}(f)\subset u{\Lambda_0}\big\}\,.
\end{equation*}
It is immediate to verify that for all $\eta\in\Lambda$ and $u\in D$,
one has that $\rho(\eta)\mathcal{L}_u\subseteq\mathcal{L}_{\eta u}$ and hence
\begin{equation*}
\rho(u{\gamma_0}\inv{u})\mathcal{L}_u\subseteq\mathcal{L}_u
\end{equation*}
for all ${\gamma_0}\in{\Lambda_0}$.
Moreover for all $u\in D$, the evaluation operator
\begin{equation*}
\begin{aligned}
E_u:\mathcal{L}_u&\to\,\,\mathcal{H}\\
f\,\,&\mapsto f(u)
\end{aligned}
\end{equation*}
is a unitary isomorphism with the property that
\begin{equation*}
\pi({\gamma_0})\,E_u=E_u\,\rho(u{\gamma_0}\inv{u})\,,
\end{equation*}
for all ${\gamma_0}\in{\Lambda_0}$ and $u\in D$. In other words, $E_u$ is an intertwining operator
between $(\pi,\mathcal{H})$ and
$(\rho|_{u{\Lambda_0}\inv{u}},\mathcal{L}_u)$. Since $(\pi,\mathcal{H})$
is irreducible, $(\rho|_{u{\Lambda_0}\inv{u}},\mathcal{L}_u)$ is irreducible as well.
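The intertwining relation displayed above is a one-line computation from the equivariance condition defining $\mathcal{L}$:

```latex
For $f\in\mathcal{L}_u$ and ${\gamma_0}\in{\Lambda_0}$,
\begin{equation*}
E_u\,\rho(u{\gamma_0}\inv{u})f
=\big(\rho(u{\gamma_0}\inv{u})f\big)(u)
=f\big(u\inv{{\gamma_0}}\inv{u}\,u\big)
=f\big(u\inv{{\gamma_0}}\big)
=\pi({\gamma_0})f(u)
=\pi({\gamma_0})\,E_u f\,.
\end{equation*}
```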
Let now $T:\mathcal{L}\to\mathcal{L}$ be an intertwining operator for $\rho$. If $p_u:\mathcal{L}\to\mathcal{L}_u$
is the orthogonal projection, then, for all $u, v\in D$, $p_v\,T\, p_u$ intertwines
$(\rho|_{(v{\Lambda_0}\inv{v})\cap(u{\Lambda_0}\inv{u})},\mathcal{L}_v)$ and
$(\rho|_{(v{\Lambda_0}\inv{v})\cap(u{\Lambda_0}\inv{u})},\mathcal{L}_u)$.
Since $(v{\Lambda_0}\inv{v})\cap(u{\Lambda_0}\inv{u})$ is of finite index both in
$v{\Lambda_0}\inv{v}$ and in $u{\Lambda_0}\inv{u}$, each of the above representations is finitely reducible,
\cite{Poguntke}.
Hence the space of intertwining operators between $(\rho|_{(v{\Lambda_0}\inv{v})\cap(u{\Lambda_0}\inv{u})},\mathcal{L}_v)$
and
$(\rho|_{(v{\Lambda_0}\inv{v})\cap(u{\Lambda_0}\inv{u})},\mathcal{L}_u)$ is finite dimensional,
which forces the space of intertwining operators of $(\rho,\mathcal{L})$ to be finite dimensional as well.
\end{proof}
\begin{definition}\label{defi:irr} We say that a representation $\pi$ of $\Lambda$ belongs
to the class ${\mathbf{Mult}_\mathrm{irr}(\Lambda)}$ if there exists a finite index
free subgroup $\Gamma\leq\Lambda$ and a representation
$\pi'$ in the class $\mathbf {Mult}(\Gamma)$ such that
$\pi$ is an irreducible component of $\operatorname{Ind}_{\Gamma}^\Lambda (\pi')$,
\begin{equation*}
\begin{aligned}
{\mathbf{Mult}_\mathrm{irr}(\Lambda)}:=\big\{\pi\in\Lambda\st \exists\, \pi'\in\mathbf{Mult}(\Gamma)
\text{ for some free subgroup }\Gamma\leq\Lambda&\\
\text{ of finite index such that }
\pi\leq \operatorname{Ind}_{\Gamma}^\Lambda (\pi')
\text{ and }\pi\text{ is irreducible}
\big\}&\,.
\end{aligned}
\end{equation*}
\end{definition}
The fact that this class is not empty follows from Proposition~\ref{prop:finite-reducibility} and the fact
that, by Theorem~\ref{thm:decomposition}, any representation in the class $\mathbf{Mult}(\Gamma)$
is a finite sum of irreducible representations in the same class.
\begin{corollary}\label{cor:invariance} For a finitely generated virtually free group $\Lambda$
the classes ${\mathbf{Mult}(\Lambda)}$ and
${\mathbf{Mult}_\mathrm{irr}(\Lambda)}$ are $\Aut(\Lambda)$-invariant.
\end{corollary}
\begin{proof} Let $\alpha\in\Aut(\Lambda)$, let $\Gamma<\Lambda$ be a free subgroup of finite
index and let $\pi\in\mathbf{Mult}(\Gamma)$.
For $\gamma\in\alpha(\Gamma)$ set $\pi^\alpha(\gamma):=\pi(\inv\alpha\gamma)$.
An easy verification shows that
\begin{equation*}
\operatorname{Ind}_{\alpha(\Gamma)}^\Lambda(\pi^\alpha)
\simeq\operatorname{Ind}_{\Gamma}^\Lambda\big(\pi\big)\circ\alpha\,.
\end{equation*}
The fact that $\pi^\alpha\in \mathbf{Mult}(\alpha(\Gamma))$
(\cite{Iozzi_Kuhn_Steger_stab}) and Proposition~\ref{prop:independence}
show the assertion.
\end{proof}
We may then conclude:
\begin{corollary}\label{cor:bdry-rep-Gamma} The representations of a finitely generated
virtually free group $\Lambda$ in the class ${\mathbf{Mult}(\Lambda)}$
(and hence ${\mathbf{Mult}_\mathrm{irr}(\Lambda)}$)
are weakly contained in the regular representation.
\end{corollary}
\begin{proof} Since the representations of the class $\mathbf{Mult}(\Gamma)$ of a free group $\Gamma$ are weakly
contained in the regular representation \cite{K-S1},
the continuity of the induction map ensures that every representation
in the class ${\mathbf{Mult}(\Lambda)}$
is weakly contained in the regular representation of $\Lambda$.
\end{proof}
\section{Tempered Representations of Gromov Hyperbolic Groups }\label{sec:result-hyperbolic}
In this section we prove further properties of the representations in
the class ${\mathbf{Mult}_\mathrm{irr}(\Lambda)}$, namely that they
can be extended to boundary representations (Theorem~\ref{general}).
This will follow from general arguments in operator algebras which
hold for general Gromov hyperbolic groups and do not depend on the
particular construction of the class
${\mathbf{Mult}_\mathrm{irr}(\Lambda)}$, but rather only on the fact
that the representations in the class
${\mathbf{Mult}_\mathrm{irr}(\Lambda)}$ are tempered. In this section
$G $ is a Gromov hyperbolic group.
\medskip
We saw already that boundary representations are associated with the action of $G $ on its
boundary $\partial G $ and we mentioned that they are in fact
representations of the crossed product $G\ltimes \mathcal{C}({\partial G})$.
We recall here the definitions that will be needed for the proof of the next theorem
(and at the same time clarify the above assertions).
\smallskip
Let $\mathcal{A}$ be a $C^\ast$-algebra and let us denote by $\mathcal{A}[G]$ the space of finitely
supported functions $G\to\mathcal{A}$,
\begin{equation*}
\mathcal{A}[G]:=\bigg\{\sum_i \zeta_i\delta_{\gamma_i}:\, \zeta_i\in \mathcal{A}, \gamma_i\in G\bigg\}\,,
\end{equation*}
where $\delta_\gamma$ denotes the Kronecker delta at $\gamma\in G $.
If $ G $ acts on $\mathcal{A}$ by isometric automorphisms ${\lambda}: G \to\Aut(\mathcal{A})$,
we endow $\mathcal{A}[ G ]$ with a $C^\ast$-algebra structure as follows.
Define the sum of two elements of $\mathcal{A}[ G ]$ in the obvious way (as
$\mathcal{A}$-valued functions on $ G $) and let
\begin{equation}\label{prodotto}
(\zeta_1\delta_{\gamma_1})\cdot(\zeta_2\delta_{\gamma_2}):=
\big(\zeta_1{\lambda}({\gamma_1})\zeta_2\big)\delta_{\gamma_1\gamma_2}\period
\end{equation}
Use the distributive law to extend
\eqref{prodotto} to a product on $\mathcal{A}[ G ]$.
Finally set
\begin{equation*}
(\zeta\delta_\gamma)^\ast:= \lambda(\inv\gamma)
\zeta^*\delta_{\inv\gamma}\period
\end{equation*}
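One checks directly on generators that this assignment is an involution; for instance,

```latex
\begin{equation*}
\big((\zeta\delta_\gamma)^\ast\big)^\ast
=\big(\lambda(\inv\gamma)\zeta^\ast\,\delta_{\inv\gamma}\big)^\ast
=\lambda(\gamma)\big(\lambda(\inv\gamma)\zeta^\ast\big)^\ast\delta_{\gamma}
=\lambda(\gamma)\lambda(\inv\gamma)\,\zeta\,\delta_\gamma
=\zeta\delta_\gamma\,,
\end{equation*}
where we used that each $\lambda(\gamma)$ is a $\ast$-automorphism of $\mathcal{A}$.
```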
In order to define a norm on $\mathcal{A}[ G ]$, take any covariant
representation $(\pi,\alpha,H)$ of $( G ,\mathcal{A})$ and
for $f=\sum_i \zeta_i\delta_{\gamma_i}\in\mathcal{A}[ G ]$
define the operator
\begin{equation*}
(\pi\ltimes\alpha)(f):=\sum_i
\alpha(\zeta_i)\pi(\gamma_i)\,.
\end{equation*}
Define now the {\em universal norm}
\begin{equation}\label{norma}
\|f\|:=\sup\|(\pi\ltimes\alpha)(f)\|_H\,,
\end{equation}
where the supremum is taken over all covariant representations $(\pi,\alpha,H)$ of $( G ,\mathcal{A})$.
The completion of $\mathcal{A}[ G ]$ with respect to the above norm is
the {\em (full) crossed product $C^\ast$-algebra $ G \ltimes\mathcal{A}$}.
Given a $C^\ast$-representation $\alpha$ of $\mathcal{A}$ on $H$,
one can always get a covariant representation $(\tilde\lambda,\tilde\alpha)$ of $( G ,\mathcal{A})$
on $\ell^2( G )\otimes H$ by setting
\begin{align*}
\big(\tilde\alpha(\zeta)\xi\big)(\gamma)&:=\alpha(\lambda(\inv\gamma)\zeta)\xi(\gamma)\\
\big(\tilde\lambda(\gamma')\xi\big)(\gamma)&:=\xi(\inv{\gamma'}\gamma)\,,
\end{align*}
for all $\zeta\in\mathcal{A}$, $\gamma,\gamma'\in G $ and $\xi\in\ell^2( G )\otimes H$.
We remark, for further purposes, that $\tilde\lambda$ consists of $d$ copies
of the regular representation $\pi_{\text{reg}}$ of $ G $,
where $d$ is the Hilbert dimension of $H$.
The completion of $\mathcal{A}[ G ]$ with respect to the {\em reduced norm}
\begin{equation*}
\|f\|_{\text{red}}:=\sup_\alpha\|(\tilde\lambda\ltimes\tilde\alpha)(f)\|_{\ell^2( G )\otimes H}\,,
\end{equation*}
in which, compared with \eqref{norma}, the supremum is taken only
over the covariant representations of the form
$(\tilde\lambda,\tilde\alpha)$,
is the {\em reduced crossed product $C^\ast$-algebra} $ G \ltimes_{\text{red}} \mathcal{A}$.
\begin{example} The examples of this construction relevant to our purposes are the following:
\begin{itemize}
\item $\mathcal{A}=\mathbf{C}$ is the $C^\ast$-algebra of complex numbers with the trivial $ G $-action;
in this case $ G \ltimes\mathbf{C}$ is called the {\em group $C^\ast$-algebra}, denoted by $C^\ast( G )$,
and $ G \ltimes_{\text{red}}\mathbf{C}$ is called the {\em reduced group $C^\ast$-algebra},
denoted by $C^\ast_{\text{red}}( G )$.
\item $\mathcal{A}=\mathcal{C}(\partial G )$ is the $C^\ast$-algebra of continuous functions on the boundary $\partial G $ of $ G $.
\end{itemize}
\end{example}
\smallskip
We conclude this discussion by exhibiting a universal construction for
representations of the crossed product $ G \ltimes\mathcal{C}(\partial G )$
\cite[Chapter~X, Theorem~3.8]{T}. Such representations are also called
{\em cocycle representations} (see for instance the papers of C.~Anantharaman \cite{An2}
and of C.~Anantharaman and J.~Renault \cite{An-R}) and
also appear in the context of measured semidirect product groupoids (see \cite{Re}).
Let $\omega\to H_\omega$ be a Borel field of Hilbert spaces
and $\mu$ a quasi-invariant measure on ${\partial G }$. Denote by
$\mathcal{H}:=\int^\oplus_{\partial G } H_\omega d\mu(\omega)$ the direct integral and
by $P(\omega,\gamma):=\frac{d\mu(\inv\gamma \omega)}{d\mu(\omega)}$ the Radon--Nikodym
cocycle of the $ G $ action.
For $\omega_1$ and $\omega_2$ in ${\partial G }$, denote by $\operatorname{Iso}
(H_{\omega_1},H_{\omega_2})$ the
space of all isomorphisms from $H_{\omega_1}$ to $H_{\omega_2}$.
A unitary Borel cocycle is an assignment $\partial G \times G \ni(\omega,\gamma)\mapsto A(\omega,\gamma)\in
\operatorname{Iso}(H_{\inv \gamma\omega}, H_\omega)$ such that
\begin{itemize}
\item $A(\omega,\gamma_1\gamma_2)=A(\omega,\gamma_1)A(\inv{\gamma_1}\omega,\gamma_2)$ [$\mbox{a.e.}\mu$], and
\item the map
$ \omega\to\big\langle f(\omega),A(\omega,\gamma)g(\inv\gamma\omega)\big\rangle$
is measurable for every pair of elements $f$, $g\in\mathcal{H}$.
\end{itemize}
\begin{definition}
A cocycle representation $\pi$ of $ G $
is a unitary representation acting on $\mathcal{H}$ according to the rule
\begin{equation*}
\big(\pi(\gamma)f\big)(\omega):=P^{\frac12}(\omega,\gamma)A(\omega,\gamma)f(\inv\gamma\omega)\;.
\end{equation*}
\end{definition}
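That this rule defines a homomorphism follows by combining the cocycle identity for $A$ with the chain rule $P(\omega,\gamma_1\gamma_2)=P(\omega,\gamma_1)P(\inv{\gamma_1}\omega,\gamma_2)$ for the Radon--Nikodym cocycle:

```latex
\begin{equation*}
\begin{aligned}
\big(\pi(\gamma_1)\pi(\gamma_2)f\big)(\omega)
&=P^{\frac12}(\omega,\gamma_1)A(\omega,\gamma_1)\big(\pi(\gamma_2)f\big)(\inv{\gamma_1}\omega)\\
&=P^{\frac12}(\omega,\gamma_1)P^{\frac12}(\inv{\gamma_1}\omega,\gamma_2)\,
A(\omega,\gamma_1)A(\inv{\gamma_1}\omega,\gamma_2)\,f(\inv{\gamma_2}\inv{\gamma_1}\omega)\\
&=P^{\frac12}(\omega,\gamma_1\gamma_2)A(\omega,\gamma_1\gamma_2)f\big(\inv{(\gamma_1\gamma_2)}\omega\big)
=\big(\pi(\gamma_1\gamma_2)f\big)(\omega)\,.
\end{aligned}
\end{equation*}
```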
\medskip
We can now prove the following:
\begin{theorem}\label{general}
Let $ G $ be a torsion free Gromov hyperbolic group which is not almost cyclic.
Then every tempered representation of $ G $ is a cocycle representation with respect to
some quasi-invariant measure.
If the representation is irreducible, the measure
can be taken to be ergodic and hence
the dimension of $H_\omega$ is almost everywhere constant.
\end{theorem}
\begin{proof} The inclusion $\mathbf{C}\hookrightarrow\mathcal{C}(\partial G )$
induces a map
$\phi:\mathbf{C}[ G ]\to\mathcal{C}({\partial G })[ G ]$
defined by
\begin{equation*}
\phi\bigg(\sum_i\zeta_i\delta_{\gamma_i}\bigg):=
\sum_i\zeta_i\mathbf{1}_{\partial G }\delta_{\gamma_i}\,,
\end{equation*}
where $\mathbf{1}_{\partial G }\in\mathcal{C}(\partial G )$ denotes the function identically one on ${\partial G }$.
It is immediate to verify that $\phi$ is continuous with respect to the reduced norm on both sides: in fact,
since $\alpha(\mathbf{1}_{\partial G })$ is the identity operator, we have
\begin{equation*}
\begin{aligned}
\bigg\|\phi\bigg(\sum_i\zeta_i\delta_{\gamma_i}\bigg)\bigg\|_{\text{red}}
&=\sup_\alpha\bigg\|(\tilde\lambda\ltimes\tilde\alpha)\bigg(\sum_i\zeta_i\mathbf{1}_{\partial G }\delta_{\gamma_i}\bigg)\bigg\|_{\ell^2( G )\otimes H}\\
&=\bigg\|\tilde\lambda\bigg(\sum_i\zeta_i\delta_{\gamma_i}\bigg)\bigg\|_{\ell^2( G )\otimes H}\\
&=\bigg\|\pi_{\text{reg}}\bigg(\sum_i\zeta_i\delta_{\gamma_i}\bigg)\bigg\|_{\ell^2( G )}\\
&=\bigg\|\sum_i\zeta_i\delta_{\gamma_i}\bigg\|_{\text{red}}\,.
\end{aligned}
\end{equation*}
Since the reduced $C^\ast$-algebra of $ G $ is simple \cite{dLH}
(see also \cite{B-C-D} concerning lattices in semisimple Lie groups)
the extension of the above map $\overline\phi$ is actually an inclusion
\begin{equation}\label{map}
\overline\phi:C^\ast_{\text{red}}( G )\hookrightarrow
G \ltimes_{\text{red}}\mathcal{C}({\partial G })\;.
\end{equation}
Moreover, since the action of $ G $ on $\partial G $ is amenable (see \cite{Adams} or the more recent
\cite{Ka}), the reduced crossed product and the full crossed product
coincide (see \cite[Theorem~5.3]{An1}) and hence we have
\begin{equation}\label{incl}
\overline{\phi}:C^\ast_{\text{red}}( G )\hookrightarrow G \ltimes\mathcal{C}({\partial G })\,.
\end{equation}
Assume now that $\pi$ is tempered or,
equivalently, that $\pi$ is a representation of $C^\ast_{\text{red}}( G )$.
By standard arguments involving the Hahn--Banach Theorem
(see \cite[Lemma 2.10.1]{Dix}) one can see that $\pi$ can be extended to a representation
$\pi^{\partial G }$ of $ G \ltimes\mathcal{C}({\partial G })$.
By \cite[Chapter~X, Theorem~3.8]{T}
the representations of the full crossed product are exactly
the cocycle representations for some quasi-invariant measure $\mu$ on $\partial G $
and some field of Hilbert spaces $\omega\to H_\omega$.
The same argument in \cite[Lemma 2.10.1]{Dix} shows that if $\pi$ is irreducible
one can require the extension $\pi^{\partial G }$ to be also irreducible.
Since $\pi^{\partial G }$ is irreducible, the corresponding measure $\mu$
is ergodic and, since the map $\omega\mapsto\dim(H_\omega)$ is
measurable and $ G $-invariant,
the dimension of the Hilbert spaces $H_\omega$ is constant $[a.e.\mu]$.
\end{proof}
\begin{remark}
The existence of the map \eqref{map}, and hence of the inclusion \eqref{incl},
is independent of the representation $\pi$ and depends only on the compactness of $\partial G $ and
the amenability of the $ G $-action. Had we brought into the picture from the beginning
a representation $\pi$ that is weakly contained in the regular representation of
$ G $, we could have obtained the map \eqref{incl} directly. In fact, since
cocycle representations are exactly the representations
of the full crossed-product $C^\ast$-algebra and since the action of $ G $ on
$\partial G $ is amenable, we have that the restriction of a representation of $ G \ltimes \mathcal{C}(\partial G )$
to $ G $ is weakly contained in the regular representation,
that is $\|\pi(f\ltimes\mathbf{1}_{\partial G })\|\leq\|\pi_{\text{reg}}(f)\|$,
\cite{Ku}, and hence the continuity of the map
into $ G \ltimes\mathcal{C}(\partial G )$ is proved at once.
\end{remark}
\begin{corollary}\label{cor:bdry}
Let $\Lambda$ be a finitely generated virtually free group and let $\pi$ be a representation
in the class ${\mathbf{Mult}_\mathrm{irr}(\Lambda)}$.
Then $\pi$ is a cocycle representation with respect to
a quasi-invariant ergodic measure $\mu$ on $\partial\Lambda$.
\end{corollary}
\begin{proof} Theorem~\ref{general} and Corollary~\ref{cor:bdry-rep-Gamma}.
\end{proof}
\begin{remark}
Theorem~\ref{general} states that every tempered irreducible
representation $(\pi,H)$ of a Gromov hyperbolic group $ G $ admits
at least one extension to an irreducible representation
$(\pi^{\partial G },H_{\partial G })$ of $ G \ltimes_{\text{red}}\mathcal{C}({\partial G })$.
We call such an extension a {\it boundary realization for $\pi$}.
We say moreover that a boundary realization is {\it perfect} if one can take
$H=H_{\partial G }$.
Even if there is no {\it a priori}
reason for $(\pi^{\partial G },H_{\partial G })$ to be unique,
we have noticed that this is the case when $ G =\Gamma$ is a free group
and $\pi$ is a representation of the class $\mathbf{Mult}(\Gamma)$ whose matrix coefficients
are sufficiently ``big'' in the sense of \cite{K-S2}.
In fact for all irreducible tempered representations $(\pi,H)$ of the free group known so far, there are only
three possibilities:
\begin{itemize}
\item $\pi$ admits only one boundary realization which is perfect. In this case
the irreducible representation $(\pi^{\partial\Gamma},H)$ of $\Gamma\ltimes\mathcal{C}(\partial\Gamma)$
remains irreducible also when restricted to $\Gamma$;
in this case we say that $\pi$ satisfies {\it monotony}.
\item $\pi$ admits only one boundary realization which is not perfect,
so that the inclusion $H\hookrightarrow H_{\partial\Gamma}$ is proper.
In this case the representation $(\pi^{\partial\Gamma},H_{\partial\Gamma})$ is irreducible as representation
of $\Gamma\ltimes_{\text{red}}\mathcal{C}({\partial\Gamma})$, but, when restricted to
$\Gamma$, it splits into the sum of two irreducible inequivalent representations;
we say that $\pi$ satisfies {\it oddity}.
\item $\pi$ admits exactly two perfect boundary realizations, no other
boundary realization is perfect and any other (not perfect) boundary realization can be obtained
as a linear combination of these two perfect ones;
in this last case we say that $\pi$ satisfies {\it duplicity}.
\end{itemize}
We have conjectured that those are the only three possibilities for
any tempered representation of a free group,
but we can prove it so far only for representations of the class $\mathbf{Mult}(\Gamma)$, \cite{K-S2}.
We think that the same problem is well posed also for a Gromov hyperbolic group and
perhaps passing to a more general class of groups will give a better understanding of this phenomenon.
\end{remark}
As a consequence of Theorem~\ref{general} we can state an analogue of
the Herz majorization principle for a class of hyperbolic groups.
\begin{theorem}\label{prop:herz}
Let $(\pi,H)$ be a tempered
representation of a torsion free Gromov hyperbolic group $ G $
which is not almost cyclic
and let $v$ be any vector in $H$.
Then there exists a positive measure $\mu$ on $\partial G $ such that
\begin{equation}\label{eq:herz}
|\langle\pi(x)v,v\rangle|\leq \norm{v}^2|\langle\rho(x)\mathbf{1}_{\partial G },\mathbf{1}_{\partial G }\rangle|
\end{equation}
where $\rho$ is the quasi-regular representation on $L^2(\partial G ,d\mu)$ and
$\mathbf{1}_{\partial G }$ is the constant function on $\partial G $.
\end{theorem}
\begin{proof}
Let $\pi^{\partial G }$ be a boundary representation extending $\pi$.
We may assume that $\norm{v}=1$, and we
let $\omega\mapsto f(\omega)$ be the element of $\int_{\partial G }^\oplus H_\omega \,d\mu(\omega)$ corresponding to it.
(We remark that in this case $\mu$ need not be ergodic.)
Let
\begin{equation*}
E(f):=\{\omega\in{\partial G }\st f(\omega)\neq0\}\,,
\end{equation*}
\begin{equation*}
F(\omega):=\begin{cases}
\norm{f(\omega)}\qquad&\text{if $\omega\in E(f)$}\\
1\qquad& \text{if }\omega\in{\partial G }\setminus E(f)\,,
\end{cases}
\end{equation*}
and set $dm(\omega)={[F(\omega)]^2d\mu(\omega)}$. Since the map
$g(\omega) \mapsto \frac{g(\omega)}{F(\omega)}$
is a unitary equivalence between
$\int_{\partial G }^\oplus H_\omega d\mu(\omega)$ and $\int_{\partial G }^\oplus H_\omega dm(\omega)$, we may
assume that $\norm{f(\omega)}=1$ on $E(f)$.
Denote by $P(x,\omega)$ the Radon--Nikodym derivative of $\mu$ with respect to the
$ G $ action, so that
\begin{equation*}
(\pi^{\partial G }(x)f)(\omega)=P^{\frac12}(x,\omega)A(x,\omega)f(\inv x\omega)
\end{equation*}
for some unitary Borel cocycle $A(x,\omega)$.
Since $A(x,\omega)$ is a unitary operator and $\norm{f(\omega)}_\omega\leq1$ for almost every $\omega$, one has
\begin{equation*}
\begin{aligned}
|\langle\pi(x)v,v\rangle|&=\bigg|\int_{\partial G }P^{\frac12}(x,\omega)\big\langle A(x,\omega)f(\inv x\omega),f(\omega)\big\rangle_\omega \,d\mu(\omega)\bigg|\\
&\leq\int_{\partial G }\norm{f(\inv x\omega)}_{\inv x\omega}\,\norm{f(\omega)}_\omega\,P^{\frac12}(x,\omega)\,d\mu(\omega)\\
&\leq\int_{\partial G } P^{\frac12}(x,\omega)\,d\mu(\omega)\\
&=\langle\rho(x)\mathbf{1}_{\partial G },\mathbf{1}_{\partial G }\rangle\,,
\end{aligned}
\end{equation*}
where $\rho$ is the quasi-regular representation on $L^2({\partial G },d\mu)$.
\end{proof}
\begin{remark}\label{rem:4.8}
If $H$ is a semisimple Lie group with finite center and maximal compact $K$,
there exists a unique $K$-invariant probability measure $\mu$ on the
maximal Furstenberg boundary $H/P$, for $P$ a minimal parabolic.
In this case the quasi-regular representation
$\rho$ on $L^2(H/P,d\mu)$ plays a very important role, namely
the Harish-Chandra function
$\Xi(x)=\langle\rho(x)\mathbf{1}_{H/P },\mathbf{1}_{H/P}\rangle$ dominates
all spherical functions associated with tempered unitary representations.
If $H$ has property (T) one can push this further by exhibiting a positive definite
function $\Psi$ which dominates all positive definite non-constant spherical functions on $H$.
R.~Howe and E.~C.~Tan constructed in their book \cite[Chapter V]{HT}
such a function $\Psi$ from $\Xi$ for $SL(n,\mathbb R)$, for $n\geq3$,
while the more recent paper of H.~Oh \cite{Oh} treats the general case.
\medskip
We remark that the measure $\mu$ of Proposition~\ref{prop:herz}
must depend on $\pi$, making our case much more similar to $SL(2,\mathbb R)$,
for which it is impossible to bound an arbitrary matrix coefficient in terms
of $\Xi$.
To see this, take $\Gamma$ to be a non-abelian free group and take
a copy of $\mathbb Z$ inside $\Gamma$. Let $w$ be the generator for $\mathbb Z$,
$\pi_{\mathbb Z}$ be the representation induced from the trivial character of $\mathbb Z$
and $\mathbf{1}_{[\mathbb Z]}$ the characteristic function of the coset $[\mathbb Z]$.
If there were to exist a fixed
measure $\mu$ such that \eqref{eq:herz} holds for every tempered $\pi$,
one would have
\begin{equation*}
\langle\rho(w)\mathbf{1}_{\partial\Gamma},\mathbf{1} _{\partial \Gamma}\rangle\geq
\langle\pi_{\mathbb Z}(w)\mathbf{1}_{[\mathbb Z]},\mathbf{1} _{[\mathbb Z]}\rangle
=1\period
\end{equation*}
The fact that every word $w$ generates a copy of $\mathbb Z$ inside $\Gamma$
would then imply that
$\langle\rho(\,\cdot\,)\mathbf{1}_{\partial\Gamma},\mathbf{1} _{\partial \Gamma}\rangle\equiv1$
identically on $\Gamma$, which is impossible since the measure $\mu$ on $\partial\Gamma$
cannot be invariant.
\end{remark}
\section{Introduction}
Current cosmological models ascribe only about 4\% of the density in the
Universe to baryons \citep{2007ApJS..170..377S}. The majority of
these baryons reside outside of galaxies; stars and cold galactic gas
may account for about one third \citep{1998ApJ...503..518F}.
Intergalactic baryons have historically been traced in absorption,
such as the Ly$\alpha$ forest arising from diffuse photoionized
gas that may account for up to 30\% of baryons. The remaining
baryons are predicted to exist in a warm-hot intergalactic medium
(WHIM) \citep[e.g.][]{1999ApJ...511..521D,2001ApJ...552..473D,
1999ApJ...514....1C}, which is shock-heated during the collapse of
density perturbations that give rise to the cosmic web. Still, absorption
probes yield only one-dimensional redshift-space information, or in rare
cases several probes through common structures. Mapping intergalactic
baryons in emission can in principle provide morphological and kinematic
information on accreting (and perhaps outflowing) gas within the
cosmic web.
Unfortunately, emission from intergalactic baryons is difficult to
observe, because current telescope sensitivities result in a detection
limit of column densities $N_{\rm{HI}} \ga 10^{19}$ cm$^{-2}$, which
are the realm of Damped Ly$\alpha$ (DLA) systems and sub-DLAs. Below
column densities of $N_{\rm{HI}} \sim 10^{19.5}$ cm$^{-2}$, the
neutral fraction of hydrogen decreases rapidly due to the transition
from optically-thick to optically-thin gas ionized by the metagalactic
ultraviolet flux. At lower densities the gas is no longer affected by
self shielding and the atoms are mostly ionized. This sharp decline
in neutral fraction from almost unity to less than a percent happens
within a few kpc \citep{1994ApJ...423..196D}. Below $N_{\rm{HI}} \sim
10^{17.5}$ cm$^{-2}$ the gas is optically thin and the decline in
neutral fraction with total column is much more gradual. A consequence
of this rapid decline in neutral fraction is a plateau in the
\hbox{\rm H\,{\sc i}} column density distribution function between
$N_{\rm{HI}} \sim 10^{17.5}$ and $N_{\rm{HI}} \sim 10^{19.5}$
cm$^{-2}$, where the relative surface area at these columns shows only
modest growth. This behaviour is confirmed in QSO absorption studies
tabulated by \cite{2002ApJ...567..712C} and in \hbox{\rm H\,{\sc i}}
emission by \cite{2004A&A...417..421B}. Below $N_{\rm{HI}} \sim
10^{17.5}$ cm$^{-2}$ the relative surface area increases rapidly,
reaching about a factor of 30 larger at $N_{\rm{HI}} \sim 10^{17}$
compared with $N_{\rm{HI}} \sim 10^{19}$ cm$^{-2}$.
This plateau in the distribution function is a critical issue for
observers of neutral hydrogen in emission. Although telescope
sensitivities have increased substantially over the past decades,
the detected surface area of galaxies observed in the 21cm line has
only increased modestly \citep[e.g.][]{1994ApJ...423..196D}. Clearly
there is a flattening in the distribution function near $N_{\rm{HI}}
\sim 10^{19.5}$ cm$^{-2}$ which has limited the ability of even deep
observations to detect hydrogen emission from a larger area.
Establishing that a steeper distribution function is again expected below
about $N_{\rm{HI}} \sim 10^{17.5}$ cm$^{-2}$ provides a clear technical
target for what the next generation of radio telescopes needs to
achieve to effectively probe diffuse gas.
Exploration of the $N_{\rm{HI}} < 10^{17.5}$ cm$^{-2}$ regime is
essential for gaining a deeper understanding of the repository of
baryons that drive galaxy formation and evolution. This gas, residing
in filamentary structures, is the reservoir that fuels future star
formation, and could provide a direct signature of smooth cold-mode
accretion predicted to dominate gas acquisition in star-forming
galaxies today \citep[]{2005MNRAS.363....2K, 2008arXiv0808.0553D,
2008arXiv0809.1430K}. Furthermore, the trace neutral fraction in this
phase may provide a long-lived fossil record of tidal interactions and
feedback processes such as galactic winds and AGN-driven cavities.
Several new large facilities to study 21cm emission are under
development today. In view of the observational difficulties in probing
the low \hbox{\rm H\,{\sc i}} column regime, it is particularly
important to have reliable numerical simulations to aid in planning
new observational campaigns, and eventually to help interpret such
observations within a structure formation context. While simulations
of galaxy formation are challenging, historically they have had much
success predicting the more diffuse baryons residing in the cosmic
web~\citep[e.g.][]{1999ApJ...511..521D}. If such simulations display
statistical agreement with key existing \hbox{\rm H\,{\sc i}} emission
data, then they can be used to make plausible predictions for the types of
structures that may be detected, along with suggesting optimum observing
strategies.
In this paper, we employ a state-of-the-art cosmological hydrodynamic
simulation to study \hbox{\rm H\,{\sc i}} emission from filamentary
large-scale structure and the galaxies within them. The simulation
used here includes a well-constrained prescription for galactic
outflows that has been shown to reproduce the observed metal and
\hbox{\rm H\,{\sc i}} absorption line properties from $z\sim
6\rightarrow 0$ \citep{2006MNRAS.373.1265O,
2008MNRAS.387..577O,2009MNRAS.395.1875O}. We develop a method to
produce \hbox{\rm H\,{\sc i}} maps from these simulations, and compare
statistical properties of the reconstructed \hbox{\rm H\,{\sc i}} data
with the statistics of real \hbox{\rm H\,{\sc i}} observations, to
assess the reliability of the simulation. For this purpose we will
primarily use the \hbox{\rm H\,{\sc i}} Parkes All Sky Survey (HIPASS)
\citep{2001MNRAS.322..486B}, since this is the largest available
\hbox{\rm H\,{\sc i}} survey. This work is intended to provide an
initial step towards a more thorough exploration of model constraints
that will be enabled by comparisons with present and future \hbox{\rm
H\,{\sc i}} data.
Note that the current spatial resolution of simulations having
cosmologically-representative volumes cannot reproduce a galaxy as
would be seen with \hbox{\rm H\,{\sc i}} observations having sub-kpc
resolution. Therefore we do not consider the internal kinematics or
detailed shapes of the objects associated with simulated galaxies. We can
only assess the statistical properties of the diffuse \hbox{\rm H\,{\sc
i}} phase and predict how the gas is distributed on multi-kpc scales.
We particularly focus on lower column density material that may be probed
with future \hbox{\rm H\,{\sc i}} surveys, which primarily reside in
cosmic filaments within which galaxies are embedded.
Our paper is organized as follows. In section two we briefly describe
the particular simulation that has been analyzed. In section three we
describe our method to extract the neutral hydrogen from the
simulations. The neutral fraction is determined from both a general
ionization balance and a local self-shielding correction. We
also model the transition from atomic to molecular hydrogen. We
present our results, showing the statistical properties of the
recovered \hbox{\rm H\,{\sc i}} in section four, where they are
compared with similar statistics obtained from observations. The
distribution of neutral hydrogen is compared with the distribution of
dark matter and stars in this section as well. In the fifth section we
discuss the results and outline the implications. Finally, section
six reiterates our main conclusions.
\section{Simulation Code}
A modified version of the N-body+hydrodynamic code Gadget-2 is
employed, which uses a tree-particle-mesh algorithm to compute
gravitational forces on a set of particles, and an entropy-conserving
formulation of Smoothed Particle Hydrodynamics (SPH:
\cite{2002MNRAS.333..649S}) to simulate pressure forces and shocks
in the baryonic gas particles. This Lagrangian code is fully
adaptive in space and time, allowing simulations with a large dynamic
range necessary to study both high-density regions harboring galaxies
and the lower-density IGM. It includes a prescription for star
formation following \cite{2003MNRAS.339..312S} and galactic
outflows as described below. The code has been described in detail
in \cite{2006MNRAS.373.1265O} and \cite{2008MNRAS.387..577O}; we
will only summarise the properties here.
The novel feature of our simulation is that it includes a
well-constrained model for galactic outflows. The implementation
follows \cite{2003MNRAS.339..289S}, but employs scalings of outflow
speed and mass loading factor with galaxy mass as expected for
momentum-driven winds \citep{2005ApJ...618..569M}. Our simulations
using these scalings have been shown to successfully reproduce a wide
range of IGM and galaxy data, including IGM enrichment as traced by
$z\sim2-6$ \ion{C}{iv} absorbers~\citep{2006MNRAS.373.1265O}, the
galaxy mass-metallicity relation~\citep{2008MNRAS.385.2181F}, the
early galaxy luminosity function and its
evolution~\citep{2006MNRAS.370..273D}, \ion{O}{vi} absorption at
low-$z$~\citep{2009MNRAS.395.1875O}, and enrichment and entropy
levels in galaxy groups~\citep{2008MNRAS.391..110D}. Such outflows
are expected to impact the distribution of gas in the large-scale
structure around galaxies out to typically $\sim 100$kpc
\citep{2008MNRAS.387..577O, 2009MNRAS.395.1875O}, so are
important for studying the regions expected to yield detectable
\hbox{\rm H\,{\sc i}} emission.
The simulation used here is run with cosmological parameters
consistent with the 3-year WMAP results \citep{2007ApJS..170..377S}.
The parameters are $\Omega_0 = 0.25$, $\Omega_{\Lambda} = 0.75$, $\Omega_b
= 0.044$, $H_0 = 71$ km s$^{-1}$ Mpc$^{-1}$, $\sigma_8 = 0.83$, and $n
= 0.95$. The periodic cubic volume has a box length of 32 $h^{-1}$ Mpc
(comoving), and the gravitational softening length is set to 2.5 $h^{-1}$
kpc (comoving). Dark matter and gas are represented using $256^3$
particles each, yielding a mass per dark matter and gas particle of
$1.57\times 10^8 M_\odot$ and $3.35\times 10^7 M_\odot$, respectively.
The simulation was started in the linear regime at $z=129$, with initial
conditions established using a random realization of the power spectrum
computed following \cite{1999ApJ...511....5E}, and evolved to $z=0$.
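The particle masses quoted above follow directly from the box size and cosmological parameters; a minimal sketch, assuming the standard critical density $\rho_{\rm crit} \simeq 2.775\times10^{11}\,h^{2}$~M$_\odot$~Mpc$^{-3}$ (not stated in the text):

```python
# Recover the per-particle masses from the run parameters.
# Assumes rho_crit = 2.775e11 h^2 Msun Mpc^-3 (an assumption of ours).
h = 0.71
omega_m, omega_b = 0.25, 0.044
rho_crit = 2.775e11 * h**2            # Msun Mpc^-3
volume = (32.0 / h)**3                # comoving volume in Mpc^3
n_part = 256**3                       # particles per species

m_dm = (omega_m - omega_b) * rho_crit * volume / n_part
m_gas = omega_b * rho_crit * volume / n_part
print(f"{m_dm:.3g} {m_gas:.3g}")      # close to the quoted 1.57e8 and 3.35e7 Msun
```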
\section{Method for Making \hbox{\rm H\,{\sc i}} Maps}
We now describe the algorithm used to extract the neutral hydrogen
component from this simulation. The method developed is general, and
can be applied to any simulation that has a similar set of output
parameters. In our analysis, we set the total hydrogen number density
to $n_{\rm{H}} = 0.74\rho_g/m_{\rm{H}}$ where $m_{\rm{H}}$ is the mass
of the hydrogen atom. The factor 0.74 assumes a helium abundance of $Y
= 0.26$ by mass, and that all the helium is in the form of \hbox{\rm
He\,{\sc i}} and \hbox{\rm He\,{\sc ii}} with a similar neutral
fraction as hydrogen. Apart from this factor the presence of helium is
not taken into account in our calculations. We will describe how we
determine the neutral fraction of the gas particles, including
applying a correction for self shielding in high density regions and
taking into account molecular hydrogen formation where relevant. This
allows reconstruction of the neutral hydrogen distribution, by mapping
the particles onto a three dimensional grid.
\subsection{Neutral Fraction of Hydrogen}
We begin by calculating the neutral fraction from the density and
temperature of gas in the simulations, together with the \hbox{\rm
H\,{\sc i}} photo-ionization rate provided by the cosmic UV background.
\cite{1994ApJ...423..196D} found that the radial structure of the column
density of \hbox{\rm H\,{\sc i}} is more sensitive to the extragalactic
radiation field than to the distribution of mass in the host galaxy.
When calculating the neutral fraction, we assume that all photoionization
is due to radiation external to the disk and that internal stellar sources
are not significant. In this case the nebular model as described in
\cite{1989agna.book.....O} is a very good approximation, since the typical
number density in the outer parts of galaxies, approximately $10^{-2}$
cm$^{-3}$, is so low that collisional ionization is negligible. When
going further out, the densities become even lower. Inside galaxies
the volume densities are so high that the neutral fraction is of order
unity owing to self-shielding.
In the IGM, hydrogen becomes ionized when the extreme ultraviolet (UV)
radiation ionizes and heats the surrounding gas. On the other hand,
the recombination of electrons leads to neutralization. The degree of
ionization is determined by the balance between photo-ionization and
radiative recombination. Only photons with energies $h\nu > 13.6$
eV can ionize hydrogen. The ionization equilibrium equation is given by e.g. \cite{1989agna.book.....O} as:
\begin{equation}
n_{\rm{HI}} \int^{\infty}_{\nu_0} \frac{4\pi J_{\nu}}{h\nu}\alpha(\nu)
d\nu = n_e n_p \beta(T),
\end{equation}
where $J_{\nu}$ is the mean intensity of ionizing photons,
$\alpha(\nu)$ is the ionization cross section and $\beta(T)$ is the
recombination rate coefficient to all levels of atomic hydrogen at
temperature $T$. $\nu_0$ is the ionization threshold frequency; only
radiation with frequency $\nu > \nu_0$ is effective in photoionizing
hydrogen from the ground level. Summarising, the integral
represents the number of photoionizations per unit time and
the right hand side of the equation gives the number of
recombinations per unit time. The neutral hydrogen, $n_{\rm{HI}}$,
electron, $n_e$, and proton, $n_p$, densities are related
through the charge conservation and hydrogen abundance equations,
\begin{equation}
n_e = n_p = (1-\xi)n
\end{equation}
where $\xi$ is the neutral fraction, so
\begin{equation}
n_{\rm{HI}} = \xi n
\end{equation}
with $n$ the total density.
We can write the ionization balance for neutral hydrogen as
\begin{equation}
\xi n \Gamma_{\rm{HI}} = (1-\xi)^2n^2 \beta(T)
\end{equation}
where $\Gamma_{\rm{HI}}$ is the ionization rate for neutral hydrogen.\\
With this equation it is easy to determine the neutral fraction, which
is given by
\begin{equation}
\xi = \frac{2C + 1 - \sqrt{(2C+1)^2-4C^2}}{2C}
\end{equation}
using
\begin{equation}
C = \frac{n \beta(T)}{\Gamma_{\rm{HI}}}
\end{equation}
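For completeness, the expression for $\xi$ above is the root in $[0,1]$ of the quadratic obtained by substituting $C$ into the ionization balance,
\begin{equation*}
(1-\xi)^2 C = \xi \quad\Longrightarrow\quad C\xi^2 - (2C+1)\xi + C = 0\,;
\end{equation*}
in the optically thin limit $C \ll 1$ this reduces to $\xi \simeq C = n\beta(T)/\Gamma_{\rm{HI}}$.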
The neutral fractions that we calculate depend directly on the values
adopted for the photoionization and recombination rates.
The photoionization rate at low redshift is not well constrained
observationally; by combining Ly$\alpha$ forest data and simulations,
\cite{2001ApJ...553..528D} obtain a photoionization rate of
$\Gamma_{\rm{HI}} \sim 10^{-13.3 \pm 0.7}$ s$^{-1}$ for redshift
$z \sim 0.17$. We will use the photoionization rate given by the
CUBA model of \cite{2001cghr.confE..64H}, which is $\Gamma_{\rm{HI}}
\sim 10^{-13}$ s$^{-1}$ for $z \sim 0$.
The recombination rate coefficients are dependent on temperature.
We make use of an analytic function described by
\cite{1996ApJS..103..467V}, which fits the coefficients in the
temperature range from 3 K to $10^{10}$ K:
\begin{equation}
\beta(T) = a \Big[
\sqrt{T/T_0}\big(1+\sqrt{T/T_0}\big)^{1-b}\big(1+\sqrt{T/T_1}\big)^{1+b}
\Big] ^{-1}
\end{equation}
where $a$, $b$, $T_0$ and $T_1$ are the fitting parameters. For the
\hbox{\rm H\,{\sc i}} ion the fitting parameters are: $a=7.982 \times
10^{-11}$ cm$^3$ s$^{-1}$, $b=0.7480$, $T_0=3.148$ K and $T_1 = 7.036
\times 10^{5}$ K.\\
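A minimal numerical sketch of this fit and the resulting neutral fraction (with $\Gamma_{\rm{HI}} = 10^{-13}$ s$^{-1}$ as adopted above); at $10^4$ K and 0.01 cm$^{-3}$ the neutral fraction is a few percent, as in Fig.~\ref{neutralbal}:

```python
import math

def beta(T, a=7.982e-11, b=0.7480, T0=3.148, T1=7.036e5):
    """Verner & Ferland (1996) fit to the H I recombination rate [cm^3 s^-1]."""
    s0, s1 = math.sqrt(T / T0), math.sqrt(T / T1)
    return a / (s0 * (1 + s0)**(1 - b) * (1 + s1)**(1 + b))

def neutral_fraction(n, T, gamma_HI=1e-13):
    """Equilibrium neutral fraction from the quadratic solution above."""
    C = n * beta(T) / gamma_HI
    return (2*C + 1 - math.sqrt((2*C + 1)**2 - 4*C**2)) / (2*C)

print(beta(1e4))                    # ~4.2e-13 cm^3 s^-1
print(neutral_fraction(0.01, 1e4))  # a few percent
```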
The neutral fraction is plotted for different temperatures as function
of density of \hbox{\rm H} atoms in Fig.~\ref{neutralbal}. For temperatures
below $10^4$ K, the neutral fraction is still significant (a few
percent) at reasonably low densities of 0.01 cm$^{-3}$ but at higher
temperatures most of the gas is ionized, and the neutral fraction
drops very quickly.
\begin{figure}[t!]
\includegraphics[width=0.5\textwidth]{figures/neut_bal.eps}
\caption{Neutral fraction as a function of density for different
temperatures between $10^4$ and $10^9$ K.}
\label{neutralbal}
\end{figure}
\subsection{Molecular Hydrogen}
When gas has cooled sufficiently, it coexists in the
molecular (H$_2$) and atomic (\hbox{\rm H\,{\sc i}}) phases. The
H$_2$ regions are found in dense molecular clouds where star
formation occurs. Unfortunately there is large uncertainty in the
average amount of H$_2$ in galaxies, as estimates have to rely on indirect
tracers and conversion factors, for which the dependencies are not
well-understood. As a result, there is substantial variance in
estimates of the average density ratio at z~=~0, $\eta_{\rm universe}
= \Omega_{H_2} / \Omega_{HI}$ (e.g. 0.42 and 0.26 stated
respectively by \cite{2003ApJ...582..659K} and
\cite{2009MNRAS.tmp..289O}). It is beyond the scope of this paper to
revisit these determinations, therefore we will adopt a value of
$\Omega_{H_2} / \Omega_{HI} = 0.3$ that falls within the error bars
of current estimates. Given the observed local value of the atomic
mass density, of $\rho_{HI}=6.1\times 10^7$ $h$ M$_{\odot}$
Mpc$^{-3}$ \citep{2003AJ....125.2842Z}, this implies a molecular
mass density of
$\rho_{H_2}=1.8\times 10^7$ $h$ M$_{\odot}$ Mpc$^{-3}$.
To define the regions of molecular hydrogen, we use a threshold
based on the thermal pressure $(P/k = nT)$. \cite{2002ApJ...569..157W},
\cite{2004ApJ...612L..29B} and more recently
\cite{2006ApJ...650..933B} have made the case that the amount of molecular
hydrogen that is formed in galaxies is determined by
only one parameter, the interstellar gas pressure. In hydrostatic
pressure equilibrium, the hydrostatic pressure is balanced by the sum
of all contributions to the gas pressure: magnetic pressure, cosmic
ray pressure and kinetic pressure terms (of which the thermal pressure
is relatively small) (e.g. \cite{1996ASPC..106....1W} and references
therein). However, thermal pressure is directly coupled to energy
dissipation via radiation, and therefore thermal pressure can track
the total pressure due to various equipartition mechanisms. An
evaluation of the various contributions to the total hydrostatic
pressure is given by \cite{1990ApJ...365..544B}.
Two lines of constant thermal pressure are shown in
Fig.~\ref{temp_dens}, where temperatures are plotted against density
for individual particles in the simulation. These
lines cross two regions: one with high densities and low
temperatures, and one with moderate densities but very high
temperatures. These two regions are distinguished by the solid green
line in Fig.~\ref{temp_dens}, where the radiative recombination time
is equivalent to the sound-crossing time on a kiloparsec scale. The
radiative recombination time is given in \cite{2005pcim.book.....T}
by:
\begin{equation}
\tau_{\textrm{rec}} \simeq 2 \times 10^2 \frac{T^{0.6}}{n_e} \textrm{ years}
\end{equation}
where $n_e$ is the electron density which is comparable to the total
density for low neutral fractions. The sound crossing time is given by
$\tau_{\textrm{s}} = R/C_{\textrm{s}}$, where $R$ is the relevant
scale (assumed here to be one kpc) and $C_{\textrm{s}} \simeq 1.4
\times 10^4 T^{1/2}$ cm s$^{-1}$ is the sound velocity. All particles
right of the green line have a recombination time that is shorter than
the sound crossing time. Particles with a recombination time that
is larger than the sound crossing time are unlikely to be neutral or
molecular.
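The two timescales can be evaluated directly; a minimal sketch (constant names are ours) confirming that gas at $T \sim 10^4$ K and $n_e \sim 0.01$ cm$^{-3}$ satisfies the condition on kpc scales:

```python
import math

KPC_CM = 3.086e21   # cm per kpc
YR_S = 3.156e7      # s per yr

def t_rec_yr(T, n_e):
    """Radiative recombination time, ~2e2 T^0.6 / n_e yr (Tielens 2005)."""
    return 2e2 * T**0.6 / n_e

def t_sound_yr(T, R_kpc=1.0):
    """Sound-crossing time over R_kpc with c_s ~ 1.4e4 sqrt(T) cm/s."""
    return R_kpc * KPC_CM / (1.4e4 * math.sqrt(T)) / YR_S

# At T = 1e4 K and n_e = 0.01 cm^-3 the recombination time (~5e6 yr) is
# well below the kpc sound-crossing time (~7e7 yr).
print(t_rec_yr(1e4, 0.01) < t_sound_yr(1e4))  # True
```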
For each particle the thermal pressure can be calculated and particles
with a pressure exceeding the threshold value and satisfying
$\tau_{\textrm{rec}}<\tau_{\textrm{s}}$ are considered molecular. By
exploring different pressure values as shown in Fig.~\ref{H2pres}, the
threshold can be tuned to yield the required molecular mass density,
$\rho_{H_2}=1.8\times 10^7$ $h$ M$_{\odot}$ Mpc$^{-3}$. The
threshold thermal pressure value we empirically determine is $P/k =
810$ cm$^{-3}$ K. We must stress that this value is very likely not
a real physical value, as the resolution of our simulation is not
sufficient to resolve the scales of molecular clouds. Molecular clouds
have smaller scales with higher densities, and hence
significantly enhanced pressures.
\begin{figure}[t!]
\includegraphics[width=0.5\textwidth]{figures/temp_dens.eps}
\caption{Temperatures are plotted against densities for every
200$^{\textrm{th}}$ particle in the simulation. The dashed
(blue) and dash-dotted (red) lines correspond to constant
thermal pressures of $P/k = 155$ and 810 cm$^{-3}$ K, that were
found empirically to reproduce the observed mass densities of
atomic and molecular gas at z~=~0. The solid (green) line shows
where the recombination time is equal to the sound-crossing time
at a physical scale of one kpc. Particles above/left of the
green line are unlikely to be neutral or molecular.}
\label{temp_dens}
\end{figure}
\begin{figure}[t!]
\includegraphics[width=0.5\textwidth]{figures/H2pres.eps}
\caption{The average molecular density at z~=~0 is plotted
against the threshold thermal pressure, where molecular hydrogen
is assumed to form from atomic, while also satisfying the
condition that $\tau_{\textrm{rec}}<\tau_{\textrm{s}}$. The
dashed line indicates a pressure of $P/k = 810$ cm$^{-3}$ K,
where $\rho_{H_2}=1.8\times 10^7$ $h$ M$_{\odot}$ Mpc$^{-3}$
which is shown by the dotted line.}
\label{H2pres}
\end{figure}
\begin{figure}[t!]
\includegraphics[width=0.5\textwidth]{figures/HIpres.eps}
\caption{The average \hbox{\rm H\,{\sc i}} mass density at
z~=~0 is plotted against the threshold thermal pressure, where
atomic hydrogen is assumed to recombine from ionized, while also
satisfying the condition that
$\tau_{\textrm{rec}}<\tau_{\textrm{s}}$ (and accounting for a
molecular density of $\rho_{H_2}=1.8\times 10^7$ $h$ M$_{\odot}$
Mpc$^{-3}$). At a pressure of $P/k = 155$ cm$^{-3}$ K (dashed
line), $\rho_{HI}=6.1\times 10^7$ $h$ M$_{\odot}$ Mpc$^{-3}$
(dotted line), which is consistent with the \hbox{\rm H\,{\sc
i}} density from \cite{2003AJ....125.2842Z}.}
\label{HIpres}
\end{figure}
\subsection{Correction for Self-Shielding}
Although the ionization state and kinetic temperature are
determined self-consistently within the simulation, it has been
necessary to assume that each gas particle is subjected to the same
all-pervasive radiation field. At both extremely low and high
particle densities this approximation is sufficient, since local
conditions will dominate. However, at intermediate densities, the
``self-shielding'' of particles by their neighbours may play a
critical role in permitting local recombination, when the same
particle would be substantially ionized in isolation. Present
cosmological simulations are not capable of solving the full radiative
transfer equations, although it is now becoming possible to
post-process radiative transfer on individual galaxies
\citep[e.g.][]{2008MNRAS.390.1349P}. Because we want to study
emission from the IGM as well as galaxies, we must instead adopt a
simple correction based on density and temperature to
approximate self-shielding. We adopt a similar approach to the
one that was used to model the atomic to molecular transition above,
using the thermal pressure as a proxy for the hydrostatic
pressure. Only gas at a sufficiently high thermal pressure and for
which the recombination time is shorter than the sound crossing time
on kpc scales is assumed to recombine. Particles that satisfy the
pressure and timescale condition are considered to be fully
self-shielded, and their neutral fraction is set to unity.
We will assume that the highest pressure regions which satisfy
$\tau_{\textrm{rec}}<\tau_{\textrm{s}}$ have already provided
$\rho_{H_2}=1.8\times 10^7$ $h$ M$_{\odot}$ Mpc$^{-3}$ as discussed
above. We subsequently calculate the atomic density as a
function of the thermal pressure
threshold, as shown in Fig.~\ref{HIpres}. It is empirically found
that a threshold value of $P/k = 155$ cm$^{-3}$ K results in an
\hbox{\rm H\,{\sc i}} density of $\rho_{HI}=6.1\times 10^7$ $h$
M$_{\odot}$ Mpc$^{-3}$, matching the value derived in
\cite{2003AJ....125.2842Z}. We will adopt this threshold value for our
further analysis.
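Taken together, the two empirical pressure thresholds and the timescale condition amount to a simple per-particle phase assignment. A minimal sketch (function and label names are ours, not those of the actual pipeline):

```python
def classify(n, T, t_rec, t_sound, p_h2=810.0, p_hi=155.0):
    """Per-particle phase assignment using the empirical thermal pressure
    thresholds (P/k in cm^-3 K) and the timescale condition from the text.
    Function and label names are ours, not the simulation pipeline's."""
    P_over_k = n * T
    if t_rec < t_sound and P_over_k > p_h2:
        return "molecular"          # counted towards rho_H2
    if t_rec < t_sound and P_over_k > p_hi:
        return "self-shielded HI"   # neutral fraction set to unity
    return "optically thin"         # neutral fraction from ionization balance

print(classify(1.0, 1e3, 1.0, 10.0))   # molecular (P/k = 1000)
print(classify(0.05, 1e4, 1.0, 10.0))  # self-shielded HI (P/k = 500)
print(classify(1e-4, 1e6, 10.0, 1.0))  # optically thin
```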
The typical densities and temperatures where self-shielding becomes
important are not accurately defined. Inspection of Fig.~\ref{temp_dens}
shows that the typical temperatures and densities which satisfy our empirical thermal
pressure criterion for local recombination are temperatures of $\sim10^4$
K and densities of $\sim 0.01$ cm$^{-3}$. These values agree well with
various estimates from the literature
(e.g. \cite{1997ApJ...490..564W}, \cite{2003ApJ...587..278W} and
\cite{2005PhDT........17P}).
\subsection{Gridding Method}
To reconstruct the density fields, we have employed a grid-based method,
in which the value of the density field is calculated at a set of
locations defined on a regular grid. The mass of each particle is
spread over this grid in accordance with a particular weighting
function $W$, to yield
\begin{equation}
\widehat{\rho}\Big(\frac{\mathbf{n}}{M}\Big)
= \frac{M^3}{N} \sum_{i=1}^N m_i W
\Big(\mathbf{x}_i - \frac{\mathbf{n}}{M} \Big)
\end{equation}
where $\mathbf{n} = (n_x,n_y,n_z)$ denotes the grid-cell, $M$ is the
number of cells of the grid in each dimension, $N$ is the number of
particles and $m_i$ is the mass of particle $i$.\\
We adopt the weighting function directly from what is used for SPH in
Gadget-2, namely a spline kernel defined by \cite{1985A&A...149..135M}:
\begin{equation}
W(r,h) = \frac{8}{\pi h^3} \left\{ \begin{array}{lr}
1-6\Big( \frac{r}{h}\Big)^2+6\Big(\frac{r}{h}\Big)^3\textrm{,}
& 0\leq\frac{r}{h}\leq\frac{1}{2}\\
& \\
2\Big(1-\frac{r}{h}\Big)^3\textrm{,}
& \frac{1}{2} < \frac{r}{h} \leq 1\\
& \\
0\textrm{,}
& \frac{r}{h} >1\\
\end{array} \right.
\end{equation}
where $r$ is the distance from the position of a particle and $h$ is
the smoothing length for each particle (which in Gadget-2 is the
radius that encloses 32 gas particle masses). Furthermore we set a
limit to the size of the smoothing length: the smoothing length of a
particle has to be at least 1.5 times the resolution of the
grid-cells, which means that a particle is distributed over at least
three grid-cells in each dimension. This adversely affects the
highest density regions in the reconstructed field (when insufficient
gridding resolution is employed), but gives a more realistic
representation of resolved objects and transitions without shot noise
or step functions. We note that our procedure explicitly conserves
total mass.
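The kernel and its normalization can be checked numerically; a minimal sketch (the quadrature scheme is our choice):

```python
import math

def spline_kernel(r, h):
    """Gadget-2 cubic spline kernel of Monaghan & Lattanzio (1985)."""
    q = r / h
    if q <= 0.5:
        w = 1 - 6*q**2 + 6*q**3
    elif q <= 1.0:
        w = 2 * (1 - q)**3
    else:
        return 0.0
    return 8.0 / (math.pi * h**3) * w

# The kernel should integrate to unity over the sphere r < h, so that
# spreading particle masses with it conserves total mass.  Midpoint rule:
h, n = 1.0, 20000
norm = sum(4 * math.pi * ((i + 0.5) * h / n)**2
           * spline_kernel((i + 0.5) * h / n, h) * (h / n)
           for i in range(n))
print(norm)   # ~1.0
```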
\section{Results}
Reconstructed density fields are gridded in three dimensions for
the total hydrogen component (ionized plus neutral), the neutral
component and the molecular component. This makes it possible to
compare the distribution of the total and neutral hydrogen budget and
permits determination of neutral fractions for the volume and column
densities. Initially the full 32 $h^{-1}$ Mpc cube is gridded with a
cell size of 80 kpc. This allows visualization of the distribution on
large scales and determination of the average density of neutral
hydrogen in the simulation volume $\bar{\rho}_{\rm{HI}}$. The degree
of clustering can be determined by looking at the two-point
correlation function. However, this low resolution grid is not
suitable to resolve the high density regions and small structures, as
we will describe later. High density regions of the simulation volume
were selected and gridded with a cell-size of 2 kpc. The \hbox{\rm
H\,{\sc i}} column density distribution function and the \hbox{\rm
H\,{\sc i}} mass function can be determined from these regions. The
properties of the simulated \hbox{\rm H\,{\sc i}} gas will be
described and the statistics will be compared with the statistical
properties of observational data, mostly from the \hbox{\rm H\,{\sc
i}} Parkes All Sky Survey (HIPASS) \citep{2001MNRAS.322..486B}.
Apart from gas or SPH particles, the simulations contain star and dark
matter particles as well. We will adopt a relatively simple gridding
scheme to reconstruct the distribution of stars and dark matter. This
can be very useful to verify whether the stars, and especially the gas
(or reconstructed \hbox{\rm H\,{\sc i}}), trace the distribution of
dark matter.
\subsection{Mean \hbox{\rm H\,{\sc i}} Density}
The average \hbox{\rm H\,{\sc i}} density is an important
property, as this single number gives the amount of neutral hydrogen
that is reconstructed without any further analysis. The \hbox{\rm
H\,{\sc i}} density is very well determined from the 1000 brightest
HIPASS galaxies in \cite{2003AJ....125.2842Z}. They deduce an \hbox{\rm
H\,{\sc i}} density due to galaxies in the local universe of
$\bar{\rho}_{\rm{HI}} = (6.9 \pm 1.1)\cdot 10^7$ $h$ M$_\odot$ Mpc$^{-3}$
or $\bar{\rho}_{\rm{HI}} = (6.1 \pm 1.0)\cdot 10^7$ $h$ M$_\odot$
Mpc$^{-3}$ when taking into account biases like selection bias,
Eddington effect, \hbox{\rm H\,{\sc i}} self absorption and cosmic
variance. From \cite{2002ApJ...567..247R} a value of
$\bar{\rho}_{\rm{HI}} = 7.1\cdot 10^7$ $h$ M$_\odot$ Mpc$^{-3}$ can be
derived for the average \hbox{\rm H\,{\sc i}} density in the
universe. We will adopt the value of $\bar{\rho}_{\rm{HI}} = (6.1 \pm
1.0)\cdot 10^7$ $h$ M$_\odot$ Mpc$^{-3}$.
The pressure thresholds for molecular and atomic hydrogen are tuned to
reproduce this density as is described earlier in this paper.
\subsection{ \hbox{\rm H\,{\sc i}} Distribution Function}
As mentioned above, the low and intermediate column densities
$(N_{\rm{HI}} < 10^{19}$ cm$^{-2})$ do not have a very significant
contribution to the total mass budget of \hbox{\rm H\,{\sc i}}.
For comparison with our simulation, the \hbox{\rm H\,{\sc i}}
distribution function derived from QSO absorption line data will be
used as tabulated in \cite{2002ApJ...567..712C}. For the QSO data the
column density distribution function $f(N_{\rm{HI}})$ is defined such
that $f(N_{\rm{HI}})dN_{\rm{HI}}dX$ is the number of absorbers with
column density between $N_{\rm{HI}}$ and $N_{\rm{HI}} + dN_{\rm{HI}}$
over an absorption distance interval $dX$. We derive $f(N_{\rm{HI}})$
from the statistics of our reconstructed \hbox{\rm H\,{\sc i}}
emission. The column density distribution function in a reconstructed
cube can be calculated from,
\begin{equation}
f(N_{\rm{HI}}) = \frac{c}{H_0 dz} \frac{A(N_{\rm{HI}})}{dN_{\rm{HI}}} \textrm{ cm}^2,
\end{equation}
where $dX = dz H_0/c$ and $A(N_{\rm{HI}})$ is the surface area subtended
by \hbox{\rm H\,{\sc i}} in the column density interval $dN_{\rm{HI}}$
centered on $N_{\rm{HI}}$.
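As an illustrative sketch (not the pipeline actually used for this paper), $f(N_{\rm{HI}})$ can be estimated from a gridded column density map by treating every map pixel as one sightline and histogramming the fractional area per column density bin. The function name and the absorption-distance argument \texttt{dX} are hypothetical:

```python
import numpy as np

def column_density_distribution(nhi_map, dX, nbins=40, lo=1e14, hi=1e21):
    """Estimate f(N_HI) from a gridded column density map (cm^-2).

    Each map pixel is treated as one sightline with absorption
    distance dX; f(N) dN dX is then the fraction of sightlines with
    column density between N and N + dN.
    """
    edges = np.logspace(np.log10(lo), np.log10(hi), nbins + 1)
    counts, _ = np.histogram(nhi_map.ravel(), bins=edges)
    frac = counts / nhi_map.size        # A(N_HI): fractional area per bin
    dn = np.diff(edges)                 # dN_HI [cm^-2]
    centers = np.sqrt(edges[:-1] * edges[1:])
    return centers, frac / (dn * dX)    # f(N_HI) [cm^2]
```

By construction, summing $f(N_{\rm{HI}})\,dN_{\rm{HI}}\,dX$ over all bins recovers the fraction of sightlines within the binned column density range.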
As the simulations contain \hbox{\rm H\,{\sc i}} column densities over
the full range between $N_{\rm{HI}} = 10^{14}$ and $10^{21}$
cm$^{-2}$, we can plot the \hbox{\rm H\,{\sc i}} column density
distribution function $f(N_{\rm{HI}})$ over this entire range with
excellent statistics, in contrast to what has been achieved
observationally. In the left panel of Fig.~\ref{distr} we overlay the
\hbox{\rm H\,{\sc i}} distribution functions we derive from the
simulations with the data values obtained from QSO absorption lines as
tabulated by \cite{2002ApJ...567..712C} (black dots). The horizontal
lines on the QSO data points correspond to the bin-size over which
each data point has been derived. Vertical error bars are not shown,
as these have the same size as the dot. Around $N_{\rm{HI}} =
10^{19}$ cm$^{-2}$ there is only one data bin covering two orders of
magnitude in column density, illustrating the difficulty of sampling
this region with observations. This corresponds to the transition
between optically thick and thin gas, where only a small increase in
surface covering is associated with a large decrease in the column
density.
The dashed (red) line corresponds to data gridded to an 80 kpc cell size.
At low column densities the simulated distribution function agrees
very well with the QSO absorption line data. The transition from
optically thick to optically thin gas happens within just a few kpc of
radius in a galaxy disk \citep{1994ApJ...423..196D}. Clearly a
reconstructed cube with an 80~kpc cell size does not have enough
resolution to resolve such transitions. Some form of plateau can be
recognised in the coarsely gridded data above $N_{\rm{HI}} = 10^{16}$
cm$^{-2}$; however, it is not a smooth transition. Furthermore, because
of the large cell size, no high column density regions can be
reconstructed at all. The cores of galaxies have high column
densities, but these are severely diluted within the 80 kpc voxels.
To circumvent these limitations, structures with an \hbox{\rm H\,{\sc
i}} mass exceeding $5\times10^8$ $M_\odot$ in an 80 kpc voxel have
been identified for individual high resolution gridding. This mass
limit is chosen to match the mass-resolution of the simulation. The
mass of a typical gas particle is $\sim 2.5 \cdot 10^7$ $M_\odot$;
taking into account the abundance of hydrogen with respect to helium,
we need at least 20 gas particles to form a $5\times10^8$ $M_\odot$
structure. As the neutral fraction is much less than one for
most of the particles, the number of particles in one object is much
larger. We find 719 structures above the mass limit and grid a
300~kpc box around each object with a cell size of 2~kpc.
We emphasize that gridding to a higher resolution does not mean
that the physics is computed at a higher resolution. We are still
limited by the simplified physics and finite mass resolution of the
particles. A method of accounting for structure or clumping below
the resolution of the simulation is described in
e.g. \cite{2006MNRAS.372..679M}. To derive the clumping factor, they
have used another simulation, with the same number of particles, but
a much smaller computational volume, and thus higher resolution. In
our analysis, we accept that we cannot resolve the smallest
structures, since we are primarily interested in the diffuse outer
portions of galactic disks. We have chosen a 2~kpc voxel size, as
this number represents the nominal spatial resolution of the
simulation. The simulation has a gravitational softening length of
2.5 kpc $h^{-1}$, but note that the smoothing lengths can go as low
as 10\% of the gravitational softening length.
Distribution functions are plotted for simulated \hbox{\rm H\,{\sc i}}
using the two different voxel sizes of 80 and 2 kpc in the left panel
of Fig.~\ref{distr}. When using an 80 kpc voxel size, the
reconstructed maps are unable to resolve structures with high
densities, causing erratic behaviour at column densities above
$N_{\rm{HI}} \sim 10^{17}$ cm$^{-2}$. When using the smaller
voxel size of 2 kpc, there is an excellent fit to the observed data
between about $N_{\rm{HI}} = 10^{15} \textrm{ and } 10^{20.5}$ cm$^{-2}$. The
lower column densities are not reproduced within the sub-cubes (although
they are in the coarsely-gridded full simulation cube), while the
finite mass and spatial resolution of the simulation do not allow a
meaningful distribution function to be determined above about
$N_{\rm{HI}} = 10^{21}$ cm$^{-2}$.
Below $N_{\rm{HI}} = 10^{20}$ cm$^{-2}$ a transition can be seen with
the distribution function becoming flatter. The effect of
self-shielding is decreasing, which limits the amount of neutral
hydrogen at these column densities. Around $N_{\rm{HI}} = 10^{17}$
cm$^{-2}$ the optical depth to photons at the hydrogen ionisation edge
is equal to 1 \citep{2002ApJ...568L..71Z}. Self-shielding no longer
has any effect below this column density and a second transition can
be seen. Now the neutral fraction is only determined by the balance
between photo-ionisation and radiative recombination. The distribution
function is increasing again as a power law toward the very low column
densities of the Lyman-alpha forest. The slope in this regime agrees
very well with the QSO data. Note that the 2~kpc gridded data are
slightly offset to lower occurrences compared to the 80 kpc gridded
data. This is because we only considered the vicinity of the largest
mass concentrations in the simulation for high resolution
sampling. For the same reason the function is not representative below
$N_{\rm{HI}} \sim 3 \times 10^{14}$ cm$^{-2}$, while for the full,
80~kpc gridded cube it can be traced to $N_{\rm{HI}} \sim 5 \times
10^{13}$ cm$^{-2}$. Of course, lower column density systems can be
produced in these simulations when artificial spectra are
constructed~\citep[e.g.][]{2001ApJ...553..528D, 2009MNRAS.395.1875O},
but our focus here is on the high column density systems that are
well-described by our gridding approach.
\begin{figure*}[!t]
\includegraphics[width=0.5\textwidth]{figures/distr_sim.eps}
\includegraphics[width=0.5\textwidth]{figures/distr_obs.eps}
\caption{Left panel: \hbox{\rm H\,{\sc i}} distribution function
after gridding to 80 kpc (dashed (red) line). The solid (blue)
line corresponds to data gridded to a 2 kpc cell size. Filled
dots correspond to the QSO absorption line data
\citep{2002ApJ...567..712C}. Right panel: Combined \hbox{\rm
H\,{\sc i}} distribution functions of the simulation, gridded
to a resolution of 2 kpc (solid (blue) line) and 80 kpc (dashed
(purple) line). Overlaid are distribution functions from
observational data of M31 \citep{2004A&A...417..421B}, WHISP
\citep{2002A&A...390..829S, 2005MNRAS.364.1467Z} and QSO
absorption lines \citep{2002ApJ...567..712C}, respectively. The
reconstructed \hbox{\rm H\,{\sc i}} distribution function
corresponds very well to all observed distribution functions.}
\label{distr}
\end{figure*}
The distribution functions after gridding to 2~kpc (solid
line), and the low column density end of the 80~kpc gridding (dashed
line) are plotted again in the right panel of Fig.~\ref{distr}, but
now with several observed distributions overlaid. The high column
density regime is covered by the WHISP data
\citep{2002A&A...390..829S, 2005A&A...442..137N} in \hbox{\rm H\,{\sc
i}} emission; a Schechter function fit to this data by
\cite{2005MNRAS.364.1467Z} is shown by the dashed line. The
dash-dotted line shows \hbox{\rm H\,{\sc i}} emission data from the
extended M31 environment after combining data from a range of
different telescopes \citep{2004A&A...417..421B}. Since this curve is
based on only a single, highly inclined system, it may not be as
representative as the curves based on larger statistical samples. Our
simulated data agrees very well with the various observed data
sets. The distribution function indicates that there is less \hbox{\rm
H\,{\sc i}} surface area with a column density of $N_{\rm{HI}} \sim
10^{19}$ cm$^{-2}$ than at higher column densities of a few times
$10^{20}$ cm$^{-2}$. This is indeed the case, which can be seen if the
relative occurrence of different column densities is plotted. In
Fig.~\ref{area} the fractional area is plotted (dashed line) as
function of column density on logarithmic scale, which is given by:
\begin{equation}
f_A = \frac{A(N_{\rm{HI}})}{d\log(N_{\rm{HI}})}.
\end{equation}
The surface area first increases from the highest column densities
(which are poorly resolved in any case above $10^{21}$ cm$^{-2}$) down
to a column density of a few times $10^{20}$ cm$^{-2}$, but then
remains relatively constant (per logarithmic bin). Only below column
densities of a few times $10^{18}$ cm$^{-2}$ does the surface area per
bin start to increase again, indicating that the probability of
detecting emission with a column density near $N_{\rm{HI}} \sim
10^{17}$ cm$^{-2}$ is significantly larger compared
to detecting emission with a column density of $N_{\rm{HI}} \sim
10^{19}$ cm$^{-2}$. Also of interest are plots of the cumulative
\hbox{\rm H\,{\sc i}} mass and surface area.
\begin{figure}[t!]
\includegraphics[width=0.5\textwidth]{figures/area.eps}
\caption{Fractional area of reconstructed \hbox{\rm H\,{\sc i}}
(dashed line, right-hand axis) and cumulated surface area (solid
line, left-hand axis) plotted against column density on a
logarithmic scale with a bin size of $d\log(N_{HI})=0.2$. The
probability of detecting emission with a column density near
$N_{\rm{HI}} \sim 10^{17}$ cm$^{-2}$ is significantly larger than
around $N_{\rm{HI}} \sim 10^{19.5}$ cm$^{-2}$. The cumulated surface
area is normalized to that at a column density of $N_{HI}=10^{16}$
cm$^{-2}$. At column densities of $N_{\rm{HI}} \sim 10^{17}$
cm$^{-2}$, the area subtended by \hbox{\rm H\,{\sc i}} emission is
much larger than at a limit of $N_{\rm{HI}} \sim 10^{19.5}$
cm$^{-2}$, which is the sensitivity limit of most current
observations of nearby galaxies.}
\label{area}
\end{figure}
The solid line in Fig.~\ref{area} shows the total surface area
subtended by \hbox{\rm H\,{\sc i}} exceeding the indicated column
density. The plot is normalised to unity at a column density of
$N_{\rm{HI}} = 10^{16}$ cm$^{-2}$. At high column densities the
cumulative fractional area increases only moderately. Below a column
density of $N_{\rm{HI}} \sim 10^{18}$ cm$^{-2}$ there is a clear bend
and the function starts to increase more rapidly. At column densities of
$N_{\rm{HI}} \sim 10^{17}$ cm$^{-2}$, the area subtended by \hbox{\rm
H\,{\sc i}} emission is much larger than at a limit of $N_{\rm{HI}} \sim
10^{19.5}$ cm$^{-2}$, which corresponds to the sensitivity limit of most
current observations of nearby galaxies.
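Both quantities discussed above, the fractional area per logarithmic bin and the cumulative surface area above a column density limit, can be computed from a gridded map along the following lines. This is a minimal sketch with illustrative names and bin choices, not the authors' analysis code:

```python
import numpy as np

def area_statistics(nhi_map, dlog=0.2, lo=14.0, hi=21.0):
    """Fractional area per dex and cumulative area above each lower edge.

    fA mirrors f_A = A(N_HI)/dlog(N_HI); the cumulative curve sums
    the fractional area of all bins at or above each lower bin edge.
    """
    edges = 10 ** np.arange(lo, hi + dlog / 2, dlog)
    counts, _ = np.histogram(nhi_map.ravel(), bins=edges)
    frac = counts / nhi_map.size          # A(N_HI) per bin, as a fraction
    fA = frac / dlog                      # fractional area per dex
    cumulative = np.cumsum(frac[::-1])[::-1]
    return edges[:-1], fA, cumulative
```

Normalising the cumulative curve to its value at $N_{\rm{HI}} = 10^{16}$ cm$^{-2}$, as done for Fig.~\ref{area}, is then a single division.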
\subsection{\hbox{\rm H\,{\sc i}} column density}
\begin{figure*}[t!]
\includegraphics[width=0.50\textwidth]{figures/densmom.eps}
\includegraphics[width=0.5\textwidth]{figures/H2mom.eps}
\begin{center}
\includegraphics[width=0.5\textwidth]{figures/shieldmom.eps}
\end{center}
\caption{Top left panel: Column density of total \hbox{\rm H} gas
integrated over a depth of 32 $h^{-1}$ Mpc on a logarithmic scale,
gridded to a resolution of 80 kpc. Top right panel: Molecular
hydrogen component. Only very dense regions in the total hydrogen
component contain molecular hydrogen. Bottom panel: Neutral atomic
hydrogen component of the same region. In the neutral hydrogen
distribution the highest densities are comparable to the densities
in the total hydrogen distribution, but there is a very sharp
transition to low neutral column densities as the gas becomes
optically thin. Note the very different scales: the total
hydrogen spans only two orders of magnitude, while the neutral
hydrogen spans eight.}
\label{mom80kpc}
\end{figure*}
In Fig.~\ref{mom80kpc} column density maps are shown of the total and
the neutral hydrogen distribution. The maps are integrated over the
full 32 $h^{-1}$ Mpc depth of the cube, with the colorbar showing
logarithmic column density in units of cm$^{-2}$. The total hydrogen
map reaches maximum values of $N_{\rm{H}} \sim 10^{21}$ cm$^{-2}$,
while the connecting filaments have column densities of approximately
an order of magnitude less. In the intergalactic medium, the column
densities are still quite high, $N_{\rm{H}} \sim 10^{19}$ cm$^{-2}$,
yielding a very large mass fraction when the large surface area of the
intergalactic medium is taken into account.
In the column density map of neutral hydrogen it can be seen that it
is primarily the peaks which remain. At the locations
of the peaks of the total hydrogen map, we can see peaks in the
\hbox{\rm H\,{\sc i}} map with comparable column densities, that
correspond to
the massive galaxies and groups. The filaments connecting the galaxies
can still be recognized, but with neutral column densities of the
order of $N_{\rm{HI}} \sim 10^{16}$ cm$^{-2}$. Here the gas is still
relatively dense, but not dominated by self-shielding, resulting
in a lower neutral fraction. In the intergalactic regime, the neutral
fraction drops dramatically. The gas is highly ionized with neutral
columns of only $N_{\rm{HI}} \sim 10^{14}$ cm$^{-2}$, yielding only a
very small neutral mass contribution.
Figure~\ref{2kpcres} shows similar maps chosen from several
high-resolution regions, gridded to 2~kpc instead of 80~kpc. The left
panels show a column density map of all the gas, while in the middle
panels the \hbox{\rm H\,{\sc i}} column densities are plotted. The right
panels show the \hbox{\rm H\,{\sc i}} column density distribution function
of the individual examples. The most complete distribution function is
obtained by summing the distribution functions of all the individual
objects, but even the individual distribution functions already display
the general trend of a flattening plateau around $N_{\rm{HI}} \sim
10^{19}$ cm$^{-2}$. Some objects have just a bright core with extended
emission, like the second example from the top. There are many objects
with small diffuse companions with maximum peak column densities of
$N_{\rm{HI}} \sim 10^{18}$ cm$^{-2}$. These companions are typically 20--40
kpc in size and are connected with filaments that have column densities
of $N_{\rm{HI}} \sim 10^{17}$ cm$^{-2}$ or even less. Comparing the plots
containing all the hydrogen and just the neutral hydrogen it can be seen
that the edge between low and high densities is much sharper for the
neutral hydrogen. The surface covered by column densities of $N_{\rm{HI}}
\sim 10^{17}$ cm$^{-2}$ is much larger than the surface
covered by column densities of $N_{\rm{HI}} \sim 10^{19}$ cm$^{-2}$.
\begin{figure*}[t!]
\includegraphics[width=0.35\textwidth]{figures/dens_2.eps}
\includegraphics[width=0.35\textwidth]{figures/shield_2.eps}
\includegraphics[width=0.29\textwidth]{figures/distr_2_2.ps}
\includegraphics[width=0.35\textwidth]{figures/dens_7.eps}
\includegraphics[width=0.35\textwidth]{figures/shield_7.eps}
\includegraphics[width=0.29\textwidth]{figures/distr_7_2.ps}
\includegraphics[width=0.35\textwidth]{figures/dens_24.eps}
\includegraphics[width=0.35\textwidth]{figures/shield_24.eps}
\includegraphics[width=0.29\textwidth]{figures/distr_24_2.ps}
\includegraphics[width=0.35\textwidth]{figures/dens_47.eps}
\includegraphics[width=0.35\textwidth]{figures/shield_47.eps}
\includegraphics[width=0.29\textwidth]{figures/distr_47_2.ps}
\caption{Four examples of high density regions in the reconstructed
data, gridded to a cell size of 2 kpc. The left panels show the
total hydrogen, while the middle panels show only the neutral
component. Some objects have many satellites, as in the top
panels, while others are much more isolated. All examples have
extended \hbox{\rm H\,{\sc i}} at column densities around
$N_{\rm{HI}} = 10^{16}$--$10^{17}$ cm$^{-2}$. In the right panel, the individual
\hbox{\rm H\,{\sc i}} column density distribution function is
shown for each of the examples. Black dots correspond to the QSO
absorption line data \citep{2002ApJ...567..712C}.}
\label{2kpcres}
\end{figure*}
\subsubsection{Neutral fraction}
The neutral fraction is plotted in a particularly instructive way in
Fig.~\ref{HIvsFrac}: the neutral fraction of the hydrogen gas is
plotted against \hbox{\rm H\,{\sc i}} column density, where the
colorbar represents the relative likelihood, on a logarithmic scale,
of detecting a given combination of neutral column and neutral
fraction. The most commonly occurring conditions are a neutral column
density around $N_{\rm{HI}} \sim 10^{14}$ cm$^{-2}$ with a neutral
fraction of $\sim 10^{-5}$, representing Ly$\alpha$ forest gas. The
cutoff at low column densities is artificial, owing to our gridding
scheme.
At high densities, $N_{\rm{HI}} > 10^{20}$ cm$^{-2}$, the gas is
almost fully neutral and just below $N_{\rm{HI}} \sim 10^{19}$
cm$^{-2}$, the neutral fraction starts to drop very steeply below the
10 percent level. This is exactly the column density that is
considered to be the ``edge'' of \hbox{\rm H\,{\sc i}} galaxies, that
defines the border between optically thick and thin gas. This
transition from high to low neutral density happens on very small
scales of just a few kpc \citep{1994ApJ...423..196D}. The surface area
with column densities in the range from $N_{\rm{HI}} \sim 10^{17}$ to
$10^{19}$ cm$^{-2}$ is relatively small. At lower column densities,
the probability of detecting \hbox{\rm H\,{\sc i}} in any given
direction increases. The well-defined correlation of neutral fraction
with neutral column for $N_{\rm{HI}} > 10^{16}$ cm$^{-2}$ defines a
straightforward correction for total gas mass accompanying an observed
neutral column density.
\begin{figure}[t!]
\includegraphics[width=0.5\textwidth]{figures/HIvsFrac.eps}
\caption{Neutral fraction plotted against \hbox{\rm H\,{\sc i}}
column density. The colorbar represents the probability of detecting a
certain combination of \hbox{\rm H\,{\sc i}} column density and
neutral fraction on logarithmic scale. At the highest densities
$N_{\rm{HI}} > 10^{20}$ cm$^{-2}$, the neutral fraction is almost
unity. Column densities around $N_{\rm{HI}} \sim 10^{19}$ cm$^{-2}$
have the smallest detection probability.}
\label{HIvsFrac}
\end{figure}
In Fig.~\ref{CumMass} the cumulative mass is plotted as function of
total hydrogen column density (left panel) and the column density of
the \hbox{\rm H\,{\sc i}} gas (right panel). Note that the vertical
scale is different in the two panels. The plot is divided into
different regions; the galaxies or Damped Lyman-$\alpha$ Absorbers (DLA) are
coloured light gray. In neutral hydrogen, these are the column
densities above log$(N_{\rm{HI}})$ = 20.3. Lower column densities
belong to the Super Lyman Limit systems (SLLS), or sub-DLAs. In the
plot showing the neutral hydrogen an inflection point can be seen at a
column density of log$(N_{\rm{HI}})$ = 19. This is where the effect of
self shielding starts to decrease rapidly and the Lyman Limit regime
begins. At the lower end, below column densities of log$(N_{\rm{HI}})$
= 16 is the Lyman alpha forest, which is coloured dark gray. As can be
seen there is a huge difference in mass contribution for the different
phases, when comparing the neutral gas against the total gas
budget. In \hbox{\rm H\,{\sc i}}, about 99 percent of the mass is in
DLAs, Lyman Limit Systems account for about 1 percent of the mass and
the Lyman alpha forest contributes much less than a percent. When
looking at the total gas mass budget all three components (DLAs, LLSs
and the Ly-$\alpha$ forest) have approximately the same mass fraction.
\begin{figure*}[t!]
\includegraphics[width=0.5\textwidth]{figures/cumHmass_bw.eps}
\includegraphics[width=0.5\textwidth]{figures/cumHI.eps}
\caption{Left panel: Cumulative mass of total hydrogen as function
of column density, the different phases are shown with different
colours. The Ly$\alpha$ forest, the Lyman Limit Systems and
galaxies have approximately the same mass fraction when
considering all the gas. Right panel: Cumulative mass of neutral
atomic hydrogen as function of column density. Although the surface
covered by the LLSs is large, it contains only 1\% of all the
neutral gas mass, while about 99\% resides in the galaxies.}
\label{CumMass}
\end{figure*}
\subsection{Two-Point Correlation Function}
\begin{figure}[t!]
\includegraphics[width=0.5\textwidth]{figures/corr.eps}
\caption{Two-point correlation function of \hbox{\rm H\,{\sc
i}}-rich objects in our simulation, contrasted with a
power-law fit to the
observed relation of \cite{2007ApJ...654..702M}.}
\label{corr_function}
\end{figure}
The two-point correlation function measures the degree of
clustering of galaxies in the spatial direction $(\xi (r))$, which
relates directly to the power spectrum through a Fourier transform
\citep[e.g.][]{1977ApJ...217..385G, 1982ApJ...254..437D}. The
spatial two-point correlation function is defined as the excess
probability, compared with that expected for a random distribution,
of finding a pair of galaxies at a separation $r_{1,2}$. For \hbox{\rm
H\,{\sc i}}, the clustering is weaker compared to optical galaxies
\citep{2007ApJ...654..702M}. On scales between $\sim 0.5$ kpc and 12 Mpc,
the correlation function for optical galaxies has been determined in SDSS
\citep{2005ApJ...621...22Z} and 2dFGS \citep{2002MNRAS.332..827N}. For
the \hbox{\rm H\,{\sc i}}-rich galaxies in the HIPASS catalogue, a scale
length is obtained of $r_0 = 3.45 \pm 0.25 h^{-1}$ Mpc and a slope of
$\gamma = 1.47 \pm 0.08$.
Several estimators have been proposed in the past for the two-point
correlation function; we will use the Landy \& Szalay estimator as
described in \cite{1993ApJ...412...64L}, as this estimator is used by
\cite{2007ApJ...654..702M} to determine the correlation for \hbox{\rm
H\,{\sc i}} galaxies. This estimator is given by:
\begin{equation}
\xi_{\rm{LS}} = \frac{1}{RR}[DD - 2DR + RR ]
\end{equation}
where $DD$ are the galaxy-galaxy pairs, $RR$ the random-random pairs
and $DR$ the galaxy-random pairs. Each pair count has to be normalised with the total number of pairs in the simulated and random distributions:
\begin{equation}
\xi_{\rm{LS}} = \frac{1}{\widehat{RR}}\Big[\frac{DD}{n_d (n_d-1)/2} - \frac{2DR}{n_r n_d} + \widehat{RR} \Big], \qquad \widehat{RR} = \frac{RR}{n_r (n_r-1)/2},
\end{equation}
where $n_d$ is the number of detections or simulated objects and
$n_r$ is the number of galaxies in the random sample.
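The normalised estimator can be sketched in a few lines of code. This brute-force version (all names hypothetical, suitable only for small samples; real analyses use tree-based pair counting) illustrates the pair counting and normalisation, not the actual code used for this work:

```python
import numpy as np
from itertools import combinations

def pair_counts(pos_a, pos_b, edges, auto=False):
    """Count pairs per separation bin (brute force, small samples only)."""
    if auto:  # unordered pairs within one catalogue
        d = np.array([np.linalg.norm(p - q)
                      for p, q in combinations(pos_a, 2)])
    else:     # all cross pairs between two catalogues
        d = np.linalg.norm(pos_a[:, None, :] - pos_b[None, :, :],
                           axis=-1).ravel()
    return np.histogram(d, bins=edges)[0].astype(float)

def landy_szalay(data, rand, edges):
    """Normalised Landy-Szalay estimator xi(r) per separation bin."""
    nd, nr = len(data), len(rand)
    dd = pair_counts(data, data, edges, auto=True) / (nd * (nd - 1) / 2)
    rr = pair_counts(rand, rand, edges, auto=True) / (nr * (nr - 1) / 2)
    dr = pair_counts(data, rand, edges) / (nd * nr)
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(rr > 0, (dd - 2 * dr + rr) / rr, np.nan)
```

Bins where the random catalogue produces no pairs are returned as NaN rather than a spurious value.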
In Fig.~\ref{corr_function} the two-point correlation function is
plotted; the black dots represent the values obtained from the
simulation, while the dashed red line corresponds to the correlation
function fit to galaxies in the HIPASS catalogue by
\cite{2007ApJ...654..702M}. The solid line is our best fit, with
a scale length of $r_0 = 3.3 \pm 0.2 h^{-1}$ Mpc and a slope of
$\gamma = 1.7 \pm 0.2$; only data points where the radius is smaller
than 6 Mpc have been used for the fit.
There is very good correspondence between the simulated and observed
\hbox{\rm H\,{\sc i}}-correlation functions on scales between $\sim
0.5$ Mpc and $\sim 5$ Mpc. Accuracy at smaller scales is limited by
the finite resolution of the simulation. On the other hand, the
representation of large scales is limited by the physical size of the
box. In a 32 $h^{-1}$ Mpc box, the largest well-sampled structures are
about 5 Mpc in size. This difference is not surprising, because
\cite{2007ApJ...654..702M} are able to sample structures up to 10
Mpc, given their significantly larger survey
volume. \cite{2007ApJ...654..702M} also looked at a limited sample
of galaxies, applying the parameter cuts $M_{HI} > 10^{9.05}
M_{\odot}$ and $D < 30$ Mpc. This limited sample is very similar to
our sample of simulated objects and the power law parameters in this
case are $r_0 = 3.2 \pm 1.4 h^{-1}$ Mpc and $\gamma = 1.5 \pm
1.1$. Although the errors are larger, the results are very similar
to the full sample and in excellent agreement with our simulations.
\subsection{\hbox{\rm H\,{\sc i}} Mass Function}
The \hbox{\rm H\,{\sc i}} Mass Function $\theta(M_{\rm{HI}})$ is
defined as the space density of objects in units of $h^3$
Mpc$^{-3}$. For fitting purposes a Schechter function
\citep{1976ApJ...203..297S} can be used of the form:
\begin{equation}
\theta(M_{\rm{HI}})dM_{\rm{HI}} = \theta^*\Big(\frac{M_{\rm{HI}}}{M_{\rm{HI}}^{*}}\Big)^\alpha
\exp \Big(-\frac{M_{\rm{HI}}}{M_{\rm{HI}}^{*}}\Big)\frac{dM_{\rm{HI}}}{M_{\rm{HI}}^{*}}
\end{equation}
characterised by the parameters $\alpha$, $M^{*}_{\rm{HI}}$ and
$\theta^*$, which define the slope of the power law, the \hbox{\rm
H\,{\sc i}} mass corresponding to the ``knee'' and the normalisation
respectively. In a logarithmic form the \hbox{\rm H\,{\sc i}} Mass
function can be written as:
\begin{equation}
\theta(M_{\rm{HI}}) = \ln(10)\theta^*\Big(\frac{M_{\rm{HI}}}{M^*_{\rm{HI}}}\Big)^{\alpha+1}\exp\Big(\frac{-M_{\rm{HI}}}{M^*_{\rm{HI}}}\Big)
\end{equation}
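In code, the logarithmic Schechter form is a one-line function; this sketch (name and argument conventions are illustrative) evaluates the space density per dex:

```python
import numpy as np

def schechter_per_dex(m, theta_star, m_star, alpha):
    """theta(M) = ln(10) * theta* * (M/M*)^(alpha+1) * exp(-M/M*).

    Space density per dex in mass, matching the logarithmic form of
    the Schechter function.
    """
    x = np.asarray(m, dtype=float) / m_star
    return np.log(10.0) * theta_star * x ** (alpha + 1) * np.exp(-x)
```

A fit to binned mass-function points could then be obtained with a standard nonlinear least-squares routine such as scipy.optimize.curve_fit; that step is omitted here.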
\begin{figure*}[t!]
\includegraphics[width=0.5\textwidth]{figures/HIMF.eps}
\includegraphics[width=0.5\textwidth]{figures/H2MF.eps}
\caption{Left panel: \hbox{\rm H\,{\sc i}} Mass Function of the
simulated data (black dots) with the best-fit Schechter function
(solid black line), compared with the HIMF from
\cite{2003AJ....125.2842Z} (dash-dotted red line). Our best fit
line is dashed below $10^9 M_{\odot}$, because there are no data
points there and the function is only an extrapolation. Right
panel: H$_2$ Mass Function of the simulated data (black dots) and
the best fit, compared with the H$_2$MF from \cite{2003ApJ...582..659K}
(dash-dotted (blue) line) and \cite{2009MNRAS.tmp..289O}
(dashed (red) line). Both simulated Mass Functions show agreement with
observations over about a decade in mass.}
\label{HIMF}
\end{figure*}
The reconstructed structures in our high resolution grids can be used
to determine a simulated \hbox{\rm H\,{\sc i}} Mass Function for
structures above $\sim5 \times 10^8$ $M_\odot$. The result is plotted
in the left panel of Fig.~\ref{HIMF}, where the \hbox{\rm H\,{\sc i}}
Mass function is shown with a bin size of 0.1 dex. Overlaid is the
best fit to the data, the fitting parameters that have been used are
$\theta^*$ = 0.059$\pm$0.047, $\log(M^*_{\rm{HI}}) = 9.2 \pm 0.3$ and
$\alpha = -1.16 \pm 0.45$. Note that in this case the value of
$\alpha$ is not very well-constrained, as this parameter defines the
slope of the lower end of the \hbox{\rm H\,{\sc i}} Mass Function, but
our simulation is unable to sample the mass function below a mass of
$M_{HI} \approx 5\times10^8 M_{\odot}$. The result is compared with
the \hbox{\rm H\,{\sc i}} mass function from
\cite{2003AJ....125.2842Z} (dash-dotted line). The reconstructed mass
function corresponds reasonably well with the mass functions obtained
from galaxies in HIPASS around $M^*$. At masses around $10^{10}$
$M_\odot$, the error bars are very large due to small number
statistics. A much larger simulation volume is required to sample this
regime properly. There is a hint of an excess in the simulation near
our resolution limit; this may simply reflect cosmic variance and will
be addressed in future studies. \cite{2003AJ....125.2842Z} compared
four different quadrants of the southern sky and found that at $M_{HI}
\sim 10^9 M_{\odot}$ the estimated space density varies by a factor of
about three, which is comparable to the factor $\sim2$ difference we
see between the simulation and observations.
\subsubsection{\hbox{\rm H\,}$_2$ Mass Function}
The H$_2$ Mass Function can be determined in a similar way to the
\hbox{\rm H\,{\sc i}} Mass Function. The result is shown in the right
panel of Fig.~\ref{HIMF} where the simulated data points are fitted
with a Schechter function. Our best fit parameters are $\theta^*$ =
0.036$\pm$0.036, $\log(M^*_{\rm{H_2}}) = 8.7 \pm 0.3$ and $\alpha =
-1.01 \pm 0.57$. At the high end of the mass function the results are
affected by low number statistics. The simulated fit is compared with
the fits as determined by \cite{2009MNRAS.tmp..289O} (dashed line)
and \cite{2003ApJ...582..659K} (dash-dotted line). There is very
good agreement over the full mass range.
In Fig.~\ref{H2vsHI} the derived H$_2$ masses are plotted as function of
\hbox{\rm H\,{\sc i}} mass. The dashed vertical line represents the
completeness limit of \hbox{\rm H\,{\sc i}} masses. The data can be
fitted using a power law (solid line) of the form $M_{H_2} =
(M_{HI}/m_0)^\beta$, with a scaling parameter of $m_0=158\pm43$ and a
slope of $\beta = 1.2\pm 0.1$.
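Such a power law is linear in log-log space, so the two parameters can be recovered with an ordinary least-squares fit. This sketch (hypothetical names, no error weighting) shows the idea:

```python
import numpy as np

def fit_h2_hi_power_law(m_hi, m_h2):
    """Fit M_H2 = (M_HI / m0)^beta by linear regression in log space.

    log10(M_H2) = beta * log10(M_HI) - beta * log10(m0), so the slope
    gives beta and the intercept gives m0.
    """
    x, y = np.log10(m_hi), np.log10(m_h2)
    beta, intercept = np.polyfit(x, y, 1)
    m0 = 10 ** (-intercept / beta)
    return m0, beta
```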
\begin{figure}[t!]
\includegraphics[width=0.5\textwidth]{figures/H2vsHI.eps}
\caption{Derived H$_2$ masses are plotted against \hbox{\rm
H\,{\sc i}} masses for individual simulated objects. The
distribution can be fit using a simple power law (solid (red)
line), although the scatter is very large. The dashed
vertical line represents the sample cut-off for \hbox{\rm
H\,{\sc i}} masses in the simulated data.}
\label{H2vsHI}
\end{figure}
\subsection{Stars, Dark Matter and Molecular Hydrogen}
In addition to the SPH-particles, the simulations also contain dark
matter and stars. The distribution of these components can be
reconstructed and compared with the distribution of neutral
hydrogen. For reconstructing the dark matter and stars a very simple
adaptive gridding scheme has been used, because the dark matter and
star particles do not have a variable smoothing kernel like the gas
particles. They do have a smoothing kernel, defined by the softening
length of 2.5 kpc $h^{-1}$, but this is a fixed value, so the
spline-kernel gridding method described in section 3.3 cannot be
used. Exactly the same regions have been gridded as those of the
previously determined \hbox{\rm H\,{\sc i}} objects. First the objects
have been gridded with a cell size of 10 kpc using a nearest
grid-point algorithm. The resulting moment maps have been used to
determine which high density regions contain many particles. In a
second step the particles have been gridded to five independent cubes
using a 2 kpc cell size. Based on the density of simulation particles
in the 10 kpc resolution cube, an individual particle is assigned to
one of five different cubes for gridding. The density threshold is
determined by the number of particles in the 10 kpc resolution cube
integrated along the line of sight. The threshold numbers are 2, 6,
18, 54 and everything above 54 particles respectively. The five cubes
are integrated along the line of sight and smoothed using a Gaussian
kernel with a standard deviation of 7, 5, 3.5, 2.5 and 1.5 pixels of
2~kpc respectively. Finally the five smoothed maps are added
together. These smoothing kernels were chosen to ensure that each
individual cube covering a different density regime has a smooth
density distribution, while preserving as much resolution as practical.
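The density-dependent smoothing described above can be sketched as follows. This toy 2-D version (all names, the grid size and the simple nearest-cell binning are illustrative, and a separable Gaussian convolution stands in for the actual smoothing) assigns each particle to one of five maps according to the local particle count and smooths each map with its own kernel:

```python
import numpy as np

def _gaussian_blur(img, sigma):
    """Separable Gaussian convolution with a truncated, normalised kernel."""
    r = max(1, int(4 * sigma + 0.5))
    x = np.arange(-r, r + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    k /= k.sum()
    out = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, out)

def adaptive_particle_map(xy, shape, thresholds=(2, 6, 18, 54),
                          sigmas=(7, 5, 3.5, 2.5, 1.5)):
    """Grid particles, split them into maps by local count, smooth, sum."""
    ij = np.clip(xy.astype(int), 0, np.array(shape) - 1)
    counts = np.zeros(shape)
    np.add.at(counts, (ij[:, 0], ij[:, 1]), 1.0)
    local = counts[ij[:, 0], ij[:, 1]]              # local count per particle
    level = np.searchsorted(thresholds, local, side="right")
    total = np.zeros(shape)
    for k, sigma in enumerate(sigmas):              # widest kernel first
        sel = ij[level == k]
        m = np.zeros(shape)
        if len(sel):
            np.add.at(m, (sel[:, 0], sel[:, 1]), 1.0)
        total += _gaussian_blur(m, sigma)
    return total
```

The lowest-density particles receive the widest kernel, so diffuse regions come out smooth while dense cores keep as much resolution as the grid allows.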
In Figure~\ref{dm_stars} four examples are shown of the dark matter
and stellar distributions overlaid on the \hbox{\rm H\,{\sc i}}
column density maps. The right panels in this image show the
contours of molecular hydrogen. The stars are concentrated in the
bright and dense parts of the \hbox{\rm H\,{\sc i}} objects
corresponding to the bulges and disks of galaxies. The third example
shows an edge-on extended gas disk (its thickness is likely an
artifact of our numerical resolution). The neutral hydrogen is much
more extended than the stars, and the smaller, more diffuse \hbox{\rm
H\,{\sc i}} clouds do not have a stellar counterpart in
general. Many objects have \hbox{\rm H\,{\sc i}} satellites or
companions, as in the first two examples. These companions or
smaller components do not always have a stellar or molecular
counterpart, although the \hbox{\rm H\,{\sc i}} column densities can
reach high values up to $N_{HI} \sim 10^{20}$ cm$^{-2}$ as
seen in Galactic high velocity clouds.
Interestingly, these \hbox{\rm H\,{\sc i}} clouds only occasionally
trace dark matter substructures, hence in many cases they are not
obvious large-scale accretion events (since the most massive accreting
clumps should be accompanied by dark matter). The origin of these
diffuse clouds, perhaps analogous to high velocity clouds, may be from
a ``halo fountain" of gas cycling in and out of galaxies owing to
galactic outflows, as speculated in \cite{2008MNRAS.387..577O}, or may
represent more diffuse accretion from the extended environment.
Studying the kinematics and metallicities of these clouds may reveal
signatures of their origin. In the future we plan to investigate such
signatures in the simulations to assess how observations of diffuse
\hbox{\rm H\,{\sc i}} clouds around galaxies can inform our
understanding of the processes of galaxy assembly.
\begin{figure*}[t!]
\includegraphics[width=0.33\textwidth]{figures/dm_2.eps}
\includegraphics[width=0.33\textwidth]{figures/star_2.eps}
\includegraphics[width=0.33\textwidth]{figures/H2_2.eps}
\includegraphics[width=0.33\textwidth]{figures/dm_7.eps}
\includegraphics[width=0.33\textwidth]{figures/star_7.eps}
\includegraphics[width=0.33\textwidth]{figures/H2_7.eps}
\includegraphics[width=0.33\textwidth]{figures/dm_24.eps}
\includegraphics[width=0.33\textwidth]{figures/star_24.eps}
\includegraphics[width=0.33\textwidth]{figures/H2_24.eps}
\includegraphics[width=0.33\textwidth]{figures/dm_47.eps}
\includegraphics[width=0.33\textwidth]{figures/star_47.eps}
\includegraphics[width=0.33\textwidth]{figures/H2_47.eps}
\caption{Column density maps of four reconstructed objects as seen
in neutral hydrogen with contours of Dark Matter (left panels),
Stars (middle panels) and molecular hydrogen (right
panels). For both the Dark Matter and the stars contour levels
are at $N$ = 3, 5, 10, 20, 30, 50 and 100 $\times 10^6$
$M_{\odot}$ kpc$^{-2}$. For the molecular hydrogen contours
are drawn at $N_{H_2}= 10^{18}$, $10^{19}$, $10^{20}$ and
$10^{21}$ cm$^{-2}$. Stars are concentrated in the very dense
parts of the \hbox{\rm H\,{\sc i}} objects; dark matter is more
extended, but the extended \hbox{\rm H\,{\sc i}} does not
always trace the dark matter. The \hbox{\rm H\,{\sc i}}
satellites or companions are within the same Dark Matter Halo,
but do not always contain stars.}
\label{dm_stars}
\end{figure*}
\section{Discussion}
To make observational predictions based on numerical simulations, the
first essential step is to establish that the simulation can
adequately reproduce all observational constraints. We have carried
out a critical comparison of our simulated \hbox{\rm H\,{\sc i}} data
with observations using a wide range of statistical
measures. Essential in creating simulated data is the minimization of
the number of free parameters over and above the many that are
already inherent in the simulation
\citep{2006MNRAS.373.1265O,2008MNRAS.387..577O}. In our analysis, the
only additional assumptions we make are that the transitions from
ionized to atomic and from atomic to molecular hydrogen can be
described by a simple threshold effect at two different values of
the thermal pressure, while demanding that the recombination
time be less than the sound-crossing time. The threshold
values are determined by fixing the average \hbox{\rm H\,{\sc i}}
density and the density ratio $\Omega_{H_2}/\Omega_{HI}$ at
z~=~0 to those determined observationally. In choosing this simple
prescription we are strongly limited by current numerical
capabilities. Although we did not solve the complete radiative
transfer problem, we do see quite complex behaviour emerging. The
range of threshold values we explored is consistent with the
values expected in the literature.
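For a single gas particle, the threshold prescription might be sketched as follows (the function and argument names are illustrative; only the two threshold values quoted in the conclusion, $P/k = 155$ and $810$ cm$^{-3}$ K, and the recombination-time condition come from the text):

```python
def hi_h2_fractions(p_over_k, f_neutral_optically_thin, shielded,
                    p_hi=155.0, p_h2=810.0):
    """Return (f_HI, f_H2) for one gas particle.

    Above P/k = 155 cm^-3 K the gas is taken to be fully self-shielded
    (neutral fraction -> 1); above P/k = 810 cm^-3 K the hydrogen is
    taken to be molecular (HI fraction -> 0). `shielded` encodes the
    demand that the recombination time be shorter than the
    sound-crossing time; otherwise the optically thin
    ionization-balance neutral fraction is used."""
    if shielded and p_over_k > p_h2:
        return 0.0, 1.0   # densest regime: all molecular
    if shielded and p_over_k > p_hi:
        return 1.0, 0.0   # self-shielded: fully neutral, atomic
    return f_neutral_optically_thin, 0.0
```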
The statistics of the reconstructed and observed \hbox{\rm H\,{\sc
i}} distributions agree quite well, making it plausible that the
associated \hbox{\rm H\,{\sc i}} structures in the simulation may be
similar to those occurring in nature. The simulation cannot reproduce
structures that resemble actual galaxies in detail. Besides the finite
mass resolution of the SPH-particles of $\sim 10^7$ $M_\odot$, there are
the inevitable limitations on the included physical processes and their
practical implementation. Nonetheless, we may begin to explore the fate of
partially neutral gas in at least the diffuse outskirts of major galaxies.
Despite the limitations, the simulations can reproduce many observed
statistical aspects of \hbox{\rm H\,{\sc i}} in galaxies, which
is very encouraging for further exploration of this approach. The
adopted self-shielding threshold provides good results for this one
simulation. Future work will test the variations that are encountered
with different feedback mechanisms. The method can also be applied to
look at the evolution of neutral hydrogen with redshift. Furthermore,
mock observations can be created to make predictions for what future
telescopes should detect. This is particularly relevant for assessing
performance requirements for the various facilities now under development
with a strong \hbox{\rm H\,{\sc i}} focus, such as the Square Kilometre Array
(SKA) and many of the SKA pathfinders.
\subsection{Resolution Effects}
Simulations are limited by their volume and the mass resolution
of the particles (over and above the limitations that result from
incomplete physics). Although we are able to reconstruct structures
of several Mpc in scale, the simulated volume is relatively
small. To be able to reconstruct the largest structures encountered
in the universe, and effectively overcome cosmic variance, a
simulated volume is needed that is about 300 (rather than 32)
Mpc on a side. In the current volume, the most massive structures
suffer from low-number statistics. A drawback of using a
larger volume is that the mass resolution of individual particles
decreases rapidly, making it impossible to resolve structures on
even multi-kiloparsec scales. The approach we have employed to
approximate the effects of self-shielding is extremely simple. The
actual processes acting on sub-kiloparsec scales are undoubtedly
much more complex. Clumping will occur on the scales of molecular
clouds, which will dramatically increase the local densities. The
threshold thermal-pressure we determine to approximate the atomic to
molecular hydrogen transition only has possible relevance on the
kiloparsec scales of our modelled gas particles. At the smaller
physical scales where the transition actually occurs, the physical
pressures will likely be substantially different.
We note that realistic simulations of cosmological volumes are
extremely challenging, and that even the current state-of-the-art is
not particularly successful at reproducing objects that resemble
observed galaxies in great detail. The simulation we employ
represents a very good compromise between simulations focused on
larger and smaller scales. We are mainly interested in the diffuse
intergalactic structures on multi-kiloparsec scales, that would be
observable when doing 21cm \hbox{\rm H\,{\sc i}} observations with
sufficient sensitivity. We have enough resolution to map and resolve
these structures and to reconstruct the extended environments of
galaxies and galaxy groups. Our simulation is not suitable for
making predictions about the inner cores of galaxies or resolving
the star formation process in molecular clouds.
Future work will test our analysis method on both larger and
smaller scales. Simulations in a larger volume can provide a more
sensitive test of the reconstructed \hbox{\rm H\,{\sc i}} Mass
function and the two-point correlation function. Simulations in a
smaller volume will probably require a more advanced method of
modelling the atomic to molecular hydrogen transition. However,
substantial insights into the more diffuse phenomena, such as accretion
and feedback processes around individual galaxies, are very likely
within reach.
\subsection{Future Observations}
This simulation can not only be used for getting a better understanding
of the \hbox{\rm H\,{\sc i}} distribution, especially at low column
densities, but is also very suitable for making predictions for future
observations. Currently many radio telescopes are being built as
pathfinders toward the Square Kilometre Array (SKA). A few examples
are the Australian SKA Pathfinder (ASKAP), the Allen Telescope Array
(ATA), the Karoo Array Telescope (MeerKAT) and the Low Frequency
Array (LOFAR). Although the SKA will be the final goal, each of these
telescopes is a good detector on its own and is planned to be operational
in the relatively near future. Of course each telescope will have different
characteristics, but in general it will be possible to do surveys deeper,
faster and over a broader bandwidth.
We will discuss the simulated maps of two current single dish telescopes,
Parkes and Arecibo, and compare these with the capabilities of two future
telescopes, the ASKAP and the SKA. Parkes and Arecibo are both single
dish telescopes with a multi-beam receiver that have recently been used
to do large area surveys, the \hbox{\rm H\,{\sc i}} Parkes All Sky Survey
(HIPASS, \cite{2001MNRAS.322..486B}) and the Arecibo Legacy Fast ALFA
Survey (ALFALFA, \cite{2005AJ....130.2598G}).
To make a fair comparison, all four telescopes will get 500 hours of
observing time to map 30 square degrees of the sky. These numbers are
chosen as 500 hours is a reasonable timescale to make a deep image and
a 30 square degree field is needed to map the extended environment of
a galaxy. Furthermore, 30 square degrees is the planned instantaneous
field-of-view of ASKAP.
We focus on two cases that illustrate the capabilities of present
and upcoming telescopes. In the first example, observations will be
simulated at a distance where the beam of the telescopes has a physical
size of 25 kpc. This approximate beam size is needed to resolve diffuse
filaments and extended companions from the simulation. In the second
example observations will be simulated at a fixed distance of 6 Mpc,
which is the limiting distance for the Parkes telescope according to
the above argument.
Right ascension and declination coordinates are added according to the
distance, centered on an RA of 12 hours and a Dec of 0 degrees. For
each telescope the sensitivity is determined that can be achieved
using the given conditions. The technical properties are listed in
Table~\ref{mock_data}. For the Parkes telescope we use the sensitivity
that is achieved when re-reducing HIPASS data (Popping 2009 in prep.),
which corresponds to $\sim 8$mJy/Beam over 26 km s$^{-1}$ for a
typical HIPASS field after integrating over $\sim 560$s per square
degree. For the Arecibo telescope we used the sensitivity of the
Arecibo Galaxy Environment Survey (AGES) \citep{2006MNRAS.371.1617A}
assuming 0.95 mJy/beam over 10 km s$^{-1}$ after integrating for 10
hours per square degree. The expected sensitivities for ASKAP are
described in the initial Array Configuration paper which can be found
on {\it
http://www.atnf.csiro.au/projects/askap/newdocs/configs-3.pdf}. ASKAP
is expected to achieve a sensitivity of 7.3 mJy/beam over 21 km
s$^{-1}$ after one hour of integration time. For the SKA we assume a
$A_{eff}/T_{sys} = 2000$ m$^2$ K$^{-1}$ at 2.5 arcmin resolution,
which is 20 times higher than ASKAP. Furthermore we assume a field of
view similar to ASKAP, 30 square degrees instantaneous.
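The distances and physical beam sizes in Table~\ref{mock_data} follow from simple small-angle geometry; the short sketch below (our own check, treating the quoted beam as the relevant angular scale) reproduces the tabulated numbers to within rounding, except that the SKA distance comes out closer to 34 Mpc than the tabulated 33 Mpc:

```python
import math

ARCMIN_TO_RAD = math.pi / (180.0 * 60.0)

beams_arcmin = {"Parkes": 14.4, "Arecibo": 3.5, "ASKAP": 3.0, "SKA": 2.5}

for name, b in beams_arcmin.items():
    theta = b * ARCMIN_TO_RAD          # beam size in radians
    d_mpc = 0.025 / theta              # distance where the beam spans 25 kpc
    beam_6mpc_kpc = 6000.0 * theta     # physical beam size at 6 Mpc
    print(f"{name}: D = {d_mpc:.1f} Mpc, beam at 6 Mpc = {beam_6mpc_kpc:.1f} kpc")
```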
All the flux densities can be converted into brightness temperature using the equation:
\begin{equation}
T_b = \frac{\lambda^2S}{2k\Omega}
\end{equation}
where $\lambda$ is the observed wavelength, $S$ is the flux density,
$k$ the Boltzmann constant and $\Omega$ is the beam solid angle of the
telescope. When using the 21 cm line of \hbox{\rm H\,{\sc i}}, this equation can be written as:
\begin{equation}
T_b = \frac{606}{b_{min}b_{maj}}S
\end{equation}
where $b_{min}$ and $b_{maj}$ are respectively the beam minor and
major axis in arcsec and $S$ is the flux in units of mJy/Beam. The
integrated 21cm line intensity can directly be converted into an \hbox{\rm
H\,{\sc i}} column density using:
\begin{equation}
N_{HI} = 1.823 \cdot 10^{18} \int T_b dv
\end{equation}
with $[N_{HI}]$ = cm$^{-2}$, $[T_b]$ = K and $[dv]$ = km s$^{-1}$.\\
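Chaining the two conversions gives the column density limits in the last column of Table~\ref{mock_data}; the following sketch (the function name is ours) reproduces the tabulated values to within a few per cent for an assumed 25 km s$^{-1}$ line width:

```python
def nhi_limit(rms_mjy_per_beam, beam_arcmin, dv_kms=25.0):
    """1-sigma HI column density limit from an RMS flux density, using
    T_b = 606 S / (b_min b_maj) with the beam axes in arcsec and S in
    mJy/beam, and N_HI = 1.823e18 * T_b * dv."""
    b_arcsec = beam_arcmin * 60.0
    t_b = 606.0 * rms_mjy_per_beam / (b_arcsec * b_arcsec)  # Kelvin
    return 1.823e18 * t_b * dv_kms                          # cm^-2

for name, rms, beam in [("Parkes", 0.9, 14.4), ("Arecibo", 0.45, 3.5),
                        ("ASKAP", 0.3, 3.0), ("SKA", 0.015, 2.5)]:
    print(f"{name}: N_HI = {nhi_limit(rms, beam):.1e} cm^-2")
```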
The second column in Table~\ref{mock_data} gives the beam size of each
telescope, the third column gives the distance at which the beam has
a physical size of 25 kpc, and the fourth column gives the physical
beam size at a distance of 6 Mpc. The sensitivities in column five are
converted to a column density limit when sampling a line of approximately
25 km s$^{-1}$ width in the last column. We assume that in any analysis
only the channels containing diffuse emission will be selected and
that the line width of diffuse regions will be of the order of 25 km
s$^{-1}$. Galaxies can have a much larger linewidth; however, detecting
these is not an observational challenge. The reconstructed maps have an
intrinsic resolution of 2 kpc with a minimum smoothing kernel size of
three pixels. This yields an initial beam size of 6 kpc, which will be
smoothed to the appropriate beam size of each telescope, corresponding to
the simulated distance. Using the calculated sensitivity limits, random
Gaussian noise is generated and added to the maps. The steps are shown
in Figure~\ref{mock}: the left panel shows the original reconstructed
column density map, which is smoothed to the beam of ASKAP in the middle
panel. At this stage most of the diffuse and extended emission can still
be recognized. Noise is added in the right panel, where most of the diffuse
emission disappears into the noise.
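These two steps can be sketched as below (the quadrature combination of the intrinsic and telescope beams and the function name are our assumptions; the noise is drawn at the telescope's sensitivity limit as in the text):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

FWHM_TO_SIGMA = 1.0 / 2.3548  # sigma = FWHM / (2 sqrt(2 ln 2))

def mock_observe(column_density, pixel_kpc, intrinsic_beam_kpc,
                 target_beam_kpc, noise_rms, seed=0):
    """Smooth a reconstructed column density map from its intrinsic beam
    to the telescope beam (combined in quadrature, an assumption) and add
    Gaussian noise at the sensitivity limit (same units as the map)."""
    extra_fwhm = np.sqrt(max(target_beam_kpc**2 - intrinsic_beam_kpc**2, 0.0))
    sigma_pix = extra_fwhm * FWHM_TO_SIGMA / pixel_kpc
    smoothed = gaussian_filter(column_density, sigma_pix)
    rng = np.random.default_rng(seed)
    return smoothed + rng.normal(0.0, noise_rms, column_density.shape)
```

With the numbers of Figure~\ref{mock}, a 2 kpc pixel map with a 6 kpc intrinsic beam would be smoothed toward the 8.7 kpc ASKAP beam before the noise is added.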
\begin{table*}[t]
\begin{center}
\begin{tabular}{lccccc}
\hline
\hline
Telescope & Beam (arcmin) & $D$ (beam 25 kpc) & Beam at $D=6$ Mpc & RMS (25 km s$^{-1}$) & $N_{HI}$ (25 km s$^{-1}$) \\
\hline
Parkes & 14.4' & 6 Mpc & 25 kpc & 0.9 mJy/Beam & $3.3\cdot10^{16}$ cm$^{-2}$ \\
Arecibo & 3.5' & 25 Mpc & 6.1 kpc & 0.45 mJy/Beam & $2.9\cdot10^{17}$ cm$^{-2}$\\
ASKAP & 3' & 28 Mpc & 5.2 kpc & 0.3 mJy/Beam & $2.6\cdot10^{17}$ cm$^{-2}$\\
SKA & 2.5' & 33 Mpc & 4.4 kpc & 0.015 mJy/Beam & $1.8\cdot10^{16}$ cm$^{-2}$\\
\hline
\hline
\end{tabular}
\end{center}
\caption{Sensitivity limits for four different telescopes
for an assumed linewidth of 25 km s$^{-1}$ after a total integration time of
500 hours to image an area of 30
square degrees.}
\label{mock_data}
\end{table*}
\begin{figure*}[t!]
\includegraphics[angle=270,width=0.33\textwidth]{figures/mock1.eps}
\includegraphics[angle=270,width=0.33\textwidth]{figures/mock2.eps}
\includegraphics[angle=270,width=0.33\textwidth]{figures/mock4.eps}
\caption{Left panel: simulated object at a distance of 6 Mpc, with a
6 kpc intrinsic beam size. Middle panel: simulated object smoothed
to a 8.7 kpc beam size, corresponding to the ASKAP beam at 6
Mpc. Right panel: noise is added corresponding to the ASKAP
sensitivity limit after a 500 hour observation of 30 square degrees.}
\label{mock}
\end{figure*}
\begin{figure*}[t!]
\includegraphics[angle=270,width=0.5\textwidth]{figures/parkes_6.eps}
\includegraphics[angle=270,width=0.5\textwidth]{figures/arecibo_25.eps}
\includegraphics[angle=270,width=0.5\textwidth]{figures/askap_28.eps}
\includegraphics[angle=270,width=0.5\textwidth]{figures/ska_33.eps}
\caption{Simulated observations of the same object at a distance at
which the beam size corresponds to 25 kpc for Parkes (top left),
Arecibo (top right), ASKAP (bottom left) and the SKA (bottom
right). All contours begin at a 3 $\sigma$ level, after
a 500 hour observation of a 30 square degree field with each
telescope. Every subsequent contour level is a factor 3 higher than the
previous one. Note that the angular scale is different in each
panel.}
\label{mock25}
\end{figure*}
\begin{figure*}[t!]
\includegraphics[angle=270,width=0.5\textwidth]{figures/parkes_6.eps}
\includegraphics[angle=270,width=0.5\textwidth]{figures/arecibo_6.eps}
\includegraphics[angle=270,width=0.5\textwidth]{figures/askap_6.eps}
\includegraphics[angle=270,width=0.5\textwidth]{figures/ska_6.eps}
\caption{Simulated observations of the same object at a fixed
distance of 6 Mpc for Parkes (top left), Arecibo (top right),
ASKAP (bottom left) and the SKA (bottom right). Contour
levels start at a 3 $\sigma$ level after 500 hour
observation of a 30 square degree field. For Parkes every subsequent
contour is a factor 3 higher than the previous one. For ASKAP and
Arecibo, the contours interval is a factor 7. For the SKA the
contours start at $4\cdot 10^{16}$ cm$^{-2}$ and increase by a factor
6. Parkes is not really competitive in detecting substructures at
this distance. The other telescopes all have sufficient
resolution, however only the SKA is sensitive enough to detect the
faint diffuse sub-structures.}
\label{mock6}
\end{figure*}
In Figure~\ref{mock25} the same field is shown as in Figure~\ref{mock},
placed at a distance where the beam has a size of 25 kpc. This means
that the object looks similar for all four telescopes as it fills the
same number of beams, although there is a big difference in angular scale.
For each panel contour levels are drawn starting at 3 $\sigma$
and increasing as noted in the figure caption, so the actual
values are different in each panel, but can be determined using the
sensitivity limits given in Table~\ref{mock_data}. All panels look
very similar, as most of the structure and substructure of the
original map is smoothed into two large blobs and essentially all the
diffuse structures are lost in the noise. Only with the SKA can some
extended contours still be recognized at the top of the left
object. However, it is very difficult to distinguish extended emission
from companions, unless the companions are clearly separated by at
least a beam width from the primary object.
An example of this can be seen in Figure~\ref{mock6}, where the same
object is shown as simulated with the four telescopes, but now all at the
same distance of 6 Mpc. A smaller beam size now yields higher resolution
and more detected structure. The differences between the four panels
are now obvious. Observed with Parkes in the top left panel the object
has essentially no resolved structure. Clearly the beam size is too
large and only suitable for very nearby galaxies, closer than 6
Mpc. Arecibo, with a much smaller beam, can resolve the inner core of
the object and can just detect the brightest companion. Again the
contour levels have values starting at 3 $\sigma$, so
the outer contour in the top right panel corresponds to
$9\cdot10^{17}$ cm$^{-2}$. ASKAP has a very similar sensitivity
limit to Arecibo with a comparable beam size. However it is notable
that it reaches Arecibo sensitivities with a much smaller collecting
area. Furthermore, these deep integrations have not really been
explored, so the given sensitivities are theoretical limits. It is
very likely that correcting for systematic effects, like the shape of
the spectral bandpass, will be more achievable with an interferometer
like ASKAP rather than a large single dish telescope. In the SKA image
essentially all companions can be clearly distinguished down to a
contour level of $\sim4\cdot10^{16}$ cm$^{-2}$. Note that this value
is lower than the 3$\sigma$ value in Table~\ref{mock_data}; this is
because the beam size of the SKA is smaller than the beam size of the
simulation at a distance of 6 Mpc. To adjust for this we adopt the
beam size of the reconstructed data and smooth the noise to this
larger beam size. In this case the noise value will decrease,
resulting in a higher sensitivity of $1.3\cdot10^{16}$ cm$^{-2}$.
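The quoted value of $1.3\cdot10^{16}$ cm$^{-2}$ can be recovered with a simple scaling argument (our own assumption, not necessarily the exact procedure used): smoothing uncorrelated noise from the 2.5 arcmin SKA beam to the larger 6 kpc simulation beam ($\approx3.4$ arcmin at 6 Mpc) averages over more independent samples, reducing the brightness-temperature noise roughly by the ratio of the beam widths:

```python
import math

ARCMIN_PER_RAD = 180.0 * 60.0 / math.pi

ska_beam_arcmin = 2.5
sim_beam_arcmin = (6.0 / 6000.0) * ARCMIN_PER_RAD  # 6 kpc beam at 6 Mpc
nhi_native = 1.8e16                                # cm^-2, Table mock_data
# averaging noise over the larger beam lowers the T_b (hence N_HI) RMS
# roughly by the ratio of the two beam widths
nhi_smoothed = nhi_native * ska_beam_arcmin / sim_beam_arcmin
print(f"{nhi_smoothed:.1e} cm^-2")
```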
\section{Conclusion}
We have used a hydrodynamic simulation to predict the neutral
hydrogen distribution in the universe. The simulation employs a random
cube of 32 $h^{-1}$ Mpc on a side at redshift zero, with an SPH mass
resolution of $\sim 10^7$ $M_\odot$. The physics in the simulation
includes subgrid treatments of star formation and feedback
mechanisms.
We have developed a method to extract the neutral hydrogen component
from the total gas budget. At low volume densities the balance is
calculated between photo-ionization and radiative recombination. For
high densities a correction has to be applied for self-shielding, as
the gas becomes optically thick for ionizing photons. In the densest
regions, the atomic hydrogen is turned into molecular hydrogen that
will subsequently form stars. The molecular hydrogen and the
self-shielding transition are both modelled by a critical thermal
pressure. Above the first threshold limit ($P/k = 155$ cm$^{-3}$ K),
the gas is assumed to recombine and the neutral fraction is set to
unity. At even higher pressures ($P/k = 810$ cm$^{-3}$ K), the atomic
hydrogen is assumed to become molecular hydrogen, so the atomic
fraction becomes zero. These processes only apply to simulated
particles for which the recombination time is shorter than the
sound-crossing time on kpc scales. The two threshold pressures are
tuned to reproduce the observed average \hbox{\rm H\,{\sc i}} density
of $\bar{\rho}_{HI} = 6.1 \times 10^7 h$ $M_{\odot}$ Mpc$^{-3}$ and an
assumed molecular density of $\bar{\rho}_{H_2} = 1.8 \times 10^7 h$
$M_{\odot}$ Mpc$^{-3}$, corresponding to a molecular to atomic density
ratio of $\eta_{z=0}=0.3$.
A wide range of statistical comparisons have been made between
the reconstructed \hbox{\rm H\,{\sc i}} distribution and existing
observational constraints including: the two-point correlation function,
the \hbox{\rm H\,{\sc i}} mass function and the \hbox{\rm H\,{\sc i}}
column density distribution. There is agreement between all these
statistical measures of the observations and the simulations, which
is a very encouraging result. Based on this agreement, the simulated
\hbox{\rm H\,{\sc i}} distribution may be a plausible description of
the \hbox{\rm H\,{\sc i}} universe, at least on the intermediate spatial
scales that are both well-resolved and well-sampled.
We also compare the distribution of neutral and molecular
hydrogen with the distribution of dark matter and stars in the
simulation. Massive \hbox{\rm H\,{\sc i}} structures generally have
associated stars, but the more diffuse clouds do not contain large
stellar components or in many cases even concentrations of dark
matter. The method to extract neutral hydrogen from an SPH output cube
can be applied to other simulations, to allow comparison of different
models of galaxy formation. Furthermore, the results can be used to
create mock observations and make predictions for future observations.
This preliminary study shows that as \hbox{\rm H\,{\sc i}}
observations of diffuse gas outside of galactic disks continue to
improve, simulations will play a vital role in guiding and
interpreting such data to help us better understand the role that
\hbox{\rm H\,{\sc i}} plays in galaxy formation.
\begin{acknowledgements}
We would like to thank Thijs van der Hulst for useful discussions and
comments on the original manuscript. We appreciate the constructive
comments of the anonymous referee.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
One of the most successful predictions of supersymmetric grand unified theories (GUTs) is
gauge coupling unification at the GUT scale.
On the other hand,
matter (Yukawa coupling) unification has not yet been achieved to a comparable degree,
and it remains one of the most important tasks
to explain the observed quark/lepton masses
and mixings within the GUT framework.
Here we note that the well-known Georgi-Jarlskog (GJ) relations \cite{GJ}
\begin{eqnarray}
m_e=\frac{m_d}{3}
\ ,\ \ \
m_\mu=3m_s
\ ,\ \ \
m_\tau=m_b
\ ,
\label{GJr}
\end{eqnarray}
can provide a guiding principle for constructing a framework of matter unification.
The above GJ relations at the GUT scale are known to
explain the observed values of the down quark and charged lepton masses at low energy quite successfully.
In particular, the first and second relations reproduce the observed down and strange quark masses
as well as the $e$ and $\mu $ lepton masses in a unified way. Furthermore
we note that the ratio $\sqrt{m_d/(m_s+m_d)}$, which is $0.224\pm 0.004$ experimentally,
is almost equal to the observed mixing value, $|V_{us}|_{\rm exp}=0.221-0.227$.
This may be a strong indication of
a zero structure for the 1-1 entry of the down-type quark mass matrix $M_d$,
since the contribution to the mixing angle from the up-type quark mass matrix $M_u$,
which shows a more hierarchical structure, is generally expected to be very small.
We shall see this coincidence later.
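This coincidence can be checked numerically: for a symmetric $2\times2$ block with a vanishing 1-1 entry, diagonalization gives $\tan\theta=\sqrt{m_1/m_2}$, or equivalently $\sin\theta=\sqrt{m_1/(m_2+m_1)}$, exactly. A minimal sketch (the values of $a$ and $b$ are purely illustrative, not fits):

```python
import numpy as np

# illustrative 1-2 block with vanishing 1-1 entry; a << b mimics the hierarchy
a, b = 0.22, 1.0
m_block = np.array([[0.0, a], [a, b]])
evals, evecs = np.linalg.eigh(m_block)   # eigenvalues in ascending order
m_1, m_2 = abs(evals[0]), abs(evals[1])  # light and heavy mass eigenvalue
theta = np.arctan2(abs(evecs[1, 0]), abs(evecs[0, 0]))  # 1-2 mixing angle
# the texture zero enforces tan(theta) = sqrt(m_1/m_2), i.e. the relation
# sin(theta) = sqrt(m_1/(m_2 + m_1)) quoted in the text
assert np.isclose(np.tan(theta), np.sqrt(m_1 / m_2))
assert np.isclose(np.sin(theta), np.sqrt(m_1 / (m_2 + m_1)))
```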
Thus a natural setup for realizing the GJ relations is
to introduce a symmetric mass matrix texture with
some vanishing entries. Such quark mass matrix textures have been
explored by many authors; see, for example, refs.~\cite{RRR,watanabe}.
The coefficient $-3$ in the GJ relations originates from Clebsch-Gordan (CG) coefficients.
On the other hand, the third relation in eq.~(\ref{GJr}), called bottom-tau unification,
should be addressed in more detail. Although the predicted value of $m_b/m_{\tau}$
at the low energy scale is qualitatively consistent with the experimental value,
it depends quite strongly on the third-generation right-handed neutrino mass $M_{R3}$, as well as on the strong coupling constant $\alpha_3$~\cite{btau}.
In fact, it predicts a somewhat larger value than the experimental one if $M_{R3}$ is well below the GUT scale.
Furthermore, the recent results of the B factories,
combined with the great advances in lattice QCD calculations,
provide us with much more precisely determined
parameters of the CKM matrix.
In particular the experimental value of $\sin2\beta$,
which is an angle of the CKM unitarity triangle,
has been precisely determined in
the last few years. This constrains the type of mass matrix textures very strictly.
It has been shown that important constraints
on fermion mass matrix textures come from the measurements of
$\sin2\beta$~\cite{raby} and that the texture with zero 1-3 entry cannot
reproduce the experimental data.
In the previous papers~\cite{BO,BKOT1}, we proposed a symmetric two-zero quark/lepton mass texture,
which realizes the GJ relations and can successfully reproduce the observed bi-large mixings
and mass-squared differences of neutrinos in the SUSY SO(10) GUT.
In our framework, the predicted branching ratios of lepton flavor violation
such as $\mu\to e\gamma$ are safely lower than the experimental upper bound and the
observed baryon asymmetry of the universe can be explained through the thermal
leptogenesis scenario~\cite{BKOT2}.
It is known (see, for example, ref.~\cite{btau}) that the ratio $m_b/m_{\tau}$
at the electroweak scale can be reproduced pretty well if $M_{R3}$ is of the order of the GUT scale.
This implies that the two-zero texture of the neutrino Dirac mass matrix $M_{{\nu}_D}$
is preferable for predicting $m_b/m_{\tau}$, because in our model $M_{R3}$ is required
to be almost of the GUT scale, while the right-handed neutrino masses of the 1st and 2nd
generations, $M_{R1}$ and $M_{R2}$, are very small.
So, we expect that our model can reproduce not only neutrino masses and mixing angles
but also the down quark and charged lepton masses as well as quark mixing angles.
However, our model adopted a texture with vanishing 1-3
entries for both $M_u$ and $M_d$,
and it is not clear whether it can generate
a large enough 1-3 mixing
to be
consistent with the observed value of $\sin2\beta$.
Therefore,
the predicted values of $\sin2\beta$ and $m_b/m_{\tau}$
give crucial information as to whether
such matter unification scenarios with the GJ relations
can be realized in nature,
after
taking account of the running effects of the renormalization group equations (RGEs)
within the framework of unified bottom and tau Yukawa couplings.
Furthermore,
it is necessary to find a framework
of zero textures which is consistent with the current
experimental data and at the same time preserves the successful
GJ relations.
The aim of this paper is to investigate the phenomenologically allowed symmetric two-zero
quark/lepton mass matrix textures
in the SUSY SO(10) GUT framework with the current experimental bounds of $\sin2\beta$ and $m_b/m_{\tau}$.
We numerically calculate the observables, especially $m_b/m_{\tau }$ and $\sin2\beta$,
by solving the RGEs with right-handed neutrino threshold effects.
We also show the constraints on $M_{R3}$ and $\tan\tilde \beta$.
\footnote{We write here $\tan\tilde \beta$ for the parameter
expressing the ratio of the VEVs of the up- and down-type Higgs fields, to
distinguish it from the angle $\beta $ appearing in $\sin2\beta$.}
This paper is organized as follows.
In the next section, we present the quark/lepton mass matrix textures that we adopt in this paper.
In the third section, we show the results of numerical calculations.
The final section is devoted to discussion.
\section{Symmetric two-zero textures}
We consider the SUSY SO(10) GUT with the {\bf 10} and {\bf 126} representation of Higgs multiplets.
The SO(10) gauge group is assumed to be broken down to the standard model gauge group through the
Pati-Salam symmetry.
In the above setup,
the up and down quark mass matrices $M_u$, $M_d$ and the charged lepton and neutrino Dirac mass matrices
$M_e$, $M_{\nu_D}$ are all symmetric.
Many authors have investigated the two-zero textures in the above setup~\cite{Chen,two-zero3}.
We have assumed that either the {\bf 10} or {\bf 126} Higgs multiplet dominantly couples to the fermions
in each generation.
Consequently,
we obtain the relations $M_u=M_{\nu_D}$ and $M_d=M_e$, up to the CG
coefficients $1$ or $-3$, which can appear in some elements of $M_e$ and $M_{\nu_D}$.
The up quark and neutrino Dirac mass matrices are taken to be the following textures
\footnote{
In addition to the texture U1, which we adopted in our previous papers~\cite{BO,BKOT1,BKOT2},
we here also introduce the texture U2. The latter is obtained by exchanging
the 2nd and 3rd generation indices, and it is expected not to make much difference,
at least to the calculated neutrino masses and mixings, because in the neutrino sector
the 2nd and 3rd generations are almost maximally mixed. Nevertheless,
we expect it to make a significant difference in the quark sector.
} :
\begin{eqnarray}
&&
{\rm U1}\ \
M_u=
\left(
\begin{array}{ccc}
& a_u& \\
a_u& b_u & c_u \\
& c_u & d_u
\end{array}
\right)
\ \ {\rm and}\ \
M_{\nu_D}=
\left(
\begin{array}{ccc}
& x_{12}a_u & \\
x_{12}a_u & x_{22}b_u & x_{23}c_u \\
& x_{23}c_u & x_{33}d_u
\end{array}
\right)
\ ,
\\
&&
{\rm U2}\ \
M_u=
\left(
\begin{array}{ccc}
& & a_u \\
& b_u & c_u \\
a_u& c_u & d_u
\end{array}
\right)
\ \ {\rm and}\ \
M_{\nu_D}=
\left(
\begin{array}{ccc}
& &x_{13}a_u \\
& x_{22}b_u & x_{23}c_u \\
x_{13}a_u& x_{23}c_u & x_{33}d_u
\end{array}
\right)
\ ,
\end{eqnarray}
where $a_{u}, b_{u}, c_{u}$ and $d_{u}$ are complex numbers,
and CG coefficients $x_{ij}$ can be taken as $1$ or $-3$.
We denote the above textures as U1 and U2 hereafter.
We have adopted the texture U1 in refs.~\cite{BO,BKOT1,BKOT2}.
In this paper, we take $x_{12}=x_{13}=x_{22}=x_{23}=1$,
because the values of $x_{ij}$ other than $x_{33}$ do not affect the RGE running of the observables
in the quark and charged lepton sectors.
The constraint on $x_{33}$ from the bottom-tau unification will be discussed in the next section
by showing numerical results.
The down quark and charged lepton mass matrices are given by
\begin{eqnarray}
&&
{\rm D1}\ \
M_d=
\left(
\begin{array}{ccc}
& a_d& \\
a_d& b_d & c_d \\
& c_d & d_d
\end{array}
\right)
\ \ {\rm and}\ \
M_e=
\left(
\begin{array}{ccc}
& a_d & \\
a_d & -3b_d & c_d \\
& c_d & d_d
\end{array}
\right)
\ ,
\end{eqnarray}
where $a_{d}, b_{d}, c_{d}$ and $d_{d}$ are also complex numbers.
The CG coefficient in the 2-2 element of $M_e$ is the crucial ingredient for realizing the GJ relations.
In addition to the above texture,
we also consider the following three-zero texture for the down quark and the charged lepton mass matrices :
\begin{eqnarray}
&&
{\rm D1'}\ \
M_d=
\left(
\begin{array}{ccc}
& a_d& \\
a_d& b_d & \\
& & d_d
\end{array}
\right)
\ \ {\rm and}\ \
M_e=
\left(
\begin{array}{ccc}
& a_d & \\
a_d & -3b_d & \\
& & d_d
\end{array}
\right)
\ ,
\end{eqnarray}
which is a special case of D1, imposing a further zero on the 2-3 element
\footnote{
One might also consider imposing further zeros on
the matrix elements of $M_u$. However, we have already found that
such a case no longer reproduces the bi-large neutrino mixing angles, so
the minimum parameter set for $M_{\nu_D}$ cannot be smaller than 4.
Thus the textures U1 and U2 with D1 and D1$'$ are natural extensions of our previous model~\cite{BO,BKOT1,BKOT2}.
}.
The texture D1$'$ was originally proposed by Georgi and Jarlskog~\cite{GJ} and was also investigated in ref.~\cite{RRR}.
The right-handed neutrino mass matrix is taken to be
\begin{eqnarray}
M_R=
\left(
\begin{array}{ccc}
& a_R& \\
a_R& & \\
& & d_R
\end{array}
\right)\ ,
\end{eqnarray}
where we assume $|a_R|\ll |d_R|=M_{R3}\simeq M_{\rm GUT}$, with $M_{R3}$ denoting the mass of the
third-generation right-handed neutrino.
There are two characteristic features of this form.
First, it reproduces the observed bi-large mixings of neutrinos~\cite{BO,BKOT1}.
Second, it predicts almost equal Majorana masses for the 1st and 2nd generations of right-handed neutrinos.
The former is important for deriving the $m_\tau/m_b$ ratio,
while in the latter,
the degenerate masses can account for the baryon asymmetry of the universe through leptogenesis~\cite{FuYa} once the resonance effects~\cite{pilaftsis} are taken into account.
The left-handed Majorana neutrino mass matrix is obtained by the seesaw mechanism :
$M_\nu=-M_{\nu_D}^T M_R^{-1} M_{\nu_D}$.
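To make the seesaw formula concrete, a small numerical sketch can verify its structural properties for the texture U1. All parameter values below are hypothetical placeholders; the actual values are generated randomly and constrained through the RGE analysis of the next section.

```python
import numpy as np

# Hypothetical placeholder values; the paper generates these randomly
# and constrains them through the RGE analysis.
a_u, b_u, c_u, d_u = 1e-5, 1e-3, 5e-3, 1.0
x33 = 1.0                      # CG coefficient, fixed to 1 in the text
a_R, d_R = 1e10, 1e15          # GeV, with |a_R| << |d_R| = M_R3

# Neutrino Dirac mass matrix for texture U1 (x12 = x22 = x23 = 1).
M_nuD = np.array([[0.0, a_u, 0.0],
                  [a_u, b_u, c_u],
                  [0.0, c_u, x33 * d_u]])
# Right-handed Majorana mass matrix.
M_R = np.array([[0.0, a_R, 0.0],
                [a_R, 0.0, 0.0],
                [0.0, 0.0, d_R]])

# Seesaw mechanism: M_nu = -M_nuD^T M_R^{-1} M_nuD.
M_nu = -M_nuD.T @ np.linalg.inv(M_R) @ M_nuD

print(np.allclose(M_nu, M_nu.T))  # True: M_nu is symmetric, as it must be
```

Since $M_{\nu_D}$ and $M_R$ are both symmetric here, the light-neutrino mass matrix is automatically symmetric; its 3-3 entry reduces to $-x_{33}^2 d_u^2/d_R$.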
\section{Numerical analysis}
In this section,
we show the predicted values of the observable parameters in the CKM matrix : $V_{\rm CKM}$,
the quark and charged lepton masses at the electroweak scale by solving RGEs with the right-handed
neutrino threshold effects~\cite{RGEs}.
The procedure that we adopt here for generating scattered plots of Fig.~\ref{btauU1D1},
Fig.~\ref{U1D1}, Fig.~\ref{U2D1} and Fig.~\ref{U2D2} is explained as follows.
First, we generate random numbers for $a_{u,d},b_{u,d},c_{u,d},d_{u,d}$ in the mass matrices.
Then, after solving the RGEs for Yukawa couplings with the generated values of $a_{u,d},b_{u,d},c_{u,d},d_{u,d}$
at the GUT scale,
we plot the observable parameters if the up quark masses, the charged lepton masses and $|V_{us}|$ fall into
the following experimentally allowed ranges:
\begin{eqnarray}
m_u(M_Z)=0.8 - 3.0\ {\rm MeV},&
m_c(M_Z)=500 - 800\ {\rm MeV},&
m_t(M_Z)=170 - 180\ {\rm GeV},
\nonumber\\
m_e(M_Z)=0.487\ {\rm MeV},&
m_\mu(M_Z)= 103\ {\rm MeV},&
m_\tau(M_Z)= 1.75\ {\rm GeV},
\nonumber\\
|V_{us}|=0.221 - 0.227\ ,&&
\label{ex1}
\end{eqnarray}
where the ranges of $m_u$, $m_c$ and $m_t$ are taken maximally wide \cite{PDG},
and the charged lepton masses are allowed a 1\% error around the displayed central values~\cite{KF}.
We then obtain the predicted values of $m_d$, $m_s$, $m_b$, $|V_{ub}|$, $|V_{cb}|$ and $\sin2\beta$.
Another important observable is the strong coupling constant $\alpha_3$, defined
as $\alpha_3\equiv g_3^2/4\pi$:
\begin{eqnarray}
\alpha_3(M_Z)=0.1187\pm 0.0020\ ,
\label{a3}
\end{eqnarray}
where we use the value in~\cite{PDG}.
As we will see shortly,
the low-energy value of the bottom to tau mass ratio depends on the value of $\alpha_3$.
\begin{figure}[t]
\centerline{
\includegraphics[width=7.2 cm,height=7.2 cm]{mbmtautb.eps}
\includegraphics[width=7.2 cm,height=7.2 cm]{mbmtaumr.eps}
}
\caption{
(a): The $\alpha_3$ and $\tan\tilde{\beta}$ dependence of $m_b$ and $m_{\tau}$ is shown.
The heaviest right-handed neutrino mass is taken in the range $M_{R3}=10^{12}\sim 10^{16}$ GeV.
(b): The $\alpha_3$ and $M_{R3}$ dependence of $m_b$ and $m_{\tau}$ is shown.
$\tan\tilde{\beta}$ is taken in the range $\tan\tilde{\beta}=3\sim 50$.
The minimal and maximal values of $\alpha_3$ are taken according to eq.~(\ref{a3}) in (a) and (b), respectively.
The lighter right-handed neutrino masses are fixed as $M_{R1}=M_{R2}=10^{10}$ GeV in these figures.
}
\label{btautbmr}
\end{figure}
\begin{figure}[t]
\centerline{
\includegraphics[width=8.2 cm,height=6.5 cm]{mbmtautbsc.eps}
\includegraphics[width=8.2 cm,height=6.5 cm]{mbmtauxsc.eps}
}
\caption{
(a): The scattered plots of $m_b$ and $m_\tau$ in the case of U1-D1 textures for $\tan\tilde{\beta}=3$ and $50$.
The $x_{33}$ is fixed as $x_{33}=1$ in this case.
(b): The case of $x_{33}=1,-3$ are shown when $\tan\tilde{\beta}$ is fixed as $\tan\tilde{\beta}=50$.
In both figures,
the right-handed neutrino masses are fixed as $M_{R1}=M_{R2}=10^{10}$ GeV and $M_{R3}=10^{15}$ GeV.
The two horizontal lines in the figures correspond to the allowed range of $m_\tau$.
The vertical lines correspond to the allowed range of $m_b$.
These figures show that the case of large $\tan\tilde{\beta}$, say $\tan\tilde{\beta}\simeq 50$, with $x_{33}=1$ is favored by bottom-tau unification when compared with the experimentally allowed range of the bottom quark mass, $2.8<m_b<3.1$ GeV.
}
\label{btauU1D1}
\end{figure}
\subsection{bottom-tau unification}
Here, we discuss the constraint which comes from bottom-tau unification.
Before showing the numerical results,
we present the parameter dependence of the bottom to tau mass ratio at the low-energy scale.
In the MSSM with heavy right-handed neutrinos,
the 1-loop RGE for the ratio of bottom and tau masses is approximately given by
\begin{eqnarray}
\frac{d}{dt}
\left(\frac{m_b}{m_\tau}\right)
\simeq
\frac{1}{16\pi^2}
\left(\frac{m_b}{m_\tau}\right)
\left[
(y_u)_{33}^2-(y_\nu)_{33}^2
+
3\left(
(y_d)_{33}^2-(y_e)_{33}^2
\right)
-
\left(
\frac{16}{3}g_3^2-\frac{4}{3}g_1^2
\right)
\right]\ ,\nonumber\\
\label{RGE}
\end{eqnarray}
where $g_{1,3}$ are the gauge couplings of the standard model gauge group,
$y_{u,d,\nu,e}$ are the Yukawa couplings for the up and down quarks, neutrinos and charged leptons,
respectively,
and $t\equiv \ln\mu$.
The top quark Yukawa coupling $(y_u)_{33}$ tends to cancel the
contribution of the $g_3$ term on the right-hand side of eq.~(\ref{RGE}).
This cancellation is important to reproduce the phenomenologically allowed low-energy value of
the bottom to tau mass ratio.
Since the tau neutrino Yukawa coupling $(y_\nu)_{33}$ cancels the top quark contribution,
the bottom quark mass becomes smaller as the absolute value of the
CG coefficient $x_{33}$ in the 3-3 element decreases.
Moreover, lower values of $\tan\tilde{\beta}$ and of $M_{R3}$, which
sets the decoupling scale of the third-generation right-handed neutrino,
are disfavored for the same reason.
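This cancellation pattern can be illustrated by a deliberately crude toy integration of eq.~(\ref{RGE}), in which all couplings are frozen at assumed GUT-scale values, $y_b=y_\tau$ at the GUT scale (bottom-tau unification), and $(y_\nu)_{33}=x_{33}\,(y_u)_{33}$ is switched off below $M_{R3}$; the analysis in this paper solves the full coupled RGEs instead, so the numbers below are illustrative only.

```python
import math

# Crude toy integration of the 1-loop RGE for m_b/m_tau with all couplings
# frozen at illustrative GUT-scale values; the paper numerically solves
# the full coupled RGEs instead.
def mb_mtau_ratio(x33, M_GUT=2e16, M_R3=1e15, M_Z=91.2,
                  y_t=0.7, y_b=0.2, y_tau=0.2, g3=0.72, g1=0.72, steps=2000):
    r = 1.0                              # bottom-tau unification: r = 1 at M_GUT
    t_hi, t_lo = math.log(M_GUT), math.log(M_Z)
    dt = (t_lo - t_hi) / steps           # negative: running down in energy
    t = t_hi
    for _ in range(steps):
        # nu_R decouples below M_R3; at the GUT scale (y_nu)_33 = x33*(y_u)_33
        y_nu = x33 * y_t if t > math.log(M_R3) else 0.0
        rhs = (y_t**2 - y_nu**2 + 3.0 * (y_b**2 - y_tau**2)
               - (16.0 / 3.0) * g3**2 + (4.0 / 3.0) * g1**2) / (16.0 * math.pi**2)
        r += r * rhs * dt                # Euler step for d r / d t = r * rhs
        t += dt
    return r

r1, r0 = mb_mtau_ratio(x33=1.0), mb_mtau_ratio(x33=0.0)
print(r1 > r0 > 1.0)  # True: a larger |x33| leaves m_b/m_tau larger at M_Z
```

With these toy inputs the ratio grows from 1 at the GUT scale, and a larger $|x_{33}|$ (a larger tau neutrino Yukawa coupling above $M_{R3}$) leaves $m_b/m_\tau$ larger at low energy, in line with the argument above.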
Explicit examples of plots in the $m_b$--$m_\tau$ plane are shown in Fig.~\ref{btautbmr}
for different values of $\alpha_3$, $M_{R3}$ and $\tan\tilde{\beta}$.
In these figures, the RGEs are numerically solved.
The $\alpha_3$ and $\tan\tilde{\beta}$ dependence of $m_b$ and $m_{\tau}$ is shown in Fig.~\ref{btautbmr}(a),
where the heaviest right-handed neutrino mass is taken in the range $M_{R3}=10^{12}\sim 10^{16}$ GeV.
The $\alpha_3$ and $M_{R3}$ dependence of $m_b$ and $m_{\tau}$ is shown in Fig.~\ref{btautbmr}(b),
where $\tan\tilde{\beta}$ is taken in the range $\tan\tilde{\beta}=3\sim 50$.
The minimal and maximal values of $\alpha_3$ are taken according to eq.~(\ref{a3})
in Fig.~\ref{btautbmr}(a) and \ref{btautbmr}(b), respectively.
It is shown that both $m_b$ and $m_\tau$ are proportional to $\tan\tilde{\beta}$.
Furthermore, $m_b$ ($m_\tau$) is also proportional to $\alpha_3$ ($M_{R3}$).
These numerical results confirm the above arguments.
Fig.~\ref{btauU1D1} shows the scattered plots in the $m_b$--$m_\tau$ plane in the case of U1-D1
for different values of $\tan\tilde{\beta}$ and $x_{33}$, taking account of the constraints shown in eqs.~(\ref{ex1}) and (\ref{a3}) except for $m_\tau$.
In this analysis, the right-handed neutrino masses are fixed as $M_{R1}=M_{R2}=10^{10}$ GeV
and $M_{R3}=10^{15}$ GeV, respectively.
In Fig.~\ref{btauU1D1}(a), we take $x_{33}=1$.
We find that large $\tan\tilde{\beta}$ is favored by bottom-tau unification within the allowed range
of the bottom quark mass, $2.8<m_b<3.1$ GeV.
Fig.~\ref{btauU1D1}(b) shows the $x_{33}$ dependence of the bottom quark mass.
We easily find that the allowed values of the bottom quark mass are obtained in the case of $x_{33}=1$.
Therefore, we will consider only the case of
\begin{eqnarray}
x_{33}=1\ \ \ {\rm and}\ \ \ \tan\tilde{\beta}=50\ ,
\end{eqnarray}
hereafter in this paper.
Our next task is to show how the down and strange quark mass predictions differ among the mass matrix textures.
Since the low-energy value of the bottom to tau mass ratio is not sensitive to the textures of
the up quark and neutrino Dirac mass matrices,
it is enough to compare the cases of D1 and D1$'$.
For the case of U1-D1 textures with $x_{33}=1$ and $\tan\tilde{\beta}=50$,
the down and strange quark masses are predicted to be
\begin{eqnarray}
m_d(M_Z)=3.7-5.0\ {\rm MeV}\ ,\ \ m_s(M_Z)=55-74\ {\rm MeV}
\ ,
\end{eqnarray}
where the constraints of eqs.~(\ref{ex1}) and (\ref{a3}) are taken into account.
Almost the same values are predicted in the case of U2-D1.
These values lie well within the experimentally allowed region:
$m_d(M_Z)= 2.6-5.2$ MeV and $m_s(M_Z)= 52-85$ MeV \cite{watanabe,KF}.
For the case of U1-D1$'$ textures with $x_{33}=1$ and $\tan\tilde{\beta}=50$,
the down and strange quark masses are predicted to be
\begin{eqnarray}
m_d(M_Z)=3.2-3.4\ {\rm MeV}\ ,\ \ m_s(M_Z)=81-85\ {\rm MeV}
\ .
\end{eqnarray}
Almost the same values are predicted in the case of U2-D1$'$.
These values are more strongly restricted than in the case of D1.
We conclude that both textures of the down quark mass matrix, $M_d$,
can reproduce the observed down and strange quark masses.
\subsection{The CKM matrix and unitarity triangle}
\begin{figure}[t]
\centerline{
\includegraphics[width=7.2 cm,height=7.2 cm]{VubVcbU1D1.eps}
\includegraphics[width=7.2 cm,height=7.2 cm]{Vubs2bU1D1.eps}
}
\caption{
(a): Predicted values of $|V_{cb}|$ are plotted as a function of $|V_{ub}|$ for the case of U1-D1.
(b): Predicted values of $\sin2\beta$ are plotted as a function of $|V_{ub}|$ for the case of U1-D1.
In both figures, the constraints shown in eqs.(\ref{ex1}) and (\ref{a3}) are considered.
The relevant parameters are fixed as $\tan\tilde{\beta}=50$, $x_{33}=1$, $M_{R1}=M_{R2}=10^{10}$ GeV
and $M_{R3}=10^{15}$ GeV, respectively.
The two horizontal lines in the figures correspond to the allowed range of $|V_{cb}|$ and $\sin2\beta$, respectively.
The vertical lines correspond to the allowed range of $|V_{ub}|$.
}
\label{U1D1}
\end{figure}
\begin{figure}[t]
\centerline{
\includegraphics[width=7.2 cm,height=7.2 cm]{Vubs2bU2D1.eps}
\includegraphics[width=7.2 cm,height=7.2 cm]{Vcbs2bU2D1.eps}
}
\caption{(a): Predicted values of $\sin2\beta$ are plotted as a function of $|V_{ub}|$ for the case of U2-D1.
(b): Predicted values of $\sin2\beta$ are plotted as a function of $|V_{cb}|$ for the case of U2-D1.
In both figures, the constraints shown in eqs.(\ref{ex1}), (\ref{a3}) and (\ref{ex2}) are considered.
The relevant parameters are fixed as $\tan\tilde{\beta}=50$, $x_{33}=1$, $M_{R1}=M_{R2}=10^{10}$ GeV and $M_{R3}=10^{15}$ GeV, respectively.
The two horizontal lines in the figures correspond to the allowed range of $\sin2\beta$.
}
\label{U2D1}
\end{figure}
\begin{figure}[t]
\centerline{
\includegraphics[width=7.2 cm,height=7.2 cm]{Vubs2bU2D2.eps}
\includegraphics[width=7.2 cm,height=7.2 cm]{Vcbs2bU2D2.eps}
}
\caption{(a): Predicted values of $\sin2\beta$ are plotted as a function of $|V_{ub}|$ for the case of U2-D1$'$.
(b): Predicted values of $\sin2\beta$ are plotted as a function of $|V_{cb}|$ for the case of U2-D1$'$.
In both figures, the constraints shown in eqs.(\ref{ex1}), (\ref{a3}) and (\ref{ex2}) are considered.
The relevant parameters are fixed as $\tan\tilde{\beta}=50$, $x_{33}=1$, $M_{R1}=M_{R2}=10^{10}$ GeV and $M_{R3}=10^{15}$ GeV, respectively.
The two horizontal lines in the figures correspond to the allowed range of $\sin2\beta$.
}
\label{U2D2}
\end{figure}
The recent measurements of the CP violating decay of B mesons into charmoniums provide us with the precise value of $\sin2\beta$~\cite{bccs}:
\begin{eqnarray}
\sin2\beta = 0.685\pm 0.032\ ,
\end{eqnarray}
where $\beta$ is one of the angles in the CKM unitarity triangle.
The angle $\beta$ is the most precisely measured quantity among the angles in the CKM unitarity
triangle at present.
The angle $\beta$ is given in terms of the elements in CKM matrix by
\begin{eqnarray}
\beta={\rm arg}\left(\frac{-V_{cd}V_{cb}^*}{V_{td}V_{tb}^*}\right)\ .
\end{eqnarray}
We also take the experimentally allowed ranges of the CKM matrix elements \cite{PDG}:
\begin{eqnarray}
|V_{ub}|=0.0029-0.0045\ ,\ \ \
|V_{cb}|=0.039-0.044\ .
\label{ex2}
\end{eqnarray}
It is obvious that the predictions of $\sin2\beta$ and the CKM matrix elements are closely related to each other.
Let us first show the difference between the cases of U1 and U2.
Before showing the numerical results,
we estimate the absolute values of the CKM matrix elements and $\sin2\beta$ by following the discussion
presented in ref.~\cite{raby}.
For the case of U1-D1, we obtain the following approximated relations :
\begin{eqnarray}
|V_{us}|\simeq \sqrt{\frac{m_d}{m_s}}
\ ,\ \ \
\frac{|V_{ub}|}{|V_{cb}|}\simeq
\sqrt{\frac{m_u}{m_c}}
\ ,\ \ \
\beta\simeq \arg\left(1-\sqrt{\frac{m_u m_s}{m_c m_d}}e^{i\phi}\right)
\ ,
\label{ckmu1d1}
\end{eqnarray}
where the CP violating phase $\phi$ is a combination of the phases of the
quark mass matrices. These approximate forms are given at the GUT scale.
One expects an almost constant ratio of $|V_{ub}|$ to $|V_{cb}|$ from
eq.~(\ref{ckmu1d1}).
For the case of U2-D1, we obtain the following relations :
\begin{eqnarray}
|V_{us}|\simeq \sqrt{\frac{m_d}{m_s}}
\ ,\ \ \
|V_{ub}|\simeq
\sqrt{\frac{m_u}{m_t}}
\ ,\ \ \
\beta\simeq \arg\left(1-\sqrt{\frac{m_u m_s}{m_t m_d}}\frac{1}{|V_{cb}|}e^{i\phi'}\right)
\ ,
\label{ckmu2d1}
\end{eqnarray}
where the CP violating phase $\phi'$ is also a combination of the phases of
the quark mass matrices.
Comparing eq.~(\ref{ckmu1d1}) with eq.~(\ref{ckmu2d1}),
one finds that the angle $\beta$ in the case of U2-D1 is predicted to be larger than in the case of U1-D1,
and may thus be favored by the experimental data.
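A quick numerical sketch of eqs.~(\ref{ckmu1d1}) and (\ref{ckmu2d1}) with representative masses, chosen here purely for illustration inside the ranges quoted above, makes the difference between U1 and U2 transparent.

```python
import math

# Representative masses in MeV; illustrative inputs only (the approximate
# relations are actually given at the GUT scale).
m_u, m_c, m_t = 2.0, 650.0, 174e3
m_d, m_s = 3.5, 70.0

V_us = math.sqrt(m_d / m_s)        # common to U1-D1 and U2-D1
ratio_U1 = math.sqrt(m_u / m_c)    # |V_ub|/|V_cb| in U1-D1
V_ub_U1 = ratio_U1 * 0.041         # with |V_cb| taken in its allowed range
V_ub_U2 = math.sqrt(m_u / m_t)     # |V_ub| in U2-D1

print(round(V_us, 3), round(V_ub_U1, 4), round(V_ub_U2, 4))
# 0.224 0.0023 0.0034
```

The U1 relation ties $|V_{ub}|$ to $|V_{cb}|$ and pushes it below the experimental window, while the U2 relation lands $|V_{ub}|$ near $0.0034$, inside the allowed range of eq.~(\ref{ex2}).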
Fig.~\ref{U1D1} shows the predicted values of $|V_{ub}|$, $|V_{cb}|$ and $\sin2\beta$ for the case
of U1-D1 that are consistent with the constraints shown
in eqs.~(\ref{ex1}) and (\ref{a3}).
In all analyses of Fig. \ref{U1D1}, Fig.~\ref{U2D1} and Fig.~\ref{U2D2},
the relevant parameters are fixed as $\tan\tilde{\beta}=50$ and $x_{33}=1$, $M_{R1}=M_{R2}=10^{10}$ GeV
and $M_{R3}=10^{15}$ GeV, respectively.
In the case of U1-D1, $|V_{ub}|$ is predicted to be near its lower bound, and the predicted value
of $|V_{cb}|$ is proportional to $|V_{ub}|$.
One can also easily see that the predicted value of $\sin2\beta$ does not reach the experimental
lower bound.
Thus, the case of U1 is excluded by the observed value of $\sin2\beta$,
and we will investigate only the case of U2 in a more quantitative way in the following.
Fig.~\ref{U2D1} shows the predicted values of the same parameter sets for the case of U2-D1,
taking account of the constraints in eq.~(\ref{ex2}) in addition to eqs.~(\ref{ex1}) and (\ref{a3}).
Both $|V_{ub}|$ and $|V_{cb}|$ are found to cover the whole experimentally allowed region in this case.
The predicted values of $\sin2\beta$ can reach the experimentally allowed region only for $|V_{ub}|>0.0036$.
We can conclude that, in the case of U2-D1, there exists an allowed region where all
the experimental constraints, including that on $\sin2\beta$, are satisfied.
For the case of U2-D1$'$, the predicted values of the same parameter sets are shown in Fig.~\ref{U2D2}.
This case has a much smaller allowed region than the case of U2-D1.
Since $|V_{ub}|$ is constrained to very small values,
it is difficult to reach the allowed region of $\sin2\beta$.
The predictions for all the textures we consider are summarized in Table 1.
We can see that U2-D1 is the most favored texture,
especially when its predicted values of $|V_{ub}|$ and $\sin2\beta$
are compared with those of the other cases.
\begin{table}[t]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
& $m_d(M_Z)$
& $m_s(M_Z)$
& $m_b(M_Z)$
& $|V_{ub}|$
& $|V_{cb}|$
& $\sin2\beta$
\\ \hline
U1-D1
&
$3.7-5.0$
&
$55-74$
&
$3.0-3.2$
&
$\sim 0.003$
&
all
&
$<0.6$
\\ \hline
U2-D1
&
$3.5-4.8$
&
$55-77$
&
$3.0-3.2$
&
$0.0036<$
&
all
&
$<0.8$
\\ \hline
U2-D1$'$
&
$3.2-3.4$
&
$81-85$
&
$3.0-3.2$
&
$<0.0034$
&
all
&
$\sim 0.5-0.6$
\\ \hline
\end{tabular}
\caption{Summary of the predictions for the different textures in the case of $\tan\tilde{\beta}=50$, $x_{33}=1$, $M_{R1}=M_{R2}=10^{10}$ GeV and $M_{R3}=10^{15}$ GeV.
Here ``all'' means that the predicted region covers the whole experimentally allowed region. }
\end{center}
\label{ts}
\end{table}
\section{Discussion}
\label{dis}
We have investigated the symmetric two-zero textures for all quark/lepton mass matrices in the SUSY SO(10)
GUT, in which the Georgi-Jarlskog relations are realized.
The low-energy predictions are obtained by solving the renormalization group equations with the right-handed neutrino threshold effects.
One of the important consequences of such a framework is that the bottom quark and tau lepton masses are
unified at the GUT scale.
A large $\tan\tilde{\beta}$ and a heaviest right-handed neutrino mass $M_{R3}$ near the GUT scale are favored by bottom-tau unification.
The CG coefficient $x_{33}$ in the 3-3 element of the neutrino Dirac mass matrix is also constrained to be $1$, not $-3$.
The predicted value of $\sin2\beta$ strongly depends on the choice of up-quark mass matrix textures.
As a result of the analysis,
the case of U2-D1 is favored by the measurement of $\sin2\beta$, while the other cases, U1-D1, U1-D1$'$ and U2-D1$'$, are not.
However, the allowed region on the $|V_{ub}|-\sin2\beta$ plane in the case of U2-D1 is very limited.
Finally,
we would like to discuss the possible textures and patterns of the CG coefficients for the neutrino Dirac
mass matrix.
These are directly related to neutrino oscillation observables.
The threshold effects of the lightest and next-lightest
right-handed neutrino masses, $M_{R1}$ and $M_{R2}$, in the RGE running become
important for predicting the neutrino oscillation observables.
We can obtain strong constraints on the parameters in the neutrino Dirac mass matrix and
in the right-handed neutrino mass matrix from such an analysis.
We expect that such an investigation will lead us to a complete framework
of matter unification.
\section*{Acknowledgements}
M.B. thanks M.C. Chen for useful discussions, especially for pointing out the
importance of the observed quantities appearing in the unitarity triangle
diagram.
The work of S.K. has been supported
by the Japan Society for the Promotion of Science.
\section{Introduction}
Statistical evidence against a hypothesis often relies on the asymptotic normality of a test statistic, as in the case of the commonly used Wald or score tests.
Many authors ignore the asymptotic nature of the argument and assume that in finite samples the distribution of the test statistic is indeed normal.
This perfunctory approach generates misleading beliefs about the $p$-value distribution, such as that i) the distribution of the $p$-values under the null is exactly uniform, or that ii) the cumulative distribution function (henceforth, cdf) of the $p$-value under the alternative is concave.
However, there are important exceptions to these rules: e.g., discrete test statistics are not normally distributed in any finite-sample setting, so the distribution of the $p$-values under the null is certainly not uniform.
Similarly, it is not obvious that the cdf is concave under the alternative as we will illustrate with some examples.
Testing procedures aimed at controlling the family-wise error rate (FWER) or the false discovery rate (FDR, see \cite{BH}) typically assume that i) or ii) holds.
In \cite{Cao2013}, the authors examine the optimality of FDR control procedures when i) or ii) are violated and provide alternative conditions to maintain said optimality.
Clearly, a more precise characterization of the $p$-value distribution that accounts for the approximation error is pivotal in controlling the occurrence of false discoveries.
Complicating matters even further, the issue of calibrating the location and variance of the test statistic is often overlooked, particularly under the alternative.
Under the alternative the test statistic can be improperly re-scaled since often the variance of the test statistic is obtained under the null.
Meanwhile, under the null, the test statistic may not have zero mean and may also not be correctly standardized, thus making the standard Gaussian approximation suspect.
The problem of biases in the variance and expectation is aggravated in the presence of a large number of nuisance parameters.
For instance, while it has been demonstrated in \cite{DiCicio} that the profile score statistic has a location and variance bias under the null, in Section \ref{subsec:score} we show that the variance bias can persist under the alternative.
These concerns motivate us to perform a systematic study of the $p$-value distribution in the presence of information or location biases under the null and alternative, while accounting for the approximation error resulting from the use of asymptotic arguments.
We explore how certain asymptotically non-vanishing and vanishing biases in the variance and location of the test statistic can occur in finite samples, violating the assumptions generally placed on the null and alternative distributions of the $p$-values.
We study both continuous and discrete distributions supported on lattices.
In doing so we include all approximation errors, including those induced by discreteness, to fully characterize the behaviour of the distributions of $p$-values under the null and alternative.
This work extends the results of \cite{hung}, who studied the distribution of $p$-values under alternative assuming the test statistic is normally distributed, to a broader framework.
We focus on univariate test statistics for a one dimensional parameter of interest based on sums of independent random variables, possibly in the presence of a large number of nuisance parameters.
These types of test statistics are commonly used to infer the significance of individual coefficients in most regression models.
The results of the paper are in the same vein as those found in \cite{hall2013bootstrap} and \cite[\S~3]{kolassa1994series}, whose objective was the coverage properties of confidence intervals.
We expand their results to the $p$-values, motivated by the multitude of scientific investigations that rely on the $p$-value distribution rather than confidence intervals.
We begin with a simple example illustrating how the standard assumptions on the null and alternative distributions of the $p$-values can be violated in practice.
\begin{example}
We wish to test the null hypothesis $H_0: \beta = 0.01 $ against the alternative $H_1: \beta = 0.01/1.05$, where $\beta$ is the rate parameter of a gamma distribution, based on 750 observations $x_1, \cdots, x_{750}$, assuming that the shape parameter is known to be $\alpha = 0.01$. From the central limit theorem, we know that the test statistic
\begin{align*}
S_n = \sqrt{n} \left(\frac{\bar{X} - 1}{\sigma} \right) \rightarrow N(0, 1),
\end{align*}
where $\sigma^2=\alpha/\beta^2$ is the variance of a single observation under $H_0$, so we are able to obtain a two-sided $p$-value based on the standard normal distribution.
We plot the histograms of the $p$-values obtained under the null and alternative in Figure \ref{fig:test}.
The plots are obtained by simulation using 100,000 replications.
We see in Figure \ref{fig:test} that the distribution of the $p$-values obtained from the simulations does not adhere to its expected behaviour under the null or the alternative.
The upper left plot in Figure \ref{fig:test} shows a marked departure from the $U(0,1)$ distribution expected under the null. Thus, a typical rejection rule which assumes uniformity of the $p$-value distribution under the null will not provide type I error control for certain choices of $\alpha$.
For example, if we desire a $10^{-4}$ significance level, we obtain a type I error approximately equal to $1.579\times 10^{-3}$, which is fifteen times higher than the nominal level.
Under a local alternative, the upper right plot in Figure 1 shows that the $p$-value distribution may not be stochastically smaller than a $U(0,1)$.
The resulting lack of concavity of the $p$-value distribution under the alternative can violate the typical assumption in the multiple testing setting that the false negative rate is strictly decreasing and the FDR is increasing in the nominal control level $\alpha$; see \cite{Cao2013}.
Note that the cause of this poor calibration is not a low sample size.
\end{example}
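The type I error quoted in Example 1 can in fact be recovered exactly, without simulation, because under $H_0$ the sum of $n$ independent Gamma($\alpha$, rate $\beta$) observations is itself Gamma($n\alpha$, rate $\beta$). The sketch below is illustrative only and assumes \texttt{scipy} is available.

```python
import math
from scipy import stats

# Exact type I error of the normal-approximation test in Example 1.
n, alpha, beta = 750, 0.01, 0.01
mean = alpha / beta                      # E[X_i] = 1 under H0
sigma = math.sqrt(alpha) / beta          # sd(X_i) = 10 under H0
nominal = 1e-4
z = stats.norm.ppf(1 - nominal / 2)      # two-sided normal cutoff

# Reject when |S_n| > z, i.e. when the sum of the observations leaves [lo, hi].
hi = n * (mean + z * sigma / math.sqrt(n))
lo = n * (mean - z * sigma / math.sqrt(n))
total = stats.gamma(a=n * alpha, scale=1 / beta)   # exact law of the sum under H0
type1 = total.sf(hi) + (total.cdf(lo) if lo > 0 else 0.0)

print(f"{type1:.2e}")  # about 1.6e-03, roughly fifteen times the nominal 1e-04
```

The heavy right skew of the Gamma($7.5$) sum puts essentially all of the excess rejection probability in the upper tail, which is what produces the secondary mode of the $p$-value histogram.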
\begin{figure}[ht]
\centering
\subfloat{\includegraphics[width = 2in]{null_1_1.png}}
\subfloat{\includegraphics[width = 2in]{alt_1_105.png}}\\
\subfloat{\includegraphics[width = 2in]{gamma_saddle}}
\subfloat{\includegraphics[width = 2in]{gamma_saddle_alt}}
\caption{Distribution of $p$-values under $H_0$ and $H_1$ in Example 1. {\it Upper left}: $p$-values obtained under $H_0$ from the normal approximation. {\it Upper right}: $p$-values obtained under $H_1$ from the normal approximation. {\it Lower left and right}: corrected $p$-values for the null and alternative, respectively, using the saddlepoint approximation that is introduced in Section 3. The number of samples is 750. The upper left panel clearly does not exhibit uniformity and upper right panel's distribution does not appear to have a concave cdf. We plot the theoretical prediction from Theorem 1 in blue for the upper left and upper right panels.}
\label{fig:test}
\end{figure}
To assure the reader that the above example is not a singular aberration, we present in Figure \ref{fig:lung} the histogram of over 13 million $p$-values from the genome-wide association study of lung cancer generated from the UK Biobank data \citep{biobank}. These $p$-values are produced by the Neale Lab \citep{neale}, based off 45,637 participants and 13,791,467 SNPs; SNPs with minor allele frequency less than 0.1\% and INFO scores less than 0.8 were excluded from the analysis.
We note that the histogram exhibits a similar behaviour to the one seen in Example 1, i.e., the distribution of $p$-values exhibits a secondary mode that is far from zero.
\begin{figure}[ht]
\centering
\includegraphics[width=8cm, height = 5cm]{lung_cancer}
\caption{Empirical $p$-value distribution based on a genome-wide association study ($n=45,637$) of lung cancer.
}
\label{fig:lung}
\end{figure}
Figure \ref{shapes} briefly summarizes the shapes that the density of the $p$-value distribution might take for two-tailed tests, based on the results in Theorems 1 and 2 that are introduced in Section 2. The descriptions in Table 1
verbalise the various mathematical conditions that can lead to the four shapes in Figure \ref{shapes}.
In practice it is possible to encounter combinations of the shapes listed in Figure \ref{shapes}, as the observed test statistics may not be identically distributed and can be drawn from a mixture of the null and alternative hypotheses.
\begin{figure}[H]
\centering
\subfloat[Shape 1]{\includegraphics[width = 2in]{shape_1.png}}
\subfloat[Shape 2]{\includegraphics[width = 2in]{shape_2.png}}\\
\subfloat[Shape 3]{\includegraphics[width = 2in]{shape_3.png}}
\subfloat[Shape 4]{\includegraphics[width = 2in]{shape_4.png}} \caption{General chart of the behaviour of $p$-values under the null and alternative for a two tailed test. Shapes 1 to 3 were obtained from simulating from the null and alternative from Example 1 with different parameters $\alpha$ and $\beta$. Shape 4 was obtained when using a misspecified variance, as detailed in Section 2.2. }\label{shapes}
\end{figure}
\begin{table}[h]
\caption{Description of the test statistic's characteristics and the resulting shapes (as shown in Figure \ref{shapes}) of the $p$-value distribution under the null or alternative hypothesis.}
\fbox{
\begin{tabular}{ | m{3em} | m{6cm}| m{6cm} |}
\hline
Shape & Null & Alternative \\
\hline
1 & The typical uniform shape. & Possible if effect size is small. \\
\hline
2 & Possible if variance is misspecified, underestimated. & Typical behaviour. \\
\hline
3 & Possible if test statistic has large higher order cumulants, see Example 1. & Possible if effect size is small and the higher order cumulants are large, see Example 1. \\
\hline
4 & Possible if variance is misspecified, overestimated, see Example 3. & Possible if variance is misspecified, overestimated and the effect size is small, see Example 3. \\
\hline
\end{tabular}}
\end{table}
Section 2 contains the main theoretical results of this paper, Theorems 1 and 2, which characterize the distribution of $p$-values under the null and alternative.
Section 2.1 examines the $p$-value distribution resulting from the score test, while Section 2.2 studies specific examples.
Section 3 provides numerical results and considers some remedies aimed at calibrating the $p$-value distribution.
Section 4 closes the paper with a discussion of the implications of our results and some recommendations to practitioners.
\section{Distribution of $p$-values under Non-Normality}
All theoretical details and proofs, as well as a brief introduction of the concepts needed for the proof of Theorems 1 and 2, are deferred to the Supplementary Materials.
We consider the case where the test statistic, $S_n$, can be discrete and may also have a non-zero mean under the null and a non-unit variance under the null or alternative.
We assume that $\psi$ is a one dimensional parameter of interest and $\lambda$ is a vector of nuisance parameters.
Without loss of generality, let the statistic $S_n$ either be used to test the null hypothesis $H_0: \psi = \psi_0$ for a two-sided test or $H_0: \psi \geq \psi_0 $ for a one-sided test.
All results are given in terms of the cdf.
Theorems 1 and 2 deal with the case where $S_n$'s distribution is continuous and discrete respectively.
We first consider the case where the statistic $S_n$ admits a density.
We typically assume that $S_n$ has been appropriately calibrated such that $\mathrm{E}[S_n]= 0$ under the null, and $\mathrm{E}[S_n] \neq 0$ under the alternative hypothesis.
We let $p(S_n)$ denote the $p$-value obtained from the test statistic $S_n$.
However, as discussed in the introduction, the mean of $S_n$ may not be exactly $0$ under the null due to a location bias.
We would also expect the variance of the test statistic to be 1 under the null and alternative; however, this may not be the case for all test statistics; see Example 3.
The location bias complicates the precise determination of whether $S_n$'s distribution should be considered under the null or the alternative.
Note, however, that the statement of Theorem 1 is applicable under both the null and the alternative, since its conclusion depends only on the expectation, variance, and the other cumulants of the statistic $S_n$, regardless of the true hypothesis.
We first introduce some notation:
\begin{itemize}
\item[(i)] $Z_p$ is the $p$-th quantile of a standard normal distribution.
\item[(ii)] $\rho_{n,i}$ is the $i$-th order standardized cumulant of $S_n$, and $\rho_n$ is a vector containing all cumulants.
\item[(iii)] $\phi(x)$ is the standard normal density.
\end{itemize}
\begin{theorem}\label{th:cont_approx}
Let $X_1,\ldots, X_n$ be a sequence of continuous, independent random variables. Set $S_n= \sqrt{n} (\bar{X}_n - a_n)/b_n$ where $\bar{X}_n = n^{-1}\sum_{i = 1}^n X_i$, and let $\{a_n\}_{n\ge 0}$, $\{b_n\}_{n\ge 0}$ be two sequences of real numbers. Let $\mathrm{E}[S_n]= \mu_n$, ${\mathrm{Var}}(S_n) = v_n^2$, and $\rho_n$ denote the cumulants of $(S_n - \mu_n)/v_n$. Then the CDF of the one-sided $p$-value is
\begin{align}
\mathbb{P} (p(S_n) < t) = \Phi\left(\frac{Z_t - \mu_n}{v_n}\right) -E_2\left(\frac{Z_t - \mu_n}{v_n}, \rho_{n} \right) + O\left(n^{-3/2}\right),
\end{align}
and the CDF of the two-sided $p$-value is:
\begin{align}
\mathbb{P} (p(S_n) < t) &= 1 + \Phi\left(\frac{Z_{t/2} - \mu_n}{v_n} \right) - \Phi\left(\frac{-Z_{t/2} - \mu_n}{v_n}\right)\nonumber \\
&+E_2\left(\frac{Z_{t/2} - \mu_n}{v_n} , \rho_n\right) - E_2\left( \frac{-Z_{t/2} - \mu_n}{v_n}, \rho_n\right) + O\left(n^{-3/2} \right), \label{eq:cont_one}
\end{align}
where,
\begin{align}
E_2(t, \rho_n) = -\phi(t)\Big\lbrace \frac{\rho_{n,3} H_2(t)}{6} + \frac{\rho_{n,4} H_3(t)}{24} + \frac{(\rho_{n,3} )^2 H_5(t)}{72} \Big\rbrace, \label{eq:cont_two}
\end{align}
and $H_j(t)$ denotes the $j$-th Hermite polynomial.
\end{theorem}
\begin{remark}
The $j$-th order Hermite polynomial is a polynomial of $j$-th degree defined through the differentiation of a standard normal density. A table of the Hermite polynomials is given in the Supplementary Materials.
\end{remark}
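For readers who wish to evaluate the correction numerically, the term $E_2(t, \rho_n)$ from Theorem 1 can be coded in a few lines. The sketch below is illustrative only: it assumes the probabilists' convention for the Hermite polynomials, and the function names are ours, not part of the paper's notation.

```python
import numpy as np
from scipy.special import eval_hermitenorm  # probabilists' Hermite He_j
from scipy.stats import norm

def E2(t, rho3, rho4):
    """Higher order term E_2(t, rho_n) of Theorem 1, with rho3 and rho4
    the third and fourth standardized cumulants of (S_n - mu_n)/v_n."""
    return -norm.pdf(t) * (rho3 * eval_hermitenorm(2, t) / 6.0
                           + rho4 * eval_hermitenorm(3, t) / 24.0
                           + rho3 ** 2 * eval_hermitenorm(5, t) / 72.0)

def one_sided_cdf(t, mu_n, v_n, rho3, rho4):
    """Approximate P(p(S_n) < t) for a one-sided test, as in Theorem 1."""
    z = (norm.ppf(t) - mu_n) / v_n
    return norm.cdf(z) - E2(z, rho3, rho4)
```

When $\mu_n = 0$, $v_n = 1$ and the higher-order cumulants vanish, `one_sided_cdf` returns $t$, recovering the uniform distribution of Corollary 1.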
\begin{remark}
Should an approximation to the probability density of the $p$-value distribution be desired, it can be obtained from differentiating Equations (\ref{eq:cont_one}) and (\ref{eq:cont_two}).
\end{remark}
In general $E_2 = O(n^{-1/2})$; however, in the case that $\mu_n = 0$, we have $E_2(Z_{t/2}/ v_n , \rho_n) - E_2( -Z_{t/2}/v_n, \rho_n) = O(n^{-1})$ for two-sided tests, due to cancellations which occur in the difference of the odd Hermite polynomials.
We refer to the terms in $E_2(t, \rho_n)$ as the higher order terms.
Therefore, supposing $\mu_n = 0$ under the null, meaning the sequence $a_n = \mathrm{E}[\bar{X}_n]$, we obtain the following corollary:
\begin{corollary}
Assume the setting and notation from Theorem \ref{th:cont_approx} and suppose that under the null we have $\mathrm{E}[S_n]= 0$, and $\mathrm{Var}(S_n) = 1$. The CDF of the distribution of the $p$-values for a one-sided test under the null is
\begin{align}
\mathbb{P} \left(p(S_n) < t \right) = t + O\left( n^{-1/2}\right),
\end{align}
and the CDF of the distribution of the $p$-values for two-sided test under the null is
\begin{align}
\mathbb{P} \left(p(S_n) < t \right) &= t + O\left(n^{-1} \right).
\end{align}
\end{corollary}
Corollary 1 shows that the two-sided test is preferable unless there is a scientific motivation for using the one-sided test.
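A small Monte Carlo experiment (our own construction, not one of the paper's examples) makes the two error rates visible. Exponential(1) data give a conveniently skewed null with $\mathrm{E}[X_i] = \mathrm{Var}(X_i) = 1$, so $S_n$ is the standardized mean with $a_n = 1$ and $b_n = 1$:

```python
import numpy as np
from scipy.stats import norm

# Monte Carlo check of Corollary 1 with skewed null data.
rng = np.random.default_rng(0)
n, reps = 20, 40000
x = rng.exponential(1.0, size=(reps, n))
s = np.sqrt(n) * (x.mean(axis=1) - 1.0)   # S_n with a_n = 1, b_n = 1

p_one = norm.sf(s)                        # one-sided p-values
p_two = 2.0 * norm.sf(np.abs(s))          # two-sided p-values

# Empirical P(p < 0.05); exact uniformity would give 0.05.
frac_one = float(np.mean(p_one < 0.05))
frac_two = float(np.mean(p_two < 0.05))
```

With $n = 20$ the one-sided empirical rate comes out around $0.06$ while the two-sided rate stays much closer to $0.05$, consistent with the $O(n^{-1/2})$ versus $O(n^{-1})$ error terms of Corollary 1.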
The case when $S_n$ has a discrete distribution supported on a lattice is covered in Theorem \ref{th-discrete}.
\begin{theorem}
\label{th-discrete}
Let $X_1, \cdots, X_n $ be a sequence of independent discrete random variables where $X_i$ has mean $m_i$. Suppose that $X_i - m_i$ is supported on a lattice of the form $c + j\cdot d$, for $j \in \mathbb{Z}$ and for all $1\le i \le n$. Assume $d$ is the largest number for which this property holds.
Set $S_n= \sqrt{n} (\bar{X}_n - a_n)/b_n$, where $\bar{X}_n = n^{-1}\sum_{i = 1}^n X_i$, let $\mathrm{E}[S_n]= \mu_n$ and $\mathrm{Var}(S_n) = v_n^2$, let $\rho_n$ denote the cumulants of $(S_n - \mu_n)/v_n$, and set $d_n = d \: v_n /(\sqrt{n}b_n)$.
Then the CDF of the one-sided $p$-value is
\begin{align*}
\mathbb{P} (p(S_n) < t) &= \Phi\left( \frac{Z_t - \mu_n}{v_n} \right) +E_2 \left( \frac{ Z_t - \mu_n}{v_n}, \rho_n \right)
+ C_2\left(\frac{ Z_t - \mu_n}{v_n}, \rho_n \right) + O\left(n^{-3/2}\right),
\end{align*}
and the CDF of the two-sided $p$-value is
\begin{align*}
\mathbb{P} (p(S_n) < t) &=1 + \Phi\left( \frac{Z_{t/2} - \mu_n}{v_n} \right) - \Phi\left(\frac{-Z_{t/2} - \mu_n}{v_n} \right)
+E_2\left(\frac{Z_{t/2} - \mu_n}{v_n}, \rho_n \right) \\
&- E_2 \left(\frac{-Z_{t/2} - \mu_n}{v_n}, \rho_n \right) +C_2 \left(\frac{Z_{t/2} - \mu_n}{v_n}, \rho_n \right)
- C_2\left(\frac{-Z_{t/2} - \mu_n}{v_n}, \rho_n \right) +O \left(n^{-3/2} \right),
\end{align*}
where,
\begin{align*}
C_2(t, \rho_n) = - d_n Q_1\Big( \frac{ t - \sqrt{n}c}{d_n}\Big) \Big( 1 + \frac{\rho_{n,3} H_3(t)}{6} \Big) + \frac{ d_n^2}{2} Q_2\Big( \frac{ t - \sqrt{n}c}{d_n}\Big),
\end{align*}
and $Q_j(t)$ are periodic polynomials with a period of $1$. On $[0,1)$, they are defined by
\begin{align*}
Q_1(t) = t - \frac{1}{2}, \quad Q_2(t) = t^2 - 2t +\frac{1}{6},
\end{align*}
and $E_2(t, \rho_n)$ and $H_j(t)$ are defined as in Theorem 1.
\end{theorem}
\begin{corollary}
Assume the setting and notation from Theorem \ref{th-discrete} and suppose that under the null $\mathrm{E}[S_n]= 0$ and $\mathrm{Var}(S_n) = 1$. Then the $p$-values obtained from one- or two-sided tests satisfy
\begin{align}
\mathbb{P} (p(S_n) < t) = t + O\left(n^{-1/2}\right).
\end{align}
\end{corollary}
Note that the convergence is slower by a factor of $n^{-1/2}$ compared to the continuous case for a two-sided test. This is due to the jumps in the CDF which are of order $O(n^{-1/2})$.
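The size of the jumps can be checked directly. As a toy illustration of ours, take $S_n = (2B_n - n)/\sqrt{n}$ with $B_n \sim \mathrm{Binomial}(n, 1/2)$, a lattice statistic whose null CDF of $p$-values jumps by the binomial point masses:

```python
import numpy as np
from scipy.stats import binom

def max_pvalue_jump(n):
    """Largest jump in the null CDF of p-values from the lattice statistic
    S_n = (2*Binomial(n, 1/2) - n)/sqrt(n), i.e. the biggest point mass."""
    k = np.arange(n + 1)
    return binom.pmf(k, n, 0.5).max()

j100 = max_pvalue_jump(100)   # about 0.0796
j400 = max_pvalue_jump(400)   # about 0.0399, i.e. half as large
```

Multiplying by $\sqrt{n}$ gives roughly $\sqrt{2/\pi} \approx 0.798$ in both cases, so quadrupling $n$ only halves the jumps, in line with the $O(n^{-1/2})$ rate stated above.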
\begin{remark}
Under the alternative, the $p$-value distribution depends on the effect size, $\mu_n$, as well as the magnitude of the higher order cumulants.
However, for large values of $\mu_n$ the impact of the higher order terms will be negligible, as $E_2$ is a product of an exponential function and a polynomial function which decays to 0 asymptotically in $\mu_n$. We explore this further in Example 4.
\end{remark}
\begin{remark}
When performing multiple hypothesis testing corrections, the $p$-values of interest are often extremely small.
Therefore, from Corollaries 1 and 2, we see that a large number of samples is needed to guarantee the required level of accuracy, since the approximation error is additive.
\end{remark}
\subsection{An Application of the Main Theorems: The Score Test} \label{subsec:score}
We examine the broadly used score test statistic, also known as the Rao statistic.
The popularity of the score statistic is due to its computational efficiency and ease of implementation.
In the presence of nuisance parameters, the score statistic is defined through the profile likelihood.
Suppose that the observations $y_i$ are independent; then
\[ l_\text{pro}(\psi) = \sup_{\lambda} l(\psi, \lambda; Y) = l(\psi, \hat\lambda_\psi; Y), \]
where $\hat\lambda_\psi$ denotes the constrained maximum likelihood estimator. The score statistic is defined as:
\[ S_n(\psi_0) = \frac{l_{\text{pro}}^\prime(\psi_0)}{ \lbrace -l^{\prime\prime}_{\text{pro} } (\psi_0) \rbrace^{1/2}}= \frac{ \sum_{i = 1}^n \frac{d}{d\psi} l(\psi, \hat\lambda_\psi; y_i)}{ \left\lbrace -\sum_{i = 1}^n \frac{d^2}{d\psi^2} l(\psi, \hat\lambda_\psi; y_i)\right\rbrace^{1/2}} \xrightarrow{D} N(0,1), \]
under the usual regularity assumptions.
Due to the form of $S_n$, we may apply Theorem 1 or 2.
The presence of nuisance parameters induces a bias in the mean and variance of the score statistic; see \cite{profile_bias} and \cite{DiCicio}.
Thus, it is not the case that the mean of the score statistic is 0 and the variance is 1 under the null, as the profile likelihood does not behave like a genuine likelihood and does not satisfy the Bartlett identities.
In general this problem is compounded if the number of nuisance parameters is increased, as we illustrate below.
We only discuss the location bias, since the formulas for the information or variance bias are much more involved and compromise the simplicity of the arguments.
From \cite{profile_bias}, the bias of the profile score under the null is:
\begin{align}
\mathbb{E}\lbrace l^\prime_{\text{pro}}(\psi_0) \rbrace &= \alpha_n + O\left( n^{-1} \right),
\end{align}
where the term $\alpha_n = O(1)$. The form of $\alpha_n$ is given in the Supplementary Materials.
To estimate the effect of the dimension of the nuisance parameter on the size of the bias, we use a similar argument to \cite{laplace}, in which the number of nested summations that depend on $k$, the number of parameters in the model, is counted to estimate the rate of growth of a function in $k$.
From the expression of $\alpha_n$ given in the Supplementary Materials, we obtain at most 4 nested summations which depend on $k$; therefore, the bias of the profile score is of order $O(k^4)$ in the worst-case scenario.
The rather large location bias can be impactful, as it may induce a perceived significance when $k$ is large; an example using Weibull regression is given in Section 3.4.
A similar argument can be applied to the information bias; see \cite{DiCicio} for a comprehensive discussion on the form of these biases.
The information bias for the score statistic can also be highly influential under the alternative.
In that case, the expected value of the score statistic is non-zero, which is desirable, but the variance of the statistic $S_n$ can be either over- or underestimated.
Since the true parameter value is not $\psi_0$, there is no guarantee that ${l_{\text{pro}}^{\prime\prime}(\psi_0)}$ gives the correct standardization.
If the estimated variance is larger than the true variance of the score, then it is possible to obtain Shape 4 in Figure \ref{shapes}, which violates the concavity assumption for $p$-value distribution's CDF under the alternative.
Further, if we assume that under the null the $p$-value distribution is uniform, then this also violates the monotonicity assumption required by \cite{Cao2013} for the optimality of FDR control.
Example \ref{ex:score_glm} illustrates this phenomenon using the score test in a generalised linear model.
\begin{example} \label{ex:score_glm}
Assume the following regression model based on the linear exponential family, where the observations $y_1, \ldots, y_n$ are independent, each with density
\[ h(y_i| \beta, X_i) = \exp\lbrace a(X_i\beta) y_i+ b(X_i\beta) + D(y_i) \rbrace, \]
where $X_i$ is a vector of covariates associated with each $y_i$, and $\beta = (\beta_0, \beta_1, \cdots, \beta_k)$ is a vector of regression coefficients.
Let $f(X\beta) = E[y|X]$ denote the mean function.
\cite{score_reg} studied the score statistic for testing the global null $\beta_1 = \beta_2= \cdots = \beta_k = 0$ and linked the resulting statistic to linear regression.
A similar analysis can be performed for different hypotheses, such as inference on a parameter of interest in the presence of nuisance parameters, to produce a more general result whose derivation is deferred to the Supplementary Materials. The resulting score statistic takes the form
\[S_n = \lbrace y - f(X\hat\beta_{\text{null}}) \rbrace^\top W X \left\lbrace X^\top D X \right\rbrace^{-1} X^\top W \lbrace y - f(X\hat\beta_{\text{null}} ) \rbrace \xrightarrow[]{D} \chi^2_{q}, \]
where $f(X\hat\beta_{\text{null}})$ is a vector whose $i$-th entry is $f(X_i\hat\beta_{\text{null}})$, $q$ is the number of constraints in the null hypothesis, and $\hat\beta_{\text{null}}$ denotes the constrained maximum likelihood estimate under the null. $W$ and $D$ are square diagonal matrices of dimension $n$ whose entries are $[W]_{ii} = a^\prime(X_i\hat\beta_{\text{null}})$ and $[D]_{ii} = a^\prime(X_i\hat\beta_{\text{null}}) f^\prime(X_i\hat\beta_{\text{null}})$ for $i = 1, \dots, n$. Using a suitable change of variable, the statistic $S_n$ can be related to weighted linear regression.
In the common case where we wish to test for $\beta_j = 0$, the score statistic can be re-written in the form:
\[S_n = [ (X^\top D X)^{-1} ]_{jj}^{1/2} \sum_{i = 1}^n a^\prime(X_i \hat\beta_{\text{null}}) x_{ij} \lbrace y_i - f(X_i \hat\beta_{\text{null}}) \rbrace \xrightarrow{D} N(0,1), \]
under the null. Under the alternative we may write
\[S_n = (1 + c_n )\tilde{S}_n + d_n, \]
where $\tilde{S}_n$ converges in distribution to a standard normal. The scaling factor $c_n = O(1)$ is an information bias and $d_n$ plays the role of the effect size and will increase to infinity as the number of samples increases.
However, if $\beta_j \approx 0 $ then $d_n$ can be quite small, meaning that the effect of the scaling factor $c_n$ can be consequential.
For an example of this see Example 3, where the effect size is not large enough to offset the scaling factor.
\end{example}
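As a concrete instance of the displayed formula, the sketch below computes the score statistic for $H_0: \beta_j = 0$ in a logistic regression with an intercept as the only nuisance parameter, so that the constrained fit is simply $\hat{p} = \bar{y}$. The data and variable names are hypothetical; for the logistic link, $a^\prime(\eta) = 1$ and $[D]_{ii} = \hat{p}_i(1 - \hat{p}_i)$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
xj = rng.normal(size=n)              # covariate being tested
y = rng.binomial(1, 0.3, size=n)     # outcomes generated under the null
p_hat = y.mean()                     # constrained MLE: intercept-only fit

# Matrix form: S = [(X'DX)^{-1}]_{jj}^{1/2} * sum_i x_ij (y_i - p_hat).
X = np.column_stack([np.ones(n), xj])
D = np.diag(np.full(n, p_hat * (1.0 - p_hat)))
S = np.sqrt(np.linalg.inv(X.T @ D @ X)[1, 1]) * np.sum(xj * (y - p_hat))

# Scalar simplification available for an intercept-only null fit.
xc = xj - xj.mean()
S_check = np.sum(xc * (y - p_hat)) / np.sqrt(
    p_hat * (1.0 - p_hat) * np.sum(xc ** 2))
```

The two forms agree exactly for this null fit, and under the null $S$ is approximately standard normal.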
\begin{remark}
Although the likelihood ratio statistic can be written as a summation of independent random variables, the limiting distribution of the likelihood ratio test is a gamma random variable, therefore Theorem 1 or 2 are not directly applicable.
It may be possible to modify the baseline density used in the Edgeworth expansions to obtain a result based on Laguerre polynomials.
This can also be useful when examining the asymptotic behaviour of test statistics for testing vector parameters of interest, as these test statistics often have a gamma distributed limiting distribution.
\end{remark}
\begin{remark}
The bias issue discussed within this section is also present for the Wald test statistic, even though it cannot be represented as a summation of independent random variables.
It is rarely the case that the maximum likelihood estimate is unbiased, and the same applies for the estimate of the variance of the maximum likelihood estimate.
Generally the problem worsens as the number of nuisance parameters increases.
\end{remark}
\subsection{Numerical Examples of Application of the Main Theorems}
We illustrate the main results with some numerical examples, demonstrating how various problems in the distribution of the $p$-values can occur.
We first examine a discrete case where the statistic $S_n$ does not admit a density.
We note that when the $E_2$ term is negligible, our results on the distribution of the $p$-values coincide with those obtained by \cite{hung} when the exact normality of the test statistic holds.
On the contrary, when the additional terms are not negligible or the variance is incorrectly specified,
the behaviour of the distribution of $p$-values can be quite different.
The exact size of the difference depends on the behaviour of the Hermite polynomials, the higher order cumulants and the variance.
We consider the following examples in order to illustrate some of the ramifications.
\begin{example}\label{ex:linkage}
Consider a simple linkage analysis of sibling pairs who share the same trait of interest, a common problem in statistical genetics.
The underlying principle is that genes that are responsible for the trait are expected to be over-shared between relatives, while under the null hypothesis the trait similarity does not affect allele sharing, i.e. the trait and the gene are independent.
The problematic distribution of $p$-values in this example is caused by the discrete nature of the problem along with a misspecified variance under the alternative.
Since the offspring are from the same parents, under the null we would expect the number of shared alleles to be 0, 1 or 2 with probabilities $\theta_{null} = (p_0, p_1, p_2) = (0.25,0.5,0.25)$, based on Mendel's first law of segregation.
However, under the alternative we can expect the sharing level to be higher than expected.
Assume that we have $n$ affected sibling pairs.
Let $x_i$ be the number of alleles shared by the $i$-th affected sibling pair.
Then under the null $\mathrm{E}[x_i] = 1$ and $\mathrm{Var}(x_i) = 0.5$, and we let $y_i = (x_i - 1)/\sqrt{0.5}$.
We consider the following well known non-parametric linkage test
\begin{align*}
S_n = \sum_{i = 1}^n \frac{y_i}{\sqrt{n}} \xrightarrow{D} N(0,1),
\end{align*}
see \cite{laird2010fundamentals}. The above can be compared to a score test, as only the information under the null is used.
Under the alternative, the distribution of the test can be misspecified, since a different distribution of allele sharing will yield a different variance.
Consider the simple example when the distribution of the numbers of shared alleles follows a multinomial distribution with $\theta_{alt1} = (0.09,0.8,0.11)$.
The variance of this distribution is $0.2 < 0.5$. Yet another alternative in which $\theta_{alt2} = (0.29,0.4,0.31)$ yields the variance $0.6 > 0.5$; in both cases there is over-sharing.
We include visualizations of the $p$-value distribution under the two alternatives in Figure \ref{fig:linkage}. Theorem 2 is used to produce the approximation given by the blue curve, due to the discrete nature of the test statistic.
\begin{figure}
\centering
\subfloat[Score test, $\theta = \theta_{alt1}$]{\includegraphics[width = 2in]{linkage_1.png}}
\subfloat[Score test, $\theta = \theta_{alt2}$ ]{\includegraphics[width = 2in]{linkage_2.png}}
\\
\subfloat[Wald test, $\theta = \theta_{alt1}$]{\includegraphics[width = 2in]{linkage_wald_2.png}}
\subfloat[Wald test, $\theta = \theta_{alt2}$ ]{\includegraphics[width = 2in]{linkage_wald_1.png}}
\caption{Plots for Example 3, examining the behaviour of the $p$-value distribution for non-parametric linkage analysis for the score test (upper panel) and the Wald test (lower panel). The simulation is performed with $n =400$, and $100,000$ replications. Samples of sibling pairs are generated from a multinomial distribution with $\theta_{alt1} = (0.09,0.8, 0.11) $ for the two plots on the left panel, and $\theta_{alt2} = (0.29,0.4,0.31)$ for the two plots on the right panel. For the score test, both histograms have spikes due to the discrete nature of the problem. The discrete version of the Edgeworth approximation, plotted in blue, is used as the test statistic is supported on a lattice. The histograms of the $p$-values obtained from the Wald test look much better than their score test counterparts. }
\label{fig:linkage}
\end{figure}
\end{example}
In this case the problem can be resolved by considering a Wald-type test, where the variance is calculated from the maximum likelihood estimate $\hat{\theta} = ( \#(x_i = 0)/n, \#(x_i = 1)/n, \#(x_i = 2)/n )$, using:
\[S_n^\prime = \sum_{i = 1}^n \frac{x_i - 1}{\sqrt{n\widehat{\text{var}}(x_i)}}.
\]
We plot the results of applying the Wald test in Figure \ref{fig:linkage}.
The solution is quite simple in this case, but in more complex models it is more computationally expensive to calculate the variance estimate under the alternative. \\
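The variance misspecification in this example is easy to verify directly. The short computation below (ours, using the sharing probabilities from the example) confirms that the variance of $x_i$ falls below the null value $0.5$ under $\theta_{alt1}$ and above it under $\theta_{alt2}$, while both alternatives over-share on average:

```python
import numpy as np

theta_null = np.array([0.25, 0.50, 0.25])
theta_alt1 = np.array([0.09, 0.80, 0.11])
theta_alt2 = np.array([0.29, 0.40, 0.31])
support = np.array([0.0, 1.0, 2.0])      # possible numbers of shared alleles

def mean_var(theta):
    """Exact mean and variance of the allele-sharing count."""
    m = (support * theta).sum()
    v = ((support - m) ** 2 * theta).sum()
    return m, v

m0, v0 = mean_var(theta_null)   # 1.0 and 0.5 under Mendelian sharing
m1, v1 = mean_var(theta_alt1)   # over-sharing, variance below 0.5
m2, v2 = mean_var(theta_alt2)   # over-sharing, variance above 0.5
```

Since the score-type statistic standardizes by the null variance $0.5$ in both cases, its scale under the alternative is wrong in opposite directions for the two alternatives, producing the two distinct histogram shapes in Figure \ref{fig:linkage}.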
\noindent\textbf{Example 1 revisited.} The abnormal distribution of $p$-values in this scenario is caused by a large numerical value of $\rho_{n,3}$ and $\rho_{n,4}$.
Going back to Example 1, we look at the theoretically predicted behaviour of the $p$-values under the null and alternative.
Figure \ref{fig:test} shows the histograms of the empirical $p$-values obtained by simulation versus the theoretical prediction given in Theorem 1, shown as the blue curve.
Without accounting for the higher order terms in the expansion we would have expected the null distribution to be uniform, however, using Theorem 1, we obtain a much more accurate description of the $p$-value distribution.
In the bottom panel of Figure \ref{fig:test} we also show a corrected version of the $p$-value approximation using the saddlepoint approximation, which will be introduced in Section 3.
The estimation of small $p$-values based on the standard normal approximation can be drastically optimistic. We report in Table \ref{tb:deltas}
the differences between the exact and the approximate $p$-values obtained from Example 1 for the 5 smallest $p$-values.
The smallest $p$-values from the normal approximation are not on the same scale as the exact $p$-values, the smallest approximate $p$-value being five orders of magnitude smaller than its exact counterpart.
In contrast, the $p$-values produced by the saddlepoint approximation are very close to the exact ones.
\begin{table}[b]
\caption{ \label{tb:deltas} Table of $p$-values obtained from Example 1 under the null. The exact $p$-values are obtained from the density of the gamma distribution, the approximate $p$-values from the normal approximation, and the saddlepoint $p$-values from the correction described in Section 3.}
\centering
\begin{tabular}{rrrrr}
\hline
ID & rank & $p$-value exact & $p$-value approx. & $p$-value saddlepoint \\
\hline
60326 & 1 & 1.04E-05 & 1.04E-10 & 1.04E-05 \\
91132 & 2 & 1.46E-05 & 3.06E-10 & 1.47E-05 \\
83407 & 3 & 2.12E-05 & 9.66E-10 & 2.12E-05 \\
97470 & 4 & 3.31E-05 & 3.75E-09 & 3.32E-05 \\
2573 & 5 & 3.80E-05 & 5.66E-09 & 3.81E-05 \\
\hline
\end{tabular}
\end{table}
\begin{example}
We examine the influence of the effect size $\mu_n$ on the distribution of the $p$-values under the alternative using the same set-up as in Example 1. In our simulations we increase the effect size $\mu_n$ by changing the value of $\beta$, while keeping $\alpha$ fixed. The results are displayed in Figure \ref{fig:altnernaives}.
\begin{figure}[H]
\centering
\subfloat[Small effect size]{\includegraphics[width = 1.75in]{alt_1025.png}}
\subfloat[Medium effect size]{\includegraphics[width = 1.75in]{alt_105.png}}
\subfloat[Large effect size]{\includegraphics[width = 1.75in]{alt_1075.png}} \\
\subfloat[Small effect size]{\includegraphics[width = 1.75in]{alt_1025_s.png}}
\subfloat[Medium effect size]{\includegraphics[width = 1.75in]{alt_105_s.png}}
\subfloat[Large effect size]{\includegraphics[width = 1.75in]{alt_1075_s.png}}
\caption{Distribution of the approximated $p$-values (top panel) and the corrected $p$-values (bottom panel), under three different alternatives with $\alpha_1 =\alpha_2 = \alpha_3 =0.01$ and $\beta_1 = 0.01/1.025$, $\beta_2 = 0.01/1.05$ and $\beta_3 = 0.01/1.1$, from left to right.
As the effect size increases, the approximate $p$-values start to behave in the expected manner,
while the corrected $p$-values obtained using the saddlepoint approximation are well behaved for all effect sizes.}
\label{fig:altnernaives}
\end{figure}
As discussed in Remark 3, for large effect sizes $\mu_n$, the distribution of $p$-values generated from the test statistic follows the expected trend, where there is a concentration of $p$-values around $0$ and the density decreases in a monotone fashion to 1.
Conversely, should $\mu_n$ be small, the behaviour under the alternative can be quite different from what we would expect, as illustrated by the top-left plot in Figure \ref{fig:altnernaives}.
\end{example}
\section{Additional Examples and Possible Remedies}
We provide additional examples of problematic $p$-value distributions, and we explore some possible remedies based on higher order asymptotics.
A commonly used tool for higher order asymptotics is the saddlepoint approximation, which is a density approximation that can be integrated to obtain tail probabilities, e.g. $p$-values.
For a good survey of the saddlepoint approximation and its applications in statistics, we refer the reader to \cite{reid1988} or for a more technical reference, we suggest \cite{jensen1995saddlepoint} or \cite{kolassa1994series}.
The saddlepoint approximation can be most easily obtained for a sum or average of independent random variables, $X_1, \dots, X_n$. The density approximation then results in an approximation of the cumulative distribution through a tail integration argument,
\begin{align}
P(\bar{X} < s) = \Phi(r_s)\lbrace 1 + O(n^{-1}) \rbrace, \label{accuracy_saddle}
\end{align}
where $r_s$ is a quantity constructed from the saddlepoint and the cumulants of the distribution of the $X_i$'s. This can be used for conditional inference in generalized linear models by approximating the distribution of the sufficient statistics in an exponential family model; see \cite{davison}.
Another more broadly applicable tail approximation is the normal approximation to the $r^\star$ statistic \citep{barndorff1989asymptotic}, which is obtained by adding a correction factor to $r$, the likelihood root. It can be used in regression settings for inference on a scalar parameter of interest.
Let $r =\text{sign}(\hat\psi - \psi_0) [2 \lbrace l_{\text{pro}}(\hat\psi) - l_{\text{pro}}(\psi_0) \rbrace]^{1/2}$ denote the likelihood root; in what follows, the quantity $Q$ varies depending on the model.
\begin{align*}
P(r < s) = \Phi\left\lbrace r + \frac{1}{r} \log\left(\frac{Q}{r}\right)\right\rbrace \left\lbrace 1 +O\left( n^{-3/2} \right) \right\rbrace.
\end{align*}
Using the above, we also obtain an improved approximation to the true distribution of the likelihood root.
For a discussion of $r^\star$ see \cite{reid_wald}.
Contrary to the score test, the proposed methods require two model fits, one under the null and one under the alternative, in order to obtain $r$.
The methods listed here are by no means comprehensive since there are a variety of other candidates which may be of use, such as the often applied Firth correction \citep{firth} or other forms of bias correction obtainable by adjusting the score equation \citep{kosmidis2020mean}.
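To give a feel for how sharp the $r^\star$ correction can be, the toy example below (our own, not one of the data analyses in this paper) tests the rate of i.i.d. exponential observations, a one-parameter model in which the exact $p$-value is available through the Gamma distribution of the sum; the sample size and observed mean are hypothetical:

```python
import numpy as np
from scipy.stats import norm, gamma

# One-sided test of H0: lam = lam0 against lam > lam0 for i.i.d.
# Exponential(rate lam) data with observed mean ybar.
n, lam0, ybar = 10, 1.0, 0.5
lam_hat = 1.0 / ybar                          # MLE of the rate

def loglik(lam):
    return n * (np.log(lam) - lam * ybar)

r = np.sign(lam_hat - lam0) * np.sqrt(2.0 * (loglik(lam_hat) - loglik(lam0)))
q = (lam_hat - lam0) * np.sqrt(n) / lam_hat   # Wald-type quantity
r_star = r + np.log(q / r) / r

p_exact = gamma.cdf(n * ybar, a=n, scale=1.0 / lam0)  # n*ybar ~ Gamma(n, lam0)
p_rstar = norm.sf(r_star)                     # r-star approximation
p_wald = norm.sf(q)                           # first-order Wald p-value
```

With these values $p_{\text{exact}} \approx 0.0318$; the $r^\star$ $p$-value matches it to about four decimal places, while the Wald $p$-value comes out near $0.057$.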
\subsection{The Gamma example}
We apply the saddlepoint approximation to Example 1 and display the results in Figure \ref{fig:test}.
Considering the null $H_0: \alpha = \beta = 0.01$ (the two plots on the left panel), there is a spike around 0 for $p$-values obtained using the CLT (top left plot).
In contrast, we see a marked improvement of the overall
behaviour of the $p$-value distribution after the proposed correction (bottom left plot).
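For completeness, the Lugannani--Rice tail formula behind such corrections can be sketched for the mean of i.i.d. $\mathrm{Gamma}(\alpha, \beta)$ variables with $\alpha = \beta = 0.01$ as in the null above. The observed mean $x = 12$ is a hypothetical value chosen to probe the far tail, and all variable names are ours:

```python
import numpy as np
from scipy.stats import norm, gamma

alpha, beta, n, x = 0.01, 0.01, 100, 12.0   # rate parametrization, mean 1

# Saddlepoint for K(s) = -alpha*log(1 - s/beta): solve K'(s_hat) = x.
s_hat = beta - alpha / x
K = -alpha * np.log(1.0 - s_hat / beta)
K2 = alpha / (beta - s_hat) ** 2            # K''(s_hat)

r = np.sign(s_hat) * np.sqrt(2.0 * n * (s_hat * x - K))
q = s_hat * np.sqrt(n * K2)
p_saddle = norm.sf(r) + norm.pdf(r) * (1.0 / q - 1.0 / r)   # Lugannani-Rice

p_exact = gamma.sf(n * x, a=n * alpha, scale=1.0 / beta)    # sum is Gamma
p_clt = norm.sf(np.sqrt(n) * (x - alpha / beta) / (np.sqrt(alpha) / beta))
```

Here the exact tail is $e^{-12} \approx 6.1 \times 10^{-6}$; the saddlepoint value agrees with it to within a few percent, whereas the normal approximation returns a value on the order of $10^{-28}$, echoing the discrepancies reported in Table \ref{tb:deltas}.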
\subsection{Logistic Regression in Genetic Association Studies}
We apply the normal approximation to $r^\star$ to a simulated genome-wide association study to further illustrate the practical use of the proposed correction.
We consider a logistic regression model that links the probability of an individual suffering from a disease to that individual's single nucleotide polymorphism (SNP), a genetic ordinal variable coded as 0, 1 or 2, and other covariates such as age and sex.
Formally, let the disease status of the individual be $Y_i$, which is either $0$ (individual is healthy) or $1$ (individual is sick), let $\pi_i = E[Y_i]$ denote the probability of individual $i$ having the disease, and let $X_{i, s}$ denote the genetic covariate of interest of the $i$-th individual, while $X_{i,j}$, $j = 1, 2$, are the other covariates.
The regression model is:
\begin{align*}
\text{logit}(\pi_i) = X_{i,s} \beta_s + \sum_{j = 1}^2 X_{i,j} \beta_{j} + \beta_0.
\end{align*}
We consider the difficult case where the disease is uncommon in the population and the SNPs of interest are rare, i.e. most observed values of $X_{i,s}$ are 0.
It is known that in this situation the single-SNP test performs poorly, and pooled analyses of multiple SNPs have been proposed \citep{pooled}.
However for the purpose of this study, we assume that the individual SNPs are of interest.
We consider a simulated example to demonstrate the effectiveness of the correction.
We generate a sample of 3,000 individuals; the genetic variable $X_s$ is simulated from a $Binomial(2, 0.025)$, a binary variable $X_1$ from a $Binomial(1,0.5)$, and finally $X_2$ from a $N(20, 1)$.
We let $\beta_0 = -3.5$, $\beta_s = 0$, $\beta_1 = 0.02$ and $\beta_2 = 0.02$.
With this set of parameters we would expect on average $ \approx 4.6\%$ of the cohort to be in the diseased group, based on the expected value of the covariates, i.e. approximately 137 participants with $Y_i = 1$.
For each replication of the simulation, we re-generate the labels from the logistic model.
Figure \ref{fig:6} shows that the correction works well under the null.
\begin{figure}[ht]
\centering
\subfloat{\includegraphics[width=2.75in]{Wald_null}}
\subfloat{\includegraphics[width=2.75in]{cond_inf_null}}
\caption{Empirical distribution of the null $p$-values from a logistic regression association study of SNPs with low minor allele frequency and a low number of diseased individuals.
The left histogram displays the $p$-values from the Wald test under the null; the right histogram displays the $p$-values obtained from $r^\star$ under the null. }
\label{fig:6}
\end{figure}
This example suggests that the usefulness of the proposed higher order corrections is not limited to small-sample scenarios, as noted by \cite{zhou2018efficiently}, who used the saddlepoint approximation in case-control studies with extreme sample imbalance.
Naively we would expect that with 3,000 participants, of which 137 are in the diseased group, the Wald test should behave correctly. However, the skewed distribution of the SNP values severely reduces the accuracy of the test.
The use of $r^\star$ corrects the distribution of the $p$-values as shown in Figure \ref{fig:6} (right plot) where the distribution of the $p$-values under the null ($\beta_s=0$) is approximately Unif(0, 1) as expected.
The poor performance of the standard approximation, despite a sample of 3,000 individuals of which roughly 137 are affected by the disease, suggests that in this particular example the effective sample size for the diseased group is considerably lower than 137.
Next we consider a simple regression with a single genetic covariate in order to illustrate the loss in information resulting from the sparsity of the minor allele.
We use the available Fisher information about the parameter of interest as a measure of effective sample size.
The variance of the estimate of the parameter of interest, obtained from the inverse information matrix, is
\begin{align*}
var(\hat\beta_s) &= \frac{\sum_{i = 1}^{n} \hat{P_i} (1 - \hat{P_i})}{\sum_{i = 1}^{n} \hat{P_i} (1 - \hat{P_i})\sum_{i = 1}^{n} x_i^2 \hat{P_i} (1 - \hat{P_i}) - (\sum_{i = 1}^{n} x_i \hat{P_i} (1 - \hat{P_i}) )^2},\\
&\approx \frac{1}{\sum_{i = 1}^{n} x_{i} \hat{P_i} (1 - \hat{P_i})},
\end{align*}
where $\hat{P}_i$ is the predicted probability of an individual being diseased and the approximation is valid under the assumption that the allele frequency is low enough such that we observe very few 1's and almost no 2's.
The information about the parameter $\beta_s$ increases with $\sum_{i = 1}^{n} x_i \hat{P_i} (1 - \hat{P_i})$, so the rate at which information accrues is limited by the sparsity of the rare allele.
In order to gain more information about the parameter, we would need to observe more individuals who carry the rare allele, i.e. $x_i \ne 0$.
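The order of magnitude of this effect is easy to check numerically. The sketch below (ours, with the fitted probabilities held constant at a hypothetical $0.046$) compares the full inverse-information variance with the sparse-allele approximation:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3000
x = rng.binomial(2, 0.025, size=n)    # rare SNP genotypes, mostly zeros
P = np.full(n, 0.046)                 # hypothetical fitted probabilities
w = P * (1.0 - P)

# Full variance of the estimate of beta_s from the inverse information,
# and the sparse-allele approximation 1 / sum_i x_i P_i (1 - P_i).
full = w.sum() / (w.sum() * (x ** 2 * w).sum() - (x * w).sum() ** 2)
approx = 1.0 / (x * w).sum()
```

The two quantities agree to within a few percent here; both are driven by the handful of carriers with $x_i \neq 0$, so the information grows with the number of carriers rather than with the total sample size.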
\subsection{Logistic Regression - Data from the 1000 Genome Project}
We consider an additional logistic regression example as this type of model is broadly used in statistical genetics.
Using phase 3 data from the 1000 Genome project \citep{1000genome}, we construct an artificial observational study in order to study how these approximations behave on real genome-wide genetic data.
In our simulations, we take the 2504 individuals within the database and assign the $i$-th individual a label of $0$ or $1$ based on the following logistic model, where $\pi_i = P(Y_i = 1)$:
\[ \text{logit}(\pi_i) = \sum_{j = 1}^4 X^i_j \beta_{j} + \beta_\text{Sex} I( \text{Sex}_i = \text{male}) + \beta_0, \]
where $\text{Sex}_i$ is the biological sex of the $i$-th individual. Four other covariates are included, where $X^i_j$ are independent for all $i, j$ and follow a standard normal distribution.
The model coefficients are set to
\[(\beta_0, \beta_1, \beta_2, \beta_3, \beta_4, \beta_{\text{Sex}}) = (-3.25, 0.025, -0.025, 0.025, -0.03, 0.1).\]
Once we assign a label to the $i$-th individual we keep it fixed throughout the simulation.
We then fit a logistic model using the SNPs for which the minor allele frequency is at least $1\%$ on chromosome $10$, and ethnicity as additional covariates.
We use the Wald test and $r^\star$, but do not consider the cases where perfect separation occurs, as neither of the methods considered here can deal with this issue.
We plot some of the results for the Wald test and $r^\star$.
We focus on rare variants with MAF $\leq 2.5\%$ and semi-common variants with $2.5\% <$ MAF $\leq 10\%$, as the remaining common variants are expected to behave well.
In total $160,580$ SNPs fall into the rare variant category while $176,350$ SNPs fall into the semi-common variant category.
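The label-generating step can be sketched as follows. This is a minimal stand-in for illustration only: the real study uses the recorded sex and genotypes of the 2504 individuals in the panel, whereas here the sex indicator and covariates are drawn at random.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2504  # size of the 1000 Genomes phase 3 panel

X = rng.standard_normal((n, 4))      # four IID standard-normal covariates
sex = rng.integers(0, 2, size=n)     # stand-in for the panel's recorded sex (1 = male)

beta = np.array([0.025, -0.025, 0.025, -0.03])
beta0, beta_sex = -3.25, 0.1

logit = beta0 + X @ beta + beta_sex * sex
pi = 1.0 / (1.0 + np.exp(-logit))    # pi_i = P(Y_i = 1)
y = rng.binomial(1, pi)              # labels, kept fixed for the rest of the simulation
```

With the intercept of $-3.25$ the disease prevalence is roughly $4\%$, so the outcome is itself rare, compounding the sparsity of rare variants.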
\begin{figure}[h]
\centering
\subfloat[Rare variants, Wald]{\includegraphics[width=2in]{wald_low_MAF_10.png}}
\subfloat[Rare variants, $r^\star$]{\includegraphics[width=2in]{rstar_low_MAF_10.png}}\\
\subfloat[Semi-common variants, Wald]{\includegraphics[width=2in]{wald_mid_MAF_10.png}}
\subfloat[Semi-common variants, $r^\star$]{\includegraphics[width=2in]{rstar_mid_MAF_10.png}}
\caption{Distribution of $p$-values for Wald test and $r^\star$. The null distribution was simulated by using sex and four other randomly generated covariates. We fit a logistic regression model using SNPs from chromosome 10, with $160,580$ being rare ($\text{MAF}\leq 2.5\%$ ) and $176,350$ semi-common variants ($2.5\% < \text{MAF}\leq 10\%$ ).}
\label{altnernaives}
\end{figure}
As expected, the two tests behave better for semi-common SNPs than for rare SNPs (bottom vs. top panels of Figure~\ref{altnernaives}), producing $p$-values that more closely follow the Unif(0,1) distribution. Of the two tests, the proposed $r^\star$ method clearly outperforms the traditional Wald test.
However, this application also points out the limitation of $r^\star$ as the correction for rare variants is not sufficient (top right plot), and further improvement of the method in this case is of future interest.
\subsection{Weibull survival regression}
Consider an example where there is a large number of nuisance parameters, leading to an inconsistent estimate of the variance.
We examine a Weibull survival regression model in which all of the regression coefficients, except the intercept, are set to $0$ by simulating $y_i \sim \text{Weibull}(1,2)$, independently of any covariate.
We set the number of observations $n$ to $200$ and the number of covariates to $50$, generate the covariates as IID standard Gaussian, and test whether the first (non-intercept) regression coefficient is $0$.
We perform 10,000 replications and plot the histogram of the $p$-values, and compare the Wald test to the $r^\star$ correction.
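A single replication can be sketched as below. This is our own rough construction: it fits the Weibull accelerated-failure-time log-likelihood directly with \texttt{scipy}, and uses the BFGS inverse-Hessian as a crude stand-in for the inverse observed information; the paper's fitting routine may differ.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n, p = 200, 50
X = rng.standard_normal((n, p))          # IID standard Gaussian covariates
y = 2.0 * rng.weibull(1.0, size=n)       # Weibull(1, 2) responses, independent of X

# Weibull AFT model: log y = mu + X beta + sigma * eps, eps ~ standard minimum Gumbel
def negloglik(theta):
    mu, logsig, beta = theta[0], theta[1], theta[2:]
    sig = np.exp(logsig)
    z = (np.log(y) - mu - X @ beta) / sig
    # extreme-value log-density of log y: z - exp(z) - log(sigma); clip for stability
    return -np.sum(z - np.exp(np.minimum(z, 30.0)) - np.log(sig))

fit = minimize(negloglik, np.zeros(p + 2), method="BFGS")
# Wald p-value for beta_1, using the BFGS inverse-Hessian approximation
se = np.sqrt(fit.hess_inv[2, 2])
pval = 2.0 * norm.sf(abs(fit.x[2] / se))
```

Repeating this 10,000 times and histogramming `pval` reproduces the kind of null distribution shown in Figure~\ref{regression}.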
\begin{figure}[H]
\centering
\subfloat{\includegraphics[width=0.35\linewidth]{weibull.png}}
\subfloat{ \includegraphics[width=0.35\linewidth]{weibull_r.png}}
\caption{On the left, a histogram of the $p$-values produced by the Wald test for $\beta_1 = 0$ under the null with $n = 200$ and $p = 50$ with no censoring. On the right, a histogram of the $p$-value obtained from the $r^\star$ correction. 10,000 replications were performed.}
\label{regression}
\end{figure}
In Figure \ref{regression} we see a high concentration of $p$-values around $0$ for the Wald test, leading to an increased type I error rate.
The corrective procedure brings the distribution under the null much closer to uniformity.
We see that naively adding more and more covariates to the model while performing inference on a one-dimensional parameter of interest is problematic, as it creates spurious significance of that parameter under the null.
\section{Discussion and Conclusion}
We characterize the distribution of $p$-values when the test statistic is not well approximated by a normal distribution by using additional information contained in the higher order cumulants of the distribution of the test statistic.
We also demonstrate that there are issues beyond failure to converge to normality, in that the expectation and variance of the test statistics can be misspecified, and these issues can persist even in large-sample settings.
In doing so we have extended the previous work done by \cite{hung} to greater generality, examining the score test in exponential models in the presence of nuisance parameters.
We also examine some possible remedies for making the $p$-value distribution adhere more closely to their usual required behaviour such as uniformity under the null or concavity of the CDF under the alternative.
These assumptions are very important to justify the usage of current FWER and FDR procedures.
The proposed remedies may not solve all problems
relating to the $p$-value distribution in the finite sample settings, but they do at least partially correct some of the flaws.
We suggest the use of the proposed saddlepoint approximation or the normal approximation to $r^\star$ in practice, because a) the exact distribution of a test statistic is often unknown, b) the usual CLT approximation may not be adequate, and c) the high order methods are easy to implement.
This will ensure a closer adherence to the assumptions usually needed to conduct corrective procedures used in FWER control or FDR control.
\section*{Acknowledgement}
The first author would like to thank Nancy Reid, Michele Lambardi di San Miniato and Arvind Shrivats for the help and support they provided. We also thank the Natural Sciences and Engineering Research Council, the Vector Institute and the Ontario government for their funding and support.
\newpage
\section{Introduction}
Neutrino conversions from one flavor to another combined with the change of the
particle helicity, e.g. $\nu_e^\mathrm{L} \leftrightarrow \nu_\mu^\mathrm{R}$, are
usually called neutrino spin-flavor oscillations (see Ref.~\cite{qmSFO}). This
neutrino oscillations scenario is important since it could be the explanation of
the time variability of the solar neutrino flux (see, e.g., Ref.~\cite{solarnu}).
Massive flavor neutrinos are known to mix and can have non-zero magnetic moments.
The influence of the strong magnetic field with the realistic profile could lead to
the spin-flavor oscillations of solar neutrinos (see, e.g.,
Ref.~\cite{realisticB}). Moreover, studying neutrino spin-flavor oscillations
happening inside the Sun, one will be able to discriminate between different solar
models~\cite{PicPulAndBarMan07}. However, it was found in Ref.~\cite{smallcontr}
that neutrino spin-flavor oscillations in solar magnetic fields give a sub-dominant
contribution to the total conversion of solar neutrinos.
In this paper we study neutrino spin and spin-flavor oscillations in matter and in
an external magnetic field. We suppose that a neutrino is a Dirac particle with a
non-zero magnetic moment. It should be mentioned that in spite of the recent claims
of the experimental confirmation that neutrinos are Majorana
particles~\cite{KleKriDieChk04}, the question about the nature of neutrinos is
still open~\cite{EllEng04}. The possibility to distinguish between Dirac and
Majorana particles in the partially polarized solar neutrino flux, due to the
spin-flavor precession, was examined in Ref.~\cite{Sem97}.
To describe the evolution of the neutrino system we apply the technique based on
the relativistic quantum mechanics. We start from the exact solution to the Dirac
equation in an external field and then derive the neutrino wave functions
satisfying the given initial condition. We used this method to describe neutrino
flavor oscillations in vacuum~\cite{FOvac}, in background matter~\cite{Dvo06EPJC}
and spin-flavor oscillations in an external magnetic field~\cite{DvoMaa07}. Note
that neutrino spin-flavor oscillations in electromagnetic fields of various
configurations were examined in Refs.~\cite{emfields,Dvo07YadFiz,twisting} using
the standard quantum mechanical approach.
In Sec.~\ref{SO} we find the solution to the Dirac equation for a neutrino
propagating in background matter and interacting with the twisting magnetic field.
Then we formulate the initial condition problem and obtain the transition
probability for spin oscillations in the given external fields. The standard
quantum mechanical transition probability formula is re-derived and the conditions
of its validity are analyzed. In Sec.~\ref{SFO} we apply the obtained Dirac
equation solutions to the description of neutrino spin-flavor oscillations in the
twisting magnetic field. First we discuss magnetic moment matrices of neutrinos in
flavor and mass eigenstates bases. Then we solve the initial condition problem in
two different cases of the magnetic moments matrix in the mass eigenstates basis
with (i) great diagonal elements and (ii) great non-diagonal elements. Note that
the analogous magnetic moments matrices were discussed in Ref.~\cite{DvoMaa07}. We
get neutrinos wave functions and calculate transition probabilities for processes
like $\nu_\beta^\mathrm{L}\xrightarrow{B}\nu_\alpha^\mathrm{R}$. The consistency of
the Dirac-Pauli equation approach with the standard quantum mechanical treatment of
spin-flavor oscillations, based on the Schr\"odinger evolution equation, is
considered in Sec.~\ref{QM}. Then in Sec.~\ref{APPL} we present some applications
and finally we summarize our results in Sec.~\ref{CONCL}.
\section{Neutrino spin oscillations in matter and in a twisting magnetic
field\label{SO}}
In this section we obtain the exact solution to the Dirac-Pauli equation for a
neutrino interacting with background matter and a twisting magnetic field and
discuss spin oscillations of a single Dirac neutrino in the given external fields.
A neutrino is taken to have the non-zero mass $m$ and the magnetic moment $\mu$.
The Lagrangian for this system has the form,
\begin{equation}\label{LagrmattB}
\mathcal{L}=\bar{\nu}(\mathrm{i}\gamma^\mu\partial_\mu - m)\nu-
\bar{\nu}\gamma_\mu^\mathrm{L}\nu f^\mu-
\frac{\mu}{2}\bar{\nu}\sigma_{\mu\nu}\nu F^{\mu\nu},
\end{equation}
where $\gamma_\mu^\mathrm{L}=\gamma_\mu(1+\gamma^5)/2$,
$\sigma_{\mu\nu}=(\mathrm{i}/2)(\gamma_\mu\gamma_\nu-\gamma_\nu\gamma_\mu)$
and $F_{\mu\nu}=(\mathbf{E},\mathbf{B})$ is the electromagnetic
field tensor. In the following we will discuss the situation when
only magnetic field $\mathbf{B}$ is presented, i.e.
$\mathbf{E}=0$. The neutrino interaction with matter is
characterized by the four vector $f^\mu$. For the non-moving and
unpolarized matter one can take that the spatial components of the
vector $f^\mu$ are zero, i.e. $\mathbf{f}=0$. If, for instance, we consider an
electron neutrino propagating in matter, which consists of
electrons, protons and neutrons, we obtain for the time component, $f^0$,
of the vector $f^\mu$ (see, e.g., Ref.~\cite{DvoStu02JHEP}),
\begin{align}\label{fcomp}
f^0 = & \sqrt{2}G_\mathrm{F}
\sum_{f=e,p,n} n_f q_f,
\notag
\\
q_f = & (I_{3\mathrm{L}}^{(f)}-2Q^{(f)}\sin^2\theta_W+\delta_{ef}),
\end{align}
where $n_f$ is the number density of background particles, $I_{3\mathrm{L}}^{(f)}$
is the third isospin component of the matter
fermion $f$, $Q^{(f)}$ is its electric charge, $\theta_{W}$ is
the Weinberg angle and $G_\mathrm{F}$ is the Fermi constant.
It should be noted that Eqs.~\eqref{LagrmattB} and~\eqref{fcomp} constitute the
phenomenological model studied in the present paper. These expressions are valid in
a relatively weak external magnetic field. For example, one has to take into
account the spatial components of the vector $f^\mu$ if we describe neutrino
propagation in background matter composed of electrons under the influence of a
very strong magnetic field with $\sqrt{|\mathbf{B}|} \gg
\max(m_e,T,\mathfrak{M},|\mathbf{p}|)$, where $m_e$ is the electron mass, $T$ is
the temperature of background matter, $\mathfrak{M}$ is its chemical potential and
$\mathbf{p}$ is the neutrino momentum. This situation was analyzed in
Ref.~\cite{EliFerInc04}.
Using Eq.~\eqref{LagrmattB} one writes down the Dirac equation which accounts for
the neutrino interaction with matter and magnetic field,
\begin{align}\label{DireqmattB}
\mathrm{i}\dot{\nu}= & \mathcal{H}\nu,
\notag
\\
\mathcal{H}= & (\bm{\alpha}\hat{\mathbf{p}})+\beta m -
\mu\beta(\bm{\Sigma}\mathbf{B})+f^0(1+\gamma^5)/2,
\end{align}
where $\bm{\alpha}=\gamma^0\bm{\gamma}$, $\beta=\gamma^0$ and
$\bm{\Sigma}=\gamma^0\gamma^5\bm{\gamma}$ are the Dirac matrices. Let us discuss
the case of the twisting magnetic field, $\mathbf{B}=B(0,\sin \omega x,\cos \omega
x)$, where $\omega$ is the frequency of the magnetic field rotation. Sometimes it
is called the spiral undulator magnetic field. Note that neutrino oscillations in
twisting magnetic fields within the quantum mechanical approach were studied
in Ref.~\cite{twisting}.
We notice that the Hamiltonian $\mathcal{H}$ in
Eq.~\eqref{DireqmattB} depends on neither $y$ nor $z$ coordinates.
Therefore we assume that the wave function depends on these
coordinates exponentially, $\nu \sim \exp(\mathrm{i}p_y y +
\mathrm{i}p_z z)$, where $p_y$ and $p_z$ are constant values. Then
for simplicity one can take that $p_y=p_z=0$. It means that a
neutrino moves along the undulator axis. Let us express the neutrino wave function
in terms of the two
component spinors, $\nu^\mathrm{T}=(\varphi,\chi)$. On the basis
of Eq.~\eqref{DireqmattB} we receive equations for the two
component spinors,
\begin{align}\label{phichi}
\mathrm{i}\dot{\varphi} = &
(m-\mu(\bm{\sigma}_{\perp{}}\mathbf{B})+f^0/2)\varphi +
(\sigma_1 \hat{p}_x-f^0/2) \chi,
\notag
\\
\mathrm{i}\dot{\chi} = &
(-m+\mu(\bm{\sigma}_{\perp{}}\mathbf{B})+f^0/2)\chi +
(\sigma_1 \hat{p}_x-f^0/2) \varphi,
\end{align}
where $\hat{p}_x=-\mathrm{i}\partial_x$ and
$\bm{\sigma}_{\perp{}}=(\sigma_2,\sigma_3)$.
Now we replace the neutrino wave function $\nu$ with the new one, $\nu \to
\tilde{\nu} = \mathcal{U}^\dag \nu$, where
$\mathcal{U}=\mathrm{diag}(\mathfrak{U},\mathfrak{U})$ and
$\mathfrak{U}=\cos(\omega x/2)+\mathrm{i}\sigma_1\sin(\omega x/2)$. Then we again
express the new wave function using the two component spinors,
$\tilde{\nu}^\mathrm{T}=(\xi,\eta)$, with $\varphi=\mathfrak{U}\xi$ and
$\chi=\mathfrak{U}\eta$. With help of the following properties of the matrix
$\mathfrak{U}$:
$\mathfrak{U}^\dag(\bm{\sigma}_{\perp{}}\mathbf{B})\mathfrak{U}=\sigma_3 B$,
$\mathrm{d}\mathfrak{U}/\mathrm{d}x=\mathrm{i}\sigma_1 \omega \mathfrak{U}/2$ and
$\mathfrak{U}^\dag \sigma_1 \mathfrak{U}=\sigma_1$, as well as using
Eq.~\eqref{phichi} we arrive at the equations for the new two-component spinors,
\begin{align}\label{xieta}
\mathrm{i}\dot{\xi} = &
(m-\mu B \sigma_3+f^0/2)\xi
\notag
\\
& + [(\omega-f^0)/2+\sigma_1 \hat{p}_x]\eta,
\notag
\\
\mathrm{i}\dot{\eta} = &
(-m+\mu B \sigma_3+f^0/2)\eta
\notag
\\
& + [(\omega-f^0)/2+\sigma_1 \hat{p}_x]\xi.
\end{align}
We notice that Eq.~\eqref{xieta} do not contain the dependence on $x$ coordinate.
Thus one gets that the new wave function depends on
$x$ as $\tilde{\nu} \sim \exp(\mathrm{i}p x)$, where $p$ is a
constant value, the analog of the particle momentum. It means that we can
replace $\hat{p}_x \to p$ in Eq.~\eqref{xieta}.
We look for stationary solutions to Eq.~\eqref{xieta}, i.e. $\tilde{\nu} \sim
\exp(-\mathrm{i}E t)$. Supposing that this equation has a non-trivial solution we
obtain the energy levels in the form, $E = f^0/2 \pm E^{(\zeta)}$. The function
$E^{(\zeta)}$ depends on
the momentum and the characteristics of the external fields as
\begin{equation}\label{EnergymattB}
E^{(\zeta)}=\sqrt{\mathcal{M}^2+m^2+p^2-2\zeta R^2},
\end{equation}
where $R^2=\sqrt{p^2 \mathcal{M}^2 + (\mu B)^2 m^2}$ and $\mathcal{M}=\sqrt{(\mu
B)^2 + (\omega-f^0)^2/4}$.
In Eq.~\eqref{EnergymattB} $\zeta=\pm 1$ is the discrete quantum number.
Using energy spectrum~\eqref{EnergymattB} we can reproduce the results of the
previous works where the Dirac equation for a neutrino interacting with various
external fields was solved. Namely,
\begin{itemize}
\item neutrino interaction with a constant transversal magnetic field
(see, e.g., Ref.~\cite{DvoMaa07}). This situation corresponds to
$\omega=0$ and $f^0=0$.
Using Eq.~\eqref{EnergymattB} we get
$E = \pm \left( \sqrt{m^2+p^2}-\zeta\mu B \right)$ that coincides with the
energy spectrum used in Ref.~\cite{DvoMaa07};
\item neutrino interaction with background matter
(see Ref.~\cite{matterQFT}). This case corresponds to $\omega=0$ and
$B=0$. With help of Eq.~\eqref{EnergymattB} we receive that
$E = f^0/2 \pm \sqrt{(p-\zeta f^0/2)^2 + m^2}$ that coincides with
the results of Ref.~\cite{matterQFT}.
\end{itemize}
Note that, if we set $\omega=0$ and $B \neq 0$ in Eq.~\eqref{EnergymattB}, we
arrive at the case of a neutrino propagating in background matter under the
influence of a constant transversal magnetic field.
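These limiting cases are easy to verify numerically. The sketch below (our own check, with illustrative parameter values in natural units) evaluates the square-root part of Eq.~\eqref{EnergymattB} against both limits.

```python
import numpy as np

def big_e(p, zeta, m, muB, omega, f0):
    # square-root part of Eq. (EnergymattB); the full levels are E = f0/2 +/- big_e
    big_m = np.sqrt(muB**2 + (omega - f0) ** 2 / 4.0)
    r2 = np.sqrt(p**2 * big_m**2 + (muB * m) ** 2)
    return np.sqrt(big_m**2 + m**2 + p**2 - 2.0 * zeta * r2)

m, muB, p, f0 = 0.1, 0.3, 2.0, 0.4

# limit 1: omega = 0, f0 = 0  ->  E^(zeta) = |sqrt(m^2 + p^2) - zeta mu B|
lim1 = [big_e(p, z, m, muB, 0.0, 0.0) for z in (+1, -1)]
ref1 = [abs(np.sqrt(m**2 + p**2) - z * muB) for z in (+1, -1)]

# limit 2: omega = 0, B = 0  ->  E = f0/2 +/- sqrt((p - zeta f0/2)^2 + m^2)
lim2 = [big_e(p, z, m, 0.0, 0.0, f0) for z in (+1, -1)]
ref2 = [np.sqrt((p - z * f0 / 2.0) ** 2 + m**2) for z in (+1, -1)]
```

Both limits agree with the spectra of Refs.~\cite{DvoMaa07,matterQFT} to machine precision.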
The basis spinors $u^{(\zeta)}$ and $v^{(\zeta)}$ corresponding to the signs
$\pm{}$ in the dispersion relation can be found from Eq.~\eqref{xieta}. The general
expressions for these spinors, which account for the particle mass exactly, are
rather complicated. Therefore we present here the basis spinors for a relativistic
neutrino with $(m/E) \ll 1$,
\begin{align}\label{spinorsmattB}
u^{(\zeta)}= &
\frac{1}{2\sqrt{2\mathcal{M}(\mathcal{M}+\varDelta)}}
\begin{pmatrix}
\mu B+\zeta\mathcal{M}+\varDelta \\
\mu B-\zeta\mathcal{M}-\varDelta \\
\mu B-\zeta\mathcal{M}-\varDelta \\
\mu B+\zeta\mathcal{M}+\varDelta \
\end{pmatrix},
\notag
\\
v^{(\zeta)}= &
\frac{1}{2\sqrt{2\mathcal{M}(\mathcal{M}-\varDelta)}}
\begin{pmatrix}
\mathcal{M}-\varDelta-\zeta\mu B \\
\mathcal{M}-\varDelta+\zeta\mu B \\
\varDelta-\mathcal{M}-\zeta\mu B \\
\varDelta-\mathcal{M}+\zeta\mu B \
\end{pmatrix},
\end{align}
where $\varDelta = (\omega-f^0)/2$. Note that the basis spinors in
Eq.~\eqref{spinorsmattB} satisfy the orthonormality conditions,
\begin{equation}\label{oncso}
u^{(\zeta)\dag}u^{(\zeta')}=v^{(\zeta)\dag}v^{(\zeta')}=
\delta_{\zeta\zeta'},
\quad
u^{(\zeta)\dag}v^{(\zeta')}=0.
\end{equation}
\subsection{Neutrino evolution in matter under the influence of a twisting magnetic
field}
Using the approach developed in our previous works~\cite{FOvac,Dvo06EPJC,DvoMaa07}
we can formulate the initial condition problem for the system in question. For the
given initial wave function $\nu(x,0)$ one should find the wave function $\nu(x,t)$
at subsequent moments of time, while a particle propagates in the external
fields. This wave function has the form (see
Refs.~\cite{FOvac,Dvo06EPJC,DvoMaa07}),
\begin{equation}\label{numattB}
\nu(x,t)=
\mathcal{U}(x)
e^{-\mathrm{i} f^0 t/2}
\int_{-\infty}^{+\infty}\frac{\mathrm{d}p}{2\pi}
e^{\mathrm{i}px}S(p,t)\tilde{\nu}(p,0),
\end{equation}
where
\begin{equation}\label{FTtildenu}
\tilde{\nu}(p,0)=
\int_{-\infty}^{+\infty}\mathrm{d}x
e^{-\mathrm{i}px}\mathcal{U}^\dag(x)\nu(x,0),
\end{equation}
is the Fourier transform of the initial condition for the fermion $\tilde{\nu}$ and
\begin{align}\label{PJmattB}
S(p,t)= &
\sum_{\zeta=\pm 1}
\bigg[
\left(
u^{(\zeta)}\otimes u^{(\zeta)\dag}
\right)
\exp{(-\mathrm{i}E^{(\zeta)} t)}
\notag
\\
& +
\left(
v^{(\zeta)}\otimes v^{(\zeta)\dag}
\right)
\exp{(+\mathrm{i}E^{(\zeta)} t)}
\bigg],
\end{align}
is the analog of the Pauli-Jordan function for a spinor field interacting with
matter and a twisting magnetic field. The basis spinors $u^{(\zeta)}$ and
$v^{(\zeta)}$ are presented in Eq.~\eqref{spinorsmattB}. To derive
Eqs.~\eqref{numattB}-\eqref{PJmattB} we use orthonormality of the basis
spinors~\eqref{oncso}.
Let us suppose that initially a neutrino is in the state with the following wave
function: $\nu(x,0)=e^{\mathrm{i}kx}\xi_0$, where
$\xi_0^\mathrm{T}=(1/2)(1,-1,-1,1)$. It is possible to check that
$(1/2)(1-\Sigma_1)\xi_0=\xi_0$. Hence, the spinor $\nu(x,0)$ describes a particle
propagating along the $x$-axis, with its spin directed opposite to the $x$-axis,
i.e. a left-handed neutrino. Analogous initial condition was adopted in
Refs.~\cite{FOvac,Dvo06EPJC,DvoMaa07} where neutrino flavor and spin-flavor
oscillations were studied.
Using Eq.~\eqref{FTtildenu} we find that $\tilde{\nu}(p,0)=2\pi
\delta(p-k-\omega/2) \xi_0$. It is interesting to note that the following identity
is satisfied: $\left( v^{(\zeta)}\otimes v^{(\zeta)\dag} \right)\xi_0 = 0$.
Therefore no particles with ``negative'' energies appear for a neutrino interacting
with the considered external fields. Using Eqs.~\eqref{spinorsmattB}
and~\eqref{numattB}-\eqref{PJmattB} as well as the chosen initial condition we
arrive at the right-polarized component of the final wave function,
\begin{align}\label{nuR}
\nu^\mathrm{R}(x,t)= &
\frac{1}{2}(1+\Sigma_1)\nu(x,t)
\\
= &
\exp[\mathrm{i}(k+\omega)x-\mathrm{i} f^0 t/2]
\notag
\\
& \times
\notag
\frac{\mu B}{2\mathcal{M}}
\left.
\left(
e^{-\mathrm{i}E^{+{}}t}-e^{-\mathrm{i}E^{-{}}t}
\right)
\right|_{p=k+\omega/2}\kappa_0,
\end{align}
where $\kappa_0^\mathrm{T}=(1/2)(1,1,1,1)$.
Supposing that initially no right-polarized particles are present
and with help of Eq.~\eqref{nuR} we calculate the transition
probability for the process $\nu^\mathrm{L} \to \nu^\mathrm{R}$,
\begin{align}\label{PtrLR}
P_{\nu^\mathrm{L} \to \nu^\mathrm{R}}(t)= &
\frac{(\mu B)^2}{(\mu B)^2 + \varDelta^2}
\notag
\\
& \times
\sin^2
\left.
\left(
\frac{E^{+{}}-E^{-{}}}{2}t
\right)
\right|_{p=k+\omega/2}.
\end{align}
It can be seen from Eq.~\eqref{PtrLR} that the resonance in neutrino spin
oscillations occurs when $\varDelta \to 0$. One finds from Eq.~\eqref{EnergymattB}
that $(E^{+{}}-E^{-{}})/2 = -\mu B$ at $\varDelta=0$. Therefore the resonance
transition probability is always $P_\mathrm{res}(t)=\sin^2(\mu B t)$.
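This resonance behaviour can be checked directly. The sketch below (illustrative parameter values, $\varDelta = (\omega - f^0)/2$) evaluates Eq.~\eqref{PtrLR} with the oscillation phase computed from the exact spectrum~\eqref{EnergymattB}.

```python
import numpy as np

def osc_freq(p, m, muB, delta):
    # (E^+ - E^-)/2 from Eq. (EnergymattB), with Delta = (omega - f0)/2
    big_m = np.sqrt(muB**2 + delta**2)
    r2 = np.sqrt(p**2 * big_m**2 + (muB * m) ** 2)
    e = lambda z: np.sqrt(big_m**2 + m**2 + p**2 - 2.0 * z * r2)
    return (e(+1) - e(-1)) / 2.0

def prob_lr(t, p, m, muB, delta):
    # Eq. (PtrLR): amplitude (mu B)^2 / ((mu B)^2 + Delta^2), phase (E^+ - E^-) t / 2
    return muB**2 / (muB**2 + delta**2) * np.sin(osc_freq(p, m, muB, delta) * t) ** 2

muB, m, p, t = 0.2, 0.05, 3.0, 1.7
p_res = prob_lr(t, p, m, muB, delta=0.0)   # at resonance this equals sin^2(mu B t)
```

At $\varDelta = 0$ the computed phase is $-\mu B$ and the probability reduces to $\sin^2(\mu B t)$, as stated above.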
To analyze Eq.~\eqref{PtrLR} we introduce the group velocity,
\begin{equation}\label{grvel}
\mathcal{V}^{(\zeta)}=\frac{\partial E}{\partial p}=
\frac{p}{E^{(\zeta)}}
\left(
1-\zeta\frac{\mathcal{M}^2}{R^2}
\right).
\end{equation}
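As a consistency check (our own, with illustrative parameter values), the analytic expression~\eqref{grvel} can be compared against a central finite difference of the energy~\eqref{EnergymattB} in $p$.

```python
import numpy as np

def big_e(p, zeta, m, muB, delta):
    # square-root part of Eq. (EnergymattB), with Delta = (omega - f0)/2
    big_m = np.sqrt(muB**2 + delta**2)
    r2 = np.sqrt(p**2 * big_m**2 + (muB * m) ** 2)
    return np.sqrt(big_m**2 + m**2 + p**2 - 2.0 * zeta * r2)

def v_group(p, zeta, m, muB, delta):
    # Eq. (grvel): V = (p / E) * (1 - zeta M^2 / R^2)
    big_m = np.sqrt(muB**2 + delta**2)
    r2 = np.sqrt(p**2 * big_m**2 + (muB * m) ** 2)
    return p / big_e(p, zeta, m, muB, delta) * (1.0 - zeta * big_m**2 / r2)

m, muB, delta, p, h = 0.1, 0.3, 0.2, 1.5, 1e-6
analytic = [v_group(p, z, m, muB, delta) for z in (+1, -1)]
numeric = [(big_e(p + h, z, m, muB, delta) - big_e(p - h, z, m, muB, delta)) / (2 * h)
           for z in (+1, -1)]
```

The analytic and numerical derivatives agree, and $\mathcal{V}^{(\zeta)} \to 0$ as $p \to 0$, in line with the neutrino-capture case discussed below.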
Now we can distinguish three different cases.
\begin{enumerate}
\item \label{case1}
First we suppose that $p=0$. This situation can happen if
$k=-\omega/2$. With
help of Eq.~\eqref{EnergymattB} we obtain that the energy levels are
$E^{(\zeta)} = \sqrt{(m - \zeta \mu B)^2 + \varDelta^2}$.
Using Eq.~\eqref{grvel}
we find that the group velocity vanishes, $\mathcal{V}^{(\zeta)}=0$. It
means that the neutrino is captured by the twisting magnetic field.
%
\item \label{case2}
Now we assume that $p=\pm\mathcal{M}$, with $p\neq 0$.
For the definiteness we discuss the situation when $p=\mathcal{M}$
since the case $p=-\mathcal{M}$ can be considered analogously.
The energies corresponding to different values of $\zeta$ are
\begin{align}\label{Epm2case}
E^{+{}} = & m
\sqrt{1-\frac{(\mu B)^2}{\mathcal{M}^2}
+\frac{(\mu B)^4 m^2}{2\mathcal{M}^6}},
\notag
\\
E^{-{}} = & 2\mathcal{M}
\left(
1+\frac{m^2}{8\mathcal{M}^2}
\left[
1+\frac{(\mu B)^2}{\mathcal{M}^2}
\right]
\right).
\end{align}
%
Using Eq.~\eqref{grvel} one can compute the group velocities,
%
\begin{align}\label{Vpm2case}
\mathcal{V}^{+{}} = &
\frac{(\mu B)^2 m}{2\mathcal{M}^3}
\left[
1-\frac{(\mu B)^2}{\mathcal{M}^2}
+\frac{(\mu B)^4 m^2}{2\mathcal{M}^6}
\right]^{-1/2},
\notag
\\
\mathcal{V}^{-{}} = & 1-
\frac{m^2}{8\mathcal{M}^2}
\left[
1+3 \frac{(\mu B)^2}{\mathcal{M}^2}
\right].
\end{align}
%
In Eqs.~\eqref{Epm2case} and~\eqref{Vpm2case} we suppose that
$m \ll \mathcal{M}$. On the basis of
Eqs.~\eqref{Epm2case} and~\eqref{Vpm2case} we get the resonance energies
($\varDelta \to 0$),
%
\begin{align}
E^{+{}}_\mathrm{res} \to & \frac{m^2}{\sqrt{2} \mu B},
\notag
\\
E^{-{}}_\mathrm{res} \to & 2 \mu B
\left[
1+\frac{m^2}{4(\mu B)^2}
\right],
\end{align}
%
and group velocities
%
\begin{equation}\label{Vpm2caseRes}
\mathcal{V}^{+{}}_\mathrm{res} \to \frac{1}{\sqrt{2}},
\quad
\mathcal{V}^{-{}}_\mathrm{res} \to 1-
\frac{m^2}{2(\mu B)^2}.
\end{equation}
%
It should be noted that group velocities are always less than one,
$\mathcal{V}^{\pm{}}<1$ [see, e.g., Eq.~\eqref{Vpm2caseRes}].
\item The last situation is realized when $p \neq \pm\mathcal{M}$
and $p \neq 0$. The energies in this case have the form,
%
\begin{align}\label{energy4case}
E^{(\zeta)} = &
p-\zeta\mathcal{M}
\notag
\\
& +
\frac{m^2}{2(p-\zeta\mathcal{M})}
\left[
1-\zeta\frac{(\mu B)^2}{\mathcal{M}p}
\right].
\end{align}
%
The expression for the transition probability~\eqref{PtrLR} is now rewritten in
the following way:
\begin{align}\label{PtrLR4case}
P(t)= & \frac{(\mu B)^2}{(\mu B)^2 + \varDelta^2}
\notag
\\
& \times
\sin^2
\left(
\sqrt{(\mu B)^2 + \varDelta^2}t
\right).
\end{align}
%
Note that transition probability expressions for spin oscillations derived
earlier (see, e.g., Ref.~\cite{twisting}) coincide with
Eq.~\eqref{PtrLR4case} which is valid only if $p \neq \pm\mathcal{M}$
and $p \neq 0$.
\end{enumerate}
It should be noted that the ``non-standard'' regimes in neutrino spin oscillations
described in items~\ref{case1} and~\ref{case2} are likely to be realized for
neutrinos with small initial momenta (see also Sec.~\ref{APPL} below).
\section{Neutrino spin-flavor oscillations in a twisting magnetic field\label{SFO}}
Now we apply the results of the previous section to the description of neutrino
spin-flavor oscillations in a twisting magnetic field. Let us study the evolution
of two Dirac neutrinos $(\nu_\alpha,\nu_\beta)$ that mix and interact with the
external electromagnetic field $F_{\mu\nu}$. The Lagrangian for this system has the
form
\begin{align}\label{Lagrnu}
\mathcal{L}(\nu_{\alpha},\nu_{\beta})= &
\sum_{\lambda=\alpha,\beta}\bar{\nu}_\lambda
\mathrm{i}\gamma^\mu\partial_\mu \nu_\lambda -
\sum_{\lambda\lambda'=\alpha,\beta}
\bigg[
m_{\lambda\lambda'} \bar{\nu}_\lambda \nu_{\lambda'}
\notag
\\
& +
\frac{1}{2}
M_{\lambda\lambda'}
\bar{\nu}_{\lambda}\sigma_{\mu\nu}\nu_{\lambda'} F^{\mu\nu}
\bigg],
\end{align}
Here $(m_{\lambda\lambda'})$ and $(M_{\lambda\lambda'})$ are the mass and
magnetic moment matrices, which are generally independent. By definition these
matrices are introduced in the flavor eigenstates basis. The electromagnetic field
is taken to have the same configuration as in Sec.~\ref{SO}.
To analyze the dynamics of the system we again set the initial condition by
specifying the initial wave functions of the flavor neutrinos $\nu_{\lambda}$ and
then analytically determine the field distributions at following moments of time.
We assume that the initial condition is
\begin{equation}\label{inicondnu}
\nu_{\alpha}(x,0)=0,
\quad
\nu_{\beta}(x,0)=\xi(x),
\end{equation}
where $\xi(x)$ is a function to be specified. One of the possible choices for the
initial condition for $\nu_\beta$ is the plane wave field distribution,
$\xi(x)=e^{\mathrm{i} k x}\xi_0$ (see Refs.~\cite{FOvac,Dvo06EPJC,DvoMaa07}). If we
study ultrarelativistic initial particles, we can choose the spinor $\xi_0$ as in
Sec.~\ref{SO}, i.e. in the following form: $\xi_0^\mathrm{T}=(1/2)(1, -1, -1, 1)$.
In order to eliminate the vacuum mixing term in Eq.~\eqref{Lagrnu}, i.e. to
diagonalize the mass matrix, we introduce a new basis of the wave functions, the
mass eigenstate basis $\psi_a$, $a=1,2$, obtained from the original flavor basis
$\nu_{\lambda}$ through the unitary transformation
\begin{equation}\label{matrtrans}
\nu_{\lambda}=\sum_{a=1,2}U_{\lambda a}\psi_a,
\end{equation}
where the matrix $({U}_{\lambda a})$ is parametrized in terms of a mixing angle
$\theta$ as usual
\begin{equation}\label{matrU}
({U}_{\lambda a})=
\begin{pmatrix}
\cos \theta & -\sin \theta \\
\sin \theta & \cos \theta \
\end{pmatrix}.
\end{equation}
The Lagrangian~\eqref{Lagrnu} rewritten in terms of the fields $\psi_a$ takes the
form
\begin{align}\label{Lagrpsi}
\mathcal{L}(\psi_1,\psi_2)= & \sum_{a=1,2}\mathcal{L}_0(\psi_a)
\notag
\\
& -
\frac{1}{2}
\sum_{ab=1,2}\mu_{ab}\bar{\psi}_a\sigma_{\mu\nu}\psi_b F^{\mu\nu},
\end{align}
where $\mathcal{L}_0(\psi_a)=\bar{\psi}_a(\mathrm{i}\gamma^\mu
\partial_\mu-m_a)\psi_a$ is the Lagrangian for the free fermion $\psi_a$ with the
mass $m_a$ and
\begin{equation}\label{magmomme}
\mu_{ab}=\sum_{\lambda\lambda'=\alpha,\beta}
U^{-1}_{a\lambda}{M}_{\lambda\lambda'}U_{\lambda' b},
\end{equation}
is the magnetic moment matrix presented in the mass eigenstates basis. Using
Eqs.~\eqref{inicondnu}-\eqref{matrU} the initial conditions for the fermions
$\psi_a$ become
\begin{equation}\label{inicondpsi}
\psi_1(x,0)=\sin\theta\xi(x),
\quad
\psi_2(x,0)=\cos\theta\xi(x).
\end{equation}
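As a quick numerical check of Eq.~\eqref{magmomme} (with illustrative values for $\theta$ and the flavor-basis matrix), rotating a symmetric matrix by the orthogonal mixing matrix~\eqref{matrU} preserves its symmetry, so $\mu_{12}=\mu_{21}$ as used below, and also its trace.

```python
import numpy as np

theta = 0.35  # illustrative mixing angle
c, s = np.cos(theta), np.sin(theta)
U = np.array([[c, -s],
              [s,  c]])          # mixing matrix, Eq. (matrU)

# symmetric magnetic moment matrix in the flavor basis (illustrative values)
M_flavor = np.array([[1.0, 0.2],
                     [0.2, 0.8]])

# Eq. (magmomme): mu_ab = (U^{-1} M U)_ab, the matrix in the mass eigenstate basis
mu = np.linalg.inv(U) @ M_flavor @ U
```

Since $U$ is orthogonal, $U^{-1} = U^\mathrm{T}$ and the rotated matrix $(\mu_{ab})$ stays symmetric.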
For the given configuration of the electric and magnetic fields we write down the
Dirac-Pauli equation for $\psi_a$, resulting from Eq.~\eqref{Lagrpsi}, as follows:
\begin{equation}\label{Direqpsi}
\mathrm{i}\dot{\psi}_a=\mathcal{H}_a\psi_a+V\psi_b,
\quad
a,b=1,2,
\quad
a \neq b,
\end{equation}
where $\mathcal{H}_a=(\bm{\alpha}\mathbf{p})+\beta m_a-\mu_a \beta
(\bm{\Sigma}\mathbf{B})$ is the Hamiltonian for the particle $\psi_a$ accounting
for the magnetic field, $V=-\mu \beta (\bm{\Sigma}\mathbf{B})$ describes the
interaction of the transition magnetic moment with the external magnetic field,
$\mu_a=\mu_{aa}$, and $\mu=\mu_{12}=\mu_{21}$ are elements of the matrix
$({\mu}_{ab})$.
To find the general solution to Eq.~\eqref{Direqpsi} we follow the method used in
Sec.~\ref{SO} and introduce the new wave functions $\tilde{\psi}_a =
\mathcal{U}^\dag\psi_a$. All the calculations are identical to those made in
Sec.~\ref{SO}. Therefore we present the final result for the wave functions
$\tilde{\psi}_a$,
\begin{align}\label{GsolDPeq}
\tilde{\psi}_{a}(x,t)= &
\int_{-\infty}^{+\infty} \frac{\mathrm{d}p}{\sqrt{2\pi}}
e^{\mathrm{i} p x}
\notag
\\
& \times
\sum_{\zeta=\pm 1}
\Big[
a_a^{(\zeta)}(t)u_a^{(\zeta)}\exp{(-\mathrm{i}E_a^{(\zeta)} t)}
\notag
\\
& +
b_a^{(\zeta)}(t)v_a^{(\zeta)}\exp{(+\mathrm{i}E_a^{(\zeta)} t)}
\Big],
\end{align}
where the energy levels $E_a^{(\zeta)}$ are
\begin{equation}\label{energyDM}
E_a^{(\zeta)}=
\sqrt{\mathcal{M}_a^2 + m_a^2 + p^2 - 2 \zeta R_a^2}.
\end{equation}
Here [see Eq.~\eqref{EnergymattB}]
\begin{align}\label{RaMa}
R_a^2= & \sqrt{p^2 \mathcal{M}_a^2 + (\mu_a B)^2 m_a^2},
\notag
\\
\mathcal{M}_a= & \sqrt{(\mu_a B)^2 + \omega^2/4}.
\end{align}
The basis spinors $u_a^{(\zeta)}$ and $v_a^{(\zeta)}$ can be obtained from
Eq.~\eqref{spinorsmattB} by the following replacement: $\mu \to \mu_a$,
$\mathcal{M} \to \mathcal{M}_a$ and $f^0 \to 0$. Our main goal is to determine the
non-operator coefficients $a_a^{(\zeta)}$ and $b_a^{(\zeta)}$ so that to satisfy
both the initial condition~\eqref{inicondpsi} and the evolution
equation~\eqref{Direqpsi}. Generally the coefficients $a_a^{(\zeta)}(t)$ and
$b_a^{(\zeta)}(t)$ are functions of time.
\subsection{Spin-flavor oscillations in case of diagonal magnetic
moments\label{DMM}}
In this section we suppose that magnetic moments matrix in the mass eigenstates
basis is close to diagonal, i.e. $\mu_a \gg \mu$. This case should be analyzed with
help of the perturbation theory. We expand the wave functions $\tilde{\psi}_a$ in a
series
\begin{equation}\label{expan}
\tilde{\psi}_{a}(x,t)=
\tilde{\psi}^{(0)}(x,t)+
\tilde{\psi}_{a}^{(1)}(x,t)+\dots,
\end{equation}
where $\tilde{\psi}^{(0)}(x,t)$ corresponds to the solution of Eq.~\eqref{GsolDPeq}
when we neglect the potential $V$ there. The function $\tilde{\psi}_{a}^{(1)}(x,t)$
is linear in the transition magnetic moment $\mu$ etc. We omit terms of higher
order in $\mu$ in Eq.~\eqref{expan}. They can also be accounted for, but the
corresponding calculations appear to be cumbersome in the general case.
Using orthonormality conditions of the basis spinors [see also Eq.~\eqref{oncso}],
\begin{equation*}
u_a^{(\zeta)\dag}u_a^{(\zeta')}=v_a^{(\zeta)\dag}v_a^{(\zeta')}=
\delta_{\zeta\zeta'},
\quad
u_a^{(\zeta)\dag}v_a^{(\zeta')}=0,
\end{equation*}
and the results of our previous work~\cite{DvoMaa07} (see also Sec.~\ref{SO}) we
obtain from Eq.~\eqref{GsolDPeq} the expression for the zero-order (in $\mu$)
wave functions $\psi_a^{(0)}$, which correspond to the first term in
Eq.~\eqref{expan},
\begin{equation}\label{solpsi0}
\psi_{a}^{(0)}(x,t)=
\mathcal{U}(x)
\int_{-\infty}^{+\infty} \frac{\mathrm{d} p}{2\pi}
e^{\mathrm{i} p x}
S_a(p,t)\tilde{\psi}_{a}(p,0),
\end{equation}
where
\begin{equation}
\tilde{\psi}_a(p,0)=
\int_{-\infty}^{+\infty}\mathrm{d}x
e^{-\mathrm{i}px}\mathcal{U}^\dag(x)\psi_a(x,0),
\end{equation}
is the Fourier transform of the initial condition for the spinor $\tilde{\psi}_a$.
Here
\begin{align}\label{PJfB}
S_a(p,t)= &
\sum_{\zeta=\pm 1}
\bigg[
\left(
u_a^{(\zeta)}\otimes u_a^{(\zeta)\dag}
\right)
\exp{(-\mathrm{i}E_a^{(\zeta)} t)}
\notag
\\
& +
\left(
v_a^{(\zeta)}\otimes v_a^{(\zeta)\dag}
\right)
\exp{(+\mathrm{i}E_a^{(\zeta)} t)}
\bigg],
\end{align}
is the analog of the Pauli-Jordan function in the twisting magnetic field [see also
Eq.~\eqref{PJmattB}].
Using Eqs.~\eqref{solpsi0}-\eqref{PJfB} for a given initial condition we can find
the wave functions at any subsequent moment of time. For example, if one initially
has the left-handed neutrino $\nu_\beta^\mathrm{L}$, then the field distribution of
the right-handed component of the fermion $\nu_\alpha$ is
\begin{align}\label{nualphaR0}
\nu^{(0)\mathrm{R}}_\alpha(x,t) = &
\frac{1}{2}(1+\Sigma_1)[\cos\theta\psi_1(x,t)-\sin\theta\psi_2(x,t)]
\notag
\\
= & \sin\theta\cos\theta e^{\mathrm{i}(k+\omega)x}
\\
& \times
\bigg[
\frac{\mu_1 B}{2\mathcal{M}_1}
\left(
e^{-\mathrm{i}E^{+{}}_1 t}-e^{-\mathrm{i}E^{-{}}_1 t}
\right)
\notag
\\
\notag
& -
\frac{\mu_2 B}{2\mathcal{M}_2}
\left(
e^{-\mathrm{i}E^{+{}}_2 t}-e^{-\mathrm{i}E^{-{}}_2 t}
\right)
\left.
\bigg]
\right|_{p=k+\omega/2}\kappa_0.
\end{align}
To derive Eq.~\eqref{nualphaR0} we use the same technique as in Sec.~\ref{SO};
we therefore omit the details of the calculation. On the basis of
Eq.~\eqref{nualphaR0} one obtains the transition probability for the process
$\nu^\mathrm{L}_\beta \to \nu^\mathrm{R}_\alpha$ in the form
\begin{align}\label{PtrLR0}
P^{(0)}_{\nu^\mathrm{L}_\beta \to \nu^\mathrm{R}_\alpha}(t) = &
\frac{\sin^2 (2\theta)}{4}
\bigg\{
\bigg[
\frac{\mu_1 B}{\mathcal{M}_1}
\sin
\left(
\frac{E_1^{+{}}-E_1^{-{}}}{2}t
\right)
\\
& -
\frac{\mu_2 B}{\mathcal{M}_2}
\sin
\left(
\frac{E_2^{+{}}-E_2^{-{}}}{2}t
\right)
\bigg]^2
\notag
\\
& +
4 \frac{\mu_1 \mu_2 B^2}{\mathcal{M}_1\mathcal{M}_2}
\sin
\left(
\frac{E_1^{+{}}-E_1^{-{}}}{2}t
\right)
\notag
\\
& \times
\sin
\left(
\frac{E_2^{+{}}-E_2^{-{}}}{2}t
\right)
\notag
\\
\notag
& \times
\sin^2
\left(
\frac{E_1^{+{}}+E_1^{-{}}-E_2^{+{}}-E_2^{-{}}}{4}t
\right)
\bigg\}.
\end{align}
The energies $E^{(\zeta)}_a$ in Eqs.~\eqref{nualphaR0} and~\eqref{PtrLR0} are given
in Eq.~\eqref{energyDM}. In Eq.~\eqref{PtrLR0} we also suppose that $p=k+\omega/2$.
The analysis of Eq.~\eqref{PtrLR0} is almost identical to that in Sec.~\ref{SO}.
Therefore we present explicitly only the final results for the wave function
and the transition probability in the most important case, $p \gg
\max(m_a,\mathcal{M}_a)$, which corresponds to spin-flavor oscillations of
ultrarelativistic neutrinos. In this limit the wave function of $\nu_\alpha$ becomes
\begin{align}\label{nualphaR0fin}
\nu^{(0)\mathrm{R}}_\alpha(x,t) = &
\mathrm{i} \sin \theta \cos \theta
\exp[\mathrm{i}(k+\omega)x-\mathrm{i} p t]
\\
& \times
\bigg[
\frac{\mu_1 B}{\mathcal{M}_1}\exp
\left(
-\mathrm{i}\frac{m_1^2}{2p}t
\right)
\sin\mathcal{M}_1 t
\notag
\\
\notag
& -
\frac{\mu_2 B}{\mathcal{M}_2}
\exp
\left(
-\mathrm{i}\frac{m_2^2}{2p}t
\right)
\sin\mathcal{M}_2 t
\left.
\bigg]
\right|_{p=k+\omega/2}\kappa_0,
\end{align}
and the transition probability in Eq.~\eqref{PtrLR0} is
\begin{align}\label{PtrLR0fin}
P^{(0)}_{\nu^\mathrm{L}_\beta \to \nu^\mathrm{R}_\alpha}(t) = &
\frac{\sin^2 (2\theta)}{4}
\\
& \times
\bigg\{
\left(
\frac{\mu_1 B}{\mathcal{M}_1}\sin\mathcal{M}_1 t-
\frac{\mu_2 B}{\mathcal{M}_2}\sin\mathcal{M}_2 t
\right)^2
\notag
\\
\notag
& +
4\frac{\mu_1 \mu_2 B^2}{\mathcal{M}_1\mathcal{M}_2}
\sin\mathcal{M}_1 t \sin\mathcal{M}_2 t \sin^2[\Phi(k)t]
\bigg\}.
\end{align}
In Eq.~\eqref{PtrLR0fin} we use the notation for the oscillations phase,
\begin{equation}\label{Phi}
\Phi(k)=\frac{\delta m^2}{4(k+\omega/2)},
\end{equation}
where $\delta m^2 = m_1^2 - m_2^2$ is the mass squared difference. In deriving
Eqs.~\eqref{nualphaR0fin} and~\eqref{PtrLR0fin} we use the analog of the energy
expansion in Eq.~\eqref{energy4case}.
Note that the phase of oscillations in Eq.~\eqref{Phi} depends on the frequency of
the twisting magnetic field. If we put $\omega=0$ in Eqs.~\eqref{PtrLR0fin}
and~\eqref{Phi}, the transition probability coincides with that obtained in our
work~\cite{DvoMaa07}, where we studied neutrino spin-flavor oscillations in a
constant transversal magnetic field.
In the special case of massive neutrinos having equal magnetic moments,
$\mu_1=\mu_2=\mu_0$, we obtain the expected result from Eq.~\eqref{PtrLR0fin}.
Namely, Eq.~\eqref{PtrLR0fin} can be rewritten as $P=P_F P_S$, where $P_F =
\sin^2(2\theta)\sin^2[\Phi(k)t]$ is the usual transition probability of flavor
oscillations and
\begin{equation}\label{Ptrspin}
P_S=\frac{(\mu_0 B)^2}{\Omega_S^2}
\sin^2(\Omega_S t),
\end{equation}
is the probability of spin oscillations between different polarization states
within each mass eigenstate. In Eq.~\eqref{Ptrspin} $\Omega_S = \sqrt{(\mu_0
B)^2+(\omega/2)^2}$. That is, since the magnetic moment interactions are
insensitive to flavor, the transitions between flavors are solely due to the mass
mixing.
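This factorization can be checked directly; the following short computation is our own consistency sketch, not part of the original derivation, and uses only quantities already defined above. For $\mu_1=\mu_2=\mu_0$ one has $\mathcal{M}_1=\mathcal{M}_2=\Omega_S$, so the squared difference in Eq.~\eqref{PtrLR0fin} vanishes and only the cross term survives:

```latex
\begin{equation*}
P^{(0)}_{\nu^\mathrm{L}_\beta \to \nu^\mathrm{R}_\alpha}(t)=
\frac{\sin^2(2\theta)}{4}\,
4\frac{(\mu_0 B)^2}{\Omega_S^2}\sin^2(\Omega_S t)\sin^2[\Phi(k)t]=
\underbrace{\sin^2(2\theta)\sin^2[\Phi(k)t]}_{P_F}\,
\underbrace{\frac{(\mu_0 B)^2}{\Omega_S^2}\sin^2(\Omega_S t)}_{P_S}.
\end{equation*}
```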
One can obtain the first-order corrections (linear in $\mu$) to
Eqs.~\eqref{nualphaR0fin} and~\eqref{PtrLR0fin}. These corrections correspond to
the second term in Eq.~\eqref{expan}. The expressions for the corrections to the
mass eigenstate wave functions are
\begin{align}\label{solpsi1}
\psi_{a}^{(1)}(x,t)= &
-\mathrm{i}\mathcal{U}(x)
\int_{-\infty}^{+\infty} \frac{\mathrm{d}p}{2\pi}
e^{\mathrm{i} p x}
\\
& \times
\sum_{\zeta=\pm 1}
\Big[
\left(
u_a^{(\zeta)}\otimes u_a^{(\zeta)\dag}
\right)
\exp{(-\mathrm{i}E_a^{(\zeta)} t)}
\tilde{V}\mathcal{G}_a^\mathrm{(\zeta)}
\notag
\\
\notag
& +
\left(
v_a^{(\zeta)}\otimes v_a^{(\zeta)\dag}
\right)
\exp{(+\mathrm{i}E_a^{(\zeta)} t)}
\tilde{V}\mathcal{R}_a^\mathrm{(\zeta)}
\Big]
\tilde{\psi}_{b}(p,0),
\end{align}
where
\begin{align}\label{PJfint}
\mathcal{G}_a^{(\zeta)}= &
\int_0^t \mathrm{d}t' \exp{(+\mathrm{i}E_a^{(\zeta)}t')}S_b(p,t'),
\notag
\\
\mathcal{R}_a^{(\zeta)}= &
\int_0^t \mathrm{d}t' \exp{(-\mathrm{i}E_a^{(\zeta)}t')}S_b(p,t').
\end{align}
In Eqs.~\eqref{solpsi1} and~\eqref{PJfint} $a\neq b$. For the details of the
derivation of Eqs.~\eqref{solpsi1} and~\eqref{PJfint} the reader is referred to
Ref.~\cite{DvoMaa07} and Sec.~\ref{SO} of the present paper. Note that $\tilde{V} =
- \mu B \beta \Sigma_3$ in Eq.~\eqref{solpsi1}.
The calculations of the first-order corrections based on Eqs.~\eqref{solpsi1}
and~\eqref{PJfint} are rather cumbersome. Therefore we present here only the final
results in the case $p \gg \max(m_a,\mathcal{M}_a)$. One has the expression
for the correction to the wave function,
\begin{widetext}
\begin{align}\label{nualphaR1fin}
\nu^{(1)\mathrm{R}}_\alpha(x,t) & =
\mathrm{i} \mu B e^{\mathrm{i}(k+\omega)x}
\frac{1}{4\mathcal{M}_1\mathcal{M}_2}
\displaybreak[1]
\bigg[
(\mu_1 \mu_2 B^2 + \mathcal{M}_1 \mathcal{M}_2 - \omega^2/4) \cos 2 \theta
\left(
\frac{\sin \delta t}{\delta}e^{-\mathrm{i} \sigma t}+
\frac{\sin \Delta t}{\Delta}e^{-\mathrm{i} \Sigma t}
\right)
\\
& -
(\mu_1 \mu_2 B^2 - \mathcal{M}_1 \mathcal{M}_2 - \omega^2/4) \cos 2 \theta
\displaybreak[1]
\left(
\frac{\sin d t}{d}e^{-\mathrm{i} s t}+
\frac{\sin D t}{D}e^{-\mathrm{i} S t}
\right)
\notag
\\
\notag
\displaybreak[1]
& +
(\mathcal{M}_1 - \mathcal{M}_2)\frac{\omega}{2}
\left(
\frac{\sin \delta t}{\delta}e^{-\mathrm{i} \sigma t}-
\frac{\sin \Delta t}{\Delta}e^{-\mathrm{i} \Sigma t}
\right)-
(\mathcal{M}_1 + \mathcal{M}_2)\frac{\omega}{2}
\left(
\frac{\sin d t}{d}e^{-\mathrm{i} s t}-
\frac{\sin D t}{D}e^{-\mathrm{i} S t}
\right)
\left.
\bigg]
\right|_{p=k+\omega/2}\kappa_0.
\end{align}
In Eq.~\eqref{nualphaR1fin} we use the notations,
\begin{align*}
\sigma = & \frac{E_1^{+{}} + E_2^{+{}}}{2} \approx
p + \Upsilon(k) - \bar{\mathcal{M}},
\quad
s = \frac{E_1^{+{}} + E_2^{-{}}}{2} \approx
p + \Upsilon(k) - \delta \mathcal{M},
\\
\Sigma = & \frac{E_1^{-{}} + E_2^{-{}}}{2} \approx
p + \Upsilon(k) + \bar{\mathcal{M}},
\quad
S = \frac{E_1^{-{}} + E_2^{+{}}}{2} \approx
p + \Upsilon(k) + \delta \mathcal{M},
\end{align*}
and
\begin{align*}
\delta = & \frac{E_1^{+{}} - E_2^{+{}}}{2} \approx
\Phi(k) - \delta \mathcal{M},
\quad
d = \frac{E_1^{+{}} - E_2^{-{}}}{2} \approx
\Phi(k) - \bar{\mathcal{M}},
\\
\Delta = & \frac{E_1^{-{}} - E_2^{-{}}}{2} \approx
\Phi(k) + \delta \mathcal{M},
\quad
D = \frac{E_1^{-{}} - E_2^{+{}}}{2} \approx
\Phi(k) + \bar{\mathcal{M}},
\end{align*}
where
\begin{equation*}
\Upsilon(k)=\frac{m_1^2+m_2^2}{4(k+\omega/2)},
\quad
\delta \mathcal{M} = \frac{\mathcal{M}_1 - \mathcal{M}_2}{2},
\quad
\bar{\mathcal{M}} = \frac{\mathcal{M}_1 + \mathcal{M}_2}{2}.
\end{equation*}
To obtain Eq.~\eqref{nualphaR1fin} we use the identity $\langle v_a^{(\zeta)} |
\tilde{V} | \xi_0 \rangle = 0$, which means that no antineutrinos are produced.
On the basis of Eqs.~\eqref{nualphaR0fin} and~\eqref{nualphaR1fin} one calculates
the correction to the transition probability which has the following form:
\begin{align}\label{PtrLR1fin}
P^{(1)}_{\nu^\mathrm{L}_\beta \to \nu^\mathrm{R}_\alpha}(t) = &
\frac{\mu B \sin 2\theta}{4\mathcal{M}_1\mathcal{M}_2}
\bigg\{
(\mu_1 \mu_2 B^2 + \mathcal{M}_1 \mathcal{M}_2 - \omega^2/4)
\frac{\cos 2 \theta}{Z_1}
\notag
\\
\displaybreak[1]
& \times
\bigg[
\Phi(k)\sin[2\Phi(k)t]
\left(
\frac{\mu_1 B}{\mathcal{M}_1}\sin \mathcal{M}_1 t \cos \mathcal{M}_2 t -
\frac{\mu_2 B}{\mathcal{M}_2}\cos \mathcal{M}_1 t \sin \mathcal{M}_2 t
\right)
\notag
\\
\displaybreak[1]
& -
\delta \mathcal{M}
\bigg(
\frac{\mu_1 B}{\mathcal{M}_1}\sin^2 \mathcal{M}_1 t +
\frac{\mu_2 B}{\mathcal{M}_2}\sin^2 \mathcal{M}_2 t -
\left(
\frac{\mu_1 B}{\mathcal{M}_1}+\frac{\mu_2 B}{\mathcal{M}_2}
\right)
\sin \mathcal{M}_1 t \sin \mathcal{M}_2 t
\notag
\\
& + 2
\left(
\frac{\mu_1 B}{\mathcal{M}_1}+\frac{\mu_2 B}{\mathcal{M}_2}
\right)
\sin \mathcal{M}_1 t \sin \mathcal{M}_2 t \sin^2[\Phi(k)t]
\bigg)
\bigg]
\notag
\\
\displaybreak[1]
& -
(\mu_1 \mu_2 B^2 - \mathcal{M}_1 \mathcal{M}_2 - \omega^2/4)
\frac{\cos 2 \theta}{Z_2}
\bigg[
\Phi(k)\sin[2\Phi(k)t]
\left(
\frac{\mu_1 B}{\mathcal{M}_1}\sin \mathcal{M}_1 t \cos \mathcal{M}_2 t -
\frac{\mu_2 B}{\mathcal{M}_2}\cos \mathcal{M}_1 t \sin \mathcal{M}_2 t
\right)
\notag
\\
\displaybreak[1]
& -
\bar{\mathcal{M}}
\bigg(
\frac{\mu_1 B}{\mathcal{M}_1}\sin^2 \mathcal{M}_1 t -
\frac{\mu_2 B}{\mathcal{M}_2}\sin^2 \mathcal{M}_2 t +
\left(
\frac{\mu_1 B}{\mathcal{M}_1}-\frac{\mu_2 B}{\mathcal{M}_2}
\right)
\sin \mathcal{M}_1 t \sin \mathcal{M}_2 t
\notag
\\
\displaybreak[1]
& - 2
\left(
\frac{\mu_1 B}{\mathcal{M}_1}-\frac{\mu_2 B}{\mathcal{M}_2}
\right)
\sin \mathcal{M}_1 t \sin \mathcal{M}_2 t \sin^2[\Phi(k)t]
\bigg)
\bigg]
\notag
\\
\displaybreak[1]
& +
(\mathcal{M}_1 - \mathcal{M}_2)
\frac{\omega}{2 Z_1}
\bigg[
\delta \mathcal{M} \sin[2\Phi(k)t]
\left(
\frac{\mu_1 B}{\mathcal{M}_1}\sin \mathcal{M}_1 t \cos \mathcal{M}_2 t -
\frac{\mu_2 B}{\mathcal{M}_2}\cos \mathcal{M}_1 t \sin \mathcal{M}_2 t
\right)
\notag
\\
\displaybreak[1]
& -
\Phi(k)
\bigg(
\frac{\mu_1 B}{\mathcal{M}_1}\sin^2 \mathcal{M}_1 t +
\frac{\mu_2 B}{\mathcal{M}_2}\sin^2 \mathcal{M}_2 t -
\left(
\frac{\mu_1 B}{\mathcal{M}_1}+\frac{\mu_2 B}{\mathcal{M}_2}
\right)
\sin \mathcal{M}_1 t \sin \mathcal{M}_2 t
\notag
\\
\displaybreak[1]
& + 2
\left(
\frac{\mu_1 B}{\mathcal{M}_1}+\frac{\mu_2 B}{\mathcal{M}_2}
\right)
\sin \mathcal{M}_1 t \sin \mathcal{M}_2 t \sin^2[\Phi(k)t]
\bigg)
\bigg]
\notag
\\
\displaybreak[1]
& -
(\mathcal{M}_1 + \mathcal{M}_2)
\frac{\omega}{2 Z_2}
\bigg[
\bar{\mathcal{M}}\sin[2\Phi(k)t]
\left(
\frac{\mu_1 B}{\mathcal{M}_1}\sin \mathcal{M}_1 t \cos \mathcal{M}_2 t -
\frac{\mu_2 B}{\mathcal{M}_2}\cos \mathcal{M}_1 t \sin \mathcal{M}_2 t
\right)
\notag
\\
\displaybreak[1]
& -
\Phi(k)
\bigg(
\frac{\mu_1 B}{\mathcal{M}_1}\sin^2 \mathcal{M}_1 t -
\frac{\mu_2 B}{\mathcal{M}_2}\sin^2 \mathcal{M}_2 t +
\left(
\frac{\mu_1 B}{\mathcal{M}_1}-\frac{\mu_2 B}{\mathcal{M}_2}
\right)
\sin \mathcal{M}_1 t \sin \mathcal{M}_2 t
\notag
\\
\displaybreak[1]
& - 2
\left(
\frac{\mu_1 B}{\mathcal{M}_1}-\frac{\mu_2 B}{\mathcal{M}_2}
\right)
\sin \mathcal{M}_1 t \sin \mathcal{M}_2 t \sin^2[\Phi(k)t]
\bigg)
\bigg]
\bigg\},
\end{align}
\end{widetext}
where
\begin{equation}\label{Z1Z2}
Z_1 = \Phi^2(k) - \delta \mathcal{M}^2,
\quad
Z_2 = \Phi^2(k) - \bar{\mathcal{M}}^2.
\end{equation}
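As a side remark of ours (not in the original text, but following directly from the frequency combinations defined after Eq.~\eqref{nualphaR1fin}), these denominators factorize into the beat frequencies $\delta$, $\Delta$, $d$ and $D$:

```latex
\begin{equation*}
Z_1=\delta\,\Delta=[\Phi(k)-\delta\mathcal{M}][\Phi(k)+\delta\mathcal{M}],
\quad
Z_2=d\,D=[\Phi(k)-\bar{\mathcal{M}}][\Phi(k)+\bar{\mathcal{M}}],
\end{equation*}
```

so the perturbative corrections blow up exactly when one of the slow frequencies $\delta$, $\Delta$, $d$ or $D$ vanishes.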
Note that, if we again put $\omega=0$ in Eqs.~\eqref{nualphaR1fin}
and~\eqref{PtrLR1fin}, we reproduce the results of our previous
work~\cite{DvoMaa07}.
It can be noticed from Eqs.~\eqref{PtrLR1fin} and~\eqref{Z1Z2} that the
perturbative approach is valid as long as $Z_{1,2} \neq 0$. If either $Z_1$ or
$Z_2$ is equal to zero, we can expect that some non-perturbative effects like
resonances can occur. Unfortunately these effects cannot be quantitatively
described within the framework of the Dirac-Pauli equation approach used in the
present work. To analyze such phenomena one should carry out numerical computations
within the Schr\"odinger equation approach (see also Sec.~\ref{QM}). Nevertheless
we can evaluate the possibility that $Z_1=0$ for spin-flavor oscillations between
active and sterile neutrinos (see, e.g., Ref.~\cite{KerMaaMyyRii04}) in the
twisting magnetic field of the Sun. In this case one can take into account the
magnetic moment of the active neutrino only. For the following parameters: $k \sim
10\thinspace\text{MeV}$, $\delta m^2 \sim
10^{-8}\thinspace\text{eV}^2$~\cite{KerMaaMyyRii04}, $\mu_\mathrm{active} \sim
10^{-11} \mu_\mathrm{B}$, $B \sim 100\thinspace\text{kG}$ and $\omega \sim
10^{-15}\thinspace\text{eV}$~\cite{twisting}, the quantities
$\mu_\mathrm{active} B$, $\Phi(k)$ and $\omega$ are all of the same order of
magnitude, $10^{-15}\thinspace\text{eV}$. Thus the perturbation theory can break
down in this situation and resonance phenomena can occur.
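The order-of-magnitude estimate above can be reproduced numerically; the following snippet is our own illustrative check (the Bohr magneton value in eV/G is the only external input), not part of the original analysis.

```python
# Order-of-magnitude check that mu_active*B, Phi(k) and omega are all
# comparable (~1e-15 eV) for the quoted solar parameters, so the
# resonance condition Z_1 = 0 can indeed be approached.
mu_bohr = 5.788e-9                      # Bohr magneton, eV/G
k = 10.0e6                              # neutrino momentum, eV (10 MeV)
delta_m2 = 1.0e-8                       # mass squared difference, eV^2
mu_active = 1.0e-11 * mu_bohr           # active-neutrino magnetic moment, eV/G
B = 1.0e5                               # magnetic field, G (100 kG)
omega = 1.0e-15                         # twisting frequency, eV

muB = mu_active * B                          # magnetic interaction energy, eV
Phi = delta_m2 / (4.0 * (k + omega / 2.0))   # oscillation phase Phi(k), eV

print(muB, Phi, omega)
```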
The sum of Eqs.~\eqref{PtrLR0fin} and~\eqref{PtrLR1fin} gives the transition
probability of spin-flavor oscillations, up to terms linear in $\mu$, in the case
of a magnetic moments matrix which is close to diagonal.
\subsection{Spin-flavor oscillations in case of non-diagonal magnetic
moments\label{MMM}}
In this section we study neutrino spin-flavor oscillations in the case of a
non-diagonal magnetic moments matrix, i.e. when the transition magnetic moment
dominates over the diagonal ones, $\mu \gg \mu_a$.
Now we start directly from Eq.~\eqref{GsolDPeq}. However, one cannot treat the
potential $\tilde{V}$ as a small perturbation. To solve this problem we use the
method elaborated in Ref.~\cite{DvoMaa07}. The following ordinary
differential equations can be derived to determine the coefficients
$a_a^{(\zeta)}(t)$ and $b_a^{(\zeta)}(t)$ in Eq.~\eqref{GsolDPeq}:
\begin{align}\label{abEqsGc}
\mathrm{i}\dot{a}_a^{(\zeta)}= &
\exp{(+\mathrm{i}E_a^{(\zeta)}t)}u^{(\zeta)\dag}
\tilde{V}
\notag
\\
& \times
\sum_{\zeta'=\pm 1}
\Big[
a_b^{(\zeta')}u^{(\zeta')}\exp{(-\mathrm{i}E_b^{(\zeta')} t)}
\notag
\\
& +
b_b^{(\zeta')}v^{(\zeta')}\exp{(+\mathrm{i}E_b^{(\zeta')} t)}
\Big],
\notag
\\
\mathrm{i}\dot{b}_a^{(\zeta)}= &
\exp{(-\mathrm{i}E_a^{(\zeta)} t)}v^{(\zeta)\dag}
\tilde{V}
\notag
\\
& \times
\sum_{\zeta'=\pm 1}
\Big[
a_b^{(\zeta')}u^{(\zeta')}\exp{(-\mathrm{i}E_b^{(\zeta')} t)}
\notag
\\
& +
b_b^{(\zeta')}v^{(\zeta')}\exp{(+\mathrm{i}E_b^{(\zeta')} t)}
\Big],
\end{align}
where $u^{+{}} = \kappa_0$, $u^{-{}} = \xi_0$, $v^{+\mathrm{T}} = (1/2)(1,1,-1,-1)$
and $v^{-\mathrm{T}} = (1/2)(1,-1,1,-1)$. The quantum number $\zeta$ is the
eigenvalue of the operator $(\bm{\Sigma}\mathbf{p})/|\mathbf{p}|=\Sigma_1$, i.e.
$\Sigma_1 u^{(\zeta)} = \zeta u^{(\zeta)}$. Note that the current
definition of $\zeta$ differs from that used in the previous sections. We also drop
the subscripts $a$ and $b$ in the basis spinors since we study the evolution of
ultrarelativistic neutrinos and assume that $\mu_a \ll \mu$. The energies
$E_a^{(\zeta)}$ in Eq.~\eqref{abEqsGc} take the form
\begin{equation}\label{energyMM}
E_a^{(\zeta)}=\sqrt{m_a^2 + (p + \zeta \omega/2)^2}.
\end{equation}
For the details of the derivation of Eq.~\eqref{energyMM} and the basis spinors the
reader is referred to Sec.~\ref{SO} of this work.
The initial conditions should be added to the differential
equations~\eqref{abEqsGc},
\begin{align}\label{abinicond}
a_a^{(\zeta)}(0)=
\frac{1}{\sqrt{2\pi}}u^{(\zeta)\dag}\tilde{\psi}_a(p,0),
\notag
\\
b_a^{(\zeta)}(0)=
\frac{1}{\sqrt{2\pi}}v^{(\zeta)\dag}\tilde{\psi}_a(p,0).
\end{align}
Eq.~\eqref{abinicond} results from Eq.~\eqref{GsolDPeq} and the orthonormality of
the basis spinors $u^{(\zeta)}$ and $v^{(\zeta)}$.
Taking into account the following identities: $\langle u^{(\zeta)} | \tilde{V} |
v^{(\zeta')} \rangle = 0$, $\langle u^{\pm{}} | \tilde{V} | u^{\pm{}} \rangle = 0$
and $\langle u^{\pm{}} | \tilde{V} | u^{\mp{}} \rangle = - \mu B$, which can be
verified by means of direct calculations, one finds that Eq.~\eqref{abEqsGc}
reduces to the form
\begin{equation}\label{abEqsGcred}
\mathrm{i}\dot{a}_a^{\pm{}} =
-{a}_b^{\mp{}} \mu B \exp{[\mathrm{i}(E_a^{\pm{}}-E_b^{\mp{}})t]}.
\end{equation}
Note that the analogous equation for the functions $b_a^{(\zeta)}$ can also be
obtained from Eq.~\eqref{abEqsGc}. Eq.~\eqref{abEqsGcred} is similar to that
analyzed in Ref.~\cite{DvoMaa07} (see also Ref.~\cite{twisting}). Therefore we
write down its solution, e.g., for the functions $a_a^{+{}}$,
\begin{equation}\label{a+MMsol}
a_{1,2}^{+{}}(t)=
\mathrm{i}\frac{\mu B}{\Omega_{\pm{}}}
\sin \Omega_{\pm{}} t
\exp(\pm \mathrm{i} \omega_{\pm{}} t/2) a_{1,2}^{-{}}(0),
\end{equation}
where
\begin{equation}\label{Omegapmomegapm}
\Omega_{\pm{}} = \sqrt{(\mu B)^2 + (\omega_{\pm{}}/2)^2},
\quad
\omega_{\pm{}} = 2\Phi(k) \pm \omega.
\end{equation}
In deriving Eq.~\eqref{a+MMsol} we take into account that initially only
left-handed neutrinos are present, i.e. $a_a^{+{}}(0)=0$. Indeed,
$\tilde{\psi}_a(p,0) \sim \xi_0$, and with the help of Eq.~\eqref{abinicond} we get
$a_a^{+{}}(0)=0$.
Finally, using Eqs.~\eqref{matrtrans}, \eqref{matrU}, \eqref{GsolDPeq}
and~\eqref{a+MMsol} we arrive at the right-handed component of $\nu_\alpha$,
\begin{align}\label{nualphaRMM}
\nu_\alpha^\mathrm{R}(x,t)= &
\mathrm{i} \mu B e^{\mathrm{i}(k+\omega)x}
\\
\notag
\times &
\exp
\left[
-\mathrm{i}
\left(
p + \frac{m_1^2+m_2^2}{4p}
\right)t
\right]
\\
\notag
\times &
\left.
\left(
\cos^2\theta \frac{\sin \Omega_{+{}} t}{\Omega_{+{}}}-
\sin^2\theta \frac{\sin \Omega_{-{}} t}{\Omega_{-{}}}
\right)
\right|_{p = k + \omega/2}
\kappa_0.
\end{align}
With the help of Eq.~\eqref{nualphaRMM} we can compute the transition probability
for the process $\nu^\mathrm{L}_\beta \to \nu^\mathrm{R}_\alpha$ in the case of a
magnetic moments matrix with large non-diagonal elements,
\begin{align}\label{PtrLRMM}
P_{\nu^\mathrm{L}_\beta \to \nu^\mathrm{R}_\alpha}(t)= &
(\mu B)^2
\\
\notag
& \times
\left[
\cos^2\theta \frac{\sin \Omega_{+{}} t}{\Omega_{+{}}}-
\sin^2\theta \frac{\sin \Omega_{-{}} t}{\Omega_{-{}}}
\right]^2.
\end{align}
Note that, if we take the limit $\omega \to 0$ in Eq.~\eqref{PtrLRMM}, we
reproduce the result of our work~\cite{DvoMaa07}, where spin-flavor oscillations of
neutrinos with a similar magnetic moments matrix were studied.
Let us analyze neutrino oscillations at small frequencies of the twisting magnetic
field, $\omega \ll \Omega_0 = \sqrt{(\mu B)^2 + [\Phi(k)]^2}$. In this situation
the transition probability in Eq.~\eqref{PtrLRMM} can be rewritten as
\begin{align}\label{beating}
P(t)= & A(t)\sin^2(\Omega_0 t),
\notag
\\
A(t)= & A_\mathrm{min}+2\delta A\sin^2(\delta \Omega t),
\end{align}
where $\delta A = (A_\mathrm{max}-A_\mathrm{min})/2$, $\delta \Omega =
(\Omega_{+{}}-\Omega_{-{}})/2 \approx \omega \Phi(k)/(2\Omega_0)$ and
\begin{align}\label{Amaxmin}
A_\mathrm{max} & \approx
\left(
\frac{\mu B}{\Omega_0}
\right)^2
\left[
1-\frac{\omega \Phi(k)}{\Omega_0^2}\cos 2 \theta
\right],
\notag
\\
A_\mathrm{min} & \approx
\cos 2 \theta
\left(
\frac{\mu B}{\Omega_0}
\right)^2
\left[
\cos 2 \theta-\frac{\omega \Phi(k)}{\Omega_0^2}
\right].
\end{align}
Eqs.~\eqref{beating} and~\eqref{Amaxmin} show that the behavior of the system is
analogous to the beats that occur in the interference of two oscillations with
different amplitudes and frequencies.
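The beat frequency quoted above, $\delta\Omega \approx \omega\Phi(k)/(2\Omega_0)$, follows from expanding $\Omega_{\pm{}}$ to first order in $\omega$. A small numerical sketch of ours (with arbitrary illustrative values of $\mu B$, $\Phi$ and $\omega$, in common energy units) confirms the expansion:

```python
import math

# Check the small-omega expansion of the beat frequency:
# delta_Omega = (Omega_plus - Omega_minus)/2 ~ omega*Phi/(2*Omega_0).
muB = 1.0       # mu*B, arbitrary energy units (illustrative value)
Phi = 0.7       # oscillation phase Phi(k), same units
omega = 1e-3    # twisting frequency, omega << Omega_0

Omega_0 = math.hypot(muB, Phi)                # sqrt((muB)^2 + Phi^2)
Omega_p = math.hypot(muB, Phi + omega / 2.0)  # omega_+/2 = Phi + omega/2
Omega_m = math.hypot(muB, Phi - omega / 2.0)  # omega_-/2 = Phi - omega/2

delta_Omega = (Omega_p - Omega_m) / 2.0
approx = omega * Phi / (2.0 * Omega_0)

print(delta_Omega, approx)
```

The exact and approximate beat frequencies agree up to terms of order $\omega^3$.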
Let us discuss neutrinos with the following parameters: $\sin^2\theta = 0.3$,
$\delta m^2 = 10^{-5}\thinspace\text{eV}^2$~\cite{MalSchTorVal04}, $\mu =
10^{-18}\thinspace \mu_\mathrm{B}$ and $k=100\thinspace\text{MeV}$. It is known
that rather strong twisting magnetic fields, up to the critical value of $B \sim
10^{14}\thinspace\text{G}$, can exist in the early Universe~\cite{Ath96}. The time
dependence of the neutrino oscillations probability is schematically depicted in
Fig.~\ref{figbeat} for such a magnetic field strength and $\omega =
10^{-13}\thinspace\text{eV}$.
\begin{figure}
\centering
\includegraphics[scale=.44]{figure.eps}
\caption{\label{figbeat}
The time dependence of the neutrino oscillations
probability in the twisting magnetic
field in the case $\mu\gg\mu_{1,2}$ at small values of $\omega$.}
\end{figure}
As one can see in this figure, the rapidly varying transition probability $P(t)$
(solid line) is modulated by the slowly varying function $A(t)$ (dashed line). This
time dependence is different from that described in Ref.~\cite{twisting}.
It can also be seen in Fig.~\ref{figbeat} that the typical time scale of the
amplitude modulation of the transition probability is about $T_A \approx
0.1\thinspace\text{s}$. The production rate of right-handed neutrinos in the early
Universe should be less than the expansion rate of the Universe in order not to
affect the primordial nucleosynthesis~\cite{prodrnuR}. Hence one has $T_A h > 1$,
where
\begin{equation*}
h \approx 1.24 \times 10^{3}
\left(
\frac{T_\mathrm{pl}}{100\thinspace\text{MeV}}
\right)^2
\thinspace\text{s}^{-1},
\end{equation*}
is the Hubble parameter~\cite{Wei72} and $T_\mathrm{pl}$ is the primordial plasma
temperature. Supposing that neutrinos are in thermal equilibrium at
$100\thinspace\text{MeV}$, i.e. $k \sim T_\mathrm{pl}$, we get $T_A h \sim 100
\gg 1$.
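The last estimate is easy to verify numerically; this little check is ours, with $T_A$ read off Fig.~\ref{figbeat} as quoted above:

```python
# Check that T_A * h ~ 100 >> 1 for T_pl = 100 MeV, i.e. the amplitude
# modulation is slow compared to the expansion rate of the Universe.
T_pl_MeV = 100.0                          # plasma temperature, MeV
h = 1.24e3 * (T_pl_MeV / 100.0) ** 2      # Hubble parameter, 1/s
T_A = 0.1                                 # modulation time scale, s

print(T_A * h)
```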
Despite the above-mentioned discrepancy it is interesting to compare the result of
this section [Eq.~\eqref{PtrLRMM}] with the analogous transition probability
formula for Majorana neutrinos~\cite{twisting} at small mixing angle, $\theta \to
0$. Note that in this situation the magnetic moments matrices in Eqs.~\eqref{Lagrnu}
and~\eqref{Lagrpsi} [or Eq.~\eqref{magmomme}] coincide. In this limit we obtain
from Eq.~\eqref{PtrLRMM} the following transition probability:
\begin{align}\label{PtrMajorana}
P_{\nu^\mathrm{L}_\beta \to \nu^\mathrm{R}_\alpha}(t)= &
\frac{(\mu B)^2}{\Omega_M^2}
\sin^2(\Omega_M t),
\end{align}
where $\Omega_M = \sqrt{(\mu B)^2+[\Phi(k)+\omega/2]^2}$. It can be seen that
Eq.~\eqref{PtrMajorana} coincides with the transition probabilities derived in
Ref.~\cite{twisting} where spin-flavor oscillations of Majorana neutrinos in
twisting magnetic fields were studied on the basis of the quantum mechanical
approach.
It should also be noticed that the transition probability in Eq.~\eqref{PtrLRMM}
vanishes at high frequencies, $\omega \gg \max(m_a, \mu B)$, due to the dependence
of the oscillations phase on $\omega$ [see Eq.~\eqref{Phi}]. This phenomenon was
also mentioned in our paper~\cite{Dvo07YadFiz}, in which we examined spin-flavor
oscillations of Majorana neutrinos in rapidly varying external fields.
\section{Quantum mechanical description of neutrino oscillations in a twisting
magnetic field\label{QM}}
In this section we demonstrate that analogs of the main results obtained in
Secs.~\ref{DMM} and~\ref{MMM} within the Dirac-Pauli equation approach can also be
derived with the help of the standard quantum mechanical treatment of spin-flavor
oscillations. The consistency between these two approaches is discussed.
To study the neutrino evolution within the framework of the Schr\"odinger equation
approach it is convenient to make a coordinate transformation. We assume that
$\mathbf{k}=(0,0,k)$ and $\mathbf{B}=B(\sin \omega t,\cos \omega t,0)$. Now the
Schr\"odinger equation and the effective Hamiltonian for the neutrino \emph{mass}
eigenstates have the form,
\begin{equation}\label{Scheq}
\mathrm{i}\dot{\Psi}=H\Psi,
\quad
H=
\begin{pmatrix}
H_\mathrm{mass} & H_B \\
H_B^\dag & H_\mathrm{mass}
\end{pmatrix},
\end{equation}
where $H_\mathrm{mass}=\mathrm{diag}(\mathcal{E}_1,\mathcal{E}_2)$,
$\mathcal{E}_{1,2}=\sqrt{m_{1,2}^2+k^2}$ and
\begin{equation}\label{HB}
H_B=-(B_x+\mathrm{i}B_y)
\begin{pmatrix}
\mu_1 & \mu \\
\mu & \mu_2
\end{pmatrix}=
- \mathrm{i} (\mu_{ab}) B e^{-\mathrm{i} \omega t},
\end{equation}
where $\mu_{1,2}$ and $\mu$ are the elements of the magnetic moments
matrix~\eqref{magmomme}. The neutrino wave function has the following form:
$\Psi^\mathrm{T}=(\psi_1^\mathrm{L}, \psi_2^\mathrm{L}, \psi_1^\mathrm{R},
\psi_2^\mathrm{R})$, where $\psi_{1,2}^\mathrm{L,R}$ are one-component functions.
It can be seen that Eqs.~\eqref{Scheq} and~\eqref{HB} are the generalization, to
the case of Dirac neutrinos, of the corresponding expressions used in
Ref.~\cite{twisting}. The initial condition for the wave function $\Psi$ follows
from Eq.~\eqref{inicondpsi},
\begin{equation}\label{inicondqm}
\Psi^\mathrm{T}(0)=(\sin\theta,\cos\theta,0,0).
\end{equation}
Let us make the matrix transformation,
\begin{align}\label{rotbas}
\Psi = & \mathfrak{V} \widetilde{\Psi},
\notag
\\
\mathfrak{V}= & \mathrm{diag}
(e^{-\mathrm{i} \omega t/2}, e^{-\mathrm{i} \omega t/2},
e^{\mathrm{i} \omega t/2}, e^{\mathrm{i} \omega t/2}).
\end{align}
The Hamiltonian $\tilde{H}$ governing the time evolution of the modified wave
function $\widetilde{\Psi}$ is presented in the form
\begin{equation}\label{Scheqmod}
\tilde{H}=
\begin{pmatrix}
H_\mathrm{mass}-\widehat{\mathds{1}}\omega/2 & -\mathrm{i}(\mu_{ab})B \\
\mathrm{i}(\mu_{ab})B & H_\mathrm{mass}+\widehat{\mathds{1}}\omega/2 \
\end{pmatrix},
\end{equation}
where $\widehat{\mathds{1}}$ is the $2 \times 2$ unit matrix. The initial condition
for $\widetilde{\Psi}$ coincides with that for $\Psi$ [see Eq.~\eqref{inicondqm}]
due to the special form of the matrix $\mathfrak{V}$ in Eq.~\eqref{rotbas}.
Then we look for the solutions to the Schr\"odinger equation
$\mathrm{i}\mathrm{d}\widetilde{\Psi}/\mathrm{d}t=\tilde{H}\widetilde{\Psi}$, with
the Hamiltonian given in Eq.~\eqref{Scheqmod}, in the form $\widetilde{\Psi} \sim
e^{- \mathrm{i} \lambda t}$. The secular equation for $\lambda$ is a fourth-order
algebraic equation in the general case. However, it can be solved analytically in
two situations.
\subsection{Diagonal magnetic moments matrix}
In the case when $\mu = 0$ and $\mu_{1,2} \neq 0$ the roots of the secular equation
are
\begin{equation}\label{DMMroots}
\lambda_{a}^{\pm{}}= \mathcal{E}_a \pm \mathcal{M}_a,
\quad
a=1,2,
\end{equation}
where $\mathcal{M}_a$ is defined in Eq.~\eqref{RaMa}. The basis spinors
$u_a^{\pm{}}$, which are the eigenvectors of the Hamiltonian $\tilde{H}$, are
expressed in the following way:
\begin{align}\label{DMMspinors}
u_1^{+{}}= &
\frac{1}{\sqrt{2\mathcal{M}_1}}
\begin{pmatrix}
-\mathrm{i}\mu_1 B/\mathcal{Z}_1 \\
0 \\
\mathcal{Z}_1 \\
0
\end{pmatrix},
\notag
\\
u_1^{-{}}= &
\frac{1}{\sqrt{2\mathcal{M}_1}}
\begin{pmatrix}
\mathcal{Z}_1 \\
0 \\
-\mathrm{i}\mu_1 B/\mathcal{Z}_1 \\
0
\end{pmatrix},
\notag
\\
u_2^{+{}}= &
\frac{1}{\sqrt{2\mathcal{M}_2}}
\begin{pmatrix}
0 \\
-\mathrm{i}\mu_2 B/\mathcal{Z}_2 \\
0 \\
\mathcal{Z}_2
\end{pmatrix},
\notag
\\
u_2^{-{}}= &
\frac{1}{\sqrt{2\mathcal{M}_2}}
\begin{pmatrix}
0 \\
\mathcal{Z}_2 \\
0 \\
-\mathrm{i}\mu_2 B/\mathcal{Z}_2
\end{pmatrix},
\end{align}
where $\mathcal{Z}_{1,2}=\sqrt{\mathcal{M}_{1,2}+\omega/2}$. Note that the vectors
$u_a^{\pm{}}$ correspond to the eigenvalues $\lambda_a^{\pm{}}$.
The general solution to the Schr\"odinger evolution equation has the form,
\begin{equation}\label{DMMgensol}
\widetilde{\Psi}(t)=\sum_{a=1,2} \sum_{\zeta=\pm 1}
\alpha_a^{(\zeta)}u_a^{(\zeta)}\exp(-\mathrm{i}\lambda_a^{(\zeta)}t),
\end{equation}
where the coefficients $\alpha_a^{(\zeta)}$ should be chosen so as to satisfy the
initial condition in Eq.~\eqref{inicondqm}. We choose these quantities as
\begin{align}\label{DMMcoeff}
\alpha_1^{+{}} = & \sin\theta \frac{1}{\sqrt{2\mathcal{M}_1}}
\frac{\mathrm{i}\mu_1 B}{\mathcal{Z}_1},
\quad
\alpha_1^{-{}} = \sin\theta \frac{\mathcal{Z}_1}{\sqrt{2\mathcal{M}_1}},
\notag
\\
\alpha_2^{+{}} = & \cos\theta \frac{1}{\sqrt{2\mathcal{M}_2}}
\frac{\mathrm{i}\mu_2 B}{\mathcal{Z}_2},
\quad
\alpha_2^{-{}} = \cos\theta \frac{\mathcal{Z}_2}{\sqrt{2\mathcal{M}_2}}.
\end{align}
Then using Eqs.~\eqref{rotbas} and~\eqref{DMMroots}-\eqref{DMMcoeff} we get the
right-polarized components of the wave function $\Psi$ in the form
\begin{align}\label{DMMpsiR}
\psi_1^\mathrm{R}(t)= & \exp[-\mathrm{i}(\mathcal{E}_1-\omega/2)t]
\sin\theta \sin(\mathcal{M}_1 t)
\frac{\mu_1 B}{\mathcal{M}_1},
\notag
\\
\psi_2^\mathrm{R}(t)= & \exp[-\mathrm{i}(\mathcal{E}_2-\omega/2)t]
\cos\theta \sin(\mathcal{M}_2 t)
\frac{\mu_2 B}{\mathcal{M}_2}.
\end{align}
Finally, taking into account Eqs.~\eqref{matrtrans}, \eqref{matrU}
and~\eqref{DMMpsiR} we arrive at the right-handed component of $\nu_\alpha$,
\begin{align}\label{DMMnualphaR}
\nu_\alpha^\mathrm{R}(t)= &
\cos\theta \psi_1^\mathrm{R}(t)-\sin\theta \psi_2^\mathrm{R}(t)
\notag
\\
= &
\sin\theta \cos\theta e^{\mathrm{i}\omega t/2}
\bigg[
\exp(-\mathrm{i}\mathcal{E}_1 t)\sin(\mathcal{M}_1 t)
\frac{\mu_1 B}{\mathcal{M}_1}
\notag
\\
& -
\exp(-\mathrm{i}\mathcal{E}_2 t)\sin(\mathcal{M}_2 t)
\frac{\mu_2 B}{\mathcal{M}_2}
\bigg].
\end{align}
One can see from Eq.~\eqref{DMMnualphaR} that the expression for
$\nu_\alpha^\mathrm{R}$ obtained within the framework of the Schr\"odinger approach
coincides (to within an irrelevant phase factor) with the analogous expression
derived using the Dirac-Pauli equation [see Eq.~\eqref{nualphaR0fin}].
\subsection{Non-diagonal magnetic moments matrix}
In the situation when $\mu_{1,2}=0$ and $\mu \neq 0$ the secular equation can also
be solved analytically and the corresponding roots are
\begin{equation}\label{MMMroots}
\lambda_{1,2}^{+{}}= \bar{\mathcal{E}} \pm \Omega_{+{}},
\quad
\lambda_{1,2}^{-{}}= \bar{\mathcal{E}} \pm \Omega_{-{}},
\end{equation}
where $\bar{\mathcal{E}}=(\mathcal{E}_1+\mathcal{E}_2)/2$ and $\Omega_{\pm{}}$ are
given in Eq.~\eqref{Omegapmomegapm}. The eigenvectors of the Hamiltonian
$\tilde{H}$ have the following form:
\begin{align}\label{MMMspinors}
u_{1,2} = &
\frac{1}{\sqrt{2\Omega_{+{}}}}
\begin{pmatrix}
0 \\
\mp \mathrm{i}\mu B/R_{\pm{}} \\
R_{\pm{}} \\
0
\end{pmatrix},
\notag
\\
v_{1,2} = &
\frac{1}{\sqrt{2\Omega_{-{}}}}
\begin{pmatrix}
\mp \mathrm{i}\mu B/S_{\mp{}} \\
0 \\
0 \\
S_{\mp{}}
\end{pmatrix},
\end{align}
where $R_{\pm{}}=\sqrt{\Omega_{+{}} \pm \omega_{+{}}/2}$,
$S_{\pm{}}=\sqrt{\Omega_{-{}} \pm \omega_{-{}}/2}$ and $\omega_{\pm{}}$ are given
in Eq.~\eqref{Omegapmomegapm}. The spinors $u_{1,2}$ and $v_{1,2}$ in
Eq.~\eqref{MMMspinors} correspond to the eigenvalues $\lambda_{1,2}^{+{}}$ and
$\lambda_{1,2}^{-{}}$ respectively.
The general solution to the Schr\"odinger equation for the function
$\widetilde{\Psi}$ takes the form,
\begin{align}\label{MMMgensol}
\widetilde{\Psi}(t)= & \exp(-\mathrm{i}\bar{\mathcal{E}}t)
[\alpha_1 u_1 \exp(-\mathrm{i}\Omega_{+{}}t)+
\alpha_2 u_2 \exp(\mathrm{i}\Omega_{+{}}t)
\notag
\\
& +
\beta_1 v_1 \exp(-\mathrm{i}\Omega_{-{}}t)+
\beta_2 v_2 \exp(\mathrm{i}\Omega_{-{}}t)].
\end{align}
We again choose the coefficients $\alpha_{1,2}$ and $\beta_{1,2}$ in
Eq.~\eqref{MMMgensol} so as to satisfy the initial condition in
Eq.~\eqref{inicondqm}. These coefficients have to be chosen as
\begin{equation}\label{MMMcoeff}
\alpha_{1,2} = \pm \mathrm{i} \cos\theta
\frac{R_{\mp{}}}{\sqrt{2\Omega_{+{}}}},
\quad
\beta_{1,2} = \pm \mathrm{i} \sin\theta
\frac{S_{\pm{}}}{\sqrt{2\Omega_{-{}}}}.
\end{equation}
With the help of Eqs.~\eqref{rotbas} and~\eqref{MMMroots}-\eqref{MMMcoeff} we
obtain the right-handed components of the wave function $\Psi$ in the form
\begin{align}\label{MMMpsiR}
\psi_1^\mathrm{R}(t)= & \mu B \cos\theta
\exp[-\mathrm{i}(\bar{\mathcal{E}}-\omega/2)t]
\frac{\sin(\Omega_{+{}}t)}{\Omega_{+{}}},
\notag
\\
\psi_2^\mathrm{R}(t)= & \mu B \sin\theta
\exp[-\mathrm{i}(\bar{\mathcal{E}}-\omega/2)t]
\frac{\sin(\Omega_{-{}}t)}{\Omega_{-{}}}.
\end{align}
On the basis of Eq.~\eqref{MMMpsiR} we obtain the wave function
$\nu_\alpha^\mathrm{R}$ as
\begin{align}\label{MMMnualphaR}
\nu_\alpha^\mathrm{R}(t) = &
\mu B \exp[-\mathrm{i}(\bar{\mathcal{E}}-\omega/2)t]
\notag
\\
& \times
\left(
\cos^2\theta \frac{\sin \Omega_{+{}} t}{\Omega_{+{}}}-
\sin^2\theta \frac{\sin \Omega_{-{}} t}{\Omega_{-{}}}
\right).
\end{align}
Comparing Eq.~\eqref{MMMnualphaR} with the analogous expression~\eqref{nualphaRMM}
derived in the framework of the Dirac--Pauli equation approach, we again find
agreement up to the phase factor.
It should, however, be mentioned that within the quantum mechanical treatment of
spin-flavor oscillations one cannot reproduce the expression~\eqref{Phi} for the
phase of the neutrino oscillations. This means that in Eqs.~\eqref{DMMnualphaR}
and~\eqref{MMMnualphaR} we have the standard quantum mechanical vacuum oscillations
phase $\Phi_{QM}(k)=\delta m^2/(4k)$.
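The right-handed amplitudes in Eq.~\eqref{MMMpsiR} have the familiar two-level (Rabi) form: the normalization of the spinors in Eq.~\eqref{MMMspinors} implies $\Omega_{\pm}^2=\omega_{\pm}^2/4+(\mu B)^2$. As a numerical cross-check (a sketch only: the $2\times2$ block below is an assumed rotating-frame form with detuning $w$ and coupling $\mu B$, with the common phase $\bar{\mathcal{E}}$ dropped, not the full $4\times4$ Hamiltonian $\tilde{H}$), one can verify that such a block reproduces a transition amplitude $\mu B\,\sin(\Omega t)/\Omega$:

```python
import numpy as np

# Illustration only: an assumed 2x2 rotating-frame block with detuning w and
# coupling muB; the common phase E_bar of Eq. (MMMroots) is dropped.
w, muB = 0.7, 0.4                      # arbitrary sample values
Omega = np.sqrt(w**2 / 4 + muB**2)     # effective Rabi frequency
H = np.array([[w / 2, muB], [muB, -w / 2]])

# exact propagator exp(-i H t) via the eigendecomposition of the symmetric H
evals, evecs = np.linalg.eigh(H)
def U(t):
    return evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.T

max_err = 0.0
for t in np.linspace(0.0, 10.0, 101):
    psi = U(t) @ np.array([1.0, 0.0])            # start in the first state
    analytic = muB * np.sin(Omega * t) / Omega   # cf. Eq. (MMMpsiR)
    max_err = max(max_err, abs(abs(psi[1]) - abs(analytic)))
print("max deviation from |muB sin(Omega t)/Omega|:", max_err)
```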
\section{Applications\label{APPL}}
Let us discuss the applicability of our results to one specific oscillation
channel, $\nu_\mu^\mathrm{L}\xleftrightarrow{B}\nu_\tau^\mathrm{R}$.
According to recent neutrino oscillation data (see, e.g.,
Ref.~\cite{MalSchTorVal04}) the mixing angle between $\nu_\mu$ and $\nu_\tau$ is
close to its maximal value of $\pi/4$. For such a mixing angle the magnetic moment
matrix given in Eq.~\eqref{magmomme} is expressed in the form,
\begin{multline}
({\mu}_{ab})\approx
\\
\begin{pmatrix}
[{M}_{\tau\tau}+{M}_{\mu\mu}]/2+{M}_{\tau\mu} &
-[{M}_{\tau\tau}-{M}_{\mu\mu}]/2 \\
-[{M}_{\tau\tau}-{M}_{\mu\mu}]/2 &
[{M}_{\tau\tau}+{M}_{\mu\mu}]/2-{M}_{\tau\mu}
\end{pmatrix}.
\end{multline}
Eq.~\eqref{PtrLR0fin} is valid when this matrix is close to diagonal, i.e. if
$|({M}_{\tau\tau}-{M}_{\mu\mu})/2| \ll |({M}_{\tau\tau}+{M}_{\mu\mu})/2
\pm{M}_{\tau\mu}|$. In contrast to the mixing angles, experimental data and
theoretical predictions for the values of neutrino magnetic moments are not very
reliable~\cite{magnmomDM}. However, it is known that the diagonal magnetic moments
$M_{\lambda\lambda}$ can be very small in extensions of the standard model,
${M}_{\lambda\lambda}\sim
10^{-19}({m}_{\lambda\lambda}/\text{eV})\mu_\mathrm{B}$~\cite{nuMM}. The transition
magnetic moment, ${M}_{\tau\mu}$ in our notation, can be much greater, up to the
experimental limit of $10^{-10}\mu_\mathrm{B}$~\cite{Yao06}. One can see that for
any conceivable values of the masses of the known neutrinos,
${M}_{\mu\mu}$ and ${M}_{\tau\tau}$ are several orders of magnitude smaller than
$10^{-10}\mu_\mathrm{B}$. Our result~\eqref{PtrLR0fin} is valid in this case.
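The hierarchy of scales quoted above can be checked directly. In the sketch below the neutrino masses are illustrative sample values (not fitted quantities), the diagonal moments follow the $\sim 10^{-19}(m/\text{eV})\mu_\mathrm{B}$ estimate, and $M_{\tau\mu}$ is set to the experimental limit:

```python
# Sample neutrino masses (illustrative only) and the quoted magnetic-moment
# scales: diagonal M_ll ~ 1e-19 (m_l/eV) mu_B, transition M_taumu up to 1e-10 mu_B.
m_mu, m_tau = 0.05, 0.06                   # eV, hypothetical sample values
M_mumu   = 1e-19 * m_mu                    # in units of the Bohr magneton
M_tautau = 1e-19 * m_tau
M_taumu  = 1e-10                           # experimental limit

# diagonality condition: |(M_tt - M_mm)/2| << |(M_tt + M_mm)/2 +/- M_taumu|
off_diag = abs(M_tautau - M_mumu) / 2
ratio_max = max(off_diag / abs((M_tautau + M_mumu) / 2 + s * M_taumu)
                for s in (+1.0, -1.0))
print("worst off-diagonal/diagonal ratio:", ratio_max)
```

The ratio comes out many orders of magnitude below unity, so the diagonality condition is comfortably satisfied.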
It is worth mentioning that Eqs.~\eqref{solpsi0} and~\eqref{solpsi1} are in
principle applicable to particles with arbitrary initial conditions, e.g., with
small initial momenta. Therefore one can discuss the evolution and oscillations of
relic neutrinos. These particles can gravitationally cluster in the
Galaxy~\cite{RinWon05}. The temperatures of relic neutrinos and CMB photons,
$T_\nu$ and $T_\gamma$ respectively, are related by
\begin{equation}
T_\nu =
\left(
\frac{4}{11}
\right)^{1/3}
T_\gamma
\approx 0.72 T_\gamma.
\end{equation}
For the present value $T_\gamma = 2.7\thinspace\mathrm{K}$ we get
$T_\nu \approx 1.93\thinspace\mathrm{K}$~\cite{RinWon05}.
Using the estimate $m \sim 0.1\thinspace\mathrm{eV}$ for the neutrino mass, since
the sum of all neutrino masses should be less than
$1\thinspace\mathrm{eV}$~\cite{HanRaf06}, and taking into account that these
neutrinos are non-relativistic particles, one obtains the typical momentum of a
relic neutrino, $k \sim 7 \times 10^{-3}\thinspace\mathrm{eV}$. For example, to
realize the ``non-standard'' neutrino propagation regime described in
item~\ref{case1} of Sec.~\ref{SO}, one should use an undulator with the frequency
$|\omega| \sim 2k$, i.e. with the period $L = 2\pi/\omega \sim
0.1\thinspace\mathrm{mm}$.
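The estimates of this paragraph can be reproduced numerically. The thermal-momentum formula $k \approx \sqrt{3 m T_\nu}$ used below is our assumption for how the quoted typical momentum arises; the constants are the standard CODATA values:

```python
import math

k_B    = 8.617e-5        # Boltzmann constant in eV/K
hbar_c = 197.327e-9      # hbar*c in eV*m

T_gamma = 2.7                                  # K, present CMB temperature
T_nu = (4.0 / 11.0) ** (1.0 / 3.0) * T_gamma   # relic-neutrino temperature, ~1.93 K
m = 0.1                                        # eV, sample neutrino mass
# assumed thermal-momentum estimate for non-relativistic relic neutrinos
k = math.sqrt(3.0 * m * k_B * T_nu)            # typical momentum in eV
L = 2.0 * math.pi * hbar_c / (2.0 * k)         # undulator period for |omega| ~ 2k, in m
print(f"T_nu = {T_nu:.2f} K, k = {k:.1e} eV, L = {L * 1e3:.2f} mm")
```

This reproduces $k \approx 7 \times 10^{-3}$~eV and a period $L$ of order $0.1$~mm.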
Strong periodic electromagnetic fields with a short spatial oscillation length
can be found in crystals~\cite{Ugg05}. In Ref.~\cite{Bel03} it was proposed to
manufacture deformed crystals with submillimeter periods. The undulator radiation
produced in undulators with periods of $0.1$--$1\thinspace\mathrm{mm}$ was
recently reported in Ref.~\cite{Bar05}. Therefore, artificial crystalline
undulators with the required periods are already used in various experiments, and
hence they can serve as a good tool to explore the properties of relic neutrinos.
It should be mentioned that neutrino scattering on a polarized target and possible
tests of the neutrino magnetic moment were examined in Ref.~\cite{RasSem00}.
\section{Summary\label{CONCL}}
We have described the evolution of Dirac neutrinos in matter and in a twisting
magnetic field. We have applied the recently developed approach (see
Refs.~\cite{FOvac,Dvo06EPJC,DvoMaa07}) which is based on the exact solutions to the
Dirac equation in an external field with the given initial condition.
First (Sec.~\ref{SO}) we have found the solution to the Dirac equation for a
neutral spin-$1/2$ particle weakly interacting with background matter, which is
equivalent to an external axial-vector field, and non-minimally coupled to an
external electromagnetic field due to the possible presence of an anomalous
magnetic moment. We have discussed the situation when a neutrino interacts with the
twisting magnetic field. The energy spectrum and basis spinors have been obtained.
We have applied these results to derive the transition probability of spin
oscillations in matter under the influence of the twisting magnetic field. The
scope of the standard quantum mechanical approach to the description of neutrino
spin oscillations has been analyzed.
Then (Sec.~\ref{SFO}) we have used the obtained solution to the Dirac equation for
the description of neutrino spin-flavor oscillations in a twisting magnetic field.
We supposed that two Dirac neutrinos can mix and have a non-vanishing matrix of
magnetic moments. Moreover, the mass and magnetic moment matrices in the flavor
eigenstates basis are generally independent, i.e. the diagonalization of the mass
matrix, which corresponds to the transition to the mass eigenstates basis, does
not lead to the diagonalization of the magnetic moment matrix. We have discussed
two possibilities.
In Sec.~\ref{DMM} we have assumed that the magnetic moment matrix in the mass
eigenstates basis has large diagonal elements compared to the off-diagonal ones.
In this case one can analyze neutrino spin-flavor oscillations perturbatively.
Note that the perturbative approach allows one to discuss neutrinos with arbitrary
initial conditions. For instance, the evolution of particles with small initial
momenta can be accounted for. The appearance of non-perturbative phenomena like
resonances has been analyzed with the example of active-to-sterile neutrino
oscillations.
We have discussed the opposite situation in Sec.~\ref{MMM}, i.e. a magnetic
moment matrix with large off-diagonal elements. In this case one has to treat
the evolution of the system non-perturbatively. We have demonstrated that this
situation is analogous to beatings resulting from the superposition of two
oscillations. In both cases we have obtained neutrino wave functions, consistent
with the initial conditions, and the transition probabilities. Note that all the
results are in agreement with our previous work~\cite{DvoMaa07} if we set
$\omega=0$, i.e. discuss a constant transversal magnetic field. We have also
examined some limiting cases and compared our results with the previous studies.
It has been shown in Sec.~\ref{QM} that one can derive the analog of the major
results obtained in Secs.~\ref{DMM} and~\ref{MMM} using the Schr\"odinger evolution
equation approach for the description of spin-flavor oscillations of Dirac
neutrinos. The correspondence between these two approaches has been considered.
In Sec.~\ref{APPL} we have discussed magnetic moment matrices in various
theoretical models which predict neutrino magnetic moments. The validity of our
approach in these situations has been considered. The applications of our results
to the study of cosmological neutrinos in laboratory conditions have been
examined. In particular, we have suggested that artificial crystalline undulators
could be useful for such research.
The results obtained in the present work are valid for arbitrary magnetic field
strength. The general case of spin-flavor oscillations of Dirac neutrinos in a
twisting magnetic field with an arbitrary magnetic moment matrix has not been
studied analytically earlier. Both experimental and theoretical information about
the magnetic moments of Dirac neutrinos is known to be very limited (see, e.g.,
Refs.~\cite{magnmomDM,Yao06}). Therefore our results can be helpful, since they
enable one to describe phenomenologically the spin-flavor oscillations of Dirac
neutrinos under the influence of the magnetic field in question, provided that the
neutrinos possess a non-vanishing matrix of magnetic moments. Although we consider
neutrinos, our formalism can be straightforwardly applied to the description of
any spin-$1/2$ particles.
\begin{acknowledgments}
The work has been supported by the Academy of Finland under the
contract No.~108875. The author is thankful to the Russian Science Support
Foundation for a grant as well as to Efrain Ferrer (Western Illinois University)
and Kimmo Kainulainen (University of Jyv\"askyl\"a) for useful comments. The
referees' remarks are also appreciated.
\end{acknowledgments}
\section{Interacting dynamics in FLRW universe}
\label{dynamics}
Let us consider the homogeneous and isotropic universe characterized by the
Friedmann--Lema\^{\i}tre--Robertson--Walker (FLRW) line element
$${\rm d}{\rm s}^2= -{\rm d}t^2+
a^2 (t) \left[{\rm d}r^2/(1-kr^2)+ r^2 ({\rm d} \theta^2+ \sin^2 \theta\, {\rm d}
\phi^2)\right],$$
where $a(t)$ is the scale factor of the universe and $k$ is the spatial curvature,
which corresponds to a flat, open, or closed universe for $k = 0$, $-1$, and $+1$,
respectively. In such a background, the first Friedmann equation can be written as
\begin{eqnarray}
H^2+ \frac{k}{a^2}=\frac{8\pi G}{3}\rho,\label{friedmann1}
\end{eqnarray}
where $H= \dot{a}/a$, is the Hubble rate of the FLRW universe; $\rho$ is the total
energy density of the universe which is the mixture of baryons, cold dark matter and
dark energy, i.e. $\rho= \rho_b + \rho_{dm}+ \rho_{d}$, where $\rho_b$, $\rho_{dm}$,
$\rho_d$ are respectively the energy densities of baryons, CDM and DE. We further
assume that CDM and DE are interacting with each other while baryons do
not take part in the interaction. The energy
conservation equation for the total fluid follows
\begin{eqnarray}
\dot{\rho}+ 3 \frac{\dot{a}}{a} (p+\rho)= 0.\label{continuity}
\end{eqnarray}
Since baryons do not take part in the interaction, their evolution follows
$\dot{\rho}_b+ 3 H \rho_b = 0$, i.e. $\rho_b =
\rho_{b0}\, a^{-3}$, while the evolution equations for CDM and DE read
\begin{eqnarray}
\dot{\rho}_{dm}+ 3 H \rho_{dm}&=& Q,\label{conservation1}\\
\dot{\rho}_d+ 3 H (1+ w_d) \rho_d&=& -Q,\label{conservation2}
\end{eqnarray}
where $Q$ is the interaction function between the dark sectors.
Physically, the interaction is characterized by an energy
flow between the two dark sectors. A positive interaction
(i.e. $Q> 0$) implies a flow of energy
from DE to CDM, while a negative value denotes an energy
flow in the opposite direction.
Now, introducing the total energy density of CDM and DE as
$\rho_T= \rho_{dm}+ \rho_d$, it is easy to see that the combination of eqns.
(\ref{conservation1}) and (\ref{conservation2}) turns into
\begin{align}
\dot{\rho}_T+ 3 \frac{\dot{a}}{a} (p_T+\rho_T) &= 0.\label{continuity-total}
\end{align}
Now, using Eq.~(\ref{continuity-total}), one can express $\rho_d$ and $\rho_{dm}$ as follows:
\begin{eqnarray}
\rho_d &=&-\frac{\rho_T+ \rho^\prime_T}{w_d},\label{de}\\
\rho_{dm} &=&\frac{\rho^\prime_T+(1+ w_d)\rho_T}{w_d}.\label{dm}
\end{eqnarray}
Here primes denote derivatives
with respect to the variable $x= 3 \ln (a/a_0) = 3 \ln a$
(we set $a_0= 1$, as is usual, without loss of generality).
Thus, once $\rho_T$ is determined, the evolution of CDM and DE can be
reconstructed. In the present study we concentrate on an interaction function
which is a linear combination of the energy densities of CDM and DE. In the next
section we discuss the interacting scenarios for both constant and dynamical
equations of state in DE.
\section{Variants of the model}
\label{analytic}
We introduce the following interaction \cite{Quartin2008, PBC2015}
\begin{equation}
Q= 3\lambda_m H \rho_{dm} +3\lambda_d H \rho_d,\label{interaction1}
\end{equation}
where $\lambda_m$, $\lambda_d$ are the coupling parameters that denote the strength (with their magnitudes) and the direction of energy flow (with their signs) between the interacting sectors.
Due to the expression (\ref{interaction1}) the conservation equations
(\ref{conservation1}) and (\ref{conservation2}) are modified, and finally, we get the
following second order differential equation:
\begin{eqnarray}
\rho^{\prime\prime}_T &+& \left(2+ w_d+ \lambda_d- \lambda_m- \frac{w^\prime _d}{w_d}
\right) \rho^\prime_T \nonumber \\
&+& \left[(1+ w_d)(1- \lambda_m)+ \lambda_d-
\frac{w^\prime _d}{w_d} \right] \rho_T= 0, \label{diffeqn}
\end{eqnarray}
which is the master equation to determine the evolution of CDM and DE. Let us
proceed with two different possibilities with the equation of state in DE,
namely when it is either constant or dynamical with the cosmic evolution.
\subsection{The case for constant EoS in DE}
If $w_d={}$ constant, the solution of the differential equation (\ref{diffeqn}) becomes
\cite{Chimento2010}
\begin{equation}
\rho_T= \rho_1 a^{3 m_{+}} + \rho_2 a^{3 m_{-}},\label{energy-const}
\end{equation}
where $\rho_1$, $\rho_2$ are integration constants, $m_{+}$, $m_{-}$ are
\begin{equation}
m_{\pm}= \frac{\lambda_m-w_d-\lambda_d-2\pm \sqrt{(\lambda_m+
w_d+\lambda_d)^2-4\lambda_m \lambda_d}}{2}.\nonumber
\end{equation}
One can see that in this case the Friedmann equation (\ref{friedmann1}) yields the Hubble function in the analytic form
\begin{eqnarray*}
H^2 = \frac{8\pi G}{3} \Bigl[ \rho_{b0} a^{-3} + \rho_1 a^{3m_{+}}+ \rho_2 a^{3m_{-}} \Bigr] - \frac{k}{a^2}.
\end{eqnarray*}
Now, using (\ref{energy-const}), we have the explicit analytic solutions for CDM
and dark energy as follows:
\begin{eqnarray}
\rho_{dm}&=& \rho_1 \frac{w_d+1+ m_{+}}{w_d}\, a^{3m_{+}}+ \rho_2 \frac{w_d+1+ m_{-}}{w_d}\, a^{3 m_{-}},\nonumber \label{dust-analytic}\\
\rho_d&=&-\frac{\rho_1 (1+ m_{+})\, a^{3 m_{+}}+ \rho_2 (1+ m_{-})\, a^{3
m_{-}}}{w_d}.\nonumber \label{DE-analytic}
\end{eqnarray}
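Since $x = 3\ln a$ implies $a^{3m} = e^{mx}$, substituting $\rho_T \propto e^{mx}$ into Eq.~(\ref{diffeqn}) with constant $w_d$ gives a quadratic characteristic equation. A quick numerical sketch (with sample coupling values of our choosing) confirms that the closed-form exponents $m_{\pm}$ above are its roots:

```python
import math

def m_pm(w_d, lam_m, lam_d):
    """Closed-form exponents m_plus, m_minus quoted below Eq. (energy-const)."""
    disc = math.sqrt((lam_m + w_d + lam_d) ** 2 - 4.0 * lam_m * lam_d)
    base = lam_m - w_d - lam_d - 2.0
    return (base + disc) / 2.0, (base - disc) / 2.0

def char_poly(m, w_d, lam_m, lam_d):
    """Characteristic polynomial of Eq. (diffeqn) for rho_T ~ exp(m x), w_d const."""
    return (m * m
            + (2.0 + w_d + lam_d - lam_m) * m
            + (1.0 + w_d) * (1.0 - lam_m) + lam_d)

w_d, lam_m, lam_d = -0.95, 0.1, 0.05       # sample coupling values
mp, mm = m_pm(w_d, lam_m, lam_d)
res = max(abs(char_poly(mp, w_d, lam_m, lam_d)),
          abs(char_poly(mm, w_d, lam_m, lam_d)))
print("roots:", mp, mm, "max residual:", res)
```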
We mention that in Ref. \cite{PBC2015} the analytic
solution for this particular linear interaction was discussed
assuming that the magnitudes of both coupling parameters are very small, which
means that the product $\lambda_m \lambda_d$ was neglected and the cosmological
scenario was analyzed for the solution with $m_{+}= -(1- \lambda_m)$,
$m_{-}= -(1+\lambda_d+ w_d)$. Certainly, a detailed analysis without such a
restriction is worth investigating. Moreover, the analysis of this model was
performed with $194$ Type Ia supernovae data points from \cite{Tonry2003,
Barris2004}, which need to be updated with the latest observational data.
Thus, in comparison with the previous study,
the present one has a twofold importance:
(i) the solution (\ref{energy-const}) for general ($m_{+}, m_{-}$) completes the
study without any loss of information, and (ii) here we employ current
observational data, which provide better constraints on all model parameters.
Under (i) and (ii), the present analytic interacting dark energy model could
produce interesting information about this interacting dark energy--dark matter
scenario when constrained by recent observational data sets.
Further, the usual density parameters for dark matter ($\Omega_{dm0}$)
and dark energy ($\Omega_{d0}$) in terms of the density
parameters for the equivalent two fluids $\Omega_1$ and $\Omega_2$ are given by
\begin{eqnarray}
\Omega_{dm0}&=& \Omega_1\frac{w_d+1+ m_{+}}{w_d}+ \Omega_2 \frac{w_d+1+m_{-}}{w_d},\label{Omega_matter} \\
\Omega_{d0}&=& -\frac{\Omega_1 (1+ m_{+})+ \Omega_2
(1+ m_{-})}{w_d},\label{Omega_DE}
\end{eqnarray}
where $\Omega_{i}= 8\pi G\rho_{i}/3 H_0^2$.
The values $\Omega_1$, $\Omega_2$ can be expressed by using the above two equations
(\ref{Omega_matter}), (\ref{Omega_DE}), their consequence $\Omega_{dm0}+ \Omega_{d0}=
\Omega_1+ \Omega_2$ and the equality
$$
\Omega_{dm0}+ \Omega_{d0}+\Omega_{b0}+\Omega_{k}=\Omega_{1}+
\Omega_{2}+\Omega_{b0}+\Omega_{k}= 1,
$$
which follows from Eq.~(\ref{friedmann1}) at the present time $t=t_0$. Here
$\Omega_{b0}=\Omega_b(t_0)$, $\Omega_{k}= -k/(a_0H_0)^2$. In particular,
\begin{equation}
\Omega_1=\frac{w_d\Omega_{dm0}-(1+w_d+ m_{-})(1-\Omega_{b0}-\Omega_{k})}{m_{+}-m_{-}}.
\label{Omega1}
\end{equation}
Also, we note that the total density parameter for matter is
$\Omega_{m0}= \Omega_{dm0}+ \Omega_{b0}$.
In Sect.~\ref{data-analysis} (see Fig.~\ref{F1}) we investigate how the solutions
(\ref{energy-const})
describe the observational data for Type Ia supernovae, baryon acoustic
oscillations, the Hubble parameter $H(z)$, and the CMB.
\subsection{Variable EoS in DE}
\label{variable}
In this section we focus on the interacting models
where the EoS in DE, $w_d$, is dynamical.
There are several interacting dark energy models with a variable EoS in DE for
which considerable attention has been paid to observational data. In
Ref.~\cite{WW2014}, the authors investigated an interacting scenario with
$Q= 3 H \lambda_m \rho_m$ and the Chevallier--Polarski--Linder (CPL)
parametrization \cite{CP2001, Linder2003} as the equation of state in DE. Also, in
Ref. \cite{HW2008} the authors studied the present linear interaction
(\ref{interaction1}) with the CPL parametrization, but with rather old data (182
Gold Type Ia Supernovae data points \cite{Riess-Gold}). Thus, considering the
linear interaction (\ref{interaction1}), we aim to investigate the interacting
dynamics between CDM and DE with several variable equations of state $w_d$,
including the CPL \cite{CP2001, Linder2003} and linear \cite{Astier2001, CH1999,
WA2002} parametrizations, using the Union 2.1 compilation \cite{Suzuki2012} along
with Hubble parameter measurements, baryon acoustic oscillation and CMB data.
Let us first begin our analysis with the
following generalized ansatz
\begin{equation}
\frac{w^{\prime}_d}{w_d}= \alpha\, w_d +\beta,
\label{GA}
\end{equation}
where $\alpha$, $\beta$ are real numbers. We note that the EoS (\ref{GA}) is the
generalized version of the variable EoS of DE presented in
\cite{PBC2015}.
The solution of Eq.~(\ref{GA}) is
\begin{equation}
w_d= \left[ \left(\frac{1}{w_{d0}} + \frac{\alpha}{\beta}\right) a^{-3\beta} -
\frac{\alpha}{\beta} \right]^{-1}.
\label{solution-GA}
\end{equation}
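A finite-difference sketch (with sample values of $\alpha$, $\beta$, $w_{d0}$ chosen for illustration) confirms that Eq.~(\ref{solution-GA}) satisfies the ansatz (\ref{GA}), with primes denoting $d/dx$ and $x = 3\ln a$:

```python
import math

alpha, beta, w_d0 = 0.5, 0.3, -0.9   # sample parameter values (illustrative)

def w_d(x):
    # Eq. (solution-GA), using a^{-3 beta} = exp(-beta x) for x = 3 ln a
    return 1.0 / ((1.0 / w_d0 + alpha / beta) * math.exp(-beta * x)
                  - alpha / beta)

h = 1e-6
max_err = 0.0
for x in (-1.0, -0.5, 0.0, 0.5, 1.0):
    w = w_d(x)
    wprime = (w_d(x + h) - w_d(x - h)) / (2.0 * h)   # central difference in x
    # residual of the ansatz w'_d / w_d = alpha * w_d + beta
    max_err = max(max_err, abs(wprime / w - (alpha * w + beta)))
print("max residual of w'_d/w_d = alpha w_d + beta:", max_err)
```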
In particular, we consider the following partial cases
\begin{eqnarray}
\mbox{Ansatz I:}\;~~~~\alpha&=&0,\qquad w_d=w_{d0} a^{3\beta}; \label{sol1-GA}\\
\mbox{Ansatz II:}\;~~~~\beta&=&0,\;\;w_d=\frac{w_{d0}}{(1-3 \alpha w_{d0} \ln a)}.
\label{sol2-GA}
\end{eqnarray}
We also consider separately the following ansatz:
\begin{equation}
\mbox{Ansatz III:}~\qquad \alpha=1, \label{Ans3}
\end{equation}
which is attractive because, in this case, under the condition $\lambda_m= 0$ the
coefficients of Eq.~(\ref{diffeqn}) become constant and its general solution has
the simple form \cite{PBC2015}
\begin{equation}
\rho_T= \tilde{\rho}_1 a^{-3}+ \tilde{\rho}_2 a^{3 (n-1)},\label{variable-total-Energy}
\end{equation}
where $n=\beta-\lambda_d={}$const, and $\tilde{\rho}_1> 0$, $\tilde{\rho}_2> 0$ are
integration constants.
Moreover, we also consider two more interacting scenarios
when the EoS of DE obeys the Chevallier-Polarski-Linder (CPL)
parametrization \cite{CP2001, Linder2003}
\begin{equation}
\mbox{Ansatz IV:} \qquad \qquad \qquad w_d (z)= w_{d0}+ w_1\frac{ z}{1+z},
\label{Ans4}
\end{equation}
and the linear parametrization \cite{Astier2001, CH1999, WA2002}
\begin{equation}
\mbox{Ansatz V:} \qquad \qquad \qquad w_d (z)= w_{d0}+ w_1 z.
\label{Ans5}\end{equation}
Here, in both (\ref{Ans4}) and (\ref{Ans5}), $w_{d0}$ and $w_1 = dw_d(z)/dz$ at
$z=0$ are two free parameters to be constrained by the observational data. The
dependencies of $w_d(z)$ in (\ref{Ans4}) and (\ref{Ans5}) are alternatives to
(\ref{solution-GA}).
\section{Joint Analysis}
\label{data-analysis}
In order to constrain the proposed models with recent observational data,
we use $N_{SN}=580$ data points for Type Ia supernovae from Union 2.1 \cite{Suzuki2012}, $N_H=39$ observed Hubble data points \cite{Simon2005, Stern2010,
Moresco2012, Zhang2014, Moresco2015, Moresco2016,
GCH2009, Blake2012, Busca2013, CW2013, Chuang2013, Anderson2014a,%
Anderson2014b, Oka2014, Font-Ribera2014, Delubac2015} and $N_{BAO}=17$ baryon acoustic
oscillation data \cite{GCH2009, Blake2012, Busca2013, CW2013, Chuang2013, Anderson2014a, Anderson2014b, Oka2014, Font-Ribera2014, Delubac2015, Percival2010, Kazin2010, Beutler2011, Blake2011, Padmanabhan2012, Seo2012, Kazin2014, Ross2015, Hinshaw2013},
and finally the cosmic microwave background radiation (CMB) data
in the form suggested in Ref.~\cite{HWW2015}.
Our analysis follows the likelihood $\mathcal{L} \propto
\exp(-\chi^2/2)$, where $\chi^2 = \sum_{i} \chi^2_{i}$ ($i$ runs over all the data sets
employed in the analysis). We calculate the best-fit values of the free model
parameters with their corresponding uncertainties
from the minimization of the $\chi^2$ function. We use two different combined
analyses with the likelihoods $\mathcal{L}_\Sigma\propto
\exp(-\chi^2_{\Sigma}/2)$ and $\mathcal{L}_{tot}\propto \exp(-\chi^2_{tot}/2)$,
where
\begin{eqnarray}
\chi^2_{\Sigma}&=& \chi^2_{SN}+ \chi^2_{H}+ \chi^2_{BAO},\label{total-chi21}\\
\chi^2_{tot}&=& \chi^2_{SN}+ \chi^2_{H}+ \chi^2_{BAO}+ \chi^2_{CMB}.\label{total-chi2}
\end{eqnarray}
In the next subsections we briefly describe the different data sets and the
corresponding $\chi^2$ functions.
\subsection{Union 2.1 data points}
Type Ia supernovae provided the first indication of the existence of dark energy in our
Universe \cite{Riess1998, Perlmutter1999}. The observable quantities from a Type Ia
supernova (SN Ia) are its redshift $z$ and its apparent magnitude $m_{obs}$, but in the
survey \cite{Suzuki2012} values $m_{obs}$ are recalculated into distance modulus
\begin{equation}
\mu_{obs}=m_{obs}(z)-M+\bar{\alpha} x_1-\bar{\beta} c+\bar{\delta} P.
\label{muobs} \end{equation}
Here, additive terms include the SN Ia absolute magnitude $M$ and corrections connected
with deviations from mean values of lightcurve shape ($x_1$), SN Ia color ($c$) and mass
of a host galaxy (the factor $P$). The parameters $M$, $\bar{\alpha}$, $\bar{\beta}$ and
$\bar{\delta}$ are considered in Ref. \cite{Suzuki2012} as nuisance parameters,
{and} they are fitted simultaneously with $H_0$ and other cosmological
parameters in the flat $\Lambda$CDM model. This approach is usual in SN Ia analysis
\cite{NP2005, Conley2011, Ruiz2012}. So the values (\ref{muobs}) in Ref. \cite{Suzuki2012} may
have a model-dependent additive term (a systematic error) with a concealed dependence on
$H_0$ and other model parameters.
We have to keep this fact in mind when we compare the observable values (\ref{muobs}) from
Ref. \cite{Suzuki2012} with theoretical values of the distance modulus corresponding to
redshift $z$:
\begin{equation}
\mu_{th}(z)= 5 \log_{10} \left(\frac{D_L (z)}{10\mbox{pc}}\right)
=5\log_{10}\frac{H_0D_L}c+\mu_0.
\label{mu}
\end{equation}
Here, $\mu_0=42{.}384-5\log_{10}h$, $D_L (z)$ is the luminosity distance \cite{Riess1998, NP2005}
\begin{equation}
D_L(z)=\frac{c\,(1+z)}{H_0}S_k
\bigg(H_0\int\limits_0^z\frac{d\tilde z}{H(\tilde
z)}\bigg) \label{DL} \end{equation}
with
$$S_k(x)=\left\{\begin{array}{ll} \sinh\big(x\sqrt{\Omega_k}\big)\big/\sqrt{\Omega_k}, &\Omega_k>0,\\
x, & \Omega_k=0,\\ \sin\big(x\sqrt{|\Omega_k|}\big)\big/\sqrt{|\Omega_k|}, &
\Omega_k<0.
\end{array}\right.$$
The value $H_0D_L/c$ in Eq.~(\ref{mu}) is the Hubble-free luminosity distance (for the
majority of cosmological models), and only the term $\mu_0$ \cite{NP2005} depends on the
Hubble constant $H_0$ or $h=H_0/100$ km\,s${}^{-1}$Mpc${}^{-1}$.
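For illustration, Eq.~(\ref{DL}) can be evaluated numerically. The sketch below assumes a flat $\Lambda$CDM background with sample parameters (our choice, not the interacting models of Sect.~\ref{analytic}) and converts $D_L$ to the distance modulus of Eq.~(\ref{mu}):

```python
import math

c = 299792.458                 # speed of light, km/s
H0, Om, Ok = 70.0, 0.3, 0.0    # sample flat-LambdaCDM parameters

def H(z):
    return H0 * math.sqrt(Om * (1.0 + z) ** 3 + (1.0 - Om))

def S_k(x, Ok):
    if Ok > 0:
        return math.sinh(x * math.sqrt(Ok)) / math.sqrt(Ok)
    if Ok < 0:
        return math.sin(x * math.sqrt(-Ok)) / math.sqrt(-Ok)
    return x

def D_L(z, n=2000):
    # trapezoidal integration of dz'/H(z') from 0 to z; result in Mpc
    dz = z / n
    integral = sum((1.0 / H(i * dz) + 1.0 / H((i + 1) * dz)) * dz / 2.0
                   for i in range(n))
    return c * (1.0 + z) / H0 * S_k(H0 * integral, Ok)

dl = D_L(1.0)
mu = 5.0 * math.log10(dl * 1.0e5)   # distance modulus; 10 pc = 1e-5 Mpc
print(f"D_L(z=1) = {dl:.0f} Mpc, mu = {mu:.2f}")
```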
For any cosmological model, we fix its model parameters $\theta_1,\theta_2,\dots$,
calculate functions $a(t)$, $z=a^{-1}-1$,
$H(z)$, the integral (\ref{DL}),
and hence this model predicts theoretical values $D_L^{th}$ or $\mu_{th}$ for the modulus (\ref{mu}). To compare
these theoretical values with the observational data $z_i$ and $\mu_{obs}(z_i)$
\cite{Suzuki2012} we use the $580\times580$ covariance matrix $C_{SN}$ from
Ref. \cite{Suzuki2012} and the function
\begin{equation}
\tilde\chi^2_{SN}(\theta_1,\dots)= \sum_{i,j=1}^{N_{SN}}
\Delta\mu_i\big(C_{SN}^{-1}\big)_{ij} \Delta\mu_j,\label{chi2a}
\end{equation}
where $\Delta\mu_i=\mu_{th}(z_i,\theta_1,\dots)-\mu_{obs}(z_i).$
To exclude the possible systematic errors in
$\mu_{obs}$ mentioned above, we
follow the marginalization procedure suggested in Ref. \cite{NP2005} and consider
below the minimum of the sum (\ref{chi2a}) over $H_0$ (or, equivalently, over $\mu_0$):
\begin{eqnarray}
\chi^2_{SN}=\min\limits_{\mu_0}\tilde\chi^2_{SN}= \tilde\chi^2_{SN}\Big|_{\mu_0=0}&-&
\frac{B^2}C, \label{chi2m}\\
B=\sum_{i,j=1}^{N_{SN}}(\Delta\mu_i-\mu_0)\big(C_{SN}^{-1}\big)_{ij},\,&\;&\,
C=\sum_{i,j=1}^{N_{SN}} \big(C_{SN}^{-1}\big)_{ij}. \nonumber
\end{eqnarray}
In this paper, for all models we use the marginalized function (\ref{chi2m}) to describe
the SNe Ia data \cite{Suzuki2012}.
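Since $\tilde\chi^2_{SN}$ is quadratic in the additive constant $\mu_0$, $\tilde\chi^2(\mu_0)=A-2\mu_0 B+\mu_0^2 C$ with $A=\tilde\chi^2|_{\mu_0=0}$, the analytic marginalization (\ref{chi2m}) must coincide with a direct numerical minimization. The sketch below verifies this on synthetic data (not the Union~2.1 set):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
dmu = rng.normal(0.0, 0.2, n) + 0.3       # synthetic residuals with an offset
M = rng.normal(size=(n, n))
Cov = M @ M.T / n + 0.05 * np.eye(n)      # random positive-definite covariance
Cinv = np.linalg.inv(Cov)
ones = np.ones(n)

A = dmu @ Cinv @ dmu                       # chi2 at mu_0 = 0
B = dmu @ Cinv @ ones
C = ones @ Cinv @ ones
chi2_marg = A - B ** 2 / C                 # analytic minimum, Eq. (chi2m)

mu0 = np.linspace(-5.0, 5.0, 1_000_001)    # brute-force scan over mu_0
chi2_scan = (A - 2.0 * mu0 * B + mu0 ** 2 * C).min()
print(chi2_marg, chi2_scan)
```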
\begin{table*}
\begin{tabular}{||l|l|l|c||l|l|l|c||} \hline
$z$ & $H_{obs}(z)$ &$\sigma_H$ & References & $z$ & $H_{obs}(z)$ & $\sigma_H$ & References\\ \hline
0.070 & 69 & 19.6& \cite{Zhang2014} & 0.570 & 96.8& 3.4 & \cite{Anderson2014b}\\ \hline
0.090 & 69 & 12 & \cite{Simon2005} & 0.593 & 104 & 13 & \cite{Moresco2012} \\ \hline
0.120 & 68.6& 26.2& \cite{Zhang2014} & 0.600 & 87.9& 6.1 & \cite{Blake2012} \\ \hline
0.170 & 83 & 8 & \cite{Simon2005} & 0.680 & 92 & 8 & \cite{Moresco2012} \\ \hline
0.179 & 75 & 4 & \cite{Moresco2012}& 0.730 & 97.3& 7.0 & \cite{Blake2012} \\ \hline
0.199 & 75 & 5 & \cite{Moresco2012} & 0.781 & 105 & 12 & \cite{Moresco2012}\\ \hline
0.200 & 72.9& 29.6& \cite{Zhang2014} & 0.875 & 125 & 17 & \cite{Moresco2012}\\ \hline
0.240 &79.69& 2.99& \cite{GCH2009} & 0.880 & 90 & 40 & \cite{Stern2010} \\ \hline
0.270 & 77 & 14 & \cite{Simon2005} & 0.900 & 117 & 23 & \cite{Simon2005} \\ \hline
0.280 & 88.8& 36.6& \cite{Zhang2014} & 1.037 & 154 & 20 & \cite{Moresco2012}\\ \hline
0.300 & 81.7& 6.22& \cite{Oka2014} & 1.300 & 168 & 17 & \cite{Simon2005} \\ \hline
0.340 & 83.8& 3.66& \cite{GCH2009} & 1.363 & 160 & 33.6& \cite{Moresco2015}\\ \hline
0.350 & 82.7& 9.1 & \cite{CW2013}& 1.430 & 177 & 18 & \cite{Simon2005} \\ \hline
0.352 & 83 & 14 & \cite{Moresco2012}& 1.530 & 140 & 14 & \cite{Simon2005} \\ \hline
0.400 & 95 & 17 & \cite{Simon2005} & 1.750 & 202 & 40 & \cite{Simon2005} \\ \hline
0.429 & 91.8& 5.3 & \cite{Moresco2016}& 1.965 &186.5& 50.4& \cite{Moresco2015}\\ \hline
0.430 &86.45& 3.97& \cite{GCH2009} & 2.300 & 224 & 8.6 & \cite{Busca2013} \\ \hline
0.440 & 82.6& 7.8 & \cite{Blake2012} & 2.340 & 222 & 8.5 & \cite{Delubac2015}\\ \hline
0.480 & 97 & 62 & \cite{Stern2010} & 2.360 & 226 & 9.3 & \cite{Font-Ribera2014}\\ \hline
0.570 & 87.6& 7.8 & \cite{Chuang2013} & & & & \\ \hline
\end{tabular}
\caption{Hubble parameter values $H_{obs}$ in km\,s$^{-1}$Mpc$^{-1}$ at different
redshifts $z$ with corresponding errors $\sigma_H$.} \label{H-data}
\end{table*}
\subsection{Hubble parameter data}
The Hubble parameter $H$ at a given redshift $z$ can be measured from the differential
ages of galaxies \cite{Simon2005, Stern2010, Moresco2012, Zhang2014, Moresco2015, Moresco2016} using
the following formula:
$$
H (z)= \frac{\dot{a}}{a} = -\frac{1}{1+z}
\frac{dz}{dt}. $$
In addition, estimates of $H(z)$ may be extracted from line-of-sight BAO data
\cite{GCH2009, Blake2012, Busca2013, CW2013, Chuang2013, Anderson2014a, Anderson2014b, Oka2014, Font-Ribera2014, Delubac2015}.
In this analysis we use $N_H=39$ observed Hubble parameter values
\cite{Simon2005, Stern2010, Moresco2012, Zhang2014, Moresco2015, Moresco2016, GCH2009, Blake2012, Busca2013, CW2013, Chuang2013, Anderson2014a, Anderson2014b, Oka2014, Font-Ribera2014, Delubac2015} in the range $0.070 \leq z \leq 2.36$, which are listed in Table~\ref{H-data}. The
corresponding $\chi^2_{H}$ is defined as
\begin{equation}
\chi^2_{H}= \sum_{i=1}^{N_H} \left[\frac{H_{obs}(z_i)-H_{th}(z_i,
\theta_j)}{\sigma_{H,i}}\right]^2.\label{chi-OHD}
\end{equation}
\subsection{BAO data}
\begin{table*
\begin{tabular}{||l|l|l|l|l|c|l||} \hline
$z$ & $d_z(z)$ &$\sigma_d$ & ${ A}(z)$ & $\sigma_A$ & References & Survey\\ \hline
0.106& 0.336 & 0.015 & 0.526& 0.028& \cite{Beutler2011, Hinshaw2013} & 6dFGS \\ \hline
0.15 & 0.2232 & 0.0084& - & - & \cite{Ross2015} & SDSS DR7 \\ \hline
0.20 & 0.1905 & 0.0061& 0.488& 0.016& \cite{Percival2010, Blake2011} & SDSS DR7 \\ \hline
0.275& 0.1390 & 0.0037& - & - & \cite{Percival2010}& SDSS DR7 \\ \hline
0.278& 0.1394 & 0.0049& - & - & \cite{Kazin2010} &SDSS DR7 \\ \hline
0.314& 0.1239 & 0.0033& - & - & \cite{Blake2011}& SDSS LRG \\ \hline
0.32 & 0.1181 & 0.0026& - & - & \cite{Anderson2014b} &BOSS DR11 \\ \hline
0.35 & 0.1097 & 0.0036& 0.484& 0.016& \cite{Percival2010, Blake2011} &SDSS DR7 \\ \hline
0.35 & 0.1126 & 0.0022& - & - & \cite{Padmanabhan2012} &SDSS DR7 \\ \hline
0.35 & 0.1161 & 0.0146& - & - & \cite{CW2013} &SDSS DR7 \\ \hline
0.44 & 0.0916 & 0.0071& 0.474& 0.034& \cite{Blake2011} & WiggleZ \\ \hline
0.57 & 0.0739 & 0.0043& 0.436& 0.017& \cite{Chuang2013}& SDSS DR9 \\ \hline
0.57 & 0.0726 & 0.0014& - & - & \cite{Anderson2014b} & SDSS DR11 \\ \hline
0.60 & 0.0726 & 0.0034& 0.442& 0.020& \cite{Blake2011} & WiggleZ \\ \hline
0.73 & 0.0592 & 0.0032& 0.424& 0.021& \cite{Blake2011} &WiggleZ \\ \hline
2.34 & 0.0320 & 0.0021& -& - & \cite{Delubac2015} & BOSS DR11 \\ \hline
2.36 & 0.0329 & 0.0017& -& - & \cite{Font-Ribera2014} & BOSS DR11 \\ \hline
\end{tabular}
\caption{Values of $d_z(z)=r_s(z_d)/D_V(z)$ and $A(z)$ (\ref{dzAz}) with errors and
references.} \label{TBAO}
\end{table*}
Observational data connected with baryon acoustic oscillations (BAO) include the
distance \cite{Eisenstein2005}
$$
D_V(z)=\bigg[\frac{cz D_L^2(z)}{(1+z)^2H(z)}\bigg]^{1/3},
$$
and two measured values
\begin{equation}
d_z(z)= \frac{r_s(z_d)}{D_V(z)},\qquad
A(z) = \frac{H_0\sqrt{\Omega_{m0}}}{cz}D_V(z).
\label{dzAz} \end{equation}
Here $r_s(z_d)$ is the sound horizon size at the end of the drag era $z_d$.
In this paper we
use the fitting formula from Ref.~\cite{Aubourg2015}
\begin{equation}
r_s(z_d)=\frac{55.154\exp\big[72.3(\Omega_\nu h^2 + 0.0006)^2\big]}
{(\Omega_{m0} h^2)^{0.25351} (\Omega_{b0} h^2)^{0.12807}}\mbox{ Mpc}.
\label{rsA} \end{equation}
Here the dependence on the neutrino contribution $\Omega_\nu$ is negligible for reasonable
values $\sum m_\nu \le 0.23$ eV \cite{Planck2015} (below we assume $\sum m_\nu=0.06$ eV \cite{Planck2015, Aubourg2015}).
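For orientation, evaluating Eq.~(\ref{rsA}) with $\Omega_{b0}=0.044$, sample values $h=0.7$ and $\Omega_{m0}=0.3$, and $\sum m_\nu = 0.06$ eV (converted via $\Omega_\nu h^2 = \sum m_\nu/93.14\,\mathrm{eV}$, an assumed standard conversion not stated in the text) gives a sound horizon close to the commonly quoted $\approx 147$ Mpc:

```python
import math

def r_s(Om_h2, Ob_h2, Onu_h2):
    """Sound horizon at the drag epoch, fitting formula of Eq. (rsA), in Mpc."""
    return (55.154 * math.exp(72.3 * (Onu_h2 + 0.0006) ** 2)
            / (Om_h2 ** 0.25351 * Ob_h2 ** 0.12807))

h, Om0, Ob0 = 0.7, 0.3, 0.044      # sample h and Om0; Ob0 fixed as in Eq. (Omb)
sum_mnu = 0.06                     # eV
Onu_h2 = sum_mnu / 93.14           # assumed standard mass-to-density conversion

rs = r_s(Om0 * h ** 2, Ob0 * h ** 2, Onu_h2)
print(f"r_s(z_d) = {rs:.1f} Mpc")
```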
Calculations with similar observational data and with the function (\ref{rsA}) were made
in Ref. \cite{Sharov2016} for the models: $\Lambda$CDM, with generalized and modified
Chaplygin gas and with quadratic equation of state (described below in
Sect.~\ref{comparison}). The best-fit value of $\Omega_{b0}$ in Eq.~(\ref{rsA})
\begin{equation}
\Omega_{b0}=0.044
\label{Omb} \end{equation}
was obtained for the $\Lambda$CDM model and appeared to be the same for the three other
models in Ref. \cite{Sharov2016}. One should note that the value (\ref{Omb}) is connected
with the formula (\ref{rsA}). Calculations in Ref. \cite{Sharov2016} with the simpler
fitting formula $r_d=(r_d h)_{fid}\cdot h^{-1}$ for all four models demonstrated similar
estimates of the model parameters, but a very weak dependence on $\Omega_{b0}$. This is
connected with the similarity in the properties of dark matter and baryons. For this
reason we do not consider $\Omega_{b0}$ as a free model parameter and fix it in the form
(\ref{Omb}) for all models in this paper. An additional reason is the necessity to
minimize the number of free model parameters of the considered scenarios.
To take into account all available BAO data \cite{GCH2009, Blake2012, Busca2012, CW2013, Chuang2013, Anderson2014a, Anderson2014b, Oka2014, Font-Ribera2014, Delubac2015, Percival2010, Kazin2010, Beutler2011, Blake2011, Padmanabhan2012, Seo2012, Kazin2014, Ross2015, Hinshaw2013}
for the parameters (\ref{dzAz}), we consider in this paper $N_B=17$ data points for $d_z(z)$
and 7 data points for $A(z)$, presented in Table~\ref{TBAO}.
Measurements of $d_z(z)$ and $A(z)$ from Refs.~\cite{Percival2010, Blake2011} in
Table~\ref{TBAO} are not independent, so the $\chi^2$ function for
the values (\ref{dzAz}) is
\begin{equation}
\chi^2_{BAO}(\theta_j)=(\Delta d)^TC_d^{-1}\Delta d+
(\Delta { A})^TC_A^{-1}\Delta A,
\label{chiB} \end{equation}
where $\Delta d=d_z(z_i)-d_z^{th}$, $\Delta A=A(z_i)-A^{th}$.
The elements of covariance matrices
$C_d^{-1}=||c^d_{ij}||$, $C_A^{-1}=||c^A_{ij}||$ in Eq.~(\ref{chiB}) are
\cite{Percival2010, Blake2011, Hinshaw2013}:
$$\begin{array}{lll}
c^d_{33}=30124,& c^d_{38}=-17227,& c^d_{88}=86977, \\
c^d_{1\!11\!1}=24532.1, & c^d_{1\!11\!4}=-25137.7,& c^d_{1\!11\!5}=12099.1,\\
c^d_{1\!41\!4}=134598.4,& c^d_{1\!41\!5}=-64783.9,& c^d_{1\!51\!5}=128837.6;\\
c^A_{1\!11\!1}=1040.3, & c^A_{1\!11\!4}=-807.5, & c^A_{1\!11\!5}=336.8, \\
c^A_{1\!41\!4}=3720.3, & c^A_{1\!41\!5}=-1551.9, & c^A_{1\!51\!5}=2914.9.
\end{array}$$
Here $c_{ij}=c_{ji}$; the remaining matrix elements are $c_{ij}=0$ for $i\ne j$
and $c_{ii}=1/\sigma_i^2$.
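The two quadratic forms in Eq.~(\ref{chiB}) can be evaluated directly with the inverse covariance matrices above; a minimal Python sketch (function names are ours):

```python
import numpy as np

def chi2_quadratic_form(delta, C_inv):
    """chi^2 = delta^T C^{-1} delta for a residual vector and an inverse covariance."""
    delta = np.asarray(delta, dtype=float)
    return float(delta @ np.asarray(C_inv, dtype=float) @ delta)

def chi2_bao(d_obs, d_th, Cd_inv, A_obs, A_th, CA_inv):
    """Sum of the correlated d_z and A(z) contributions, as in Eq. (chiB)."""
    dd = np.asarray(d_obs, dtype=float) - np.asarray(d_th, dtype=float)
    dA = np.asarray(A_obs, dtype=float) - np.asarray(A_th, dtype=float)
    return chi2_quadratic_form(dd, Cd_inv) + chi2_quadratic_form(dA, CA_inv)
```

For uncorrelated points the matrices are diagonal with $c_{ii}=1/\sigma_i^2$, and a residual of one standard deviation contributes exactly 1 to $\chi^2_{BAO}$.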
\subsection{CMB data}
Cosmological data associated with the cosmic microwave background (CMB) radiation
include parameters at the photon-decoupling epoch $z_*=1089.90 \pm0.30$ \cite{Planck2015}, in particular, the comoving sound horizon $r_s(z_*)$ and the
distance $D_M(z_*)=D_L(z_*)\big/(1+z_*)$ \cite{Aubourg2015, HWW2015}. In this paper we use the CMB parameters in the form \cite{HWW2015}
$$
\mathbf{x}=\big(R,\ell_A,\omega_b\big)=\bigg(\sqrt{\Omega_m}\frac{H_0D_M(z_*)}c,\,\frac{\pi D_M(z_*)}{r_s(z_*)},\,\Omega_bh^2\bigg).
$$
In the corresponding $\chi^2$ function
\begin{equation}
\chi^2_{CMB}=\Delta\mathbf{x}\cdot C_{CMB}^{-1}\big(\Delta\mathbf{x}\big)^{T},
\label{chiCMB}
\end{equation}
we use the covariance matrix $C_{CMB}$ and the distance priors
$$\Delta \mathbf{x}=\mathbf{x}-\big(1.7448,\; 301.46,\; 0.0224\big),$$
from Ref.~\cite{HWW2015}, which were derived from the \cite{Planck2015} data with a free amplitude of the lensing power spectrum.
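The CMB contribution has the same quadratic-form structure; a Python sketch (the $3\times3$ inverse covariance of Ref.~\cite{HWW2015} is not quoted in the text, so it is left as an argument here, and the identity matrix used in the check below is only a placeholder, not the real matrix):

```python
import numpy as np

# distance priors (R, l_A, omega_b) from HWW2015, as quoted in the text
CMB_PRIORS = np.array([1.7448, 301.46, 0.0224])

def chi2_cmb(x, C_inv):
    """chi^2_CMB = (x - priors) C^{-1} (x - priors)^T, as in Eq. (chiCMB).
    C_inv must be the 3x3 inverse covariance matrix from HWW2015."""
    dx = np.asarray(x, dtype=float) - CMB_PRIORS
    return float(dx @ np.asarray(C_inv, dtype=float) @ dx)
```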
\begin{figure*}
\centerline{\includegraphics[scale=0.72,trim=3mm 0mm 2mm -1mm]{Fig1_1.pdf}}
\caption{For the model (\ref{interaction1}) with $w_d={}$const,
in the first and third rows of panels we present the dependence of
$\min\chi^2_\Sigma$ and $\min\chi^2_{tot}$ on $H_0$, $\Omega_{m0}$, $\Omega_k$, $\lambda_m$, $\lambda_d$ and $w_d$,
and also (in the panels below) the corresponding dependence of the parameters of the minimum point.
In the bottom panels the contour plots in the planes of 2 parameters are drawn at
$1\sigma$, $2\sigma$ and $3\sigma$ confidence levels for $\chi^2_\Sigma$ (blue lines)
and $\chi^2_{tot}$ (filled contours).}
\label{F1}
\end{figure*}
\subsection{Results for $w_d ={}$ constant.}
\label{result-1}
We investigated how the $\chi^2$ functions (\ref{total-chi21}) and
(\ref{total-chi2}) depend on the model parameters for the different
variants of the model considered in
Sect.~\ref{analytic}. For the model with $w_d={}$const
(equivalently, solution (\ref{energy-const})),
the constraints on the model parameters are presented in Table~\ref{Estim} and
the corresponding plots are shown in Fig.~\ref{F1}.
The first and third rows of panels in Fig.~\ref{F1} illustrate how the minimum of the sums
(\ref{total-chi21}) ($\min\chi^2_\Sigma$) and (\ref{total-chi2}) ($\min\chi^2_{tot}$)
depend on one chosen parameter: $H_0$, $\Omega_{m0}$, $\lambda_m$, $\lambda_d$,
$\Omega_k$ and $w_d$. Here, for $\chi^2_\Sigma$ we compare two cases: for the model with
6 free parameters (including $\Omega_{k}\ne0$) these dependencies are shown as thick blue
lines, while for the flat case $\Omega_k=0$ the corresponding plots are black dashed
lines. The graphs for $\chi^2_{tot}$ with CMB (red dash-dotted lines) are made for the
general case $\Omega_{k}\ne0$.
In particular, in the top-left panel of Fig.~\ref{F1}, the function $\min\chi^2_{tot}(H_0)$
means $\min\limits_{\Omega_{m0},\Omega_k,w_d,\lambda_m,\lambda_d}\chi^2_{tot}$ (and the
similar minimum for $\chi^2_\Sigma$). The $\chi^2$ absolute minima for these cases are
presented in Table~\ref{Estim} with optimal values and $1\sigma$ errors of model
parameters. For each considered variant of the model (\ref{interaction1}) the upper row of the
corresponding line in Table~\ref{Estim} is obtained from the joint analysis
SNe+$H(z)$+BAO (for $\chi^2_\Sigma$), while the lower row of the line contains the
absolute minimum of $\chi^2_{tot}$ and the estimates from the joint analysis SNe+$H(z)$+BAO+CMB.
For example, for the variant $w_d={}$const with $\Omega_k\ne0$, we estimate the Hubble constant
$ H_0=70.40_{-2.13}^{+2.18}$ km\,(s\,Mpc)${}^{-1}$ for $\chi^2_\Sigma$ and
$H_0=70.18_{-1.97}^{+1.77}$ km\,(s\,Mpc)${}^{-1}$ for
$\chi^2_{tot}$; $1\sigma$ errors are extracted from the
one-dimensional likelihood function ${\cal L}\propto\exp(-\chi^2/2)$.
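Equivalently, the $1\sigma$ bounds for a single parameter can be read off a sampled one-dimensional profile at $\Delta\chi^2=1$; a minimal sketch (the function name is ours, and the grid must be fine enough to resolve the interval):

```python
import numpy as np

def one_sigma_interval(p, chi2):
    """Given a sampled 1D profile min-chi^2(p), return (p_best, p_lo, p_hi),
    where the bounds satisfy Delta chi^2 <= 1 (the 1 sigma condition for
    one parameter)."""
    p = np.asarray(p, dtype=float)
    chi2 = np.asarray(chi2, dtype=float)
    i = int(np.argmin(chi2))
    inside = p[chi2 <= chi2[i] + 1.0]  # points within Delta chi^2 = 1
    return p[i], inside.min(), inside.max()
```

For a parabolic profile $\chi^2=(p-p_0)^2$ this recovers the interval $p_0\pm1$, as expected.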
A similar estimate for $\Omega_{m0}$ is determined by the functions
$\min\chi^2_j(\Omega_{m0})=\min\limits_{H_0,\Omega_k,w_d,\lambda_m,\lambda_d}\chi^2_j$,
if $\Omega_k\ne0$; $j=\Sigma,\,{tot}$. These graphs for $\chi^2_\Sigma$ in the flat and
non-flat cases are rather close and have a distinct minimum with a small $1\sigma$ deviation
$\Delta\Omega_{m0}\simeq0.013$. In the case of $\chi^2_{tot}$ the minimum is the same, but
with a smaller $\Delta\Omega_{m0}\simeq0.008$. This is connected with the factor
$\sqrt{\Omega_{m0}}$ in the values $A(z)$ in (\ref{dzAz}) and $R$ in (\ref{chiCMB}), so the
contributions of $\chi^2_{BAO}$ and $\chi^2_{CMB}$ to the sum (\ref{total-chi2}) are very
sensitive to the value of $\Omega_{m0}$.
The dependence of $\min\chi^2_{tot}$ on the curvature $\Omega_k$ in the top-right panel of
Fig.~\ref{F1} is strongly asymmetric, unlike the case of $\min\chi^2_\Sigma$. In these
cases we have different $1\sigma$ intervals for $\Omega_k$ (see Table~\ref{Estim}),
but both include the value $\Omega_k\simeq0$. Some asymmetry may be seen in the plots of
$\min\chi^2(\lambda_m)$, $\min\chi^2(\lambda_d)$ and $\min\chi^2(w_d)$ in the third
row of panels. These calculations result in the estimates of $\lambda_m$, $\lambda_d$
and $w_d$ in Table~\ref{Estim}.
\begin{table*}
\begin{tabular}{|l|l|c|c|c|c|c|c|c|} \hline
Variant & Data$\!$ &$\min\chi^2$& $H_0$& $\Omega_{m0}$& $\lambda_m$& $\lambda_d$& $w_{d0}$ & 6th parameter \\
\hline %
$w_d={}$const & $\chi^2_\Sigma$ & 576.29 & $70.40_{-2.13}^{+2.18}$ & $0.285\pm0.013$ & $0.115_{-0.265}^{+0.217}$ &
$-0.093_{-0.259}^{+0.230}$ & $-0.913_{-0.214}^{+0.132}$ & $\Omega_k=-0.124_{-0.190}^{+0.213}$\rule{0pt}{1.2em} \\
& $\chi^2_{tot}$ & 576.45 & $70.18_{-1.97}^{+1.77}$ & $0.285\pm0.008$ & $0.115_{-0.193}^{+0.208}$ &
$-0.097_{-0.240}^{+0.212}$ & $-0.955_{-0.113}^{+0.070}$ & $\Omega_k=-0.064_{-0.019}^{+0.102}$\rule{0pt}{1.2em} \\
\hline
$w_d={}$const & $\chi^2_\Sigma$ & 576.64 & $69.68_{-1.75}^{+1.80}$ & $0.287\pm0.013$ & $0.090_{-0.260}^{+0.213}$ & $-0.059_{-0.280}^{+0.233}$ &
$-0.994_{-0.157}^{+0.123}$ & $-$\rule{0pt}{1.2em} \\
$\;$\&\,$\Omega_k=0$ & $\chi^2_{tot}$ & 576.98 & $69.31_{-1.52}^{+1.67}$ & $0.285\pm0.008$ & $0.039_{-0.104}^{+0.193}$ & $-0.024_{-0.184}^{+0.168}$ &
$-0.987_{-0.074}^{+0.096}$ & $-$\rule{0pt}{1.2em} \\
\hline
Ansatz I & $\chi^2_\Sigma$& 576.29& $69.55_{-1.73}^{+1.80}$ & $0.288\pm0.013$ & $0.173_{-0.32}^{+0.155}$ & $-0.277_{-0.309}^{+0.407}$ &
$-0.94_{-0.157}^{+0.137}$\rule{0pt}{1.2em} & $\beta=-0.25_{-0.54}^{+0.77}$ \\
$\;(\alpha=0)$ & $\chi^2_{tot}$ & 576.93& $69.86_{-1.68}^{+1.73}$ & $0.286\pm0.008$ & $0.245_{-0.34}^{+0.42}$ & $0.120_{-0.114}^{+0.136}$ &
$-1.092_{-0.125}^{+0.148}$\rule{0pt}{1.2em} & $\beta=0.34_{-0.475}^{+0.12}$ \\
\hline
Ansatz II & $\chi^2_\Sigma$ & 576.57& $69.66_{-1.75}^{+1.80}$ & $0.287\pm0.013$ & $0.123_{-0.175}^{+0.18}$ & $-0.112_{-0.26}^{+0.285}$ &
$-0.988_{-0.135}^{+0.11}$ & $\alpha=0.073_{-0.055}^{+0.062}$\rule{0pt}{1.2em} \\
$\;(\beta=0)$ & $\chi^2_{tot}$ & 576.95& $70.22_{-1.72}^{+1.69}$ & $0.288_{-0.008}^{+0.007}$ & $-0.035_{-0.08}^{+0.047}$ & $0.075_{-0.097}^{+0.125}$ &
$-0.980_{-0.145}^{+0.13}$ & $\alpha=-0.032_{-0.064}^{+0.068}$\rule{0pt}{1.2em} \\
\hline
Ansatz III & $\chi^2_\Sigma$& 576.64& $69.68_{-1.74}^{+1.80}$ & $0.287\pm0.013$ & $0.098_{-0.24}^{+0.475}$ & $-0.060_{-0.266}^{+0.74}$ &
$-0.996_{-0.155}^{+0.12}$ & $\beta=0.997_{-0.115}^{+0.243}$\rule{0pt}{1.2em} \\
$\;(\alpha=1)$ & $\chi^2_{tot}$ & 577.33& $68.82_{-1.35}^{+1.48}$ & $0.290\pm0.007$ & $0.022_{-0.048}^{+0.093}$ & $-0.548_{-0.42}^{+0.58}$ &
$-0.96_{-0.142}^{+0.095}$ & $\beta=0.235_{-0.10}^{+0.216}$\rule{0pt}{1.2em} \\
\hline
Ansatz IV & $\chi^2_\Sigma$& 576.07& $69.23_{-1.86}^{+1.90}$ & $0.292_{-0.015}^{+0.016}$ & $0.237_{-0.250}^{+0.076}$ & $-0.452_{-0.43}^{+0.73}$ &
$-0.786_{-0.31}^{+0.356}$ & $w_1=-3.14_{-4.72}^{+4.30}$\rule{0pt}{1.2em} \\
$\;$Eq. (\ref{Ans4}) & $\chi^2_{tot}$ & 576.90& $69.14_{-1.62}^{+1.74}$ & $0.285\pm0.008$ & $0.016_{-0.038}^{+0.044}$ & $-0.054_{-0.110}^{+0.096}$ &
$-0.925_{-0.215}^{+0.23}$ & $w_1=-0.68_{-1.14}^{+0.90}$\rule{0pt}{1.2em} \\
\hline
Ansatz V & $\chi^2_\Sigma$& 575.97& $69.32_{-1.74}^{+1.84}$ & $0.292\pm0.015$ & $0.220_{-0.245}^{+0.062}$ & $-0.497_{-0.386}^{+0.64}$ &
$-0.810_{-0.27}^{+0.338}$ & $w_1=-2.68_{-4.03}^{+3.75}$\rule{0pt}{1.2em} \\
$\;$Eq.(\ref{Ans5}) & $\chi^2_{tot}$ &576.92& $69.18_{-1.65}^{+1.70}$ & $0.292\pm0.008$ & $0.023_{-0.042}^{+0.030}$ & $-0.092_{-0.120}^{+0.084}$ &
$-0.822_{-0.255}^{+0.192}$ & $w_1=-0.43_{-1.57}^{+0.65}$\rule{0pt}{1.2em} \\
\hline
\end{tabular}
\caption{{Variants of the model (\ref{interaction1}) and $1\sigma$ estimates of the model parameters
from the joint analyses
SNe+$H(z)$+BAO (the upper row of each line) and SNe+$H(z)$+BAO+CMB (the lower row of each line).}}
\label{Estim}
\end{table*}
The panels in the second and fourth rows of Fig.~\ref{F1} correspond to the panels above them
and present the coordinates of the minimum points (the optimal values of the parameters)
as functions of $H_0,\dots,\Omega_k$ for the function $\chi^2_\Sigma$ (if $\Omega_{k}\ne0$) as
thick lines and for $\chi^2_{tot}$ as dots. One can see that the optimal values of
$\lambda_m$ and $\lambda_d$ have a distinct negative correlation (observed explicitly in
the middle bottom panel), while the optimal values of $\Omega_{m0}$ and $h=H_0/100$ depend on
the other parameters rather weakly.
In the 3 bottom panels of Fig.~\ref{F1} we present the
$1\sigma$ (68.27\%), $2\sigma$ (95.45\%) and $3\sigma$ (99.73\%) contour plots for the
functions $\min\chi^2_j(p_1,p_2)$ in the planes of two parameters. The minimum is
calculated over the remaining 4 parameters. The mentioned level lines are shown for
the $\chi^2_\Sigma$ function (blue curves) and for $\chi^2_{tot}$ as filled contours.
The circles and stars in the plots mark the minimum points obtained for $\chi^2_{tot}$ and
$\chi^2_\Sigma$, respectively.
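The confidence levels of such two-parameter contours correspond to fixed $\Delta\chi^2$ thresholds, which follow from the $\chi^2$ distribution with 2 degrees of freedom (its CDF is $1-e^{-x/2}$); a short Python sketch of the standard values:

```python
import math

def delta_chi2_2dof(p):
    """Delta chi^2 threshold at confidence level p for 2 jointly estimated
    parameters; inverts the 2-d.o.f. chi^2 CDF, 1 - exp(-x/2)."""
    return -2.0 * math.log(1.0 - p)

# thresholds for the 1, 2 and 3 sigma contours
levels = {s: delta_chi2_2dof(p)
          for s, p in [("1sigma", 0.6827), ("2sigma", 0.9545), ("3sigma", 0.9973)]}
```

This reproduces the familiar two-parameter thresholds $\Delta\chi^2\simeq2.30$, $6.18$ and $11.83$.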
\subsection{Results for $w_d \neq $ constant. }
\label{result-2}
For the variable EoS of DE presented in Sect.~\ref{variable},
we summarize the observational constraints in Table~\ref{Estim}
for both SNe+$H(z)$+BAO and SNe+$H(z)$+BAO+CMB. We consider
several possibilities for a variable $w_d$.
The first one is the very general form given in Eq.~(\ref{GA}),
which provides the three distinct possibilities of
Eqs.~(\ref{sol1-GA}), (\ref{sol2-GA}) and (\ref{Ans3}); additionally,
we consider the CPL (\ref{Ans4}) and linear
(\ref{Ans5}) parametrizations.
The general ansatz (\ref{GA}) with its solution
(\ref{solution-GA}) gives not only new possibilities, but also additional problems of
the following two types: (i) two extra model parameters (3 parameters $w_{d0}$, $\alpha$
and $\beta$ instead of one $w_d$); (ii) singularities in the past, which appear in
different scenarios of the class (\ref{solution-GA}).
These singularities are connected with pathological behavior of the densities $\rho_{dm}$, $\rho_d$ or
their sum $\rho_T$ at a moment $t_s$ in the past, when the scale factor remains finite
and nonzero $\big(a(t_s)\ne0\big)$; they may be classified into the following three
types:
\begin{eqnarray}
a) &\;&\lim\limits_{t\to t_s}\rho_T=\infty;\nonumber\\
b) &\;&\rho_{dm}<0, \mbox{ \ if \ } t< t_s; \label{singul}\\
c) &\;&\rho_d<0, \mbox{\, \ if \ } t< t_s.\nonumber
\end{eqnarray}
These cases resemble the classification of singularities in Refs.~\cite{NOT2005, NO2010,
Bamba-et-al-2012}.
For singularities (\ref{singul}) of the type (c) the DE pressure $p_d(t_s)$ remains finite
at the moment $t_s$, whereas $\rho_d(t_s)=0$; they may also be divided into class (c1)
with $p_d(t_s)=0$ and finite $w_d=p_d/\rho_d$ and class (c2) with $p_d(t_s)\ne0$, where
$w_d$ tends to infinity as $t\to t_s$. Possible singularities (\ref{singul}) compel us
to be especially careful when we numerically calculate the parameters of effective
scenarios in this model. We should exclude domains in the parameter space with singular
behavior of the physical densities, irrespective of the type (\ref{singul}). An example of a
singular solution with a type (c1) singularity (\ref{singul}) is shown
in Fig.~\ref{Fsing}.
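In numerical scans, the three types (\ref{singul}) can be flagged from the densities sampled backward in time; the following sketch is illustrative only (the divergence threshold `rho_big` is a numerical proxy for type (a), not part of the classification itself):

```python
import numpy as np

def classify_singularity(rho_dm, rho_d, rho_big=1e6):
    """Classify the past behavior of densities sampled backward in time
    (index 0 = today): 'a' if the total density blows up, 'b' if rho_dm
    turns negative first, 'c' if rho_d does, or None for a regular solution."""
    rho_dm = np.asarray(rho_dm, dtype=float)
    rho_d = np.asarray(rho_d, dtype=float)
    for dm, d in zip(rho_dm, rho_d):
        if dm + d > rho_big:   # type (a): rho_T -> infinity
            return "a"
        if dm < 0.0:           # type (b): rho_dm < 0
            return "b"
        if d < 0.0:            # type (c): rho_d < 0
            return "c"
    return None
```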
\begin{figure}
\centerline{\includegraphics[width=0.45\textwidth]{Fsingul.pdf}}
\caption{ Evolution of the
scale factor $a(\tau)$ and densities $\Omega_m (\tau)$, $\Omega_d (\tau)$ for Ansatz I
(\ref{sol1-GA}) is shown for the regular solution (top) with optimal parameters from
Table~\ref{Estim} and for the singular solution (bottom) with type (c1) singularity
(\ref{singul}) (here $\lambda_m= -0.01$, other parameters are the same).}
\label{Fsing}
\end{figure}
In Fig.~\ref{Fsing} we compare the regular solution for Ansatz I (\ref{sol1-GA}) (the
top panel) with the singular solution in the bottom panel. One can see how the scale
factor $a$ (blue solid lines) and densities
$\Omega_m= \rho_m/\rho_{cr}$ (magenta dash-dotted lines) and $\Omega_d(\tau)= \rho_d/\rho_{cr}$ (green dashed lines)
depend on dimensionless time $\tau=H_0t$. Here $\rho_{cr}= 3 H_0^2/ 8 \pi G$ is the
critical density of the universe. The model parameters are taken at their optimal
values from Table~\ref{Estim} for $\chi^2_\Sigma$; for the type (c1) singularity
(the bottom panel) only one value differs: $\lambda_m=-0.01$. In this
singular case the DE density $\Omega_d$ becomes negative at $\tau<\tau_s$.
We mentioned above that the number $N_p$ of model parameters for scenarios with
variable $w_d$ satisfying Eq.~(\ref{GA}) is too large; this is a disadvantage in the
competition with other models according to information criteria \cite{Akaike1974, SKK2006, SHL2012}.
So we have to exclude non-flat scenarios and fix $\Omega_k=0$ in this section. But
even in the flat case we have $N_p=7$ parameters: $H_0$, $\Omega_{m0}$, $\lambda_m$,
$\lambda_d$, $w_{d0}$, $\alpha$, $\beta$.
An attempt to exclude $\lambda_m$ turned out to be unsuccessful: although in the case
$\lambda_m=0$ we can avoid some singularities (\ref{singul}) and
instabilities in perturbations \cite{HWA2008, VMM2008}, the best value of the
function (\ref{total-chi21}), $\min\chi^2_\Sigma\simeq576.74$, is worse than in the case
$w_d={}$const (\ref{energy-const}) (with $\lambda_m\ne0$). So we have to fix other
parameters. First, we consider the case $\alpha=0$ (\ref{sol1-GA}), denoted in
Sect.~\ref{variable} as Ansatz I.
For Ansatz I ($\alpha=0$) we have no acceptable analytic solution of
Eq.~(\ref{diffeqn}), so we investigate numerical solutions of the system
(\ref{friedmann1}), (\ref{conservation1}), (\ref{conservation2}), (\ref{interaction1}),
(\ref{sol1-GA}) with natural initial conditions $\rho_m\big|_{t=t_0}=\rho_{m0}$,
$\rho_d\big|_{t=t_0}=\rho_{d0}$ at the present day and integration ``into the past''.
For Ansatz I we reach the best values $\min\chi^2_\Sigma\simeq576.29$ (this
solution is shown in the top panel of Fig.~\ref{Fsing}) and
$\min\chi^2_{tot}\simeq576.93$. The corresponding values of the model parameters are
tabulated in the ``Ansatz I'' line of Table~\ref{Estim}.
We analyze the flat case of Ansatz I (\ref{sol1-GA}) in Fig.~\ref{F3}. For one
dimensional distributions and contour plots we use notations of Fig.~\ref{F1}: the red
dashed lines and filled contours for $\min\chi^2_{tot}$ and the blue lines for
$\min\chi^2_\Sigma$. In 3 panels of Fig.~\ref{F3} (upper-left, upper-middle and lower-left) we compare this variant of the model with Ansatz II
(\ref{sol2-GA}) (the green lines). One should note that for both cases the optimal
values of parameters (in particular, for $\lambda_m$, $\lambda_d$) do not coincide for
$\chi^2_{tot}$ and $\chi^2_\Sigma$. In other words, when we include the CMB contribution
$\chi^2_{CMB}$ (\ref{chiCMB}) into the function
$\chi^2_{tot}=\chi^2_{\Sigma}+\chi^2_{CMB}$, the resulting minimum point for
$\chi^2_{tot}$ appears to be shifted. This effect can be seen in the bottom-right panel
of Fig.~\ref{F3}, where for the contour plots of Ansatz I the circle and star respectively mark
the minimum points for $\chi^2_{tot}$ and $\chi^2_\Sigma$ (see also the one
dimensional distributions for $\min\chi^2(\lambda_d)$ and $\min\chi^2(\beta)$).
As a consequence of this behavior, the absolute minima of $\chi^2_{tot}$ for
Ansatz I and Ansatz II in Table~\ref{Estim} are only a bit better than the value $576.98$
for the case $w_d={}$const with $\Omega_k=0$. Note that both variants reduce to this
case if we take $\beta=0$ and $\alpha=0$ in Eqs.~(\ref{sol1-GA}) and (\ref{sol2-GA}), respectively.
\begin{figure*}
\centerline{\includegraphics[width=0.7\textwidth]{Fig3_1column.pdf}}
\caption{For Ansatz I (\ref{sol1-GA}) and Ansatz II (\ref{sol2-GA}) with $\Omega_{k}=0$
we present one dimensional distributions of $\min\chi^2_\Sigma$ and $\min\chi^2_{tot}$.
For Ansatz I we also draw two dimensional contour plots with notations from
Fig.~\ref{F1}: the blue lines for $\chi^2_\Sigma$, but the filled contours and red
dashed lines for $\chi^2_{tot}$. We note that the circles and stars in the plots mark minimum points obtained respectively for $\chi^2_{tot}$ and $\chi^2_\Sigma$.}
\label{F3}
\end{figure*}
\begin{figure}
\centerline{\includegraphics[width=0.5\textwidth]{FSNHQ.pdf}}
\caption{For different variants of the model with optimal
parameters from Table~\ref{Estim} we show the plots of $D_L(z)$ (upper panel)
describing the SNe data \cite{Suzuki2012}, of $H(z)$ with the data from Table~\ref{H-data} (middle panel), and of the $Q(z)$ dependence (lower panel). The labels in the lower panel apply to the
other two panels as well. The plots for the different variants of the model in the upper and middle panels are almost indistinguishable from each other; in the lower panel the plots of $Q(z)/Q_0$ are distinguishable at large redshifts, but at low redshifts they are also indistinguishable from each other. }
\label{FSNHQ}
\end{figure}
\begin{figure*}
\includegraphics[width=0.95\textwidth]{Fig5.pdf}
\caption{Dependence of $\min\chi^2_\Sigma$ on $H_0$, $\lambda_m$, $\lambda_d$,
$\Omega_{m0}$, $w_{d0}$, $w_1$ for Ansatz IV (\ref{Ans4}) (red and magenta lines) and
for Ansatz V (\ref{Ans5}) (green and aquamarine lines). The contour plots with
$1\sigma$, $2\sigma$ and $3\sigma$ confidence levels in the bottom
panels are shown for Ansatz IV in the notations of Fig.~\ref{F1}, where the red lines
stand for $\chi^2_\Sigma$ and the filled contours for $\chi^2_{tot}$. The circles and
stars in the contour plots mark minimum points obtained respectively for $\chi^2_{tot}$ and
$\chi^2_\Sigma$.}
\label{F4}
\end{figure*}
When we compare the variants of the model (\ref{GA}), we keep in mind
that Ansatz I ($\alpha=0$) and Ansatz II (\ref{sol2-GA})
($\beta=0$) have the same number of model
parameters, $N_p=6$, while for the case of $w_d={}$const with $\Omega_k=0$ this value is
$N_p=5$. We try to minimize $N_p$; hence, here and below, for all variants of the model we
consider only the flat case ($\Omega_k=0$). In Fig.~\ref{F3} we compare Ansatz I and Ansatz II and see that Ansatz~I has an essential advantage in the absolute minimum of $\chi^2_\Sigma$, but only a small one for $\chi^2_{tot}$.
The dependence of $\min\chi^2_\Sigma$ and $\min\chi^2_{tot}$ on $\Omega_{m0}$ in the
bottom-left panel of Fig.~\ref{F3} is similar for both presented variants of the model,
but the upper curves for $\chi^2_{tot}$ are narrower. As usual, these minima are
taken over all remaining parameters. In the other panels we see a different dependence of
these minima on $\lambda_m$, $\lambda_d$ and $\beta$. In some cases this behavior is
connected with the various singularities (\ref{singul}), which can appear in certain domains
of the parameter space.
In Fig.~\ref{FSNHQ} we demonstrate how the most successful variants of the model
with parameters from Table~\ref{Estim} describe the SNe data \cite{Suzuki2012} with the functions $D_L(z)$ (the upper panel) and the $H(z)$ data from
Table~\ref{H-data} (the middle panel); in the bottom panel we draw the corresponding
plots of the interaction function (\ref{interaction1}), $Q(z)= 3H(\lambda_m \rho_{dm}
+\lambda_d \rho_d)$, in its dimensionless form $Q(z)/Q_0$, where
$Q_0=H_0\rho_{cr}=3 H_0^3/(8 \pi G)$.
The $H(z)$ data in the middle panel are marked as cyan or magenta stars depending on
whether they are obtained from differential ages or from BAO data. From
Fig.~\ref{FSNHQ} we see that the plots of $D_L(z)$ and $H(z)$ practically
coincide for all considered variants.
For the five most successful variants of the model, the dotted lines
correspond to the optimal parameters for
$\chi^2_{tot}$ (SNe+$H(z)$+BAO+CMB); the solid and dashed lines
describe the minimization of $\chi^2_\Sigma$ (SNe+$H(z)$+BAO). We observe that
the observational data SNe+$H(z)$+BAO always suggest that there is a transition
of $Q$ at late times from its positive values to negative values,
and the transition occurs around $z\simeq 0.4$. On the other hand, for
the observational data SNe+$H(z)$+BAO+CMB, except for Ansatz I
(in this case $Q$ remains positive throughout the evolution of the universe),
all other variants keep the same behaviour as we observe for the data
SNe+$H(z)$+BAO. That means $Q$ changes its sign from positive to negative values
around the same redshift.
Thus, we find that almost all variants allow the flow of energy
from CDM to DE at late times (precisely for $z \lesssim 0.4$), while for $z \gtrsim 0.4$ the
energy flow takes place from DE to CDM.
Moreover, for $w_d ={}$const (both for $\Omega_k =0$ and $\Omega_k \neq 0$), as seen from Fig.~\ref{FSNHQ}, the quantity $Q/Q_0$ is very close to zero; that means a very small interaction
is favored in this case. This is an interesting result, because some other interaction models also conclude that the interaction in the dark sector is very small for constant $w_d$, see \cite{NPS2016, KN2016, XW2016}.
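The redshift of the sign change of $Q$ can be estimated with a rough sketch that takes the optimal $\lambda_m=0.115$, $\lambda_d=-0.093$ of the $w_d={}$const variant from Table~\ref{Estim} and approximates the densities by their non-interacting scalings (dust-like CDM and baryons plus constant DE; the fractions $\Omega_{dm0}=0.24$, $\Omega_{d0}=0.715$ are illustrative assumptions, and the back-reaction of the interaction on the scalings is neglected):

```python
import numpy as np

def Q_over_Q0(z, lam_m, lam_d, Om_dm0=0.24, Od0=0.715):
    """Q/Q0 = 3 (H/H0) (lam_m rho_dm + lam_d rho_d) / rho_cr, with rho_dm
    approximated by dust scaling and rho_d by a constant (rough assumption)."""
    r_dm = Om_dm0 * (1.0 + z)**3
    r_b = (1.0 - Om_dm0 - Od0) * (1.0 + z)**3   # baryons treated as dust
    H_over_H0 = np.sqrt(r_dm + r_b + Od0)
    return 3.0 * H_over_H0 * (lam_m * r_dm + lam_d * Od0)

def sign_change_redshift(lam_m, lam_d, z_lo=0.0, z_hi=2.0):
    """Bisection for the redshift where Q changes sign (root must be bracketed)."""
    for _ in range(60):
        z_mid = 0.5 * (z_lo + z_hi)
        if Q_over_Q0(z_lo, lam_m, lam_d) * Q_over_Q0(z_mid, lam_m, lam_d) <= 0.0:
            z_hi = z_mid
        else:
            z_lo = z_mid
    return 0.5 * (z_lo + z_hi)
```

Even this crude approximation places the sign change near $z\simeq0.3$--$0.4$, consistent with the behavior described above.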
For Ansatz III (\ref{Ans3}) the best values of $\min\chi^2_\Sigma$ and $\min\chi^2_{tot}$
(see Table~\ref{Estim}) are worse than the corresponding minima for the other
variants of the model with the same $N_p=6$. The main drawback of Ansatz III is
that its solutions behave badly at the optimal parameters: they are close to singular solutions
of types (a) and (c) (\ref{singul}). So in our calculations of the values in
Table~\ref{Estim} we had to bypass singular domains in the parameter space.
We also investigate the same ansatz (Ansatz III)
with the choice $\lambda_m=0$, considered in Ref.~\cite{PBC2015}, since the background is analytically solved in this case,
see Eq.~(\ref{variable-total-Energy}).
For this model our analysis gives $\min\chi^2_\Sigma\simeq576.81$ and
$\min\chi^2_{tot}\simeq577.46$, so this variant with analytic solutions
turns out to be unsuccessful in comparison with the case
(\ref{energy-const}): $w_d={}$const, $\Omega_k=0$ (both variants have $N_p=5$, so they
are comparable). For these reasons we do not present the constraints for
$\lambda_m=0$, $\alpha=1$ in a separate line of Table~\ref{Estim}.
Thus, Ansatz III with either choice, $\lambda_m \neq 0$ or $\lambda_m = 0$, is not
favored by the observational data, so we do not present
any graphical analysis for this ansatz.
The next variant (\ref{Ans4}) (Ansatz IV) behaves better for $\chi^2_\Sigma$.
It has only the type (c1) singularities in the domain
$\lambda_m<0$. This domain is far from the optimal values of the model parameters for
$\chi^2_\Sigma$ with the smallest $\min\chi^2_\Sigma\simeq576.07$ (see Table~\ref{Estim}).
However, the minimum of $\chi^2_{tot}$ is achieved in
the $\lambda_m<0$ domain, so the minimal value $576.90$ of
$\chi^2_{tot}=\chi^2_{\Sigma}+\chi^2_{CMB}$ appears to be rather large.
This behavior is seen in the top-left panel of Fig.~\ref{F4}, where the red dash-dotted
line for $\chi^2_\Sigma$ and the dashed magenta line for $\chi^2_{tot}$ show, how the
minima $\min\limits_{\Omega_{m0},\lambda_d,H_0,w_{d0},w_1}\chi^2$ depend on $\lambda_m$.
For the last variant (\ref{Ans5}) (Ansatz V),
we achieve the absolute minimum
for $\chi^2_\Sigma$ among all considered models in Tables~\ref{Estim} and \ref{Compar}:
$\min\chi^2_\Sigma\simeq575.97$.
However, if we add the CMB data, the minimum of $\chi^2_{tot}$ becomes rather large, because it
is achieved near the $\lambda_m<0$ domain. In this domain the solutions have the type (c)
singularity (\ref{singul}), so it is practically forbidden.
In Fig.~\ref{F4} one can see the one-dimensional distributions of $\min\chi^2_\Sigma$ and
$\min\chi^2_{tot}$ for Ansatz IV (\ref{Ans4}) and Ansatz V (\ref{Ans5}). In the bottom
panels we draw two-dimensional contour plots for Ansatz IV with filled contours for
$\chi^2_{tot}$ and red lines for $\chi^2_\Sigma$. Here we use the notations of
Figs.~\ref{F1} and \ref{F3}. In particular, the circles and stars demonstrate the difference
between the minimum points for $\chi^2_{tot}$ and $\chi^2_\Sigma$. A striking feature
is the behaviour of the $w_1$ parameter in Ansatz IV (\ref{Ans4}) and Ansatz V (\ref{Ans5}): from Table~\ref{Estim} one can see that the value of $w_1$ for both
ansatze significantly changes after the inclusion of the CMB data.
\section{Interacting and non-interacting models: A statistical comparison}
\label{comparison}
In this section we compare our interacting dark energy scenario with some other
existing non-interacting cosmological models purely on statistical grounds. In Table~\ref{Compar} we demonstrate how these models
describe the same observational data for SNe Ia \cite{Suzuki2012}, $H(z)$ and BAO from
Tables~\ref{H-data} and \ref{TBAO}. The calculations were made in accordance with the
procedure described in Sect.~\ref{data-analysis} and in Ref.~\cite{Sharov2016}. Below
we briefly describe the models in the corresponding subsections.
\subsection{Modified Chaplygin gas and its family}
The equation of state for modified Chaplygin gas (MCG) with pressure $p_g$ and energy
density $\rho_g$ is \cite{DBC2004, Benaoum2002}
\begin{equation}
p_g= A \rho_g- \frac{B}{\rho_g^{\alpha}}.
\label{MCG} \end{equation}
The modified Chaplygin gas is a subsequent generalization of the
Chaplygin gas (EoS: $p_g= -B/\rho_g$) and the generalized Chaplygin gas (GCG) with EoS
\cite{Kamenshchik2001}:
\begin{equation}
p_g= -B/\rho_g^{\alpha}.
\label{GCG} \end{equation}
In these models GCG or MCG acts as a unified candidate for dark matter and dark energy.
In Table~\ref{Compar} for the MCG and GCG models we cite the results of calculations
from Ref.~\cite{Sharov2016}, where these scenarios were explored as two-component models
with usual dust-like baryonic matter component $\rho_b$ and the Chaplygin gas component
$\rho_g$: $\rho=\rho_b+\rho_g$. In this case the Friedmann
equation
(\ref{friedmann1}) is
\begin{eqnarray}
&& H^2/H_0^2= \Omega_{b0} a^{-3}+ \Omega_ka^{-2}\nonumber \\
&+&(1-\Omega_{b0}-\Omega_k)\Big[B_s+(1-B_s)\,a^{-3(1+A)(1+\alpha)}\Big]^{1/(1+\alpha)}.
\nonumber\end{eqnarray}
Here the dimensionless parameter $B_s=B\rho_{g0}^{-1-\alpha}/(1+A)$ is used
instead of $B$, and $\rho_{g0}=\rho_g(t_0)$.
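A direct Python transcription of this Friedmann equation (a sketch with parameter values chosen only for illustration; at $a=1$ it must return $H^2/H_0^2=1$, which serves as a consistency check):

```python
def E2_mcg(a, Ob0, Ok, A, Bs, alpha):
    """H^2/H0^2 for the two-component model: dust-like baryons plus modified
    Chaplygin gas, as a function of the scale factor a (a = 1 today)."""
    gas = (Bs + (1.0 - Bs) * a**(-3.0 * (1.0 + A) * (1.0 + alpha)))**(1.0 / (1.0 + alpha))
    return Ob0 * a**(-3) + Ok * a**(-2) + (1.0 - Ob0 - Ok) * gas
```

Setting $\alpha=0$ and $B_s\to$ the cosmological-constant fraction recovers the familiar interpolation between dust at early times and a constant density at late times.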
For the MCG, GCG and other cosmological models the estimates in Table~\ref{Compar}
were made for the value $\Omega_{b0}=0.044$
(Eq.~(\ref{Omb})). It is shown in
Ref.~\cite{Sharov2016} that this value is optimal for the $\Lambda$CDM, MCG, GCG and the
model with EoS (\ref{quadr}), if we use the fitting formula (\ref{rsA}). These
estimates were supported by the simpler fitting formula $r_d=(r_d h)_{fid}\cdot
h^{-1}$, but in the latter case the mentioned models are not sensitive to the value of
$\Omega_{b0}$ in the range $0\le\Omega_{b0}\le0.15$, because of the similarity in the properties
of dark matter and baryonic matter. For this reason we do not consider $\Omega_{b0}$
as a free model parameter and fix it to the value (\ref{Omb}) for all models in
Table~\ref{Compar}.
One can see
in Table~\ref{Compar} that the MCG model demonstrates the value
$\min\chi^2_\Sigma=576.45$, which is a bit better than for the $w_d={}$const model
(\ref{energy-const}). The MCG model also has $N_p=5$ parameters: $H_0$, $\Omega_k$, $A$,
$B_s$, $\alpha$. For the GCG model the minimum of $\chi^2_\Sigma$ is worse; however, in
this case we have $N_p=4$ parameters (because $A=0$), so the GCG model gains an advantage
from the information criteria.
\subsection{Quadratic equation of state}
We consider a cosmic substratum with a quadratic equation of state, which exhibits
unified behavior similar to that of the MCG. Moreover, this quadratic EoS asymptotically becomes of de
Sitter type. The EoS \cite{AB2006, Sharov2016}
$$
p=\tilde{p}_0+ w_0 \rho_g+ \tilde{\beta} \rho_g^2
$$
includes the first three terms of the Taylor series expansion of an arbitrary function
$p= f (\rho_g)$, where $\tilde{p}_0$, $w_0$, $\tilde{\beta}$ are free parameters. It is
convenient to rewrite this EoS in the form \cite{Sharov2016}
\begin{equation}
p=p_0 \rho_{cr}+ w_0 \rho_g+ \beta\rho_g^2/\rho_{cr}, \label{quadr}
\end{equation}
where $p_0= \tilde{p}_0/\rho_{cr}$, $\beta= \tilde{\beta}\rho_{cr}$ are the
dimensionless parameters and $\rho_{cr}= 3 H_0^2/ 8 \pi G$.
\begin{table*}
{\begin{tabular}{l|c|c|c|c|c|c|c|c} \hline
Model &$ \min\chi^2_\Sigma $ &$ \min\chi^2_{tot} $ &$\dfrac{\min\chi^2_{tot}}{d.o.f} $ & $N_p$ & $AIC_\Sigma$ & $AIC_{tot}$
& $\Delta\,AIC_ {tot}$ & $\Delta\,BIC_ {tot}$\\ \hline
$w_d={}$const (non-flat)$\!\!\!$
& 576.29 & 576.45 & 0.9107 & 6 & 588.29 & 588.45 & 2.46 & 15.84 \\
\hline
$w_d={}$const (flat)
& 576.44 & 576.98 & 0.9101 & 5 & 586.44 & 586.98 & 0.99 & 9.91 \\
\hline
Ansatz I [Eq. (\ref{sol1-GA})]
& 576.29 & 576.93 & 0.9114 & 6 & 588.29 & 588.93 & 2.94 & 16.32 \\
\hline
Ansatz IV [Eq. (\ref{Ans4})]
& 576.07 & 576.90 & 0.9114 & 6 & 588.07 & 588.90 & 2.91 & 16.29 \\
\hline
Ansatz V [Eq. (\ref{Ans5})]
& 575.97 & 576.92 & 0.9114 & 6 & 587.97 & 588.92 & 2.93 & 16.31 \\
\hline
$ \Lambda$CDM$ $
& 578.56 & 579.99 & 0.9119 & 3 & 584.56 & 585.99 & 0 & 0 \\
\hline
GCG [Eq. (\ref{GCG})]
& 577.01 & 578.26 & 0.9106 & 4 & 585.01 & 586.26 & 0.27 & 4.73 \\
\hline
MCG [Eq. (\ref{MCG})]
& 576.45 & 577.62 & 0.9111 & 5 & 586.45 & 587.62 & 1.63 & 10.55 \\
\hline
Quadratic [Eq. (\ref{quadr})]
& 576.03 & 577.46 & 0.9108 & 5 & 586.03 & 587.46 & 1.47 & 10.39 \\
\hline
CPL [Eq. (\ref{CPL})]
& 576.57 & 577.83 & 0.9114 & 5 & 586.57 & 587.83 & 1.84 & 10.76 \\
\hline
Linear [Eq. (\ref{linear})]
& 576.57 & 577.74 & 0.9113 & 5 & 586.57 & 587.74 & 1.75 & 10.67 \\
\hline \end{tabular}
\caption{A statistical comparison of some successful variants of the interacting dark energy model with some well known non-interacting cosmological models, using two different combined analyses: SNe+$H(z)$+BAO and SNe+$H(z)$+BAO+CMB.}
\label{Compar}}
\end{table*}
Solving the conservation equation (\ref{continuity}), we obtain the energy density
$\rho_g$ in the form
$$\frac{\rho_g}{\rho_{cr}}=\left\{\begin{array}{ll} \frac{1}{2 \beta}
\left[\frac{\Gamma-\sqrt{|\Delta|}\tan\left(\frac{x}{2} \sqrt{|\Delta|}\right)}
{1-|\Delta|^{-\frac{1}{2}}\tan\left(\frac{x}{2} \sqrt{|\Delta|}\right)}-1-w_0\right], &\Delta < 0,\\
\frac{1}{2 \beta} \left[\left(\frac{x}{2}+\frac{1}{\Gamma}\right)^{-1}-1-w_0\right], & \Delta=0,\\
\frac{\rho_-(\Omega_m-\rho_{+})\,a^{-3\sqrt{\Delta}}-\rho_{+}(\Omega_m-\rho_{-})}
{(\Omega_m-\rho_{+})\,a^{-3\sqrt{\Delta}}-\Omega_m+ \rho_{-}}, &
\Delta > 0;
\end{array}\right.$$
where $\Delta= (1+w_0)^2- 4 \beta p_0$, $ \Omega_{m}= 1- \Omega_k- \Omega_{b0}$,
$\Gamma= 2 \beta \Omega_m+ 1+ w_0$, $\rho_{\pm}= \frac{-1-w_0\pm
\sqrt{\Delta}}{2\beta}$.
Hence, the evolution equation can be written as
$$
H^2= H_0^2 \left[\frac{\rho_g}{\rho_{cr}}+ \Omega_{b0} a^{-3}+\Omega_k a^{-2}\right].
$$
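A minimal Python sketch (not the authors' code; parameter names follow the text) of the $\Delta > 0$ branch and the resulting $H^2/H_0^2$. At $a=1$ the piecewise expression reduces to $\Omega_m$, which serves as a consistency check:

```python
import math

# Illustrative sketch of the Delta > 0 branch of rho_g/rho_cr for the
# quadratic EoS, and the resulting H^2/H_0^2. Notation as in the text:
# Delta = (1+w0)^2 - 4*beta*p0, rho_pm = (-1-w0 +- sqrt(Delta))/(2*beta),
# Omega_m = 1 - Omega_k - Omega_b0.
def rho_g_over_rhocr(a, w0, beta, p0, Omega_m):
    Delta = (1.0 + w0) ** 2 - 4.0 * beta * p0
    assert Delta > 0.0, "this sketch covers only the Delta > 0 branch"
    s = math.sqrt(Delta)
    rp = (-1.0 - w0 + s) / (2.0 * beta)
    rm = (-1.0 - w0 - s) / (2.0 * beta)
    f = a ** (-3.0 * s)
    return (rm * (Omega_m - rp) * f - rp * (Omega_m - rm)) / \
           ((Omega_m - rp) * f - Omega_m + rm)

def E2_quadratic(a, w0, beta, p0, Omega_b0, Omega_k):
    """H^2 / H_0^2 for the quadratic-EoS model."""
    Omega_m = 1.0 - Omega_k - Omega_b0
    return (rho_g_over_rhocr(a, w0, beta, p0, Omega_m)
            + Omega_b0 * a ** -3 + Omega_k * a ** -2)
```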
The value $\min\chi^2_\Sigma=576.03$ for the model (\ref{quadr}) in Table~\ref{Compar}
is better than that for the MCG model and is close to the best result, achieved by
Ansatz V (\ref{Ans5}).
\subsection{CPL parametrization}
We also consider a universe containing cold dark matter and a dark energy component
with the Chevallier-Polarski-Linder (CPL) parametrization \cite{CP2001, Linder2003},
i.e. with the same EoS as in Ansatz IV [Eq. (\ref{Ans4})]:
\begin{equation}
w = w_0+ w_1 \frac{z}{1+z}= w_0+ w_1(1-a),
\label{CPL}\end{equation}
where $w_0$, $w_1$ are two free parameters. In the presence of this dark energy
component (with its present-time fraction $\Omega_{X0}=1-\Omega_{m0} - \Omega_{k}$) the
evolution equation is
$$
\frac{H^2}{H_0^2}=\Omega_{m0} a^{-3}+ \Omega_{k} a^{-2}+ \Omega_{X0} a^{-3(1+w_0+w_1)}
e^{3 w_1(a-1)}.
$$
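This evolution equation is straightforward to code; the sketch below (with assumed parameter and function names) normalizes to $E^2(a)=H^2/H_0^2$, so that $E^2(1)=1$ by construction:

```python
import math

# Sketch of H^2/H_0^2 for the CPL dark-energy model above.
def E2_cpl(a, Omega_m0, Omega_k, w0, w1):
    Omega_X0 = 1.0 - Omega_m0 - Omega_k
    return (Omega_m0 * a ** -3 + Omega_k * a ** -2
            + Omega_X0 * a ** (-3.0 * (1.0 + w0 + w1))
            * math.exp(3.0 * w1 * (a - 1.0)))
```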
It is interesting to compare this model with the variant (\ref{Ans4}) (Ansatz IV) of our
interacting model considered above, which has the same EoS. In other words, the CPL model
(\ref{CPL}) transforms into Ansatz IV (\ref{Ans4}) if we include the interaction term
(\ref{interaction1}) with the two model parameters $\lambda_m$, $\lambda_d$ and fix the
curvature parameter $\Omega_{k}=0$.
From Table~\ref{Compar} we see that the interacting scenario
(\ref{Ans4}) (Ansatz IV) has an essential advantage ($\min\chi^2_\Sigma\simeq576.07$
and $\min\chi^2_{tot}\simeq576.9$) compared to the non-interacting model (\ref{CPL})
($\min\chi^2_\Sigma\simeq576.57$, $\min\chi^2_{tot}\simeq577.83$).
\subsection{Linear parametrization}
The model with a linear parametrization of the EoS is similar to the CPL parametrization
(\ref{CPL}) considered above, but has the following EoS \cite{Astier2001, CH1999, WA2002}
\begin{equation}
w = w_0+ w_1z,
\label{linear}\end{equation}
where $w_0$, $w_1$ are two free parameters
to be constrained by the observational data.
The evolution equation for a universe made of cold matter and dark energy with the above equation of state is
$$
\frac{H^2}{H_0^2}=\Omega_{m0} a^{-3}+ \Omega_{k} a^{-2}+ \Omega_{X0} a^{-3(1+w_0-w_1)}
e^{3 w_1 \left( \frac{1-a}{a}\right)}.
$$
One can see in Table~\ref{Compar} and in Fig.~\ref{F4} that the model (\ref{linear})
behaves very similarly to the CPL scenario (\ref{CPL}), but demonstrates an essentially
worse $\min\chi^2_\Sigma$ than the corresponding interacting model (\ref{Ans5}) (Ansatz V).
The hierarchy of the scenarios in Table~\ref{Compar} changes if we take into account
information criteria, which penalize the number $N_p$
of model parameters (degrees of freedom). In particular, the Akaike and Bayesian information criteria are given by
\cite{Akaike1974, SHL2012, SKK2006}
$$
AIC = \min\chi^2_\Sigma +2N_p,\qquad BIC = \min\chi^2_\Sigma +N_p\log N,
$$
where $N$ is the number of data points used in the fit.
These criteria give an advantage to the $\Lambda$CDM model and other models with minimal $N_p$.
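These criteria are trivial to evaluate; the sketch below reproduces two $AIC_\Sigma$ entries of Table~\ref{Compar} as a sanity check (the data count $N$ used in the test is an assumed placeholder, not the paper's exact value):

```python
import math

# The information criteria defined above, as plain functions.
def aic(min_chi2, n_params):
    return min_chi2 + 2 * n_params

def bic(min_chi2, n_params, n_data):
    return min_chi2 + n_params * math.log(n_data)
```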
\section{Summary}
\label{conclu}
In the FLRW background of our Universe we have considered an interacting scenario
between dark matter and dark energy, where both components obey a barotropic equation of
state.
The interaction is a linear combination of the energy densities of the dark components
in the form
$Q= 3\, H\, \lambda_m\, \rho_m+ 3\, H\, \lambda_d\, \rho_d$, where
($\lambda_m$, $\lambda_d$) are the coupling parameters, whose signs determine the
strength and direction of the energy flow (i.e. whether $Q> 0$ or $Q< 0$). Since the EoS
of DE could be either constant or variable, we have examined both possibilities to
explore the cosmological scenarios with the use of current astronomical data. For
$w_d=$ constant, the evolution equations for matter and dark energy take analytic forms.
For variable $w_d$ we have proposed three ansätze in Eqs.
(\ref{sol1-GA}), (\ref{sol2-GA}), (\ref{Ans3}), which emerge from the generalized ansatz
given in Eq. (\ref{GA}).
In addition to these, we have considered two more variable EoS for DE, in the form of
the CPL and linear parametrizations of equations (\ref{Ans4}) and (\ref{Ans5}),
respectively. Altogether, we have considered 7 variants of the present interacting model
for a detailed analysis.
With these 7 variants of the EoS of dark energy, we constrained the model parameters
using the joint analysis of the Union 2.1 supernovae, Hubble parameter measurements,
baryon acoustic oscillation data points and the cosmic microwave background shift
parameter.
We applied the statistical minimization technique to the $\chi^2$ functions,
considering two different joint analyses: (i)
$\chi^2_{\Sigma}=\chi^2_{SN}+ \chi^2_{H}+ \chi^2_{BAO}$,
and (ii) $\chi^2_{tot}=\chi^2_{SN}+ \chi^2_{H}+ \chi^2_{BAO}+ \chi^2_{CMB}$.
The results of the analyses are presented
in Table~\ref{Estim}.
We found that for $w_d= $ constant, the curvature parameter $\Omega_k$ plays a significant
role in the analysis. We investigated the cases $\Omega_k \neq 0$ and $\Omega_k= 0$. The
difference in the behavior of $\min\chi^2_\Sigma$ and $\min\chi^2_{tot}$ for both
the variants has been presented in the top panels of
Fig. \ref{F1} and in Table~\ref{Estim}. For the case $\Omega_k\ne 0$ of the model with
$w_d= $ constant, the minimal value of $\chi^2_\Sigma$ is better,
so for this case we presented two dimensional contour
plots at $1\sigma$, $2\sigma$, $3\sigma$ confidence levels in Fig. \ref{F1}.
Further, the possibility of variable EoS in DE has been investigated with 5 different
variants in Eqs. (\ref{sol1-GA}) $-$ (\ref{Ans5}) for the present interaction.
We found that the variants may experience singularities (\ref{singul}) (see Fig.
\ref{Fsing}) at finite time; hence, we excluded the domains of the parameters
leading to singular behavior and analyzed the variants with the current data sets
mentioned above. In most of the cases we notice that one of the coupling parameters of
the interaction has a negative sign,
so during the evolution of the universe the present interaction $Q$ changes its sign,
and thereby the direction of the energy flow changes. This effect has been shown in the bottom panel of Fig.~\ref{FSNHQ} for some successful variants of the model: for $z \lesssim 0.4$, almost all successful variants (except Ansatz I) predict a flow of energy from CDM to DE (i.e. $Q< 0$), while for $z \gtrsim 0.4$ the energy flows from DE to CDM (i.e. $Q>0$).
Furthermore, in Figs. \ref{F3}, \ref{F4},
we have presented the variation of
$\min\chi^2_\Sigma$ and $\min\chi^2_{tot}$ over the model parameters for the variable
EoS of DE presented in the paper. For Ansatz I and
Ansatz IV (CPL parametrization), we
have presented contour plots in the two-dimensional plane for several pairs of
model parameters at 1$\sigma$, 2$\sigma$, 3$\sigma$ confidence levels.
For all variants of the model with variable $w_d$, the positions of the minimum points in
parameter space are essentially different for the functions $\chi^2_\Sigma$ and
$\chi^2_{tot}$; in fact, we observe that the minimal values of $\chi^2_{tot}$
for these variants are larger.
Based on the analysis, we may conclude that the interacting model with $w_d= $ constant
is the most successful one with respect to all observational data.
Finally, some of the successful variants of the interacting model,
such as $w_d= $ constant (with $\Omega_k =0$ and $\Omega_k \neq 0$), Ansatz I,
Ansatz IV and Ansatz V, have been compared with some
well known non-interacting cosmological models: the $\Lambda$CDM model; the unified
models, namely the generalized Chaplygin gas (GCG), the modified Chaplygin gas (MCG) and
a fluid with quadratic equation of state; and finally the CPL and linear
parametrizations of DE. The results have been presented in Table \ref{Compar}.
It is found that the present interacting DE model with constant $w_d$ slightly favors
the phantom region, in agreement with the latest report \cite{NPS2016}. The best
absolute value of $\min\chi^2_\Sigma$ is achieved for the interacting model with EoS
(\ref{Ans5}) (Ansatz V). The second result demonstrates that, among the non-interacting
models, the model with quadratic EoS (\ref{quadr}) provides a better fit to the
observational data. Note that the number of model parameters of this non-interacting
model ($N_p=5$) is smaller than that of the interacting model
(\ref{Ans5}), but larger than for the GCG or $\Lambda$CDM models.
Finally, we notice that, although different models have different numbers of model
parameters, the AIC and BIC analysis shows that the models presented in Table
\ref{Compar} do not deviate much from the $\Lambda$CDM model, which has the minimum
number of model parameters.
\section*{ACKNOWLEDGMENTS}
The authors thank the referee for some essential comments to improve the work.
SP was supported by the Science and Engineering Research Board through NPDF (File No:
PDF/2015/000640).
We study $r$-nets, a powerful tool in computational and metric geometry, with several applications in approximation algorithms.
An $r$-net for a metric space $(X,\norm{\cdot}), \, |X|=n$ and for numerical parameter $r$ is a subset $R\subseteq X$
such that the closed $r/2$-balls centered at the points of $R$ are disjoint, and the closed $r$-balls around the same points cover all of $X$. We define approximate $r$-nets analogously. Formally,
\begin{dfn}
Given a pointset $X \subseteq {\mathbb R}^d$, a distance parameter $r \in {\mathbb R}$ and an approximation parameter $\epsilon >0$, a $(1+\epsilon)r$-net of $X$ is a subset $R \subseteq X$ s.t. the following properties hold:
\begin{enumerate}
\item (packing) For every $p, q \in R$, $p \neq q $, we have that $\norm{p-q}_2 \geq r$.
\item (covering) For every $p \in X$, there exists a $q \in R$ s.t. $\norm{p-q}_2 \leq (1+\epsilon)r$.
\end{enumerate}
\end{dfn}
\paragraph{Previous Work.}
Finding $r$-nets can be addressed naively by considering all points of $X$ unmarked and, while there remains an unmarked point $p$, adding it to $R$ and marking all other points within distance $r$ from
$p$. The performance of this algorithm can be improved by using grids and hashing \cite{HP04}.
However, the complexity remains too large when dealing with big data in high dimensions: the naive algorithm is quadratic in $n$ and the grid approach costs $O(d^{d/2} n)$, hence it is relevant only for constant dimension $d$~\cite{HR15}.
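The naive construction just described can be sketched as follows (illustrative Python, with centers returned in greedy order):

```python
# Sketch of the naive greedy construction: pick an unmarked point, add it
# to R, and mark (here: remove) every point within distance r of it.
def greedy_rnet(points, r):
    R, unmarked = [], list(points)
    while unmarked:
        p = unmarked[0]
        R.append(p)
        unmarked = [q for q in unmarked
                    if sum((a - b) ** 2 for a, b in zip(p, q)) > r * r]
    return R
```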
In \cite{HPM05}, the authors show that an approximate net hierarchy for an
arbitrary finite metric can be computed in $O(2^{ddim}n \log n)$, where $ddim$ is the doubling dimension.
This is satisfactory when the doubling dimension is constant, but requires a vast amount of resources when it is high.
When the dimension is high, there is need for algorithms with time complexity polynomial in $d$ and subquadratic in $n$. One approach, which computes $(1+\epsilon)r$-nets in high dimension is that of \cite{EHS15}, which uses the Locality Sensitive Hashing (LSH) method of
\cite{AI08}. The resulting time complexity is
$\tilde{O}(dn^{2-\Theta(\epsilon)})$, where $\epsilon>0$ is quite small and $\tilde{O}$ hides polylogarithmic factors.
In general, high dimensional analogues of classical geometric problems
have been mainly addressed by LSH. For instance, the approximate closest pair problem can be trivially solved by performing $n$ approximate nearest neighbor (ANN) queries. For sufficiently small $\epsilon$, this costs $\tilde{O}(dn^{2-\Theta(\epsilon)})$ time, due to the complexity factor of an LSH query. Several other problems have been reduced to ANN queries \cite{GIV01}. Recently, Valiant \cite{Val12}, \cite{Val15} presented an algorithm for the approximate closest pair problem
in time $\tilde{O}(dn^{2-\Theta(\sqrt{\epsilon})})$. This is a different approach in the sense that while LSH exploits dimension reduction through random projections, the algorithm of \cite{Val15} is inspired by high dimensional phenomena. One main step of the algorithm is that of projecting the pointset up to a higher dimension.
\paragraph{Our Contribution.}
We present a new randomized algorithm that computes approximate $r$-nets in time subquadratic in $n$ and polynomial in the dimension, and improves upon the complexity of the best known algorithm.
Our method does not employ LSH and, with probability $1-o(1)$, it returns
$R\subset X$, which is a $(1+\epsilon)r$-net of $X$.
We reduce the problem of an approximate $r$-net for arbitrary vectors (points) under Euclidean distance to the same problem for vectors on the unit sphere. Then, depending on the magnitude of distance $r$, an algorithm handling ``small" distances or an algorithm handling ``large" distances is called.
These algorithms reduce the Euclidean problem of $r$-nets on unit vectors to finding an $r$-net for unit vectors under inner product (Section~\ref{SGeneral}). This step requires that the multiplicative $1+\epsilon$ approximation of the distance corresponds to an additive $c\epsilon$ approximation of the inner product, for suitable constant $c>0$.
Next, we convert the vectors having unit norm into vectors with entries $\{-1, +1\}$ (Section \ref{SInner}).
This transformation is necessary in order to apply the Chebyshev embedding of \cite{Val15}, an embedding that damps the magnitude of the inner product of ``far" vectors, while preserving the magnitude of the inner product of ``close"
vectors.
For the final step of the algorithm, we first
apply a procedure that allows us to efficiently
compute $(1+\epsilon)$-nets in the case where the number of ``small" distances is large. Then, we apply a modified version of the {\tt Vector Aggregation} algorithm of \cite{Val15}, that exploits fast
matrix multiplication, so as to achieve the desired running time.
In short, we extend Valiant's framework \cite{Val15} and we compute $r$-nets in time $\tilde{O}(dn^{2-\Theta(\sqrt{\epsilon})})$, thus improving on the exponent of the LSH-based construction \cite{EHS15}, when $\epsilon$ is small enough. This improvement by $\sqrt{\epsilon}$ in the exponent is the same as the complexity improvement obtained in \cite{Val15} over the LSH-based algorithm for the approximate closest pair problem.
Our study is motivated by the observation that computing efficiently an $r$-net leads to efficient solutions for several geometric problems, specifically in approximation algorithms. In particular, our extension of $r$-nets in high dimensional Euclidean space can be plugged in the framework of~\cite{HR15}. The new framework has many applications, notably the $k$th nearest neighbor distance problem, which we solve in $\tilde{O}(dn^{2-\Theta(\sqrt{\epsilon})})$.
\paragraph{Paper Organization.}
Section \ref{SInner} presents an algorithm for
computing an approximate net with respect to the inner product for a set of unit vectors.
Section~\ref{SGeneral} translates the problem of finding $r$-nets under Euclidean distance to the same problem under inner product. In Section \ref{Sapps}, we discuss applications of our construction and possible future work. Omitted proofs are included in the Appendices.
We use $\norm{\cdot}$ to denote the Euclidean norm $\norm{\cdot}_2$ throughout the paper.
\section{Points on a sphere under inner product}\label{SInner}
In this section, we design an algorithm for constructing an approximate $\rho$-net of vectors on the sphere under inner product.
To that end, we reduce the problem to constructing an approximate net under absolute inner product for vectors that lie on the vertices of a unit hypercube.
Since our ultimate goal is a solution to computing $r$-nets with respect to Euclidean distance, we allow additive error in the approximation, which under certain assumptions, translates to multiplicative error in Euclidean distance.
In the following, we define rigorously the notion of approximate $\rho$-nets under inner product.
\begin{dfn}
\label{DfnNetInn}
For any $X\subset {\mathbb S}^{d-1}$, an approximate $\rho$-net for $(X,\langle \cdot,\cdot\rangle)$ ,
with additive approximation parameter $\epsilon>0$, is a subset $C\subseteq X$ which satisfies the following properties:
\begin{itemize}
\item for any two $p \neq q \in C$, $\langle p, q \rangle < \rho$, and
\item for any $x\in X$, there exists $p \in C$ s.t. $\langle x, p \rangle \geq \rho-\epsilon$.
\end{itemize}
\end{dfn}
One relevant notion is that of $\epsilon$-kernels \cite{AHV05}. In $\epsilon$-kernels, one is interested in finding a subset of the input pointset, which approximates its directional width. Such constructions have been extensively studied when the dimension is low, due to their relatively small size.
\subsection{Crude approximate nets}
In this subsection we develop our basic tool, which is based on the Vector Aggregation Algorithm by \cite{Val15}. This tool aims to compute approximate $\rho$-nets with multiplicative error, as opposed to what we have set as our final goal for this section, namely to bound additive error.
Moreover, in the context of this subsection, two vectors are close to each other when the magnitude
of their inner product is large, and two vectors are
far from each other when the magnitude of their inner product is small.
Let $|\langle \cdot,\cdot \rangle|$ denote the magnitude of the inner product of two vectors.
\begin{dfn} \label{DfnMagnInnNet}
For any $X=[x_1,\ldots, x_n], X' =[x_1',\ldots,x_n'] \subset {\mathbb R}^{d \times n}$, a crude approximate $\rho$-net for $(X,X',|\langle \cdot,\cdot \rangle|)$,
with multiplicative approximation factor $c>1$, is a subset $C\subseteq [n]$ which satisfies the following properties:
\begin{itemize}
\item for any two $i \neq j \in C$, $|\langle x_i, x_j' \rangle| < c \rho$, and
\item for any $i\in [n]$, there exists $j \in C$ s.t.\ $|\langle x_i, x_j' \rangle| \geq \rho$.
\end{itemize}
\end{dfn}
{\tt Vector Aggregation} follows the
exposition of \cite{Val15}.
The main difference is that, instead of the ``compressed'' matrix $Z^T Z$, we use the form $X^T Z$, where $Z$ derives from vector aggregation.
Both forms encode the information in the Gram matrix $X^T X$.
The matrix $X^TZ$ is better suited for our purposes, since each row corresponds to an input vector instead of an aggregated subset; this extra information may be useful in further problems.
\begin{framed}
{\tt Vector Aggregation}\\
Input: $X =[x_1,\ldots,x_n] \in {\mathbb R}^{d \times n}$,
$X' =[x_1',\ldots,x_n'] \in {\mathbb R}^{d \times n}$,
$\alpha \in (0,1)$, $\tau>0$.
Output: $n\times n^{1-\alpha}$ matrix $W$ and random partition $S_1 , \ldots , S_{n^{1-\alpha}}$ of $\{x_1,\ldots,x_n\}$.
\begin{itemize}
\item Randomly partition $[n]$ into $n^{1-\alpha}$ disjoint subsets, each of size $n^{\alpha}$ , denoting the
sets $S_1 , \ldots , S_{n^{1-\alpha}}$.
\item For each $i = 1, 2, \ldots , 78 \log n$:
\begin{itemize}
\item Select $n$ coefficients $q_1 ,\ldots , q_n \in \{-1, +1\}$ at random.
\item Form the $d\times n^{1-\alpha}$ matrix $Z^i$ with entries $z_{j,k}^i=\sum_{l\in S_k} q_l \cdot x_{j,l}'$
\item $W^i=X^T Z^i$
\end{itemize}
\item Define the $n \times n^{1-\alpha}$ matrix $W$ with $w_{i,j} =quartile(|w_{i,j}^1|,\ldots|w_{i,j}^{78 \log n}|)$.
\item Output $W$ and $S_1 , \ldots , S_{n^{1-\alpha}}$.
\end{itemize}
\end{framed}
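A simplified sketch of {\tt Vector Aggregation} (illustrative only: it takes $X'=X$, uses fewer repetitions than the $78\log n$ of the pseudocode, and takes the first quartile of the magnitudes as the robust estimate):

```python
import numpy as np

# Simplified sketch of Vector Aggregation with X' = X.
def vector_aggregation(X, alpha, reps=40, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    d, n = X.shape
    m = round(n ** (1 - alpha))                      # n^(1-alpha) subsets
    subsets = np.array_split(rng.permutation(n), m)  # random partition S_1..S_m
    trials = []
    for _ in range(reps):
        q = rng.choice([-1.0, 1.0], size=n)          # random +-1 coefficients
        Z = np.stack([(X[:, S] * q[S]).sum(axis=1) for S in subsets], axis=1)
        trials.append(np.abs(X.T @ Z))               # |W^i| with W^i = X^T Z^i
    W = np.percentile(np.stack(trials), 25, axis=0)  # robust quartile estimate
    return W, subsets
```

A pair with large inner product survives the sign-aggregation, while many tiny inner products cancel, which is exactly the dichotomy of Theorem \ref{ThmVecAgg}.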
\begin{thm}\label{ThmVecAgg}
Let $X \in {\mathbb R}^{d \times n}$,
$X' \in {\mathbb R}^{d \times n}$,
$\alpha \in (0,1)$, $\tau>0$ the input of {\tt Vector Aggregation}. Then, the algorithm returns a matrix $W$ of size $n\times n^{1-\alpha}$ and a random partition $S_1 , \ldots , S_{n^{1-\alpha}}$, which with probability $1-O(1/n^3)$ satisfies the following:
\begin{itemize}
\item For all $j\in [n]$ and $k\in [n^{1-\alpha}]$, if $\forall u \in S_k$, $ |\langle x_j, u \rangle|\leq \tau$ then $|w_{j,k}| < 3 \cdot n^{\alpha} \tau$.
\item For all $j\in [n]$ and $k\in [n^{1-\alpha}]$
if $\exists u\in S_k$, $|\langle x_j, u \rangle|\geq 3n^{\alpha}\tau$ then $|w_{j,k}| \geq 3 \cdot n^{\alpha} \tau$.
\end{itemize}
Moreover, the algorithm runs in time
$\tilde{O}(dn+n^{2-\alpha}+MatrixMul( n \times d,d \times n^{1-\alpha}))$.
\end{thm}
For the case of pointsets with many ``small" distances,
we rely crucially on the fact that the expected
number of near neighbors for a randomly chosen point
is large.
So, if we iteratively choose random points and
delete these and their neighbors, we will end up with
a pointset which satisfies the property of having sufficiently few ``small" distances. Then, we apply {\tt Vector Aggregation}.
\begin{framed}
{\tt Crude ApprxNet}\\
Input: $X =[x_1,\ldots,x_n] \in {\mathbb R}^{d \times n}$,
$X' =[x_1',\ldots,x_n'] \in {\mathbb R}^{d \times n}$,
$\alpha \in (0,1)$, $\tau>0$.
Output: $C'\subseteq [n]$, {$F' \subseteq [n]$}.
\begin{itemize}
\item $C\gets \emptyset$, $F_1 \gets \emptyset, F_2 \gets \{x_1,\ldots,x_n\}$
\item Repeat $n^{0.5}$ times:
\begin{itemize}
\item Choose a column $x_i$ uniformly at random.
\item $C \gets C \cup \{x_i\}$.
\item Delete column $i$ from matrix $X$ and column $i$ from matrix $X'$.
\item Delete each column $k$ from matrix $X$, $X'$ s.t. $|\langle x_i, x_k'
\rangle| \geq \tau$.
\item If there is no column $k$ from matrix $X$ s.t. $|\langle x_i, x_k'\rangle| \geq \tau$, then $F_1 \gets F_1 \cup \{x_i\}$
\end{itemize}
\item Run {\tt Vector Aggregation}
with input $X$, $X'$, $\alpha$, $\tau$ and output
$W$, $S_1,\ldots,S_{n^{1-\alpha}}$.
\item For each of the remaining columns $i=1,\ldots$:
\begin{itemize}
\item For any $|w_{i,j}|\geq 3 n^{\alpha} \tau$:
\begin{itemize}
\item If this step is executed more than $n^{1.7}$ times in total, output ``ERROR''.
\item Compute inner products between $x_i$ and vectors in $S_j$.
For each vector $x_k' \in S_j$ s.t. $x_k' \neq x_i$ and $|\langle x_i,x_k'\rangle|\geq \tau$,
delete row $k$ {and $F_2 \gets F_2 \backslash \{ x_i\}$.}
\end{itemize}
\item $C \gets C \cup \{x_i\} $
\end{itemize}
\item Output indices of $C$ {and $F \gets \{F_1 \cup F_2 \}$}.
\end{itemize}
\end{framed}
\begin{thm}\label{ThmCrudeNet}
On input $X =[x_1,\ldots,x_n] \in {\mathbb R}^{d \times n}$,
$X' =[x_1',\ldots,x_n'] \in {\mathbb R}^{d \times n}$,
$\alpha \in (0,1)$, $\tau>0$, {\tt Crude ApprxNet}, computes a crude $3n^{\alpha}$-approximate $\tau$-net for $X$, $X'$, following the notation of Definition \ref{DfnMagnInnNet}.
The algorithm costs time:
$$
\tilde{O}(n^{2-\alpha}+ d \cdot n^{1.7+\alpha}+MatrixMul( n \times d,d \times n^{1-\alpha})),
$$
and succeeds with probability $1-O(1/n^{0.2})$.
Additionally, it outputs a set $F\subseteq R$ with the following property:
$\{x_i \mid \forall x_j \neq x_i~ |\langle x_j,x_i \rangle | < \tau \}\subseteq F \subseteq \{x_i \mid \forall x_j \neq x_i~ |\langle x_j,x_i \rangle | < n^a\tau \}$.
\end{thm}
\begin{proof}
We perform $n^{0.5}$ iterations and for each, we compare the inner products between the randomly chosen vector and all other vectors. Hence, the time needed is $O(dn^{1.5})$.
In the following, we denote by
$X_i$ the number of vectors which have ``large" magnitude of the inner product with the randomly chosen point in the $i$th iteration.
Towards proving correctness,
suppose first that ${\mathbb E}[X_i]>2n^{0.5}$ for all $i=1, \ldots, n^{0.5}$. The expected number of vectors we delete in each
iteration of the algorithm is then more than $2n^{0.5}+1$. So, after $n^{0.5}$ iterations,
the expected total number of deleted vectors is greater than $n$. This means
that if the hypothesis holds for all iterations, we
end up with a proper net.
Now suppose that there is an iteration $j$ where ${\mathbb E}[X_j] \leq 2n^{0.5}$. After all iterations, the
number of ``small" distances is at most $n^{1.5}$ in expectation. By Markov's inequality,
when the
{\tt Vector Aggregation} algorithm is called, the following is satisfied
with probability
$1-n^{-0.2}$ :
$$
|\{(i,k) \mid |\langle x_i, x_k'\rangle| \geq\tau, i\neq k\}| \leq n^{1.7} .
$$
By Theorem \ref{ThmVecAgg}
and the above discussion, the number of entries in the matrix $W$ that we need to visit is at most $n^{1.7}$. For each entry, we perform a brute force
which costs $d n^\alpha$.
Now notice that the first loop stores centers $c$ and deletes all points $p$ for which $|\langle c,p\rangle| \geq \tau$. Hence, any two centers $c,c'$ chosen in this loop satisfy $|\langle c,c'\rangle| < \tau$. In the second loop, over the columns of $W$, notice that by Theorem \ref{ThmVecAgg}, for any two centers $c,c'$ we have $|\langle c,c'\rangle| <3 n^{\alpha}\tau.$
\end{proof}
\subsection{Approximate inner product nets}
In this subsection, we show that the problem of computing $\rho$-nets for the inner product of unit
vectors reduces to the less natural problem of Definition \ref{DfnMagnInnNet}, which refers to the magnitude of the inner product.
The first step consists of mapping the unit vectors to vectors in $\{-1,1\}^{d'}$. The mapping is essentially Charikar's LSH scheme \cite{Cha02}.
Then, we apply the Chebyshev embedding of~\cite{Val15} in order to achieve gap amplification, and finally
we call algorithm {\tt Crude ApprxNet}, which will now return
a proper $\rho$-net with additive error.
\begin{thm}[\cite{Val15}] \label{ThmUnif}
There exists an algorithm with the following properties. Let $d'=O(\frac{\log n}{\delta^2})$
and $Y \in {\mathbb R}^{d'\times n}$ denote its output
on input $X$, $\delta$, where $X$ is a
matrix whose columns have unit norm, with probability $1 - o(1/n^2 )$, for all pairs $i, j \in [n]$,
$
\Big|{\langle Y_i , Y_j \rangle}/{d'}-\Big(1-2 \cdot {\mathrm{cos}^{-1}(\langle X_i,X_j \rangle)}/{ \pi}\Big)\Big| \leq \delta
,$
where $X_i$, $Y_i$ denote the $i$th column of $X$ and $Y$ respectively. Additionally, the runtime of
the algorithm is $O( \frac{d n \log n}{\delta^2})$.
\end{thm}
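The embedding of Theorem \ref{ThmUnif} is, in essence, the random-hyperplane sign sketch of \cite{Cha02}; a minimal version (illustrative, without the theorem's exact parameter choices) is:

```python
import numpy as np

# Random-hyperplane sign embedding: Y = sign(G X) with Gaussian G, so that
# <Y_i, Y_j> / d' concentrates around 1 - 2*arccos(<X_i, X_j>)/pi.
def sign_embed(X, d_prime, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    G = rng.standard_normal((d_prime, X.shape[0]))
    return np.sign(G @ X)
```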
The following theorem provides a randomized embedding that damps the magnitude of the inner product of ``far" vectors, while preserving the magnitude of the inner product of ``close"
vectors. The statement is almost verbatim that of~\cite[Prop.6]{Val15} except that we additionally establish an asymptotically better probability of success. The proof is the same, but since we claim stronger guarantees on success probability, we include the complete proof in Appendix~\ref{AppThmCheb}.
\begin{thm}\label{ThmCheb}
Let $Y$, $Y'$ be the matrices output by algorithm ``Chebyshev Embedding" on
input $X, X' \in \{-1,1\}^{d\times n} , \tau^{+}\in [-1,1] , \tau^{-} \in [-1,1]$ with $\tau^{-}<\tau^{+}$ , integers $q, d'$.
With probability
{$ 1 - o(1/n)$} over the randomness in the construction of
$Y, Y'$, for all $i, j \in [n]$,
$\langle Y_i , Y_j' \rangle$ is within $\sqrt{d'} \log n$ from the value
$
T_q \Big(2\,\frac{\langle X_i, X_j'\rangle/d' - \tau^{-}}{\tau^{+}-\tau^{-}} -1 \Big) \cdot d' \cdot (\tau^{+}-\tau^{-})^q /{2^{3q-1}},
$
where $T_q$ is the degree-$q$ Chebyshev polynomial of the first kind. The algorithm runs in
time $O(d' \cdot n\cdot q)$.
\end{thm}
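The gap amplification stems from the growth behavior of Chebyshev polynomials: $|T_q(x)|\leq 1$ on $[-1,1]$ (``far" pairs stay damped), while outside $[-1,1]$ the value grows like $\cosh(q\,\mathrm{arccosh}|x|)$, i.e. exponentially in $q$ (``close" pairs get amplified). A small illustration:

```python
import math

# Chebyshev polynomial of the first kind, via the cos/cosh closed forms.
def cheb_T(q, x):
    if abs(x) <= 1.0:
        return math.cos(q * math.acos(x))
    s = math.copysign(1.0, x)
    return (s ** q) * math.cosh(q * math.acosh(abs(x)))
```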
\begin{framed}
{\tt Inner product ApprxNet}\\
Input: $X =[x_1,\ldots,x_n] $ with each $x_i \in {\mathbb S}^{d-1}$, $\rho \in [-1,1]$, $\epsilon \in (0,1/2]$.
Output: Sets $C, F \subseteq [n]$.
\begin{itemize}
\item If $\rho\leq \epsilon$, then:
\begin{itemize}
\item $C \gets \emptyset$, $F\gets \emptyset$, $W \gets \{x_1,\ldots,x_n\}$
\item While $W\neq \emptyset$:
\begin{itemize}
\item Choose arbitrary vector $x\in W$.
\item $W \gets W \setminus \{y \in W \mid \langle x,y\rangle \geq \rho-\epsilon \}$
\item $C \gets C \cup \{x\}$
\item If $\forall y \in W$, $\langle x,y \rangle<\rho-\epsilon $ then $F\gets F \cup \{x\}$
\end{itemize}
\item Return indices of $C$, $F$.
\end{itemize}
\item Apply Theorem \ref{ThmUnif} for input $X$, $\delta=\epsilon/2 \pi$
and output $Y\in \{-1,1\}^{d' \times n}$ for $d'=O(\log n/\delta^2)$.
\item Apply Theorem \ref{ThmCheb}
for input $Y$, $d''= n^{0.2}$, $q=50^{-1} \log n$,
$\tau^-=-1$, $\tau^{+}=1-\frac{2 \cos^{-1}(\rho-\epsilon)}{\pi} +\delta$
and output $Z, Z'$.
\item Run algorithm {\tt Crude ApprxNet}
with input $\tau=3n^{0.16}$, $\alpha=\sqrt{\epsilon}/500$, $Z,Z'$ and output $C$, $F$.
\item Return $C$, $F$.
\end{itemize}
\end{framed}
\begin{thm}\label{AppIPnet}
The algorithm {\tt Inner product ApprxNet}, on input $X =[x_1,\ldots,x_n] $ with each $x_i \in {\mathbb S}^{d-1}$, $\rho \in [-1,1]$ and $\epsilon \in (0,1/2]$,
computes an approximate $\rho$-net with additive
error
$\epsilon$, using the notation of Definition \ref{DfnNetInn}. The algorithm runs in time $\tilde{O}(dn+n^{2-\sqrt{\epsilon}/600})$ and
succeeds with probability $1-O(1/n^{0.2})$. Additionally, it computes a set $F$ with the following property:
$\{x_i \mid \forall x_j \neq x_i~ \langle x_j,x_i \rangle < \rho -\epsilon \}\subseteq F \subseteq \{x_i \mid \forall x_j \neq x_i~ \langle x_j,x_i \rangle < \rho \}$.
\end{thm}
\section{Approximate nets in high dimensions}
\label{SGeneral}
In this section, we translate the problem of computing $r$-nets in $({\mathbb R}^d,\|\cdot \|)$ to the problem of computing
$\rho$-nets for unit vectors under inner product. One intermediate step is that of computing $r$-nets for
unit vectors under Euclidean distance.
\subsection{From arbitrary to unit vectors}
In this subsection, we show that if one is interested in
finding an $r$-net for $({\mathbb R}^d,\|\cdot \|)$, it suffices
to solve the problem for points on the unit sphere. An analogous statement is used in \cite{Val15}, where it is proved that one can apply a randomized mapping from the general Euclidean space to points on the unit sphere, while preserving the ratio of distances for any two pairs of points. The claim derives from the simple observation that an $r$-net in the initial space can be approximated
by computing an $\epsilon r/c$-net on the sphere, where $c$
is the maximum norm of any input point viewed as a vector. Our exposition is even simpler, since we can directly employ the analogous theorem from \cite{Val15}.
\begin{cor}\label{Standardize}
There exists an algorithm, {\tt Standardize}, which, on input a $d \times n$ matrix $X$ with entries $x_{i,j} \in {\mathbb R}$, a constant $\epsilon \in (0, 1)$ and a distance parameter $r \in {\mathbb R}$, outputs a $m'\times n $ matrix $Y$, with columns having unit norm and $m'=\log^3n$, and a distance parameter $\rho \in {\mathbb R} $, such that a $\rho$-net of $Y$ is an approximate $r$-net of $X$, with probability $1-o(1/poly(n))$.
\end{cor}
\begin{comment}
\begin{framed}
\textbf{Standardize}\\
Input: A $d \times n$ matrix $X$ with entries $x_{i,j} \in {\mathbb R}$, a constant $\epsilon \in (0, 1)$, a distance parameter $r \in {\mathbb R}$.
Output: A $m'\times n $ matrix $Y$ with columns having unit norm and $m'=\log^3n$, and a distance parameter $\rho \in {\mathbb R} $, such that our algorithm computes an $r$-net of $X$ given a $\rho$-net of $Y$.
\begin{itemize}
\item Define two $d$-dimensional vectors $X_{n+1}, X_{n+2}$, s.t.\ $r'=X_{n+1}-X_{n+2}$ and $\|r'\|=r$, and let matrix $X'$ denote the concatenation of $X$, $X_{n+1}$ and $X_{n+2}$ with size $d\times n+2$.
\item Perform a Johnson-Lindenstrauss transformation on the columns of $X'$, projecting them to dimension $m'$, so as to yield matrix $X''$.
\item Let $c$ denote the magnitude of the largest column of $X''$. Choose a random $m'$-dimensional vector $u$ of magnitude $8c/\epsilon$.
\item Let matrix $Y$ be the result of adding $u$ to each column of $X''$ and normalizing all columns so as to have unit norm.
\item Define $\rho:=\|Y_{n+1}-Y_{n+2}\|$ to be the new distance parameter.
\end{itemize}
\end{framed}
\end{comment}
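The commented-out pseudocode above can be sketched numerically as follows. This is only an illustrative simplification under our own assumptions (Gaussian JL matrix, $m'=\lceil\log^3 n\rceil$, fixed seed), not the paper's implementation:

```python
import numpy as np

def standardize(X, r, eps, seed=0):
    """Sketch of Standardize: map arbitrary points (columns of the d x n
    matrix X) to unit vectors so that a rho-net of the output is an
    approximate r-net of the input."""
    rng = np.random.default_rng(seed)
    d, n = X.shape
    # Two auxiliary points whose difference r' = X_{n+1} - X_{n+2} has norm r.
    aux1, aux2 = np.zeros(d), np.zeros(d)
    aux2[0] = r
    Xp = np.column_stack([X, aux1, aux2])              # d x (n + 2)
    # Johnson-Lindenstrauss projection to m' ~ log^3 n dimensions.
    m = max(1, int(np.ceil(np.log(n) ** 3)))
    G = rng.normal(size=(m, d)) / np.sqrt(m)
    Xpp = G @ Xp
    # Shift every column by a random vector u of magnitude 8c/eps, where c
    # is the largest column norm, then normalize columns to the unit sphere.
    c = np.linalg.norm(Xpp, axis=0).max()
    u = rng.normal(size=m)
    u *= (8 * c / eps) / np.linalg.norm(u)
    Y = Xpp + u[:, None]
    Y /= np.linalg.norm(Y, axis=0)
    # New distance parameter: distance between the images of the aux points.
    rho = float(np.linalg.norm(Y[:, n] - Y[:, n + 1]))
    return Y[:, :n], rho
```

After this transformation all points lie on ${\mathbb S}^{m'-1}$ and $\rho$ plays the role of $r$ there.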
\subsection{Approximate nets under Euclidean distance}
In this subsection, we show that one can translate the problem of computing an $r$-net for points on the unit sphere under Euclidean distance, to finding an $r$-net for unit vectors under inner product as defined in Section \ref{SInner}. Moreover, we identify the subset of the $r$-net which contains
the centers that are approximately far from any other point. Formally,
\begin{dfn}
Given a set of points $X$ and $\epsilon>0$,
a set $F\subseteq X$ of $(1+\epsilon)$-approximate $r$-far points is defined by the following property: $\{x\in X \mid \forall x \neq y \in X ~ \|x-y\| > (1+\epsilon)r \}\subseteq F \subseteq \{x\in X \mid \forall x \neq y \in X ~ \|x-y\| > r \}$.
\end{dfn}
If $r$ is greater than some constant, the problem can be solved immediately by the law of cosines. If $r$ cannot be considered constant, we distinguish the cases $r\geq 1/n^{0.9}$ and $r <1/n^{0.9}$. The first case
is solved by a simple modification of an analogous algorithm in \cite[p.13:28]{Val15}. The second case is not straightforward and requires partitioning the pointset in a manner which allows computing $r$-nets for each part separately. Each part has bounded diameter, which implies that we need to solve a ``large $r$'' subproblem.
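The law-of-cosines step rests on the identity $\|x-y\|^2 = 2 - 2\langle x,y\rangle$ for unit vectors, so a Euclidean threshold $r$ corresponds to the inner-product threshold $1-r^2/2$. A quick numerical sanity check (our illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=6)
x /= np.linalg.norm(x)
y = rng.normal(size=6)
y /= np.linalg.norm(y)

# For unit vectors: ||x - y||^2 = 2 - 2<x, y>,
# hence ||x - y|| <= r  iff  <x, y> >= 1 - r^2 / 2.
lhs = np.linalg.norm(x - y) ** 2
rhs = 2.0 - 2.0 * float(np.dot(x, y))
```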
\begin{thm}\label{ThmLargeRadius}
There exists an algorithm, {\tt ApprxNet(Large radius)}, which,
for any constant $\epsilon\in (0,1/2]$, $X\subset {\mathbb S}^{d-1}$ s.t. $|X|=n$, outputs
a $(1+\epsilon)r$-net and a set of $(1+\epsilon)$-approximate $r$-far points with probability $1-O(1/n^{0.2})$. Additionally,
provided $r>1/n^{0.9}$ the runtime of the algorithm is $\tilde{O}(dn^{2-\Theta(\sqrt[]{\epsilon})})$.
\end{thm}
Let us now present an algorithm which translates the problem of finding an $r$-net for $r<1/n^{0.9}$ to the problem of computing an $r$-net for $r\geq 1/n^{0.9}$. The main idea is that we compute disjoint subsets $S_i$, which are far enough from each other, so that we can compute $r$-nets for each $S_i$ independently. We show that for each $S_i$ we can compute
$T_i \subseteq S_i$ which has bounded diameter and
$T_i'\subseteq S_i$ such that $T_i$, $T_i'$ are disjoint, each point in $T_i$ is far from each point in $T_i'$, and $|T_i'|\leq 3|S_i|/4$. It is then easy to find $r$-nets for $T_i$ by employing the ApprxNet(Large radius) algorithm. Then, we recurse on $T_i'$, which contains a constant fraction of the points of $S_i$. Finally, we cover the points in
$S_i \setminus(T_i \cup T_i')$ and the points which do not belong to any $S_i$.
\begin{framed}
{\tt ApprxNet(Small radius)}\\
Input: $X =[x_1,\ldots,x_n]^T $ with each $x_i \in {\mathbb S}^{d-1}$, $r< 1/n^{0.9}$, $\epsilon \in (0,1/2]$.
Output: Sets $R, F \subseteq [n]$.
\begin{enumerate}
\item Project points on a uniform random unit vector
and consider projections $p_1,\ldots,p_n$ which
wlog correspond to $x_1,\ldots,x_n\in {\mathbb R}^d$.
\item
Traverse the list as follows
\begin{itemize}
\item If $|\{j \mid p_j \in [p_i-r,p_i] \}| \leq n^{0.6}$ or $i=n$:
\begin{itemize}
\item If $|\{j \mid p_j <p_i \}| \leq n^{0.9}$ remove
from the list all points $p_j$ s.t. $p_j<p_i-r$ and save set $K=\{x_j \mid p_j\in [p_i-r,p_i] \}$.
\item If $|\{j \mid p_j <p_i\}| > n^{0.9}$ save sets $K_i=\{x_j \mid p_j\in [p_i-r,p_i] \} \cup K$,
$S_i=\{x_j\mid p_j<p_i-r\}\setminus K$ and remove projections of $S_i$ and $K_i$ from the list.
\end{itemize}
\end{itemize}
\item After traversing the list if we have not saved any $S_i$ go to 5; otherwise for each $S_i$:
\begin{itemize}
\item For each $u\in S_i$, sample
$n^{0.1}$ distances between $u$ and randomly chosen
$x_k\in S_i$. Stop if for the selected $u\in S_i$, more than $1/3$ of the sampled points are within distance $r n^{0.6}$. This means that
one has found $u$ s.t.
$|\{x_k \in S_i, \|u-x_k\|\leq r n^{0.6}\}| \geq |S_i|/4$ with high probability. If no such point was found, output ``ERROR''.
\item Let $0\leq d_1\leq \ldots\leq d_{|S_i|}$ be the
distances between $u$ and all other points in $S_i$.
Find $c\in[ r n^{0.6},2r n^{0.6}]$ s.t.
$|\{j \in [n] \mid d_j \in [c,c+r] \}| <n^{0.4}$,
store $W_i=\{x_j \mid d_j \in [c,c+r] \}$,
and remove $W_i$ from $S_i$.
\item Construct the sets $T_i=\{x_j \in S_i \mid d_j<c\}$
and $T_i'=\{x_j \in S_i \mid d_j > c+r\}$.
\begin{itemize}
\item For $T_i$, subtract $u$ from all vectors in $T_i$, run {\tt Standardize}, then {\tt ApprxNet (Large radius)}, both with {$\epsilon/4$}. Save points which correspond to output at $R_i$, $F_i$ respectively.
\item Recurse on $T_i'$ the whole algorithm, and notice that $|T_i'|\leq 3 |S_i|/4$. Save output at $R_i'$, and $F_i'$ respectively.
\end{itemize}
\end{itemize}
\item
Let $R \gets \bigcup_i R_i \cup R_i'$ and
$F \gets \bigcup_i F_i \cup F_i'$. Return to the list $p_1,\ldots,p_n$.
\begin{enumerate}
\item {Remove from $F$ all points which cover at least one point from $\bigcup_i W_i$ or $\bigcup_i K_i$.}
\item \label{itm:deleteb}
Delete all points $ (\bigcup_i T_i) \setminus (\bigcup_i R_i)$, and $ (\bigcup_i T_i') \setminus (\bigcup_i R_i')$.
\item \label{itm:deletec}For each $i$ delete all points in $W_i$ covered by $R_i$, or covered by $R_i'$.
\item \label{itm:deleted}For each $i$ delete all points in $K_i$ covered by $R$.
\item Finally delete $R$ from the list. Store the remaining points at $F'$.
\end{enumerate}
\item $R' \gets \emptyset$. Traverse the list as follows: For each $p_i$, check the distances from all $x_j$ s.t.
$p_j\in [p_i-r,p_i]$.
\begin{itemize}
\item If $\exists\, x_j \in R' :$ $\|x_i-x_j\| \leq r$, delete $x_i$ from the list, set $F' \gets F' \backslash \{x_i , x_j\}$ and continue traversing the list.
\item If there is no such point $x_j$, then $R' \gets R' \cup \{x_i\}$ and continue traversing the list.
\end{itemize}
\item Output indices of $R\gets R \cup R'$ and $F \gets F \cup F'$.
\end{enumerate}
\end{framed}
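For reference, the object that both {\tt ApprxNet} variants approximate can be computed exactly by the standard quadratic-time greedy scan (our illustrative baseline, not part of the paper's algorithms):

```python
def greedy_rnet(points, r):
    """Naive O(n^2) r-net: scan the points, keeping as a center every point
    not already r-covered; the packing and covering properties then hold
    exactly. Returns the indices of the chosen centers."""
    dist = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    centers = []
    for i, p in enumerate(points):
        if all(dist(p, points[j]) > r for j in centers):
            centers.append(i)
    return centers
```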
\begin{thm}\label{ThmSmallr}
For any constant $\epsilon>0$, $X\subset {\mathbb S}^{d-1}$ s.t.
$|X|=n$, and $r < 1/n^{0.9}$, {\tt ApprxNet(Small radius)} will output
a $(1+\epsilon)r$-net and a set of $(1+\epsilon)$-approximate $r$-far points in time $\tilde{O}(dn^{2-\Theta(\sqrt[]{\epsilon})})$, with probability $1-o(1/n^{0.04})$.
\end{thm}
\begin{proof}
Note that the points in $S_i$ had projections
$p_i$ in sets of contiguous intervals of width $r$; each interval contained $\geq n^{0.6}$ points,
so there are at most $n^{0.4}$ such intervals and hence the diameter of the projection of $S_i$ is $\leq n^{0.4}r$.
By the Johnson Lindenstrauss Lemma \cite{DG02} we have that for $v \in {\mathbb S}^{d-1}$ chosen uniformly at random:
$${\mathrm Pr}\Big[\langle u,v\rangle^{2}\leq \frac{\|u\|^2}{n^{0.4}}\Big]\leq \frac{\sqrt{d} \sqrt{e}}{n^{0.2}}.
$$
Hence,
$
{\mathbb E}[| \{ x_k,x_j\in S_i \mid \|x_k-x_j\| \geq n^{0.6}r \text{ and } \|p_k-p_j\|\leq n^{0.4} r\}|]
\leq |S_i|^2 \cdot \frac{\sqrt{e d}}{n^{0.2}},
$
and the probability
$$
{\mathrm Pr}[| \{ x_k,x_j\in S_i \mid \|x_k-x_j\| \geq n^{0.6}r \text{ and } \|p_k-p_j\|\leq n^{0.4} r\}| \geq |S_i|^{1.95}] \leq |S_i|^{0.05} \cdot \frac{\sqrt{e d}}{n^{0.2}}\leq \frac{\sqrt{e d}}{n^{0.15}}.
$$
Taking a union bound over all sets $S_i$ yields a probability of failure $o({1}/{n^{0.045}})$.
This implies that (for large enough $n$, which implies large enough $|S_i|$) at least
$${\binom{|S_i|}{2}} -|S_i|^{1.95}\geq {\frac{|S_i|^2}{4}}$$
distances between points in $S_i$ are indeed small
($\leq n^{0.6}r$). Hence, there exists some point $x_k \in S_i$ which $(n^{0.6}r)$-covers $|S_i|/2$ points. For each candidate $x_k$
we sample $n^{0.1}$ distances to other points, and by {Chernoff bounds}, if a point $(n^{0.6}r)$-covers a fraction of more than $1/2$ of the points in $S_i$, then it covers more than
$n^{0.1}/3$ sampled points with high probability. Similarly, if
a point $(n^{0.6}r)$-covers a fraction of less than $1/4$ of the points in $S_i$, then it covers less than
$n^{0.1}/3$ sampled points with high probability.
More precisely, for some fixed $u\in S_i$, let $X_j=1$
when for the $j$th randomly chosen point $v\in S_i$, it holds
$\| u-v\| \leq n^{0.6}r$ and let $X_j=0$ otherwise. Then, for $Y=\sum_{j=1}^{n^{0.1}} X_j$, it holds:
$$
{\mathbb E}[Y]\geq n^{0.1}/2 \implies {\mathrm Pr}[Y\leq n^{0.1}/3 ]\leq \exp(- \Theta(n^{0.1})),
$$
$$
{\mathbb E}[Y]\leq n^{0.1}/4 \implies {\mathrm Pr}[Y\geq n^{0.1}/3]\leq \exp(- \Theta(n^{0.1})).
$$
Since for any point $x\in T_i$ and any point $y \in T_i'$ we have $\|x-y\|>r$, the packing property of $r$-nets is preserved when we build $r$-nets for $T_i$ and $T_i'$ independently. For each $T_i$, we succeed in building $r$-nets with probability $1-O(1/n^{0.2})$. By a union bound over all sets $T_i$, we have a probability of failure $O(1/n^{0.1})$.
Furthermore, points which belong to sets $W_i$ and $K_i$ are possibly covered and need to be checked.
For the analysis of the runtime of the algorithm, notice that step \ref{itm:deleteb} costs time
$O(d\cdot (\sum_i|T_i|+\sum_i|T_i'|))=O(dn)$. Then,
step \ref{itm:deletec} costs time $O(d \cdot \sum_i |W_i|\cdot |T_i|+d \cdot \sum_i |W_i|\cdot |T_i'|)=O(d n^{1.4})$. Finally, notice that we have at most $n^{0.1}$ sets $K_i$. Each $K_i$ contains at most $2n^{0.6}$ points, hence checking each point in $\bigcup_i K_i$ with each point in $R$ costs $O(d n^{1.7})$.
Now regarding step 5, consider any interval $[p_i-r,p_i]$ in the initial list, where all points are projected. If $|\{ j \mid p_j \in [p_i-r,p_i]\}|\leq 2 n^{0.9}$, then the $i$th iteration in step 5 obviously costs $O(n^{0.9})$, since previous steps only delete points. If $|\{ j \mid p_j \in [p_i-r,p_i]\}|> 2 n^{0.9}$, we claim that
$|\{j<i \mid p_j \in [p_i-r,p_i] \text{ and } K_j \text{ is created}\}| \leq 1$. Consider the smallest $j <i$ s.t. $K_j$ is created and $p_j\in [p_i-r,p_i]$. This means that all points $p_k$, for $k\leq j$, are deleted when $p_j$ is visited. Now assume that there exists an integer $l \in (j,i)$ s.t. $K_l$ is created. This means that at most $n^{0.6}$ points remain in the interval $[p_l-r,p_l]$, while more than $n^{0.9}$ of the remaining points satisfy $p_k <p_l$. This leads to a contradiction, since by the deletion in the $j$th iteration, we know that all of the remaining points $p_k <p_l$ lie in the interval $[p_l-r,p_l]$.
Now, assume that there exists one $ j<i$ s.t. $p_j \in [p_i-r,p_i]$ and $K_j$ is created. Then, when $p_i$ is visited, there are at least $2 n^{0.9}-n^{0.6}>n^{0.9}$ remaining points in the interval $[p_i-r,p_i]$. Hence, there exists $l\geq i$ for which
the remaining points in the interval $[p_i-r,p_i]$
are contained in $S_l \cup K_l$. Hence in this case, in step 5, there exist at most $O(n^{0.6})$ points which are not deleted and belong to the interval $[p_i-r,p_i]$. Now assume that there does not exist any $ j<i$ s.t. $p_j \in [p_i-r,p_i]$ and $K_j$ is created. This directly implies that there exists $l\geq i$ for which
the remaining points in the interval $[p_i-r,p_i]$
are contained in $S_l \cup K_l$.
Finally, the total time of the above algorithm is dominated by the calls to the construction of the partial $r$-nets of the sets $T_i$, $T_i'$. Thus, the total running time is $O(\sum_{ i} {|T_i|}^{2-\Theta(\sqrt{\epsilon})}+\sum_{ i} {|T_i'|}^{2-\Theta(\sqrt{\epsilon})})=
O(\sum_{ i} {|T_i|}^{2-\Theta(\sqrt{\epsilon})}+\sum_{i} {(3|T_i|/4)}^{2-\Theta(\sqrt{\epsilon})})=
\tilde{O}(n^{2-\Theta(\sqrt{\epsilon})}).$
{Finally, taking a union bound over all recursive calls of the algorithm we obtain a probability of failure $o(1/n^{0.04})$.}
\end{proof}
We now present an algorithm that computes a
$(1+\epsilon)r$-net for points in ${\mathbb R}^d$ under the Euclidean distance.
\begin{framed}
{\tt ApprxNet}\\
Input: Matrix $X=[x_1,\ldots,x_n]$ with each $x_{i} \in {\mathbb R}^d$, parameter $r \in {\mathbb R}$, constant $\epsilon \in (0, 1/2]$.
Output: $R \subseteq \{x_1,\ldots,x_n\}$
\begin{itemize}
\item Let $Y$, $r'$ be the output of algorithm {\tt Standardize} on input $X$, $r$ with parameter $\epsilon/4$.
\item If $r \geq 1/n^{0.9}$ run {\tt ApprxNet(Large radius)} on input $Y$, $\epsilon/4, r'$ and return points which correspond to the set $R$.
\item If $r < 1/n^{0.9}$ run {\tt ApprxNet(Small radius)} on input $Y$, $\epsilon/4, r'$ and return points which correspond to the set $R$.
\end{itemize}
\end{framed}
\begin{thm}\label{ApprxNet}
Given $n$ points in ${\mathbb R}^d$, a distance parameter $r \in {\mathbb R}$ and an approximation parameter $\epsilon \in (0, 1/2]$, with probability $1-o(1/n^{0.04})$, {\tt ApprxNet} will return a $(1+\epsilon)r$-net, $R$, in $\tilde{O}(dn^{2-\Theta(\sqrt{\epsilon})})$ time.
\end{thm}
\begin{proof}
The theorem is a direct implication of Theorems \ref{ThmLargeRadius} and \ref{ThmSmallr}, together with Corollary \ref{Standardize}.
\end{proof}
\begin{thm}\label{DelFar}
Given $X\subset{\mathbb R}^d$ such that $|X|=n$, a distance parameter $r \in {\mathbb R}$ and an approximation parameter $\epsilon \in (0, 1/2]$, there exists an algorithm, {\tt DelFar}, that will return, with probability $1-o(1/n^{0.04})$, a set $F'$ with the following properties in $\tilde{O}(dn^{2-\Theta(\sqrt{\epsilon})})$ time:
\begin{itemize}
\item If for a point $p \in X$ it holds that $\forall q\neq p, q \in X$ we have $\|p-q\| > (1+\epsilon)r$, then $p \notin F'$.
\item If for a point $p \in X$ it holds that $\exists q\neq p, q \in X$ s.t. $\|p-q\| \leq r$, then $p \in F'$.
\end{itemize}
\end{thm}
\section{Applications and Future work}\label{Sapps}
Concerning applications, the authors of \cite{HR15} design an approximation scheme which solves various
distance optimization problems. The technique employs a grid-based construction of $r$-nets which is linear in $n$, but exponential in $d$. The main prerequisite of the method is the existence of
a linear-time decider (formally defined in Appendix~\ref{Aframework}). The framework is especially interesting when the dimension is constant, since the whole algorithm costs time linear in $n$ which, for some problems, improves upon previously known near-linear algorithms.
When the dimension is high, we aim for polynomial dependency on $d$, and subquadratic dependency on $n$.
Let us focus on the problem of approximating the {\it $k$th nearest neighbor distance}.
\begin{dfn}
Let $X\subset {\mathbb R}^d$ be a set of $n$ points, $\epsilon>0$ an approximation error, and let $d_1\leq \ldots \leq d_n$ be the nearest neighbor distances. The problem of computing a $(1+\epsilon)$-approximation to the {\it $k$th nearest neighbor distance} asks for a pair
$x,y\in X$ such that $\|x-y\|\in [(1-\epsilon)d_k,(1+\epsilon)d_k]$.
\end{dfn}
Now we present an approximate decider for the problem above.
This procedure combined with the framework we mentioned earlier, which employs our net construction, results in an efficient solution for this problem in high dimension.
\begin{framed}
{\tt kth NND Decider}\\
Input: $X \subseteq {\mathbb R}^d$, constant $\epsilon\in (0,1/2]$, integer $k>0$.
Output: An interval for the optimal value $f(X, k)$.
\begin{itemize}
\item Call {\tt DelFar}$(X, \frac{r}{1+\epsilon/4}, \epsilon/4)$ and store its output in $W_1$.
\item Call {\tt DelFar}$(X, r, \epsilon/4)$ and store its output in $W_2$.
\item Do one of the following:
\begin{itemize}
\item If $|W_1| > k$, then output $``f(X, k) < r"$.
\item If $|W_2| < k$, then output $``f(X, k) > r"$.
\item If $|W_1| \leq k$ and $|W_2| \geq k$, then output $``f(X, k) \in [\frac{r}{1+\epsilon/4}, (1+\epsilon/4)\,r]"$.
\end{itemize}
\end{itemize}
\end{framed}
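To make the decision logic concrete, here is a brute-force sketch in which {\tt DelFar} is replaced by an exact quadratic-time stand-in (the names and the stand-in are ours; the subquadratic running time of the real algorithm is of course lost):

```python
def dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def delfar_exact(X, r):
    """Exact quadratic-time stand-in for DelFar: the indices of points
    having some other point within distance r (this satisfies both
    properties of the DelFar guarantee for any eps >= 0)."""
    out = set()
    for i, p in enumerate(X):
        if any(j != i and dist(p, q) <= r for j, q in enumerate(X)):
            out.add(i)
    return out

def kth_nnd_decider(X, r, k, eps=0.5):
    """Locate f(X, k), the k-th smallest nearest-neighbor distance,
    relative to r, following the kth NND Decider logic."""
    w1 = delfar_exact(X, r / (1 + eps / 4))
    w2 = delfar_exact(X, r)
    if len(w1) > k:
        return "f < r"
    if len(w2) < k:
        return "f > r"
    return "f in [r/(1+eps/4), (1+eps/4)r]"
```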
\begin{thm}\label{KND}
Given a pointset $X \subseteq {\mathbb R}^d$, one can compute a $(1+\epsilon)$-approximation to the $k$th nearest neighbor distance in $\tilde{O}(dn^{2-\Theta(\sqrt{\epsilon})})$, with probability $1-o(1)$.
\end{thm}
To the best of our knowledge, this is the first high dimensional solution for this problem.
Setting $k=n$ and applying Theorem \ref{KND} one can compute the {\it farthest nearest neighbor} in $\tilde{O}(dn^{2-\Theta(\sqrt{\epsilon})})$ with high probability.
Concerning future work, let us start with the problem of finding a greedy permutation.
A permutation $\Pi = \langle \pi_1, \pi_2,\dots \rangle$ of the vertices of a metric space $(X, \|\cdot\|)$ is a \textit{greedy permutation} if each vertex $\pi_i$ is the farthest in $X$ from the set $\Pi_{i-1} = \langle \pi_1,\dots, \pi_{i-1}\rangle$ of preceding vertices. The computation of $r$-nets is closely related to that of the greedy permutation.
The $k$-center clustering problem asks the following: given a set $X \subseteq {\mathbb R}^d$ and an integer $k$, find the smallest radius $r$ such that $X$ is contained within $k$ balls of radius $r$.
By \cite{EHS15}, a simple modification of our net construction implies an algorithm for the $(1+\epsilon)$ approximate greedy permutation in time $\tilde{O}(d n^{2-\Theta(\sqrt{\epsilon})} \log \Phi)$ where $\Phi$ denotes the spread of the pointset.
Then, approximating the greedy permutation implies a
$(2+\epsilon)$-approximation algorithm for the $k$-center clustering problem. We expect that one can avoid any dependence on $\Phi$.
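For intuition, the exact greedy permutation can be computed by the classical quadratic-time farthest-point traversal; the length-$k$ prefix then yields the standard $2$-approximation for $k$-center (our illustrative code, not the paper's subquadratic construction):

```python
def greedy_permutation(points, start=0):
    """Exact greedy (farthest-point) permutation in O(n^2) time
    (Gonzalez-style). Returns the visiting order of point indices."""
    n = len(points)
    dist = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    order = [start]
    # mindist[i] = distance from point i to the current prefix of centers.
    mindist = [dist(points[start], p) for p in points]
    for _ in range(n - 1):
        nxt = max(range(n), key=mindist.__getitem__)   # farthest point
        order.append(nxt)
        for i in range(n):
            mindist[i] = min(mindist[i], dist(points[nxt], points[i]))
    return order
```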
\if 0
The Corollaries below follow from Theorem \ref{ApprxNet}, Lemma 3.5\cite{EHS15} and Lemma 2.1\cite{EHS15}.
\begin{cor} Let $X$ be a set of $n$ points in ${\mathbb R}^d$, $\epsilon \in (0, 1)$ an error parameter and let $\Phi$ be the spread of the
Euclidean metric on $X$. Then, one can compute in $O(dn^{2-\Theta(\sqrt{\epsilon})}\log \Phi)$
expected time a sequence
that is a $(1 + \epsilon)$-greedy permutation for the Euclidean metric on $X$, with high probability.
\end{cor}
\begin{cor}
Given a set $X$ of $n$ points in ${\mathbb R}^d$, an integer $k$ and an error parameter $\epsilon \in (0, 1)$, one can compute with high probability a $(2+\epsilon)$-approximation to the optimal $k$-center clustering in $O(dn^{2-\Theta(\sqrt{\epsilon})}\log \Phi)$, where $\Phi$ is the spread of the Euclidean metric on $X$.
\end{cor}\fi
\subsection*{Acknowledgment.}
I.Z.~Emiris acknowledges partial support by the EU
H2020 research and innovation programme, under the Marie Sklodowska-Curie grant
agreement No 675789: Network ``ARCADES".
\newpage
\bibliographystyle{alpha}
\section{Introduction}
Relativistic astrophysical jets are ubiquitous. Most collimated relativistic jets
extend from several thousand up to millions of parsecs \cite[e.g.,][]{blandford2019} and have been
observationally associated with the activity of central black holes in Active Galactic Nuclei
\cite[AGN, e.g.,][]{EHT-I} and Gamma-ray Bursts
(GRBs) \cite[e.g.,][]{ruiz2018}. The formation and powering of these astrophysical jets are highly complex phenomena involving
relativistic plasmas and twisted magnetic fields, which are organized in such a manner as to ultimately launch an outflow from a central
compact source.
For more than two decades, Particle-in-Cell (PIC) models have been used to study unmagnetized and magnetized relativistic jets that
interact with the interstellar medium. These investigations have offered tremendous insight into the instabilities, turbulence,
and shocks that can develop in the out-flowing plasma, leading to particle acceleration and the production of nonthermal radiation
\cite[e.g.,][]{giannios2009,Pino10,uzdensky2011,granot2012,mcKinney2012,sironi2013,
sironi2015,Ardaneh16,Kadowaki18,Kadowaki19,Christie19,Fowler19}.
Relativistic jets interact with the plasma environment of an astrophysical source, and instabilities subsequently develop which are responsible for the acceleration of particles \cite[e.g.,][]{nishikawa2021}.
In some cases, e.g. for unmagnetized jets, previous computer simulations have shown that
the Weibel instability (WI) is ubiquitous and generates relativistic shocks resulting in particle
acceleration \cite[e.g.,][]{silva2003,nishikawa2003,jaroschek2005,spitkovsky2008a,spitkovsky2008b,
dieckmann2008,nishikawa2009}. Other instabilities, such as the kinetic Kelvin-Helmholtz instability
(kKHI) and the mushroom instability (MI), are driven by the velocity shear at the boundary
between the jet and the ambient medium in a slab model \cite[e.g.,][]{alves2012,nishikawa2013,alves2015,nishikawa2014}.
It should be noted that kKHI is generated along the jet velocity, in contrast to MI, which is excited in the direction perpendicular to it. For an $e^{\pm}$ pair jet both kKHI and MI generate an AC magnetic field whilst an electron-proton jet generates a DC magnetic field.
Beyond the slab geometry, PIC simulation studies have been performed to investigate the evolution of cylindrical jets with a helical magnetic-field topology
\cite[e.g.,][]{nishikawa2014,alves2015, nishikawa2019,nishikawa2021}. The present investigation focuses on the nature of particle acceleration in these relativistic plasma flows.
Another possible mechanism of particle acceleration in jets is magnetic reconnection. In this process the
magnetic topology is rearranged and the magnetic energy is converted into thermal and kinetic particle energy.
Magnetic reconnection is observed in solar and planetary magnetospheric plasmas. It is also
often assumed to be an important mechanism of particle acceleration in extragalactic environments such as
AGN and GRB jets \cite[e.g.,][]{Drenkhahn2002, Pino05, uzdensky2011, zhang2011, granot2012, mcKinney2012,
giannios2010, giannios2011, komissarov2012, sironi2015, Pino18, Kadowaki18, Kadowaki19,
Christie19,Fowler19}.
Magnetic reconnection has commonly been studied with PIC simulations using the so-called Harris model
in a slab geometry. It was observed to produce a significant particle acceleration
\cite[e.g.,][]{zenitani2005,oka2008, daughton2011, kagan2013, wendel2013, karimabadi14, sironi2014, guo2015, guo2016a, guo2016b}. These studies, however, cannot be applied directly to astrophysical relativistic jets,
since observations \citep{hawley2015,gabuzda2019}, as well as MHD modelling \citep{tchekhovskoy2015}, suggest that the magnetic-field
topology is predominantly helical; i.e., it
consists of toroidal and poloidal magnetic field components.
Global 3D PIC modelling of relativistic jets allows for a self-consistent investigation of the complex kinetic processes
occurring in the jet and the surrounding medium. These processes can reveal electron-scale short-wavelength
instabilities, their saturation, and associated phenomena. Such studies were first performed for unmagnetized jets
\citep{nishikawa2016a}. PIC simulations of relativistic jets containing helical magnetic fields were, for
the first time, conducted by \citet{nishikawa2016b}. These initial studies addressed the early, linear growth of kinetic instabilities in the electron-proton and electron-positron jets. However, such simulations were limited by the size of the computational box.
The present study involves a much larger jet radius and longer simulation
times than previous works \cite[e.g.,][]{nishikawa2016a, nishikawa2016b, nishikawa2017}, allowing for a non-linear evolution of
the jets with a toroidal magnetic field. It is designed to address the following key questions: (i) How does a toroidal magnetic field affect the growth of kKHI, MI, and WI within the jet and in the jet-ambient plasma boundary? (ii) How do jets composed of electrons and positrons
and jets composed of electrons and protons evolve in the presence of a large-scale toroidal magnetic field?
(iii) How and where are particles accelerated in jets with different plasma compositions?
Since the magnetic field structure and particle composition of relativistic jets are still not well understood, this systematic
study of e$^{-}$ - p$^{+}$ and e$^{\pm}$ jets containing a toroidal field helps to provide an advanced
and detailed understanding of the magnetic field evolution, the generation of instabilities, possible reconnection events, and
the particle acceleration applicable in the environments of AGN and GRB jets. It is important to note that the differences
in the magnetic field morphologies between jets composed of e$^{-}$ - p$^{+}$ and e$^{\pm}$ could leave significant imprints on
the polarized emission from AGN jets and GRBs. Particularly, circular polarization
(measured as the Stokes parameter $V$) in the continuum radio emission from AGN jets provides a powerful diagnostic tool of magnetic structures and particle composition because, unlike linear polarization, circular polarization is expected to remain almost completely unmodified by external screens \cite[e.g.,][]{osu2013}.
It is important to note, as discussed in \citep{nishikawa2020} for e$^{-}$ -\, p$^{+}$
jets, that our simulations do not address the large-scale plasma flows of macroscopic
parsec-scale jets studied by relativistic magnetohydrodynamic (RMHD) simulations. Instead they explore the relevant kinetic-scale physics within relativistic
jet plasmas, which cannot be studied with RMHD simulations.
Our study, therefore, is complementary to RMHD models and yields important insights into the kinetic processes at work in relativistic astrophysical jets \citep[see also][]{nishikawa2020,nishikawa2021}.
This paper is organised as follows: after we describe the simulation set-up in Section 2, the main differences between electron-positron and electron-proton jets, at the linear stage and at the fully developed non-linear stage, are shown in Section 3.
In particular, we discuss the kinetic instabilities in the linear and non-linear stages in Section 3.1. In Section 3.2 we present results on particle acceleration, discussing the patterns of electron acceleration and deceleration in comparison with the structure of the electromagnetic field, while we show the three-dimensional magnetic field evolution in Section 3.3. Lastly, in Section 3.4 we present results on the role of the non-linear stage, the growth of the instabilities, and their role in the subsequent acceleration. In Section 4, we summarize and discuss our conclusions.
\begin{figure}
\hspace{0.8cm}
\includegraphics[scale=0.4,angle=0]{globaljetTb}
\caption{The schematic jet injection scheme with a toroidal magnetic field ($B_{\phi}$).
The jet electrons and positrons (protons) are injected so that the current (indicated with the red arrow) is generated to support the toroidal
magnetic field.}
\label{jet_inj}
\end{figure}
\section{Numerical approach of the simulation set-up}
\vspace*{-0.0cm}
Our 3D PIC code is a modified version of the relativistic electromagnetic PIC code
TRISTAN \citep{tristan} with MPI-based parallelization \citep{niemiec_2008,nishikawa2009}.
The numerical grid is set to $(L_{x}, L_{y}, L_{z}) = (1285\Delta, 789\Delta, 789\Delta)$ and is twice as long as that in our
previous simulation studies \citep{nishikawa2016b,nishikawa2017,nishikawa2019}. Here $\Delta =1$ is
the size of the grid cells. Open boundaries are used on the surfaces at $x/\Delta=0$ and $x/\Delta=1285$, whilst periodic
boundary conditions are implemented along the transverse directions $y$ and $z$.
Since the jet is located in the center of the simulation box far from the boundaries, the effect of periodic boundaries is negligible.
\subsection{A new jet injection scheme and simulation parameters}
In this work, we use a new jet injection scheme. Into the ambient plasma at rest we inject a cylindrical jet which propagates
in the $x$-direction, with a toroidal magnetic field (Eq. \ref{hmfcar}),
as schematically shown in Fig.~\ref{jet_inj}.
The jet is injected at $x = 100\Delta$ in the center of the $y$ - $z$ plane at $(y_{\rm jc},\,z_{\rm jc})$, and propagates in the
$x$-direction. Its radial width in cylindrical coordinates
is $r_{\rm jet} = 100\Delta$.
In this study, we apply only a toroidal magnetic field, $B_{\phi}(r)=B_{0}\,(r/a)/[1 + (r/a)^2]$. The poloidal field component, $B_{\rm x}$
\citep[e.g.,][]{nishikawa2020}, is not included.
$B_{\phi}$ has its peak amplitude at $r=a$, where $a$ is the characteristic radius.
In Cartesian coordinates, the
corresponding field components, $B_{\rm y}$ and $B_{\rm z}$, are calculated as:
\begin{eqnarray}
B_{y}(y, z) = \frac{(z-z_{\rm jc})B_{0}}{a[1 + (r/a)^2]}, \, \, \ \
B_{z}(y, z) = -\frac{(y-y_{\rm jc})B_{0}}{a[1 + (r/a)^2]}.
\label{hmfcar}
\end{eqnarray}
Equation~(\ref{hmfcar}) describes the magnetic field of left-handed polarity for positive $B_{0}$. In this study we assume $a=50\Delta$.
For the field outside of the jet we multiply Equation~(\ref{hmfcar}) with a damping function:
\begin{equation}
\Theta(r- r_{\rm jet})\,= \frac{r_{\rm jet}}{r}, \,\,\, \, \, {\rm where}\,\,\, r > r_{\rm jet}.
\end{equation}
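The initialization of Eqs.~(1)--(2) can be sketched as follows (our illustration; $B_0=0.1$, $a=50\Delta$, $r_{\rm jet}=100\Delta$ as in the text, with the jet centre $(y_{\rm jc}, z_{\rm jc})$ assumed at the midplane of the $789\Delta$ transverse box):

```python
import math

B0, A, R_JET = 0.1, 50.0, 100.0        # code units: B_0, a, r_jet (in Delta)

def toroidal_field(y, z, y_jc=394.5, z_jc=394.5):
    """B_y, B_z of Eq. (1), damped by Theta(r - r_jet) = r_jet/r outside."""
    dy, dz = y - y_jc, z - z_jc
    r = math.hypot(dy, dz)
    denom = A * (1.0 + (r / A) ** 2)
    by = dz * B0 / denom
    bz = -dy * B0 / denom
    if r > R_JET:                       # damping outside the jet radius
        by *= R_JET / r
        bz *= R_JET / r
    return by, bz
```

The resulting field magnitude is $B_0\,(r/a)/[1+(r/a)^2]$, peaking at $B_0/2$ at $r=a$.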
We assume a top-hat density profile for the jet.
Although the shape of a real jet is far more complex,
the present results are a first step in a series of advanced numerical investigations, including the implementation of
a Gaussian or Lorentzian (non-top-hat) profile, which is closer to the jet density profiles generated by general relativistic
simulations \citep[e.g.,][]{nishikawa2021}.
In our new jet injection scheme the magnetic field is co-moving with the jet and is generated by the current that is self-consistently created by the jet particles.
First, we calculate the velocities of the jet particles based on $\mathbf{J}(r)= \nabla\, \times \mathbf{B}$ in the
jet frame. Then the velocities of the jet particles are Lorentz-transformed to the simulation frame. In the simulation frame the jet electrons propagate faster than positrons (protons), which generates a negative current in the jet
(clockwise as viewed from the jet head). In order to sustain the current
in the jet, the toroidal magnetic field is gradually applied at the jet orifice, located at $x/\Delta=100-102$,
and a motional electric field is set up, $\mathbf{E}_{\rm mot} = - \mathbf{v}_{\rm j}
\times \mathbf{B}$. Here, $\mathbf{v}_{\rm j}={v}_{\rm j,x}\mathbf{\hat{x}}$, where ${v}_{\rm j,x}$ is the $x$-component of the jet velocity.
In this way one avoids non-linear effects emerging from a constantly applied magnetic field in the simulation frame, where unnatural
banding and currents in the centre of the jet might occur \citep[e.g.,][]{nishikawa2020}. After thorough testing of
this new jet injection
scheme, we confirm that such unnatural effects are not observed.
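As a consistency check (in code units with $\mu_0 = 1$, and up to the overall sign fixed by the left-handed polarity), Amp\`ere's law applied to the adopted profile gives the magnitude of the axial current density that the jet particles must carry:
\[
|J_{x}(r)| = \left|\big[\nabla\times\mathbf{B}\big]_{x}\right| = \frac{1}{r}\frac{\partial}{\partial r}\big(r\,B_{\phi}(r)\big)
= \frac{2B_{0}}{a\,[1+(r/a)^{2}]^{2}},
\]
i.e., the current density is largest on the axis and falls off as $(r/a)^{-4}$ far from it.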
In our simulations, the jet Lorentz factor is set to $\gamma_{\rm jt}=15$. The jet is initially weakly
magnetized while the ambient medium is unmagnetized. The jet's magnetic-field amplitude
$B_{0}=0.1c$ corresponds to a plasma magnetization \mbox{$\sigma = B_{\rm 0}^{2}/(n_{\rm e}m_{\rm
e}\gamma_{\rm jt}c^{2}) = 6.9\times 10^{-4}$}, where $c$ is the speed of light, $m_{\rm e}$ is the electron rest mass
and $n_{\rm e}$ the electron density. In order to investigate
the non-linear stage of the jet's evolution,
we follow the jet for a sufficiently long time of $t_{\rm max}=1000\omega_{\rm pe}^{-1}$.
\begin{figure*}
\hspace*{3.1cm} {\bf e$^{\pm}$ jet} \hspace*{2.5cm} (a) \hspace*{4.1cm} {\bf e$^{-}$ - p$^{+}$ jet}
\hspace*{2.5cm} (b)
\includegraphics[scale=0.45,angle=0]{dlorBxz16sEpaN_011}
\includegraphics[scale=0.45,angle=0]{dlorBxz16sEraN_011}
\hspace*{6.5cm} (c) \hspace*{7.8cm} (d)
\includegraphics[scale=0.45,angle=0]{dlorBxz16sEpaHn_011}
\includegraphics[scale=0.45,angle=0]{dlorBxz16sEraHn_011}
\caption{2D maps of the Lorentz factor of the jet electrons at $y/\Delta =381$ for
e$^{\pm}$ jets (left panels) and the e$^{-}$ - p$^{+}$ jets (right panels) with $r_{\rm jet} = 100\Delta$ at time $t = 1000\,\omega_{\rm pe}^{-1}$.
Panels (a) and (b) show unmagnetized jets and panels (c) and (d) the jets with the toroidal magnetic field.
Black arrows show the in-plane magnetic field $(B_x,B_z)$.}
\label{Lorentz}
\end{figure*}
Both the jet and the ambient plasma are composed of electrons and protons or of electrons and positrons. The initial number densities measured in the simulation frame are $n_{\rm jt}= 8$ and
$n_{\rm amb} = 12$
in the jet and in the ambient plasma, respectively.
The Debye length for the ambient electrons is $\lambda_{\rm D}=0.5\Delta$ and the electron skin depth is $\lambda_{\rm se} = c/\omega_{\rm pe} = 10.0\Delta$, where $\omega_{\rm pe} = (e^{2}n_{\rm amb}/(\epsilon_0 m_{\rm e}))^{1/2}$ is the electron plasma frequency.
The thermal speed of jet electrons is $v_{\rm jt,th,e} = 0.014c$ in the jet frame
whilst in the ambient plasma it is $v_{\rm am,th,e} = 0.05c$.
The thermal speed of protons is smaller by a factor of $(m_{\rm p}/m_{\rm e})^{1/2}\approx 42$.
\begin{figure*}
\hspace*{3.1cm} {\bf e$^{\pm}$ jet} \hspace*{2.3cm} (a) \hspace*{4.6cm} {\bf e$^{-}$ - p$^{+}$ jet}
\hspace*{2.5cm} (b)
\includegraphics[scale=0.5,angle=0]{dedeBxz16fMpaN_011}
\includegraphics[scale=0.5,angle=0]{dedeBxz16fMraN_011}
\hspace*{6.3cm} (c) \hspace*{8.cm} (d)
\includegraphics[scale=0.5,angle=0]{dedeBxz16fMpaHn_011}
\includegraphics[scale=0.5,angle=0]{dedeBxz16fMraHn_011}
\caption{Color maps of the jet electron density with black arrows depicting the magnetic field components in the
$x$ - $z$ plane, at $t=1000\, \omega_{\rm pe}^{-1}$.
As in Fig.~\ref{Lorentz}, left panels show results for e$^{\pm}$ jets and the right panels for e$^{-}$ - p$^{+}$ jets. Upper panels (a) and (b) show unmagnetized jets and lower panels (c)
and (d) jets with a toroidal magnetic field. Note that the maps display the jet structures in a region of the computational box, $250<x/\Delta<500$, that
fully encompasses the injected jet with $r_{\rm jet} = 100\Delta$. The jet electron density at injection is $n_{\rm jt}= 8$.}
\label{ede4}
\end{figure*}
\begin{figure*}
\hspace*{3.1cm} {\bf e$^{\pm}$ jet} \hspace*{2.3cm} (a) \hspace*{4.1cm} {\bf e$^{-}$ - p$^{+}$ jet}
\hspace*{2.5cm} (b)
\includegraphics[scale=0.5,angle=0]{dByBxz16fMpaN_011}
\includegraphics[scale=0.5,angle=0]{dByBxz16fMraN_011}
\hspace*{6.3cm} (c) \hspace*{8.cm} (d)
\includegraphics[scale=0.5,angle=0]{dByBxz16fMpaHn_011n}
\includegraphics[scale=0.5,angle=0]{dByBxz16fMraHn_011n}
\caption{Color maps of the $B_{\rm y}$ magnetic field with arrows depicting the magnetic field components in the $x$ - $z$ plane, at $t=1000\, \omega_{\rm pe}^{-1}$. See Fig.~\ref{ede4}.
The squares with dashed lines indicate the areas plotted in the 3D displays in Fig. \ref{3DBV} (red) and Fig. \ref{ReconS} (blue).}
\label{By4}
\end{figure*}
\section{Results of simulations of a jet with a toroidal magnetic field}
We present simulation results for e$^{\pm}$ and e$^{-}$ -\, p$^{+}$ jets with a toroidal
magnetic field, applying a new and improved jet injection scheme. We are in particular interested in the differences in the dynamical
behaviour of the jets of different plasma compositions and in the way these jets interact with the surrounding environment.
In order to give an overview of how the toroidal magnetic field affects the evolution of the jet, in Figs.~\ref{Lorentz}-\ref{Jx4} we present simulation results for jets with a toroidal field and compare them to the results obtained for unmagnetized jets.
Figure \ref{Lorentz} presents a global jet structure and shows the Lorentz factor of the jet electrons for an e$^{\pm}$ (a, c) and an e$^{-}$ - p$^{+}$ (b, d) jet.
In all panels the magnetic field direction in the $x$ - $z$ plane is indicated with the black arrows.
For the e$^{\pm}$ jets, the structures seen at the boundary between the jet and the ambient plasma at
$500< x/\Delta < 1000$ show the excitation and development of kKHI and MI, where the MI represents the transverse component/dynamics of the kKHI. After the dissipation of the magnetic
field around $x/\Delta = 800$ (see Fig.~\ref{By4}), the disruption around the
jet head is prominent in both the unmagnetized and weakly magnetized jets. The jets expand
radially, which resembles a Mach cone (a bow shock) structure.
Such bow shock structures are less prominent in the e$^{-}$ - p$^{+}$ jets. A notable difference between the unmagnetized jet and the jet with the toroidal magnetic field is that in the latter
the jet electrons expand outside the jet in the region $x/\Delta\simeq 500-900$.
This indicates that the jet electrons are radially pushed out as a result of instabilities excited within the jet.
We further discuss the particle Lorentz factor distributions in the jets in Section~3.2.
Figure \ref{ede4} depicts the density of the jet electrons (ambient electrons excluded) for e$^{\pm}$ jets (left column) and e$^{-}$ - p$^{+}$ jets (right column).
We observe that the jet electron density structures fluctuate greatly between the cases studied. This is due to the excited instabilities, whose growth depends on the plasma conditions which are different for the two jet compositions and magnetizations.
One can note that in the jets with toroidal magnetic fields the jet electrons are collimated towards the center of the jets in a region $400\lesssim x/\Delta\lesssim 800-900$, which is coincident with a stronger MI excited in the presence of the toroidal field, as we show below.
Figure \ref{By4} shows the amplitude of the $B_{\rm y}$ component with the arrows indicating the magnetic field components in the $x$ - $z$ plane\footnote{Supplementary videos at \href{https://doi.org/10.5281/zenodo.5449257}{doi: 10.5281/zenodo.5449257}}.
For the magnetized jets, we apply $B_{\rm 0} = 0.1$ initially at the jet orifice while $B_{\rm y}$ is measured in the same units.
In the presence of the toroidal magnetic field for the e$^{\pm}$ jet,
we find a maximum value of $B_{\rm y} = 2.69$ (Fig. \ref{By4}c), which means that $B_{\rm 0}$ is amplified by a factor of 26.9 over the initial value. This is a factor of 1.7 stronger than the magnetic fields in the unmagnetized e$^{\pm}$ jet. The evident differences in the magnetic field structure between the cases with and without the initial field indicate a significant impact of the toroidal field on the development of the kinetic instabilities, despite the weakness of the initial magnetic field. The same is true for the e$^{-}$ - p$^{+}$ jet, in which the magnetic field amplification is much stronger and the field amplitudes reach $B/B_0\approx 56$, a factor of 1.3 stronger compared to the unmagnetized case. One can note that for both jet compositions
the magnetic field dissipates, i.e., becomes considerably weakened, in the jet region beyond $x/\Delta\gtrsim 800$.
The unmagnetized and magnetized jet structures are further compared in Section 3.1.
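The quoted amplification factors follow directly from the peak $B_{\rm y}$ values given in the caption of Fig.~\ref{ByBxz}; a trivial arithmetic check:

```python
B0 = 0.1              # initial toroidal field amplitude at the jet orifice
B_peak_pair = 2.691   # peak B_y, magnetized e+- jet at t = 1000/omega_pe
B_peak_ep = 5.673     # peak B_y, magnetized e- p+ jet at t = 1000/omega_pe

amp_pair = B_peak_pair / B0   # ~26.9, as quoted for the pair jet
amp_ep = B_peak_ep / B0       # ~56.7, quoted as B/B0 ~ 56
print(amp_pair, amp_ep)
```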
Figure \ref{Jx4} shows the current amplitude $J_{\rm x}$ and the magnetic field components in the $x$ - $z$ plane. A stronger $J_{\rm x}$ for the pair jet causes a MI modified with a developed kKHI
(Fig. \ref{ByBxz}). Also, the concentric current structure merges at the centre of the jet as seen in the e$^{-}$ - p$^{+}$ jet
(Fig. \ref{Jx4}d).
\begin{figure*}
\hspace*{3.1cm} {\bf e$^{\pm}$ jet} \hspace*{2.0cm} (a) \hspace*{4.1cm} {\bf e$^{-}$ - p$^{+}$ jet}
\hspace*{2.5cm} (b)
\includegraphics[scale=0.48,angle=0]{dJxBxz16fMpaN_011n}
\includegraphics[scale=0.48,angle=0]{dJxBxz16fMraN_011n}
\hspace*{6.3cm} (c) \hspace*{8.cm} (d)
\includegraphics[scale=0.48,angle=0]{dJxBxz16fMpaHn_011n}
\includegraphics[scale=0.48,angle=0]{dJxBxz16fMraHn_011n}
\caption{Color maps of the
$J_{\rm x}$ component of the electric current with arrows showing the magnetic field components in the
$x$ - $z$ plane, at $t=1000\, \omega_{\rm pe}^{-1}$. See Fig.~\ref{ede4}.}
\label{Jx4}
\end{figure*}
\begin{figure*}
\includegraphics[scale=0.48,angle=0]{ByBxz16paH_011C}
\includegraphics[scale=0.48,angle=0]{ByBxz16raH_011C}
\caption{Color maps of the magnetic field amplitude $B_{\rm y}$ and arrows depicting the magnetic field components in the $x$ - $z$ plane,
at $t=600\, \omega_{\rm pe}^{-1}$ (upper panels) and $t=1000\, \omega_{\rm pe}^{-1}$ (lower panels).
The jet is injected at $x=100\Delta$ in the middle of the $y$ - $z$ plane and propagates in the $+x$-direction.
Panels (a, c) are for an e$^{\pm}$ plasma while panels (b, d) are for an e$^{-}$ - p$^{+}$ composition.
The peak amplitudes of $B_{\rm y}$ are (a) $\pm 1.591$, (b) $\pm 3.339$,
(c) $\pm 2.691$ and (d) $\pm 5.673$.
\label{ByBxz}}
\end{figure*}
\subsection{Kinetic instabilities in the linear and non-linear stage}
Figure \ref{ByBxz} shows the magnetic field component $B_{\rm y}$ in the $x$ - $z$ plane at $y/\Delta=381$ with an in-plane magnetic field depicted with black arrows. Results for jets with toroidal magnetic fields are shown and the upper panels present the field structures in the linear regime at time $t =600\, \omega_{\rm pe}^{-1}$, whereas the lower panels depict the non-linear stage at time $t=1000 \, \omega_{\rm pe}^{-1}$ (compare Figs.~\ref{By4}c, d).
The pair jet is shown in the left panels (a, c), and the e$^{-}$ - p$^{+}$ jet in the right panels (b, d).
\begin{figure}
\hspace*{6.5cm} (a)
\hspace*{-0.35cm}
\includegraphics[width=0.99\linewidth]{dphasx_vx16raHn_011b}
\hspace*{6.5cm} (b)
\includegraphics[width=0.99\linewidth]{dphasx_vz16raHn_011b}
\hspace*{6.5cm} (c)
\includegraphics[width=0.99\linewidth]{dExByz16MFraHn_011_520}
\vspace*{-0.2cm}
\caption{a) $x$ - $\gamma v_{\rm x}$ distribution of jet (red) and ambient (blue)
electrons at $t = 1000\,\omega_{\rm pe}^{-1}$, for an electron-proton jet. The initial $\gamma v_{\rm x} = 15$ is marked with the horizontal
dashed black line. b) $x$ - $\gamma v_{\rm z}$ distribution of jet (red)
electrons at $t = 1000\,\omega_{\rm pe}^{-1}$, for an e$^{-}$ - p$^{+}$ jet. The initial $\gamma v_{\rm z} = 0$ is marked with the horizontal dashed black line.
The cross-section at $x/\Delta = 520$ is marked with vertical dashed lines.
c) Color-map of $E_x$ in the $y$-$z$ plane at $x/\Delta = 520$, marked by the vertical lines in panels (a) and (b), with the arrows indicating $B_{y,z}$. The maxima and minima are $\pm 2.41$.}
\label{bcontours}
\end{figure}
\begin{figure}
\hspace*{6.0cm} (a)
\vspace{-0.5cm}
\hspace*{-0.31cm}
\includegraphics[width=0.82\linewidth]{dphasx_vxpaHn_007Lb}
\hspace*{6.cm} (b)
\hspace*{0.3cm}
\includegraphics[width=0.90\linewidth]{dExBxz16FpaHn_007}
\hspace*{6.0cm} (c)
\hspace*{0.3cm}
\includegraphics[width=0.90\linewidth]{dExByz16MFpaHn_007_560}
\vspace*{-0.2cm}
\caption{a) $x$ - $\gamma v_{\rm x}$ distribution of jet (red) and ambient (blue)
electrons at $t = 600\,\omega_{\rm pe}^{-1}$, for an e$^{\pm}$ jet. The initial $\gamma v_{\rm x} = 15$ is marked with the horizontal dashed black line.
Panel b:
Color map of $E_{\rm x}$ with the arrows of $B_{\rm x, z}$ at $y/\Delta = 381$ at $t = 600\,\omega_{\rm pe}^{-1}$.
The cross-section at $x/\Delta = 560$ is marked with vertical dashed lines.
Panel c: Color map of $E_x$ in the $y$ - $z$ plane at $x/\Delta = 560$, marked by the vertical bar in
panels a and b, with arrows indicating $B_{y,z}$. The maxima and minima are $\pm 2.41$.}
\label{econtours}
\end{figure}
\iffalse
\begin{figure}
\hspace*{-0.05cm}
\includegraphics[width=0.99\linewidth]{paaccelB}
\vspace*{-0.2cm}
\caption{Upper panel: $x$ - $\gamma v_{\rm x}$ distribution of jet (red) and ambient (blue)
electrons at $t = 1000\,\omega_{\rm pe}^{-1}$, for an e$^{\pm}$ jet. The initial $\gamma v_{\rm x} = 15$ is marked with the horizontal
dashed black line. Lower panel: Color map of $J_{\rm x}$ in the $x$-$z$ plane at $x/\Delta = 750$ and 850, with arrows
indicating $B_{x,z}$. The maximum and minimum are $\pm 50.0$. Vertical dashed lines show the peaks of the
accelerated jet electrons and the stripes the $x$ component
of current. The number of striped $J_{\rm x}$ corresponds to the peaks of accelerated jet electrons.}
\label{paaccel}
\end{figure}
\fi
\begin{figure}
\hspace*{-0.05cm}
\includegraphics[width=0.99\linewidth]{dphasx_vx16paHn_011C}
\includegraphics[width=0.99\linewidth]{dExBxz16fMpaHn_011}
\vspace*{-0.2cm}
\caption{Upper panel: $x$ - $\gamma v_{\rm x}$ distribution of jet electrons (red), jet positrons (green) and
ambient (blue) electrons at $t = 1000\,\omega_{\rm pe}^{-1}$.
Lower panel: Color map of $E_{\rm x}$ in the $x$-$z$ plane at $y/\Delta = 381$, with arrows indicating $B_{x,z}$.
The maximum and minimum are $\pm 0.817$.
The dislocation of jet electrons and positrons generates the stripes of positive and negative $E_{\rm x}$.}
\label{outofface}
\end{figure}
\begin{figure*}
\hspace*{3.1cm} {\bf e$^{\pm}$ jet} \hspace*{2.0cm} (a) \hspace*{4.1cm} {\bf e$^{-}$ - p$^{+}$ jet}
\hspace*{2.5cm} (b)
\includegraphics[scale=0.6,angle=0]{dphasx_vx16paHn_007b}
\includegraphics[scale=0.6,angle=0]{dphasx_vx16raHn_007b}
\hspace*{6.3cm} (c) \hspace*{8.cm} (d)
\includegraphics[scale=0.6,angle=0]{dphasx_vx16paHn_011b}
\includegraphics[scale=0.6,angle=0]{dphasx_vx16raHn_011b}
\hspace*{6.3cm} (e) \hspace*{8.cm} (f)
\includegraphics[scale=0.6,angle=0]{dphasx_vz16paHn_011c}
\includegraphics[scale=0.6,angle=0]{dphasx_vz16raHn_011c}
\caption{Phase-space $x$ - $\gamma v_{\rm x}$ distributions of electrons, for the e$^{\pm}$ (a, c) jet and the e$^{-}$ - p$^{+}$ (b, d) jet,
respectively, at $t =600\, \omega_{\rm pe}^{-1}$ (upper row) and $t =1000\, \omega_{\rm pe}^{-1}$ (middle row).
Panels (e) and (f) show the phase-space $x$ - $\gamma v_{\rm z}$ distributions of electrons, for the e$^{\pm}$ jet and the e$^{-}$ - p$^{+}$
jet, respectively, at $t= 1000\,\omega_{\rm pe}^{-1}$.
In the e$^{\pm}$ jet the jet electrons are slightly decelerated and
later develop oscillations caused by the kKHI and MI. In the e$^{-}$ - p$^{+}$ jet, the jet electrons decelerate in bulk
before the oscillatory pattern is established.
Red colour indicates the jet electrons and blue the ambient electrons.}
\label{dphas}
\end{figure*}
Inside the jets the WI is initially generated. At the jet head at $x/\Delta \approx 650-700$ the instability generates magnetic field filaments aligned with the jet propagation direction. Downstream
along the jet the oblique mode of the WI dominates, visible as striped magnetic fields at $x/\Delta \approx 300-600$ for the e$^{\pm}$ jet (Fig. \ref{ByBxz}a).
The wavelength of the WI is about $4\lambda_{\rm se}$. The kKHI and MI start to grow
simultaneously in the same region of the jet, at the jet-ambient medium boundary and across the jet, respectively. The excitation of the MI and the kKHI is merged with the WI, which results in slanted striped structures of the magnetic field in the e$^{\pm}$ jet.
The wavelength of the kKHI mode is about $6\lambda_{\rm se}$, whilst the wavelength of the MI mode is about $5\lambda_{\rm se}$ along the jet radius
(i.e., perpendicular to the jet axis). Note that the pair jet structure at this stage is similar to the unmagnetized jet structure in the non-linear stage (compare Fig.~\ref{By4}a).
Since the growth rates of MI and kKHI are similar, the excited
modes propagate towards the jet center.
In the non-linear stage, Figure \ref{ByBxz}c shows the grown kKHI and MI. The MI grows stronger and generates two dominant modes along the jet radius, the inner mode having
larger amplitude. Simultaneously, the longitudinal kKHI wave modes modulate the magnetic field along the jet.
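The quoted wavelengths translate directly into the number of modes fitting across the jet. A simple estimate (illustrative only, using the quoted $\lambda_{\rm se}=10\Delta$ and $r_{\rm jet}=100\Delta$):

```python
lambda_se = 10.0   # electron skin depth in grid cells
r_jet = 100.0      # jet radius in grid cells

lambda_WI = 4.0 * lambda_se    # Weibel filament spacing, ~40 cells
lambda_kKHI = 6.0 * lambda_se  # kKHI wavelength along the jet, ~60 cells
lambda_MI = 5.0 * lambda_se    # MI wavelength along the jet radius, ~50 cells

# Number of MI wavelengths fitting along the jet radius: consistent with
# the two dominant radial MI modes seen in the non-linear stage
n_MI = r_jet / lambda_MI
print(n_MI)  # 2.0
```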
For the e$^{-}$ - p$^{+}$ jet, Figure \ref{ByBxz}b shows that the MI grows dominantly around $x/\Delta = 400$; it propagates toward
the jet centre and generates two MI modes in the linear stage. The supplemental movie\footnote{MovieByBxz$\_$e$-$p.mp4 at \href{https://zenodo.org/record/5449257}{doi: 10.5281/zenodo.5449257}} clearly shows the stronger MI mode near the jet centre.
In the non-linear stage, Figure \ref{ByBxz}d shows a very strong toroidal (helical) magnetic field
in the jet at $t=1000 \omega_{\rm pe}^{-1}$ (at approximately $500\lesssim x/\Delta \lesssim 830$),
with the amplitude $B/B_{\rm 0}\approx 56$ (see above).
This magnetic field amplification is attributed to the kKHI and MI which was similarly observed in the unmagnetized case of
\cite{nishikawa2016a} (see also Fig.~\ref{By4}b).
One can note that the outer MI mode merges with the inner mode (Fig. \ref{ByBxz}d).
Figure \ref{ByBxz}d shows that the field structure pinched by the MI is strongly modulated which reflects the growth of kKHI along
the jet, demonstrating that the field collimation is caused by the pinching of the jet electrons towards the center of the jet, as already noted for
the electron density structures~\footnote{Supplementary videos MovieByBxz$\_$e$-$p.mp4, MovieByBxz$\_$e$-$e.mp4 at \href{https://zenodo.org/record/5449257}{doi: 10.5281/zenodo.5449257}}.
The detailed structure of the growing modes of both MI and kKHI is shown in Fig. \ref{bcontours}b below.
\subsection{Electromagnetic fields and particle acceleration}
Figure \ref{bcontours} shows the pattern of the jet and ambient electron acceleration and deceleration in comparison with the structure of the electromagnetic field at $t = 1000\,\omega_{\rm pe}^{-1}$ for the e$^{-}$ - p$^{+}$ jet.
The top panel (Fig.~\ref{bcontours}a) shows the $x$ - $\gamma v_{\rm x}$ distribution of the jet (red) and the ambient (blue) electrons whilst the middle panel shows
the $x-\gamma v_{\rm z}$ distribution of the jet electrons (Fig.~\ref{bcontours}b). The initial $\gamma v_{\rm x} = 15$ and $\gamma v_{\rm z} = 0$ are marked with horizontal dashed black lines in Figures~\ref{bcontours}a and \ref{bcontours}b, respectively.
In the non-linear stage of the simulation, a few electrons are accelerated up to $\gamma v_x\approx 25$ at $x/\Delta \approx 700-750$
and up to around $\gamma v_x\approx 35$
at $x/\Delta\approx 950$.
One can also note the jet electron acceleration to $\gamma v_{\rm z}\approx 10$ that occurs in the jet region at $x/\Delta\approx 600-950$.
Figure \ref{bcontours}c shows a distribution of the $E_x$ component of the electric field
in the $y-z$ plane at
$x/\Delta = 520$, as marked with vertical bars in panels (a) and (b).
Black arrows indicate $B_{y,z}$. Note that the original jet (radius) is fully contained within the plot area. Therefore, the strong electric-field region visible in the figure is located near the centre of the jet ($z/\Delta =381$) and corresponds to the MI mode that is dominant at this $x$-location in the jet (compare Fig.~\ref{ByBxz}). The concentric pattern around the jet centre ($m = 5$ or 6) is excited while the outer and inner modes of the MI merge, and so the mode becomes asymmetric, taking the form of an
upside-down mushroom at $x/\Delta = 520$ and $z/\Delta \lesssim 381$. This electric field accompanying the excited MI along the jet pushes the jet electrons outside of the jet. The expelled jet electrons
are visible in Figure~\ref{Lorentz}d at $480 \lesssim x/\Delta \lesssim 800$ for $z/\Delta < 381$.
Figure~\ref{econtours}a shows
the $x-\gamma v_{\rm x}$ distribution of the jet (red)
and the ambient (blue) electrons for the e$^{\pm}$ jet
in the linear stage
at time $t=600 \omega_{\rm pe}^{-1}$. One can note
the pattern of the electron acceleration and deceleration along the jet, with some electrons reaching $\gamma v_{\rm x} \approx 25$ at $x\approx 560\Delta$.
To understand how the jet electrons are
energized we examine $E_{\rm x}$ in the $x-z$ (Fig.~\ref{econtours}b) and $y -z$ (Fig.~\ref{econtours}c) planes.
Figures \ref{econtours}b and \ref{econtours}c show MI modes that form a symmetric pattern ($m > 5$) around the jet center ($z/\Delta =381$)
and have a complicated structure along the $z$ direction.
Such structures of $E_{\rm x}$ are formed because
the combined modes of MI and kKHI propagate obliquely, and become more vertical
in the non-linear stage (compare $720 < x/\Delta <1050$ in Fig.~\ref{outofface}, bottom panel).
The kKHI modulates the electromagnetic jet structure along the jet propagation direction, which leads to the complex field pattern along the jet and is responsible for the oscillatory motion of the jet electrons (and positrons).
The temporal analysis of phase-velocity distributions ($x$ - $\gamma v_{\rm x}$) reveals that the bunched jet electrons (positrons) propagate with the jet velocity, which indicates that the generated patterns of $E_{\rm x}$ are quasi-steady in time.
The jet electrons that are accelerated most significantly up to $\gamma v_x\approx 25$ are located around $x/\Delta \approx 560$ where the negative quasi-steady
$E_{\rm x}$ is found at the outer layer of the jet (Fig. \ref{econtours}b).
As indicated in Fig. \ref{econtours}c, the patterns of $E_{\rm x}$ show two concentric MI modes ($m > 5$)
in the transverse plane, and the large negative area of $E_{\rm x}$ is responsible for the accelerated jet electrons.
It should be noted that the energetic jet electrons around $x/\Delta\approx 560$ at $t = 600\,\omega_{\rm pe}^{-1}$ (the third peak from the jet head in Fig. \ref{dphas}a) are accelerated further around $x/\Delta\approx 960$, as shown in Fig. \ref{dphas}c
at $t = 1000\,\omega_{\rm pe}^{-1}$.
This indicates that the quasi-steady negative electric fields propagate with the jet and accelerate jet electrons up to $\gamma v_{\rm x} \sim 35$.
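Since the phase-space ordinate is $\gamma v$ in units of $c$ (the dimensionless momentum), the corresponding Lorentz factors follow from $\gamma=\sqrt{1+(\gamma\beta)^2}$, taking $\gamma v_{\rm x}$ as the dominant momentum component. A minimal conversion sketch:

```python
import math

def lorentz_from_gbeta(gbeta):
    """Lorentz factor from the dimensionless momentum gamma*beta
    (lower bound if only one momentum component is used)."""
    return math.sqrt(1.0 + gbeta**2)

g_inj = lorentz_from_gbeta(15.0)   # injected jet electrons, ~15.03
g_acc = lorentz_from_gbeta(35.0)   # most accelerated electrons, ~35.01
print(f"energy gain factor ~ {g_acc / g_inj:.2f}")  # ~2.33
```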
Figure \ref{outofface} shows the $x$ - $\gamma v_{\rm x}$ distribution of jet electrons (red), jet positrons (green) and
ambient (blue) electrons, for a pair jet at $t = 1000\,\omega_{\rm pe}^{-1}$.
The color-map shows the $E_{\rm x}$ in the $x$ - $z$ plane at $y/\Delta = 381$ and the arrows indicate the $B_{\rm x,z}$ field.
We understand that the slight dislocations of the jet electrons and positrons are a response to the magnetic field structures, as
the corresponding variations in $v_{\rm z}$ suggest (cf. Fig.~\ref{dphas}e), and through $\dot{\mathbf{E}}\propto \mathbf{j}$ they generate
the stripes of positive and negative $E_{\rm x}$ seen in the color-map of the same figure.
It should be noted that Figure \ref{By4}c shows a rather weak $B_{\rm y}$ ($700 < x/\Delta <1050$); however, Figure \ref{outofface} (bottom
panel) shows strong striped patterns of $E_{\rm x}$, which are generated by the
out-of-phase distributions of jet electrons and positrons, as shown
in Fig. \ref{outofface} (top panel).
Figure \ref{dphas} shows the phase-space $x-\gamma v_{\rm x}$ distributions for the e$^{\pm}$ (a, c)
jet and e$^{-}$ - p$^{+}$ (b, d) jet, respectively, for $t =600\, \omega_{\rm pe}^{-1}$ (upper row) and $t =1000\, \omega_{\rm pe}^{-1}$ (middle row).
Red colour indicates the jet electrons and the blue one the ambient electrons.
At first glance, the phase-space distributions indicate electron acceleration at
different locations. Initially the jet electrons have a Lorentz factor of $\gamma \simeq 15$.
Figure \ref{dphas}a shows that the jet electrons (red) get accelerated and decelerated by the excited instabilities as shown in Fig. \ref{econtours}. Figure \ref{dphas}c shows three different stages for the e$^{\pm}$ jet
at $t =1000\, \omega_{\rm pe}^{-1}$:
the early linear stage at $400 <x/\Delta <700$, the later linear stage at $700 <x/\Delta <900$
where jet electrons are strongly accelerated and decelerated by
the MI and kKHI, and the non-linear stage at $900 <x/\Delta <1050$ where jet electrons are further accelerated.
At the non-linear stage some jet electrons around $x/\Delta \approx 950$, reach maximum energies of $\gamma \approx 30$.
In the e$^{-}$ - p$^{+}$ jet (panel d), the maximum acceleration
occurs similarly at around $x/\Delta\approx 950$.
The maximum electron energy is slightly higher than for the e$^{\pm}$ jet, which can also be observed in the electron energy distributions (see Fig. \ref{veldis2}).
We note that the ambient electrons (blue) in the e$^{\pm}$ jet are accelerated within seven bunches along the jet, that is
in the range $700\lesssim x/\Delta \lesssim 950$, whilst the ambient electrons around the e$^{-}$ - p$^{+}$ jet (panel d) are accelerated
to approximately $\gamma = 10$ in less prominent bunches and up to $x/\Delta\approx 950$.
Figure \ref{dphas}c illustrates correlations of the energy gains and losses between the ambient and the jet
electrons. Since the jet electrons propagate with the saturated instabilities, the peaks of the accelerated
jet electrons are rather sharp.
However, the ambient electrons do not move with the excited waves, and thus the peaks of the accelerated
ambient electrons are mostly slanted in the jet propagation direction. It should be noted that the acceleration of
the ambient electrons stops at approximately $x/\Delta\approx 950$, which is explained by the fact that the
electromagnetic fields dissipate around $x/\Delta\approx 900$.
\begin{figure*}
\hspace*{3.1cm} {\bf e$^{\pm}$ jet} \hspace*{2.7cm} (a) \hspace*{3.8cm} {\bf e$^{-}$ - p$^{+}$ jet}
\hspace*{2.2cm} (b)
\includegraphics[scale=0.35,angle=0]{vBV16LaSpaHn_010h}
\includegraphics[scale=0.35,angle=0]{vBV16LaSraHn_010h}
\hspace*{6.6cm} (c) \hspace*{7.3cm} (d)
\includegraphics[scale=0.35,angle=0]{vBV16LaSpaHn_011h}
\includegraphics[scale=0.35,angle=0]{vBV16LaSraHn_011h}
\caption{The magnetic field vectors within the cuboid
($770 < x/\Delta < 1120; 256<y/\Delta, z/\Delta < 506$) at $t =900\,
\omega_{\rm pe}^{-1}$ (first row) and $t =1000\,
\omega_{\rm pe}^{-1}$ (second row) for e$^{\pm}$ (panel a, c) and e$^{-}$ - p$^{+}$ (panel b, d). The center
of the jet is at $y/\Delta =z/\Delta=381$. For the magnetic field inside the jet,
the plots show half of the regions clipped at the center of jet in the $x$ - $z$ plane
($381 < y/\Delta < 506$). The red dashed squares in Fig. \ref{By4}c and \ref{By4}d show the volume of these plots.
The maximum and minimum of the legend of the magnetic field strength are 2.8 and 0.2.}
\label{3DBV}
\end{figure*}
At $t=1000\ \omega_{\rm pe}^{-1}$ the acceleration region of the ambient electrons around the electron-positron jet (Fig. \ref{dphas}c) approximately coincides
with the jet region at $750 < x/\Delta < 950$, in which strong magnetic turbulence is observed after the dissipation; see Figs. \ref{3DBV}, \ref{Btot}, and \ref{jxByz}.
As the non-linear saturation of the instabilities ends, the magnetic fields dissipate and accelerate the jet electrons (but not the ambient electrons).
Figures \ref{dphas}e and \ref{dphas}f show $x-\gamma v_{\rm z}$ as a function of $x/\Delta$ for the jet electrons in e$^{\pm}$
and e$^{-}$ - p$^{+}$ jets.
In the e$^{\pm}$ jet, the jet electrons are gradually accelerated
at a later stage as oscillations develop, caused by the kKHI and MI as shown in Fig. \ref{dphas}e.
In contrast, Fig. \ref{dphas}f shows that in the e$^{-}$ - p$^{+}$ jet the electrons are accelerated perpendicularly when decelerated along the $x$-direction due to the MI (or kKHI) (Fig. \ref{dphas}d), which clearly indicates magnetic deflection, as also shown in Fig. \ref{bcontours}.
The ambient electrons are strongly accelerated as they are swept
up into the relativistic jet plasma. Fig. \ref{3DBV} indicates that a similar form of electron acceleration occurs during the dissipation of magnetic fields around $x/\Delta =900$ for the e$^{\pm}$ jet species.
We note that the acceleration of electrons between $900 <x/\Delta < 1050$ is not due to the electric field of the generated instabilities, but is correlated with the dissipation of the magnetic fields. The dissipation is caused by the termination of the non-linear saturation, as described by \cite{blandford2017}.
\subsection{Three-dimensional magnetic field evolution}
Figure \ref{3DBV} shows the 3D evolution of the magnetic field near the jet head, in the region indicated by the red dashed squares in Figs. \ref{By4}c and \ref{By4}d. The figure shows the magnetic-field vectors within a cuboid ($770 < x/\Delta < 1120;\, 256<y/\Delta, z/\Delta < 506$) at $t =900\, \omega_{\rm pe}^{-1}$ and $t =1000\, \omega_{\rm pe}^{-1}$ for the e$^{\pm}$ (a, c) and the e$^{-}$ - p$^{+}$ (b, d) jets, with their centre at $y/\Delta =z/\Delta=381$. The plots show a half-section clipped at the jet centre
in the $x-z$ plane ($381 < y/\Delta < 506$) in order to view the interiors of the jets. For the e$^{\pm}$ jet, the jet head is located at $x/\Delta = 900$ at time $t=900\ \omega_{pe}^{-1}$ (panel a), and it moves to $x/\Delta = 1000$ by $t=1000\ \omega_{pe}^{-1}$ (panel c). Comparing panels (a) and (c), one observes that the magnetic fields between $800 < x/\Delta < 1000$ first get twisted and then dissipate.
Around $x/\Delta\approx 850$, at both time-steps, the magnetic fields generated by the outer MI mode have dissipated. The magnetic fields in the inner MI mode weaken, but they reappear beyond $x/\Delta = 1050$, as shown in Fig. \ref{3DBV}c.
\begin{figure*}
\hspace*{3.2cm} (a) \hspace*{7.8cm} (b)
\includegraphics[width=0.49\linewidth]{dBtotByz11MFpaHn_011_800}
\includegraphics[width=0.49\linewidth]{dBtotByz11MFpaHn_011_900c}
\hspace*{3.2cm} (c) \hspace*{8.8cm} (d)
\includegraphics[width=0.49\linewidth]{dBtotByz11MFpaHn_011_950}
\includegraphics[width=0.49\linewidth]{dBtotByz11MFpaHn_011_1050}
\caption{The total magnetic field strength of the e$^{\pm}$ jet at $x/\Delta=$800 (a), 900 (b), 950 (c) and 1050 (d), for $281<y/\Delta, z/\Delta < 481$ at time $t =1000\, \omega_{\rm pe}^{-1}$. The arrows indicate
the magnetic field $(B_{\rm y},B_{\rm z})$; red circles indicate possible reconnection sites.}
\label{Btot}
\end{figure*}
In contrast, Figure \ref{3DBV}b indicates that the e$^{-}$ - p$^{+}$ jet shows a weakened magnetic field
at the jet centre surrounded by the swirling magnetic fields.
For this jet the front edge of the toroidal magnetic field at the centre is peeled-off during the jet propagation, as seen in Fig. \ref{3DBV}d.
This indicates that the toroidal magnetic field might have dissipated at the non-linear stage and as a consequence moved towards the jet boundary where six flux ropes are formed,
as discussed in \cite{blandford2017}.
Figure \ref{Btot} shows the total magnetic field strength of the e$^{\pm}$ jet in the $y-z$ plane at $x/\Delta=800, 900, 950, 1050$ and time
$t=1000\omega_{pe}^{-1}$, which allows us to highlight possible reconnection sites indicated by the red circles.
The plots focus on the area, in which the jet is injected, $281<y/\Delta, z/\Delta < 481$.
These magnetic field plots are 2D slices of the magnetic field shown in Fig. \ref{3DBV}c
for $y/\Delta > 381$.
The arrows indicate the field components $B_{\rm y}$ and $ B_{\rm z}$. Figure \ref{Btot}d shows the structure beyond the jet radius
(the spatial expansion of the e$^{\pm}$ jet is shown in Fig. \ref{Lorentz}c).
As a necessary condition, the reconnection location should coincide with the regions of minimum magnetic field strength, indicated by the dark blue colour in Fig. \ref{Btot}. In panels b and c,
we mark examples of possible reconnection sites with red circles.
\begin{figure*}
\hspace*{3.0cm} (a) \hspace*{5.0cm} (b) \hspace*{4.8cm} (c)
\includegraphics[scale=0.32]{djxByz16MpaHn_011_600}
\includegraphics[scale=0.32]{djxByz16MpaHn_011_700}
\includegraphics[scale=0.32]{djxByz16MpaHn_011_800}
\hspace*{3.0cm} (d) \hspace*{5.0cm} (e) \hspace*{4.8cm} (f)
\includegraphics[scale=0.32]{djxByz16MpaHn_011_900}
\includegraphics[scale=0.32]{djxByz16MpaHn_011_1000}
\includegraphics[scale=0.32]{djxByz16MpaHn_011_1050}
\caption{The $x$-component $J_{\rm x}$ of the current density (color coded) and the magnetic field components $B_{\rm y,z}$ (black arrows)
for the e$^{\pm}$ jet at $x/\Delta=600, 700, 830, 900, 1000, {\rm and}\, 1100$ at $t= 1000\, \omega_{\rm pe}^{-1}$.}
\label{jxByz}
\end{figure*}
Figure \ref{Btot}a shows that at $x/\Delta=800$ the inner MI mode becomes dominant due to the collimation of jet electrons, which generate the clockwise circular magnetic field. At the same time,
the outer MI mode starts to split in the jet,
resulting in a number of magnetic structures formed from the residues of the MI modes. These structures are found in the areas of weak magnetic fields (blue) where swirling and/or
incoming-outgoing magnetic fields exist. A possible reconnection site is marked by a red circle where the magnetic flux changes from being oriented inward (on the left) to being oriented outward (on the right).
This is a typical morphology of the magnetic reconnection sites.
These magnetic structures start to get surrounded by fields of opposite polarity as shown in Fig. \ref{jxByz}c below. The magnetic fields are produced by the jet current modulated by
the excited kKHI and MI. At this time the outer MI mode is dissipated as shown in Fig. \ref{3DBV}c, where the magnetic field is weak near the jet boundary.
At $x/\Delta=900$ (panel b) we observe a strongly turbulent magnetic field. The toroidal field structure gets distorted and dispersed, as also seen in
the movies\footnote{the total magnetic field in the $y-z$ plane at $289 < y/\Delta,\, z/\Delta <480$; for e$^{\pm}$ and e$^{-}$ - p$^{+}$ jets respectively,
see supplementary videos MovieBtot$\_$e$-$e.mp4, MovieBtot$\_$e$-$p.mp4, at \href{https://doi.org/10.5281/zenodo.5449257}{doi:10.5281/zenodo.5449257}}
provided as supplementary material for the two jet compositions (electron-positron and electron-proton), showing the spatial evolution of the magnetic field and their differences.
Subsequently, panel (c) shows that the magnetic field starts to get reorganized and forms multiple magnetic flux ropes.
Here, three red circles indicate possible reconnection sites where the magnetic fields are oriented in the opposite directions.
These processes reflect the growing non-linear stage of the MI and kKHI where the instabilities saturate and break.
The supplemental movies for both jet compositions\footnote{see MovieBtot$\_$e$-$e.mp4, MovieBtot$\_$e$-$p.mp4 at \href{https://doi.org/10.5281/zenodo.5449257}{doi: 10.5281/zenodo.5449257}}
further show that the magnetic structures interact with each other and with the surrounding environment, generating the
right conditions for magnetic reconnection.
Examples of possible reconnection sites can be found, e.g., at $(y/\Delta, z/\Delta)=(330, 380)$ in panel (c), among several other locations
where the total magnetic field approaches a null point and the surrounding magnetic field reverses direction.
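The criterion used above to flag candidate reconnection sites (a near-null in-plane field strength together with a reversal of the field direction) can be written as a short grid diagnostic. This is an illustrative sketch, not the analysis actually applied to the simulation data; the array layout and the weak-field threshold are assumptions.

```python
import numpy as np

def candidate_null_points(by, bz, frac=0.05):
    """Flag cells where the in-plane field magnitude drops below a small
    fraction of its maximum AND both components reverse sign across the
    cell: a rough 2D proxy for possible reconnection sites (illustrative
    threshold; not the criterion used for the published figures)."""
    by = np.asarray(by, dtype=float)
    bz = np.asarray(bz, dtype=float)
    bmag = np.hypot(by, bz)
    weak = bmag < frac * bmag.max()
    # sign reversal of each component along either grid direction
    rev_by = np.zeros_like(weak)
    rev_bz = np.zeros_like(weak)
    rev_by[1:-1, 1:-1] = ((by[:-2, 1:-1] * by[2:, 1:-1] < 0) |
                          (by[1:-1, :-2] * by[1:-1, 2:] < 0))
    rev_bz[1:-1, 1:-1] = ((bz[:-2, 1:-1] * bz[2:, 1:-1] < 0) |
                          (bz[1:-1, :-2] * bz[1:-1, 2:] < 0))
    return np.argwhere(weak & rev_by & rev_bz)
```

For an idealized X-point field ($B_y \propto z$, $B_z \propto y$) the routine recovers the single null at the centre of the grid.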
\subsection{Non-linear stage, instabilities growth and acceleration}
For the e$^{\pm}$ jet, Figure \ref{jxByz} shows six 2D isocontour plots
for the $x$-component $J_x$ of the current density in the $y-z$ plane and the magnetic field components $B_{\rm y,z}$
(shown with black arrows) at $x/\Delta=600, 700, 830, 900, 1000, {\rm and}\, 1100$
(panels a, b, c, d, e, f), at $t=1000\, \omega_{\rm pe}^{-1}$.
It should be noted that Figs. \ref{jxByz}a and \ref{jxByz}b show two modes of the MI along the jet radius, indicated by
the two arrows labelled MI along the jet radius ($z$) in the $x$ - $z$ plane of Fig. \ref{ByBxz}c. In the $y-z$ plane these modes
are recognized as two concentric orange rings with a few (six) bunched currents along
the poloidal direction (the jet centre is located at the middle of each panel).
Figure \ref{jxByz}c shows the non-linear stage where the MI modes are dissipated, whilst at $x/\Delta=900$ the central MI mode reappears. Near the
jet head, at $x/\Delta=1050$, Fig. \ref{jxByz}e shows the weak MI, consistent with
Fig. \ref{3DBV}c. Figure \ref{jxByz} also demonstrates that the change of polarity is
more prominent at approximately $x/\Delta=900$ (see also Fig. \ref{Btot}b), where the distortion and filamentary evolution are more evident, with
a slight weakening of the magnetic field and a consequent reorganization at the non-linear stage (transition from panel b to panel d).
To investigate the transition from the late linear stage to the non-linear stage, we plot in Figure \ref{ReconS}
the 3D magnetic field vectors within a cuboid of $600 < x/\Delta < 950$, $256<y/\Delta, z/\Delta < 506$
(indicated by the blue dashed square in Fig. \ref{By4}c and \ref{By4}d)
at t =$1000\, \omega_{\rm pe}^{-1}$ for e$^{\pm}$ and e$^{-}$ - p$^{+}$ jets.
To expose the magnetic field inside the jet, the plots show only half of the region, clipped at the jet centre in the $x$ - $z$ plane
($381 < y/\Delta < 506$). Note that these plots overlap with Fig. \ref{3DBV} at $x/\Delta = 770$.
Beyond $x/\Delta=750$, the outer MI mode of the e$^{\pm}$ jet is disrupted, and the non-linear saturation of the kKHI and MI
disorders the magnetic field up to $x/\Delta \approx 950$.
The magnetic field dissipates first near the jet boundary, around $x/\Delta = 780$, and then near the centre
of the jet, around $x/\Delta = 820$.
\begin{figure*}
\hspace*{1.7cm} {\bf e$^{\pm}$ jet} \hspace*{3.3cm} (a) \hspace*{4.1cm} {\bf e$^{-}$ - p$^{+}$ jet}
\hspace*{2.5cm} (b)
\includegraphics[scale=0.38,angle=0]{vJxBl16NpaHn_011qs}
\includegraphics[scale=0.38,angle=0]{vJxBl16NraHn_011qs}
\caption{The $x$-component of the current density, $J_{\rm x}$, within a cuboid of $600 < x/\Delta < 950$, $256<y/\Delta,
z/\Delta < 506$, at $t =1000\, \omega_{\rm pe}^{-1}$ for an e$^{\pm}$ (a) and an e$^{-}$ - p$^{+}$ (b) jet.
The centre of the jet is at $y/\Delta =z/\Delta=381$.
The thin lines show the magnetic field lines in the jet. The thin white lines mark the boundaries of the regions
clipped at the centre of the jet in the $x$ - $z$ plane ($256 < y/\Delta < 381$) and the $y-z$ plane ($381 < z/\Delta
< 506$). The thin region ($600 <x/\Delta< 620$)
is shown at the top left corner. The blue dashed squares in Figs. \ref{By4}c and \ref{By4}d show the volume of these plots. There is
an overlap with Figs. \ref{3DBV}c and \ref{3DBV}d, covering the range $770 < x/\Delta < 1120$.}
\label{ReconS}
\end{figure*}
\begin{figure*}
\hspace*{3.1cm} {\bf e$^{\pm}$ jet} \hspace*{2.0cm} (a) \hspace*{3.8cm} {\bf e$^{-}$- p$^{+}$ jet}
\hspace*{2.5cm} (b)
\includegraphics[scale=0.60,angle=0]{dvelparHEG16LpaHn_011hC}
\includegraphics[scale=0.60,angle=0]{dvelparHEG16LraHn_011hC}
\caption{Particle energy distributions of the jet (red) and
ambient (blue) electrons in and around
the e$^{\pm}$ jet (a) and e$^{-}$ - p$^{+}$ jet (b) in the two regions $x/\Delta < 600$
(dashed lines) and $x/\Delta > 600$ (solid lines) at $t = 1000\, \omega_{\rm pe}^{-1}$.}
\label{veldis2}
\end{figure*}
Figure \ref{ReconS}a shows the growing of kKHI and MI (and of WI), as well as the generation of
two modes of MIs (indicated by two lines in Fig. \ref{ByBxz}c), along the jet radius ($z$) up to $x/\Delta = 750$.
Note that even without an initial magnetic field the jets propagate collimated, although the instabilities
push jet particles beyond the original jet boundary, in particular at the non-linear
stage, as shown in Fig. \ref{Lorentz}. Moreover, in Fig. \ref{By4} all jets are collimated, most probably due to the MI. With a toroidal magnetic field, on the other hand, the MI grows
stronger, and the jet electrons are therefore more strongly collimated.
Figure \ref{ReconS}a shows three bunched twisted magnetic fields at $x/\Delta = 680, 740, {\rm and}\, 790$
which show the non-linear saturation of the grown instabilities, as also shown in Fig. \ref{ByBxz}c.
These correspond to the groups of accelerated and decelerated bunches of electrons in the calculated phase-space
distributions shown in, e.g., Fig. \ref{dphas}c.
It should be noted that even when the magnetic fields dissipate around $x/\Delta= 900$, the electrons accelerate further up to $x/\Delta= 1050$, as shown in Fig. \ref{dphas}c.
At around $x/\Delta=900$ the swirled structure of the jet is distorted and weakened, but it is also reorganized at the centre of the jet. This indicates the existence of a late non-linear
stage of the MI and kKHI. The MI mode near the jet boundary dissipates around $x/\Delta = 750$ first, but the inner mode (MI and kKHI) stays longer and dissipates
at approximately $x/\Delta = 1000$.
It is important to note that a comparison of the previous panels of Fig. \ref{3DBV} with Figs. \ref{ByBxz} and \ref{Btot} shows that the 3D perspectives
reveal distinctive differences in the locations of the magnetic flux ropes and the field vector directions relative to the information
extracted from the 2D projections onto the $y$ - $z$ and $x$ - $z$ planes.
This indicates that the viewing-angle sensitivity should be taken into account in future studies combined with
observational jet polarization maps, as we discuss in the following section. In future work with larger jet systems, we aim to investigate a comparable case
study with astronomical source observables.
Figure \ref{veldis2} shows the energy distribution of the jet (red) and ambient (blue) electrons in and around the e$^{\pm}$ and e$^{-}$ - p$^{+}$ jet in the two
regions $x/\Delta < 600$ and $x/\Delta > 600$. It shows that electron acceleration is most significant in the non-linear stage ($x/\Delta > 600$). For the pair jet
the ambient electrons in the linear stage are not accelerated as significantly as they are around the e$^{-}$ - p$^{+}$ jet.
Comparing Figs. \ref{veldis2} and \ref{dphas} the electrons are further
accelerated by the dissipation of magnetic fields in the non-linear stage,
as it is also seen in the kinetic simulations of driven magnetized turbulence \citep[e.g.,][]{Zhdankin18}.
Note that in the latter simulation studies, the turbulent magnetic fluctuations were
externally forced into the simulation system, and therefore were not self-consistent.
\cite{Comisso18} have investigated particle acceleration at reconnecting current sheets due to stochastic interactions with turbulent fluctuations (plasmoids and vortices).
In our simulations plasmoids (vortices) are generated at the non-linear stage, and these may accelerate jet electrons in a similar way. In contrast to such forced-turbulence setups, however, the turbulent magnetic field in
our simulations (where we see multiple magnetic flux ropes, e.g. in Figs. \ref{ByBxz}-\ref{3DBV}) is created self-consistently
in the relativistic jets, through the dissipation of the toroidal magnetic field. Previous studies \citep[][]{kowal11,kowal12,lazarian2016} discuss the
particle acceleration process in turbulent magnetic reconnection.
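The electron energy distributions discussed above are histograms of particle Lorentz factors. A minimal sketch of how such a spectrum is constructed, assuming the common PIC convention that the stored momenta are four-velocities $u=\gamma v/c$ (an assumption about the code's output, not a statement about this particular code):

```python
import numpy as np

def energy_spectrum(ux, uy, uz, bins=50):
    """Histogram of particle Lorentz factors, gamma = sqrt(1 + u^2),
    in logarithmic bins -- the quantity plotted for jet and ambient
    electrons in the energy-distribution figures."""
    gamma = np.sqrt(1.0 + ux**2 + uy**2 + uz**2)
    edges = np.logspace(0.0, np.log10(gamma.max()) + 0.01, bins + 1)
    counts, _ = np.histogram(gamma, bins=edges)
    centers = np.sqrt(edges[:-1] * edges[1:])  # geometric bin centres
    return centers, counts
```

Separate calls for jet and ambient particles, split by the $x/\Delta=600$ boundary, would reproduce the four curves per panel.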
\section{Summary and discussion}
We have used 3D PIC simulations to study the spatio-temporal evolution of magnetized relativistic electron-positron and electron-proton jets, including their kinetic instabilities and the associated particle acceleration.
We examined the excited kinetic instabilities and the associated
magnetic fields at the non-linear stage, as they occur in astrophysical relativistic jets, where the dissipation of the magnetic
fields generates electric fields that are sufficiently strong to further accelerate particles to Lorentz factors up to 35.
In this work we used a new jet injection scheme. We injected both e$^{\pm}$
and e$^{-}$ - p$^{+}$ jets with a co-moving toroidal magnetic field and a top-hat jet density profile. The current was self-consistently carried by the jet
particles, $\mathbf{J}=\nabla \times \mathbf{B}$. In order to sustain the current
in the jet, a toroidal magnetic field was applied at the jet orifice and a motional electric field was established. Both jets were initially weakly magnetized while the
ambient medium remained unmagnetized. We ran the simulations sufficiently long to examine the non-linear effects of the jet evolution.
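The relation $\mathbf{J}=\nabla \times \mathbf{B}$ used for the jet injection can be checked numerically on a 2D slice. The sketch below assumes normalized units in which the $\mu_0$ (or $4\pi/c$) factor is absorbed, which is an assumption about the unit system, not a statement about this particular code:

```python
import numpy as np

def axial_current(by, bz, y, z):
    """J_x = dB_z/dy - dB_y/dz from Ampere's law J = curl B in
    normalized units (mu_0 = 1 assumed). Arrays are indexed [y, z]."""
    dBz_dy = np.gradient(bz, y, axis=0)
    dBy_dz = np.gradient(by, z, axis=1)
    return dBz_dy - dBy_dz
```

For a toroidal field $B_\phi = b_0 r$ around the jet axis, i.e. $B_y=-b_0 z$ and $B_z=b_0 y$, this returns the uniform axial current $J_x = 2 b_0$.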
We found that different jet compositions exhibit different instability modes.
Three instabilities (WI, MI, kKHI) grow on similar timescales, with some growing stronger depending on the plasma conditions. In particular, the MI modes
differ between jet compositions, and the excitation of the MI and kKHI is weaker in the unmagnetized case, whereas jets with toroidal magnetic fields show stronger
MI and kKHI growth. We conclude that even a relatively weak initial magnetic field can change the development of the kinetic plasma instabilities, which in turn significantly amplify
the magnetic field, here by factors reaching 50.
Figure \ref{bcontours} shows that quasi-steady electric fields accelerate and decelerate jet electrons
at the linear stage and that the consequent magnetic field dissipation accelerates jet electrons further at the non-linear stage.
At the non-linear stage we also observed that jet electrons propagate outside the
jet and that the jet boundaries seem to get distorted by the kKHI. Near the jet head of
the e$^{\pm}$ jet, as shown in Figs. \ref{jxByz}d and \ref{jxByz}e, we witness a re-arrangement of
the magnetic field and a general weakening of the currents with counter-clockwise direction, as seen from the jet front. We found that at the initial stages
a strong toroidal magnetic field is maintained along the jet by a strong $-J_{\rm x}$ current carried by the collimated jet (ambient) electrons.
In the non-linear stage, the magnetic field generated by the different instabilities dissipates and reorganizes into a new topology.
The three-dimensional magnetic field topology indicates possible reconnection sites, and particles
are significantly accelerated in the non-linear stage, up to a Lorentz factor of 35, by the dissipation of the magnetic field (and/or reconnection).
We have identified several potential sites of magnetic reconnection in our simulations;
however, an unambiguous determination in 3D is not trivial. The magnetic field structure of a
reconnection site in 2D simulations consists of $X$ and $O$ shapes which can be recognized
easily. In this work we therefore rely on changes of the magnetic field direction and the positions of null (very weak) magnetic fields
in the 2D projections. The complex structures of 3D reconnection have been investigated
in e.g., \cite{parnell10, lalescu15, lazarian2016, bentkamp19, lazarian2020, borissov20}. Determining the reconnection
locations analytically would require investigating the eigenvalues of the Jacobian matrix of the magnetic field, which is beyond the scope of this work;
for more details, see \cite{Cai07}. The present work is an important first step in a series of future follow-up simulations with larger jet radii
that can cover more modes of kinetic instabilities. Therefore, although our results give
important first insights, we expect that larger simulations will shed further light onto understanding
the evolution of different jet species, the dissipation of swirling magnetic fields,
the consequent development of electric and magnetic field irregularities,
organized magnetic patterns, reconnection events, and particle acceleration,
which are critical to observational astronomy.
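The eigenvalue analysis mentioned above can be sketched as follows: at a magnetic null, the local field is characterized by the Jacobian $\partial B_i/\partial x_j$, whose eigenvalues sum to zero because $\nabla\cdot\mathbf{B}=0$, and a complex-conjugate pair indicates a spiral null. This is only an illustration of the idea, not an implementation of the full method of \cite{Cai07}; the tolerance handling is an assumption.

```python
import numpy as np

def null_character(jac, tol=1e-9):
    """Eigenvalues of the field Jacobian dB_i/dx_j at a null point.
    div B = 0 forces the eigenvalues to sum to zero; a complex pair
    indicates a spiral null. Returns (eigenvalues, is_spiral)."""
    jac = np.asarray(jac, dtype=float)
    ev = np.linalg.eigvals(jac)
    scale = max(1.0, float(np.abs(ev).max()))
    if abs(ev.sum().real) > tol * scale or abs(ev.sum().imag) > tol * scale:
        raise ValueError("Jacobian is not trace-free: div B != 0")
    is_spiral = bool(np.any(np.abs(ev.imag) > tol * scale))
    return ev, is_spiral
```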
It is known that the currents and magnetic structures are very different for e$^{-}$ - p$^{+}$ and e$^{\pm}$ jets.
These differences arise from the different mobilities of protons and positrons,
becoming manifest in the polarization signatures of radiation.
It follows that the resulting magnetic field structures are different enough
to yield distinctive polarizations in VLBI (Very Long Baseline Interferometry) observations of AGN jets at the highest angular resolutions \cite[e.g.,][]{gomez2016}.
For example, toroidal magnetic fields (like in our
simulations) inside and outside of an e$^{-}$ - p$^{+}$ jet contribute to circular polarization.
This may help us to distinguish an e$^{-}$ - p$^{+}$ jet from an e$^{\pm}$ jet,
at least partially, and also to establish if and when a possible dissipation of the
swirling magnetic fields occurs, in accordance with the present and recent studies \citep[e.g.][]{nishikawa2020}.
So far, however, VLBI observations of AGN jets have not been sufficient to distinguish signatures of
different jet plasma compositions.
Circular polarization (CP) has been detected so far in a limited number
of AGN jets \cite[e.g.,][]{wardle98, homan99, homan01, gabuzda08, thum18},
in all cases at very low percentage levels that do not exceed $1-2\%$, in contrast to the relatively high values of linear polarization that usually can reach $30-40\%$.
CP in AGN jets can be produced either as an intrinsic component of the synchrotron radiation
(for electron-proton jets), or through Faraday conversion of linear polarization \citep{jonesodell77},
the latter appearing to be the prevailing mechanism \cite[e.g.,][]{wardle98, gabuzda08}.
Constraining the jet composition requires not only extremely accurate CP measurements, but also a robust determination of the three-dimensional
magnetic field structure, in order to quantify the level of Faraday conversion from linear to circular polarization \citep{wardle03},
as well as stringent constraints on the energy distribution of the radiating particles \citep{wardle98}.
Considerable efforts are currently underway, both within the MOJAVE and BU Blazar programs
\citep[e.g.,][]{jorstad05, pushkarev12}, to obtain additional
VLBI images of CP, which will help advance our knowledge. In addition, the participation of phased ALMA in VLBI arrays such
as the Event Horizon Telescope and the GMVA \cite[e.g.,][]{EHT19, issaoun21, goddi}
provides a significant boost in sensitivity that will allow a better characterization of the CP in AGN jets and, most importantly,
of the CP spectra, which may hold the key to determining the jet composition.
We propose that several of the complicated magnetic field structures we report in this work could be observed and verified in the near future
with polarimetric VLBI observations at extremely high angular resolutions, if they can resolve the transverse structure of the jet, as with
space VLBI \cite[e.g.,][]{gomez2016,RA3C84} and with the Event Horizon Telescope \cite[e.g.,][]{EHT-I,EHT-3C279}. For example,
flares may be found to be associated with reconnection events, in which a significant fraction of the magnetic energy is dissipated.
Particularly, this may happen when an accelerated particle beam is directed along the
line of sight \citep[see][]{komissarov2012,mcKinney2012,sironi2015}. Our study might have
very important implications in this context as, for example, prompt GRB
emission could be due to reconnection events. One of our next steps in future work will be to investigate this phenomenon by combining
observations with the temporal and spectral properties of simulation studies.
\section*{Appendix}
Supplementary videos for this publication can be obtained from
\href{https://doi.org/10.5281/zenodo.5449257}{doi: 10.5281/zenodo.5449257}
\section*{Acknowledgments}
This work was supported by the NASA-NNX12AH06G, NNX13AP-21G, and NNX13AP14G grants.
Recent work was also supported by NASA through Chandra Award Number GO7-18118X
(PI: Ming Sun at UAH) issued by the Chandra X-ray Center, which is operated by the SAO for
and on behalf of the NASA under contract NAS8-03060.
The work of J.N. has been supported by Narodowe Centrum Nauki through research project
2019/33/B/ST9/02569. Y.M. is supported by the ERC Synergy Grant ``BlackHoleCam: Imaging the
Event Horizon of Black Holes'' (Grant No. 610058).
The work of I.D. has been supported by the NUCLEU project.
Simulations were performed using Pleiades and Endeavor facilities at NASA Advanced Supercomputing
(NAS: s2004), using Comet at The San Diego Supercomputer Center (SDSC), and Bridges at the
Pittsburgh Supercomputing Center,
which are supported by the NSF. JLG acknowledges the support of the Spanish Ministerio de Econom\'{\i}a y
Competitividad (grants AYA2016-80889-P, PID2019-108995GB-C21), the Consejer\'{\i}a de Econom\'{\i}a,
Conocimiento, Empresas y Universidad of the Junta de Andaluc\'{\i}a (grant P18-FR-1769),
the Consejo Superior de Investigaciones Cient\'{\i}ficas (grant 2019AEP112), and the State Agency
for Research of the Spanish MCIU through the Center of Excellence Severo Ochoa award for the Instituto de
Astrof\'{\i}sica de Andaluc\'{\i}a (SEV-2017-0709).
\section{Introduction}
Arguably the strongest challenges presently faced by the otherwise
highly successful cold dark matter (CDM) paradigm for structure
formation in the Universe occur on `small scales', i.e.~within
individual dark matter halos. One of the most prominent of these
problems is the long standing and controversially debated question
whether rotation curves of low surface brightness galaxies can be
accommodated in the CDM theory
\citep[e.g.][]{McGaugh1998,Hayashi2003,Navarro2004}. Another important
issue is concerned with the amount of dark matter substructure
theoretically predicted by CDM, and whether this is in conflict with
the relative paucity of satellites observed in the Milky Way and the
Local Group \citep[e.g.][]{Moore1999,Klypin1999,Stoehr2002}.
Interestingly, both of these problems have only been recognized once
N-body simulations reached high enough resolution for detailed studies
of the phase-space structure of dark matter halos, allowing them to
probe the inner dark matter cusp to sub-kpc scales, and to establish
that dark halos are filled with a host of self-bound dark matter
satellites, instead of being the smooth halos envisioned based on
earlier low-resolution work. It has now become clear that the
existence of substructure is a generic prediction of hierarchical
structure formation in CDM models, where the assembly of collapsed
mass proceeds via a merger hierarchy that allows some of the infalling
dense lumps to survive as dynamically distinct substructure inside
virialized halos until late times.
On the scales of ordinary galaxies like the Milky Way
($\sim\,10^{12}\,{\rm M}_{\odot}$), the number of bound satellites
predicted by the simulations is much larger than the dozen or so
satellites actually detected around our Galaxy. A comparison of the observed
velocity distribution function of these satellites with the predicted
one \citep{Moore1999,Klypin1999} has been interpreted as
evidence for a serious surfeit of dark matter satellites, and even
considered to mark a ``crisis'' of the $\Lambda$CDM model. However,
as \citet{Stoehr2002} have shown, there is significant uncertainty in
the conversion of stellar line-of-sight velocity dispersion into peak
circular velocities of dark matter substructures. Using high-resolution
simulations of a Milky Way sized halo, \citeauthor{Stoehr2002} in fact
were able to show that all the known satellites of the Milky Way can
be comfortably hosted by the most massive dark matter substructures
expected for its halo. In this picture, there is however still a sea
of small mass dark matter sub-halos which are largely devoid of stars.
The leading hypothesis to explain this situation is to appeal to
baryonic processes of galaxy formation that inhibit star formation
preferentially in low mass halos. Proposed processes for such feedback
include a photo-ionizing UV background or the expulsion of gas in
shallow potential wells by supernova explosions
\citep[e.g.][]{Bullock2000,Benson2002,Kravtsov2004}.
If vast numbers of {\em dark} sub-halos exist, lensing might be the
best way to detect them \citep[e.g.][]{Trentham2001}. This is not only
true for the scales of galaxies, but particularly for rich clusters of
galaxies, where strong and weak lensing effects can be combined in a
powerful way to construct detailed mass models for clusters. This
allows in principle a direct mapping of the sub-halo mass function,
thereby providing an important test of the CDM paradigm. Upon
comparison with the observed galaxy luminosity function of clusters,
it can also give valuable insights into the galaxy formation process.
\begin{table*}
\begin{center}
\caption{Best-fit fiducial halo parameters and global cluster properties from the lensing analysis.}
\begin{tabular}{lccccccc}
\hline\hline\noalign{\smallskip}
${\rm Cluster}$ & $z$ & $\sigma_{0\ast}$ & $r_{t\ast}$ & $M_{\rm ap}/L_v$ & $M^\ast$ &
$\sigma_{\rm clus}$ & ${\rho_{\rm clus}(r = 0)}$\\
& & (km\,s$^{-1}$) & (kpc) &
(M$_\odot$/L$_\odot$) & (10$^{11}$M$_\odot$) & (km\,s$^{-1}$) &
(10$^6$ ${\rm M}_\odot$ kpc$^{-3}$)\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
{A\,2218} & ${0.17}$ & ${180\pm10}$ & ${40\pm12}$ &
${5.8\pm1.5}$ & $\sim\,14 $ & ${1070\pm70}$ & {3.95}\\
{A\,2390} & ${0.23}$ & ${200\pm15}$ & ${18\pm5}$ &
${4.2\pm1.3}$ & $\sim\,6.4 $ &${1100\pm80}$& {16.95}\\
{AC\,114} & ${0.31}$ & ${192\pm35}$ & ${17\pm5}$ &
${6.2\pm1.4}$ & $\sim\,4.9 $ &${950\pm50}$& {9.12}\\
{Cl\,2244$-$02} & ${0.33}$ & ${110\pm7}$ & ${55\pm12}$ &
${3.2\pm1.2}$ & $\sim\,6.8 $ &${600\pm80}$ & {3.52}\\
{Cl\,0024+16} & ${0.39}$ & ${125\pm7}$ & ${45\pm5}$ &
${2.5\pm1.2}$ & $\sim\,6.3 $ &${1000\pm70}$ & {3.63}\\
{Cl\,0054$-$27} & ${0.58}$& ${230\pm18}$ & ${20\pm7}$ &
${5.2\pm1.4}$ & $\sim\,9.4 $ &${1100\pm100}$ & {15.84}\\
\noalign{\smallskip}
\hline
\end{tabular}
\end{center}
\end{table*}
In this letter, we present results of the first comparison between the
lensing-determined substructure mass function in clusters, on mass
scales ranging from about $10^{11}-10^{12.5}\,{\rm M}_{\odot}$, and
that inferred from high-resolution cosmological N-body simulations.
To this end, we construct high resolution mass maps of galaxy clusters
by applying galaxy-galaxy lensing techniques where substructure inside
clusters is detected and mapped using the anisotropies that they
produce in the observed shear field. We compare our lensing
measurements directly with results from high-resolution N-body
simulations, allowing us to test the robustness of the CDM model and
the associated hierarchical galaxy formation paradigm. Note that
compared to the scales of galaxies, we expect many more dark matter
structures to be visible optically in clusters, making the comparison
of sub-halo mass functions less affected by uncertainties in the
galaxy formation physics. Therefore, full consistency between the
abundance of optically detected galaxies, substructure in CDM N-body
models, and sub-halos detected by lensing can be asked for;
establishing such a consistency can be viewed as a strong test of the
theoretical paradigm. Below, we outline our methodology for mapping
substructure in clusters. We then present the mass function derived
from the maximum-likelihood analysis. This is followed by a comparison
with the substructure detected in simulated clusters, and a discussion
of our results. We adopt $h = 0.7$, $\Omega_0 = 0.3$ and
$\Omega_{\Lambda} = 0.7$.
\section{Quantifying substructure in clusters with gravitational lensing}
We obtain the mass spectrum of clumps in a cluster by combining
constraints from strong lensing observations (highly magnified
multiply imaged systems) and weak lensing (the radially averaged
tangential shear profile). To this end, we use a self-similar,
parametric mass model to describe the cluster as a composite of a
large-scale smooth mass distribution and several sub-clumps
\citep{Natarajan1997,Natarajan1998,Natarajan2002,Natarajan2004}.
These sub-halos are associated with bright, early-type galaxies in the
cluster under the assumption that mass traces light. The local
anisotropies in the shear field induced by the sub-halos in their
vicinity are then used statistically to quantify the masses of the
sub-clumps and their spatial extents. A likelihood method is used to
retrieve characteristic halo parameters
\citep[e.g.][]{Natarajan1997,Natarajan1998,Geiger1998}. On
applying these techniques to an ensemble of HST cluster lenses
(results are presented in Table~1) we find
that the spatial extents inferred are consistent with tidal stripping;
early-type galaxies do possess dark halos that extend well beyond the
light but these halos are more compact than those around field
galaxies of equivalent luminosity.
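The likelihood retrieval of the fiducial halo parameters can be sketched schematically. The example below substitutes a singular isothermal sphere (SIS) shear profile and a Gaussian ellipticity likelihood for the truncated PIEMD model and the full formalism of the cited papers; all function and parameter names are illustrative.

```python
import numpy as np

def sis_tangential_shear(theta, theta_e):
    """gamma_t = theta_E / (2 theta) for a singular isothermal
    sphere -- a stand-in for the truncated PIEMD profile."""
    return theta_e / (2.0 * np.asarray(theta, dtype=float))

def log_likelihood(e_t, theta, theta_e, sigma_e=0.3):
    """Gaussian log-likelihood of measured tangential ellipticities
    e_t of background galaxies around the model shear; maximizing
    this over the fiducial halo parameters is the skeleton of a
    galaxy-galaxy lensing fit (illustrative noise level sigma_e)."""
    g = sis_tangential_shear(theta, theta_e)
    r = (np.asarray(e_t) - g) / sigma_e
    return -0.5 * np.sum(r**2 + np.log(2.0 * np.pi * sigma_e**2))
```

In the actual analysis the model shear at each background galaxy is the sum over the smooth cluster component and all nearby galaxy sub-halos, and the likelihood is maximized over the fiducial parameters $(\sigma_{0\ast}, r_{t\ast})$.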
In performing the likelihood analysis to obtain characteristic
parameters for the sub-clumps in the cluster we assume that light
traces mass. This is an assumption that is well supported by
galaxy-galaxy lensing studies in the field \citep{Wilson2001} as well
as in clusters \citep{Clowe2002,Hoekstra2003}. The individual galaxies
and the smooth cluster component are modeled self-similarly with
truncated pseudo-isothermal mass distributions (PIEMD). The parameters
that characterize a truncated PIEMD are: a truncation radius $r_t$
identified with the tidal radius, a core radius $r_0$ and a central
velocity dispersion $\sigma_0$. For the smooth component the values of
these parameters are set by the observed strong lensing features, and
for the galaxies, combined constraints from the strong lensing and the
weak shear field determine the best-fit parameters for a fiducial
galaxy halo. These values recovered from the maximum likelihood
analysis are shown in Table~1. In order to relate the observed light
distribution in the early-type cluster galaxies to their masses a set
of physically motivated scaling laws are assumed:
\begin{eqnarray}
\sigma_0 = \sigma_{0*}\,\left(\frac{L}{L*}\right)^{1/4};\;\;r_0 =
r_{0*}\,\left(\frac{L}{L*}\right)^{\alpha};\;\; r_t = r_{t*}\,\left(\frac{L}{L*}\right)^{\alpha}.
\end{eqnarray}
The total mass of the sub-halo associated with a galaxy of luminosity
$L$ is:
\begin{eqnarray}
M \propto \sigma_{0*}^2 r_{t*} \left(\frac{L}{L*}\right)^{1/2+\alpha}; \;\;\;
\frac{M}{L} \propto \sigma_{0*}^2 r_{t*} \left(\frac{L}{L*}\right)^{1/2-\alpha}.
\end{eqnarray}
Note however that for our choice of mass model and value adopted for
the exponent $\alpha$, the mass to light ratio is not constant with
radius within an individual galaxy halo. The derived mass spectrum of
sub-halos is not a strong function of $\alpha$, as discussed in
\citet{Natarajan2004}.
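Equations (1) and (2) can be evaluated directly. The sketch below adopts the truncated-isothermal-sphere normalization $M=\pi\sigma_0^2 r_t/G$, which is only indicative of the exact PIEMD prefactor; the fiducial values are taken from Table~1 for A\,2218 and the choice $\alpha=1/2$ is an assumption.

```python
import numpy as np

G_KPC = 4.30e-6  # Newton's G in kpc (km/s)^2 / Msun

def subhalo_mass(L_ratio, sigma0_star=180.0, rt_star=40.0, alpha=0.5):
    """Total mass (Msun) of a galaxy sub-halo of luminosity L/L*,
    scaled from fiducial L* parameters via sigma0 ~ L^(1/4) and
    r_t ~ L^alpha, so that M ~ L^(1/2 + alpha)."""
    sigma0 = sigma0_star * L_ratio**0.25   # km/s
    rt = rt_star * L_ratio**alpha          # kpc
    return np.pi * sigma0**2 * rt / G_KPC

def mass_to_light(L_ratio, **kw):
    """M/L in units of the fiducial value: M/L ~ L^(1/2 - alpha)."""
    return subhalo_mass(L_ratio, **kw) / (L_ratio * subhalo_mass(1.0, **kw))
```

With $\alpha=1/2$, $M \propto L$ and the mass-to-light ratio is independent of luminosity, consistent with Eq.~(2); the fiducial mass evaluates to $\sim 10^{12}\,{\rm M}_\odot$, the same order of magnitude as the $M^\ast$ values in Table~1.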
Dark halos are associated with the locations of bright, early-type
cluster galaxies and the fiducial parameters for a typical halo are
then extracted from the likelihood analysis. A high resolution mass
model for the entire cluster is built using the strong lensing regime
to constrain the inner region ($r\,\sim\,r_{\rm Einstein}$), and the
local anisotropies in the shear field are used to obtain properties of
the galaxy halos around early-type cluster members. Since the
procedure involves a scaled, self-similar mass model that is
parametric, we obtain a mass estimate for the dark halos (sub-clumps)
of the cluster galaxies as a function of their luminosity. This
provides us with a clump mass spectrum. Note that tidal truncation by
the cluster causes these halo masses to be lower than that of
equivalent field galaxies at comparable redshifts obtained from
galaxy-galaxy lensing. The fraction of mass in the clumps is only
10-20\% of the total mass of the cluster within the inner
$500\,h^{-1}\,{\rm kpc}$ of these high central density clusters. We
are limited to this spatial scale as the lensing analysis was
performed on HST-WFPC2 data with pointings at cluster centers. The
remaining 80-90\% of the cluster mass is consistent with being
smoothly distributed (in lumps with mass $M\,<\,10^{10}\,{\rm
M}_{\odot}$).
In Fig.~\ref{fig1}, we show the mass function retrieved from
galaxy-galaxy lensing for each of the five HST clusters. There is a low-mass
cut-off in the observed clump spectrum, at around $10^{11}{\rm
M}_\odot$, which is due to observational limitations. The mass
resolution of this technique is limited by the depth and field of view
of the Wide Field Planetary Camera (WFPC2) aboard the Hubble Space
Telescope, by the number of background galaxies per foreground lens,
and the reliability with which shapes can be measured for the faintest
background galaxies in the HST image of the central region.
Unfortunately, this limits the number of reliably determined lumps per
rich cluster to about 40, implying that only the massive end of the
clump spectrum corresponding to the brightest cluster galaxies can be
probed. However, despite the low number statistics, the individual
cluster measurements show a marked rise in substructure abundance
towards lower mass scales. This becomes particularly apparent once the
clusters are stacked, as we have done in the lower right panel of
Fig.~\ref{fig1}.
\section{Cluster substructure in high resolution $\Lambda$CDM simulations}
We analyze substructure in high-resolution dark matter simulations of
clusters formed in the $\Lambda$CDM model. Our set of simulations is
taken from the study carried out by \citet{Springel2001} of
a single rich cluster of mass $8.4\times 10^{14}\,h^{-1}{\rm
M}_\odot$, simulated in 4 steps of ever increasing resolution using
the parallel tree-code {\small GADGET} \citep{gadget2001}. In these
simulations (referred to as `S1' to `S4'), the particle mass
resolution increases from $6.87\times 10^9$ to $4.68\times
10^7\,h^{-1}{\rm M}_\odot$, corresponding to $1.3\times 10^5$
particles up to about 20 million within the virial radius, making the
highest resolution simulation in this series one of the best resolved
simulations of a single rich cluster carried out to date. This
simulated cluster has a central density and mass within the inner
$500\,h^{-1}\,{\rm kpc}$ comparable to those of the massive HST
cluster-lenses studied here.
Self-bound gravitational substructure is found with the algorithm
{\small SUBFIND} \citep{Springel2001}. It starts by determining
locally over-dense dark matter substructure candidates in a fully
adaptive fashion, and then subjects each of them to a gravitational
unbinding procedure, such that a catalog of self-bound dark matter
substructures results. In the lowest resolution simulation S1 of the
series, 118 substructures can be detected, a number that increases to
more than 4600 in the high-resolution simulation S4. Note however that
these additionally resolved small substructures are all of ever lower
mass; already the low resolution simulation captures the correct
number of massive satellites, with (at least on average) the correct
mass.
The measured differential mass function of substructures appears to be
a power law, ${\rm d}N/{\rm d}m \propto m^{-\alpha}$, with slope close
to $\alpha\simeq 1.8$ \citep{Springel2001,Helmi2002,DeLucia2004}.
Interestingly, this is very close to the result by \citet{Lee2004} who
has attempted an analytic calculation of the sub-halo mass function
based on modeling the complex dynamical history of galaxies in the
cluster with a parameterized model to account for the effects of
global tidal truncation \citep[see also][]{Taylor2001}. Given that we
have also demonstrated that the spatial extents of substructures
inferred from galaxy-galaxy lensing are consistent with the tidal
stripping hypothesis, this lends support to the validity of a simple
tidal-limit approximation. Similar to the lensing result, both the
analytic model of \citet{Lee2004} and the numerical simulations find
that only about 10\% of the halo mass is bound in substructures,
most of it in a handful of most massive sub-halos which dominate the
cumulative mass in the substructures.
We note that numerical studies by \citet{DeLucia2004} find that the
substructure mass function depends only weakly on the mass of the
parent halo. Also, the mass fraction in substructure is
relatively insensitive to the tilt and overall normalization of the
primordial power spectrum \citep{Zentner2003}. Only for radically
altered CDM models, for example by truncating small-scale power,
\citeauthor{Zentner2003} find that their models yield projected
substructure mass fractions that are lower than the estimates from
strong lensing.
\section{Comparison}
We now compare the galaxy-galaxy lensing results with the
substructure mass function obtained from the N-body simulations. In
Fig.~\ref{fig3}, we show histograms for the distribution of
substructure masses in the four simulations of \citet{Springel2001}
and contrast them individually with the stacked result for the
clusters A2218, A2390, and Cl0054. Since the number of substructures
in the observable mass range is quite small, we expect large
system-to-system variations between different clusters. The stacking
of clusters of similar mass that we applied here reduces the
associated scatter somewhat.
Comparing the lensing result with the four simulated clusters (which
also show some numerical scatter amongst each other), we find broad
general agreement in the mass range $10^{11}\,-10^{12.5}\,{\rm
M}_{\odot}$, both in the amplitude and the slope of the substructure
mass function. Given that no free parameter or scaling has been
applied to obtain this match, the agreement is in fact remarkable.
However unlike the observations, the simulations show no low-mass
cut-off. If this cut-off is entirely due to resolution effects, which
appears plausible, the interpretation is that the observations only
see the `tip of the iceberg' of the substructure distribution as far
as their number is concerned, even though they detect most of the mass
in substructures. Fig.~\ref{fig3} also gives a hint that the
simulations may systematically over-predict the mass of the most
massive satellites, but we caution that such an effect could also be
caused by a systematic effect in the mass estimate based on the lensing
technique, for example if the massive substructures preferentially
correspond to halos that have fallen in most recently.
\section{Discussion}
On comparing the clump mass spectrum obtained for galaxy clusters from
high-resolution N-body simulations of $\Lambda$CDM models to those
obtained from galaxy-galaxy lensing in HST cluster lenses, we find
excellent agreement. Despite the fact that the lensing analysis
assumes that mass traces light and is only sensitive to a restricted
mass range, it is clear that there is no substructure problem in CDM
on mass scales spanning $10^{11}\,-10^{12.5}\,{\rm M}_{\odot}$. This
is in sharp contrast to the situation on galactic scales, where the paucity
of observed satellites when compared with the abundant dark matter
substructure predicted by simulations has been characterized as a
crisis for CDM. While the severity of this problem was probably
overstated initially -- in fact it may have partially gone away by now
\citep{Stoehr2002} -- it is clear that CDM predicts a rich spectrum of
dark matter substructure extending to very small masses, both for
clusters and galactic halos. In clusters, up to several hundred of the
most massive substructures can be directly identified with the
luminous cluster galaxies, and semi-analytic models of galaxy
formation show that this association leads to highly successful models
for the population of cluster galaxies \citep{Springel2001}. On the
other hand, on galactic scales, a much larger number of truly dark
satellites must be present according to the N-body
models. Gravitational lensing is probably our best bet to detect such
structures of small mass, and in fact, on these small scales, the
observed flux anomalies in multiply-imaged quasar systems
\citep{Mao1998,Chiba2002,Dalal2002,Metcalf2001,Metcalf2002,Mao2004} have been
interpreted as evidence for significant substructure in the mass range
$10^4\,<\,M\,<\,10^8\,{\rm M}_{\odot}$.
In this letter, we have shown that the clump mass function
independently determined from lensing (a technique that is unaffected
by the dynamical state of the cluster) is in excellent agreement with
that obtained in high resolution cosmological N-body simulations of
clusters of galaxies in the $\Lambda$CDM model.
\acknowledgements
PN thanks her collaborators on the HST cluster-lenses project:
Jean-Paul Kneib, Ian Smail and Richard Ellis.
\bibliographystyle{apj}
\section{Introduction}
Online video has become one of the most popular applications on the Internet,
and global Internet video traffic will grow threefold between 2016 and 2021 \cite{networking2016forecast}.
However, user viewing experience still needs improvement due to unstable network conditions and limited bandwidth capacity, especially for users of mobile streaming services.
Moreover, the growing number of viewers and the wide adoption of High-Definition (HD) videos in streaming services make bandwidth requirements grow explosively.
This may further deteriorate user viewing experiences if the deployment of network resources cannot catch up with the growing demands of video consumption.
These realities make it challenging for video service providers to provide satisfactory viewing experiences.
Adaptive Bitrate (ABR) streaming is currently the most effective solution for video streaming under unstable network conditions.
Each video is encoded into many representations of different bitrates for ABR streaming.
The client can dynamically select the most suitable representation according to the current network conditions.
As such, the rate adaptation mechanism is vital to the performance of ABR streaming.
To design proper rate adaptation approaches for improving Quality of Experience (QoE) \cite{brunnstrom2013qualinet} for ABR streaming,
QoE metrics should be defined first so as to quantitatively evaluate the performance of rate adaptation.
The most commonly adopted QoE metrics in ABR streaming include rebuffering time, average bitrate, video quality variation, etc.
These are {\em objective} QoE metrics, as they are based upon measured performance parameters of the video delivery system.
The objective QoE metrics neglect the viewer's subjective feelings as they experience the video delivered to them \cite{brunnstrom2013qualinet}.
The user subjective engagement with the streamed video depends on what is happening in the video. Not all segments of the video draw the same attention from the user. For instance, for a user watching a soccer game, there is high engagement when the action is near the goal, but low attention when a player fetches the ball out of bounds. We denote by {\em interestingness} the level of (subjective) engagement that the video draws from the user.
Currently, video content is delivered in networks as binary data and the semantic-level information of video content is ignored by rate adaptation schemes.
However, the semantic information of video content plays an important role on the user's subjective viewing experiences, e.g.,
influencing user attention and interest.
Therefore, it is also necessary to consider the subjective QoE metrics for optimizing QoE.
The human visual attention system is selective \cite{kosslyn1996image}, and the more interesting parts of the video content draw more user attention.
Allocating more bitrate budgets for the interesting parts of video content can achieve higher viewing experiences and reduce the information loss caused by video distortion.
However, due to the complexity of video content and the subtlety of the user's interest towards video content,
it is challenging to analyze video content from the user's perspective and incorporate the information for rate adaptation.
To address these problems, we first design a deep learning based approach for analyzing the interestingness of video content.
Then, we design a Deep Q-Network (DQN) based approach for rate adaptation by incorporating video interest information.
The method can learn the optimal rate adaptation policy by jointly considering buffer occupancy, bandwidth, and the interestingness of video content.
We evaluate the performance of our method using real-world datasets.
The rest of this paper is organized as follows.
Section \ref{sec:related-work} presents the related works on rate adaptation schemes.
Section \ref{sec:system-design} presents the system design and workflows.
Section \ref{sec:video-interest-analysis} presents the deep learning based approach for interestingness recognition.
Section \ref{sec:dqn-rate-adaptation} introduces the DQN based approach for rate adaptation while considering video interestingness information.
Section \ref{sec:performance-evaluation} presents the performance evaluation of our proposed method.
Section \ref{sec:conclusion} concludes this paper.
\section{Related Work} \label{sec:related-work}
Many existing works have studied the rate adaptation problem by considering different influence factors or using different mathematical models for maximizing QoE.
Huang \emph{et al.} \cite{huang2015buffer} designed a buffer-based approach by considering the current buffer occupancy.
Li \emph{et al.} \cite{li2014probe} designed a client-side rate adaptation algorithm by envisioning a general probe-and-adapt principle.
Yin \emph{et al.} \cite{Yin:2015:CAD:2785956.2787486} proposed a Model Predictive Control (MPC) approach by jointly considering buffer occupancy and bandwidth.
Bokani \emph{et al.} \cite{bokani2015optimizing} and Zhou \emph{et al.} \cite{zhou2016mdash} adopted Markov Decision Process (MDP) for rate adaptation.
Spiteri \emph{et al.} \cite{spiteri2016bola} adopted Lyapunov framework to design an online algorithm to minimize rebuffering and maximize QoE, without requiring bandwidth information.
Qin \emph{et al.} \cite{qincontrol} proposed a PID based method for rate adaptation,
and Mao \emph{et al.} \cite{mao2017neural} adopted deep reinforcement learning for rate adaptation.
In this line of works, they mainly considered the objective QoE metrics, aiming to improve the performances on
rebuffering time, average bitrate, and video quality variation.
Cavallaro \emph{et al.} \cite{cavallaro2005semantic} showed that the use of semantic video analysis prior to encoding for adaptive content delivery reduces bandwidth requirements.
Hu \emph{et al.} \cite{hu2017semantic} proposed a semantics-aware adaptation scheme for ABR streaming by semantic analysis for soccer video.
Fan \emph{et al.} \cite{fan2015segment} utilized various features collected from streaming services to determine if a video segment attracts viewers for optimizing live game streaming.
Dong \emph{et al.} \cite{dong2018personalized} designed a personalized emotion-aware video streaming system based on the user's emotional status.
In this line of works, they considered different subjective factors for optimizing video streaming services to improve QoE.
\section{System Design} \label{sec:system-design}
\begin{figure}
\begin{center}
\epsfig{file=system_design-2.eps, width=0.70\columnwidth}
\end{center}
\vspace{-1em}
\caption{The design of the COI based rate adaptation for ABR Streaming.}\label{fig:sys-design}
\end{figure}
We illustrate the design of the Content-of-Interest (CoI) based rate adaptation mechanism for ABR streaming in Fig. \ref{fig:sys-design}.
The system consists of the following components.
\emph{Streaming Server:}
The streaming server pre-processes video files and streams the video content to users.
For video pre-processing, each video file will be encoded into many representations at different bitrates and segmented into many equal-duration video chunks.
Each video chunk will be processed to analyze the interestingness of the video content.
The available bitrate information and the interestingness information of each video chunk will be included in the Media Presentation Description (MPD) manifest file \cite{stockhammer2011dynamic}.
In this work, we mainly consider Video-on-Demand (VoD) services, and the video encoding and interestingness recognition will be performed offline before video streaming.
\emph{Video Player:}
The video player requests the MPD of a video file when starting a video session and analyzes the available bitrates and the interestingness information of the video content.
The video player requests the selected video chunks from the streaming server, and measures the average bandwidth for downloading each video chunk.
\emph{DQN Agent:} We adopt the DQN method \cite{mnih2013playing} for rate adaptation.
The DQN agent will use the bandwidth, the current buffer occupancy, and the interestingness of the next several video chunks as the system state for determining which bitrate should be selected for the next video chunk.
\section{Interestingness Recognition Algorithm}\label{sec:video-interest-analysis}
In this section, we introduce the deep learning approach for recognizing the interestingness of video content.
We illustrate the model for video interestingness recognition in Fig. \ref{fig:video-interest-prediction}.
Video chunks consist of a series of video frames in time order.
It has been shown that 3D Convolutional Networks (3D ConvNets) are well suited for learning spatiotemporal features \cite{tran2015learning};
therefore, we adopt 3D ConvNets for learning spatiotemporal features.
We extract 16 images from each video chunk and use 3D ConvNets to generate video features.
The extracted video features from 3D ConvNets will be input into two Fully-Connected (FC) layers, and the activation function for the fully-connected layers is Rectifier \cite{relu_function}.
The output layer has one node and the activation function is the Softmax function \cite{softmax_function}.
The output value is real-valued, which represents the interestingness of a video chunk,
and a higher value represents a higher level of video interestingness.
We adopt the TVSum dataset \cite{song2015tvsum} for training the network for interestingness recognition.
The dataset was created by segmenting videos into two-second-long segments,
and 20 users were invited to rate each segment relative to the other segments from the same video.
The average of the rating for each segment is used as the ground truth,
and the scale is from one to five.
The data is split into small batches that are used to calculate the loss and update the network in each training epoch.
The loss function is the Mean Squared Error (MSE),
\begin{equation}\label{eqn:mse-loss}
MSE = \frac{1}{n} \sum_{i=1}^n (y_i - \hat{y_i})^2,
\end{equation}
where $n$ is the number of samples (video chunks) in each training batch, $\hat{y_i}$ is the predicted interestingness of sample $i$,
and $y_i$ is the ground-truth of the interestingness of sample $i$.
For training the network,
we adopt Adam \cite{kingma2014adam} for training the fully connected layers and the output layer.
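As a concrete numerical illustration of the loss in Eq. \eqref{eqn:mse-loss}, the following Python sketch computes the batch MSE; the toy ratings are our own example values on the 1-5 TVSum scale, not taken from the dataset:

```python
import numpy as np

def mse_loss(y_true, y_pred):
    """Mean squared error over a training batch, as in Eq. (1)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean((y_true - y_pred) ** 2))

# Toy batch: ground-truth interestingness ratings vs. predictions.
y = [2.0, 3.5, 4.0]
y_hat = [2.5, 3.0, 4.0]
print(mse_loss(y, y_hat))  # 0.1666...
```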
\begin{figure}
\begin{center}
\epsfig{file=video-interest-network.eps, width=.85\columnwidth}
\end{center}
\vspace{-1em}
\caption{The deep learning model for video interestingness recognition.}\label{fig:video-interest-prediction}
\end{figure}
\begin{table}
\centering
\caption{Key Notations and Definitions} \label{tabel:key-notation}
\begin{tabular}{p{1.45cm}|p{6.2cm}} \hline
Notation & Definition \\ \hline
$t$ & the discrete time slot, $t = 1,2, ...$ \\
$s_t, a_t, r_t$ & system state, action, reward at time slot $t$ \\
$B$ & the set of available bitrates for each video \\
$v_t$ & the average bandwidth for downloading video chunk $t$\\
$w_t$ & the interestingness of video chunk $t$ \\
$\overrightarrow{v_t}$ & the vector of the average bandwidth for downloading the next $k$ video chunks\\
$L_t$ & buffer occupancy before downloading video chunk $t$ \\
$b_t$ & the selected bitrate for video chunk $t$ \\
$\overrightarrow{w_t}$ & the vector consisting of the interestingness of the following $h$ video chunks \\
$\pi$ & the policy for choosing bitrate for the next video chunk \\
$f(\cdot)$ & mapping the interestingness of a video chunk to the weight for a video chunk \\
$q(\cdot)$ & mapping video bitrate to video quality \\
$\alpha$ & the weight for the penalty of rebuffering time \\
$\beta$ & the weight for the penalty of quality variation \\
$Q(s,a)$ & the quality of the state-action combination\\
$N$ & the number of transitions chosen from replay buffer for minibatch training\\
$\theta$ & the weights of the DQN network\\
\hline\end{tabular}
\end{table}
\section{DQN based Interest-Aware Rate Adaptation} \label{sec:dqn-rate-adaptation}
In this section, we introduce the DQN based interest-aware rate adaptation for ABR streaming.
The key notations used in this paper are summarized in Table \ref{tabel:key-notation}.
\subsection{Problem Formulation for Interest-Aware Rate Adaptation}
We adopt a discrete time system, where the time is denoted as $t=1,2,3,...$.
The duration of each time slot may not be equal, and depends on the time for downloading a video chunk.
We formulate the interest-aware rate adaptation as a Reinforcement Learning (RL) problem,
where the agent interacts with the streaming environment for learning the optimal rate adaptation policy.
More specifically, after downloading video chunk $t-1$, the agent receives the observed system state $s_t$,
then takes action $a_t$ for selecting the bitrate for video chunk $t$ according to the current policy, and finally gets reward $r_t$ after downloading video chunk $t$.
These procedures will be repeated until the end of a video session.
\textbf{Streaming Environment:}
We denote the set of available bitrates in the streaming system for each video as $B$.
The bandwidth during a video session is time-varying, and we denote the average bandwidth for downloading video chunk $t$ as $v_t$.
The interestingness of video chunk $t$ is denoted as $w_t$.
The selected bitrate for video chunk $t$ is denoted as $b_t$.
\textbf{State:}
The state describes the bandwidth of the streaming service, the buffer occupancy of the video player, and the interestingness of the following video chunks, etc.
We denote the state at time slot $t$ as $s_t$, specifically,
\begin{equation}\label{eqn:state}
s_t = (\overrightarrow{v_t}, L_t, b_{t-1}, \overrightarrow{w_t}, \overrightarrow{u_t}),
\end{equation}
where $\overrightarrow{v_t}$ is the vector consisting of the predicted average bandwidth for downloading the next $k$ video chunks (i.e., $\overrightarrow{v_t} = (v_{t},v_{t+1}, ...,v_{t+k-1})$),
$L_t$ is the buffer occupancy before downloading video chunk $t$,
$b_{t-1}$ is the selected bitrate for video chunk $t-1$,
$\overrightarrow{w_t}$ is the vector consisting of the interestingness of the following $h$ video chunks (i.e., $\overrightarrow{w_t} = (w_{t},w_{t+1}, ...,w_{t+h-1})$),
$\overrightarrow{u_t}$ is the vector consisting of the available chunk sizes of video chunk $t$.
Here, the interestingness information for each video chunk of a whole video file is known at the start of a video session,
because video content will be pre-processed on the server and the interestingness information will be included in MPD.
\textbf{Action:}
The control action for the agent is to select the bitrate for the next requested video chunk according to the current system state, which can be described as
\begin{equation}
a_t = \pi(s_t) \to b_t, b_t \in B,
\end{equation}
where $\pi$ is the policy for selecting bitrate.
\textbf{Reward:}
We adopt the following utility function revised based on the QoE metrics defined in \cite{Yin:2015:CAD:2785956.2787486} for measuring the reward during a time slot,
\begin{equation} \label{eqn:qoe-function}
r_t(s_t, a_t) = \underbrace{f(w_t)}_{\text{weight}} \underbrace{q(b_t)}_{\text{quality}} - \underbrace{\alpha R_t}_{\text{video stall}} - \underbrace{\beta |q(b_t) - q(b_{t-1})|}_{\text{quality variation}},
\end{equation}
where $r_t$ is the reward for time slot $t$,
$f(\cdot)$ maps the interestingness of a video chunk to the weight for a video chunk,
$q(\cdot)$ maps video bitrate to video quality,
$\alpha$ is the weight for the penalty of rebuffering time,
$R_t$ is the rebuffering time incurred during time slot $t$,
and $\beta$ is the weight for the penalty of quality variations.
With the reward function in Eq. \ref{eqn:qoe-function}, the video chunks with higher interestingness have higher weights,
therefore, the agent will get more rewards if the video chunks with higher interestingness are allocated more bitrate budgets.
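As a sketch, the per-chunk reward of Eq. \eqref{eqn:qoe-function} can be computed as follows; the default $\alpha=3000$, $\beta=1$, and identity $q(\cdot)$ follow the experiment section, while using the identity for $f(\cdot)$ and the sample numbers are simplifying assumptions of ours:

```python
def chunk_reward(w_t, b_t, b_prev, rebuf_t, alpha=3000.0, beta=1.0,
                 f=lambda w: w, q=lambda b: b):
    """Per-chunk reward of Eq. (4): interest-weighted quality minus
    penalties for rebuffering time and quality variation."""
    return f(w_t) * q(b_t) - alpha * rebuf_t - beta * abs(q(b_t) - q(b_prev))

# A chunk with weight 2 at 3000 kbps after a 1000 kbps chunk, no stall:
print(chunk_reward(w_t=2.0, b_t=3000, b_prev=1000, rebuf_t=0.0))  # 4000.0
```

With the higher weight, the same bitrate yields a larger reward, so the agent is pushed to spend bitrate on the more interesting chunks.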
\textbf{Objective:}
Our objective is to derive the optimal rate adaptation policy for maximizing the rewards over a video session. Due to the uncertainty of system dynamics, future rewards and present rewards carry different weights.
Therefore, we maximize the overall discounted rewards,
in which the present rewards have higher importance and the future rewards have less importance, mathematically,
\begin{equation}\label{eqn:qoe-maximization}
\pi^{*} = \argmax_{\pi} \mathop{\mathbb{E_\pi}} \sum_{i=0}^{\infty} \gamma^{i} r_{t+i}(s_{t+i}, a_{t+i}),
\end{equation}
where $\pi^{*}$ is the optimal rate adaptation policy that needs to be derived and $\gamma$ is the discount factor.
\subsection{DQN for Learning Rate Adaptation Policy}
\begin{figure}
\begin{center}
\epsfig{file=dqn.eps, width=.9\columnwidth}
\end{center}
\vspace{-1em}
\caption{The DQN network for interest-aware rate adaptation.}\label{fig:dqn-based}
\end{figure}
We adopt DQN \cite{mnih2013playing} for learning the rate adaptation policy, and the network of DQN is illustrated in Fig. \ref{fig:dqn-based}.
The inputs of the network are the system states listed in Eq. \eqref{eqn:state},
and the outputs of the network are the action-value function, $Q(s, a, \theta)$,
which represents the quality of the state-action combinations for each state $s$ and action $a$.
$\theta$ represents the weights of Q network, which will be updated during training.
We illustrate the details of the DQN based learning algorithm for rate adaptation in Algorithm \ref{DQN_solution}.
At the start of each video session, the video player is initialized and a video file is randomly chosen.
When selecting the bitrate for a video chunk, the agent randomly selects a bitrate with probability $\epsilon$.
Otherwise, the agent will choose the bitrate that has the maximum action-value given the current state.
The video player will download the video chunk of the selected bitrate.
After the completion of the download, the agent will calculate the reward according to Eq. \eqref{eqn:qoe-function}
and observe the next state.
The transition $(s_t, a_t, r_t, s_{t+1})$ will be stored into the replay buffer.
We randomly choose $N$ transitions from the replay buffer for training the network at each gradient descent step.
We denote a sampled transition as $(s_{t'}, a_{t'}, r_{t'}, s_{t'+1})$.
The following loss function is adopted for training DQN,
\begin{equation} \label{eqn:loss-function}
L(\theta_i) = \mathbb{E}[(y_{t'} - Q(s_{t'},a_{t'};\theta_i))^2],
\end{equation}
where $y_{t'} = \mathbb{E}[r_{t'}+\gamma \max_{a'}Q(s_{t'+1},a';\theta_{i-1})|s_{t'},a_{t'}]$ and
$\theta_i$ denotes the weights of the Q network at the $i$-th iteration.
Then, a mini-batch gradient descent step will be performed to update the weights of the Q network.
After the training, the Q network will be adopted by the agent for making rate adaptation decisions.
For the next requested video chunk, the bitrate which has the largest action-value for the current state will be selected by the agent.
\begin{algorithm}
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand\algorithmicensure {\textbf{Output:} }
\caption{DQN for Interest-Aware Rate Adaptation} \label{DQN_solution}
\begin{algorithmic}[1]
\State{Initialize replay memory D}
\State{Initialize Q Network with random weights}
\For{video session $ = 1,2,...,M$}
\State{Initialize the video player and choose a video file}
\State{Observe initial state $s_1$}
\For{video chunk $t = 1,2,...,K$}
\State{With probability $\epsilon$ randomly select a bitrate $a_t$}
\State{otherwise select bitrate $a_t = \argmax_a Q(s_t, a; \theta)$}
\State{Download video chunk $t$ until completed}
\State{Observe reward $r_t$ and next state $s_{t+1}$}
\State{Store transition $(s_t, a_t, r_t, s_{t+1})$ into $D$}
\State{Randomly sample $N$ transitions from $D$}
\State{Set $y_{t'} = r_{t'}$, if the video session ends}
\State{otherwise set $y_{t'} = r_{t'} + \gamma \max_{a} Q(s_{t'+1},a;\theta_{i-1})$}
\State{Train the network using Eq. \eqref{eqn:loss-function} as loss function}
\EndFor
\EndFor
\end{algorithmic}
\end{algorithm}
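Two ingredients of Algorithm \ref{DQN_solution} can be sketched in plain Python as follows, assuming the $\gamma=0.8$ and $\epsilon=0.2$ values from the experiment section; the toy Q-values and reward are our own:

```python
import random

def td_target(r, q_next, gamma=0.8, terminal=False):
    """Bootstrapped target of Eq. (6): y = r at the end of a video
    session, else y = r + gamma * max_a Q(s', a)."""
    return r if terminal else r + gamma * max(q_next)

def epsilon_greedy(q_values, epsilon=0.2, rng=random):
    """Bitrate selection of Algorithm 1: explore with probability
    epsilon, otherwise pick the action with the largest Q-value."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

# Terminal vs. non-terminal targets for a reward of 10:
print(td_target(10.0, [1.0, 5.0], terminal=True))   # 10.0
print(td_target(10.0, [1.0, 5.0], terminal=False))  # 14.0
```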
\section{Experiment} \label{sec:performance-evaluation}
In this section, we illustrate the experiment settings and the performance of the CoI based rate adaptation method.
\subsection{Experimental Settings}
To simulate different network conditions, we adopt the FCC broadband dataset \cite{fcc_dataset}
and the 3G/HSDPA mobile dataset \cite{riiser2013commute} for training DQN and evaluating performance.
In our experiment, $\overrightarrow{v_t}$ is the vector of the predicted bandwidth for the next two video chunks.
$\overrightarrow{w_t}$ is the vector of the video interestingness for the next three video chunks.
We adopt the settings of the penalty for rebuffering time and quality variations used in \cite{Yin:2015:CAD:2785956.2787486},
where $\alpha$ is 3000, $\beta$ is 1, and $q(\cdot)$ is the identity function.
$f(\cdot)$ scales the video interestingness values from 1-5 to 1-3 with normalization.
The available bitrate levels are 350kbps, 600kbps, 1000kbps, 2000kbps, 3000kbps.
For the DQN agent, after hyper-parameter search and tuning, we adopt the following settings: a fully-connected neural network with two hidden layers of size 256 and 512, ReLU activations, and a linear output layer that produces the approximated Q value for a given state-action pair. A naive $\epsilon$-greedy policy is used for exploration, and the probability of randomly selecting an action during training is 0.2. The learning rate is 0.1, the replay buffer size is 10000, the discount factor is 0.8, the decay parameter for updating the target Q network is 0.5, the batch size is 256, and for each instance of training we sample 50 batches of data.
\subsection{Baseline Methods}
We compare the performances of our method with the following methods:
1) Buffer-Based (BB) approach \cite{huang2015buffer} chooses the bitrate for the next video chunk as a function of the buffer occupancy.
In our settings, the reservoir (r) is five seconds and the cushion (c) is 20 seconds.
2) Rate-Based (RB) approach chooses the maximum available bitrate less than the predicted bandwidth.
3) Robust-MPC approach \cite{Yin:2015:CAD:2785956.2787486} uses MPC method to select the bitrate for maximizing the overall QoE over the prediction horizon.
The prediction horizon of Robust-MPC is three time slots.
4) DQN-Constant approach also adopts the DQN method for rate adaptation; however, the weight of each video chunk is fixed at two.
RB, Robust-MPC, DQN-Constant, and our proposed approach use the harmonic mean of the average bandwidth of the past 5 video chunks as bandwidth prediction for the next video chunk.
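The harmonic-mean bandwidth predictor shared by RB, Robust-MPC, DQN-Constant, and our approach can be sketched as follows (the sample traces are hypothetical):

```python
def harmonic_mean_bandwidth(samples):
    """Harmonic mean of the past chunk bandwidths, used as the
    prediction for the next chunk; it is robust to transient spikes."""
    return len(samples) / sum(1.0 / s for s in samples)

# Past 5 chunks (kbps); a single spike barely moves the estimate:
print(round(harmonic_mean_bandwidth([1000, 1000, 1000, 1000, 1000]), 1))  # 1000.0
print(round(harmonic_mean_bandwidth([1000, 1000, 1000, 1000, 5000]), 1))  # 1190.5
```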
\subsection{Performance Evaluation}
\subsubsection{Video Interestingness Recognition Precision}
\begin{figure}
\begin{center}
\epsfig{file=mse.eps, width=.7\columnwidth}
\end{center}
\vspace{-1em}
\caption{The interestingness recognition error over training iterations. The recognition error converges to 0.02 after 18,000 iterations.}\label{fig:mse-prediction}
\end{figure}
\begin{figure}
\begin{center}
\epsfig{file=error.eps, width=.7\columnwidth}
\end{center}
\vspace{-1em}
\caption{The interestingness recognition error distribution. The error is mainly distributed around 0.0, which demonstrates the good performance of the recognition model.}\label{fig:error-dis}
\end{figure}
\begin{figure}
\begin{center}
\epsfig{file=interest_dis.eps, width=.7\columnwidth}
\end{center}
\vspace{-1em}
\caption{The distribution of the weights of the video chunks. After scaling the interestingness values from [1.0, 5.0] to [1.0, 3.0], the weights are mainly distributed from 1.0 to 2.0 with the mean at around 1.5.}\label{fig:interest-distribution}
\end{figure}
There are overall 6245 user-annotated video chunks in the dataset, and we randomly choose 90\% of the video chunks for training and 10\% of the video chunks for evaluating the performance.
In Fig. \ref{fig:mse-prediction}, we illustrate the interestingness recognition error during different iterations in the training stage.
It can be observed that the recognition error decreases over the training iterations and finally converges,
and the MSE converges to 0.02 after 18,000 iterations.
The interestingness recognition error distribution is illustrated in Fig. \ref{fig:error-dis}, and the mean error is 0.34.
The interestingness prediction is biased towards giving a lower score, because the interestingness values of most of the video chunks are small,
and the prediction algorithm tends to predict a lower value for reducing the overall MSE.
We use the normalization function as $f(\cdot)$ in Eq. \eqref{eqn:qoe-function} for scaling the interestingness value into the weight of a video chunk.
The range of the weight is from 1.0 to 3.0.
The overall distribution of the weights of the video chunks is illustrated in Fig. \ref{fig:interest-distribution}.
\subsubsection{Performances on Rebuffering Time, Average Bitrate, and Bitrate Variations}
We first evaluate the performance of different methods on rebuffering time, bitrate variation, and video quality.
We run the tests over 40 video sessions, and each video session has 200 video chunks.
For each video session, we randomly choose a bandwidth trace and the interestingness information of a video file.
The performance of each method is illustrated in Table \ref{tab:table2}.
From the results in Table \ref{tab:table2}, we can observe that the performances of our proposed CoI method on rebuffering time,
average bitrate, and quality variations are close to the performances of the state-of-the-art methods, including Robust-MPC, BBA, and RBA.
This verifies that introducing video interestingness information for rate adaptation will not deteriorate the performances from the perspective of objective QoE metrics.
Moreover, CoI achieves both the highest mean and the lowest standard deviation of average bitrate per session among all methods.
For average rebuffering time, the CoI method is lower than BBA and close to Robust-MPC; the same holds for bitrate variation.
Note that both the average bitrate and the rebuffering time increase under the CoI method.
This is because the video interestingness value is larger than one, which increases the weight of video quality in the reward function (Eq. \eqref{eqn:qoe-function})
relative to rebuffering time and quality variations.
For verification, we can observe that DQN-Constant has a higher average bitrate compared with Robust-MPC, BBA, and RBA, yet the rebuffering time of DQN-Constant is also significantly larger than the other methods.
We also give the empirical distributions of average bitrate, rebuffering time, and quality variations of different methods in Fig. \ref{fig:cdf-bitrate}, \ref{fig:cdf-rebuffering}, and \ref{fig:cdf-switching}. The CoI method has the highest distribution of bitrate among all methods. For the distributions of rebuffering time and quality variations, the CoI method obtains quite good results, though not the lowest, since there is a trade-off between minimizing the rebuffering time and quality variations and maximizing the video interestingness value.
\begin{table*}[t]
\centering
\caption{The average performances per video session.}
\label{tab:table2}
\begin{tabularx}{\textwidth}{ p{6cm} p{1.8cm} p{1.8cm} p{1.8cm} p{1.8cm} p{1.8cm}}
\toprule
& RBA & BBA & Robust-MPC & CoI & DQN-Constant\\
\midrule
\rowcolor{LightCyan}
Average Rebuffering Time (s) & 0.3617 & 0.9439 & 0.7661 & 0.9173 & 1.915 \\
Standard Deviation of Rebuffering Time (s) & 0.0717 & 1.9731 & 1.4079 & 1.2803 & 2.397 \\
\rowcolor{LightCyan}
Average Bitrate (kbps) & 1762.3 & 1996.6 & 2014.5 & 2231.8 & 2145.6 \\
Standard Deviation of Average Bitrate (kbps) & 617.1 & 517.7 & 538.5 & 452.4 & 512.9 \\
\rowcolor{LightCyan}
Bitrate Variation (kbps/chunk) & 76.3598 & 176.5488 & 115.5366 & 124.5122 & 202.183 \\
Standard Deviation of Bitrate Variation (kbps/chunk) & 39.5099 & 133.6111 & 74.3199 & 91.2549 & 162.813 \\
\bottomrule
\end{tabularx}
\end{table*}
\subsubsection{Relation between Video Interestingness and Average Bitrate}
We illustrate the average bitrate for different levels of video interestingness in Fig. \ref{fig:Interest-level}.
Because video interestingness is real-valued, we divide the interestingness of the video chunks into five levels, namely,
1.0-1.4, 1.4-1.8, 1.8-2.2, 2.2-2.6 and 2.6-3.0.
We can observe that video chunks with higher levels of interestingness are allocated higher bitrate budgets on average.
This verifies the effectiveness of the DQN method for aligning bitrate allocation with video interestingness.
In comparison, the other content-agnostic rate adaptation methods, which ignore video interestingness information, will allocate the bitrate budgets equally among different levels of video interestingness.
We also evaluate the correlation between video interestingness and average bitrate for different methods using the Pearson, Spearman, and Kendall's tau coefficients; the results are shown in Fig. \ref{fig:Interest-correlation}.
The results show that there is no linear correlation between the variables for the content-agnostic approaches.
In contrast, the average bitrate and video interestingness are positively correlated with each other under the CoI method.
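The three correlation measures used here can be computed as follows; this is a minimal, tie-free sketch (the evaluation itself presumably relies on a standard statistics library), where the paired per-chunk samples are hypothetical inputs.

```python
def _sign(x):
    return (x > 0) - (x < 0)

def pearson(x, y):
    # Pearson: covariance normalized by the two standard deviations.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    # Spearman: Pearson correlation of the ranks (no tie handling).
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    return pearson(ranks(x), ranks(y))

def kendall_tau(x, y):
    # Kendall's tau: (concordant - discordant pairs) / total pairs.
    n = len(x)
    num = sum(_sign(x[i] - x[j]) * _sign(y[i] - y[j])
              for i in range(n) for j in range(i + 1, n))
    return num / (n * (n - 1) / 2)
```

All three coefficients equal $1$ for a perfectly monotone increasing relation between interestingness and allocated bitrate, and $-1$ for a decreasing one.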
\begin{figure}
\begin{center}
\epsfig{file=bar_bitrate.eps, width=.7\columnwidth}
\end{center}
\vspace{-1em}
\caption{The average bitrates for different levels of video interestingness. It can be observed that the CoI method tends to allocate more bitrate budget to video chunks with higher interestingness, whereas the other methods do not show this tendency.}\label{fig:Interest-level}
\end{figure}
\begin{figure}
\begin{center}
\epsfig{file=coefficient.eps, width=.7\columnwidth}
\end{center}
\vspace{-1em}
\caption{The correlation coefficient between video interestingness and bitrate. The results show that there is no linear correlation between video interestingness and bitrate for RBA, BBA, and Robust-MPC methods. But the result of CoI method shows a positive correlation.}\label{fig:Interest-correlation}
\end{figure}
\subsubsection{Convergence of the DQN Agent under Different Hyper-Parameter Settings}
We also verify the convergence of the DQN agent under different hyper-parameter settings, including the network size, learning rate, exploration strategy, etc. All results demonstrate the robustness of our DQN agent in this environment.
Fig. \ref{fig:Q-reward} shows the cumulative reward of the DQN agent with different $\epsilon$-greedy strategies. It can achieve the best performance when $\epsilon$ is 0.2.
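The $\epsilon$-greedy exploration referred to here is the standard scheme: with probability $\epsilon$ pick a random bitrate action, otherwise pick the action maximizing the estimated Q-value. A minimal sketch (function and variable names are ours, not the paper's):

```python
import random

def epsilon_greedy(q_values, epsilon=0.2, rng=random):
    """Standard epsilon-greedy selection over the bitrate actions."""
    if rng.random() < epsilon:
        # Explore: uniformly random action.
        return rng.randrange(len(q_values))
    # Exploit: action with the largest estimated Q-value.
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

With $\epsilon=0.2$, one action in five is drawn uniformly at random, which is the setting that yields the highest cumulative reward in Fig. \ref{fig:Q-reward}.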
\begin{figure}
\begin{center}
\epsfig{file=q_converge.eps, width=.7\columnwidth}
\end{center}
\vspace{-1em}
\caption{The average cumulative rewards of the DQN agent under different exploration probabilities of the $\epsilon$-greedy strategy. The DQN agent obtains the highest cumulative rewards when actions are chosen randomly with probability 0.2.}\label{fig:Q-reward}
\end{figure}
\begin{figure}
\begin{center}
\epsfig{file=cdf_bitrate.eps, width=.7\columnwidth}
\end{center}
\vspace{-1em}
\caption{The empirical CDF of average bitrate per session. The results show that CoI tends to allocate a higher bitrate for each video chunk.}\label{fig:cdf-bitrate}
\end{figure}
\begin{figure}
\begin{center}
\epsfig{file=cdf_rebuffering.eps, width=.7\columnwidth}
\end{center}
\vspace{-1em}
\caption{The empirical CDF of average rebuffering time per session. It can be observed that the CoI method maintains a relatively low rebuffering time even under higher bitrate selection, compared with the other methods.}\label{fig:cdf-rebuffering}
\end{figure}
\begin{figure}
\begin{center}
\epsfig{file=cdf_switching.eps, width=.7\columnwidth}
\end{center}
\vspace{-1em}
\caption{The empirical CDF of average bitrate variation per session. Though quality variation accounts for only a small part of the reward, the CoI method still keeps the bitrate variation at a level comparable to the other methods.}\label{fig:cdf-switching}
\end{figure}
\section{Conclusion} \label{sec:conclusion}
In this work,
we proposed a CoI-based rate adaptation method for ABR streaming.
We first developed a deep learning method for recognizing the interestingness of the video content,
and then developed a DQN method that incorporates the interestingness information into rate adaptation, so that
video content with higher interestingness is allocated a higher bitrate budget.
Compared with the state-of-the-art rate adaptation methods,
the CoI method does not compromise performance on the objective QoE metrics of average bitrate, rebuffering time, and quality variations.
Therefore, it offers advantages over content-agnostic rate adaptation methods in some video streaming scenarios.
Our method has the following limitations.
First, different application scenarios may have different criteria for video interestingness.
For instance, in video lectures, the informativeness of the video content may determine its interestingness to the viewers;
in sport videos, the interestingness may be determined by the actions being played.
Second, users may require different video quality differentiation among the video content of different levels of interestingness.
For instance, in some scenarios, the user may only require a slightly higher quality for the video content with higher interestingness,
while in other scenarios the user may require a significantly higher quality.
These problems require the CoI method to be customized to the specific requirements of a given scenario, e.g., constructing a dataset for training the interestingness prediction algorithm or tuning the DQN model to achieve the required quality differentiation.
Nevertheless, our method has the flexibility to achieve such personalization.
\bibliographystyle{IEEEtran}
\section{Introduction}
\subsection{Background}
\label{ssec:backg}
Fix a rational prime $p$.
For $q=p^a$ a power of $p$, denote by $\BF_q$ the finite field with $q$ elements, $\BQ_q$ the unramified extension of $\BQ_p$ of degree $a$ and $\BZ_q$ its ring of integers.
Let $f(x)\in\BF_q[x]$ be a polynomial of degree $d$ with Teichm\"uller lifting $\hat f(x)\in\BZ_q[x]$.
Let $\chi:\BF_q^\times\to \BC_p^\times$ be a multiplicative character and $\omega:\BF_q^\times\to \BZ_q^\times$ the Teichm\"uller lifting.
Then we can write $\chi=\omega^{-u}$ for some $0\le u\le q-2$.
For a non-trivial additive character $\psi_m:\BZ_p\to \BC_p^\times$ of order $p^m$, define the twisted $L$-function
\[L_u(s,f,\psi_m)=\exp\left(\sum_{k=1}^\infty S_{k,u}(f,\psi_m)\frac{s^k}{k}\right),\]
where $S_{k,u}(f,\psi_m)$ is the twisted exponential sum
\[S_{k,u}(f,\psi_m)=\sum_{x\in\BF_{q^k}^\times}\psi_m\left(\Tr_{\BQ_{q^k}/\BQ_p}\bigl(\hat f(\hat x)\bigr)\right)\omega^{-u}\left(\Nm_{\BF_{q^k}/\BF_q}(x)\right).\]
If $p\nmid d$, then $L_u(s,f,\psi_m)$ is a polynomial of degree $p^{m-1}d$ by Adolphson-Sperber \cite{AdolphsonSperber1987, AdolphsonSperber1991, AdolphsonSperber1993}, Li \cite{Li1999}, Liu-Wei \cite{LiuWei2007} and Liu \cite{Liu2007}.
We will use the twisted $T$-adic exponential sums developed by Liu-Wan \cite{LiuWan2009} and Liu \cite{Liu2002, Liu2009}.
Define the twisted $T$-adic $L$-function
\[L_u(s,f,T)=\exp\left(\sum_{k=1}^\infty S_{k,u}(f,T)\frac{s^k}{k}\right)\in 1+s\BZ_q\ldb T\rdb\ldb s\rdb, \]
where $S_{k,u}(f,T)$ is the twisted $T$-adic exponential sum
\[S_{k,u}(f,T)=\sum_{x\in\BF_{q^k}^\times}(1+T)^{\Tr_{\BQ_{q^k}/\BQ_p}(\hat f(\hat x))}\omega^{-u}\left(\Nm_{\BF_{q^k}/\BF_q}(x)\right).\]
Then $L_u(s,f,\psi_m)=L_u(s,f,\pi_m)$ where $\pi_m=\psi_m(1)-1$.
Denote by
\[C_u(s,f,T)=\prod_{j=0}^\infty L_u(q^j s,f,T)\in 1+s\BZ_q\ldb T\rdb\ldb s\rdb\]
the characteristic function, which is $T$-adic entire in $s$.
Then
\[L_u(s,f,T)=C_u(s,f,T)C_u(qs,f,T)^{-1}.\]
Since the $\pi_m^{a(p-1)}$-adic Newton polygon of $C_u(s,f,\pi_m)$ does not depend on the choice of $\psi_m$, we denote it by $\NP_{u,m}(f)$.
Denote by $\NP_{u,T}(f)$ the $T^{a(p-1)}$-adic Newton polygon of $C_u(s,f,T)$.
As shown in \cite{LiuWan2009} and \cite{Liu2007}, $\NP_{u,m}(f)$ lies over the infinity $u$-twisted Hodge polygon $H_{[0,d],u}^\infty$, which has slopes
\begin{equation}\label{eq:hodge}
\frac nd+\frac{1}{bd(p-1)}\sum_{k=1}^b u_k,\ n\in\BN.
\end{equation}
Write $0\le s_0\le\cdots\le s_{p^{m-1}d-1}\le 1$ for the $q$-adic slopes of $L_u(s,f,\pi_m)$; then the $q$-adic slopes of $C_u(s,f,\pi_m)$ are
\[j+s_i,\quad 0\le i\le p^{m-1}d-1, j\in\BN.\]
That is to say, the $\pi_m^{a(p-1)}$-adic Newton polygon of $L_u(s,f,\pi_m)$ is the restriction of $\NP_{u,m}(f)$ to $[0,p^{m-1}d]$, and it determines $\NP_{u,m}(f)$.
The prime $p$ is required to be sufficiently large in the following results.
When $\chi=\omega^{-u}$ is trivial, lower bounds of the Newton polygons were given in \cite{Zhu2014} and \cite{LiuLiuNiu2009} in terms of a polynomial in the coefficients of $f$, called the Hasse polynomial: if the Hasse polynomial is nonzero, then the Newton polygons coincide with this lower bound.
Assume that $f(x)=x^d+\lambda x^e$ is a binomial. Since the exponential sums can be transformed to the twisted case when $d$ and $e$ are not coprime, we assume $(d,e)=1$ in this paper.
When $u=0$, we list the known cases here.
\begin{itemize}
\item $p\equiv 1\bmod d$: it is well known that the Newton polygon coincides with the Hodge polygon.
\item $e=1$, see \cite[\S 1, Theorem]{Yang2003}, \cite[Theorem~1.1]{Zhu2014} and \cite[Theorem~1.1]{OuyangYang2016a}.
\item $e=d-1,p\equiv -1\bmod d$, see \cite{OuyangZhang2016}.
\item $e=2,p\equiv 2\bmod d$, see \cite{ZhangNiu2021}.
\end{itemize}
For arbitrary $u$, Liu-Niu \cite{LiuNiu2011} obtained the Newton polygons when $e=1$. Zhang-Niu \cite{ZhangNiu2021} also gave a conjectural description of the Newton polygons when $p\equiv e\bmod d$.
\subsection{Notations}
We list the notations we will use.
\begin{itemize}
\item $i,j,v,w,k,\ell,n$ indices.
\item $f(x)=x^d+\lambda x^e\in \BF_q[x]$ a binomial with $d>e\ge 1,(d,e)=1,\lambda\neq 0$.
\item $\omega^{-u}:\BF_q^\times\to\BC_p^\times$, where $\omega$ is the Teichm\"uller lifting and $0\le u\le q-2$.
\item $H_{[0,d],u}^\infty$, the infinity $u$-twisted Hodge polygon with slopes in \eqref{eq:hodge}.
\item $c=\frac{q-1}{(q-1,u)}$ the order of $\omega^{-u}$, then $u=\frac{(q-1)\mu}c$ for some $(\mu,c)=1$.
\item $P_{u,e,d}$ a polygon with slopes $w(i)$, defined in \eqref{eq:Pchiped}.
\item $b$ the least positive integer such that $p^bu\equiv u\bmod{(q-1)}$ (equivalently, $p^b\equiv 1\bmod c$).
\item $0\le u_i\le p-1$ such that $u=u_0+u_1p+\cdots+u_{a-1}p^{a-1}$, $u_i=u_{b+i}$.
\item $\ov x$ the minimal non-negative residue of $x$ modulo $d$.
\item $\delta_P$ takes value $1$ if $P$ happens; $0$ if $P$ does not happen.
\item $I_n=\set{1,\dots,n}, I_n^*=\set{0,1,\dots,n}$.
\item $S_n$ (resp. $S_n^*$) the set of permutations of $I_n$ (resp. $I_n^*$).
\item $C_{t,n}$ the minimum of $\sum_{i=0}^n \ov{e^{-1}(pi-\tau(i)+t)}$ for $\tau\in S_n^*$ and $S_{t,n}^\circ$ the set of $\tau\in S_n^*$ such that the summation reaches the minimum. Set $C_{t,-1}=0$ by convention.
\item $R_{i,\alpha}=\ov{e^{-1}(pi+\alpha)},\ r_{i,\alpha}=\ov{e^{-1}(t-\alpha-i)}$, see Proposition~\ref{pro:polygon}. We will omit the subscript $\alpha$ if there is no confusion.
\item $\bfC_{t,n,\alpha}$ the maximal size of $\set{i\in I_n^*\mid R_{i,\alpha}+r_{\tau(i),\alpha}\ge d}$ for $\tau\in S_n^*$. We will omit the subscript $\alpha$ if there is no confusion.
\item $y_{t,i}^\tau=\ov{e^{-1}(pi-\tau(i)+t)},\ x_{t,i}^\tau=d^{-1}(pi-\tau(i)+t-ey_{t,i}^\tau)$ the unique solution of $dx+ey=pi-\tau(i)+t$ with $0\le y\le d-1$.
\item $h_{n,k}, h_{u,e,d}$ the Hasse numbers defined in \eqref{eq:hasseh}.
\item $\bfp$ the minimal non-negative residue of $p$ modulo $cd$.
\item $H_{\mu,c,\bfp,e,d}\in\BZ$ a constant defined in \eqref{eq:hasse}.
\item $E(X)$ the $p$-adic Artin-Hasse series, see \eqref{eq:artin-hasse}.
\item $\pi$ a $T$-adic uniformizer of $\BQ_p\ldb T\rdb$ given by $E(\pi)=1+T$, with a fixed $d(q-1)$-th root $\pi^{\frac1{d(q-1)}}$.
\item $E_f(X)$, see \eqref{eq:E_f}.
\item $M_u=\frac{u}{q-1}+\BN$.
\item $\CL_u$ a Banach space, see \eqref{eq:CL_u}.
\item $\CB_u$ a subspace of $\CL_u$, see \eqref{eq:CB_u}.
\item $\CB=\CB_u\oplus\CB_{pu}\oplus\cdots\oplus\CB_{p^{b-1}u}$.
\item $\psi:\CL_u\ra \CL_{p^{-1}u}$ defined as $\psi\left(\sum_{v\in M_u} b_v X^v\right)=\sum_{v\in M_{p^{-1}u}} b_{pv}X^v$.
\item $\sigma\in\Gal(\BQ_q/\BQ_p)$ the Frobenius, which acts on $\CL_u$ via the coefficients.
\item $\Psi=\sigma^{-1}\circ\psi\circ E_f:\CB_u\to\CB_{p^{-1}u}$ the Dwork's $T$-adic semi-linear operator.
\item $c_n$ the coefficients of $\det(1-\Psi s\mid \CB)$, see \eqref{eq:c_n}.
\item $s_k\equiv p^k u\bmod{q-1}$ with $0\le s_k\le q-2$.
\item $\Gamma=\left(\gamma_{(v,\frac{s_k}{q-1}+i),(w,\frac{s_\ell}{q-1}+j)}\right)$ the matrix coefficient of $\Psi$ on $\CB$, see \eqref{eq:Gamma}.
\item $\Gamma^{(k)}$ the sub-matrix of $\Gamma$ defined in \eqref{eq:Gamma}.
\item $A^{(k)}=A\cap\Gamma^{(k)}$ the sub-matrix of a principal minor $A$ of $\Gamma$.
\item $\CA_n$ the set of all principal minor $A$ of order $bn$, such that every $A^{(k)}$ has order $n$.
\item $\phi(n)\in\BN\cup\set{+\infty}$ the minimal $x+y$ where $dx+ey=n,x,y\in\BN$.
\item $\gamma_{(\frac{s_k}{q-1}+i,\frac{s_\ell}{q-1}+j)}$, see \eqref{eq:gamma2}.
\item $\Arr{x}{n}:=x(x-1)\cdots(x-n+1), \Arr{x}{0}:=1$ the falling factorial.
\end{itemize}
\subsection{Main results}
In this paper, we give an explicit lower bound of Newton polygons of twisted $L$-functions of binomial $f(x)=x^d+\lambda x^e$.
We reduce the Hasse polynomial to a certain integer, defined in \eqref{eq:hasse}. Then for $p>(d-e)(2d-1)$, this lower bound coincides with the Newton polygons if and only if $p$ does not divide this constant.
Finally, we show that this condition holds for $e=d-1$.
Denote by $P_{u,e,d}$ the polygon such that
\begin{equation}\label{eq:Pchiped}
P_{u,e,d}(n)=\frac{n(n-1)}{2d}+\frac{1}{bd(p-1)}\sum_{k=1}^b \bigl(nu_k+(d-e)C_{u_k,n-1}\bigr),\ n\in\BN.
\end{equation}
Denote by $w(n)=P_{u,e,d}(n+1)-P_{u,e,d}(n)$. Then
\[w(n)=\frac nd+\frac{1}{bd(p-1)}\sum_{k=1}^b\bigl(u_k+(d-e)(C_{u_k,n}-C_{u_k,n-1})\bigr).\]
This polygon lies above the Hodge polygon $H_{[0,d],u}^\infty$ and agrees with it at the points of $d\BZ$, and $w(n+d)=1+w(n)$. Moreover, we have $w(n)\le w(n+1)$ if $p>(d-e)(2d-1)$. See Proposition~\ref{pro:polygon}.
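For small parameters, the values of $C_{t,n}$ and the periodicity $C_{t,n+d}=C_{t,n}$ of Proposition~\ref{pro:polygon} can be checked by brute force over permutations; a sketch with the toy values $d=3$, $e=2$, $p=7$ (so $p>(d-e)(2d-1)=5$):

```python
from itertools import permutations

def C(t, n, p, d, e):
    """Brute-force C_{t,n}: the minimum over permutations tau of
    {0, ..., n} of sum_i bar(e^{-1}(p*i - tau(i) + t)), where bar(x)
    is the least non-negative residue of x mod d."""
    if n < 0:
        return 0  # convention: C_{t,-1} = 0
    e_inv = pow(e, -1, d)
    return min(
        sum((e_inv * (p * i - tau[i] + t)) % d for i in range(n + 1))
        for tau in permutations(range(n + 1))
    )

# Periodicity C_{t,n+d} = C_{t,n} for d = 3, e = 2, p = 7, t = 1:
for n in range(3):
    assert C(1, n, 7, 3, 2) == C(1, n + 3, 7, 3, 2)
```

This exhaustive search is only feasible for small $n$, since it ranges over all $(n+1)!$ permutations.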
\begin{theorem}\label{thm:lower_bound}
Assume that $p>(d-e)(2d-1)$. Then $\NP_{u,T}(f)$ lies above $P_{u,e,d}$. As a corollary, $\NP_{u,m}(f)$ lies above $P_{u,e,d}$.
\end{theorem}
Define
\begin{equation}\label{eq:hasseh}
h_{n,k}:=\sum_{\tau\in S_{u_k,n}^\circ}\sgn(\tau)\prod_{i=0}^n\frac{1}{x_{u_k,i}^\tau!y_{u_k,i}^\tau!},\quad h_{u,e,d}:=\prod_{n=0}^{d-2}\prod_{k=1}^b h_{n,k}.
\end{equation}
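The Hasse numbers $h_{n,k}$ can likewise be computed by brute force for small toy parameters. The following sketch (with $t$ playing the role of a digit $u_k$) enumerates $S_{t,n}^\circ$, the permutations attaining $C_{t,n}$ with every $pi-\tau(i)+t$ representable in $d\BN+e\BN$, and evaluates the inner sum of \eqref{eq:hasseh} exactly with rational arithmetic:

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

def hasse_number(t, n, p, d, e):
    """Brute-force h_{n,k} for one twist digit t = u_k: the sum of
    sgn(tau) / (x_i! * y_i!) over tau in S^o_{t,n}, where
    d*x_i + e*y_i = p*i - tau(i) + t with 0 <= y_i <= d - 1."""
    e_inv = pow(e, -1, d)

    def decomp(k):
        # Unique solution with 0 <= y <= d-1, or None if k is not in dN + eN.
        y = (e_inv * k) % d
        x, rem = divmod(k - e * y, d)
        return (x, y) if rem == 0 and x >= 0 else None

    def sgn(tau):
        inv = sum(tau[i] > tau[j]
                  for i in range(len(tau)) for j in range(i + 1, len(tau)))
        return -1 if inv % 2 else 1

    best, total = None, Fraction(0)
    for tau in permutations(range(n + 1)):
        cost = sum((e_inv * (p * i - tau[i] + t)) % d for i in range(n + 1))
        if best is None or cost < best:
            best, total = cost, Fraction(0)  # strictly better minimum found
        if cost == best:
            parts = [decomp(p * i - tau[i] + t) for i in range(n + 1)]
            if all(pt is not None for pt in parts):
                term = Fraction(1)
                for x, y in parts:
                    term /= factorial(x) * factorial(y)
                total += sgn(tau) * term
    return total
```

For instance, with $d=3$, $e=2$, $p=7$ and $t=2$ the sketch gives $1$ for $n=0$ and $1/2$ for $n=1$; both are $p$-adic units, as Theorem~\ref{thm:coincide} requires.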
\begin{theorem}\label{thm:coincide}
Assume that $p>(d-e)(2d-1)$. Then
\begin{equation}\label{eq:equal}
\NP_{u,m}(f)=\NP_{u,T}(f)=P_{u,e,d}
\end{equation}
holds if and only if $h_{u,e,d}\in\BZ_p^\times$, if and only if $p\nmid H_{\mu,c,\bfp,e,d}$.
\end{theorem}
Here $H_{\mu,c,\bfp,e,d}\in\BZ$ is a constant defined in \eqref{eq:hasse} and $\bfp$ is the minimal positive residue of $p$ modulo $cd$. Thus we have the following corollary.
\begin{corollary}\label{cor:cong}
Assume that \eqref{eq:equal} holds for
\[a, m, p, f(x)=x^d+\lambda x^e\in\BF_{p^a}[x], u=\frac{(p^a-1)\mu}c,\]
where $b\mid a, \lambda\neq 0$ and $p>(d-e)(2d-1)$.
Then
\begin{enumerate}
\item $H_{\mu,c,\bfp,e,d}\neq 0$.
\item For any
\[a', m', p',f'(x)=x^d+\lambda' x^e\in\BF_{p^{\prime a'}}[x], u'=\frac{({p'}^{a'}-1)\mu}c,\]
where $b\mid a', \lambda'\neq0$ and $p'>(d-e)(2d-1)$, we have \eqref{eq:equal} if $p'\equiv p\bmod {cd}$ and $p'>H_{\mu,c,\bfp,e,d}$.
\item As $p'\equiv p\bmod cd$ tends to infinity, the polygons $\NP_{u,m}(f)$ and $\NP_{u,T}(f)$ tend to $H_{[0,d],u}^\infty$, which only depends on $\mu,c,\bfp,d$.
\end{enumerate}
\end{corollary}
The following result extends \cite{OuyangZhang2016}, as they considered the untwisted case with an additional condition $p\equiv -1\bmod d$.
\begin{theorem}\label{thm:examples}
Assume that $e=d-1$.
We have $\NP_{u,m}(f)=\NP_{u,T}(f)=P_{u,e,d}$ if $p>c(d^2-d+1)$.
\end{theorem}
We give the following conjecture, which generalizes the conjecture in \cite{ZhangNiu2021}.
Note that $h_{u,e,d}$ may be zero since $S_{u_k,n}^\circ$ may be empty, so we require that $p$ is large with respect to $c$, as in Corollary~\ref{cor:cong} and Theorem~\ref{thm:examples}.
\begin{conjecture}\label{con:lower_bound}
If $p$ is large enough with respect to $c,d$, then $\NP_{u,m}(f)=\NP_{u,T}(f)=P_{u,e,d}$.
\end{conjecture}
\section{The lower bound}
\subsection{The property of the lower bound polygon}
For any integer $t$, we denote
\[C_{t,n}=\min_{\tau\in S_n^*} \sum_{i=0}^n \ov{e^{-1}(pi-\tau(i)+t)}.\]
We set $C_{t,-1}=0$ by convention.
For any integer $\alpha$, we denote
\[R_{i,\alpha}=\ov{e^{-1}(pi+\alpha)},\ r_{i,\alpha}=\ov{e^{-1}(t-\alpha-i)}\]
and
\[\bfC_{t,n,\alpha}=\max\#\set{i\in I_n^*\mid R_{i,\alpha}+r_{\tau(i),\alpha}\ge d}.\]
\begin{proposition}\label{pro:polygon}
(1) For any $\alpha$, we have
\[C_{t,n}=\sum_{i=0}^n(R_{i,\alpha}+r_{i,\alpha})-d \bfC_{t,n,\alpha}.\]
(2) For any $\alpha$, we have
\[\bfC_{t,n+d,\alpha}=d-1+\bfC_{t,n,\alpha},\quad C_{t,n+d}=C_{t,n}.\]
Thus $w(n+d)=1+w(n)$ and $P_{u,e,d}(dn)=H_{[0,d],u}^\infty(dn)$.
(3) If $p>(d-e)(2d-1)$, we have $w(n)\le w(n+1)$.
\end{proposition}
\begin{proof}
We omit the subscript $\alpha$ throughout this proof.
(1) It follows from
\[\ov{e^{-1}(pi-\tau(i)+t)}=R_i+r_{\tau(i)}-d\delta_{R_i+r_{\tau(i)}\ge d}.\]
(2) We have
\[\bfC_{t,n}=\max_{\tau\in S_n^*}\#\set{i\in I_n^*\mid R_i\ge d-r_{\tau(i)}}.\]
Note that
\[\set{R_i\mid i\in I_{n+d}^*}=\set{R_i\mid i\in I_n^*}\cup\set{0,1,\dots,d-1}, \]
\[\set{d-r_i\mid i\in I_{n+d}^*}=\set{d-r_i\mid i\in I_n^*}\cup\set{d,1,\dots,d-1}. \]
We may drop the $0$ and $d$ since they do not affect the size.
Apply Lemma~\ref{lem:remove_one_element} $(d-1)$ times to get $\bfC_{t,n+d}=d-1+\bfC_{t,n}$.
Since
\[\sum_{i=n+1}^{n+d}(R_i+r_i)=2\sum_{j=0}^{d-1}j=d(d-1),\]
we have $C_{t,n+d}=C_{t,n}$. Thus $w(n+d)=1+w(n)$.
Note that $C_{t,n+d}=C_{t,n}$ also holds for $n=-1$. Hence $C_{t,dn-1}=0$ and $P_{u,e,d}(dn)=H_{[0,d],u}^\infty(dn)$.
(3) Denote by $\delta=\delta_{R_n+r_n\ge d}$. For any $\tau\in S_n^*$, write $i=\tau(n)$, $j=\tau^{-1}(n)$ and $\tau_1=(ni)\tau$. Then $\tau_1(n)=n$, $\tau_1(j)=i$ and
\[\begin{split}
&\delta+\#\set{i\in I_{n-1}^*\mid R_i+r_{\tau_1(i)}\ge d}-\#\set{i\in I_n^*\mid R_i+r_{\tau(i)}\ge d}\\
=&\delta+\delta_{R_j+r_i\ge d}-\delta_{R_j+r_n\ge d}-\delta_{R_n+r_i\ge d}.
\end{split}\]
If this is $-2$, then $2d>R_n+r_n+R_j+r_i\ge 2d$, which is impossible. Thus $\delta+\bfC_{t,n-1}-\bfC_{t,n}\ge -1$.
Any $\sigma\in S_{n-1}^*$ can be viewed as an element $\sigma_1\in S_n^*$ fixing $n$. Thus
\[\delta+\#\set{i\in I_{n-1}^*\mid R_i+r_{\sigma(i)}\ge d}=\#\set{i\in I_n^*\mid R_i+r_{\sigma_1(i)}\ge d}.\]
and then $\delta+\bfC_{t,n-1}\le \bfC_{t,n}$.
Now
\[\begin{split}
&C_{t,n}-C_{t,n-1}\\
=&R_n+r_n-d(\bfC_{t,n}-\bfC_{t,n-1})\\
=&\ov{e^{-1}(pn-n+t)}+d(\delta+\bfC_{t,n-1}-\bfC_{t,n})
\end{split}\]
lies in $[-d,d-1]$.
Therefore,
\[\begin{split}
&w(n)-w(n-1)\\
=&\frac{1}{d}+\frac{d-e}{bd(p-1)}\sum_{k=1}^b(C_{u_k,n}-2C_{u_k,n-1}+C_{u_k,n-2})\\
\ge&\frac{1}{d}+\frac{(d-e)(1-2d)}{d(p-1)}\ge 0
\end{split}\]
since $p>(d-e)(2d-1)$.
\end{proof}
\begin{lemma}\label{lem:remove_one_element}
Let $A=\set{a_0,\dots,a_m}$ and $B=\set{b_0,\dots,b_m}$ be two multi-sets of integers.
Assume that $a_0\ge b_0$ and for any $i>0$, $b_i>a_0$ or $b_i\le b_0$.
Then
\[\max_{\tau\in S_m^*}\#\set{i\in I_m^*\mid a_i\ge b_{\tau(i)}}=1+\max_{\sigma\in S_m}\#\set{i\in I_m\mid a_i\ge b_{\sigma(i)}}.\]
\end{lemma}
\begin{proof}
Every permutation in $S_m$ can be viewed as a permutation in $S_m^*$ fixing $0$, thus ``$\ge$'' holds trivially. Write $i=\tau(0)$, $j=\tau^{-1}(0)$ and $\tau_1=(0i)\tau$. Then $\tau_1(0)=0$ and $\tau_1(j)=i$. Thus
\[\begin{split}
&\#\set{i\in I_m^*\mid a_i\ge b_{\tau_1(i)}}-\#\set{i\in I_m^*\mid a_i\ge b_{\tau(i)}}\\
=&1+\delta_{a_j\ge b_i}-\delta_{a_j\ge b_0}-\delta_{a_0\ge b_i}.
\end{split}\]
If this is negative, then $a_0\ge b_i>a_j\ge b_0$, which is impossible.
Thus ``$\le$'' holds.
\end{proof}
\subsection{The twisted \texorpdfstring{$T$}{T}-adic Dwork's trace formula}
This part is almost the same as \cite[\S\S 2, 3]{LiuNiu2011}.
Denote by
\begin{equation}\label{eq:artin-hasse}
E(X)=\exp\left(\sum_{i=0}^\infty p^{-i}X^{p^i}\right)=\sum_{n=0}^\infty \lambda_n X^n\in\BZ_p\ldb X\rdb
\end{equation}
the $p$-adic Artin-Hasse series.
Then $\lambda_n=1/n!$ if $n<p$.
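The coefficients $\lambda_n$ can be computed exactly from the recurrence $n\lambda_n=\sum_j j\,s_j\lambda_{n-j}$, obtained by differentiating $E(X)=\exp(S(X))$ with $S(X)=\sum_i p^{-i}X^{p^i}$; a sketch:

```python
from fractions import Fraction

def artin_hasse_coeffs(p, n_max):
    """Coefficients lambda_0, ..., lambda_{n_max} of the p-adic
    Artin-Hasse series E(X) = exp(sum_i p^{-i} X^{p^i}), computed via
    E'(X) = S'(X) E(X), i.e. n*lambda_n = sum_j j*s_j*lambda_{n-j},
    where s_j = p^{-i} if j = p^i and s_j = 0 otherwise."""
    s, q, i = {}, 1, 0
    while q <= n_max:
        s[q] = Fraction(1, p ** i)
        q, i = q * p, i + 1
    lam = [Fraction(1)]
    for n in range(1, n_max + 1):
        lam.append(sum(j * s[j] * lam[n - j] for j in s if j <= n) / n)
    return lam
```

For $p=5$ this gives $\lambda_n=1/n!$ for $n<5$ and $\lambda_5=5/24$, which is $5$-integral, in accordance with $E(X)\in\BZ_p\ldb X\rdb$.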
Denote by
\begin{equation}\label{eq:E_f}
E_f(X)=E(\pi X^d)E(\pi\hat\lambda X^e)=\sum_{n=0}^\infty \gamma_n X^n.
\end{equation}
Then
\[\gamma_k=\sum\pi^{x+y}\lambda_x\lambda_y\hat\lambda^y,\]
where $(x,y)$ runs through non-negative solutions of $dx+ey=k$.
Denote by $M_u=\frac{u}{q-1}+\BN$.
Define
\begin{equation}\label{eq:CL_u}
\CL_u=\set{\left.\sum_{v\in M_u} b_v \pi^{\frac{v}{d}}X^v\;\right|\; b_v\in\BZ_q\ldb \pi^{\frac{1}{d(q-1)}}\rdb}
\end{equation}
and
\begin{equation}\label{eq:CB_u}
\CB_u=\set{\left.\sum_{v\in M_u} b_v \pi^{\frac{v}{d}}X^v\in\CL_u\;\right|\;\ord_\pi b_v\to+\infty\text{ as }v\to+\infty}.
\end{equation}
Define a map
\begin{equation}\label{eq:psi}
\begin{split}
\psi:\CL_u&\lra \CL_{p^{-1}u}\\
\sum_{v\in M_u} b_v X^v&\longmapsto \sum_{v\in M_{p^{-1}u}} b_{pv}X^v.
\end{split}
\end{equation}
The power series $E_f$ defines a map on $\CB_u$ via multiplication.
Let $\sigma\in \Gal(\BQ_q/\BQ_p)$ be the Frobenius, which acts on $\CL_u$ via the coefficients.
Then the Dwork's $T$-adic semi-linear operator $\Psi=\sigma^{-1}\circ\psi\circ E_f$ sends $\CB_u$ to $\CB_{p^{-1}u}$.
Hence $\Psi$ acts on
\[\CB:=\bigoplus_{i=0}^{b-1}\CB_{p^iu}.\]
We have a linear map
\[\Psi^a=\psi^a\circ\prod_{i=0}^{a-1}E_f^{\sigma^i}(X^{p^i})\]
on $\CB$ over $\BZ_q\ldb\pi^{\frac{1}{d(q-1)}}\rdb$.
Since $\Psi$ is completely continuous in the sense of \cite{Serre1962}, the following determinants are well-defined.
\begin{theorem}\label{thm:characterize Newton polygon}
We have
\[C_u(s,f,T)=\det\left(1-\Psi^a s \;\left|\; \CB_u/\BZ_q\ldb\pi^{\frac{1}{d(q-1)}}\rdb\right.\right).\]
Thus the $T$-adic Newton polygon of $C_u(s,f,T)$ is the lower convex closure of
\[\left(n,\frac1b\ord_T(c_{abn})\right),\ n\in\BN,\]
where
\begin{equation}\label{eq:c_n}
\det\left(1-\Psi s\;\left|\;\CB/\BZ_p\ldb\pi^{\frac{1}{d(q-1)}}\rdb\right.\right)=\sum_{n=0}^\infty (-1)^n c_n s^n.
\end{equation}
\end{theorem}
\begin{proof}
See \cite[Theorem~4.8]{LiuWan2009}, \cite{Liu2007}, \cite[Theorems~2.1, 2.2]{LiuLiuNiu2009} and \cite[Theorems~2.1, 5.3]{LiuNiu2011}.
\end{proof}
Write $s_k\equiv p^k u\bmod{q-1}$ with $0\le s_k\le q-2$. Then $s_{b-k}=s_{-k}=u_k+u_{k+1}p+\cdots+u_{k+a-1}p^{a-1}$.
Let $\xi_1,\dots,\xi_a$ be a normal basis of $\BQ_q$ over $\BQ_p$.
The space $\CB$ has a basis
\[\set{\xi_v(\pi^{\frac1d}X)^{\frac{s_k}{q-1}+i}}_{(i,v,k)\in\BN\times I_a\times I_b}\]
over $\BZ_p\ldb\pi^{\frac{1}{d(q-1)}}\rdb$.
Let $\Gamma=\left(\gamma_{(v,\frac{s_k}{q-1}+i),(w,\frac{s_\ell}{q-1}+j)}\right)_{\BN\times I_a\times I_b}$ be the matrix of $\Psi$ on $\CB$ with respect to this basis.
Then
\begin{equation}\label{eq:Gamma}
\Gamma=\begin{pmatrix}
0&\Gamma^{(1)}&0&\cdots&0\\
0&0&\Gamma^{(2)}&\cdots&0\\
\vdots&\vdots&\vdots&\ddots&\vdots\\
0&0&0&\cdots&\Gamma^{(b-1)}\\
\Gamma^{(b)}&0&0&\cdots&0
\end{pmatrix},
\end{equation}
where
\[\Gamma^{(k)}=\left(\gamma_{(v,\frac{s_{k-1}}{q-1}+i),(w,\frac{s_k}{q-1}+j)}\right)_{\BN\times I_a}.\]
Hence we have
\[\det\left(1-\Psi s\;\left|\; \CB/\BZ_p\ldb\pi^{\frac{1}{d(q-1)}}\rdb\right.\right)=\det(1-\Gamma s)=\sum_{n=0}^\infty (-1)^{bn} c_{bn}s^{bn}\]
with $c_n=\sum \det(A)$, where $A$ runs through all principal minors of order $n$, see \cite{LiZhu2005}.
Denote by $A^{(k)}=A\cap \Gamma^{(k)}$ as a minor of $\Gamma^{(k)}$.
If $A$ has order $bn$, but the order of some $A^{(k)}$ is not $n$, then $\det(A)=0$.
Denote by $\CA_n$ the set of all principal minors of order $bn$, such that every $A^{(k)}$ has order $n$.
Then
\begin{equation}\label{eq:expression in terms of minors}
c_{bn}=\sum_{A\in\CA_n}\det(A)=(-1)^{n(b-1)}\sum_{A\in\CA_n}\prod_{k=1}^b \det(A^{(k)}).
\end{equation}
\begin{theorem}\label{thm:lower_coe}
If $p>(d-e)(2d-1)$, then
\[\ord_\pi(\det(A))\ge ab(p-1)P_{u,e,d}(n+1)\]
for any $A\in\CA_{a(n+1)}$.
\end{theorem}
\begin{proof}[Proof of Theorem~\ref{thm:lower_bound}]
By Theorem~\ref{thm:lower_coe} and \eqref{eq:expression in terms of minors}, we have
\[\ord_\pi(c_{abn})\ge ab(p-1)P_{u,e,d}(n).\]
Thus $\NP_{u,T}(f)$ lies above $P_{u,e,d}$ by Theorem~\ref{thm:characterize Newton polygon}.
Note that $\NP_{u,m}(f)\ge \NP_{u,T}(f)$ by definition. Therefore, $\NP_{u,m}(f)$ also lies above $P_{u,e,d}$.
\end{proof}
\subsection{Estimation on \texorpdfstring{$c_n$}{coefficients}}
Denote by
\[\phi(n)=\min\set{x+y\mid dx+ey=n,x,y\in\BN}\in\BN\cup\set{+\infty}.\]
Here the minimal element in $\emptyset$ is regarded as $+\infty$.
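Concretely, $\phi$ can be evaluated by direct search over the representations of $n$; for representable $n$ it matches the closed form $\phi(n)=d^{-1}\bigl(n+(d-e)\ov{e^{-1}n}\bigr)$ of Eq.~\eqref{eq:phi}. A sketch, encoding $+\infty$ as \texttt{None}:

```python
def phi(n, d, e):
    """phi(n) = min{x + y : d*x + e*y = n, x, y in N}; None encodes
    the +infinity of an empty set of representations."""
    best = None
    for y in range(n // e + 1):
        rem = n - e * y  # candidate d*x part, always >= 0 here
        if rem % d == 0:
            cand = rem // d + y
            best = cand if best is None else min(best, cand)
    return best
```

For $d=3$, $e=2$ one has $\phi(2)=1$, $\phi(5)=2$, while $\phi(1)=+\infty$ since $1\notin 3\BN+2\BN$.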
For $i,j\in\BN,k\in I_b$, define
\begin{equation}\label{eq:gamma2}
\gamma_{(\frac{s_{k-1}}{q-1}+i,\frac{s_k}{q-1}+j)}=
\pi^{\frac{s_k-s_{k-1}}{d(q-1)}+\frac{j-i}{d}}\gamma_{pi-j+u_{-k}}.
\end{equation}
Then
\[\xi_w^{\sigma^{-1}}\gamma_{(\frac{s_{k-1}}{q-1}+i,\frac{s_k}{q-1}+j)}^{\sigma^{-1}}=\sum_{v\in I_a}\gamma_{(v,\frac{s_{k-1}}{q-1}+i),(w,\frac{s_k}{q-1}+j)}\xi_v\]
and
\begin{equation}\label{eq:ord terms}
\begin{split}
&\ord_\pi\left(\gamma_{(v,\frac{s_{k-1}}{q-1}+i),(w,\frac{s_k}{q-1}+j)}\right) \ge \ord_\pi\left(\gamma_{(\frac{s_{k-1}}{q-1}+i,\frac{s_k}{q-1}+j)}\right)\\
=&\frac{s_k-s_{k-1}}{d(q-1)}+\frac{j-i}{d}+\phi(pi-j+u_{-k}).
\end{split}
\end{equation}
\begin{lemma}\label{lem:lower_bound}
For any $\tau\in S_n^*$ and integer $t$,
\[\sum_{i=0}^n \phi(pi-\tau(i)+t)\ge d^{-1}\left(\frac{(p-1)n(n+1)}2+(n+1)t+(d-e)C_{t,n}\right).\]
\end{lemma}
\begin{proof}
We may assume that $pi-\tau(i)+t\in d\BN+e\BN$ for each $i$.
One can easily show that
\[\phi(k)=d^{-1}\left(k+(d-e)\ov{e^{-1}k}\right)\]
and the minimum arrives at
\[(x,y)=\left(d^{-1}(k-e\ov{e^{-1}k}),\ov{e^{-1}k}\right).\]
Thus
\begin{equation}\label{eq:phi}
\phi(pi-j+t)=d^{-1}\left(pi-j+t+(d-e) \ov{e^{-1}(pi-j+t)}\right).
\end{equation}
The result then follows easily.
\end{proof}
\begin{lemma}\label{lem:d copies}
Assume $a_i=a_{i+m}$ and $b_i=b_{i+m}$ for any $i$. Then
\[\max_{\tau\in S_{md}}\#\set{i\in I_{md}\mid a_i\ge b_{\tau(i)}}=d\max_{\sigma\in S_m}\#\set{i\in I_m\mid a_i\ge b_{\sigma(i)}}.\]
\end{lemma}
\begin{proof}
We may assume that $a_k\ge b_k$ and, for any $i\neq k$, $b_i>a_k$ or $b_i\le b_k$; otherwise both sides are zero.
We may assume that $k=m$ for simplicity.
Applying Lemma~\ref{lem:remove_one_element} to $(a_{mi},b_{mi})$, we get
\[\max_{\tau\in S_{md}}\#\set{i\in I_{md}\mid a_i\ge b_{\tau(i)}}
=d+\max_{\sigma}\#\set{i\in I_{md}-m\BZ\mid a_i\ge b_{\tau(i)}},\]
where $\sigma$ runs through permutations on $I_{md}-m\BZ$.
Since
\[\max_{\tau\in S_{m}}\#\set{i\in I_{m}\mid a_i\ge b_{\tau(i)}}
=1+\max_{\sigma}\#\set{i\in I_{m}-\set{m}\mid a_i\ge b_{\tau(i)}},\]
where $\sigma$ runs through permutations on $I_{m}-\set m$, the result then follows by induction on $m$.
\end{proof}
\begin{lemma}\label{lem:lower_bound N}
For any $i\in \BN\times I_a$, we write $i=(i',i'')$.
Then for any permutation $\tau$ on $I_n^*\times I_a$,
\[\sum_{i\in I_n^*\times I_a} \phi(pi'-\tau(i)'+t)\ge\frac{a}{d}\left(\frac{(p-1)n(n+1)}{2}+(n+1)t+(d-e)C_{t,n}\right).\]
\end{lemma}
\begin{proof}
By Eq.~\eqref{eq:phi}, we only need to show that
\[\min_{\tau}\sum_{i\in I_n^*\times I_a}\ov{e^{-1}(pi-\tau(i)+t)}=a C_{t,n}.\]
By Proposition~\ref{pro:polygon}, it can be reduced to
\[\max_\tau\#\set{i\in I_n^*\times I_a\mid R_{i',\alpha}+r_{\tau(i)',\alpha}\ge d}=a \bfC_{t,n,\alpha}.\]
This follows from Lemma~\ref{lem:d copies}.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:lower_coe}]
This proof is similar to \cite[Theorem~3.2]{ZhangNiu2021}.
Denote by $\CR$ the set of indices of $A$ and
\[\CR^{(k)}\times\set k=\CR\cap(\BN\times I_a\times\set k),\quad \CR^{(0)}=\CR^{(b)}.\]
Then $\#\CR^{(k)}=a(n+1)$,
\[A^{(k)}=\left(\gamma_{(v,\frac{s_{k-1}}{q-1}+i),(w,\frac{s_k}{q-1}+j)}\right)_{(i,v)\in \CR^{(k-1)},(j,w)\in\CR^{(k)}}\]
and
\[\det(A)=\prod_{k=1}^b\det(A^{(k)})=\sum_{\tau}\sgn(\tau)\prod_{i\in\CR}\gamma_{i,\tau(i)},\]
where $\tau$ runs through permutations of $\CR$ such that $\tau(\CR^{(k-1)})=\CR^{(k)}$.
Here,
\[\ord_\pi\left(\prod_{i\in\CR}\gamma_{i,\tau(i)}\right)\ge S_\CR^\tau\]
by \eqref{eq:ord terms}, where
\[\begin{split}
S_\CR^\tau&=\sum_{k=1}^b\sum_{i\in \CR^{(k-1)}} \left(\frac{\tau(i)'-i'}{d}+\phi\bigl(pi'-\tau(i)'+u_{-k}\bigr)\right)\\
&\ge d^{-1}\sum_{k=1}^b\sum_{i\in\CR^{(k-1)}}\left((p-1)i'+(d-e)\ov{e^{-1}(pi'-\tau(i)'+u_{-k})}\right)
\end{split}\]
by Eq.~\eqref{eq:phi}.
By Lemma~\ref{lem:lower_bound N},
\[S_\CN^\sigma\ge ab(p-1)P_{u,e,d}(n+1),\]
where $\CN=I_n^*\times I_a\times I_b$.
By \eqref{eq:expression in terms of minors}, we only need to show that for any permutation $\tau$ of $\CR\neq\CN$ such that $\tau(\CR^{(k-1)})=\CR^{(k)}$, there is a permutation $\sigma$ of $\CN$ such that $\sigma(\CN^{(k-1)})=\CN^{(k)}$ and $S_\CR^\tau\ge S_\CN^\sigma$.
Assume $\#(\CR\bs \CN)=m$. Write $T=(\CN\bs \CR)\cup\tau^{-1}(\CR\bs\CN)$, then $\#T=2m$ and $\CN\bs T=\CN\cap\tau^{-1}(\CN\cap\CR)$. Thus $\tau(\CN\bs T)\subset \CN$.
Note that for $i\in\CR\bs\CN,j\in\CN\bs\CR$, $i'\ge n+1\ge j'+1$.
We can choose a permutation $\sigma$ of $\CN$ such that $\sigma(\CN^{(k-1)})=\CN^{(k)}$ and $\sigma=\tau$ on $\CN\bs T$. Then
\[\begin{split}
&d(S_\CR^\tau-S_\CN^\sigma)\\
\ge&\left(\sum_{i\in \CR\bs\CN}-\sum_{i\in\CN\bs\CR}\right)(p-1)i'-\sum_{k=1}^b\sum_{i\in T\cap\CN^{(k)}}(d-e)\ov{e^{-1}(pi'-\tau(i)'+u_{-k})}\\
\ge&m(p-1)-2m(d-e)(d-1)>0.
\end{split}\]
The result then follows.
\end{proof}
\section{The Newton polygons}
\begin{lemma}\label{lem:rigidity}
The Newton polygon $\NP_m(f)$ lies over $\NP_T(f)$. Moreover, if the equality holds for one $m$, then it holds for all $m$.
\end{lemma}
\begin{proof}
See \cite[Theorem~2.3]{LiuWan2009} and \cite[Theorem~5.5]{LiuNiu2011}.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:coincide}]
(1) Since $w(d+i)=1+w(i)$, both $\NP_{u,m}(f)$ and $P_{u,e,d}$ pass through the points $\bigl(di,H_{[0,d],u}^\infty(di)\bigr)$, so we only need to show that $\NP_{u,m}(f)=P_{u,e,d}$ on $[1,d-1]$. By Lemma~\ref{lem:rigidity}, we may assume that $m=1$.
Assume $0\le n\le d-2$. Recall that $S_{t,n}^\circ$ is the set of $\tau\in S_n^*$ such that
\[\#\set{i\in I_n^*\mid R_{i,\alpha}+r_{\tau(i),\alpha}\ge d}=\bfC_{t,n,\alpha}\]
and every $pi-\tau(i)+t\in d\BN+e\BN$.
Equivalently, the equality in Lemma~\ref{lem:lower_bound} holds.
Recall that
\[y_{t,i}^\tau=\ov{e^{-1}(pi-\tau(i)+t)},\quad x_{t,i}^\tau=\phi(pi-\tau(i)+t)-y_{t,i}^\tau.\]
Denote by $m$ the right hand side in Lemma~\ref{lem:lower_bound}.
Then we have
\[\begin{split}
&\det(\gamma_{pi-j+t})_{i,j\in I_n^*}
\equiv\pi^m\sum_{\tau\in S_{t,n}^\circ}\sgn(\tau)\prod_{i=0}^n\lambda_{x_{t,i}^\tau}\lambda_{y_{t,i}^\tau}\hat\lambda^{y_{t,i}^\tau}
\\
\equiv&\pi^m\hat\lambda^{v_{t,n}}\sum_{\tau\in S_{t,n}^\circ}\sgn(\tau)\prod_{i=0}^n\frac{1}{x_{t,i}^\tau!y_{t,i}^\tau!}\mod \pi^{m+1},
\end{split}\]
where
\[v_{t,n}:=\sum_{i=0}^n y_{t,i}^\tau=\sum_{i=1}^n(R_{i,\alpha}+r_{i,\alpha})-d\bfC_{t,n,\alpha}\]
is independent of $\tau\in S_n^\circ$.
Recall that $S_\CR^\tau>S_\CN^\sigma$ in the proof of Theorem~\ref{thm:lower_coe}.
Then modulo $\pi^{ab(p-1)P_{u,e,d}(n+1)+1}$, we have
\[\begin{split}
c_{ab(n+1)}&=\sum_{A\in\CA_{a(n+1)}}\det(A)\equiv \det\bigl((\gamma_{i,j})_{i,j\in \CN}\bigr)\\
&=\pm\Nm\left(\prod_{k=1}^b\det\left(\gamma_{(\frac{s_{k-1}}{q-1}+i,\frac{s_k}{q-1}+j)}\right)_{i,j\in I_n^*}\right)\\
&=\pm \Nm\left(\prod_{k=1}^b\det(\gamma_{pi-j+u_k})_{i,j\in I_n^*}\right)\\
&\equiv\pm\pi^{ab(p-1)P_{u,e,d}(n+1)}\Nm\left(\prod_{k=1}^b \hat\lambda^{v_{u_k,n}} h_{n,k}\right)
\end{split}\]
by \eqref{eq:expression in terms of minors}, \eqref{eq:gamma2}, \cite[Lemma~4.4]{LiuLiuNiu2009} and \cite[Lemma~3.5]{LiuNiu2011}.
Hence we get the first assertion by replacing $\pi$ by $\pi_1$.
(2) Denote by $t_k$ the minimal non-negative residue of $p^{-k} \mu$ modulo $c$.
Then $u_k=\frac{t_{k+1} p-t_k}{c}$.
Write $\bfp$ for the minimal positive residue of $p$ modulo $cd$, and write $p=cd\ell+\bfp$.
Denote by
\[\bfu_k=\frac{t_{k+1} \bfp-t_k}c,\
\bfy_{\bfu_k,i}^\tau=\ov{-e^{-1}(\bfp i-\tau(i)+\bfu_k)},\
\bfx_{\bfu_k,i}^\tau=\frac{\bfp i-\tau(i)+\bfu_k-e\bfy_{\bfu_k,i}^\tau}{d}.\]
Then
\[u_k=t_{k+1} d\ell+\bfu_k,\
y_{u_k,i}^\tau=\bfy_{\bfu_k,i}^\tau,\
x_{u_k,i}^\tau=(c i+t_{k+1})\ell+\bfx_{\bfu_k,i}^\tau.\]
It's easy to see that $\bfx_{\bfu_k,i}^\tau<\bfp$ and $x_{u_k,i}^\tau<p$.
Since
\[\bfx_{\bfu_k,i}^\tau\ge \frac{-n-e(d-1)}{d}>-e-1,\]
we have $\bfx_{\bfu_k,i}^\tau\ge -e$.
Note that $y_{t,i}^\tau$ does not depend on $\ell$.
Denote by
\begin{equation}\label{eq:hasse}
\begin{split}
H_{\mu,c,\bfp,e,d}=&\prod_{k=1}^b\prod_{n=0}^{d-2}\sum_{\tau\in S_n^\circ} \sgn(\tau)\prod_{i=1}^n \Arr{d-1}{d-1-\bfy_{\bfu_k,i}^\tau}
\times (cd)^{\bfp-1-\bfx_{\bfu_k,i}^\tau}\\
&\times \Arr{-\dfrac{\bfp(ci+t_{k+1})}{cd}+\bfp-1}{\bfp-1-\bfx_{\bfu_k,i}^\tau}\in \BZ.
\end{split}
\end{equation}
Then
\[\begin{split}
&H_{\mu,c,\bfp,e,d}\\
\equiv&\prod_{k=1}^b\prod_{n=0}^{d-2}\sum_{\tau\in S_n^\circ} \sgn(\tau)\prod_{i=1}^n \Arr{d-1}{d-1-\bfy_{\bfu_k,i}^\tau}
\times (cd)^{\bfp-1-\bfx_{\bfu_k,i}^\tau}
\\
&\times \Arr{(ci+t_{k+1})\ell+\bfp-1}{\bfp-1-\bfx_{\bfu_k,i}^\tau}\\
=&h_{u,e,d}\prod_{k=1}^b\prod_{n=0}^{d-2}\prod_{i=1}^n (d-1)! (cd)^{\bfp-1-\bfx_{\bfu_k,i}^\tau} \bigl((ci+t_{k+1})\ell+\bfp-1\bigr)!\mod p
\end{split}\]
Note that $d-1,(ci+t_{k+1})\ell+\bfp-1<p$.
Thus
\[\NP_{u,m}(f)=\NP_{u,T}(f)=P_{u,e,d}\iff p\nmid H_{\mu,c,\bfp,e,d}\]
for $p>(d-e)(2d-1)$.
\end{proof}
\begin{proof}[Proof of Corollary~\ref{cor:cong}]
Since $p\nmid H_{\mu,c,\bfp,e,d}$, we have $H_{\mu,c,\bfp,e,d}\neq 0$.
Hence $p'\nmid H_{\mu,c,\bfp,e,d}$ for any $p'>H_{\mu,c,\bfp,e,d}$.
Note that
\[\sum_{k=1}^b u_k=\frac{p-1}{c}\sum_{k=1}^b t_k,\]
thus $H_{[0,d],u}^\infty$ only depends on $\mu,c,\bfp,d$.
Since
\[P_{u,e,d}(n)-H_{[0,d],u}^\infty(n)=\frac{d-e}{bd(p-1)}\sum_{k=1}^bC_{u_k,n-1}\le \frac{(d-e)\ov n(d-1)}{d(p-1)}\]
tends to zero as $p$ tends to infinity, the result then follows.
\end{proof}
\begin{example}
Assume that $p\equiv 1\bmod d$ and $d\mid u_k$ for all $k$. Write $p=dk+1$ and $t=u_k$. Then
\[R_i:=R_{i,0}=\ov{e^{-1}i},\quad r_i:=r_{i,0}=\ov{-e^{-1}i},
\quad \bfC_{t,n}=n,\quad S_n^\circ=\set{1}\]
and $x_{t,i}^1=\frac{(p-1)i+t}{d}, y_{t,i}^1=0$. Since
\[h_{n,k}=\left(\prod_{i=0}^n\left(\frac{(p-1)i+u_k}{d}\right)!\right)^{-1}\in\BZ_p^\times,\]
we obtain that the Newton polygons coincide with $H_{[0,d],u}^\infty$.
\end{example}
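The unit claim in this example is easy to check numerically. The following sketch uses the illustrative values $p=7$, $d=3$, $u_k=3$ (not taken from the text) and verifies that each factorial argument $\frac{(p-1)i+u_k}{d}$ is a nonnegative integer smaller than $p$, so the corresponding factorial is indeed a $p$-adic unit.

```python
# Check, for illustrative values p = 7, d = 3, t = u_k = 3 (so p ≡ 1 mod d
# and d | t), that x_{t,i}^1 = ((p-1)i + t)/d is a nonnegative integer < p
# for all 0 <= i <= n <= d-2, hence each factorial lies in Z_p^×.
p, d, t = 7, 3, 3
assert p % d == 1 and t % d == 0

for n in range(d - 1):          # n = 0, ..., d-2
    for i in range(n + 1):
        num = (p - 1) * i + t
        assert num % d == 0     # x_{t,i}^1 is an integer
        x = num // d
        assert 0 <= x < p       # hence x! is not divisible by p
```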
\section{The case \texorpdfstring{$e=d-1$}{e=d-1}}
If $pi-\tau(i)+t\notin d\BN+e\BN$ for some $i$, then $x_{t,i}^\tau<0$.
Set $1/k!=0$ for negative integer $k$.
Then
\[h_{n,k}=\sum_{\tau\in S_{u_k,n}^\bullet}\sgn(\tau)\prod_{i=1}^n\frac{1}{x_{u_k,i}^\tau!y_{u_k,i}^\tau!},\]
where $S_{t,n}^\bullet$ is the set of $\tau\in S_n^*$ such that the size of $\set{i\in I_n^*\mid R_{i,\alpha}+r_{\tau(i),\alpha}\ge d}$ is $\bfC_{t,n,\alpha}$.
\begin{lemma}\label{eq:estimate_factorial}
Denote by $c(j)=\Arr{-\alpha j+\beta}{j}$.
(1) If $u_i=\alpha v_i+\beta$ for any $i$, then the matrix
\begin{equation}\label{eq:matrix transform}
\bigl(\Arr{u_i}{j}\cdot \Arr{v_i+n}{n-j}\bigr)_{0\le j\le n}
\implies \left(c(j) v_i^{n-j}\right)_{0\le j\le n}
\end{equation}
by elementary column operations of the third type.
(2) If $u_i\equiv \alpha v_i+\beta\bmod p$ for any $i$,
then \eqref{eq:matrix transform} holds, modulo $p$, by elementary column operations of the third type.
\end{lemma}
\begin{proof}
(1) Write
\[\Arr{\alpha x+\beta}{j}=\sum_{t=0}^{j}c_t(j)\cdot \Arr{x+j}{t},\]
then $c_0(j)=c(j)$ and
\begin{equation}\label{eq:expansion}
\begin{split}
&\Arr{u_i}{j}\cdot \Arr{v_i+n}{n-j}\\
=&\sum_{t=0}^j c_t(j)\cdot \Arr{v_i+j}{t}\cdot\Arr{v_i+n}{n-j}\\
=&\sum_{t=0}^j c_t(j)\cdot \Arr{v_i+n}{n-j+t}.
\end{split}
\end{equation}
Hence by elementary column operations of the third type,
\[ \bigl(\Arr{u_i}{j}\cdot \Arr{v_i+n}{n-j}\bigr)
\implies \bigl(c(j)\cdot \Arr{v_i+n}{n-j}\bigr)
\implies \left(c(j)v_i^{n-j}\right).\]
(2) In this case, \eqref{eq:expansion} holds modulo $p$. The result then follows easily.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:examples}]
Since $p>c(d^2-d+1)$, we have $p>(d-e)(2d-1)$.
Denote by $t=u_k$ and $t_k$ the minimal non-negative residue of $p^{-k} \mu$ modulo $c$.
Then $t=\frac{t_{k+1} p-t_k}{c}$.
If $c>1$, then $t\ge\frac{p-(c-1)}{c}\ge d(d-1)$ and $t<\frac{(c-1)p}{c}\le p-d(d-1)$. If $c=1$, then $t=0$.
Assume that $0\le n\le d-2$.
Denote by
\[R_i=R_{i,t}=\ov{e^{-1}(pi+t)}=\ov{-pi-t}=-pi-t+\ell_i d\]
and
\[r_i=r_{i,t}=\ov{-e^{-1}i}=\ov{i}.\]
Then
\[\set{d-r_i\mid i\in I_n^*}=\set{d,d-1,\dots,d-n}.\]
We have
\[\bfC_{t,n}=\#\set{i\in I_n^*\mid R_i\ge d-n}\]
and
\[S_n^\bullet=\set{\tau\in S_n^*\mid R_i+\tau(i)\ge d\ \text{for}\ R_i\ge d-n}.\]
For $R_i<d-n$, we have $R_i+\tau(i)<d$ and
\[x_{t,i}^\tau=pi+t-\ell_i e-\tau(i),\quad y_{t,i}^\tau=-pi-t+\ell_i d+\tau(i);\]
for $R_i\ge d-n$, we have $R_i+\tau(i)\ge d$ and
\[x_{t,i}^\tau=pi+t-\ell_i e+e-\tau(i),\quad y_{t,i}^\tau=-pi-t+\ell_i d-d+\tau(i).\]
If $\tau\notin S_n^\bullet$, there is $i$ such that $y_{t,i}^\tau<0$ or $x_{t,i}^\tau<0$.
Denote by
\[(u_i,v_i)=\begin{cases}
(pi+t-\ell_i e,-pi-t+\ell_i d),&\text{ if }R_i<d-n;\\
(pi+t-\ell_i e+e,-pi-t+\ell_i d-d),&\text{ if }R_i\ge d-n.
\end{cases}\]
Then
\[h_{n,k}=\det\left(\frac{1}{(u_i-j)!(v_i+j)!}\right).\]
Applying Lemma~\ref{eq:estimate_factorial}(2) with $\alpha=-d^{-1}e,\beta=t(1-d^{-1}e)$, we obtain that
\[\begin{split}
&h_{n,k}\cdot\prod_{i=0}^n u_i!\cdot(v_i+n)!\\
\equiv &\prod_{j=0}^n \Arr{d^{-1}e(j-t)+t}{j} \cdot \det\left(v_i^{n-j}\right)\\
\equiv &\prod_{j=0}^n \Arr{d^{-1}e(j-t)+t}{j} \cdot \prod_{0\le i<j\le n}(v_i-v_j)\mod p.
\end{split}\]
If $R_i<d-n$, then $v_i=R_i\ge 0$; if $R_i\ge d-n$, then $v_i+n=R_i-d+n\ge 0$. Hence $0\le v_i+n\le d-1$ are different and $(v_i+n)!, (v_i-v_j)\in\BZ_p^\times$ if $i\neq j$.
Note that $u_i=\ell_i-R_i$ or $\ell_i-R_i+e$.
When $c=1$, we have $t=R_0=\ell_0$, $u_0=0$ or $e$, and for $i\ge 1$,
\[u_i\ge \ell_i-R_i\ge \frac{pi+t}{d}-d+1\ge\frac{p}{d}-d+1\ge 0.\]
When $c>1$, we have
\[u_i\ge \ell_i-R_i\ge \frac{pi+t}{d}-d+1\ge\frac{t}{d}-d+1\ge 0.\]
Meanwhile,
\[u_i\le \ell_i-R_i+e=\frac{pi+t-(d-1)R_i+de}{d}\le\frac{p(d-2)+t+d e}{d}<p,\]
hence $u_i!\in\BZ_p^\times$.
For any $0\le k\le j-1$, we have
\[0<e(j-t)+d(t-k)=d(j-k)+t-j\le (d-1)j+p-d(d-1)<p,\]
which means that $p\nmid \Arr{d^{-1}e(j-t)+t}{j}$.
Hence $h_{n,k}\in\BZ_p^\times$.
\end{proof}
\textbf{Acknowledgments.}
The author would like to thank Chuanze Niu and Daqing Wan for helpful discussions.
The author is partially supported by NSFC (Grant No. 12001510), Anhui Initiative in Quantum Information Technologies (Grant No. AHY150200) and the Fundamental Research Funds for the Central Universities (Grant No. WK0010000061).
\bibliographystyle{alpha}
\section{Introduction}
In the current scenario, electronic industries are facing the problems of power utilization and overheating of equipment. In past decades, these issues were addressed by reducing the size of traditional transistors, which have already been miniaturized down to a few nanometres. However, if transistor sizes are scaled further, the power and overheating problems will increase exponentially \cite{ieeespectrum,ieeespectrum1}. Moreover, the well-known Landauer theory establishes the limits of irreversible computation, which also bounds the size of transistors in conventional logic circuits, since they involve loss of information in the form of heat \cite{landuer}. Meanwhile, the demand for more and more applications on single SoCs is also leading to a drastic increase in information loss. Reversible logic is one of the promising techniques for reducing power requirements, as reversible circuits are theoretically proven to provide nearly energy-free computation by preventing the loss of information, and they have the capability of producing ultra-high-speed and compact electronic devices \cite{Bennett:1973:LRC:1664562.1664568}. While the logic can be applied to traditional logic circuits, its application to quantum computation has been shown to achieve excellence in terms of power consumption, speed and size \cite{quantumbookNielsen}. The identification and implementation of reversible quantum circuits have also been achieved using several probabilistic methods and ideas. Fig. \ref{figintro} shows some of the dominating technologies where researchers are currently exploring the possibilities of employing this logic on physical platforms \cite{selfintro}.
\begin{figure}[!h]
\centering
\includegraphics[width=85mm]{drawing}
\caption{Computing technologies}
\label{figintro}
\end{figure}
The framework of reversible logic circuit design and synthesis techniques is based on the Toffoli and Fredkin gates, which can be scaled into $n$th-order gates and libraries, commonly known as Multiple Controlled Toffoli (MCT) and Multiple Controlled Fredkin (MCF) gates. Several other gates have also been proposed in the literature, but their primary components remain MCT and MCF gates, and the final quantum decomposition of reversible circuits is based on them \cite{selftandf}. The efficiency of the designs is governed by several performance metrics defining their operating cost: the number of wires, gate cost, quantum cost and garbage outputs. Testing has also been studied extensively over the last decade for the recognition of several types of fault models in reversible circuits, and a number of novel paradigms have been presented in both online and offline testing of reversible logic circuits. Online testable environments are provided through new design methodologies and circuit modification principles. Test data minimization in offline testing is achieved through new deterministic and randomized test pattern generation algorithms and through circuit modification techniques for the respective faults. A useful reduction of the operating cost with respect to prior approaches has been achieved in all the proposed approaches, narrowing the penalty of the overall testing overheads \cite{selffaultmodels}.
At the beginning of the proposed work, a comprehensive and comparative analysis of the existing online and offline testing methodologies for nearly all fault models in reversible circuits was carried out in correlation with the problem statement \cite{selfprocediareviewonline,selfpertanika,selfintegration,selffaultmodels}. An overview of reversible logic, cost metrics and the associated fault models is also given to provide background for the work. The overall work in the literature is analyzed in depth and organized into four categories that define the plan of action for novel development in the area. Fig. \ref{figgen} shows a generalized framework for achieving the quoted objectives; the overall framework is based on the fundamental MCT and MCF gates. First, an in-depth and comparative analysis of nearly all reversible gates has been carried out: a three-level analysis, i.e., at the gate, design and testability levels, has been performed to confirm the efficacy of the fundamental gates \cite{selfprocediagates,selftandf}. Finally, the proposed testable design methodologies for online testing are applied to obtain an efficient set of testable Data Path Element (DPE) designs.
\begin{figure*}[!h]
\centering{ \includegraphics[width=150mm]{Gen}}
\caption{Framework for designing testable reversible circuits}
\label{figgen}
\end{figure*}
The objective of the work described in this paper, drawn from the author's thesis \cite{phdthesis}, is the development of testable design methodologies with reduced testing overheads in terms of reversible circuit cost metrics, test data volume, design complexity and time. The following contributions are made toward the achievement of the stated objectives:
\begin{itemize}
\item Novel design methodologies using Multiple Controlled Toffoli (MCT), Multiple Controlled Fredkin (MCF) and mixed Multiple Controlled Toffoli-Fredkin (MCTF) gates, which show built-in testability features for single-bit faults.
\item New circuit modification methodologies for MCT, MCF and MCTF circuits, introduced for the detection of single-bit faults.
\item Efficient circuit modification methodologies, along with general test sets, proposed for MCT, MCF and MCTF circuits for the detection of stuck-at faults, minimizing the volume of test data.
\item New testable designs of a Full Adder, a Ripple Carry Adder, a $4$-bit reversible array-based Multiplier and an Arithmetic \& Logic Unit, proposed using MCT and MCF gates.
\end{itemize}
In line with the targeted objectives, a detailed overview and analysis of the proposed work is given in the following sections:
\section{Circuit Design and Modification Methodologies for Online Testing}
Extensive design methodologies are realized for the construction of MCT, MCF and MCTF circuits. The constructed circuits provide a built-in testability feature for the detection of single-bit faults, since the use of a parity-preserving architecture with an arbitrary design methodology ensures the detection of single bit-flip faults in logic circuits. The methodologies create parity-preserving circuits using MCT, MCF and mixed MCTF gates, followed by the fault detection process. The design and test flow of the formulation of these methodologies is depicted in Fig. \ref{figm1}.
\begin{figure*}[!h]
\centering
\includegraphics[width=150mm]{M1}
\caption{Synthesis and test flow of design methodologies for online testing}
\label{figm1}
\end{figure*}
First, an MCT gate placement technique is proposed for producing parity-preserving circuits \cite{selfiet}. Second, the properties of MCF gates are exploited to devise a scheme for the detection of faults \cite{selfmcfonlineoffline}. Third, the MCT gate placement method is combined with MCF gates to achieve testability in MCTF circuits \cite{selfijpap}. Fault detection is achieved by cascading a parity checker into the circuit, using CNOT gates from the inputs and outputs to an additional wire. The circuits produced using the proposed methods have the testability feature built in, rather than requiring extra effort to convert the original circuit into a testable form. A set of benchmark circuits and their corresponding testable designs are implemented to observe the design cost and to prove the efficacy of the proposed schemes over existing ones.
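The online detection principle described above can be sketched in a few lines. The following is a minimal illustration, not one of the actual circuits from the cited works: a small cascade of Fredkin (controlled-swap) gates is parity preserving, so XOR-ing every input and every output onto an additional wire, as the CNOT-based parity checker does, yields $0$ in fault-free operation and $1$ whenever a single bit-flip fault occurs.

```python
# Sketch of online single bit-flip detection in a parity-preserving
# circuit.  The gate list and the fault sites are illustrative only.
def fredkin(bits, c, a, b):
    # Controlled swap: conservative, hence parity preserving.
    if bits[c]:
        bits[a], bits[b] = bits[b], bits[a]

def run(inputs, flip_at=None):
    bits = list(inputs)
    gates = [(0, 1, 2), (2, 0, 1)]          # (control, swap, swap)
    for level, (c, a, b) in enumerate(gates):
        if flip_at is not None and flip_at[0] == level:
            bits[flip_at[1]] ^= 1           # inject a single bit flip
        fredkin(bits, c, a, b)
    parity = 0                              # CNOT cascade onto one wire:
    for v in list(inputs) + bits:           # XOR of all inputs and outputs
        parity ^= v
    return parity                           # 0 = fault free, 1 = fault

assert run((1, 0, 1)) == 0                  # fault-free: checker stays 0
assert run((1, 0, 1), flip_at=(1, 2)) == 1  # injected flip is flagged
```

Because every Fredkin gate preserves the number of ones, input parity equals output parity exactly when no bit has been flipped, which is what the final wire records.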
Modification for testability in logic circuits accounts for a large increase in operating cost, which raises the overall cost of manufacturing. New modification schemes for MCT, MCF and mixed MCTF circuits are therefore introduced at lower operating cost. The modification procedures utilize the techniques of parity preservation and generation to provide full coverage of single-bit faults. The modification and test flow for formulating these methodologies and obtaining their measures is depicted in Fig. \ref{figm2}.
\begin{figure*}[!h]
\centering
\includegraphics[width=150mm]{M2}
\caption{Design and test flow of modification methodologies for online testing}
\label{figm2}
\end{figure*}
A gate cascading technique is used first for the modification of MCT circuits, as these gates are widely used for designing reversible circuits. The method requires only a single wire, around twice the number of gates and zero garbage cost for its formulation \cite{selfoptik}. A derived-gates technique is used for the modification of MCF circuits: the gates are transformed into corresponding testable gates that provide fault detection as well as fault location functionality in MCF circuits. It utilizes parity-preserving gates rather than converting each gate into its corresponding parity-preserving form \cite{selfieee}. A three-stage process is explored for converting MCT into MCTF circuits, which largely decreases the gate cost using a single wire by exploiting gate cascading and the parity-preserving property of MCF gates \cite{selfmctfonline}. The method involves simplifying MCT circuits into MCTF cascades and modifying the resultant circuit. A large set of circuits and benchmarks are designed using the proposed methodologies, and fault simulations are performed to evaluate the performance and validate their functionality. The detection of single-bit faults is targeted in all the proposed methodologies. Results show that the present methods provide an excellent reduction in operating costs as compared to existing work in this area while providing full coverage of single-bit faults.
\section{Circuit Modification Methodologies for Offline Testing}
A number of methodologies exist for test set generation in reversible circuits for the detection of faults. These methodologies utilize specific deterministic ATPG, randomized ATPG and modification approaches for the detection of stuck-at, bridging, missing-gate, cross-point and cell faults. A trade-off between testability and overheads can be seen in all the prior methodologies in terms of performance measures like gate cost, quantum cost, test size and time utilization. New modification methodologies are introduced for the detection of stuck-at faults in MCT, MCF and MCTF circuits using test sets of minimal size. The flow for formulating these methods and simulating the GTS to obtain the effective measures is depicted in Fig. \ref{figm3}.
\begin{figure*}[!h]
\centering
\includegraphics[width=150mm]{M3}
\caption{Design and test set application flow of modification methodologies for offline testing}
\label{figm3}
\end{figure*}
The MCT circuits are modified in such a manner that the applied test vector reaches all the levels without any change in the values on the wires of the circuit \cite{selfdrdo}. An $(n+1)$-dimensional general test set ($GTS$) containing only two test vectors is presented, which provides full coverage of single and multiple stuck-at faults in the circuit; here $n$ denotes the number of wires in the circuit. Deterministic approaches for the identification and detection of different types of fault models in MCF circuits are introduced \cite{selfmcfonlineoffline}. The conservative property of MCF gates is utilized for the detection of multiple types of faults in these circuits by three test sets of sizes $2$, $n$ and $2(n-2)$. Moreover, both schemes are combined for the detection of stuck-at faults in MCTF circuits \cite{selfmctfoffline}. All the methodologies have been experimented on several benchmark circuits, where an excellent reduction in overall operating costs has been achieved as compared to prior work evaluated on the same platform. Moreover, these circuits can be tested by general test sets of fixed sizes without spending the excess time required to formulate a specific algorithm for stuck-at fault detection.
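The kind of fault simulation used to evaluate such methodologies can be sketched as follows. The circuit, the fault list and the two candidate test vectors below are illustrative only (the actual modified benchmarks and the GTS of the cited works are not reproduced here): the harness enumerates single stuck-at faults on every wire at every level of a Toffoli cascade and reports how many are detected by a given test set.

```python
# Sketch of stuck-at fault simulation for an MCT (Toffoli) cascade.
def toffoli(bits, controls, target):
    if all(bits[c] for c in controls):
        bits[target] ^= 1

def simulate(gates, vector, fault=None):
    # fault = (level, wire, stuck_value); level == len(gates) means the
    # fault sits on the output side of the last gate.
    bits = list(vector)
    for level, (controls, target) in enumerate(gates):
        if fault is not None and fault[0] == level:
            bits[fault[1]] = fault[2]
        toffoli(bits, controls, target)
    if fault is not None and fault[0] == len(gates):
        bits[fault[1]] = fault[2]
    return bits

def coverage(gates, n, tests):
    faults = [(lvl, w, v) for lvl in range(len(gates) + 1)
              for w in range(n) for v in (0, 1)]
    detected = sum(
        any(simulate(gates, t) != simulate(gates, t, f) for t in tests)
        for f in faults)
    return detected, len(faults)

gates = [((0,), 1), ((0, 1), 2)]   # illustrative CNOT + Toffoli cascade
det, total = coverage(gates, 3, [(0, 0, 0), (1, 1, 1)])
assert (det, total) == (16, 18)
```

For this unmodified cascade the two vectors miss the stuck-at-$0$ faults on the one wire whose value changes along the circuit ($16$ of $18$ faults detected), which illustrates precisely why the scheme above first modifies the circuit so that the applied test vector reaches every level unchanged.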
\section{Testable Designs of Data Path Elements}
Modern digital processors comprise several data path elements (DPE) such as adders, multipliers, multiplexers, logical shifters, arithmetic logic units, etc. These elements are the functional units within the microprocessor that execute computational operations. New testable architectures of a full adder (FA), a ripple carry adder (RCA), a multiplier (MUL) and an arithmetic \& logic unit (ALU) are proposed using MCT and MCF gates \cite{selfmulandadde,selfalu}. The design and simulation flow of these elements is shown in Fig. \ref{figdpe}.
\begin{figure*}[!h]
\centering
\includegraphics[width=140mm]{DPE}
\caption{Design and Implementation flow of DPE}
\label{figdpe}
\end{figure*}
The major role is played by the creation of parity-preserving circuits for incorporating testability into the overall circuit realization. First, a full adder is created, which is used to develop the architecture of the RCA. A $4$-bit reversible array-based multiplier with a scalability factor of order $4N$ is then constructed using the RCA and Fredkin gates \cite{selfmulandadde}. A design of an Arithmetic Logic Unit (ALU), scalable up to $N$ bits, is proposed \cite{selfalu} by combining a novel control unit (CU) structure using Fredkin gates with the FA. These designs are scalable for $N$-bit operations and incorporate testability features for the detection of single-bit faults at lower overheads, which are their distinguishing features. The superiority of the designed circuits is confirmed by implementing them using a reversible circuit analyzer tool and obtaining the corresponding operating costs.
\section{Results and Assessment}
The experiments for evaluating the efficacy of the proposed work in this thesis were performed on a machine running 64-bit Ubuntu 16.04 LTS with an Intel Core i7-4790 (3.60 GHz clock) and 4 GB of memory. The prerequisites, which can be seen in different parts of the thesis, are listed as follows:
\begin{itemize}
\item The benchmark circuits description in the form of \textit{pla} and \textit{tfc} are taken from reversible logic synthesis and benchmark pages \cite{benchmaslov,revlib}.
\item RevKit, a toolkit for reversible logic synthesis, is used \cite{revkit}.
\item The circuits are synthesized using well-known garbage free transformation based synthesis algorithm.
\item The RC-Viewer and RC-Viewer+ tools for designing reversible circuits are used for calculating the respective measures that define operating costs \cite{benchmaslov}.
\item QCADesigner is used for the implementation of the proposed designs and for determining physical reliability at the QCA level \cite{walus2004qcadesigner}.
\item Fault coverage is verified using simulations carried out in C++ and Java.
\end{itemize}
A large set of benchmark circuits is taken from the two platforms and experimented on with each design methodology. Design, synthesis and implementation of the methodologies are done using the RC-Viewer and RevKit tools. QCA structures are also implemented to obtain the cost measures in some of the cases. Fault simulation is done by creating programs that realize the circuits and computing the fault coverage after inducing the type of fault for which the design method has been developed. The prior methodologies in the domain have also been implemented and simulated to compare against the presented work and calculate its efficacy.
The analysis shows that the present work in the domain of design methodologies for online testing (DMOnT) has achieved a maximum reduction in cost measures of $12\%$ for MCT-based designs, $61\%$ for MCF-based approaches and $51\%$ for MCTF-based design methodologies. The proposed work in the domain of modification methodologies for online testing (MMOnT) has achieved reductions in cost measures of $45\%$ for MCT-based modification techniques, $49\%$ for MCF-based techniques and $75\%$ for MCTF-based modification methodologies. The work done in the domain of modification methodologies for offline testing (MMOffT) has achieved reductions in cost measures of up to $44\%$ for MCT-based techniques, $100\%$ for MCF-based techniques and $30\%$ for MCTF-based methodologies. These analytics are also pictured in Fig. \ref{con123}.
\pgfplotstableread[row sep=\\,col sep=&]{
interval & MCT & MCF & MCTF \\
DMOn.T & 32 & 61 & 51 \\
MMOn.T & 45 & 49 & 75 \\
MMOff.T & 44 & 100 & 30 \\
}\mydata
\begin{figure}[!h]
\centering
\begin{tikzpicture}
\begin{axis}[
xlabel={Testable Design Methodology},
ylabel={\%age reduction in cost measures},
ybar,
ymin=0, ymax=100,
ytick={20,40,60,80,100},
symbolic x coords={DMOn.T,MMOn.T,MMOff.T},
xtick=data,
nodes near coords,
legend pos=north west,
ymajorgrids=true,
grid style=dashed
]
\addplot table[x=interval,y= MCT]{\mydata};
\addplot table[x=interval,y=MCF]{\mydata};
\addplot table[x=interval,y=MCTF]{\mydata};
\legend{MCT,MCF,MCTF }
\end{axis}
\end{tikzpicture}
\caption{Result analysis of presented testing methodologies}
\label{con123}
\end{figure}
The performance of the presented DPE structures is analyzed by implementing $4$- to $64$-bit circuits on a reversible circuit analyzer tool and comparing their characteristics with prior efficient architectures. Reported results showing the reduction in cost measures are illustrated in Fig. \ref{con444}. The presented testable FA circuits achieved an average reduction of $11\%$ when all the considered parameters are combined. The RCA achieved a reduction of $12\%$, and $44\%$ was achieved in the case of the MUL. A reduction of up to $60\%$ in gate cost has been achieved with respect to recently reported reversible ALU architectures from the literature.
\pgfplotstableread[row sep=\\,col sep=&]{
interval & Proposed \\
FA & 11 \\
RCA & 12 \\
MUL & 44 \\
ALU & 60 \\
}\mydata
\begin{figure}[!h]
\centering
\begin{tikzpicture}
\begin{axis}[
xlabel={DPE Design},
ylabel={\%age reduction in cost measures},
ybar,
ymin=0, ymax=100,
ytick={20,40,60,80,100},
symbolic x coords={FA,RCA,MUL,ALU},
xtick=data,
nodes near coords,
legend pos=north east,
ymajorgrids=true,
grid style=dashed
]
\addplot table[x=interval,y=Proposed]{\mydata};
\legend{Proposed Designs }
\end{axis}
\end{tikzpicture}
\caption{Result analysis of presented DPE architectures}
\label{con444}
\end{figure}
The fault coverage is also calculated for nearly all the proposed methodologies and designs, and full coverage has been achieved in the respective methodologies for the considered fault model. The calculations show a large reduction in operating costs when compared to prior work across all designs and testing methodologies. The requirements of extra hardware and time to attain testability can be eliminated by utilizing these methodologies during the design process. Hence, the methods provide solutions to both the design and the testability problems of reversible circuits, and they can be adopted by any synthesis algorithm to minimize testing overheads.
\section{Conclusion and Future Scope}
The change in technology will give rise to new challenges, where manufacturers will have to provide proof of correct functionality to deliver products to a huge market of highly demanding consumers. Testing is the only way out of these situations. It is a necessary exercise, but it involves a large increase in operating costs; moreover, a large amount of power consumption is governed by the testing methodologies used by manufacturers. Numerous approaches for constructing built-in testable MCT, MCF and mixed MCTF circuits, based on novel design methodologies and circuit modification techniques, are presented for the detection of single-bit faults. The performance of all the approaches is analyzed by experimenting on a set of benchmark circuits. As logic circuits are very prone to the occurrence of stuck-at faults, new circuit modification techniques for minimizing test data in MCT, MCF and mixed MCTF reversible circuits are introduced for their detection. In addition, this work introduces new testable designs of scalable adders, a multiplier and an arithmetic logic unit for future microprocessors. MCT and MCF gates are used in the formulation of all the proposed approaches, as they are proven universal as well as superior for the design and testing of reversible circuits. The efficacy of all the modules is justified by providing implementations on reliable tools for reversible circuits. A fault-tolerant design model that utilizes the methods proposed in this paper has been identified \cite{selffaulttolerance}; however, some limitations and scope for future work still exist:
\begin{itemize}
\item This work is merely a start to the research needed to determine the feasibility of reversible circuits as a replacement for present CMOS technology.
\item A comprehensive tool for synthesizing reversible circuits based on the proposed framework can be developed.
\item Efficient ATPG algorithms can be explored to minimize the test data volume for the detection of multiple types of fault models in reversible circuits.
\item An MCF-gate-based synthesis algorithm can be developed, as this direction has not gained significant attention from researchers working in the area.
\end{itemize}
\section*{Acknowledgment}
Sincerest thanks to all the reviewers for their extensive and insightful comments and suggestions on the thesis and its supporting manuscripts. Each and every comment on the manuscript motivated the authors to perform better. Special thanks to Dr. Jimson Mathew, Associate Professor and Head, Department of Computer Science \& Engineering, IIT Patna, for his excellent guidance during the final defense of the work.
\bibliographystyle{unsrtnat}
\section{Introduction}
It is well known that an infinitely-differentiable function $f:I\longrightarrow \mathbb{R}$ can be represented by its Taylor series expansion around a single point $a\in I$. This in turn defines the single-point Taylor polynomial (STP) as the partial sum of the series, given by
\begin{equation}
\label{Taylor}
p_k(x) = \sum_{n=0}^{k}\frac{(x-a)^{n}}{n!}f^{(n)}(a),
\end{equation}
which produces a good approximation of the function for increasing values of $k$ and in fact exactly interpolates the function's derivatives at the point $a$. That is, the STP is the unique polynomial of minimal order ($k$) with the property that
\begin{equation}
p^{(n)}_k(a) = f^{(n)}(a) \quad \forall\, n = 0,...,k.
\end{equation}
Furthermore, the STP can be defined for functions that are just $k$-differentiable, which together with the previous properties makes it very suitable for polynomial interpolation.
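As a minimal numerical sketch of Eq. (\ref{Taylor}) — with the illustrative choice $f(x)=e^x$ around $a=0$, where every derivative equals $1$, and the helper name chosen here — the partial sum can be evaluated directly:

```python
# A minimal numerical sketch of the STP (helper name chosen here): the
# single-point Taylor polynomial of f(x) = e^x around a = 0, where every
# derivative f^{(n)}(0) equals 1.
import math

def stp(x, k, a, derivs):
    """Evaluate p_k(x) = sum_{n=0}^{k} (x - a)^n / n! * f^{(n)}(a)."""
    return sum((x - a) ** n / math.factorial(n) * derivs[n]
               for n in range(k + 1))

a, k = 0.0, 8
derivs = [1.0] * (k + 1)
# The truncation error at x = 1 is roughly e/9! ~ 7.5e-6.
assert abs(stp(1.0, k, a, derivs) - math.e) < 1e-4
```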
On the other hand, for a set of different points $\mathcal{A} = \{a_1,...,a_m\}\subset I$, the Lagrange polynomial (LP)
\begin{equation}
\label{Lagrange}
q_{m}(x) = \sum_{g=1}^{m}\prod^{m}_{\substack{h=1\\
h\neq g}}\frac{(x-a_h)}{(a_g-a_h)}f(a_g),
\end{equation}
gives the unique polynomial (of minimum order $m$) that interpolates the value of the function at the given points
\begin{equation}
q_m(a_i) = f(a_i) \quad \forall\, i = 1,...,m,
\end{equation}
and can be found as a basic multi-point interpolating polynomial in many numerical analysis textbooks~\cite{Richard,Philip,Bulirsch,Hildebrand}.
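A corresponding sketch of Eq. (\ref{Lagrange}) — again with illustrative choices made here ($f(x)=x^3$ on three nodes) — shows that $q_3$, having degree $2$, reproduces $f$ exactly at the nodes but only approximates it elsewhere:

```python
# A sketch of the Lagrange polynomial (helper name chosen here) for
# f(x) = x^3 on three nodes: exact at the nodes, approximate elsewhere.
def lagrange(x, nodes, values):
    total = 0.0
    for g, ag in enumerate(nodes):
        term = values[g]
        for h, ah in enumerate(nodes):
            if h != g:
                term *= (x - ah) / (ag - ah)
        total += term
    return total

nodes = [0.0, 1.0, 2.0]
values = [t ** 3 for t in nodes]          # 0, 1, 8
for t, v in zip(nodes, values):
    assert abs(lagrange(t, nodes, values) - v) < 1e-12
```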
One can go further and ask for a polynomial which combines both of these properties, defining the multi-point Taylor polynomial (MTP). That is, for a $k$-differentiable real function $f:I\longrightarrow \mathbb{R}$ and a set of different points $\mathcal{A} = \{a_1,...,a_m\}\subset I$, the MTP is the unique polynomial of minimal order ($mk+m-1$) which fulfills the conditions
\begin{equation}
\label{property}
P^{(n)}_{k,m}(a_i) = f^{(n)}(a_i)\quad \forall \, i = 1,...,m\quad \&\quad \forall \, n = 0,...,k.
\end{equation}
The solution to these conditions is also known as the Hermite or osculatory interpolation polynomial, and multiple standard methods and iterative algorithms can be found for producing it~\cite{Richard,Philip,Bulirsch,Note_2Point}, but these do not give its explicit and final form, which is what will be presented in this paper.
The use of MTP or similar interpolants has encountered many applications. For example, the authors in Ref.~\cite{NASA} presented a two-point Taylor series expansion whose coefficients can be iterated to produce higher order expressions and applied this to the two-body problem, which consisted of an expansion of the solutions both at the perigee and apogee. Their results turned out to be significantly better than a basic Taylor approximation around just the perigee. Another study~\cite{Lopez} used a convenient form of the MTP and applied it to the approximation of solutions of second order linear differential equations on an interval with boundary conditions on the extremes, which was done by approximating the solutions around the two boundary points. On the other hand, the study~\cite{Diff_eq} performed a multiple point expansion by requesting the satisfaction of the boundary conditions and of the differential equation at a cloud of points inside the domain. Finally, the work in~\cite{Image_comp} used a multi-point Taylor series formula in terms of general basis functions in order to construct an image compression/decompression method.
\section{The multi-point Taylor polynomial}
The explicit form of the MTP will now be presented in the form of a theorem; it is the main result of this paper.
\textbf{Theorem:}
For a $k$-differentiable real function $f:I\longrightarrow \mathbb{R}$ and a set of different points $\mathcal{A} = \{a_1,...,a_m\}\subset I$, the explicit expression for the MTP is
\begin{equation}
\label{Polynomial}
P_{k,m}(x) = \sum_{g=1}^{m}\bigg[\prod^{m}_{\substack{h=1\\
h\neq g}}\frac{(x-a_h)}{(a_g-a_h)}\bigg]^{k+1}\sum_{n=0}^{k}\frac{(x-a_g)^{n}}{n!}F^{n,g}_{k,m}[\mathcal{A}],
\end{equation}
where $F^{n,g}_{k,m}[\mathcal{A}]$ are constant factors given by
\begin{equation}
F^{n,g}_{k,m}[\mathcal{A}] = \sum_{j_1+...+j_m = n}\binom{n}{j_1,...,j_m}f^{(j_g)}(a_g)\prod^{m}_{\substack{l=1\\
l\neq g}}\frac{(k+j_l)!}{k!}(a_l-a_g)^{-j_l}.
\end{equation}
This polynomial satisfies the conditions (\ref{property}) and thus interpolates a function's derivatives at multiple points.
Under the understanding that empty sums are null and empty products are one, this expression recovers the STP of Eq. (\ref{Taylor}) for $m = 1$, while $k=0$ recovers the LP of Eq. (\ref{Lagrange}). This is a straightforward property which comes from the fact that the conditions (\ref{property}) recover the STP's or the LP's conditions respectively for each case.
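The theorem also lends itself to a direct numerical check. The following self-contained sketch (an illustration, not part of the proof; all helper names and the choices $f=\exp$, $\mathcal{A}=\{0,1\}$, $k=2$ are made here) implements Eq. (\ref{Polynomial}) with elementary polynomial arithmetic and then verifies the conditions (\ref{property}) by repeatedly differentiating the resulting coefficient list:

```python
# Numerical sketch of the theorem (illustrative; all names chosen here).
import math
from itertools import product as iproduct

# Dense polynomial arithmetic on coefficient lists [c0, c1, ...].
def polymul(p, q):
    r = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def polyadd(p, q):
    r = [0.0] * max(len(p), len(q))
    for i, a in enumerate(p):
        r[i] += a
    for i, b in enumerate(q):
        r[i] += b
    return r

def polypow(p, e):
    r = [1.0]
    for _ in range(e):
        r = polymul(r, p)
    return r

def polyder(p):
    d = [i * c for i, c in enumerate(p)][1:]
    return d if d else [0.0]

def polyval(p, x):
    return sum(c * x ** i for i, c in enumerate(p))

def compositions(n, m):
    """All tuples (j_1, ..., j_m) of nonnegative integers summing to n."""
    return [j for j in iproduct(range(n + 1), repeat=m) if sum(j) == n]

def F(n, g, k, A, df):
    """The constant factor F^{n,g}_{k,m}[A]; df(j, a) returns f^{(j)}(a)."""
    m, total = len(A), 0.0
    for js in compositions(n, m):
        coef = math.factorial(n)
        for j in js:
            coef //= math.factorial(j)       # multinomial coefficient
        term = coef * df(js[g], A[g])
        for l in range(m):
            if l != g:
                term *= (math.factorial(k + js[l]) / math.factorial(k)
                         * (A[l] - A[g]) ** (-js[l]))
        total += term
    return total

def mtp(k, A, df):
    """Coefficient list of P_{k,m} for the node set A, per the theorem."""
    m, P = len(A), [0.0]
    for g in range(m):
        lag = [1.0]                          # prod_{h != g} (x-a_h)/(a_g-a_h)
        for h in range(m):
            if h != g:
                c = A[g] - A[h]
                lag = polymul(lag, [-A[h] / c, 1.0 / c])
        part = [0.0]                         # sum_n (x-a_g)^n / n! * F^{n,g}
        for n in range(k + 1):
            fac = F(n, g, k, A, df) / math.factorial(n)
            part = polyadd(part, [c * fac for c in polypow([-A[g], 1.0], n)])
        P = polyadd(P, polymul(polypow(lag, k + 1), part))
    return P

df = lambda j, a: math.exp(a)                # every derivative of exp is exp
A, k = [0.0, 1.0], 2
P = mtp(k, A, df)
assert len(P) == len(A) * k + len(A)         # degree mk + m - 1 = 5
for a in A:                                  # check P^{(n)}(a_i) = f^{(n)}(a_i)
    Q = P
    for n in range(k + 1):
        assert abs(polyval(Q, a) - math.exp(a)) < 1e-8
        Q = polyder(Q)
```

The brute-force enumeration of compositions makes this a reference implementation rather than an efficient one; it is meant only to exhibit the formula's structure.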
\textbf{Proof:}
It will now be proven that the polynomial $P_{k,m}(x)$ of Eq. (\ref{Polynomial}) satisfies the conditions (\ref{property}). First notice that $P^{(n)}_{k,m}(a_i)$ will only have contributions from the $g=i$ term of the first sum, as the order of the derivative is not large enough to suppress the $(x-a_i)$ factor contained in the remaining terms, which involve
\begin{equation}
\bigg[\prod^{m}_{\substack{h=1\\
h\neq g}}\frac{(x-a_h)}{(a_g-a_h)}\bigg]^{k+1},
\end{equation}
with $g\neq i$, as it is assumed that $n<k+1$. Therefore,
\begin{equation}
\label{a}
\begin{split}
P^{(n)}_{k,m}(a_i) = \frac{\mathrm{d}^{n}}{\mathrm{d}x^{n}}\Bigg[\bigg[\prod^{m}_{\substack{h=1\\
h\neq i}}\frac{(x-a_h)}{(a_i-a_h)}\bigg]^{k+1}\sum_{s=0}^{k}&\frac{(x-a_i)^{s}}{s!}F^{s,i}_{k,m}[\mathcal{A}]\Bigg]_{x=a_i}. \\
\end{split}
\end{equation}
With that, using the well known Leibniz rule for higher order derivatives
\begin{equation}
(fg)^{(n)} = \sum_{r=0}^{n}\binom{n}{r}f^{(n-r)}g^{(r)},
\end{equation}
then
\begin{equation}
\label{b}
\begin{split}
\frac{\mathrm{d}^{n}}{\mathrm{d}x^{n}}\Bigg[\bigg[\prod^{m}_{\substack{h=1\\
h\neq i}}\frac{(x-a_h)}{(a_i-a_h)}\bigg]^{k+1}\sum_{s=0}^{k}\frac{(x-a_i)^{s}}{s!}A_s\Bigg]_{x=a_i} & = \sum_{r=0}^{n}\binom{n}{r}\frac{\mathrm{d}^{r}}{\mathrm{d}x^{r}}\Bigg[\bigg[\prod^{m}_{\substack{h=1\\
h\neq i}}\frac{(x-a_h)}{(a_i-a_h)}\bigg]^{k+1}\Bigg]_{x=a_i} \\
&\hspace{4cm}\times \sum_{s=0}^{k}\frac{(a_i-a_i)^{s-n+r}}{(s-n+r)!}F^{s,i}_{k,m}[\mathcal{A}]. \\
\end{split}
\end{equation}
Again, only the $s = n-r$ term survives. Now, the general Leibniz rule for more than two factors is
\begin{equation}
\bigg(\prod_{h=1}^{m} f_h\bigg)^{(n)} = \sum_{k_1+...+k_m = n}\binom{n}{k_1,...,k_m}\prod_{h=1}^{m} f_h^{(k_h)}
\end{equation}
(a proof for this relation can be found for example in Ref. \cite{Gen_Leibniz}), so
\begin{equation}
\begin{split}
\frac{\mathrm{d}^{r}}{\mathrm{d}x^{r}}\Bigg[\prod^{m}_{\substack{h=1\\
h\neq i}}\frac{(x-a_h)^{k+1}}{(a_i-a_h)^{k+1}}\Bigg]_{x=a_i} & = \sum_{k_1+...+k_m = r}^{i}\binom{r}{k_1,...,k_m}^i\prod_{\substack{h=1\\
h\neq i}}^{m} \frac{(k+1)!}{(k+1-k_h)!}\frac{(a_i-a_h)^{k+1-k_h}}{(a_i-a_h)^{k+1}} \\
& = [(k+1)!]^{m-1}\sum_{k_1+...+k_m = r}^{i}\binom{r}{k_1,...,k_m}^i\prod_{\substack{h=1\\
h\neq i}}^{m} \frac{1}{(k+1-k_h)!}(a_i-a_h)^{-k_h},
\end{split}
\end{equation}
where the multinomial sum and multinomial coefficient with upper index $\sum_{k_1+...+k_m = r}^{i}$ and $\binom{r}{k_1,...,k_m}^i$ denote that $k_i$ is not taken into account. Joining this with equations (\ref{a}) and (\ref{b}), it follows that
\begin{equation}
\begin{split}
P^{(n)}_{k,m}(a_i) & = \sum_{r=0}^{n}\binom{n}{r} [(k+1)!]^{m-1}F^{n-r,i}_{k,m}[\mathcal{A}]\sum_{k_1+...+k_m = r}^{i}\binom{r}{k_1,...,k_m}^i\prod_{\substack{h=1\\
h\neq i}}^{m} \frac{1}{(k+1-k_h)!}(a_i-a_h)^{-k_h}. \\
\end{split}
\end{equation}
Now,
\begin{equation}
\begin{split}
\binom{n}{r}[(k+1)!]^{m-1}\binom{r}{k_1,...,k_m}^i&F^{n-r,i}_{k,m}[\mathcal{A}] \\
& = \frac{(k+1)^{m-1}}{(n-r)!}\binom{n}{k_1,...,k_m}^i\sum_{j_1+...+j_m = n-r}\binom{n-r}{j_1,...,j_m}f^{(j_i)}(a_i)\prod^{m}_{\substack{l=1\\
l\neq i}}\frac{(k+j_l)!}{(a_l-a_i)^{j_l}} \\
& = (k+1)^{m-1}\binom{n}{k_1,...,k_m}^i\sum_{j_1+...+j_m = n-r}\binom{1}{j_1,...,j_m}f^{(j_i)}(a_i)\prod^{m}_{\substack{l=1\\
l\neq i}}\frac{(k+j_l)!}{(a_l-a_i)^{j_l}}. \\
\end{split}
\end{equation}
Therefore, the evaluation of the polynomial can be expressed as the sum
\begin{equation}
\label{c}
\begin{split}
P^{(n)}_{k,m}(a_i) & = (k+1)^{m-1}\sum_{r=0}^{n}B_r, \\
\end{split}
\end{equation}
with
\begin{equation}
B_r = \sum_{k_1+...+k_m = r}^{i}\binom{n}{k_1,...,k_m}^i\sum_{j_1+...+j_m = n-r}\binom{1}{j_1,...,j_m}f^{(j_i)}(a_i)\prod^{m}_{\substack{l=1\\
l\neq i}}\prod_{\substack{h=1\\
h\neq i}}^{m}\frac{(k+j_l)!}{(k+1-k_h)!(a_i-a_h)^{k_h}(a_l-a_i)^{j_l}}.
\end{equation}
As the products range over the same indices, they can be joined into a single product. Furthermore, when grouping the last two denominators, a global sign appears, which is given by
\begin{equation}
\prod_{\substack{h=1\\
h\neq i}}^{m}(-1)^{k_h} = (-1)^{r},
\end{equation}
because $k_1+...+k_m = r$, where $k_i$ is not considered. The result of this process is
\begin{equation}
B_r = (-1)^{r}\sum_{k_1+...+k_m = r}^{i}\binom{n}{k_1,...,k_m}^i\sum_{j_1+...+j_m = n-r}\binom{1}{j_1,...,j_m}f^{(j_i)}(a_i)\prod_{\substack{h=1\\
h\neq i}}^{m}\frac{(k+j_h)!}{(k+1-k_h)!}(a_h-a_i)^{-j_h-k_h}.
\end{equation}
The only term that could produce the case $j_i = n$, so that the expression includes a $f^{(n)}(a_i)$ term, is the one where $r=0$. Making an explicit calculation of this term,
\begin{equation}
\begin{split}
B_0 & = (-1)^{0}\sum_{k_1+...+k_m = 0}^{i}\binom{n}{k_1,...,k_m}^i\sum_{j_1+...+j_m = n}\binom{1}{j_1,...,j_m}f^{(j_i)}(a_i)\prod_{\substack{h=1\\
h\neq i}}^{m}\frac{(k+j_h)!}{(k+1-k_h)!}(a_h-a_i)^{-j_h-k_h} \\
& = \sum_{j_1+...+j_m = n}\binom{n}{j_1,...,j_m}f^{(j_i)}(a_i)\prod_{\substack{h=1\\
h\neq i}}^{m}\frac{(k+j_h)!}{(k+1)!}(a_h-a_i)^{-j_h}. \\
\end{split}
\end{equation}
Separating the sought $j_i = n$ term from the rest,
\begin{equation}
\begin{split}
B_0 & = f^{(n)}(a_i)\prod_{\substack{h=1\\
h\neq i}}^{m}\frac{k!}{(k+1)!}(a_h-a_i)^{0} + \sum_{j_1+...+j_m = n}^{j_i\neq n}\binom{n}{j_1,...,j_m}f^{(j_i)}(a_i)\prod_{\substack{h=1\\
h\neq i}}^{m}\frac{(k+j_h)!}{(k+1)!}(a_h-a_i)^{-j_h} \\
& = \frac{1}{(k+1)^{m-1}}f^{(n)}(a_i)+ \sum_{j_1+...+j_m = n}^{j_i\neq n}\binom{n}{j_1,...,j_m}f^{(j_i)}(a_i)\prod_{\substack{h=1\\
h\neq i}}^{m}\frac{(k+j_h)!}{(k+1)!}(a_h-a_i)^{-j_h}, \\
\end{split}
\end{equation}
which gives the right term when inserted in equation (\ref{c}). Thus, now it must only be proven that all the other terms cancel each other. For this, it is convenient to group every coefficient of the derivatives of $f(a_i)$. That is,
\begin{equation}
\label{Expansion}
\begin{split}
P^{(n)}_{k,m}(a_i) & = (k+1)^{m-1}\sum_{r=0}^{n}B_r = f^{(n)}(a_i)+n!\sum_{j=0}^{n-1}\frac{f^{(j)}(a_i)}{j!}C_{j},
\end{split}
\end{equation}
where, for $0\leq j \leq n-1$,
\begin{equation}
\begin{split}
C_{j} & = \frac{(k+1)^{m-1}}{n!}\sum_{r=0}^{n-j}(-1)^{r}\sum_{k_1+...+k_m = r}^{i}\binom{n}{k_1,...,k_m}^i \\
&\hspace{6cm}\times\sum_{j_1+...+j_m = n-r-j}^{i}\binom{1}{j_1,...,j_m}\prod_{\substack{h=1\\
h\neq i}}^{m}\frac{(k+j_h)!}{(k+1-k_h)!}(a_h-a_i)^{-j_h-k_h} \\
& =(k+1)^{m-1}\sum_{r=0}^{n-j}(-1)^{r}\sum_{\substack{k_1+...+k_m = r \\
j_1+...+j_m = n-r-j}}^{i}\prod_{\substack{h=1\\
h\neq i}}^{m}\frac{(k+j_h)!}{(k+1-k_h)!k_h!j_h!}(a_h-a_i)^{-j_h-k_h} \\
& = \sum_{r=0}^{n-j}(-1)^{r}\sum_{\substack{k_1+...+k_m = r \\
j_1+...+j_m = n-r-j}}^{i}\prod_{\substack{h=1\\
h\neq i}}^{m}\binom{k+j_h}{j_h}\binom{k+1}{k_h}(a_h-a_i)^{-j_h-k_h}. \\
\end{split}
\end{equation}
These terms automatically include the $j_i\neq n$ terms from $B_0$, whose vanishing was left to be proven. The restricted upper limit on the sum over $r$ comes from the fact that $j> n-r$ is not taken. The notation $\sum_{j_1+...+j_m = n-r-j}^{i}$ is used again to denote that $j_i$ is not considered (in fact $j_i = j$); the same applies to the sum over the $k$'s. The task is now to prove that each factor $C_j$ is equal to zero.
Now, because
\begin{equation}
\sum_{r=0}^{n-j}(-1)^{r}\sum_{\substack{k_1+...+k_m = r \\
j_1+...+j_m = n-r-j}}^{i}\big[...\big] = \sum_{S(k_h)+S(j_h) = n-j}^{i}(-1)^{S(k_h)}\big[...\big],
\end{equation}
where $S(k_h) = k_1+...+k_m$ and $S(j_h) = j_1+...+j_m$ (without $k_i$ nor $j_i$), it follows that
\begin{equation}
\begin{split}
C_{j} & = \sum_{S(k_h)+S(j_h) =n_j}^{i}(-1)^{S(k_h)}\prod_{\substack{h=1\\
h\neq i}}^{m}\binom{k+j_h}{j_h}\binom{k+1}{k_h}(a_h-a_i)^{-j_h-k_h}, \\
\end{split}
\end{equation}
where the positive integer $n_j = n-j$ was defined. It is now convenient to further group the terms with the same power of $(a_h-a_i)$. That is, the terms with $j_w+k_w = s_w$, for $w = 1,...,m$ and $w\neq i$, will now be grouped, which is done by performing the sum over the possible $s_w$ terms. This results in
\begin{equation}
\begin{split}
C_{j} & = \sum_{s_1+...+s_m = n_j}^{i} \sum_{\substack{j_v+k_v=s_v\\
v=1,...,m}}^{i}\prod_{\substack{h=1\\
h\neq i}}^{m}(-1)^{k_h}\binom{k+j_h}{j_h}\binom{k+1}{k_h}(a_h-a_i)^{-j_h-k_h} \\
& = \sum_{s_1+...+s_m = n_j}^{i}\bigg[\prod_{\substack{w=1\\
w\neq i}}^{m}(a_w-a_i)^{-s_w}\bigg] \sum_{\substack{j_v+k_v=s_v\\
v=1,...,m}}^{i}\prod_{\substack{h=1\\
h\neq i}}^{m}(-1)^{k_h}\binom{k+j_h}{j_h}\binom{k+1}{k_h}. \\
\end{split}
\end{equation}
Once again, the upper index on the sum denotes that the respective term with sub-index equal to $i$ is not considered. Having grouped every non-compatible term, it is now left to be proven that the factor of each one is zero. That is, one must prove that, for any fixed combination $\mathcal{S} = \{s_1,...,s_m\}$ such that $s_1+...+s_m = n_j$ (without considering $s_i$), the term
\begin{equation}
D_{n_j,k}(\mathcal{S}) \equiv \sum_{\substack{j_v+k_v=s_v\\
v=1,...,m}}^{i}\prod_{\substack{h=1\\
h\neq i}}^{m}(-1)^{k_h}\binom{k+j_h}{j_h}\binom{k+1}{k_h}
\end{equation}
is identically zero, where $1 \leq n_j \leq n \leq k$. This will be done by strong induction over $n_j$. For $n_j=1$ the only possible combination is when only one element of $\mathcal{S}$ is equal to one, while the others are zero. Without loss of generality, this element can be taken to be $s_1$. Then, the sum
\begin{equation}
\sum_{\substack{j_v+k_v=s_v\\
v=1,...,m}}^{i}
\end{equation}
has two terms: One where $(j_1,k_1)=(1,0)$ and another where $(j_1,k_1)=(0,1)$, where in both cases all of the other $(j_v,k_v)$ pairs are zero. Thus
\begin{equation}
\begin{split}
D_{1,k}(\mathcal{S}) & = \bigg[(-1)^{0}\binom{k+1}{1}\binom{k+1}{0}+(-1)^{1}\binom{k+0}{0}\binom{k+1}{1}\bigg]\prod_{\substack{h=2\\
h\neq i}}^{m}(-1)^{0}\binom{k+0}{0}\binom{k+1}{0} \\
& = (k+1)-(k+1) \\
& = 0.
\end{split}
\end{equation}
Now the inductive step will be performed. Suppose that, for any integer $n'$ such that $1\leq n'\leq n_j$, any combination $\mathcal{S} = \{s_1,...,s_m\}$ such that $s_1+...+s_m = n'$ (without considering $s_i$) produces a $D_{n',k}(\mathcal{S})$ term which vanishes. Take $n_j+1$ and an arbitrary combination $\mathcal{S}' = \{s'_1,...,s'_m\}$ such that $s'_1+...+s'_m = n_j+1$ (without considering $s'_i$). Now it must be proven that $D_{n_j+1,k}(\mathcal{S}')$ is zero.
It is clear that there exists a combination $\mathcal{S} = \{s_1,...,s_m\}$ such that $s_1+...+s_m = n_j$ and all of the elements of $\mathcal{S}$ are equal to the ones of $\mathcal{S}'$ with the exception of one, which differs by one. This can be done by just taking a copy of $\mathcal{S}'$ except for one element $s'$ greater than zero (which always exists, as the elements sum to $n_j+1\geq 1$), for which the element $s'-1$ is taken instead. Without loss of generality, this element can be taken as the first one. Thus,
\begin{equation}
\mathcal{S}' = \{s_1+1,s_2,...,s_m\},
\end{equation}
and therefore
\begin{equation}
\begin{split}
D_{n_j+1,k}(\mathcal{S}') & = \sum_{j_1+k_1=s_1+1}\sum_{\substack{j_v+k_v=s_v\\
v=2,...,m}}^{i}\prod_{\substack{h=1\\
h\neq i}}^{m}(-1)^{k_h}\binom{k+j_h}{j_h}\binom{k+1}{k_h} \\
& = \sum_{j_1+k_1=s_1+1}(-1)^{k_1}\binom{k+j_1}{j_1}\binom{k+1}{k_1}\sum_{\substack{j_v+k_v=s_v\\
v=2,...,m}}^{i}\prod_{\substack{h=2\\
h\neq i}}^{m}(-1)^{k_h}\binom{k+j_h}{j_h}\binom{k+1}{k_h}. \\
\end{split}
\end{equation}
The second sum of this equation can be expressed as a particular case of $D$. That is, if one takes the combination $\mathcal{S}'' = \{0,s_2...,s_m\}$, then $0+s_2+...+s_m = n_j-s_1$ and thus
\begin{equation}
\begin{split}
D_{n_j-s_1,k}(\mathcal{S}'') & = (-1)^0\binom{k+0}{0}\binom{k+1}{0}\sum_{\substack{j_v+k_v=s_v\\
v=2,...,m}}^{i}\prod_{\substack{h=2\\
h\neq i}}^{m}(-1)^{k_h}\binom{k+j_h}{j_h}\binom{k+1}{k_h} \\
& = \sum_{\substack{j_v+k_v=s_v\\
v=2,...,m}}^{i}\prod_{\substack{h=2\\
h\neq i}}^{m}(-1)^{k_h}\binom{k+j_h}{j_h}\binom{k+1}{k_h}. \\
\end{split}
\end{equation}
This is because the sum over $(j_1,k_1)$ has only one term, where the values of these variables are zero, and thus can be factored out. This implies that
\begin{equation}
\begin{split}
D_{n_j+1,k}(\mathcal{S}') & = \sum_{j_1+k_1=s_1+1}(-1)^{k_1}\binom{k+j_1}{j_1}\binom{k+1}{k_1}D_{n_j-s_1,k}(\mathcal{S}''). \\
\end{split}
\end{equation}
There are two distinct possibilities. If $s_1<n_j$, then $n_j-s_1 \geq 1$ and thus $D_{n_j-s_1,k}(\mathcal{S}'')$ fulfills the requirements of the inductive hypothesis. This means that $D_{n_j-s_1,k}(\mathcal{S}'')$ vanishes, and so $D_{n_j+1,k}(\mathcal{S}')$ vanishes too. The other case, for which $s_1 = n_j$, does not fulfill the conditions of the inductive hypothesis (which requires a positive index), and in fact the term turns out to be equal to one: $D_{0,k}(\mathcal{S}'') = 1$. This in turn implies that
\begin{equation}
\label{Non_trivial}
\begin{split}
D_{n_j+1,k}(\mathcal{S}') & = \sum_{j_1+k_1=n_j+1}(-1)^{k_1}\binom{k+j_1}{j_1}\binom{k+1}{k_1} \\
& = \sum_{k_1=0}^{n_j+1}(-1)^{k_1}\binom{k+n_j+1-k_1}{n_j+1-k_1}\binom{k+1}{k_1}. \\
\end{split}
\end{equation}
Although non-trivial, this expression is more manageable and its nullity can be proven once again by induction. This is done in \hyperref[AppendixA]{Appendix A}. With this, in either case, $D_{n_j+1,k}(\mathcal{S}')$ vanishes and the inductive step is fulfilled.
Having proven that $D_{n_j,k}(\mathcal{S})$ vanishes for any integer $n_j$ and any combination $\mathcal{S}$ with the given conditions, this consequently proves that $C_j = 0$ for $j = 0,...,n-1$ and, by means of Eq. (\ref{Expansion}), finalizes the proof that (\ref{Polynomial}) fulfills the conditions (\ref{property}) for the MTP. $\blacksquare$
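The vanishing of the final alternating sum in Eq. (\ref{Non_trivial}), proven in Appendix A, can also be confirmed numerically for small parameters; writing $N = n_j+1$ and $t = k_1$, the following check (an illustration, not part of the proof) runs over a range of values:

```python
# Numerical confirmation (not part of the proof) of the alternating sum:
# sum_{t=0}^{N} (-1)^t C(k+N-t, N-t) C(k+1, t) = 0, with N = n_j + 1.
from math import comb

def alternating_sum(N, k):
    return sum((-1) ** t * comb(k + N - t, N - t) * comb(k + 1, t)
               for t in range(N + 1))

assert all(alternating_sum(N, k) == 0
           for k in range(1, 9)
           for N in range(1, k + 2))
```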
\section{Discussion and conclusions}
A benefit of using an MTP instead of an STP is that one can obtain greater accuracy for the interpolation of a function with the knowledge of fewer derivatives at each point. That is, if one knows the first $k$ derivatives of a function at $m$ different points, the resulting MTP has the same degree (and thus the same accuracy) as an STP built from a much higher number of derivatives ($mk+m-1$) at a single point.
Although generally expensive to be computed, there can be parts of the expression (\ref{Polynomial}) which can be calculated without knowledge of the function to be interpolated, and so can be obtained beforehand and saved for optimization. Such terms include
\begin{equation}
\label{comp1}
\bigg[\prod^{m}_{\substack{h=1\\
h\neq g}}\frac{(x-a_h)}{(a_g-a_h)}\bigg]^{k+1}
\end{equation}
(which are powers of LP's factors), and
\begin{equation}
\label{comp2}
\prod^{m}_{\substack{l=1\\
l\neq i}}\frac{(k+j_l)!}{k!}(a_l-a_i)^{-j_l},
\end{equation}
where the second one is to be calculated for $0\leq j_l \leq k$.
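This precomputation can be sketched as follows (an illustration; the helper name is chosen here): the factors (\ref{comp2}) depend only on $k$ and the node set, so they can be tabulated once and reused for every function later interpolated on the same nodes.

```python
# Sketch of the function-independent precomputation (name chosen here):
# table[i, l, j] = (k+j)!/k! * (a_l - a_i)**(-j) for l != i, 0 <= j <= k.
import math

def ratio_table(A, k):
    table = {}
    for i, ai in enumerate(A):
        for l, al in enumerate(A):
            if l != i:
                for j in range(k + 1):
                    table[i, l, j] = (math.factorial(k + j) / math.factorial(k)
                                      * (al - ai) ** (-j))
    return table

tab = ratio_table([0.0, 1.0, 3.0], k=2)
assert abs(tab[0, 1, 0] - 1.0) < 1e-12    # j = 0 gives an empty factor
assert abs(tab[0, 1, 2] - 12.0) < 1e-12   # 4!/2! * 1**(-2) = 12
```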
As for the limiting behaviour, this is explored in Ref.~\cite{franssens1999} for general interpolating basis functions, where convergence to the function is found in appropriate analytic regions of the function (i) when the number of derivatives $k$ tends to infinity for fixed $m$, and (ii) when the spacing between the points becomes zero for fixed $k$.
Apart from discussing convergence of the series with $k$ going to infinity, the work on Ref.~\cite{fine1962} gives a general method for obtaining the MTP for analytic functions in terms of an expansion
\begin{equation}
\sum_{t=0}^{k}[p(z)]^{t}p_{m-1,t}(z),
\end{equation}
where
\begin{equation}
p(z) = \prod_{h=1}^{m}(z-a_h),
\end{equation}
and $p_{m-1,t}(z)$ are polynomials of degree $m-1$ whose coefficients can be calculated in terms of a contour integral that could be generally computed via the Cauchy residue theorem. Nonetheless, an explicit expression for such coefficients is not given.
An explicit expression for the MTP was in fact already introduced by J. L. López and N. M. Temme for analytic functions, first for two distinct points~\cite{2Point_N_Temme} and later for an arbitrary number of points (some of which could also be repeated)~\cite{MPoint_N_Temme}. In this work, the authors presented a Taylor series expansion at different points, considering the remainder of the finite polynomial and the convergence radius. In the context of this paper, their result, applied to a finite polynomial and to distinct points, is
\begin{equation}
\label{Q_km}
P_{k,m}(z) = \sum_{n=0}^{k}q_{n,m}(z)\prod_{l=1}^{m}(z-a_l)^{n},
\end{equation}
where $q_{n,m}(z)$ are polynomials of degree $m-1$ given by
\begin{equation}
q_{n,m}(z) \equiv \sum_{g=1}^{m}A_{n,g}\prod^{m}_{\substack{h=1\\
h\neq g}}\frac{(z-a_h)}{(a_g-a_h)},
\end{equation}
and the constants $A_{n,g}$ can be given by the Cauchy integral
\begin{equation}
\label{A_ng_Cauchy}
A_{n,g} = \frac{1}{2\pi i}\int_{\mathcal{C}}\frac{f(w)\,\mathrm{d}w}{(w-a_g)\prod_{h=1}^{m}(w-a_h)^n},
\end{equation}
where the contour of integration $\mathcal{C}$ is a simple closed loop which encircles all of the points of the set $\mathcal{A}$ of points in the counterclockwise direction and is contained in the analytical region $\Omega$ of the function. The aforementioned explicit expression for the MTP is obtained when one takes an alternate expression for these constants, given by the composite derivative
\begin{equation}
\label{A_ng2}
A_{n,g} = \bigg[\frac{1}{n!}\frac{\mathrm{d}^{n}}{\mathrm{d}w^{n}}\frac{f(w)}{\prod^{m}_{\substack{h=1\\
h\neq g}}(w-a_h)^{n}}\bigg]\bigg|_{w=a_g}+\sum_{\substack{l=1\\
l\neq g}}^{m}\bigg[\frac{1}{(n-1)!}\frac{\mathrm{d}^{n-1}}{\mathrm{d}w^{n-1}}\frac{f(w)}{(w-a_g)\prod^{m}_{\substack{h=1\\
h\neq l}}(w-a_h)^{n}}\bigg]\bigg|_{w=a_l},
\end{equation}
and which was also presented in this work. This formula holds for $n=1,2,3,...$, while for the $n=0$ case one can directly obtain that $A_{0,g} = f(a_g)$, as Eq. (\ref{A_ng_Cauchy}) reduces to the classical Cauchy integral formula. Furthermore, the study also presents the multi-point generalization of the disk as the region of convergence for one point, given by
\begin{equation}
O_m \equiv \{ z\in \Omega, \, \prod_{h=1}^{m}|z-a_h| < r\}, \qquad \qquad r \equiv \mathrm{Inf}_{w \in \mathbb{C}/\Omega}\bigg\{\prod_{h=1}^{m}|w-a_h|\bigg\}.
\end{equation}
It is also worth noting that a general Laurent series for multiple points is also analyzed, which is something that was not considered here.
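The $n=0$ case mentioned above can be checked numerically (an illustrative sketch; the contour center and radius, the function $f=\exp$, and all names are assumptions chosen here so that the circle encloses the points): approximating the Cauchy integral (\ref{A_ng_Cauchy}) with $n=0$ by a trapezoidal rule on a circular contour recovers $f(a_g)$.

```python
# Numerical sketch of the n = 0 case (illustrative choices throughout):
# a trapezoidal rule on a circle approximates the Cauchy integral and
# recovers A_{0,g} = f(a_g).
import cmath

def a0g(f, points, g, center=0.5, radius=2.0, steps=2000):
    total = 0.0
    for s in range(steps):
        theta = 2 * cmath.pi * s / steps
        w = center + radius * cmath.exp(1j * theta)
        dw = 1j * radius * cmath.exp(1j * theta) * (2 * cmath.pi / steps)
        total += f(w) / (w - points[g]) * dw
    return total / (2j * cmath.pi)

points = [0.0, 1.0]
assert abs(a0g(cmath.exp, points, g=0) - cmath.exp(0.0)) < 1e-6
assert abs(a0g(cmath.exp, points, g=1) - cmath.exp(1.0)) < 1e-6
```

The trapezoidal rule converges very quickly for smooth periodic integrands, so a modest number of steps already gives high accuracy here.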
Although the explicit expression of the polynomial (\ref{Q_km}) does not resemble the one presented here, the uniqueness of the solution ensures that, under suitable algebraic manipulations, Eqs. (\ref{Polynomial}) and (\ref{Q_km}) must agree. The main difference between the two results is that the expansion of each polynomial is performed in a different manner, which produces different expressions for the coefficients of each term. That is, the constants $A_{n,g}$ are expressed as $n$-th derivatives of quotients between the function $f(w)$ and powers of $(w-a_h)$, while the constants $F^{n,g}_{k,m}[\mathcal{A}]$ are expressed in terms of a multinomial sum.
The conclusion is thus that the expression presented here is novel in its form. The usefulness of either expression will depend on the specific purposes for which it is required. An example for where the polynomial (\ref{Polynomial}) is more applicable is in the context of interpolation problems where the numerical values of the function and its derivatives at specific points are known, but not its general form. Expanding the derivatives in the constant $A_{n,g}$ as given by Eq. (\ref{A_ng2}) would ultimately result in expressions similar to the factors $F^{n,g}_{k,m}[\mathcal{A}]$, but this can be computationally expensive to produce and the resulting terms would not be grouped so naturally (for example, additional algebraic manipulations would be needed to obtain the factors (\ref{comp2}) which are useful for optimizing the computation of the polynomial). Thus, the expression (\ref{Polynomial}) can be more appropriate for explicit computations.
\section*{Acknowledgements}
Thanks to Stefan Nellen for proofreading the manuscript.
\section*{Financial disclosure}
The author acknowledges the project DGAPA-UNAM IN103319 for financial support.
\section*{Conflict of interest}
The author declares no potential conflict of interests.
\section{Introduction}
Kashiwara's crystals encode the structure of certain bases, called crystal bases \cite{Kashiwara 1991}, for highest weight representations of quantum groups $U_q(\mathfrak{g})$ as $q$ goes to zero \cite{Hong and Kang}. In this paper, we will focus on crystals corresponding to representations of both simple and affine Lie algebras \cite{carter}. As combinatorial objects, they can be visualized as directed graphs, whose edges are given by Kashiwara operators which are derived from the Chevalley generators of the corresponding quantum group \cite{Bump and Schilling}. The resulting combinatorics provides much insight into the original representations. For instance, for irreducible representations of simple Lie algebras, by developing a tensor product rule for the Kashiwara operators, one can visualize the decomposition of the tensor product of such representations into irreducible components \cite{Hong and Kang}.
Given a dominant weight $\lambda$ for a simple Lie algebra, there is a corresponding connected crystal graph, $B(\lambda)$, whose vertices can be realized as semistandard Young tableaux (certain fillings of Young diagrams \cite{Fulton}).
We can also consider crystals for affine Lie algebras. Here, we will focus on Kirillov-Reshetikhin (KR) crystals, which are finite affine connected crystals, but can be decomposed into disjoint unions of classical crystals upon the removal of the affine edges \cite{Kirillov Reshetikhin 1990}. These crystals are indexed by $r\times s$ rectangles and are denoted $B^{r,s}$. There are type specific models for KR crystals, like that given by Kashiwara-Nakashima (KN) tableaux \cite{FOS}, as well as type independent models such as the quantum alcove model, which works uniformly in all untwisted affine types \cite{Lenart Lubovsky 2015b,LNSSS 2016,LNSSS III}.
We provide an explicit crystal isomorphism between the two models of the above mentioned KR crystals: the tableau model and the quantum alcove model.
Lenart and Postnikov defined the so-called alcove model for highest weight crystals associated to a semisimple Lie algebra $\mathfrak{g}$. In fact, the model was defined more generally, for symmetrizable Kac-Moody algebras $\mathfrak{g}$ \cite{Lenart Postnikov 2007, Lenart Postnikov 2008}. The alcove model is a discrete counterpart of the Littelmann path model.
Lenart and Lubovsky then generalized the alcove model to one for Kirillov-Reshetikhin (KR) crystals of affine Lie algebras; this is known as the quantum alcove model \cite{Lenart Lubovsky 2015b}, as it is based on the quantum Bruhat graph. This graph first appeared in connection with the quantum cohomology of flag varieties of the corresponding finite Weyl group \cite{Fulton Woodward 2004}. The path enumeration is determined by the choice of a certain sequence of alcoves, called an alcove path, like in the classical alcove model. If we restrict to paths in the usual Bruhat graph, we recover the classical alcove model. The mentioned paths in the quantum Bruhat graph first appeared in \cite{Lenart 2012}, where they index the terms in the specialization to $t=0$ of the Ram-Yip formula \cite{Ram Yip 2011} for Macdonald polynomials $P_{\lambda}(X;q,t)$.
Further, \cite{Lenart Lubovsky 2015b} defined crystal operators for the quantum alcove model, both the classical ones $f_i$, $i>0$, and the affine operator $f_0$. It was shown in \cite{LNSSS 2016} that the quantum alcove model uniformly describes tensor products of column shape KR crystals for all untwisted affine types. An explicit crystal isomorphism was given in \cite{Lenart Lubovsky 2015b} for types $A$ and $C$ between the objects of the quantum alcove model and tensor products of Kashiwara-Nakashima (KN) columns \cite{Kashiwara Nakashima 1994}, using the bijections constructed in \cite{Lenart 2012}.
In recent years, many applications have resulted from the quantum alcove model. In \cite{LNSSS 2016} it was shown that the so-called height statistic in the Ram-Yip formula mentioned above expresses the energy function on a tensor product of KR crystals, which endows it with an affine grading. The translation of this energy function to the tableau model in type $C$ was then given in \cite{Lenart Schilling 2011}. On the other hand, extending Sch\"{u}tzenberger's \textit{jeu de taquin} on Young tableaux to the quantum alcove model (which is based on so-called \textit{quantum Yang-Baxter moves}) results in a realization of the \textit{combinatorial $R$-matrix} \cite{Lenart Lubovsky 2015a}. Further, the quantum alcove model was used to determine \textit{keys}, also known as \textit{initial direction} in the (quantum) LS path model. These detect Demazure crystals inside highest weight ones (in the alcove model), and Demazure-type crystals inside tensor products of KR crystals (in the quantum alcove model), see~\cite{LNSSS III}. The computation of these keys is easy in the (quantum) alcove model and the bijection given in this paper then transfers the key computation from the (quantum) alcove model to the tableau model, where algorithms are only known for types $A$ and $C$ \cite{Santos, Santos2, Sheats}.
While the tableau model is simpler, it has less easily accessible information, so it is generally hard to use in specific computations (like those of the energy function, the combinatorial $R$-matrix and keys, as described above). As these computations are much simpler in the quantum alcove model, an alternative is to relate them to the tableau model, via an explicit affine crystal isomorphism. As was mentioned above, this has been done in types $A$ and $C$. Here we extend this work to types $B$ and $D$, where the corresponding bijection is much more involved, and has important additional features.
\subsection*{Acknowledgements.}
C.B. was partially supported by the NSF grant DMS-1101264.
C.L. was partially supported by the NSF grants DMS-1362627 and DMS-1855592.
A.S. was partially supported by the NSF grant DMS-1362627 and the Chateaubriand Fellowship from the Embassy of France in the United States.
\section{Background}
\subsection{Root Systems}\label{section root systems}
We mostly follow \cite{Humphreys}. Let $\mathfrak{g}$ be a complex semisimple Lie algebra and $\mathfrak{h}$ a Cartan subalgebra, whose rank is $n$.
Let $\Phi\subset \mathfrak{h}^*$ be the corresponding irreducible
\textit{root system},
$\mathfrak{h}^*_{\mathbb{R}}$
the real span of the roots, and $\Phi^{+}\subset\Phi$ the set of positive roots.
Let $\rho:=\frac{1}{2}(\sum_{\alpha\in\Phi^+}\alpha)$.
Let $\alpha_1,\hdots,\alpha_n\in\Phi^+$ be the corresponding \textit{simple roots}. We denote $\langle\cdot,\cdot\rangle$ the nondegenerate scalar product on $\mathfrak{h}^*_{\mathbb{R}}$ induced by the Killing form. Given a root $\alpha$, we consider the corresponding \textit{coroot} $\alpha^{\vee}:=2\alpha / \langle\alpha,\alpha\rangle$ and reflection $s_{\alpha}$.
Let $W$ be the corresponding \textit{Weyl group}, whose Coxeter generators are denoted, as usual, by $s_i:=s_{\alpha_i}$. The length function on $W$ is denoted by $l(\cdot)$. The \textit{Bruhat order} on $W$ is defined by its covers $w\lessdot ws_{\alpha}$, for $\alpha\in\Phi^+$, if $l(ws_\alpha) = l(w)+1$. These covers correspond to the labeled directed edges of the \textit{Bruhat graph} on $W:$ \[w\xrightarrow{\alpha} ws_\alpha \hspace{8pt}\text{for} \hspace{8pt}w\lessdot ws_{\alpha}.\]\label{bruhat order graph eq}
The \textit{weight lattice} $\Lambda$ is given by \[\Lambda = \{\lambda\in\mathfrak{h}^*_{\mathbb{R}}: \langle\lambda,\alpha\rangle\in\mathbb{Z}\hspace{8pt}\text{for}\hspace{6pt}\text{any}\hspace{6pt}\alpha\in\Phi^+\}.\]
The weight lattice $\Lambda$ is generated by the \textit{fundamental weights} $\omega_1,\hdots,\omega_n$, which form the dual basis to the basis of simple coroots, i.e., $\langle\omega_i,\alpha_j^{\vee}\rangle=\delta_{ij}$. The set $\Lambda^+$ of \textit{dominant weights} is given by $$\Lambda^+:=\{\lambda\in\Lambda:\langle\lambda,\alpha^{\vee}\rangle\geq 0 \hspace{8pt}\text{for}\hspace{4pt}\text{any}\hspace{8pt}\alpha\in\Phi^+\}.$$
Given $\alpha\in\Phi$ and $k\in\mathbb{Z}$, we denote by $s_{\alpha,k}$ the reflection in the affine hyperplane $$H_{\alpha,k}:=\{\lambda\in\mathfrak{h}^*_{\mathbb{R}} : \langle\lambda,\alpha^{\vee}\rangle = k\}.$$ \label{affine hyperplane eq}
These reflections generate the \textit{affine Weyl Group} $W_{\text{aff}}$ for the \textit{dual root system} $\Phi^{\vee}:=\{\alpha^{\vee}|\alpha\in\Phi\}$. The hyperplanes $H_{\alpha,k}$ divide the real vector space $\mathfrak{h}^*_{\mathbb{R}}$ into open regions, called \textit{alcoves}. The \textit{fundamental alcove} $A_{\circ}$ is given by
$$A_{\circ}:= \{\lambda\in\mathfrak{h}^*_{\mathbb{R}} \,|\, 0<\langle\lambda,\alpha^{\vee}\rangle<1 \hspace{6pt}\text{for}\hspace{4pt}\text{all}\hspace{6pt}\alpha\in\Phi^+\}.$$\label{Fund Alcove eq}
Define $w\triangleleft ws_{\alpha}$, for $\alpha\in\Phi^+$, if $l(ws_{\alpha}) = l(w) - 2\langle\rho,\alpha^{\vee}\rangle + 1$. The \textit{quantum Bruhat graph} \cite{Fulton Woodward 2004} is defined by adding to the Bruhat graph (whose covers are the upward facing arrows in Figure~\ref{qbruhat graph}) the following edges (the downward facing arrows in Figure~\ref{qbruhat graph}), labeled by positive roots $\alpha$:
$$w\xrightarrow{\alpha}ws_{\alpha}\hspace{8pt}\text{for}\hspace{6pt} w\triangleleft ws_{\alpha}.$$\label{quantum bruhat graph eq}
\begin{figure
\centering
\includegraphics[scale=.35]{qbruhat1.pdf}
\caption{The quantum Bruhat graph for the Weyl group of type $A_3$}\label{qbruhat graph}
\end{figure}
\subsection{Kirillov-Reshetikhin (KR) crystals}\label{KR section}
Given a simple or affine Lie algebra $\mathfrak{g}$, the corresponding \textit{Kashiwara crystal} is the basis resulting from taking the limit of the quantum group $U_q(\mathfrak{g})$ as $q$ goes to zero. Such bases can be given the structure of a colored oriented graph, which we call a \textit{crystal graph}, whose arrows are defined by the \textit{Kashiwara operators}, which are related to the Chevalley generators \cite{Hong and Kang}. We will now detail the resulting combinatorial structure of such crystal graphs. Our definitions closely follow those given in \cite{Bump and Schilling}.
\begin{definition}\label{combinatorial def of crystal}
Let $\Phi$ be the root system and $\Lambda$ the weight lattice, with simple roots $\alpha_i$ for $i$ in an indexing set $I$, associated to a Lie algebra $\mathfrak{g}$. A \textit{crystal} of type $\Phi$ is then a nonempty set $B$ along with the following maps, for $i\in I$ and an auxiliary element $0\notin B$:
\begin{enumerate}
\item the \textit{Kashiwara operators} $e_i, f_i: B\rightarrow B \sqcup \{0\}$, with the condition that, for $x,y\in B$, we have $e_i(x)=y$ if and only if $f_i(y) = x$;
\item the \textit{$e_i$-string} and \textit{$f_i$-string} maps $\varepsilon_i, \varphi_i : B\rightarrow \mathbb{Z} \sqcup \{-\infty\}$, with the condition that, if $x,y\in B$ are such that $e_i(x) = y$, then $\varepsilon_i(y) = \varepsilon_i(x) -1$ and $\varphi_i(y) = \varphi_i(x)+1$; in the case that $\varphi_i(x) = -\infty$ or $\varepsilon_i(x) = -\infty$, we require that $e_i(x) = f_i(x) = 0$;
\item the \textit{weight} map $wt: B\rightarrow \Lambda$, such that, if $x,y\in B$ satisfy $e_i(x) = y$, then $wt(y) = wt(x) + \alpha_i$; furthermore, for each $x\in B$ and $i\in I$, we have $\langle wt(x),\alpha_i^{\vee}\rangle = \varphi_i(x) - \varepsilon_i(x)$.
\end{enumerate}
\end{definition}
The associated \textit{crystal graph} is then built with elements of $B$ as vertices and, for $x,y\in B$, we draw an edge $y\xrightarrow{i} x$ exactly when $f_i(y) = x$.
\vspace{12pt} Given two $\mathfrak{g}$-crystals $B_1$ and $B_2$, we define their tensor product $B_1\otimes B_2$ as follows. As a set, $B_1\otimes B_2$ is the Cartesian product of the two sets. For $b=b_1\otimes b_2\in B_1\otimes B_2$, the weight function is simply $wt(b) = wt(b_1)+wt(b_2)$. The crystal operator $f_i$ is given by
$$f_i(b_1\otimes b_2) = \begin{cases}
f_i(b_1)\otimes b_2 & \text{if}\hspace{5pt} \varepsilon_i(b_1)\geq \varphi_i(b_2), \\
b_1\otimes f_i(b_2) & \text{otherwise,}
\end{cases}$$
while $e_i(b)$ is defined similarly.
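In concrete terms, the tensor product rule above can be sketched in code. The dictionary encoding and the toy type $A_1$ standard crystal used here are illustrative assumptions on our part, not part of the model; $0$ is played by \texttt{None}.

```python
# A sketch of the tensor product rule above, on the type A_1 standard
# crystal B = {1, 2} with f(1) = 2; all names are illustrative.
STD = {"f": {1: 2}, "eps": {1: 0, 2: 1}, "phi": {1: 1, 2: 0}}

def f_tensor(b, crystal=STD):
    """Apply f_i to b = (b1, b2), following the rule displayed above."""
    b1, b2 = b
    if crystal["eps"][b1] >= crystal["phi"][b2]:   # act on the first factor
        fb1 = crystal["f"].get(b1)
        return (fb1, b2) if fb1 is not None else None
    fb2 = crystal["f"].get(b2)                     # otherwise act on the second
    return (b1, fb2) if fb2 is not None else None
```

For instance, with this convention $f(1\otimes 1) = 1\otimes 2$, since $\varepsilon(1) = 0 < 1 = \varphi(1)$.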
\vspace{12pt} The \textit{highest weight crystal} $B(\lambda)$ of highest weight $\lambda\in\Lambda^+$ is a certain crystal with a unique element $\mu_{\lambda}$ such that $e_i(\mu_{\lambda}) = 0$ for all $i\in I$ and $wt(\mu_{\lambda}) = \lambda$. It encodes the structure of the crystal basis of the irreducible $U_q(\mathfrak{g})$-representation with highest weight $\lambda$ as $q$ goes to $0$ \cite{Bump and Schilling}.
\vspace{12pt} A \textit{Kirillov-Reshetikhin (KR) crystal} \cite{Kirillov Reshetikhin 1990} is a finite crystal $B^{r,s}$ for an affine algebra, associated to a rectangle of height $r$ and length $s$. We now describe the KR crystals $B^{r,1}$ for type $A_{n-1}^{(1)}$ (where $r\in\{1,2,\hdots,n-1\}$), as well as for types $B_n^{(1)}, C_n^{(1)},$ and $D_n^{(1)}$ (where $r\in \{1,2,\hdots,n\}$). As a classical type crystal (i.e., after removing the $f_0$ arrows), in types $A_{n-1}^{(1)}$ and $C_n^{(1)}$ the KR crystal
$B^{r,1}$ is isomorphic to the corresponding highest weight crystal $B(\omega_r)$. On the other hand, in types $B_n^{(1)}$ and $D_n^{(1)}$, the KR crystal $B^{r,1}$, as a classical type crystal, is isomorphic to the disjoint union $B(\omega_r) \sqcup B(\omega_{r-2}) \sqcup B(\omega_{r-4})\sqcup\hdots$
where each $B(\omega_k)$ is given by Kashiwara-Nakashima (KN) columns of height $k$ of the corresponding type.
\begin{definition}\label{def KN column}
{\rm Kashiwara-Nakashima (KN) columns} of height $k$ are fillings of a column of length $k$ with entries in
$\{1<2<\hdots <n\}$ in type $A_{n-1}$, and with entries in $\{1<\hdots <n<\overline{n} <\hdots <\overline{1}\}$
in types $B_n$, $C_n$, and $D_n$, subject to
the following conditions:
\begin{enumerate}
\item The entries are strictly increasing from top to bottom, with the exceptions that:
\begin{enumerate}
\item the letter $0$ (ordered between $n$ and $\overline{n}$) can appear in type $B_n$ and can be repeated, and
\item the letters $n$ and $\overline{n}$ in type $D_n$ can alternate.
\end{enumerate}
\item If both letters $j$ and $\overline{\jmath}$ appear in the column, and $j$ is in the $a$-th box from the top and $\overline{\jmath}$ lies in the $b$-th box from the bottom, then $a+b\leq j$.
\end{enumerate}
\end{definition}
\begin{example}
A column of height $5$ in type $B_6$:
$$
\begin{array}{l}\tableau{{2}\\{3}\\{0}\\{0}\\{\overline{2}}} \end{array}
$$
\end{example}
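The conditions of Definition~\ref{def KN column} can be checked mechanically. The sketch below treats only the type $C_n$ case (so no letter $0$ and no alternating $n$, $\overline{n}$); encoding a barred letter $\overline{\imath}$ as the signed integer $-i$ is our own illustrative convention.

```python
def is_kn_column_C(col, n):
    """Check the KN column conditions in type C_n: entries from
    {1 < ... < n < n-bar < ... < 1-bar} (i-bar encoded as -i), strictly
    increasing, and whenever j is the a-th box from the top and j-bar
    the b-th box from the bottom, a + b <= j."""
    key = lambda x: x if x > 0 else 2 * n + 1 + x   # position in the total order
    if any(not 1 <= abs(x) <= n for x in col):
        return False
    if any(key(col[i]) >= key(col[i + 1]) for i in range(len(col) - 1)):
        return False
    for j in range(1, n + 1):
        if j in col and -j in col:
            a = col.index(j) + 1              # position of j from the top
            b = len(col) - col.index(-j)      # position of j-bar from the bottom
            if a + b > j:
                return False
    return True
```

For example, the column $(1,\overline{1})$ fails condition (2), since $a+b = 2 > 1$.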
For our purposes here, we would like to consider fillings of partition shapes, rather than just rectangular ones. We define the associated crystal as follows.
\begin{definition} For a partition $\textbf{p} = (p_1\geq p_2\geq\hdots\geq p_r)$, we define $$B^{\textbf{p}}:= B^{p_1,1}\otimes B^{p_2,1}\otimes\hdots\otimes B^{p_r,1}.$$
\end{definition}
\subsection{The quantum alcove model}
We say that two alcoves are \textit{adjacent} if they are distinct and have a common wall. Given a pair of adjacent alcoves $A$ and $B$, we write $A\xrightarrow{\beta} B$ if the common wall is of the form $H_{\beta,k}$ and the root $\beta\in\Phi$ points in the direction from $A$ to $B$.
\begin{definition}{\rm \cite{Lenart Postnikov 2007}}\label{def alcove path}
An {\rm alcove path} is a sequence of alcoves $(A_0,A_1,\hdots,A_m)$ such that $A_{j-1}$ and $A_j$ are adjacent, for $j=1,\hdots,m$. We say that an alcove path is {\rm reduced} if it has minimal length among all alcove paths from $A_0$ to $A_m$.
\end{definition}
\begin{definition}{\rm \cite{Lenart Postnikov 2007}}\label{def alcove lambda-chain}
The sequence of roots $(\beta_1,\beta_2,\hdots,\beta_m)$ is called a {\rm $\lambda$-chain} if $$A_\circ = A_0\xrightarrow{-\beta_1} A_1\xrightarrow{-\beta_2} \hdots \xrightarrow{-\beta_m} A_m = A_{-\lambda}$$ is a reduced alcove path.
\end{definition}
We now fix the dominant weight $\lambda=\omega_{p_1}+\hdots + \omega_{p_r}$ and an alcove path $\Pi = (A_0,\hdots,A_m)$ from $A_0 = A_\circ$ to $A_m = A_{-\lambda}$. Note that $\Pi$ is determined by the corresponding $\lambda$-chain of positive roots $\Gamma:=(\beta_1,\hdots,\beta_m)$. We let $r_i:=s_{\beta_i}$ and $J = \{j_1,j_2,\hdots,j_s\}\subseteq [m]$. The elements of $J$ are called \textit{folding positions}. We fold $\Pi$ in the hyperplanes corresponding to these positions and obtain a folded path. Like $\Pi$, the folded path can be recorded by a sequence of roots, namely $\Delta = \Gamma(J) = (\gamma_1,\gamma_2,\hdots,\gamma_m)$; here $\gamma_k = r_{j_1}r_{j_2}\hdots r_{j_p}(\beta_k)$, with $j_p$ the largest folding position less than $k$. Given $i\in J$, we say that $i$ is a \textit{positive folding position} if $\gamma_i >0$, and a \textit{negative folding position} if $\gamma_i < 0$. We denote the set of positive folding positions by $J^+$ and the set of negative ones by $J^-$.
\begin{definition}{\rm \cite{Lenart Lubovsky 2015b}}\label{admissible subset}
A subset $J = \{j_1<j_2<\hdots <j_s\}\subseteq [m]$ (possibly empty) is an {\rm admissible subset} if we have the following path in the quantum Bruhat graph on $W$:
$$1\xrightarrow{\beta_{j_1}}r_{j_1}\xrightarrow{\beta_{j_2}}r_{j_1}r_{j_2}\xrightarrow{\beta_{j_3}}\hdots\xrightarrow{\beta_{j_s}}r_{j_1}r_{j_2}\hdots r_{j_s}.$$ We call $\Delta = \Gamma(J)$ an \textit{admissible folding}. Let $\mathcal{A} = \mathcal{A}(\lambda)$ be the collection of all admissible subsets.
\end{definition}
See Example~\ref{Type A example} for an example of a $\lambda$-chain and an admissible subset.
\begin{theorem}{\rm \cite{LNSSS 2016}} The collection of all admissible subsets $\mathcal{A}(\lambda)$ is a combinatorial model for $B^{\textbf{p}}$.
\end{theorem}
\section{The bijection in types $A_{n-1}$ and $C_n$}
\subsection{The quantum alcove model and filling map in type $A_{n-1}$}
We start with the basic facts about the root system for type $A_{n-1}$. We can identify the space $\mathfrak{h}^*_{\mathbb{R}}$ with the quotient $V:=\mathbb{R}^n/\mathbb{R}(1,\ldots,1)$, where $\mathbb{R}(1,\ldots,1)$ denotes the subspace in $\mathbb{R}^n$ spanned by the vector $(1,\ldots,1)$. Let $\varepsilon_1,\ldots,\varepsilon_n\in V$ be the images of the coordinate vectors in $\mathbb{R}^n$. The root system is $\Phi = \{\alpha_{ij} := \varepsilon_i-\varepsilon_j : i\neq j, 1\leq i,j \leq n\}$. The simple roots are $\alpha_i = \alpha_{i,i+1}$, for $i = 1,\ldots,n-1$. The weight lattice is $\Lambda = \mathbb{Z}^n/\mathbb{Z}(1,\ldots,1)$. The fundamental weights are $\omega_i = \varepsilon_1 + \varepsilon_2 + \ldots + \varepsilon_i$, for $i = 1,2,\ldots,n-1$. A dominant weight $\lambda = \lambda_1\varepsilon_1 + \ldots + \lambda_{n-1}\varepsilon_{n-1}$ is identified with the partition $(\lambda_1\geq\lambda_2\geq\ldots\geq\lambda_{n-1}\geq\lambda_n=0)$ having at most $n-1$ parts. Note that $\rho = (n-1,n-2,\ldots,0)$. Considering the Young diagram of the dominant weight $\lambda$ as a concatenation of columns, whose heights are $\lambda'_1,\lambda'_2,\ldots,$ corresponds to expressing $\lambda$ as $\omega_{\lambda'_1}+\omega_{\lambda'_2}+\ldots$ (as usual, $\lambda'$ is the conjugate partition to $\lambda$).
The Weyl group $W$ is the symmetric group $S_n$, which acts on $V$ by permuting the coordinates $\varepsilon_1,\ldots,\varepsilon_n$. Permutations $w\in S_n$ are written in one-line notation $w = w(1)\ldots w(n)$. For simplicity, we use the same notation $(i,j)$, with $1\leq i < j \leq n$, for the positive root $\alpha_{ij}$ and the reflection $s_{\alpha_{ij}}$, which is the transposition $t_{ij}$ of $i$ and $j$.
We now consider the specialization of the quantum alcove model to type $A_{n-1}$. For any $k = 1,\ldots, n-1$, we have the following $\omega_k$-chain, denoted by $\Gamma(k)$ {\rm \cite{Lenart Postnikov 2008}}:
\begin{equation*}
\begin{array}{lllll}
(&\!\!\!\!(k,k+1),&(k,k+2),&\ldots,&(k,n)\,,\\
&&&\ldots\\
&\!\!\!\!(2,k+1),&(2,k+2),&\ldots,&(2,n)\,,\\
&\!\!\!\!(1,k+1),&(1,k+2),&\ldots,&(1,n)\,\,)\,.
\end{array}
\end{equation*}
We construct a $\lambda$-chain $\Gamma = (\beta_1,\beta_2,\ldots,\beta_m)$ as the concatenation $\Gamma := \Gamma_1\ldots \Gamma_{\lambda_1}$, where $\Gamma_i := \Gamma(\lambda'_i)$. Let $J = \{j_1<\ldots <j_s\}$ be a set of folding positions in $\Gamma$, not necessarily admissible, and let $T$ be the corresponding list of roots of $\Gamma$. The factorization of $\Gamma$ induces a factorization on $T$ as $T = T_1T_2 \ldots T_{\lambda_1}$. We denote by $T_1\ldots T_i$ the permutation obtained by multiplying the transpositions in $T_1,\ldots,T_i$ considered from left to right. For $w\in W$, written $w = w_1 w_2 \ldots w_n$,
let $w[i,j] = w_i\ldots w_j$. To each $J$ we can associate a filling of a Young diagram $\lambda$, as follows.
\begin{definition}\label{Filling Map}
Let $\pi_i = \pi_i(T) := T_1\ldots T_i$. We define the {\rm filling map}, which produces a filling of the Young diagram $\lambda$, by $\mbox{fill\_A}(J) = \mbox{fill\_A}(T) := C_1\ldots C_{\lambda_1}$, where $C_i := \pi_i[1,\lambda'_i]$.
We define the {\rm sorted filling map} $\mbox{sfill\_A}(J)$ to be the composition $\mbox{sort}\circ \mbox{fill\_A}(J)$, where ``{\rm sort}'' reorders increasingly each column of $\mbox{fill\_A}(J)$.
\end{definition}
\begin{definition}\label{circle order}
Define a {\rm circular order} $\prec_i$ on $[n]$ starting at $i$, by
$$i\prec_i i+1\prec_i\ldots\prec_i n\prec_i 1\prec_i\ldots\prec_i i-1.$$
\end{definition}
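This circular order amounts to rotating the usual order on $[n]$. A minimal helper, offered purely for experimentation (the function names are ours):

```python
def circ_key(i, n):
    """Key realizing the circular order starting at i on {1, ..., n}:
    i < i+1 < ... < n < 1 < ... < i-1  (a rotation of the usual order)."""
    return lambda v: (v - i) % n

def circ_min(i, values, n):
    # minimum of `values` with respect to the circular order starting at i
    return min(values, key=circ_key(i, n))
```

For example, in the order $\prec_3$ on $[6]$ one has $4\prec_3 1\prec_3 2$, so the $\prec_3$-minimum of $\{1,2,4\}$ is $4$.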
It is convenient to think of this order in terms of the numbers $1,\ldots,n$ arranged on a circle clockwise. We make the convention that, whenever we write $a\prec b\prec c\prec\ldots$, we refer to the circular order $\prec = \prec_a$. Below is a criterion for the quantum Bruhat graph on the Weyl group $W$, ${\rm{QBG}}(W)$, in type $A_{n-1}$ using these orders.
\begin{proposition}{\rm \cite{Lenart 2012}}\label{Type A QB criterion}
For $1\leq i<j\leq n$, we have an edge $w\xrightarrow{(i,j)} w(i,j)$ in ${\rm{QBG}}(W)$ if and only if there is no $k$ such that $i<k<j$ and $w(i)\prec w(k)\prec w(j)$.
\end{proposition}
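Under the same rotation encoding of circular orders, the criterion can be tested directly; the name \texttt{is\_edge\_A} and the list encoding of permutations are illustrative assumptions.

```python
def is_edge_A(w, i, j, n):
    """Criterion of the proposition: w -> w(i,j) is an edge of QBG(W) in
    type A_{n-1} iff there is no k with i < k < j and
    w(i) < w(k) < w(j) in the circular order starting at w(i).
    Here w is a permutation of [n] as a 1-based list."""
    pos = lambda v: (v - w[i - 1]) % n    # rank of v in the circular order
    # since w is a permutation, w(k) != w(i), so the condition reduces to
    # requiring pos(w(k)) > pos(w(j)) for every intermediate k
    return all(pos(w[k - 1]) > pos(w[j - 1]) for k in range(i + 1, j))
```

For instance, the identity has no edge labeled $(1,3)$ for $n=3$, since $w(1)=1\prec w(2)=2\prec w(3)=3$.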
\begin{example}\label{Type A example}
{\rm
Consider the dominant weight $\lambda = 3\varepsilon_1 +2\varepsilon_2 = \omega_1+2\omega_2$ in the root system $A_2$, which corresponds to the Young diagram $\begin{array}{l} \tableau{{}&{}&{}\\ {}&{}} \end{array}$. The corresponding $\lambda$-chain is $$\Gamma = \Gamma_1\Gamma_2\Gamma_3 = \Gamma(2)\Gamma(2)\Gamma(1)= \{\underline{(2,3)},\underline{(1,3)}|\underline{(2,3)},(1,3)|\underline{(1,2)},(1,3)\}\,.$$
Consider $J=\{1,2,3,5\}$, cf. the underlined roots, with
$T = \{(2,3),(1,3)|(2,3)|(1,2)\}.$
We write the permutations in Definition~\ref{admissible subset} as broken columns. Note that $J$ is admissible since, based on Proposition~\ref{Type A QB criterion}, we have
\begin{equation*}
\begin{array}{l} \tableau{{1}\\ {\textbf{2}}}\\ \\
\tableau{{\textbf{3}}} \end{array}
\begin{array}{l} \lessdot \end{array}
\begin{array}{l} \tableau{{\textbf{1}}\\ {3}}\\ \\
\tableau{{\textbf{2}}} \end{array}
\begin{array}{l} \lessdot \end{array}
\begin{array}{l} \tableau{{2}\\ {3}}\\ \\
\tableau{{1}} \end{array}
\:|\:
\begin{array}{l} \tableau{{{2}}\\ {\textbf{3}}}\\ \\
\tableau{{\textbf{1}}} \end{array}
\begin{array}{l} \triangleleft \end{array}
\begin{array}{l} \tableau{{2}\\ {1}}\\ \\
\tableau{{3}} \end{array}
\:|\:
\begin{array}{l} \tableau{{\textbf{2}}}\\ \\
\tableau{{\textbf{1}}\\ {3}} \end{array}
\begin{array}{l} \triangleleft \end{array}
\begin{array}{l} \tableau{{1}}\\ \\
\tableau{{2}\\ {3}} \end{array}
\:|\:
,
\end{equation*}
where the symbols $\lessdot$ and $\triangleleft$ signify Bruhat coverings as given in Section~\ref{section root systems}. By considering the top part of the last column in each segment, and by concatenating these columns left to right, we obtain $ fill\_A(J) = \begin{array}{l} \tableau{{2}&{2}&{1}\\ {3}&{1}} \end{array}$ and $ sfill\_A(J)= \begin{array}{l} \tableau{{2}&{1}&{1}\\ {3}&{2}} \end{array}$.
}\end{example}
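Combining Definition~\ref{admissible subset} with Proposition~\ref{Type A QB criterion}, admissibility of a folding sequence can be checked step by step. The following sketch (type $A_{n-1}$ only, with our own naming conventions) verifies the path in the example above.

```python
def is_admissible(betas, n):
    """Check that applying the reflections (i, j) in `betas`, in order,
    traces a path in QBG(W) of type A_{n-1}, starting at the identity;
    each edge is tested with the circular-order criterion."""
    w = list(range(1, n + 1))
    for (i, j) in betas:
        pos = lambda v: (v - w[i - 1]) % n      # circular order based at w(i)
        if not all(pos(w[k - 1]) > pos(w[j - 1]) for k in range(i + 1, j)):
            return False                        # no edge labeled (i, j) here
        w[i - 1], w[j - 1] = w[j - 1], w[i - 1] # move along the edge
    return True
```

Applied to the folding sequence $T = \{(2,3),(1,3),(2,3),(1,2)\}$ of the example, it confirms admissibility.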
\begin{theorem}{\rm \cite{Lenart 2012,Lenart Lubovsky 2015b}} The map ``$sfill\_A$'' is an affine crystal isomorphism between $\mathcal{A} (\lambda)$ and
$B^{\lambda'}:=B^{\lambda_1',1}\otimes B^{\lambda_2',1}\otimes\ldots$.
\end{theorem}
The proof of bijectivity is given in~\cite{Lenart 2012} by constructing an inverse map. We will now present the algorithm for constructing this map, as the corresponding construction in the other classical types is based on this algorithm.
\subsection{The inverse map in type $A_{n-1}$}
Consider $B^{{\lambda'}}:= B^{\lambda_1',1}\otimes B^{\lambda_2',1}\otimes\ldots = B(\omega_{\lambda_1'})\otimes B(\omega_{\lambda_2'})\otimes\ldots$. This is simply the set of column-strict fillings of the Young diagram $\lambda$ with integers in $[n]$. Fix a filling $b$ in $B^{{\lambda'}}$ written as a concatenation of columns $b_1\ldots b_{\lambda_1}$.
The algorithm for mapping $b$ to a sequence of roots $S\subset \Gamma$ consists of two sub-algorithms, which we call the \textit{Reorder algorithm} (this reverses the ordering of columns $b_i$ from \textit{sort} back to that of the corresponding column in the $fill\_A$ map) and the \textit{Path algorithm} (this provides the corresponding path in the quantum Bruhat graph).
The Reorder algorithm (Algorithm~\ref{Reorder algorithm}) takes $b$ as input and outputs a filling $ord\_A(b) = C$, a reordering of the column entries, based on the circle order given in Definition~\ref{circle order}.
\begin{algorithm}\label{Reorder algorithm}
(``ord\_A'')
let $C_1:=b_1$;
\hspace{8pt}for $i$ from $2$ to $\lambda_1$ do
\hspace{16pt} for $j$ from $1$ to $\lambda'_i$ do
\hspace{24pt} let $C_i(j):=\min_{\prec_{C_{i-1}(j)}}(b_i\setminus \{C_i(1),\ldots,C_i(j-1)\})$;
\hspace{16pt} end do;
\hspace{8pt} end do;
return $C:=C_1\ldots C_{\lambda_1}.$
\end{algorithm}
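A direct transcription of Algorithm~\ref{Reorder algorithm}, reproducing the example below; representing columns as plain Python lists is an encoding chosen here for illustration.

```python
def circ_min(start, values, n):
    # minimum in the circular order start < start+1 < ... < n < 1 < ... < start-1
    return min(values, key=lambda v: (v - start) % n)

def ord_A(columns, n):
    """Reorder algorithm: the j-th entry of column i is the circular minimum,
    based at the previous column's j-th entry, of the entries of b_i not yet
    placed.  Columns are lists of distinct values in [n], weakly decreasing
    in height."""
    C = [list(columns[0])]
    for b in columns[1:]:
        prev, cur, avail = C[-1], [], list(b)
        for j in range(len(b)):
            m = circ_min(prev[j], avail, n)
            cur.append(m)
            avail.remove(m)
        C.append(cur)
    return C
```

On the example following the algorithm, this recovers the displayed filling $C$ from $b$.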
\begin{example}{\rm
Algorithm~\ref{Reorder algorithm} gives the filling $C$ from $b$ below.
$$b = \tableau{{3}\\{5}\\{6}} \tableau{{2}\\{3}\\{4}}\tableau{{1}\\{2}\\{4}}\tableau{{2}\\ \\ \\} \xrightarrow{ord\_A} \tableau{{3}\\{5}\\{6}} \tableau{{3}\\{2}\\{4}}\tableau{{4}\\{2}\\{1}}\tableau{{2}\\ \\ \\} = C $$
}\end{example}
The Path algorithm (Algorithm~\ref{Greedy algorithm}) takes the reordered filling $C$ and outputs a sequence of roots $Path\_A(C) = S\subset \Gamma$. Let $C_0$ be the increasing column filled with $1,2,\ldots,n$. Throughout the algorithm, $A$ is viewed as a full permutation of $[n]$: it is initialized as $C_0$, and at the start of segment $i$ it extends the column $C_{i-1}$.
\begin{algorithm}\label{Greedy algorithm}
(``Path\_A'')
for $i$ from $1$ to $\lambda_1$ do
\hspace{12pt} let $S_i:=\emptyset$, $A := C_{i-1}$;
\hspace{12pt} for $(l,m)$ in $\Gamma_i$ do
\hspace{24pt} if $A(l)\neq C_i(l)$ and $A(l)\prec A(m)\preceq C_i(l)$ then let $S_i:=S_i,(l,m)$ and $A:=A(l,m)$;
\hspace{24pt} end if;
\hspace{12pt} end do;
end do;
return $S := S_1\ldots S_{\lambda_1}$.
\end{algorithm}
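A sketch of Algorithm~\ref{Greedy algorithm}. One reading of the assignment $A := C_{i-1}$, adopted here as an assumption, is that $A$ is the full permutation carried over between segments, starting from the identity $C_0$; equality is allowed at the right end of the chain condition, as the worked example below requires. With this reading, the sketch reproduces that example.

```python
def path_A(C, n):
    """Path algorithm sketch.  C is the reordered filling as a list of
    columns (heights weakly decreasing); returns the segments S_1, ..., S_r
    of the folding sequence."""
    def gamma(k):   # the omega_k-chain: (k,k+1),...,(k,n),(k-1,k+1),...,(1,n)
        return [(l, m) for l in range(k, 0, -1) for m in range(k + 1, n + 1)]
    def chain(a, b, c):   # a < b <= c in the circular order starting at a
        return a != b and (b - a) % n <= (c - a) % n
    A, S = list(range(1, n + 1)), []   # A starts as the identity column C_0
    for col in C:
        Si = []
        for (l, m) in gamma(len(col)):
            if A[l - 1] != col[l - 1] and chain(A[l - 1], A[m - 1], col[l - 1]):
                Si.append((l, m))                         # record the root
                A[l - 1], A[m - 1] = A[m - 1], A[l - 1]   # apply (l, m)
        S.append(Si)
    return S
```

On the example below ($\lambda = (3,2,1)$, $n=4$), this yields $S = \{(3,4),(2,4)\,|\,(2,3),(2,4)\,|\,(1,2)\}$.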
\begin{example}{\rm
Consider $b=\tableau{{1}&{1}&{2}\\{3}&{2}&\\{4}&&}\in B^{(3,2,1)}$, where $\lambda=\lambda' = (3,2,1)$ and $n=4$. We have $$\Gamma = \Gamma(3)\Gamma(2)\Gamma(1) = \{(3,4),(2,4),(1,4)|(2,3),(2,4),(1,3),(1,4)|(1,2),(1,3),(1,4)\}.$$
Notice that $ord\_A(b) = b$, and that $Path\_A\circ ord\_A(b)$ outputs
$S=S_1S_2S_3 = \{(3,4),(2,4)|(2,3),(2,4)|(1,2)\}$ via the following path in $\mbox{QBG}(W)$:
$$\begin{array}{l}\tableau{{1}\\{2}\\{ 3}} \\ \\ \tableau{{ 4}} \end{array} \!
\begin{array}{c} \\ \xrightarrow{(3,4)} \end{array}\!
\begin{array}{l}\tableau{{ 1}\\{ 2}\\{ 4} \\ \\ { 3}} \end{array} \begin{array}{c} \\ {\xrightarrow{(2,4)}}
\end{array}\! \begin{array}{l}\tableau{ {1}\\{ 3}} \\ \tableau{{ 4}\\ \\{ 2}}
\end{array}\! \:|\: \begin{array}{l}\tableau{{{ 1}}\\{{ 3}}} \\ \\
\tableau{{{4}}\\{ 2}}\end{array}\!\begin{array}{c} \\ \xrightarrow{(2,3)}
\end{array}\! \begin{array}{l}\tableau{{ 1}\\{ 4}}\\ \\ \tableau{{3}\\{ 2}}
\end{array} \begin{array}{c} \\ \xrightarrow{(2,4)} \end{array}\! \begin{array}{l}
\tableau{{ 1}\\{ 2}}\\ \\ \tableau{{3}\\{4}}\end{array} \:|\:
\begin{array}{l}\tableau{{ 1}\\ \\ {{ 2}}\\{3}\\{4}} \end{array}
\!\begin{array}{c} \\ \xrightarrow{(1,2)} \end{array}\! \begin{array}{l} \tableau{{ 2}\\ \\{1}\\{3}} \\ \tableau{{4}}\end{array} \! \, .$$
}\end{example}
\begin{theorem}{\rm \cite{Lenart 2012}}
If $\mbox{fill\_A}(T)=C$, then the output of the Path algorithm (Algorithm~\ref{Greedy algorithm}) $C\mapsto S$ is such that $S = T$. Moreover, the map ``$Path\_A\circ ord\_A$'' is the inverse of ``$\mbox{sfill\_A}$''.
\end{theorem}
\subsection{The quantum alcove model and filling map in type $C_n$}\label{type C setup}
We start with the basic facts about the root system for type $C_{n}$. We can identify the space $\mathfrak{h}^*_{\mathbb{R}}$ with $V:=\mathbb{R}^n$, with coordinate vectors $\varepsilon_1,\ldots,\varepsilon_n\in V$. The root system is $\Phi = \{\pm\varepsilon_i\pm\varepsilon_j \,:\, i\neq j,\, 1\leq i<j \leq n\}\cup\{\pm 2\varepsilon_i \,:\, 1\leq i \leq n\}$.
The Weyl group $W$ is the group of signed permutations $B_n$, which acts on $V$ by permuting the coordinates and changing their signs. A signed permutation is a bijection $w$ from $[\overline{n}]:=\{1<2<\ldots <n<\overline{n}<\overline{n-1}<\ldots <\overline{1}\}$ to $[\overline{n}]$ which satisfies $w(\overline{\imath}) = \overline{w(i)}$. Here, $\overline{\imath}$ is viewed as $-i$, so that $\overline{\overline{\imath}} = i$, and we can define $|i|$ and $sign(i)\in\{\pm 1\}$, for $i\in[\overline{n}]$. We will use the so-called \textit{window notation} $w = w(1)w(2)\ldots w(n)$. For simplicity, given $1\leq i<j\leq n$, we denote by $(i,j)$ and $(i,\overline{\jmath})$ the roots $\varepsilon_i-\varepsilon_j$ and $\varepsilon_i+\varepsilon_j$, respectively; the corresponding reflections, denoted in the same way, are identified with the compositions of transpositions $t_{ij}t_{\overline{\jmath}\overline{\imath}}$ and $t_{i\overline{\jmath}}t_{j\overline{\imath}}$, respectively. Finally, we denote by $(i,\overline{\imath})$ the root $2\varepsilon_i$ and the corresponding reflection, identified with the transposition $t_{i\overline{\imath}}$.
We now consider the specialization of the quantum alcove model to type $C_n$. For any $k = 1,\ldots,n$, we have the following (split) $\omega_k$-chain, denoted by $\Gamma^l(k)\Gamma^r(k)$ \cite{Lenart 2012}, where:
\begin{equation}\label{omegakchain}\Gamma^l(k):= \Gamma^{kk}\ldots \Gamma^{k1}, \hspace{8pt} \Gamma^r(k):=\Gamma^k\ldots \Gamma^2\,,\end{equation}
\vspace{-12pt}
\begin{equation*}
\begin{array}{lllll}
\;\;\;\;\;\;\;\;\;\;\,\Gamma^{ki}:=(
&\!\!\!\! (i,k+1),&(i,k+2),&\ldots,&(i,n)\,,\\
&\!\!\!\! (i,\overline{\imath})\,,\\
&\!\!\!\! (i,\overline{n}),&(i,\overline{n-1}),&\ldots,&(i,\overline{k+1})\,,\\
&\!\!\!\! (i,\overline{i-1}),&(i,\overline{i-2}),&\ldots,&(i,\overline{1})\:)\,,
\end{array}
\end{equation*}
\vspace{-9pt}
$$\!\!\!\!\!\Gamma^{i}:=((i,\overline{i-1}),(i,\overline{i-2}),\ldots,(i,\overline{1}))\,.$$
We refer to the four rows above in $\Gamma^{ki}$ as stages I, II, III, and IV respectively.
We can construct a $\lambda$-chain as a concatenation $\Gamma:=\Gamma_{1}^l\Gamma_{1}^r\ldots \Gamma_{\lambda_1}^l\Gamma_{\lambda_1}^r$, where $\Gamma^l_i:=\Gamma^l(\lambda'_i)$ and $\Gamma^r_i:=\Gamma^r(\lambda'_i)$. We will use interchangeably the set of positions $J$ in the $\lambda$-chain $\Gamma$ and the sequence of roots $T$ in $\Gamma$ in those positions, which we call a \textit{folding sequence}. The factorization of $\Gamma$ with factors $\Gamma^l_i$,$\Gamma^r_i$ induces a factorization of $T$ with factors $T^l_i$,$T^r_i$. We define the circle order $\prec_a$ in a similar way to Definition~\ref{circle order}, but on the set $[\overline{n}]$. Below is a criterion for ${\rm{QBG}}(W)$ in type $C_n$, analogous to Proposition~\ref{Type A QB criterion}.
\begin{proposition}\label{type C bruhat conditions}{\rm \cite{Lenart 2012}}
The quantum Bruhat graph of type $C_n$ has the following edges:
\begin{enumerate}
\item given $1\leq i < j\leq n$, we have an edge $w\xrightarrow{(i,j)} w(i,j)$ if and only if there is no $k$ such that $i<k<j$ and $w(i)\prec w(k)\prec w(j)$;
\item given $1\leq i < j\leq n$, we have an edge $w\xrightarrow{(i,\overline{\jmath})} w(i,\overline{\jmath})$ if and only if $w(i)<w(\overline{\jmath})$, $sign(w(i))=sign(w(\overline{\jmath}))$, and there is no $k$ such that $i<k<\overline{\jmath}$ and $w(i)\prec w(k)\prec w(\overline{\jmath})$;
\item given $1\leq i \leq n$, we have an edge $w\xrightarrow{(i,\overline{\imath})} w(i,\overline{\imath})$ if and only if there is no $k$ such that $i<k<\overline{\imath}$ (or equivalently, $i<k\leq n$) and $w(i)\prec w(k)\prec w(\overline{\imath})$.
\end{enumerate}
\end{proposition}
\begin{definition}\label{deffillc}
Given a folding sequence $T$, we consider the signed permutations
$\pi^l_i:=T_{1}^lT_1^r\ldots T_{i-1}^lT_{i-1}^rT^l_i$, $\pi^r_i:=\pi^l_iT^r_i.$
Then the {\rm filling map} is the map ``$\mbox{fill\_C}$'' from folding sequences $T$ in $\mathcal{A}(\lambda)$ to fillings $\mbox{fill\_C}(T) = C^l_{1}C^r_{1}\ldots C^l_{\lambda_1}C^r_{\lambda_1}$ of the shape $2\lambda$, which are viewed as concatenations of columns; here $C^l_i:=\pi^l_i[1,\lambda'_i]$ and $C^r_i:=\pi^r_i[1,\lambda'_i]$, for $i=1,\ldots,\lambda_1.$
We then define $\mbox{sfill\_C}: \mathcal{A}(\lambda)\rightarrow B^{\lambda'}$ to be the composition ``$\mbox{sort}\circ \mbox{fill\_C}$'', where ``{sort}'' reorders the entries of each column increasingly; here we represent a KR crystal $B^{r,1}$ as a {\rm split} (also known as {\rm doubled}) KN column of height $r$, see Section~{\rm \ref{invc}}.
\end{definition}
\begin{theorem}{\rm \cite{Lenart 2012,Lenart Lubovsky 2015b}}
The map ``$sfill\_C$'' is an affine crystal isomorphism between $\mathcal{A} (\lambda)$ and
$B^{\lambda'}$.
\end{theorem}
\subsection{The inverse map in type $C_n$}\label{inverse map type C section}\label{invc}
Recall from the construction of the filling map in type $A_{n-1}$ that we treated the columns of a filling as initial segments of permutations. However, the KN columns of type $C_n$ allow for both $i$ and $\overline{\imath}$ to appear as entries in such a column. In order to pursue the analogy with type $A_{n-1}$, cf. Definition~\ref{deffillc}, we need to replace a KN column with its \textit{split} version, i.e., two columns of the same height as the initial column. The splitting procedure, described below, gives an equivalent definition of KN columns, see Section~\ref{KR section}.
\begin{definition}\label{type C splitting}{\rm \cite{Lecouvey 2002}}
Let $C$ be a column and $I=\{z_1>\ldots >z_r\}$ be the set of unbarred letters $z$ such that the pair $(z,\overline{z})$ occurs in $C$. The column $C$ can be split when there exists a set of $r$ unbarred letters $J=\{t_1>\ldots>t_r\}\subset [n]$ such that
$t_1$ is the greatest letter in $[n]$ satisfying: $t_1<z_1, t_1\notin C$, and $\overline{t_1}\notin C$, and for $i=2,\ldots,r$, the letter $t_i$ is the greatest value in $[n]$ satisfying $t_i<min(t_{i-1},z_i),t_i\notin C$, and $\overline{t_i}\notin C$.
In this case we write:
\begin{enumerate}
\item $rC$ for the column obtained by changing $\overline{z_i}$ into $\overline{t_i}$ in $C$ for each letter $z_i\in I$, and by reordering if necessary,
\item $lC$ for the column obtained by changing $z_i$ into $t_i$ in $C$ for each letter $z_i\in I$, and by reordering if necessary.
\end{enumerate}
The pair $(lC,rC)$ is then called a {\rm split} (or {\rm doubled}) column.
\end{definition}
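The splitting procedure can be sketched as follows, again encoding $\overline{\imath}$ as the signed integer $-i$ (our own convention); the function name is illustrative.

```python
def split_C(col, n):
    """Sketch of the column splitting in the definition above, with i-bar
    encoded as -i.  Returns the pair (lC, rC); raises ValueError if the
    column cannot be split."""
    key = lambda x: x if x > 0 else 2 * n + 1 + x   # 1 < ... < n < n-bar < ... < 1-bar
    unbarred = {x for x in col if x > 0}
    I = sorted(unbarred & {-x for x in col if x < 0}, reverse=True)  # z_1 > ... > z_r
    used = {abs(x) for x in col}
    subs, prev = {}, n + 1
    for z in I:
        t = min(prev, z) - 1                  # greatest candidate t_i < min(t_{i-1}, z_i)
        while t >= 1 and t in used:           # t not in C and t-bar not in C
            t -= 1
        if t < 1:
            raise ValueError("column cannot be split")
        subs[z], prev = t, t
        used.add(t)
    # lC replaces each z_i by t_i; rC replaces each z_i-bar by t_i-bar
    lC = sorted((subs.get(x, x) if x > 0 else x for x in col), key=key)
    rC = sorted((x if x > 0 else -subs.get(-x, -x) for x in col), key=key)
    return lC, rC
```

For instance, the type $C_3$ column $(3,\overline{3})$ splits as $lC = (2,\overline{3})$, $rC = (3,\overline{2})$, since $t_1 = 2$ is the greatest letter below $z_1 = 3$ with $t_1,\overline{t_1}\notin C$.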
Given our fixed dominant weight $\lambda$, an element $b$ of $B^{\lambda'}$ can be viewed as a concatenation of KN columns $b_1\ldots b_{\lambda_1}$, with $b_i$ of height $\lambda_i'$. Let $b':=b^l_1b^r_1\ldots b^l_{\lambda_1}b^r_{\lambda_1}$ be the associated filling of the shape $2\lambda$, where $(b^l_i,b^r_i) := (lb_i,rb_i)$ is the splitting of the KN column $b_i$.
\vspace{12pt}The algorithm for mapping $b'$ to a sequence of roots $S\subset \Gamma$ is similar to the type $A_{n-1}$ one. The Reorder algorithm ``$ord\_C$'' for type $C_n$ is the obvious extension from type $A_{n-1}$. The Path algorithm ``$Path\_C$'' is also similar to its type $A_{n-1}$ counterpart, but merits discussion. Recall that an $\omega_k$-chain in type $C_n$ factors as $\Gamma^l(k)\Gamma^r(k)$. While the Path algorithm parses through $\Gamma^l(k)$, it outputs a chain from the previous right column to the current left column, reordered. While it parses through $\Gamma^r(k)$, it outputs a chain between the current left and right columns, both reordered.
\begin{theorem}{\rm \cite{Lenart 2012}}
The map ``$Path\_C\circ ord\_C\circ split\_C$'' is the inverse of the type $C_n$ ``$sfill\_C$'' map.
\end{theorem}
\section{The bijection in type $B_n$}\label{B}
We now move to the main content of this paper: extending the work done in types $A_{n-1}$ and $C_n$ to both types $B_n$ and $D_n$. The filling map naturally extends to all classical types; however, the corresponding inverse maps become more interesting as we progress to type $B_n$, and further still to type $D_n$. The changes in the inverse maps are direct consequences of differences in the structure of the corresponding KN columns, as well as differences in the quantum Bruhat graphs.
\subsection{The type $B_n$ Kirillov-Reshetikhin crystals}\label{type b kr section}
We begin by recalling the basic facts of the type $B_n$ root system. Similar to type $C_n$, we can identify the space $\mathfrak{h}^*_{\mathbb{R}}$ with $V:=\mathbb{R}^n$, with coordinate vectors $\varepsilon_1,\ldots,\varepsilon_n\in V$. The root system is $\Phi = \{\pm\varepsilon_i\pm\varepsilon_j \,:\, i\neq j,\, 1\leq i<j \leq n\}\cup\{\pm \varepsilon_i \,:\, 1\leq i \leq n\}$.
The Weyl group $W$ is the group of signed permutations $B_n$, which acts on $V$ by permuting the coordinates and changing their signs. We again note that a signed permutation is a bijection $w$ from $[\overline{n}]:=\{1<2<\ldots <n<\overline{n}<\overline{n-1}<\ldots <\overline{1}\}$ to $[\overline{n}]$ which satisfies $w(\overline{\imath}) = \overline{w(i)}$. Here, $\overline{\imath}$ is viewed as $-i$, so that $\overline{\overline{\imath}} = i$, and we can define $|i|$ and $sign(i)\in\{\pm 1\}$, for $i\in[\overline{n}]$, in the obvious way. We will use the so-called \textit{window notation} $w = w(1)w(2)\ldots w(n)$. For simplicity, given $1\leq i<j\leq n$, we denote by $(i,j)$ and $(i,\overline{\jmath})$ the roots $\varepsilon_i-\varepsilon_j$ and $\varepsilon_i+\varepsilon_j$, respectively; the corresponding reflections, denoted in the same way, are identified with the compositions of transpositions $t_{ij}t_{\overline{\jmath}\overline{\imath}}$ and $t_{i\overline{\jmath}}t_{j\overline{\imath}}$, respectively. Finally, we denote by $(i,\overline{\imath})$ the root $\varepsilon_i$ and the corresponding reflection, identified with the transposition $t_{i\overline{\imath}}$.
\vspace{12pt}
Recall from Section~\ref{KR section} that, given a fixed dominant weight $\lambda$, we can write
\[B^{\lambda'} = \bigotimes\limits_{i = 1}^{\lambda_1}B^{\lambda'_i,1},\]
where each $B^{\lambda'_i,1}$ is a column shape type $B_n$ Kirillov-Reshetikhin crystal. When viewed as a classical type crystal, we have \[B^{r,1}\cong B(\omega_r) \sqcup B(\omega_{r-2}) \sqcup B(\omega_{r-4})\sqcup\ldots\] where, as before, the elements of the set $B(\omega_k)$ are given by KN columns of height $k$. As in type $C_n$, the KN columns in type $B_n$ are allowed to contain both $i$ and $\overline{\imath}$ values; they may also contain the value $0$. This is addressed in the type $B_n$ splitting algorithm ``{\it split\_B}'' by adding the $0$ values in the column to the set $I$ (see Definition~\ref{type C splitting}), and then by proceeding as in type $C_n$ {\rm \cite{lecsbd}}. As in type $C_n$, it will be useful to realize the tensor factors $B^{k,1}$ in terms of split columns of height $k$.
\subsection{The quantum alcove model and filling map in type $B_n$}\label{type B alcove def section}
We now consider the specialization of the quantum alcove model to type $B_n$. For any $k = 1,\ldots,n$, we define the following (split) $\omega_k$-chain, denoted by $\Gamma^l(k)\Gamma^r(k)$ \cite{Lenart 2012}, similarly to type $C_n$, where:
\begin{equation} \Gamma^l(k):= \Gamma^{kk}\ldots \Gamma^{k1}, \hspace{8pt} \Gamma^r(k):=\Gamma^k\ldots \Gamma^2\,,\end{equation}
\vspace{-12pt}
\begin{equation*}
\begin{array}{lllll}
\;\;\;\;\;\;\;\;\;\;\,\Gamma^{ki}:=(
&\!\!\!\! (i,k+1),&(i,k+2),&\ldots,&(i,n)\,,\\
&\!\!\!\! (i,\overline{\imath})\,,\\
&\!\!\!\! (i,\overline{n}),&(i,\overline{n-1}),&\ldots,&(i,\overline{k+1})\,,\\
&\!\!\!\! (i,\overline{i-1}),&(i,\overline{i-2}),&\ldots,&(i,\overline{1})\:)\,,
\end{array}
\end{equation*}
\vspace{-9pt}
$$\!\!\!\!\!\Gamma^{i}:=((i,\overline{i-1}),(i,\overline{i-2}),\ldots,(i,\overline{1}))\,.$$
We will continue to refer to the four rows above in $\Gamma^{ki}$ as stages I, II, III, and IV, respectively (cf.\ Figure~\ref{stages_fig}).
We can construct a $\lambda$-chain as a concatenation $\Gamma:=\Gamma_{1}^l\Gamma_{1}^r\ldots \Gamma_{\lambda_1}^l\Gamma_{\lambda_1}^r$, where $\Gamma^l_i:=\Gamma^l(\lambda'_i)$ and $\Gamma^r_i:=\Gamma^r(\lambda'_i)$. We will use interchangeably the set of positions $J$ in the $\lambda$-chain $\Gamma$ and the sequence of roots $T$ in $\Gamma$ in those positions, which we call a \textit{folding sequence}. The factorization of $\Gamma$ with factors $\Gamma^l_i$, $\Gamma^r_i$ induces a factorization of $T$ with factors $T^l_i$, $T^r_i$. We use the same circle order $\prec_a$ on the set $[\overline{n}]$ as the one in type $C_n$.
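For later computations it is convenient to have the circle order in executable form. The sketch below assumes (as in type $C_n$) that $\prec_a$ traverses $1<2<\ldots<n<\overline{n}<\ldots<\overline{1}$ cyclically, starting at $a$, with $\overline{\imath}$ encoded as $-i$; the helper names are ours:

```python
# Sketch of the circle order <_a on [n-bar] = {1,...,n,-n,...,-1}, assumed to
# traverse 1 < 2 < ... < n < -n < ... < -1 cyclically, starting at a.
def linear_key(x, n):
    """Position of x in the linear order 1 < ... < n < -n < ... < -1."""
    return x if x > 0 else 2 * n + 1 + x

def circle_key(x, a, n):
    """Rank of x in the circle order starting at a (a itself has rank 0)."""
    return (linear_key(x, n) - linear_key(a, n)) % (2 * n)

def circle_min(vals, a, n):
    """Minimum of vals with respect to the circle order starting at a."""
    return min(vals, key=lambda x: circle_key(x, a, n))
```

For instance, with $n=3$ and $a=\overline{3}$ (encoded as $-3$), the circle-minimal element of $\{3,\overline{2},1\}$ is $\overline{2}$.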
The following are conditions on the quantum Bruhat graph of type $B_n$.
\begin{proposition}\label{type B bruhat conditions}{\rm \cite{Briggs}}
The quantum Bruhat graph of type $B_n$ has the following edges.
\begin{enumerate}
\item Given $1\leq i < j\leq n$, we have an edge $w\xrightarrow{(i,j)} w(i,j)$ if and only if there is no $k$ such that $i<k<j$ and $w(i)\prec w(k)\prec w(j)$.
\item Given $1\leq i<j\leq n$, we have an edge $w\xrightarrow{(i,\overline{\jmath})} w(i,\overline{\jmath})$ if and only if one of the following conditions holds:
\begin{enumerate}
\item $w(i)<w(\overline{\jmath})$, $sign(w(i))=sign(w(\overline{\jmath}))$, and there is no $k$ such that $i<k<\overline{\jmath}$ and $w(i)< w(k)< w(\overline{\jmath})$;
\item $sign(w(i))=-1$, $sign(w(\overline{\jmath}))=1$, and there is no $k$ such that $i<k\neq j< \overline{\jmath}$ and $w(i)\prec w(k)\prec w(\overline{\jmath})$.
\end{enumerate}
\item Given $1\leq i\leq n$, we have an edge $w\xrightarrow{(i,\overline{\imath})} w(i,\overline{\imath})$ if and only if:
\begin{enumerate}
\item $w(i)<w(\overline{\imath})$ and there is no $k$ such that $i<k<\overline{\imath}$ and $w(i)\prec w(k)\prec w(\overline{\imath})$;
\item or $w(\overline{\imath})<w(i)$ and $i=n$.
\end{enumerate}
\end{enumerate}
\end{proposition}
\begin{figure}[H]
\centering
\includegraphics[scale=.55]{Gamma_stages.png}
\caption{A visualization of the four stages of roots in $\Gamma^{ki}$.}\label{stages_fig}
\end{figure}
Note that there are two major differences from the type $C_n$ quantum Bruhat graph criterion.
\begin{enumerate}
\item Since the root $(n,\overline{n})$ is not in $\Gamma(k)$ for any $k<n$, we will never have the case $3b$. This means that we lose the ability to apply the transposition $t_{i\overline{\imath}}$ when $w(\overline{\imath})<w(i)$.
\item In return, we gain some extra transpositions through $2b$, as there are now cases where the quantum Bruhat graph criterion allows for an arrow $\xrightarrow{(i,\overline{\jmath})}$ when $w(i)>w(\overline{\jmath})$.
\end{enumerate}
We now provide a description of the filling map in type $B_n$. Note that it is the natural extension of the type $C_n$ construction.
\begin{definition}\label{deffillb}
Given a folding sequence $T$, we consider the signed permutations
$\pi^l_i:=T_{1}^lT_1^r\ldots T_{i-1}^lT_{i-1}^rT^l_i$, $\pi^r_i:=\pi^l_iT^r_i.$
Then the {\rm filling map} is the map ``$\mbox{fill\_B}$'' from folding sequences $T$ in $\mathcal{A}(\lambda)$ to fillings $\mbox{fill\_B}(T) = C^l_{1}C^r_{1}\ldots C^l_{\lambda_1}C^r_{\lambda_1}$ of the shape $2\lambda$, which are viewed as concatenations of columns; here $C^l_i:=\pi^l_i[1,\lambda'_i]$ and $C^r_i:=\pi^r_i[1,\lambda'_i]$, for $i=1,\ldots,\lambda_1.$
We then define $\mbox{sfill\_B}: \mathcal{A}(\lambda)\rightarrow B^{\lambda'}$ to be the composition ``$\mbox{sort}\circ \mbox{fill\_B}$'', where ``$\mbox{sort}$'' reorders the entries of each column increasingly; here we represent a KR crystal $B^{k,1}$ as a {\rm split} (also known as {\rm doubled}) KN column of height $k$, see Section~{\rm \ref{type b kr section}}.
\end{definition}
\subsection{The type $B_n$ inverse map }
The main process remains similar to that of the type $C_n$ inverse map. However, the differences in the type $B_n$ quantum Bruhat graph, $B^{r,1}$ Kirillov-Reshetikhin crystals, and KN columns add key new features. The fact that type $B_n$ columns can contain the value $0$ was handled in Section~\ref{type b kr section} with the revised algorithm ``{\it split\_B}''. We now address the difficulties associated with the QBG criterion and KR crystals.
\vspace{12pt} Recall that the KR crystals of column shape can be written as $B^{k,1}\cong B(\omega_k) \sqcup B(\omega_{k-2}) \sqcup B(\omega_{k-4})\sqcup\ldots$ where the elements of the set $B(\omega_r)$ are given by KN columns of height $r$. This means that each $B^{k,1}$ contains columns of height less than $k$. We need to extend them to full height $k$ so that the transpositions of the corresponding $\Gamma^l(k)\Gamma^r(k)$ may be correctly applied. The respective algorithm ``{\it extend}'' is given below.
\begin{algorithm}{\rm \cite{Briggs}}
Given a split column $(lC,rC)$ of length $1\leq r<n$ and $r\leq k<n$, append $\{\overline{\imath}_1<\ldots<\overline{\imath}_{k-r}\}$ to $lC$ and $\{i_1<\ldots<i_{k-r}\}$ to $rC$, where $i_1$ is the minimal value in $[\overline{n}]$ such that $i_1,\overline{\imath}_1\notin lC,rC$, and $i_t$ for $2\leq t\leq k-r$ is the minimal value in $[\overline{n}]$ such that $i_t,\overline{\imath}_t\notin lC,rC$ and $i_t>i_{t-1}$. Sort the extended columns increasingly. Let $(\widehat{lC},\widehat{rC})$ be the {\rm extended split column}.
\end{algorithm}
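The ``{\it extend}'' step admits a direct sketch in code. We again encode $\overline{\imath}$ as $-i$ and sort in the order $1<\ldots<n<\overline{n}<\ldots<\overline{1}$; this is an illustration under those assumptions, with helper names of our choosing:

```python
# Sketch of "extend": pad a split column (lC, rC) of height r up to height k
# by the k - r smallest absolute values missing from both columns, then sort
# in the order 1 < ... < n < -n < ... < -1 (i-bar encoded as -i).
def linear_key(x, n):
    return x if x > 0 else 2 * n + 1 + x

def extend(lC, rC, k, n):
    used = {abs(x) for x in lC} | {abs(x) for x in rC}
    # the k - r smallest values i with both i and i-bar absent from lC, rC
    new = [i for i in range(1, n + 1) if i not in used][: k - len(lC)]
    lC_hat = sorted(lC + [-i for i in new], key=lambda x: linear_key(x, n))
    rC_hat = sorted(rC + new, key=lambda x: linear_key(x, n))
    return lC_hat, rC_hat
```

On the example below (a split column of height $4$ in type $B_8$, extended to height $6$), this fragment returns the displayed extended split column $(\widehat{lC},\widehat{rC})$.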
\begin{example}
The following is a KN column $C$ of type $B_8$, together with its split columns, which are then extended to height $6$.
\[
C=\begin{array}{l}\tableau{{5}\\{0}\\{\overline{8}}\\{\overline{5}}} \end{array} \!\begin{array}{c} (lC,rC)= \end{array}\! \begin{array}{ll}\tableau{{ 4}\\{7}\\{ \overline{8}} \\ { \overline{5}}}\tableau{{ 5}\\{ \overline{8}}\\{ \overline{7}} \\ { \overline{4}}} \end{array}\begin{array}{c}(\widehat{lC},\widehat{rC})= \end{array} \begin{array}{ll}\tableau{{ 4}\\{7}\\{ \overline{8}} \\ { \overline{5}}\\ { \overline{2}}\\ { \overline{1}}}\tableau{{1}\\{2}\\{ 5}\\{ \overline{8}}\\{ \overline{7}} \\ { \overline{4}}} \end{array}
\]
\end{example}
Recall the type $B_n$ quantum Bruhat graph criterion (cf. Proposition~\ref{type B bruhat conditions}). The main differences from the type $C_n$ QBG are the loss of the ability to change negative entries to positive ones via the stage II roots $(i,\overline{\imath})$, and the gain of the ability to do so via the stage IV roots $(i,\overline{\jmath})$. At first glance, this does not seem to hinder the path algorithm $Path\_C$ from Section~\ref{inverse map type C section}: if such a sign change is necessary, it is merely postponed. However, while the $(i,\overline{\imath})$ root only changes the sign in position $i$, the $(i,\overline{\jmath})$ root changes the sign in position $j$ as well. This subtle difference in the quantum Bruhat criterion makes both the reorder and path algorithms from type $C_n$ fail in type $B_n$. We discuss two modifications to these algorithms, which depend on a certain pattern avoidance in two adjacent columns.
\begin{remark}\label{reason for mod remark} {\rm In types $A_{n-1}$ and $C_n$, the algorithm for forming the correct sequence of roots followed the rule that for the current word, $w$, and the root $(i,j)$, if $w(i)\prec wt_{ij}(i)\preceq C'(i)$, then add the root to the sequence, and otherwise do not and proceed. We will call a transposition following the above inequality a \textit{Path\_C transposition}. Further, if $w(i)\prec C'(i)\prec wt_{ij}(i)$, we will say that the transposition \textit{passes the target} (in row $i$). The original path algorithm has two underlying rules: one, we never apply a root which forces us to pass the target, and two, we always use a $Path\_C$ root. In type $B_n$ (and, as we will later see, in type $D_n$ as well) there are exceptions to these two rules. Both exceptions come directly from the need to avoid the following pattern in two adjacent columns. }
\end{remark}
\begin{definition}\label{block-off def} We say that columns $C = (l_1,l_2,...,l_k)$ and $C' = (r_1,r_2,...,r_k)$ are {\rm blocked off at $i$ by $b:=r_i$} if and only if the following hold:
\begin{enumerate}
\item $ |l_i| \leq b < n$, where $|l_i| = b$ if and only if $l_i = \overline{b}$;
\item $\{1,2,...,b\}\subset \{|l_1|,|l_2|,...,|l_i|\}$ and $\{1,2,...,b\}\subset \{|r_1|,|r_2|,...,|r_i|\}$;
\item $|\{j : 1\leq j\leq i, l_j<0, r_j> 0\}|$ is odd.
\end{enumerate}
\end{definition}
\begin{example}
The following columns $CC'$ of height 5 with entries from $[\overline{8}]$ are blocked off at $4$ by $3$:
$$
\begin{array}{l}\tableau{{1}&{1}\\{4}&{5}\\{\overline{2}}&{\overline{2}}\\{\overline{3}}&{3}\\{5}&{8}} \end{array}
$$
\end{example}
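The blocked-off pattern is mechanical to test. The following sketch gives our reading of the three conditions of Definition~\ref{block-off def} (with $\overline{\imath}$ encoded as $-i$); it reproduces the example above, where the columns are blocked off at $4$ by $3$ but not, say, at $2$:

```python
# Hedged sketch (our reading of the definition) of the blocked-off pattern:
# columns C, Cp of height k, entries in [n-bar] with i-bar encoded as -i,
# are blocked off at i by b := Cp(i).
def blocked_off_at(C, Cp, i, n):
    b, li = Cp[i - 1], C[i - 1]
    if not (0 < b < n):                              # condition 1 forces 0 < b < n
        return False
    if abs(li) > b or (abs(li) == b and li != -b):   # |l_i| <= b, equality iff l_i = b-bar
        return False
    need = set(range(1, b + 1))                      # condition 2: {1,...,b} in both prefixes
    if not (need <= {abs(x) for x in C[:i]} and need <= {abs(x) for x in Cp[:i]}):
        return False
    # condition 3: an odd number of rows j <= i with l_j < 0 and r_j > 0
    return sum(1 for l, r in zip(C[:i], Cp[:i]) if l < 0 and r > 0) % 2 == 1
```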
\vspace{12pt} We will find that if two columns (of length $k$) are blocked off at any $i\in [k-1]$, there will be no corresponding path in the quantum Bruhat graph between the columns. Therefore, given two columns $C$ and $C'$, we must not only avoid this pattern in the reordering of column $C'$, but also make sure that, at any point in the application of roots in $T$, we never force the current column to be blocked off with $C'$ at any $i\in [k-1]$ either. The latter part will give way to the $Path\_B$ algorithm.
We now define the type $B_n$ versions of the {\it reorder} and {\it path} algorithms. Let $b:=b^l_1b^r_1\ldots b^l_{\lambda_1}b^r_{\lambda_1}=b_1\ldots b_{2\lambda_1}$ be extended split columns indexing a vertex of the crystal $B^{\lambda'}$ of type $B_n$. Similarly, let $\Gamma:=\Gamma^l_1\Gamma^r_1\ldots \Gamma^l_{\lambda_1}\Gamma^r_{\lambda_1}=\Gamma_1\ldots \Gamma_{2\lambda_1}$. Algorithm~\ref{Mod-Reorder algorithm} takes $b,\Gamma$ as input and returns a reordered filling $C$ of a Young diagram of shape $2\lambda$.
\begin{algorithm}\label{Mod-Reorder algorithm}
(``ord\_B'')
let $C_1:=b_1$;
\hspace{6pt}for $i$ from $2$ to $2\lambda_1$ do
\hspace{12pt} for $j$ from $1$ to $\lambda'_i-1$ do
\hspace{18pt} let $C_i(j):=min_{\prec_{C_{i-1}(j)}}(b_i\setminus \{C_i(1),\ldots,C_i(j-1)\})$ so that $C_{i-1},C_{i}$ are not blocked off at $j$
\hspace{12pt} end do;
\hspace{12pt} let $C_i(\lambda'_i) := min_{\prec_{C_{i-1}(\lambda'_i)}}(b_i\setminus \{C_i(1),\ldots,C_i(\lambda'_i-1)\})$
\hspace{6pt} end do;
return $C:=C_1\ldots C_{2\lambda_1}=C^l_1C^r_1\ldots C^l_{\lambda_1}C^r_{\lambda_1}.$
\end{algorithm}
\begin{example}{\rm
Algorithm~\ref{Mod-Reorder algorithm} gives the filling $C$ from $b$ below. Note that Algorithm~\ref{Reorder algorithm} would have paired the $3$ with the $\overline{3}$ in the $4^{\rm{th}}$ row. However, this would cause the two columns to be blocked off at $4$ by $3$, so the modified algorithm skips to the next value and pairs the $8$ with the $\overline{3}$ instead.
$$b= \tableau{{1}\\{4}\\{\overline{2}}\\{\overline{3}}\\{5}} \tableau{{1}\\{3}\\{5}\\{8}\\{\overline{2}}} \xrightarrow{ord\_B} \tableau{{1}\\{4}\\{\overline{2}}\\{\overline{3}}\\{5}} \tableau{{1}\\{5}\\{\overline{2}}\\{8}\\{3}} =C$$
}\end{example}
The ``Path\_B'' algorithm (Algorithm~\ref{Mod-Greedy algorithm}) takes the reordered, extended, split filling $C=C_1\ldots C_{2\lambda_1}$ given by Algorithm~\ref{Mod-Reorder algorithm}, and outputs a sequence of roots \textit{Path\_B}$(C) = S\subset \Gamma$. We define $C_0$ to be the increasing column filled with $1,2,\ldots,n$. Note that the major difference between the $Path\_B$ and $Path\_C$ algorithms is the addition of blocked-off avoidance.
\begin{algorithm}\label{Mod-Greedy algorithm}
(``Path\_B'')
for $i$ from $1$ to $2\lambda_1$ do
\hspace{6pt} let $S_i:=\emptyset$, $A := C_{i-1}$;
\hspace{6pt} for $(l,m)$ in $\Gamma_i$ do
\hspace{12pt} if $(l,m)=(i,i+1)$ and $A,C_i$ are blocked off at $i$ by $C_i(i)$, then let $S_i:=S_i,(i,i+1)$, $A:=A(i,i+1)$;
\hspace{12pt} elsif $A(l)\neq C_i(l)$ and $A(l)\prec A(m)\prec C_i(l)$ and $A(l,m),C_i$ not blocked off at $l$ by $C_i(l)$, then let $S_i:=S_i,(l,m)$, $A:=A(l,m)$;
\hspace{12pt} end if;
\hspace{6pt} end do;
end do;
return $S := S_1\ldots S_{2\lambda_1}=S_1^lS_1^r\ldots S_{\lambda_1}^l S_{\lambda_1}^r$.
\end{algorithm}
\begin{example}{\rm
Consider the crystal $B^{(2,2)}$ of type $B_3$. Then $\lambda' = \lambda = (2,2)$ and $\Gamma = \Gamma(2)\Gamma(2)$. Suppose that we have $\widehat{rC_1}=\tableau{{\overline{3}}\\{\overline{2}}\\{1}}$ and $\widehat{lC_2}=\tableau{{1}\\{3}\\{2}}$. Algorithm~\ref{Mod-Greedy algorithm} produces the following subset of $\Gamma^l(2)=\{(2,3),(2,\overline{2}),(2,\overline{3}),(2,\overline{1}),(1,3),(1,\overline{1}),(1,\overline{3})\}$:
$$\begin{array}{l}\tableau{{\overline{3}}\\{\overline{2}}} \\ \\ \tableau{{ 1}} \end{array} \!
\begin{array}{c} \\ \xrightarrow{(2,3)} \end{array}\!
\begin{array}{l}\tableau{{ \overline{3}}\\{ 1} \\ \\ { \overline{2}}} \end{array} \begin{array}{c} \\ {\xrightarrow{(2,\overline{3})}} \end{array}\!
\begin{array}{l}\tableau{{ \overline{3}}\\{ 2} \\ \\ { \overline{1}}} \end{array}
\begin{array}{c} \\ {\xrightarrow{(2,\overline{1})}} \end{array}\!
\begin{array}{l}\tableau{{ \overline{2}}\\{ 3} \\ \\ { \overline{1}}} \end{array}
\begin{array}{c} \\ {\xrightarrow{(1,\overline{3})}} \end{array}\!
\begin{array}{l}\tableau{{1}\\{ 3} \\ \\ { 2}} \end{array}
\, .$$
Notice that Algorithm~\ref{Greedy algorithm} would have called for the use of $(1,3)$ instead of $(1,\overline{3})$. This would have caused the resulting word to be blocked off with $\widehat{lC_2}$ at $1$ by $1$ and thus the natural extension of algorithm $Path\_C$ to type $B_n$ would not have terminated correctly.
}\end{example}
\begin{theorem}\label{main theorem}
The map ``$\mbox{path\_B}\circ \mbox{ord\_B}\circ \mbox{extend}\circ \mbox{split\_B}$'' is the inverse of the type $B_n$ ``$sfill\_B$'' map.
\end{theorem}
The proof of Theorem~\ref{main theorem} is based on the following proposition. First, we consider the following conditions on a pair of adjacent columns $CC'$ in a filling of a Young diagram.
\begin{conditions}\label{nec conditions for B columns}
For any pair of indices $1 \leq i < l \leq k$,
\begin{enumerate}
\item $C(i)\neq C'(l)$;
\item $ C(i) \prec C'(l) \prec C'(i)$ only if the columns $C$ and $C't_{il}$ are blocked off at $i$ by $C'(l)$;
\item $CC'$ are not blocked off at $i$ by $C'(i)$ for any $1\leq i <k$.
\end{enumerate}
\end{conditions}
\begin{remark}\label{main theorem remark}
{\rm Let $b$ be a filling in $B^{\lambda'}$, represented with split columns (as a filling of the shape $2\lambda$). Then Algorithm~\ref{Mod-Reorder algorithm} constructs the unique filling $\sigma$ satisfying the following conditions: (i) the first column of $\sigma$ (of height $\lambda'_1$) is increasing, (ii) the adjacent columns of $\sigma$ satisfy Conditions~\ref{nec conditions for B columns}, and (iii) $sort(\sigma)=b$.}
\end{remark}
\begin{proposition}\label{main prop}
The restriction of the filling map $``fill\_B''$ to $\mathcal{A}(\lambda)$ is injective. The image of this restriction consists of all fillings $C^l_1 C^r_1 \hdots C^l_{\lambda_1} C^r_{\lambda_1}$ of the shape $2\lambda$ with integers in $[\overline{n}]$ satisfying the following conditions: $(i)$ $C^l_1$ is increasing; $(ii)$ $(ord\_B(C^l_j),ord\_B(C^r_j))=(lD^j,rD^j)$ for some KN column $D^j$, for all $j$; $(iii)$ any two adjacent columns satisfy Conditions~\ref{nec conditions for B columns}.
\end{proposition}
The proof of Proposition~\ref{main prop} is based on the following two results, which will be proved in Sections~\ref{R2nextL} and~\ref{L2R} respectively.
\vspace{12pt} \begin{proposition}\label{Total Path Prop}
Consider a signed permutation $u$ in $B_n$, the column $C:=u[1,k]$, and another column $C'$ of height $k$. The pair of columns $C'C$ satisfies Conditions~\ref{nec conditions for B columns} if and only if there is a path $u = u_0,u_1,\hdots,u_p=v$ in the corresponding quantum Bruhat graph such that $v[1,k] = C'$ and the edge labels form a subsequence of $\Gamma^l(k)$. Moreover, the mentioned path is unique, and we have $$C(i) = u_0(i)\preceq u_1(i)\preceq\hdots\preceq u_p(i) = C'(i), \text{ for } i = 1,\hdots,k.$$
\end{proposition}
To prove Proposition~\ref{Total Path Prop}, we will first determine the necessary conditions for a path in the quantum Bruhat graph to exist and then construct a path using a more detailed version of Algorithm~\ref{Mod-Greedy algorithm}.
\vspace{12pt} \begin{proposition}\label{SER prop}
Consider a signed permutation $u$ in $B_n$, the column $C:=u[1,k]$, and another column $C'$ of height $k$. The pair of columns $CC'$ satisfies Conditions~\ref{nec conditions for B columns} and $(ord\_B(C),ord\_B(C')) = (rD,lD)$ for some KN column $D$ if and only if there is a path $u=u_0,u_1,\hdots, u_p = v$ in the corresponding quantum Bruhat graph such that $v[1,k]=C'$ and the edge labels form a subsequence of $\Gamma^r(k)$. Moreover, the mentioned path is unique and, for each $i=1,\hdots, k$, we have the following weakly increasing sequence:
$$C(i) = u_0(i)\preceq u_1(i)\preceq \hdots \preceq u_p(i) = C'(i).$$
\end{proposition}
To prove Proposition~\ref{SER prop}, we will first classify the split, extended, and reordered KN columns, and then show that the resulting columns satisfy exactly the conditions necessary for constructing a path in the quantum Bruhat graph.
\begin{proof}(of Proposition~\ref{main prop})
Consider a filling $\sigma = C^l_1C^r_1\hdots C^l_{\lambda_1}C^r_{\lambda_1}$ satisfying conditions (i)-(iii), and let $u$ be the identity permutation. Apply Proposition~\ref{Total Path Prop} to $u$ and the column $C^l_1$; then set $u$ to the output signed permutation $v$, and apply Proposition~\ref{SER prop} to $u$ and the column $C^r_1$. Continue in this way, by considering the columns $C^l_2,C^r_2,\hdots ,C^l_{\lambda_1},C^r_{\lambda_1}$, while each time setting $u$ to be the previously obtained $v$. This procedure constructs the unique folding pair $(w,T)$ mapped to $\sigma$ by the filling map $fill\_B$. Vice versa, a similar application of the two propositions shows that any filling in the image of $fill\_B$ satisfies conditions (i)-(iii).
\end{proof}
\begin{proof}(of Theorem~\ref{main theorem})
This is now immediate, based on Remark~\ref{main theorem remark} and Proposition~\ref{main prop}.
\end{proof}
\section{Proof of Proposition~\ref{Total Path Prop}}\label{R2nextL}
\subsection{Necessary conditions for reordered columns}
Let $C$ and $C'$ be two columns of height $k$ with entries from $[\overline{n}]$, where each column cannot contain both $i$ and $\overline{\imath}$ for $1\leq i\leq n$. Recall Algorithm~\ref{Mod-Reorder algorithm} (the type $B_n$ reorder algorithm) and note that for each such pair of columns, it reorders the second based on the first in a way that is consistent with Conditions~\ref{nec conditions for B columns}. In this section, we show that the first two parts of Conditions~\ref{nec conditions for B columns} are necessary for the existence of a path between the two columns in the quantum Bruhat graph.
The necessity of the third condition will be established in Section~\ref{constructing segment of path in B}.
We begin by recalling Conditions~\ref{nec conditions for B columns} on adjacent columns $C C'$ where we now write the second condition in an equivalent way which will be more beneficial in this section.
\vspace{12pt}
For any pair of indices $1 \leq i < l \leq k$,
\begin{enumerate}
\item $C(i)\neq C'(l)$
\item the statement $ C(i) \prec C'(l) \prec C'(i)$ is false unless the following are true.
\begin{enumerate}
\item $\{1,2,\hdots,|C'(l)|\} \subseteq \{|C(j)|\}_{1\leq j \leq i}$ and $\{1,2,\hdots,|C'(l)|-1\} \subseteq \{|C'(j)|\}_{1\leq j \leq i-1}$
\item $ C'(l)>0$ and $|C(i)|\leq C'(l)$ with equality iff $C(i) = \overline{C'(l)}$,
\item and $p'$ is odd, where $p=\#\{j : 1\leq j\leq i-1, C(j)<0, C'(j)> 0\}$ and $p' = p+1$ if $C(i)<0$ and $p' = p$ otherwise.
\end{enumerate}
\item $C,C'$ are not blocked off at $i$ by $C'(i)$ for any $1\leq i <k$
\end{enumerate}
\vspace{12pt} The following lemma will show that for a given path in the quantum Bruhat graph, the only possible transposition in $T_i$ that can pass the target value $C'(i)$ is $(k,k+1)$. It also gives us the monotonicity of the values in position $i$ during the application of transpositions of $T_i$.
\begin{lemma}\label{row monotonicity}
For $T_i = (i,j_1)(i,j_2)\hdots (i,j_q)$, some $1\leq i \leq k$, let $u_0:=\pi_{i+1}$ and $ u_r := \pi_{i+1}t_{i,j_1}t_{i,j_2}\hdots t_{i,j_r}$ for $1\leq r\leq q$. Then $u_r(i)\prec u_{r+1}(i)\preceq C'(i)$ for $r=1,\hdots, q-1$, while for $r=0$ this fails only if $(i,j_1) = (k,k+1)$.
\end{lemma}
\begin{proof}
Suppose that there is some $0\leq r<q-1$ such that $u_r(i)\prec C'(i) \prec u_{r+1}(i)$. We first show that this cannot happen during the application of transpositions in $T_i$ for $i<k$ and then we show that it can only happen for $j_1 = k+1$ in $T_k$.
Let $i<k$. Then if we have $u_r(i)\prec C'(i) \prec u_{r+1}(i)$, the QBG criterion then gives that \[u_r(i)\prec C'(i) \prec u_{r+1}(i)\prec u_r(k) = C'(k)\prec u_r(i).\] Therefore some transposition $(i,j_{r'})$, for $r+1 < r' \leq q$ will transpose over the value $C'(k)$, breaking the QBG criterion.
Now let $i=k$. Consider the case where $j_{r+1}\neq k+1$. We will show that $u_r(i)\prec C'(i) \preceq u_{r+1}(i)$ cannot occur and the result follows. We break into two cases: when $j_{r+1} \neq \overline{k+1}$ and when $j_{r+1} = \overline{k+1}$.
\underline{Case 1:} consider when $j_{r+1} \neq \overline{k+1}$. Then by the QBG criterion, we have \[u_r(k)\prec C'(k)\prec u_{r+1}(k) \prec u_r(k+1)\prec u_r(k).\] However, the only way to transpose over the value in position $k+1$ is with the root $(k,\overline{k+1})$, after which there is no longer a root which can transpose over the value in position $j_r$.
\underline{Case 2:} consider when $j_{r+1} = \overline{k+1}$. Then there is still the conflict of transposing over the value in position $j_r$. Indeed, this could only be done with a Stage III transposition, of which $(k,\overline{k+1})$ is the last.
Therefore, the only transposition at which passing the target value $C'(i)$ can occur is the root $(k,k+1)$.
\end{proof}
\vspace{12pt} The next lemma shows us that if we have two columns $CC'$ satisfying Condition 2, we know exactly where in the application of roots in $T$ the current word's value in position $i$ becomes greater than $C'(l)$ in the circle order starting at $C(i)$.
\begin{lemma}\label{Lenart reorder lemma} Let $CC'$ be a pair of columns such that $a:=C(i)\prec b:=C'(l) \prec c:=C'(i)$ for some $1\leq i < l \leq k$. Then the reflection $(l,\overline{\imath})$ is in $T_l$. Furthermore, if $w$ is the word immediately prior to the application of $(l,\overline{\imath})$, then we have that $w(i)\prec C'(l)\prec C'(i)$ and $C'(l)\prec wt_{l,\overline{\imath}}(i)\prec C'(i)$.
\end{lemma}
\begin{proof}
Let us assume that we have $a:=C(i)\prec b:=C'(l) \prec c:=C'(i)$ for some $1\leq i < l \leq k$. We first show that there is a root with the said properties and then show that this root is exactly the root $(l,\overline{\imath})$.
First, note that we cannot have $C_l(i)\preceq b\prec c$, otherwise the entry in position $i$ of the signed permutation would then change from $C_l(i)$ to $C'(i)$ via reflections in $T_{l-1}T_{l-2}\hdots T_i$. One of these reflections would then transpose entries across $b$, violating the quantum Bruhat graph criterion. It must then be that $b \prec C_l(i)\preceq c$. This means that at some point in the process of applying to $u$ the reflections in $T_k,T_{k-1},\hdots , T_l$, we apply to the current signed permutation $w$ a reflection $(i,\overline{m})$ such that $a\prec a':= w(i)\prec b\prec c':= w(\overline{m})\prec c$. Let $(i_0,\overline{m}),\hdots , (i_p,\overline{m})$ be the segment of $T_m$ starting with $(i,\overline{m})$ and consisting of roots $(- , \overline{m})$, where $i = i_0 > i_1 > \hdots >i_p\geq 1$. Let $a_r:=w(i_r)$ for $0\leq r\leq p$.
We now show that $(i,\overline{m})$ is our desired root, or equivalently, that $m=l$. Since $wt_{i,\overline{m}}t_{i_1,\overline{m}}\hdots t_{i_j,\overline{m}}=\pi_m$, the quantum Bruhat criterion gives that $\pi_m[i_p,\overline{m}]$ cannot have any entries between $a_p$ and $c'$ except maybe $\overline{a_p}$ in the $m^{th}$ position of $C_m$. It now follows that since $i_p\leq i<l\leq m <\overline{m}$, we cannot have $a_p\prec C_m(l)\prec c'$ unless $m = l$ where $C_m(m) = \overline{a_p}$. Suppose that $m\neq l$. Then $C_m(l)\prec a_p\prec c'$. Note that by Lemma~\ref{row monotonicity}, $\overline{c'}\prec \overline{a'}=\overline{a_0}\prec \overline{a_1}\prec\hdots\prec \overline{a_p}$. This gives us that $a_p \prec a' \prec c'$ and so $C_m(l)\prec a_p \prec a'\prec b \prec c'$. Since $C'(l) = b$, there would then have to be some transposition in $T_{m-1},\hdots, T_1$ which would transpose two values over $a_p$ in position $\overline{m}$. This contradicts the quantum Bruhat criterion, and so it must have been that $\overline{a_p} = b$ and $m=l$.
\end{proof}
\begin{example}\label{Lenart reorder lemma example}
{\rm Consider the following quantum Bruhat graph path in $B_8$. Here $u[1,6]$ and $v[1,6]$ satisfy the proposed reorder criterion. Note that $u[4]\prec v[6]\prec v[4]$ and that $u[1,6]$, $vt_{4,6}[1,6]$ are blocked off at $4$ by $3$. We see not only that $t_{6,\overline{4}}$ is indeed in the path, but also that, while $w[4]\prec v[6]\prec v[4]$, the next transposition gives $wt_{6,\overline{4}}[4]\prec v[4]\prec v[6]$.
$$\begin{array}{l}{u} \\ \tableau{{1}\\{4}\\{ \overline{2}}\\{\overline{3}}\\{8}\\{7}}
\\ \\ \tableau{{5}\\{6}} \end{array}
\!\begin{array}{c} \\ \xrightarrow{(6,\overline{6})} \end{array}\!\begin{array}{l} \\ \tableau{{1}\\{4}\\{ \overline{2}}\\{\overline{3}}\\{8}\\{\overline{7}}}
\\ \\ \tableau{{5}\\{6}} \end{array}
\!\begin{array}{c} \\ \xrightarrow{(6,\overline{8})} \end{array}\! \begin{array}{l} \\ \tableau{{1}\\{4}\\{ \overline{2}}\\{\overline{3}}\\{8}\\{\overline{6}}}
\\ \\ \tableau{{5}\\{7}} \end{array}
\!\begin{array}{c} \\ \xrightarrow{(6,\overline{7})} \end{array}\! \begin{array}{l} {w} \\ \tableau{{1}\\{4}\\{ \overline{2}}\\{\overline{3}}\\{8}\\{\overline{5}}}
\\ \\ \tableau{{6}\\{7}} \end{array}
\!\begin{array}{c} \\ \xrightarrow{(6,\overline{4})} \end{array}\! \begin{array}{l} {wt_{6\overline{4}}} \\ \tableau{{1}\\{4}\\{ \overline{2}}\\{5}\\{8}\\{3}}
\\ \\ \tableau{{6}\\{7}} \end{array}
\!\begin{array}{c} \\ \xrightarrow{(4,7)} \end{array}\! \begin{array}{l} \\ \tableau{{1}\\{4}\\{ \overline{2}}\\{6}\\{8}\\{3}}
\\ \\ \tableau{{5}\\{7}} \end{array}
\!\begin{array}{c} \\ \xrightarrow{(2,7)} \end{array}\! \begin{array}{l} {v}\\ \tableau{{1}\\{5}\\{ \overline{2}}\\{6}\\{8}\\{3}}
\\ \\ \tableau{{4}\\{7}} \end{array}
\!
\,$$
}
\end{example}
\begin{remark}\label{C_m contains 1,...,b}
{\rm Let $CC'$ be a pair of columns such that $a:=C(i)\prec b:=C'(l) \prec c:=C'(i)$ for some $1\leq i < l \leq k$. We claim that $\{1,2,\hdots,|b|-1\}\subseteq\{|C_m[j]|\}_{1\leq j\leq i}$ and $\{1,2,\hdots,|b|-1\}\subseteq\{|C_{m+1}[j]|\}_{1\leq j\leq i}$. For the first, recall from the proof of Lemma~\ref{Lenart reorder lemma} that there cannot be any values between $\overline{b}$ and $c'$ in $\pi_m[i,\overline{m}]$. The same proof gave way to inequalities which tell us that $|b|<|c'|$. This means that there cannot be any values between $\overline{b}$ and $b$ in $\pi_m[i,\overline{m}]$. Since $i<m$, we have that there cannot be any values between $\overline{b}$ and $b$ in $\pi_m[i,\overline{i}]$. The first claim then follows. For the second, notice that the only difference between the content of $C_{m+1}[1,i]$ and $C_m[1,i]$ is the loss of $a_p=\overline{b}$.}
\end{remark}
\vspace{12pt}
The following lemma will be used to show the necessity of Condition 2c.
\begin{lemma}\label{pos order} Suppose that there is $1\leq i < l \leq k$ such that $C(i)\prec C'(l)\prec C'(i)$, and Conditions 2a,b are met. Then for all $1\leq j\leq i-1$, if $C(j),C'(j)>0$, then $C(j) < C'(j)$.
\end{lemma}
\begin{proof}
Suppose that for such a $j$, there is an $r\in\{1,\hdots,q\}$ such that $u_r(j)<0$. Recall that by the quantum Bruhat criterion, all upsteps in stages III and IV must maintain the same sign. This means that the transposition acting on the current word $w$ changing position $j$ from positive to negative must come from a stage I or II reflection, which can only happen in $T_j$ at $(j,s_0)$ for some $s_0\in\{k+1,\hdots,\overline{k+1}\}\cup\{\overline{j}\}$. Since Condition 2a holds, $wt_{j,s_0}(j)\notin [\overline{b}]$. Let the remaining stage I reflections be $(j,s_1),\hdots,(j,s_p)$ where $k+1\leq s_0 < \hdots <s_p\leq n$. Notice that $w[s_i]\notin [\overline{b}]$ for any $0\leq i\leq p$ by Condition 2a. This means that $b\prec w'(j)=wt_{j,s_0}\hdots t_{j,s_p}[j]\prec \overline{b}$. However, if $w'[j]\in \{b,\hdots,n\}$, then one of the reflections $(j,s_i)$ would have to transpose two values over $b$ in position $l$, contradicting the quantum Bruhat criterion. Thus $w'(j)\in\{\overline{n},\hdots,\overline{b+1}\}$. Note that we cannot apply the $(j,\overline{j})$ reflection at this time by the quantum Bruhat criterion, so we move on to stage III reflections. Again by Condition 2b, each stage III reflection can only change position $j$ to a value in $\{b+1,\hdots,\overline{b+1}\}$. However, no stage III reflection can take position $j$ to anything in $\{b+1,\hdots,w'[j]\}$ without contradicting either the quantum Bruhat criterion or Lemma~\ref{row monotonicity}. Thus if $w''$ is the word at the end of applying stage III transpositions, we have that $w''(j)\in\{\overline{n},\hdots,\overline{b+1}\}$. This means that there must be a stage IV reflection taking $w''(j)$ to a positive value. This cannot happen, as it would require a reflection to transpose over $\overline{b}$ in position $\overline{l}$, contradicting the quantum Bruhat graph criterion. Thus there is no such $u_r(j)<0$ and the lemma holds.
\end{proof}
\begin{remark}\label{One Downstep per row}
{\rm By Lemma~\ref{row monotonicity}, it is clear that for each $1\leq j < k$, there may only be at most one downstep during $T_j$. Otherwise we would inevitably pass over the target.
}
\end{remark}
\vspace{12pt} The following lemma was given by Lenart in~\cite{Lenart 2012} and is not type dependent, so we will use it freely.
\begin{lemma}\label{Lenart 7.1}
Fix $i$ and $l$ with $1\le i < l \leq k$. Let $a$ be the entry in position $i$ of the signed permutation obtained at some point in the process of applying to $u$ the reflections in $T_k,T_{k-1},\hdots, T_{l+1}$. Then either $a$ appears in $C_{l+1}[1,i]$ or $\overline{a}$ appears in $C_{l+1}[l+1,k]$.
\end{lemma}
\begin{proposition}\label{Reorder Nec}The pair of columns $CC'$ satisfy Conditions $1$ and $2$.\end{proposition}
\begin{proof} The proof of condition 1 is the same as in type $C_n$. Let us now assume that we have $a:=C(i)\prec b:=C'(l) \prec c:=C'(i)$ for some $1\leq i < l \leq k$. Let $a',c',m,$ and $w$ be as in Lemma~\ref{Lenart reorder lemma}.
\vspace{12pt} We first show that $\{1,\hdots,b-1\}\subseteq\{|C'[j]|\}_{1\leq j\leq i}$. It is equivalent to show that values in $[\overline{b-1}]$ are not in $\pi_1[i+1,n]$. This is true for positions $m$ through $k$, as $\pi_1[m,k] = \pi_m[m,k]$, which was shown in Remark~\ref{C_m contains 1,...,b} not to contain elements of $[\overline{b-1}]$. Suppose that $|\pi_1(r)|\in\{1,\hdots,b-1\} $ for some $r\in\{i,\hdots,m-1\}$. Then (from the proof of Lemma~\ref{Lenart reorder lemma}) $\pi_m(r)$ is between $c'$ and $\overline{b}$ and $C'(r)=\pi_1(r)$ is in either $\pi_m[1,i-1]$ or $\pi_m[\overline{i-1},\overline{1}]$ and by Lemma~\ref{Lenart 7.1}, necessarily the latter. But during the reflections in $T_r,\hdots,T_m$ there would have to be a reflection that transposes over $\overline{b}$ in position $\overline{m}$, contradicting the quantum Bruhat criterion. Now suppose that $|\pi_1(r)|\in[\overline{b-1}]$ for some $r\in\{k,\hdots,n,\overline{n},\hdots,\overline{k+1}\}$. But this would mean that some reflection in $T_{m-1},\hdots,T_1$ would have to transpose an entry over $b$ in position $m$, contradicting the quantum Bruhat criterion. Thus there are no elements of $[\overline{b-1}]$ in $\pi_1[i+1,\overline{i+1}]$ and so $\{1,\hdots,b-1\}\subseteq\{|C'[j]|\}_{1\leq j\leq i}$ as desired.
\vspace{12pt}
We now show that $\{1,\hdots,b\}\subseteq\{|C[j]|\}_{1\leq j\leq i}$. If not, then, given what we know about the structure of $\pi_{m+1}$ from Remark~\ref{C_m contains 1,...,b}, there must be a $t\in\{m+1,\hdots,k\}$ and $i_1'\in\{1,\hdots,i\}$ where the application of the reflection $(i_1',\overline{t})$ to the current word $w'$ transposes value $\overline{\gamma}=w'[\overline{t}]$ and $w'[i_1']$ where $\overline{b}\preceq\gamma\preceq b$.
Let $(i_1',\overline{t}),\hdots , (i_r',\overline{t})$ be the remainder of $T_t$ starting with $(i_1',\overline{t})$ and consisting of roots $(\cdot , \overline{t})$, where $ i_1' > i_2' > \hdots >i_r'\geq 1$.
We claim that $\overline{\mu}:=w't_{i_1',\overline{t}}\hdots t_{i_{r-1}',\overline{t}}[\overline{i_r'}]=w't_{i_1',\overline{t}}\hdots t_{i_r',\overline{t}}[t]=C_t[t]=C'[t]\in\{c',\hdots,\overline{c'}\}$, $c'>0$, and further that no values in $w'[i,\overline{t}]$ can lie between $\mu$ and $\overline{\gamma}$. This means that $w'[m]$ lies between $\overline{\gamma}$ and $\mu$. But, from the construction in the proof in Lemma~\ref{Lenart reorder lemma}, we had that $C_m[m] = \overline{c'}$, which would be impossible, as we would then have to traverse over $\overline{\mu}$ in position $t$ during one of the reflections in $T_{t-1},\hdots,T_{m+1}$, breaking the quantum Bruhat criterion. Therefore there was no such $\gamma$ and it must be that $\{1,\hdots,b\}\subseteq\{|C[j]|\}_{1\leq j\leq i}$.
We finish this part by proving the two claims. First, that $\overline{\mu}:=w't_{i_1',\overline{t}}\hdots t_{i_{r-1}',\overline{t}}[\overline{i_r'}]=w't_{i_1',\overline{t}}\hdots t_{i_r',\overline{t}}[t]=C_t[t]=C'[t]\in\{c',\hdots,\overline{c'}\}$. Recall from the construction in the proof of Lemma~\ref{Lenart reorder lemma} that the reflection $(m,\overline{i})$ will be used to transpose $a'$ and $c'$ in $w$ during the application of reflections in $T_m$, and so there are no values between $a'$ and $c'$ in $w[i,\overline{m}]$ and nothing between $\overline{c'}$ and $\overline{a'}$ in $w[m,\overline{i}]$. This means that all values in $w[m,\overline{m}]$ lie in $\{\overline{a'},\hdots,a'\}\cup\{c',\hdots,\overline{c'}\}$. Notice that $w[m+1,\overline{m+1}]=\pi_m[m+1,\overline{m+1}]=\pi_1[m+1,\overline{m+1}]$ and since $\overline{b}\leq a'<b$, we have that $a'\in [\overline{b}]$, so by the condition set on the position of the values on $\pi_m$ in Remark~\ref{C_m contains 1,...,b}, we get that values in $w[m+1,\overline{m+1}]$ must come from $\{c',\hdots,\overline{c'}\}$. Note that this forces $c'>0$. The claim follows, as $t\in\{m+1,\hdots,k\}$.
For the second part of the claim, notice that by the quantum Bruhat criterion, no values in $w'[i_1',\overline{t}]$ can lie between $w'[i_1']$ and $w'[\overline{t}]=\overline{\gamma}$. Similarly, no values in $w't_{t,\overline{i_1'}}[i_2',\overline{t}]$ can lie between $w'[i_2']$ and $w't_{t,\overline{i_1'}}[\overline{t}]=\overline{w'[i_1']}$, so no values in $w'[i,\overline{t}]$ can lie between $w'[i_2']$ and $\overline{\gamma}$. Continue the same way with the rest of the $(t,\cdot)$ transpositions and we see that no values in $w't_{i_1',\overline{t}}\hdots t_{i_r',\overline{t}}[i_r',\overline{t}]$ can lie between $w'[i_r']=\mu\in\{c',\hdots,c\}$ and $\overline{w'[i_{r-1}']}$, so no values in $w'[i,\overline{t}]$ can lie between $\mu$ and $\overline{\gamma}$.
\vspace{12pt} We now wish to show that the existence of $a:=C(i)\prec b:=C'(l) \prec c:=C'(i)$ implies Condition 2b. First, suppose that $b<0$. Recall that $\overline{b}\prec a'\prec b\prec c'$. We have already shown that $c'>0$ and so $c'<a'$ and $\overline{c'}>\overline{a'}$. Then the reflection $(m,\overline{\imath})$ which transposes $\overline{c'}$ with $\overline{a'}$ is a downstep and, by the quantum Bruhat criterion, it must be that $a'$ and $c'$ are of different sign. This means that $a'<0$. But then via Lemma~\ref{Lenart reorder lemma}, the reflections $(m,\overline{\imath_1}),\hdots(m,\overline{\imath_p})$ taking $\overline{a'}$ to $b$ in position $m$ must all be upsteps. The quantum Bruhat criterion then gives that each transposition must maintain the same sign, and so $sign(\overline{a'})=sign(b)$, a contradiction. Thus $b>0$.
It remains to be shown that $\overline{b}\preceq a\prec b$. We do so by showing that $a=a'$, for which this inequality already holds. Suppose that $a\neq a'$. Then $C[i]=a$ by assumption, and $a'=C[j]$ for some $j\in\{1,\hdots,i-1\}\cup\{\overline{i-1},\hdots,\overline{1}\}$ by Condition 2a. Then by Lemma~\ref{Lenart 7.1}, with $i=j$ and $l=m$, we get that $a'$ must appear in either $C_{m+1}[1,i]$ or $C_{m+1}[m+1,k]$. But by a claim in the proof of 2a, we saw that neither $a'$ nor $\overline{a'}$ can appear in the latter, and thus one of them must appear in the former. Since $j<i$, the transposition $(m,\overline{i})$ occurs before the transposition $(m,\overline{j})$, and so the transposition $(m,\overline{i})$ cannot transpose the values $\overline{c'}$ with $a'$ as it had originally been constructed, a contradiction. Thus $a=a'$ and Condition 2b holds.
\vspace{12pt} We now wish to show that the existence of $a:=C(i)\prec b:=C'(l) \prec c:=C'(i)$ implies Condition 2c. We first count the number of downsteps made with transpositions with values in positions $[1,i]$.
First we look at the number of downsteps in $T_k,\hdots, T_{l+1}$. Here, the downsteps involving positions $1,\hdots,i-1$ are only those of stage IV. By our construction thus far, we know that such a downstep transposes a negative value in $\{b+1,\hdots,\overline{b+1}\}$ with a positive value. This then transposes over $\overline{a}$ in position $\overline{i}$, so by the quantum Bruhat criterion, we know that no such downstep exists.
Second, we look at the number of downsteps in $T_l$. By Condition 2b, $C[l]\in\{b+1,\hdots,\overline{b+1}\}$, and by hypothesis, $C'[l]=b$. Thus there must be at least one downstep during $T_l$. From Remark~\ref{C_m contains 1,...,b} we have that $\{1,\hdots,b\}$ are in $\pi_l[1,i]$ or $\pi_l[\overline{i},\overline{1}]$. By Lemma~\ref{Lenart reorder lemma}, the downstep must be a stage IV move with a value in some position in $\{1,\hdots,i\}$. By Remark~\ref{One Downstep per row} there is at most one downstep in $T_l$, so there is exactly one downstep in $T_l$. We note here that the value in position $i$ will never become negative again, as the only means of doing so would be with the root $(i,\overline{i})$ in $T_i$, but this would pass the target $b>0$.
Third, we look at the downsteps in $T_{l-1},\hdots, T_{i}$. By an argument similar to that for $T_k,\hdots,T_{l+1}$, we see that any possible downstep with values in positions $1,\hdots, i$ would pass over $b$ in position $l$, breaking the quantum Bruhat criterion.
Finally, we look at the downsteps in $T_{i-1},\hdots,T_1$. Notice that there are no stage I or III downsteps without passing over $b$ in position $l$. There are no stage II downsteps at all in type $B_n$. So all downsteps are of stage IV. Each of these changes two positions in $1,\hdots,i$ from negative to positive. We note here that none of these positions can become negative again. Indeed, the only way for this to happen would be with a stage II upstep, but that root has already been passed in order to use the stage IV downstep.
We have seen that there are an even number of downsteps in positions $[1,i]$ contributed by $T_{i},\hdots,T_1$, one from $T_l$, and none from anywhere else. We noted along the way that none of these values will become negative again during the remainder of the transpositions. This along with Lemma~\ref{pos order} and Remark~\ref{One Downstep per row} gives that there is an odd number of such negative to positive pairs in the said positions of $C$ and $C'$, as desired.
\end{proof}
\subsection{Necessary conditions for the construction of a segment of the quantum Bruhat path}\label{Nec cond for construction}
The following proposition shows a partial result for Proposition~\ref{Total Path Prop}.
\vspace{12pt} \begin{proposition}\label{Path Prop}
Let $u,i,C,C'$ be as previously defined and assume that the pair of columns $CC'$ satisfies conditions 1 and 2. Assume also that $C[i+1,k]=C'[i+1,k]$ for some $i$ with $1\leq i\leq k$. Then there is a unique path $u = u_0,u_1,\hdots,u_q = v$ in the corresponding quantum Bruhat graph such that $v(i) = C'(i)$ and the edge labels form a subsequence of $\Gamma_{ki}$. Moreover, we have that if $u(l)\neq v(l)$, then $C(l)=u(l)\prec v(l)\preceq C'(l)$ for $l= 1,\hdots, i-1$.
\end{proposition}
\vspace{12pt} In this section, we will provide necessary conditions for the results of Proposition~\ref{Path Prop}. Assume for the moment that a path with the property stated in Proposition~\ref{Path Prop} exists. Let $T$ be the sequence of edge labels for this path. Recall that the sequence of roots $\Gamma_{ki}$ was split into four stages. We will factorize accordingly, giving $T=T_iT_{ii}T_{iii}T_{iv}$ and define $u_i:=uT_i$, $u_{ii}:=uT_iT_{ii}$, $u_{iii}:=uT_iT_{ii}T_{iii}$, and $u_{iv}:=uT=v$.
\vspace{12pt} The following lemmas give necessary conditions for the construction in the Proposition. They show that we will pass the target (cf. Remark~\ref{reason for mod remark}) only with the root $(k,k+1)$ exactly when the reordered columns $CC'$ are blocked off at $k$ and that we will skip a $Path\_C$ transposition exactly when the $Path\_C$ transposition would lead the resulting column to be blocked off with $C'$. We note here that these necessary conditions give the uniqueness of the path proposed in Proposition~\ref{Path Prop}.
\vspace{12pt} Due to the third part of the block off condition, we will often need to discuss the signs of values in the same row of two columns. For two columns $L$ and $R$, we will denote the signs of position $i$ of these columns by $sgn(l_i)sgn(r_i)$.
\begin{example}\label{+- example}
{\rm Consider the columns $C = (1,4,\overline{2},\overline{3},8,\overline{5})$ and $C' = (1,5,\overline{2},6,8,3)$. We then say that the values of $CC'$ in position $4$ are -+. We will commonly discuss how these sign pairs change after applying transpositions to the left word. We see that $Ct_{6\overline{4}}$ and $C'$ then have a ++ pair in position $4$.
$$\hspace{10pt} C \hspace{4pt} C' \hspace{20pt} Ct_{6\overline{4}} \hspace{4pt} C'$$
$$\begin{array}{ll} \tableau{{1}&{1}\\{4}&{5}\\{ \overline{2}}&{ \overline{2}}\\{\overline{3}}&{6}\\{8}&{8}\\{\overline{5}}&{3}} \end{array}
\! \begin{array}{c} \\ \rightarrow \end{array}\! \begin{array}{ll}\tableau{{1}&{1}\\{4}&{5}\\{ \overline{2}}&{ \overline{2}}\\{5}&{6}\\{8}&{8}\\{3}&{3}} \end{array}
\!
\,$$
}
\end{example}
We note that we can restate part 3 from Definition~\ref{block-off def} as \textit{there are an odd number of $-+$ pairs in positions 1 through $i$ in the columns $CC'$}.
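This parity count is easy to check mechanically. The following is a minimal sketch of our own (not code from the paper), with the encoding assumption that a barred entry $\overline{x}$ is stored as the negative integer $-x$; the columns are those of Example~\ref{+- example}.

```python
# Count the -+ sign pairs in positions 1..i of a pair of columns.
# Encoding assumption (ours): a barred entry x-bar is stored as -x.

def minus_plus_pairs(C, Cp, i):
    """Number of positions 1..i where C is barred (negative) and C' is unbarred."""
    return sum(1 for l in range(i) if C[l] < 0 and Cp[l] > 0)

# Columns from the example: C = (1,4,2bar,3bar,8,5bar), C' = (1,5,2bar,6,8,3).
C  = [1, 4, -2, -3, 8, -5]
Cp = [1, 5, -2, 6, 8, 3]
print(minus_plus_pairs(C, Cp, 6))  # positions 4 and 6 are -+, so this prints 2
```

Applying $t_{6\overline{4}}$ to $C$ turns both $-+$ pairs into $++$ pairs, so the count drops to $0$; this mirrors how a single stage IV downstep changes two sign pairs at once.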
\begin{lemma}\label{no path for block off} Suppose that $u[i+1,k] = C'[i+1,k]$ and $u,C'$ are blocked off at $i$ by $b:=C'(i)$. Then, if we never pass the target, there is no $T\subset \Gamma_{ki}$ such that $uT[1,k]=C'$.
\end{lemma}
\begin{proof} Let $u$ and $C'$ be as hypothesised. By the block off condition, either $u(i) = \overline{b}$ or $|u(i)|<b$. We first consider the case where $u(i) = \overline{b}$. Since our target value is in position $\overline{\imath}$, the only roots available to us are those in stage I and stage II. However, by the block off condition, there is no $k<l\leq n$ such that $\overline{b}\prec u(l)\prec b$, and the root $(i,\overline{\imath})$ does not follow the quantum Bruhat graph criterion, as $\overline{b}<0$. Since we do not pass the target, there is no path.
We now consider the case where $|u(i)|<b$. First note that there are an odd number of $-+$ pairs in $u[1,i]$, $C'[1,i]$ due to the block off condition. We claim that if there is some subchain $T_i\subset \Gamma_{ki}$ such that $uT_i(i)=b$, then there are still an odd number of $-+$ pairs between $uT_i[1,i-1]$ and $C'[1,i-1]$. Indeed, if $1\leq u(i)<b$, then by Lemma~\ref{row monotonicity}, only upsteps will be used in $T_i$ and so $sgn(u(l)) = sgn(uT_i(l))$ for all $1\leq l <i$. We also consider the case $\overline{b}\prec u(i)\preceq \overline{1}$. Here, there must be a downstep at some point in $T_i$. It cannot be in stage I or III, otherwise we would pass the target due to the block off condition. There are no stage II downsteps by the quantum Bruhat criterion. Thus the downstep must be in stage IV. This single downstep will not only change the values in position $i$ from $-+$ to $++$, but also a $-+$ to a $++$ in some position $1\leq l < i$. Therefore, regardless of the sign of $u(i)$, the number of $-+$ pairs in $uT_i[1,i-1]$ and $C'[1,i-1]$ will be odd.
The conclusion of the proof is done by showing that if there is a path $T\subset rev(\Gamma)$ such that $uT_iT = C'$, there must have been an even number of $-+$ pairs between $uT_i[1,i-1]$ and $C'[1,i-1]$. We will do so by first showing that no $++$ pairs will give way to $-+$ at any time during $T$ and further that all $-+$ pairs are only changed to $++$ pairs via stage IV downsteps, each of which turns two $-+$ pairs to $++$. For the remainder of the proof, let $u' = uT_i$ and note that $u'[i,k] = C'[i,k]$ and $u'(i) = b = C'(i)$.
We first consider some $1\leq l <i$ where $u'(l)>0$ and $C'(l)>0$ and will show that position $l$ will never be negative throughout the application of the remainder of the roots in $T$. First note that by the quantum Bruhat criterion, position $l$ remains positive throughout $T_{i-1}T_{i-2}\hdots T_{l+1}$. Let $s:=u'T_{i-1}T_{i-2}\hdots T_{l+1}(l)$. Then we have the following two cases: either $1\leq s \leq C'(l)$ or $C'(l)< s\leq n$. For the first case, we are done via Lemma~\ref{row monotonicity}. Now suppose that $C'(l)< s \leq n$. During stages I, II, and III of $T_l$, we can only transpose with values between $b$ and $\overline{b}$, due to the nature of the block off condition. This means that at some point during stage IV of $T_l$ two values would have to transpose over $\overline{b}$ in position $\overline{\imath}$, breaking the quantum Bruhat criterion. Therefore only the first case holds and no $++$ pair between $u'[1,i-1]$ and $C'[1,i-1]$ will ever give way to a $-+$ pair during the remainder of $T$.
Finally, we show that all $-+$ pairs in $u'[1,i-1]$ and $C'[1,i-1]$ can only become $++$ pairs via stage IV downsteps, effectively changing two such $-+$ pairs at a time. We now consider some $1\leq l <i$ such that $u'(l)<0$ and $C'(l)>0$ and again let $s = u'T_{i-1}T_{i-2}\hdots T_{l+1}(l)$. Suppose that during $T_{i-1}T_{i-2}\hdots T_{l+1}$ there are no stage IV downsteps using position $l$. We then have that $s$ is negative. Further, we note that $1\leq C'(l) <b$, otherwise there would come a point where we would need to transpose over $b$ in position $i$. Due to the nature of having been blocked off, nothing between $s$ and the target can be reached in stages I or III. As we do not pass the target, none of the roots in those positions will be used. Further, we cannot use a stage II move, as downsteps are not permitted here by the quantum Bruhat criterion. Therefore there must be a downstep in stage IV. This concludes the proof.
\end{proof}
\begin{remark}
{\rm Since we never pass the target except potentially at $(k,k+1)$, for a path to exist, two columns can never be blocked off with each other except possibly at $u[1,k],C'$. Further, Lemma~\ref{no path for block off} tells us that there may be times when the $Path\_C$ algorithm would call for a certain transposition, but it cannot be included in the path, as its use would cause the current column to be blocked off with $C'$. The following lemmas will show that block-off avoidance is the only time a path is allowed to skip a root called by the $Path\_C$ algorithm or pass the target with the root $(k,k+1)$.}
\end{remark}
\begin{lemma}\label{skip implies block off}Let the sequence $m_1,m_2,\hdots , m_r$ be such that $u t_{i,m_1} t_{i,m_2}\hdots t_{i,m_r}(i)=C'(i)$
and the roots $(i,m_l)$ form a path in the corresponding quantum Bruhat graph. Suppose that there is some
\\ $m_d\in \{k,\hdots,\overline{k},\overline{i},\hdots\overline{1}\}$, such that at the current word $w$, we have $w(i)\prec wt_{i,m_d}(i)\preceq C'(i)$, but $m_d$ is not in $\{m_1,m_2,\hdots,m_r\}$. Then $wt_{i,m_d}$ was blocked off at $i$ by $C'(i)$.
\end{lemma}
\begin{proof} In the hypothesised construction, at some point in the process of applying the transpositions $(i,m_l)$ to $u$, the entry in position $i$ has to change from a value $a$ to a value $b$ across the value $u(m_d)$ where $a\prec u(m_d)\prec b$. This violates the quantum Bruhat graph criterion in all but the following two special cases:
Case 1: $a<0$ and $b>0$, the current transposition $(i,m_j)$ is in $T_{iii}$, and $u(m_d)=\overline{b}$.
Case 2: $a<0$ and $b>0$, the current transposition $(i,m_j)$ is in $T_{iv}$, and $u(m_d)=\overline{a}$.
\vspace{12pt} We show that the lemma holds for case one and case two follows similarly. Let $w' := ut_{i,m_1}\hdots t_{i,m_{j-1}}$. Then $w't_{i,m_j}(i) = b$. We need $b=C'(i)>0$. Note that after stage II, the quantum Bruhat criterion no longer allows any positive to negative sign changes. Since $b>0$, it must be that the target value $t = C'(i)$ is positive as well. Lemma~\ref{row monotonicity} then gives that $1\leq b \leq t \leq n$, providing us with the desired $|u(m_d)|\leq t$ as $u(m_d) = \overline{b}$ by hypothesis.
We now show that $\{1,\hdots, t-1\}\subset\{|w(j)|\}_{1\leq j\leq i}$ and $\{1,\hdots, t\}\subset\{|C'(j)|\}_{1\leq j\leq i}$. First note that we cannot have $b < \overline{a} < t$. Otherwise there would have to be a root later in the sequence transposing two values $c$ and $d$ over $\overline{a}$ in position $\overline{m_j}$. Again, this is only possible if $d = \overline{\overline{a}} = a$, but that would mean that the transposition would pass the target. Lemma~\ref{row monotonicity} gave that this is only possible for the root $(k,k+1)$, but $m_j$ is in $T_{iii}$ and is therefore not $k+1$. So we now have
$$\overline{n}\preceq a \preceq \overline{b}\preceq\overline{1}\prec 1\preceq b\preceq t \preceq \overline{a}\preceq n.$$
By the quantum Bruhat criterion, there are no values between $a$ and $b$ in $w'[i,m_j]$ except $\overline{b}$ in position $\overline{m_j}=m_d$. Suppose that some value between $b$ and $t$ lies in $w'[i,m_j]$. Then for $w'(i)$ to have the value $a$, either we have $i=k$ and we passed the target during $(k,k+1)$, or we skipped a different $Path\_C$ step as well. The latter cannot be, as it would then require another downstep, of which there can only be one for each $T_i$. The former cannot be true either, as the passing of the target would lead to the need for an additional downstep as well.
Thus no values between $a$ and $t$ can lie in $w'[i,m_j]$ other than $b$ and $\overline{b}$. This means that no values in $[\overline{t}]-\{b,\overline{b}\}$ can lie in $w'[i,m_j]$. Since the root $(i,m_j)$ is in $T_{iii}$, we know that $w[1,i-1] = w'[1,i-1]$. Thus $\{1,\hdots,t-1\}\setminus\{b\}\subset\{|w(j)|\}_{1\leq j\leq i}$. We further note that during $T_{i-1},\hdots,T_1$ none of the values in $[\overline{b}]$ can be transposed out of positions $1$ through $i$ without passing over $t$ in position $i$. Thus $\{1,\hdots,t\}\subset\{|C'[j]|\}_{1\leq j\leq i}$ as desired.
We now show that $wt_{i,m_d}[1,i]$ and $C'[1,i]$ have an odd number of $-+$ pairs. Since we have already shown that conditions $2a$ and $2b$ hold and $1\leq b\leq t$, there must be an even number of $-+$ pairs between $w't_{i,m_j}$ and $C'$ in the first $i$ positions. Otherwise, the application of the root $(i,m_j)$ would cause $w't_{i,m_j}[1,k]$ to be blocked off with $C'$ at $i$ by $t$. This then means that there would have been an odd number of $-+$ pairs between $wt_{i,\overline{m_j}}$ and $C'$ in the first $i$ positions. Thus the use of the root $(i,m_d)$ would have caused the columns $wt_{i,m_d}[1,k]$ and $C'$ to be blocked off at $i$ by $t$.
\end{proof}
\begin{remark}\label{skip only in stage I}
{\rm The proof of Lemma~\ref{skip implies block off} also gives us that a skipped $Path\_C$ transposition may only occur during stage I.}
\end{remark}
\begin{lemma}\label{twice implies block off} Suppose that we have a path such that the root $(k,k+1)$ is used and its application to the word $u$ passes the target in position $k$. Then the columns $u[1,k]$ and $C'$ are blocked off at $k$ by $C'(k)$.\end{lemma}
\begin{proof}Note that this set up is equivalent to having some $u' = u(k,k+1)$ and skipping the root $(k,k+1)$ which would now be called by the $Path\_C$ algorithm. Then by Lemma~\ref{skip implies block off}, we know that $u'(k,k+1)[1,k]$ is blocked off with $C'$ at $k$ by $C'(k)$. The proof concludes with the realization that $u'(k,k+1) = u$.
\end{proof}
\begin{remark}\label{uniqueness of path}
{\rm The Lemmas~\ref{row monotonicity},~\ref{no path for block off},~\ref{skip implies block off}, and \ref{twice implies block off} give conditions that dictate a unique path.}
\end{remark}
\subsection{Constructing a segment of the quantum Bruhat path.}\label{constructing segment of path in B}
In this section we will provide an explicit algorithm for the unique path following the conditions set by Lemmas~\ref{row monotonicity},~\ref{no path for block off},~\ref{skip implies block off}, and~\ref{twice implies block off}. Further, we will show that this path is the desired path for Proposition~\ref{Path Prop}.
\vspace{12pt} First we distinguish the following two cases:
\begin{enumerate}
\item $C(i)\preceq C'(i) \prec \overline{C'(i)}$
\item $C(i)\preceq\overline{C'(i)}\prec C'(i)$.
\end{enumerate}
We will also need the following notation:
\begin{enumerate}
\item $M_I(u,i,C'):=max(\{u(i)\}\cup\{u(l): k +1\leq l\leq n, u(i)\prec u(l)\preceq C'(i)\})$
\item $M_{III}(u,i,C'):=max(\{\pm u(i)\}\cup\{u(l): k +1\leq l\leq \overline{k+1}, u(i)\prec u(l)\preceq C'(i)\})$
\end{enumerate}
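For readers who want to experiment with these maxima, the following sketch is our own (the function names and the encoding $\overline{x}\mapsto -x$ are assumptions, not the paper's notation). It computes $M_I(u,i,C')$ under the type-$B_n$ order $1\prec\hdots\prec n\prec\overline{n}\prec\hdots\prec\overline{1}$.

```python
# Hypothetical helpers (ours) for the order 1 < 2 < ... < n < n-bar < ... < 1-bar,
# with a barred entry x-bar encoded as -x.

def key(v, n):
    """Rank of a signed entry: positives first, then n-bar down to 1-bar."""
    return v if v > 0 else 2 * n + 1 + v  # v = -x gives rank 2n + 1 - x

def M_I(u, i, k, target, n):
    """max of {u(i)} and {u(l) : k+1 <= l <= n, u(i) < u(l) <= target}."""
    cands = [u[i - 1]]  # positions are 1-based, Python lists 0-based
    for l in range(k + 1, n + 1):
        v = u[l - 1]
        if key(u[i - 1], n) < key(v, n) <= key(target, n):
            cands.append(v)
    return max(cands, key=lambda v: key(v, n))
```

For example, with $n=4$, $k=2$, $u=(2,1,3,\overline{4})$, $i=2$, and target $C'(i)=3$, only the entry $3$ past position $k$ is admissible ($\overline{4}$ exceeds the target), so $M_I=3$. The definition of $M_{III}$ differs only in scanning positions $k+1,\hdots,\overline{k+1}$ and allowing $\pm u(i)$.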
\vspace{12pt} The following lemma gives a little insight into the nature of $M_I$. Its proof mirrors a similar lemma given by Lenart in~\cite{Lenart 2012} but caters to type $B_n$.
\vspace{12pt}\begin{lemma}\label{lenart lemma} Under the hypothesis of Proposition~\ref{Path Prop}, in Case 2 we have $$\overline{C'(i)}\preceq M_I(u,i,C')\preceq C'(i).$$\end{lemma}
\vspace{12pt} \begin{proof} Without loss of generality, assume $C'(i)>0$ and that $u(i)\neq C'(i)$. Let $a = C'(i)\in [n]$ and $A = \{\overline{a},\overline{a-1},\hdots,\overline{1},1,\hdots,a-1,a\}$. We need to show that $u[k+1,n]$ contains no elements in $A$, so assume the contrary. Suppose that $u[i+1,k]=C'[i+1,k]$ contains an element of $A$. Then by Conditions 1 and 2, $u(i)\in A$ and thus $\overline{C'(i)}\preceq u(i)\preceq M_I(u,i,C')\preceq C'(i)$. Now let $u[i+1,k]=C'[i+1,k]$ not contain any element of $A$. We conclude that $u[1,i-1]$ contains an element from each pair $\{x,\overline{x}\}$ of elements in $A$. If $u(i')\in A$ for $i'<i$, we say that $u(i')$ is matched with $C'(i')$. Since $C'(i) = a$, the only possible matches for $u[1,i-1]\cap A$ are elements in $A\setminus \{a,\overline{a}\}$, by the first part of the reorder condition. But these are too few to match $a$ elements, which is a contradiction.
\end{proof}
\begin{remark}\label{M_i M_iii order}
{\rm Since $u(i)\preceq M_{I}\preceq M_{III}\preceq C'(i)$, we then have that $\overline{C'(i)}\preceq M_{III}(u,i,C')\preceq C'(i)$.}
\end{remark}
\vspace{12pt} We now describe the algorithm that constructs the path in Proposition~\ref{Path Prop}. The algorithm inputs the signed permutation $u$, the target column $C'$, and the position $i$; it outputs the list of reflections $T$ determining the path and the permutation $v$. It calls on the algorithm \textit{is-blocked-off}, which inputs the first $i$ entries of the permutation $u$ and column $C'$ as columns and the value $i$; it then returns whether or not the given columns are blocked off at $i$ by $C'(i)$. Note that the following procedure is one iteration of Algorithm~\ref{Mod-Greedy algorithm} where we explicitly go through the four stages of $\Gamma_{ki}$.
\vspace{12pt}
procedure path-B(u,i,$C'$);
\tab Let $c:=C'(i)$;
\tab if $u(i)=c$ then return $\emptyset,u$;
\tab else
\tab\tab Let $S:=\emptyset$, $L:=(k+1,\hdots,n)$, $v:=u$;
\tab\tab if $i=k$ and is-blocked-off$(v[1,i],C'[1,i],i)$ then let $S:=S,(k,k+1)$, $v:=v(k,k+1)$, and $L:=L-(k+1)$;
\tab\tab end if;
\tab\tab for m in $L$;
\tab\tab\tab if $v(i)\prec v(m)\preceq c$ and not is-blocked-off$(v(i,m)[1,i],C'[1,i],i)$ then let $S:=S,(i,m)$, $v:=v(i,m)$;
\tab\tab\tab end if;
\tab\tab end for;
\tab\tab let $u_i:=v$; $T_i:=S$
\tab\tab if $sign(u_i(i))>0$ and $u_i(i)\prec \overline{u_i(i)}\preceq c$ then let $T_{ii}:=T_i,(i,\overline{\imath})$, $u_{ii}:=u_i(i,\overline{\imath})$;
\tab\tab else let $T_{ii}:=T_i$, $u_{ii}:=u_i$;
\tab\tab end if;
\tab\tab let $L:=(\overline{n},\hdots,\overline{k+1})$, $S:=T_{ii}$, $v:=u_{ii}$;
\tab\tab for $m$ in $L$;
\tab\tab\tab if $v(i)\prec v(m)\preceq c$ then let $S:=S,(i,m)$, $v:=v(i,m)$;
\tab\tab\tab end if;
\tab\tab end for;
\tab\tab Let $T_{iii}:=S$, $u_{iii}:=v$, $L:=(\overline{i-1},\hdots,\overline{1})$;
\tab\tab for $m$ in $L$;
\tab\tab\tab if $v(i)\prec v(m)\preceq c$ then let $S:=S,(i,m)$, $v:=v(i,m)$;
\tab\tab\tab end if;
\tab\tab end for;
\tab\tab return($S$,$v$);
\tab end if;
end.
\vspace{.5in}
is-blocked-off(u[1,i],$C'$[1,i],i);
\tab Let $a:= u(i)$, $b:=C'(i)$;
\tab if $a\neq b$ and $|a|\leq b$ and $b>0$
\tab\tab and $\{1,\hdots, b\}\subseteq |u[1,i]|$ and $\{1,\hdots, b\}\subseteq |C'[1,i]|$
\tab\tab and $\#\{1\leq l\leq i: u(l)<0,\ C'(l)>0\}\ \%\ 2 = 1$
\tab\tab then return TRUE;
\tab else return FALSE;
\tab end if;
end.
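For experimentation, here is a direct Python transliteration of the \textit{is-blocked-off} procedure above. It is our sketch, not code from the paper; barred entries are encoded as negative integers, and the mod-2 test in the pseudocode is read as restricted to positions $1$ through $i$.

```python
# Transliteration (ours) of the is-blocked-off pseudocode: u and Cp hold the
# first i signed entries, with a barred entry x-bar encoded as -x.

def is_blocked_off(u, Cp, i):
    a, b = u[i - 1], Cp[i - 1]
    # parts 1 and 2: a != b, |a| <= b, b unbarred, and {1,...,b} appears
    # (up to sign) among the first i entries of both columns
    if a == b or b <= 0 or abs(a) > b:
        return False
    required = set(range(1, b + 1))
    if not (required <= {abs(x) for x in u[:i]}
            and required <= {abs(x) for x in Cp[:i]}):
        return False
    # part 3: an odd number of -+ pairs in positions 1..i
    return sum(1 for l in range(i) if u[l] < 0 and Cp[l] > 0) % 2 == 1
```

For instance, $u[1,2]=(\overline{2},1)$ and $C'[1,2]=(1,2)$ are blocked off at $2$ by $2$ (one $-+$ pair), while $(2,1)$ and $(1,2)$ are not (zero $-+$ pairs).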
\begin{remark}\label{correct termination and uniqueness}
{\rm Condition $1$ gives that $C(l)\neq C'(i)$ for any $l<i$. This implies that $C'(i)$ appears in some position $q>k$ in $u$. Further, since two columns cannot be blocked off at $i$ with $C(i)=C'(i)$, there is no fear of the algorithm skipping the target value. This ensures that the algorithm terminates correctly. The algorithm also respects Lemmas~\ref{row monotonicity},~\ref{no path for block off},~\ref{skip implies block off}, and~\ref{twice implies block off} which together give the uniqueness of the path.}
\end{remark}
There are two parts to the quantum Bruhat criterion in type $B_n$: the sign criterion and the transposing-over-values criterion. The following will show that the constructed path follows both.
\begin{lemma}\label{u_i and M_i} If the skip $Path\_C$ step is not called in the algorithm, then $u_i(i)=M_I$. Otherwise, $u_{iii}(i) = \overline{M_I}$.\end{lemma}
\begin{proof} Suppose that $u_i(i)\neq M_I$. Then we have that $M_I\in u_i[k+1,n]$ and $u_i(i)\prec M_I\preceq C'(i)$. Note that $M_I\neq C'(i)$, as we know that the algorithm terminates correctly. The remaining strict inequality means that we skipped $M_I$ in the $Path\_C$ algorithm. By Lemma~\ref{skip implies block off}, this means that the act of transposing the value $M_I$ into position $i$ would have caused the current word to have been blocked off with $C'$ at $i$ by $C'(i)$. Recall from the construction in Lemma~\ref{skip implies block off} that we must have the following: if $M_I<0$, then $1\preceq \overline{M_I}\preceq u_{iii}(i)\prec t\preceq n$, and if $M_I>0$, then $\overline{n}\preceq \overline{M_I}\preceq u_{iii}(i)\prec M_I \prec t\prec n$. Further, by the block off condition imposed on the entries in positions $k+1$ through $\overline{k+1}$, we have in both cases that $\overline{M_I}\preceq u_{iii}(i)$ is really an equality and so $u_{iii}(i) = \overline{M_I}$.
\end{proof}
\begin{lemma}\label{sign criterion lemma}
Let $u,i,C'$ be inputs for the algorithm and $u_{ii}$ be as given in the algorithm. Then $\overline{n}\preceq u_{ii}(i)\preceq C'(i)$.
\end{lemma}
\begin{proof}
If the skip $Path\_C$ step was not called in the algorithm, then by Lemma~\ref{u_i and M_i} and Lemma~\ref{Lenart reorder lemma} we know that $u_i(i) = M_I \in [\overline{C'(i)},C'(i)]$. If $C'(i)>0$, the result follows. Otherwise, we may have that $1\preceq C'(i)\prec u_i(i)\prec \overline{n}$, in which case stage II of the algorithm would apply $(i,\overline{i})$ to $u_i$, giving $\overline{n}\preceq u_{ii}(i)\preceq C'(i)$.
If the skip $Path\_C$ step is called at some point in stage I of the algorithm, then we know that $C'(i)>0$ and $C'(i)\prec u(i)\prec \overline{C'(i)}$. Further, the only value from $[\overline{C'(i)}]$ in $u[k+1,n]$ is $M_I$. However, the algorithm skips $M_I$ and so we continue to have $C'(i)\prec u_i(i)\prec \overline{C'(i)}$. After stage II of the algorithm, we can refine this inequality to be $\overline{n}\preceq u_{ii}(i)\prec \overline{C'(i)}\prec C'(i)$.
\end{proof}
\begin{remark}\label{sign criterion remark}
{\rm There is no sign criterion for stage I transpositions and the criterion for stage II is built in to the algorithm. The fact that the procedure uses the $Path\_C$ algorithm in stages III and IV along with Lemma~\ref{sign criterion lemma} give that stage III and IV upsteps maintain the same sign and downsteps have a sign change of negative to positive. }
\end{remark}
\begin{remarks}\label{pass over criterion remark}
{\rm We now consider the criterion respecting the transposing of the value in position $i$ across the positions $i+1,\hdots,n,\overline{n},\hdots,\overline{k+1}$ and $\overline{i},\hdots,\overline{2}$.
\begin{enumerate}
\item For the positions $i+1,\hdots,k$, recall from the proof of Proposition~\ref{Reorder Nec} that even if there is some $i<l\leq k$ such that $C(i)<C'(l)<C'(i)$, we have that $C'(l)\prec u(i) = CT_kT_{k-1}\hdots T_{i+1}(i)\preceq C'(i)$. Therefore every value $x\in u[i+1,k]$ is such that $x\prec u(i)\prec C'(i)$. The criterion is then met by Lemma~\ref{row monotonicity}, giving the monotonicity of the path in position $i$ during the reflections in $T_i$.
\item If we do not skip a $Path\_C$ transposition in stage I, then the $Path\_C$ algorithm will be used throughout, and there is no fear of transposing a value in position $i$ over the positions $k+1,\hdots,n,\overline{n},\hdots,\overline{k+1}$ and $\overline{i},\hdots,\overline{2}$.
\item If we do skip a $Path\_C$ transposition in stage I, then we know from Lemma~\ref{u_i and M_i} that the skipped value was $M_I$. Further, the procedure will only skip a $Path\_C$ step if the corresponding transposition would result in the current word being blocked off with $C'$ at $i$ by $C'(i)$. This means that there are no values from $[\overline{C'(i)}]$, other than $M_I$ and $\overline{M_I}$, in positions $[k+1,\overline{k+1}]$ in $u,u_i,u_{ii}$ and $u_{iii}$. Thus the only value in positions $[k+1,\overline{k+1}]$ which would lead to possibly transposing over $M_I$ would be $\overline{M_I}$, but this is permitted by the quantum Bruhat criterion. Recall that $u_{iii}(i) = \overline{M_I}$. If $\overline{M_I}=M_{III}$, then the $Path\_C$ algorithm guarantees that the value in position $i$ will not transpose across positions $[\overline{i},\overline{2}]$ during stage IV transpositions. If $\overline{M_I}=\overline{M_{III}}$, then we note that transposing $\overline{M_I}$ across $M_I$ is permissible by the quantum Bruhat criterion, and thereafter the $Path\_C$ procedure will guarantee that the value in position $i$ will not transpose across positions $[\overline{i},\overline{2}]$ during the remainder of the stage IV transpositions.
\end{enumerate}
}
\end{remarks}
\vspace{12pt} The following lemma will further the results from Lemma~\ref{row monotonicity} to all of $T$ rather than just in $T_i$. The proof follows similarly to work in~\cite{Lenart 2012} with some additions concerning type $B_n$.
\begin{lemma}\label{StageIV Monotonicity}
If a path as hypothesised in Proposition~\ref{Path Prop} exists and $u(l)\neq v(l)$, then $C(l) = u(l)\prec v(l)\preceq C'(l)$ for $l = 1,\hdots,i-1$.
\end{lemma}
\begin{proof}
Suppose that this fails for some $l$ and let $l_1<i$ be the largest such $l$. Let $w$ be the signed permutation to which the reflection $(i,\overline{l_1})$ is applied in stage IV. Then we have that $$a:=w(i)\prec b:=\overline{C'(l_1)}\prec c_1:=\overline{w(l_1)}.$$ Now let $\widetilde{C}:=w[1,k]$ and $(i,\overline{l_1}),(i,\overline{l_2}),\hdots,(i,\overline{l_p})$ be the remainder of the stage IV roots in $T_i$ where $l_1>l_2>\hdots >l_p$. If $c_r:=\overline{w(l_r)}$, we then have that $a\prec b\prec c_1\prec c_2\prec \hdots\prec c_p$ by Lemma~\ref{row monotonicity}.
\vspace{12pt} By the quantum Bruhat criterion, there can be no values between $a$ and $c_p$ in $w[i,\overline{l_1}]$. Therefore, for any $x\in\{b,b+1,\hdots,c_p\},$ either $x$ or $\overline{x}$ is in $\widetilde{C}[1,i]$, say in position $j$. We claim that the possible values for $C'(j)$ are $\{\pm b, \pm (b+1), \hdots , \pm (c_p-1)\}$. Note then that overall we have too few choices for these positions, which gives a contradiction. We prove the claim in the following two cases.
\vspace{12pt}\hspace{12pt} Case 1: $\widetilde{C}(j)\in\{b,b+1,\hdots,c_p\}$. Since $w(l_p)=\overline{c_p}$, we know that $x$ is not $c_p$. Also, since $C'(i)=c_p$, we have that $C'(j)\in\{x,x+1,\hdots , c_p-1\}$ by Reorder condition $2$ unless $\widetilde{C}C't_{ij}$ is blocked off at $j$ by $c_p$. However, since $w$ is in stage IV of $T_i$, we know from Lemma~\ref{u_i and M_i} that $\overline{c_p}\prec a\prec c_p$, but $a = \widetilde{C}(i)$, so the blocked off condition cannot hold.
\vspace{12pt}\hspace{12pt} Case 2: $\widetilde{C}(j)\in\{\overline{c_p},\hdots,\overline{b}\}$. Then $j\leq l_1$, otherwise one of the roots $(i,\overline{l_r})$ in $T$ will transpose values over $x$ in position $\overline{j}$, breaking the quantum Bruhat criterion. Since $C'(l_1)=\overline{b}$, we can assume $j<l_1$. Reorder criterion 2 then gives that $C'(j)\in\{\overline{x},\hdots,\overline{b+1}\}$ unless $\widetilde{C}C't_{jl_1}$ is blocked off at $j$ by $\overline{b}$. But this cannot be, as $|\widetilde{C}(j)|>\overline{b}$ by assumption.
Thus the claim is proven and the lemma holds.
\end{proof}
\begin{proof}\textit{(Proof of Proposition~\ref{Path Prop})}
Assume that $u(i)\neq C'(i)$. Then, given Remarks~\ref{correct termination and uniqueness},~\ref{sign criterion remark}, and~\ref{pass over criterion remark}, the only fact left to prove is that the reflections in stage IV satisfy the condition in the corresponding quantum Bruhat graph criterion which refers to the transposition of the value in position $i$ across the positions $\overline{k},\hdots,\overline{i+1}$. Suppose that at some point in stage IV we apply to the current permutation $w$ a reflection $(i,\overline{l})$, with $l<i$, such that for some $j\in\{i+1,\hdots,k\}$ we have $w(i)\prec w(\overline{\jmath}) = \overline{C'(j)}\prec w(\overline{l})$ or equivalently,
$$w(l)\prec w(j) = C'(j)\prec w(\overline{\imath}).$$
Then by Lemma~\ref{StageIV Monotonicity}, we have that
$$C(l)\preceq w(l)\prec w(\overline{\imath})\preceq C'(l).$$ However Proposition~\ref{Reorder Nec} showed that we have $C(l)\prec C'(j)\prec C'(l)$ only if $CC't_{lj}$ is blocked off at $j$ by $C'(j)$. But from Lemma~\ref{Lenart reorder lemma}, we know that $C_j(j)\preceq C_j(l)\preceq C'(l)$ where we note that $C_j(j) = w(j) = C'(j)$. Lemma~\ref{StageIV Monotonicity} also gives that $C_j(l)\preceq w(l)\preceq C'(l)$. However we now have that $C_j(j) = w(j)\preceq C_j(l)\preceq w(l)\preceq C'(l)$ and $w(l)\prec w(j)\prec C'(l)$, a contradiction. Thus no such $j$ exists. This completes the proof that the algorithm constructs a path in the quantum Bruhat graph.
\end{proof}
Now that we have shown that there is a unique path determining each $T_i$ and that our algorithm follows that path, we can show that the third condition on two columns $CC'$ from Conditions~\ref{nec conditions for B columns} is indeed necessary to build the entire path given in Proposition~\ref{Total Path Prop}.
\begin{lemma}\label{nec condition 3}
The two columns $CC'$ are not blocked off at $i$ by $C'[i]$ for any $1\leq i <k$.
\end{lemma}
\begin{proof}
It suffices to show that if $CC'$ is blocked off at some $1\leq i <k$ by $b=C'[i]$, then $CT_kT_{k-1}\hdots T_{i+1}C'$ is blocked off at $i$ as well, in which case the result follows from Lemma~\ref{no path for block off}. Suppose there is such an occurrence. Then for some transposition $t_{l\overline{m}}\in T_l$ for $1\leq m\leq i<l\leq k$ and the current word $w$, we have that $wC' $ is blocked off at $i$ but $wt_{l\overline{m}}C'$ is not blocked off at $i$. Due to Lemma~\ref{Lenart 7.1}, we need only check that $t_{l\overline{m}}$ does not make it so that $wt_{l\overline{m}}[i]\in [b,\overline{b+1}]$ and that $w t_{l\overline{m}}[1,i]$ and $C'[1,i]$ do not have an even number of $-+$ pairs.
For the first, we suppose that $m=i$. Since $wC'$ is blocked off at $i$, it must be that $w[i],w[\overline{\imath}]\in [\overline{b},b]$ and $w[l], C'[l]\in [b+1,\overline{b+1}]$. By the monotonicity of the path given in Lemma~\ref{row monotonicity} we know that $w[l]\prec w[\overline{\imath}]\prec C'[l]$, but then $\overline{b}\preceq w[i]\prec b = C'[i] \prec \overline{w[l]}=wt_{l\overline{\imath}}[i]$, which contradicts Lemma~\ref{StageIV Monotonicity}. Thus $m\neq i$.
For the second, we suppose that $1\leq m <i$ and $t_{l\overline{m}}$ is a stage IV downstep. Then $w[l]<0$ and $wt_{l\overline{m}}[l]>0$. Lemma~\ref{row monotonicity} gives that $w[l]\prec wt_{l\overline{m}}[l]\prec C'[l]$, and since stage IV transpositions cannot change sign $+$ to $-$, it must be that $C'[l]>0$ as well. Therefore $w[l]\prec \overline{a} \prec C'[l]$, which means that at some point during $T_i$ we would need to transpose over $\overline{a}$ in position $\overline{i}$, breaking the quantum Bruhat criterion. Thus there is no such stage IV downstep.
\end{proof}
The procedure $Path\_B$ does not allow adjacent columns to be blocked off at $i$ while applying transpositions in $T_i$. The following lemma shows that these same transpositions never cause the columns to be blocked off at any other position either.
\vspace{25pt} \begin{lemma}\label{no block off above ith row}
Let $u = u_0,u_1,\hdots,u_q = v$ be a path as hypothesised in Proposition~\ref{Path Prop}. Then $u_j[1,l],C'[1,l]$ is not blocked off at $l$ by $C'(l)$ for any $0\leq j\leq q$ and $1\leq l \leq i$ except possibly for $u_0$ if $i=l=k$.
\end{lemma}
\begin{proof}
This is clearly true for $l=i$ by the procedure $Path\_B$. Further, this is true for $j=0$ by Conditions 1 and 2. Suppose that there is some $j>0$ and $l<i$ such that $u_j[1,l],C'[1,l]$ is blocked off at $l$ by $C'(l)$. Without loss of generality, let $j$ be the minimum and $l$ be the maximum of all such occurrences. This means that $u_{j-1}[1,l],C'[1,l]$ is not blocked off at $l$ by $C'(l)$ and $u_j = u_{j-1}t_{i\overline{m}}$ for some $1\leq m\leq l$. We will show that this implies that $u_{j}[1,i],C'[1,i]$ is blocked off at $i$ by $C'(i)$, contradicting the procedure $Path\_B$.
We split this into two cases: one where the transposition $t_{i\overline{m}}$ places a value between $\overline{b}$ and $b$ into position $m$ and a second where the transposition $t_{i\overline{m}}$ changes the number of $-+$ pairs in the first $l$ positions of $u_j,C'$ to be odd.
\vspace{12pt}Case 1: Here we have that $u_{j-1}[i]$ is in $[\overline{C'[l]},C'[l]]$, but $u_{j-1}[m]$ is not, and neither is $C'[i]$, since $u_j$ and $C'$ are blocked off at $l$ by $C'[l]$ by hypothesis. Also, by Lemma~\ref{row monotonicity} we have that $u_{j-1}[i]\prec \overline{u_{j-1}[m]}\preceq C'[i]$.
We first show that $C'[i]>0$ and that $|u_{j-1}[i]|\leq C'[i]$. Note that if $u_{j-1}[i]<0$, then $\overline{u_{j-1}[m]}>0$ since there are no stage IV sign preserving down steps. Further, this means that $C'[i]>0$ as well, since there are no stage IV sign changing up steps. Similarly, if $u_{j-1}[i]>0$, then $\overline{u_{j-1}[m]}, C'[i] >0$ since all stage IV up steps preserve sign. So either way, $C'[i]>0$ and it follows that $|u_{j-1}[i]|\leq C'[i]$ since $u_{j-1}[i]$ is in $ [\overline{C'[l]},C'[l]]$ but $C'[i]$ is not.
For the second part of the block off criterion, we note that there are no values between $u_{j-1}[i]$ and $C'[i]$ in $u_{j-1}[i+1,\overline{\imath +1}]$, otherwise the QBG criterion would be broken at some point during the application of transpositions in $T_i$. This, along with the fact that the values $\{1,\hdots, C'[l]\}\setminus \{u_{j-1}[i]\}$ are in $u_{j-1}[1,l]$ from the hypothesis, gives that $u_{j-1}[1,i]$ contains the values of $\{1,\hdots, C'[i]\}$ up to absolute value. Further, by Lemma~\ref{Lenart 7.1} during $T_i$ and the fact that no entry in $\{1,\hdots, C'[i]\}$ can be transposed over $C'[i]$ in position $i$ during $T_{i-1}\hdots T_{1}$, we have that $C'[1,i]$ contains $\{1,\hdots, C'[i]\}$ up to absolute value as well.
To show that $u_{j-1}$ and $C'$ are blocked off at $i$ by $C'[i]$, it only remains to be shown that the number of $-+$ pairs in the first $i$ positions of $u_{j-1}$ and $C'$ is odd. As a starting point, since $u_j$ and $C'$ are blocked off at $l$, it must be that there are an odd number of $-+$ pairs in the first $l$ positions of these two columns. We claim that there are no $-+$ pairs in $u_{j-1}[l+1,i-1]$ and $C'[l+1,i-1]$. Since stage IV moves preserve the signs in both positions $i$ and $m$ or change both from $-$ to $+$, we get that $u_{j-1}$ and $C'$ also have an odd number of $-+$ pairs in the first $i$ positions.
We conclude by proving the claim. Suppose that there is some $l<x<i$ such that $u_{j-1}[x]<0$ and $C'[x]>0$. First note that because the transposition $t_{i\overline{m}}$ followed the QBG criterion, it must be that $\overline{n}\preceq u_{j-1}[x]\prec u_{j-1}[m]$. Also, $C'[i]\prec \overline{u_{j-1}[x]}\preceq n$, otherwise we would transpose over $\overline{u_{j-1}[x]}$ in position $\overline{x}$ at some point during the transpositions in $T_i$ while placing $C'[i]$. But then $u_{j-1}[x]$ will have to transpose over $\overline{C'[i]}$ in position $\overline{\imath}$ at some point during the application of $T_{i-1}\hdots T_x$ while placing $C'[x]>0$ in position $x$. This contradicts the QBG criterion, and so no such $-+$ pair can exist.
\vspace{12pt} Case 2: Here we show that if the first two block off conditions hold for blocking off $u_{j-1}$ and $C'$ at $l$, then the transposition $t_{i\overline{m}}$ cannot change the number of $-+$ pairs in the first $l$ rows to be odd. Since there are no changes of positive to negative sign via a stage IV transposition, the change to an odd number of $-+$ pairs must be through the removal of a $-+$ pair in position $m$. This means that $u_{j-1}[i]<0$ and $u_{j}[i]>0$. By our starting assumption for this case, we know that $C'[l]>0$ and $u_{j-1}[l]$ is in $[\overline{C'[l]},C'[l]]$ but $u_{j-1}[i]<0$ and $u_{j}[i]>0$ are not. This means that when applying $t_{i\overline{m}}$, $u_{j-1}[i]$ transposes over $\overline{u_{j-1}[l]}$ in position $\overline{l}$, breaking the QBG criterion.
\end{proof}
\begin{proof} \textit{(of Proposition~\ref{main prop})}
The necessity of conditions $1,2,$ and $3$ was shown in Proposition~\ref{Reorder Nec} and Lemma~\ref{nec condition 3}. The uniqueness of the path and the monotonicity follow from Proposition~\ref{Path Prop}. To prove that the 3 conditions imply the existence of the chain, we iterate the construction in Proposition~\ref{Path Prop} using Algorithm $Path\_B$ for $i=k,k-1,\hdots,1$. It remains to be shown that if the 3 necessary conditions hold for $C_iC'$, then they hold for $C_{i-1}C'$ as well. We see that Lemma~\ref{StageIV Monotonicity} gives that the first condition holds, as well as that if $C_i[l_1] \prec C'[l_1]\prec C'[l_2]$ then $C_{i-1}[l_1] \prec C'[l_1]\prec C'[l_2]$ for $1\leq l_1< l_2\leq i$. Now, if there is $1\leq l_1< l_2\leq i$ such that $C_i[l_1] \prec C'[l_2]\prec C'[l_1]$, then we have that $C_i$ and $C't_{l_1l_2}$ are blocked off at $l_1$ by $C'[l_2]$ by hypothesis. Then if $C_{i-1}[l_1] \prec C'[l_2]\prec C'[l_1]$, it must be that $C_{i-1}$ and $C't_{l_1l_2}$ are blocked off at $l_1$ by $C'[l_2]$ by the proof of Lemma~\ref{nec condition 3}. To finish, we acknowledge that $C_{i-1}$ and $C'$ are not blocked off at any $l\leq i-1$ by Lemma~\ref{no block off above ith row}.
\end{proof}
\section{Proof of Proposition~\ref{SER prop}}\label{L2R}
\subsection{Classifying Split, Extended columns}
We now work towards building a subpath of $\Gamma_r(k)$ between the two height $k$ columns resulting from the split extension of a KN column of height some multiple of $2$ less than $k$. We do so by first classifying some properties of such pairs of columns, and then showing that these properties are sufficient to build a unique path in the QBG. For this part, we will follow the work of~\cite{Briggs}, where a subcase of such pairs of columns was classified.
\begin{conditions}\label{conditions SER}
{\rm
Consider the following conditions on a pair of columns $CC'$.
\begin{enumerate}
\item We have $\{|C(i)|: i = 1,\hdots, k\} = \{|C'(i)|: i = 1,\hdots, k\}$.
\item If $$int(C,C'):=\left(\bigcup\limits_{i=1}^{k}\{j\in [\overline{n}]:C(i)\prec j\prec C'(i)\}\right)\setminus \{\pm C(i): i= 1,\hdots,k\},$$ then we have $int(C,C') = \emptyset$.
\item If $C(i)$ and $C'(i)$ are the same sign, then $C(i)<C'(i)$. Additionally, there are an even number of entries where $C(i)$ is negative and $C'(i)$ is positive.
\item $CC'$ follow the Conditions~\ref{nec conditions for B columns}.
\end{enumerate}
}
\end{conditions}
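To make Conditions 1--3 concrete, the following sketch checks them on columns encoded as tuples of signed integers, with $\overline{k}$ stored as $-k$ and the alphabet ordered $1\prec\cdots\prec n\prec\overline{n}\prec\cdots\prec\overline{1}$. The encoding, and the reading of Condition 3 under which pairs with $C(i)=C'(i)$ are allowed, are our assumptions rather than statements from the text.

```python
def rank(x, n):
    # Position of x in the linear order 1 < ... < n < n-bar < ... < 1-bar,
    # where the barred value k-bar is encoded as the integer -k.
    return x - 1 if x > 0 else 2 * n + x

def interval_set(C, Cp, n):
    # int(C, C') from Condition 2: every alphabet value lying strictly
    # between C(i) and C'(i) for some i, minus {+-C(i) : i = 1, ..., k}.
    alphabet = list(range(1, n + 1)) + [-v for v in range(n, 0, -1)]
    excluded = set(C) | {-c for c in C}
    out = set()
    for c, cp in zip(C, Cp):
        for j in alphabet:
            if rank(c, n) < rank(j, n) < rank(cp, n) and j not in excluded:
                out.add(j)
    return out

def satisfies_conditions_1_to_3(C, Cp, n):
    cond1 = sorted(abs(c) for c in C) == sorted(abs(c) for c in Cp)
    cond2 = not interval_set(C, Cp, n)
    # Condition 3: same-sign pairs increase (equal pairs allowed), and the
    # number of -+ pairs (C(i) negative, C'(i) positive) is even.
    mono = all(c < cp for c, cp in zip(C, Cp)
               if (c > 0) == (cp > 0) and c != cp)
    minus_plus = sum(1 for c, cp in zip(C, Cp) if c < 0 and cp > 0)
    return cond1 and cond2 and mono and minus_plus % 2 == 0
```

For example, the pair $C=(\overline{1},\overline{2})$, $C'=(1,2)$ (two $-+$ pairs) passes, while $C=(1,2)$, $C'=(2,1)$ fails the same-sign monotonicity in row 2.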
We will first define an \textit{initial matching} on the set of columns $\widehat{KN}_k$ and then show that this matching satisfies Conditions 1,2, and 3. We will follow up with a reordering of this matching, and then show that these reordered matchings continue to follow the three conditions as well as Condition 4. Further, we will later find that these conditions are the necessary requirements to determine a segment of the QBG path.
\begin{definition}\label{initial matching def}{\rm \cite{Briggs}}
We define a pair of columns $BB'$ which we will call the {\rm initial matching}. Let $B:=\widehat{lA}$ and define $B'$ by matching each value in $B$ from top to bottom with a value in $\widehat{rA}$ as follows:
\begin{enumerate}
\item If $a$ was not involved with the splitting or extending process, match $a$ to itself.
\item If $a\in [n]$ is non-zero and required splitting, match $b$ to $a$ and $\overline{a}$ to $\overline{b}$ where $b\in [n]$ is the value used to split $a$.
\item If $a\in [n]$ is the result of a zero splitting, match $a$ with $\overline{a}$.
\item If $a\in [n]$ is a result of extending, match $\overline{a}$ with $a$.
\end{enumerate}
\end{definition}
Since the initial matching is a direct result of the splitting extending algorithms, we will refer to an initial matching and a split extended $KN$ column interchangeably.
\begin{lemma}\label{initial match has 1,2,3}{\rm \cite{Briggs}}
Any initial matching satisfies Conditions 1,2, and 3.
\end{lemma}
We now give an algorithm which takes an initial matching $BB'$ and produces what we will call a \textit{corrected matching} $CC'$, where $C$ is the increasing column of entries from $B$ and $C'$ is a reordering of $B'$.
\begin{definition}
Given a pair of columns $BB'$ defined by an initial matching, we produce a {\rm corrected matching} $CC'$ by the following algorithm:
Let $C:=B$ and $C': = B'$;
for $i$ from $1$ to $k-1$ do
\hspace{24pt} let $j\geq i$ be such that
$C'(j) = \min_{\prec_{C(j)}}\{C'(l):l=i,\hdots,k\}$;
\hspace{24pt} let $C' = C't_{ij}$;
end;
\end{definition}
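A mechanical sketch of this correction loop is below, using the same signed-integer encoding as before. The definition takes the minimum with respect to the circular order $\prec_{C(j)}$; as a simplifying assumption this sketch bases the circular order at $C(i)$, the row currently being filled, which is our reading and not verbatim from the definition.

```python
def rank(x, n):
    # Linear order 1 < ... < n < n-bar < ... < 1-bar; k-bar encoded as -k.
    return x - 1 if x > 0 else 2 * n + x

def circ_key(x, base, n):
    # Position of x in the circular order that starts just above `base`.
    return (rank(x, n) - rank(base, n) - 1) % (2 * n)

def corrected_matching(B, Bp, n):
    # Sketch of the correction algorithm: for each row i, pull the circularly
    # smallest remaining entry of B' (relative to C(i), our assumption) up
    # to row i by a transposition.
    C, Cp = list(B), list(Bp)
    for i in range(len(C) - 1):
        j = min(range(i, len(C)), key=lambda l: circ_key(Cp[l], C[i], n))
        Cp[i], Cp[j] = Cp[j], Cp[i]
    return C, Cp
```

For instance, with $B=(1,2)$ and $B'=(\overline{1},2)$ in rank $n=2$, the sketch swaps the two rows of $B'$, since $2$ precedes $\overline{1}$ in the circular order based at $1$.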
We also recall the conditions set on two columns in types $A_{n-1}$ and $C_n$.
\begin{definition}
We will refer to the following two conditions as {\rm Conditions $4'$}. For any pair of indices $1 \leq i < l \leq k$,
\begin{enumerate}
\item $C(i)\neq C'(l)$
\item and the statement $C(i) \prec C'(l) \prec C'(i)$ is false
\end{enumerate}
\end{definition}
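Conditions $4'$ are directly checkable. The sketch below reads $\prec$ as the linear order $1\prec\cdots\prec n\prec\overline{n}\prec\cdots\prec\overline{1}$ on the signed alphabet (our assumption), with $\overline{k}$ encoded as $-k$.

```python
def satisfies_conditions_4prime(C, Cp, n):
    # Conditions 4': for all 1 <= i < l <= k,
    #   (1) C(i) != C'(l), and
    #   (2) the chain C(i) < C'(l) < C'(i) is false,
    # where < is taken (assumption) as the linear order
    # 1 < ... < n < n-bar < ... < 1-bar, with k-bar encoded as -k.
    def rank(x):
        return x - 1 if x > 0 else 2 * n + x
    k = len(C)
    for i in range(k):
        for l in range(i + 1, k):
            if C[i] == Cp[l]:
                return False
            if rank(C[i]) < rank(Cp[l]) < rank(Cp[i]):
                return False
    return True
```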
\begin{lemma}\label{corrected matching}
Any corrected matching $CC'$ satisfies Conditions 1,2, 3, and 4.
\end{lemma}
\begin{proof}
It was shown in~\cite{Briggs} that the corrected matching $CC'$ satisfies Conditions 1,2,3, and 4'. It remains to be shown that $4$ holds. Note that $4'$ implies parts 1 and 2 of Condition 4. We finish by showing that the corrected matching $CC'$ is not blocked off at any $1\leq i \leq k$. Suppose that there is such an $i$ where $CC'$ is blocked off. Since there must be an odd number of $-+$ pairs in the first $i$ rows, there must be at least one negative value in $C$ in a position less than or equal to $i$. Since $C$ is increasing, it must be that $i=k$. If not, then $|C[k]|<|C[i]|\leq C'[i]$, which contradicts the block off assumption. However, $CC'$ cannot be blocked off at $k$ because Condition 3 gives that there are an even number of $-+$ pairs.
\end{proof}
We now have that there is a matching which follows the four conditions for when $C$ is ordered increasingly. Next, we show that there is a matching that follows the four conditions for any reordering of $C$.
\begin{definition}\label{reorder matching}
Given a corrected matching $CC'$ and $\sigma\in S_k$, we produce a {\rm reordered matching} $DD'$ by the following algorithm:
Let $D:=C\sigma$ and $D': = C'\sigma$;
for $i$ from $1$ to $k-1$ do
\hspace{24pt} let $j\geq i$ be such that
$D'(j) = \min_{\prec_{D(j)}}\{D'(l):l=i,\hdots,k\hspace{4pt}\text{such that}\hspace{4pt}DD't_{il}\hspace{4pt}\text{is not blocked off at}\hspace{4pt} i \}$;
\hspace{24pt} let $D' = D't_{ij}$;
end;
\vspace{12pt} We will refer to the result of each $i^{th}$ iteration of the algorithm by $DD'_i$. We let $CC' = DD'_0$.
\end{definition}
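The reorder algorithm is the correction loop with block-off candidates skipped. The sketch below makes this concrete under two loudly flagged assumptions: the circular order is again based at $D(i)$ (as in the previous sketch), and `blocked_off` is our reconstruction of the type-$B_n$ "blocked off at $i$ by $b=D'(i)$" test, pieced together from the lemmas in this section rather than quoted from its formal statement.

```python
def rank(x, n):
    # Linear order 1 < ... < n < n-bar < ... < 1-bar; k-bar encoded as -k.
    return x - 1 if x > 0 else 2 * n + x

def circ_key(x, base, n):
    # Position of x in the circular order starting just above `base`.
    return (rank(x, n) - rank(base, n) - 1) % (2 * n)

def blocked_off(D, Dp, i, n):
    # ASSUMED reconstruction (not verbatim from the text): b = D'(i) > 0,
    # |D(i)| <= b, both columns contain {1, ..., b} up to absolute value in
    # rows 1..i, and the number of -+ pairs in rows 1..i is odd.
    b = Dp[i]
    if b <= 0 or abs(D[i]) > b:
        return False
    need = set(range(1, b + 1))
    if not (need <= {abs(x) for x in D[:i + 1]}
            and need <= {abs(x) for x in Dp[:i + 1]}):
        return False
    return sum(1 for x, y in zip(D[:i + 1], Dp[:i + 1])
               if x < 0 and y > 0) % 2 == 1

def reordered_matching(C, Cp, sigma, n):
    # Permute both columns by sigma (a permutation of 0..k-1), then repeat
    # the correction step while skipping candidates that would block off.
    D = [C[s] for s in sigma]
    Dp = [Cp[s] for s in sigma]
    k = len(D)
    for i in range(k - 1):
        cands = [l for l in range(i, k)
                 if not blocked_off(D, Dp[:i] + [Dp[l]] + Dp[i + 1:], i, n)]
        j = min(cands, key=lambda l: circ_key(Dp[l], D[i], n))
        Dp[i], Dp[j] = Dp[j], Dp[i]
    return D, Dp
```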
\begin{remark}\label{int at i def}
{\rm For the following lemmas, we consider an equivalent version of condition $2$. For any two columns $CC'$, let $$int_i(C,C'):= \{j\in [\overline{n}]: C(i)\prec j\prec C'(i)\}\cap\{\pm C(i):i=k+1,\hdots,n\}. $$ Then $int(C,C') = \bigcup\limits_{i=1}^k int_i(C,C')$.}
\end{remark}
The following lemma shows that transpositions in the reorder algorithm which do not result from block-off avoidance maintain conditions 1, 2 and 3. The method of proof follows techniques used in~\cite{Briggs}, which pertained to a subset of columns where the block-off condition does not appear.
\begin{lemma}\label{not block off avoiding reorder step}
Let $DD'_{i-1}$ be the result of applying ${i-1}$ iterations of the algorithm in~\ref{reorder matching} to corrected columns $CC'$ and suppose that the $i^{th}$ iteration of the algorithm, which calls for some transposition $t_{ij}$, does not result from a block off avoidance. If $int_i(D,D'_{i-1})=int_j(D,D'_{i-1})=\emptyset$ and condition 3 holds for $DD'_{i-1}$, then $int_i(D,D'_{i})=int_j(D,D'_{i})=\emptyset$ and condition 3 holds for $DD'_{i}$.
\end{lemma}
\begin{proof}
We begin by noting that if $i=j$, then the result follows, as we then have $DD'_{i-1}=DD'_{i}$. Now assume that $j>i$. Since the algorithm did not choose $D'_{i-1}[j]$ as an act of block-off avoidance, we have that $D[i]\prec D'_{i-1}[j]\prec D'_{i-1}[i]$. There are then three places where $D[j]$ could lie within this ordering. Notice that we cannot have $D[i]\prec D'_{i-1}[j]\prec D[j]\prec D'_{i-1}[i]$, as it would break the hypothesis that $int_i(D,D'_{i-1})=int_j(D,D'_{i-1})=\emptyset$ because $k<n$. It must then be that we have either $ D[j]\prec D[i]\prec D'_{i-1}[j]\prec D'_{i-1}[i]$ or $D[i]\prec D[j]\prec D'_{i-1}[j]\prec D'_{i-1}[i]$, which we will refer to as case $I$ and case $II$ respectively. Notice that in both cases, we have that $int_{i}(D,D'_{i}), int_j(D,D'_{i})\subseteq int_i(D,D'_{i-1})\bigcup int_j(D,D'_{i-1})=\emptyset $. It remains to be shown that condition 3 holds for $DD'_{i}$. This clearly holds for pairs in positions other than $i$ and $j$, as they are unchanged in this stage of the algorithm and it was assumed that condition 3 held for $DD'_{i-1}$.
\vspace{12pt} We now show the monotonicity in matched pairs of the same sign and first check for position $i$. Since $k<n$ and $int_i(D,D'_{i})=\emptyset$, there is some $a\in [n]$ such that $a,\overline{a}\notin [D[i],D'_{i}[i]]$. Suppose that $sgn(D[i]) = sgn(D'_{i}[i])$. If condition 3 does not hold at $i$, we have that $D'_{i}[i]<D[i]$. If $D[i]\in [n]$, then we must have that $D[i]\prec \overline{a}\prec D'_{i}[i]$, a contradiction to our choice of $a$. Similarly, if $D[i]\in [\overline{n}]\setminus [n]$, then $D[i]\prec a \prec D'_{i}[i]$, again contradicting our choice of $a$. Thus $D[i]<D'_{i}[i]$ as desired. The proof for position $j$ follows similarly.
\vspace{12pt} We now show that the number of $-+$ pairs remains unchanged after the application of $t_{ij}$ to $D'$. As before, since $k<n$ and $int_i(D,D'_{i-1})\bigcup int_j(D,D'_{i-1})=\emptyset$, there is some $a\in [n]$ such that $a,\overline{a}\notin [D[i],D'_{i-1}[i]]$ and $a,\overline{a}\notin [D[j],D'_{i-1}[j]]$. This means that the progression through the four values in the orderings of case $I$ and case $II$ above never passes over the values $a$ or $\overline{a}$, and so the sign can only change once (either positive to negative or negative to positive). If we consider all eight configurations of four $+$ and $-$ values ordered with only one sign change (for example, $+++-$ or $--++$) and compare them to the inequalities of case $I$ and case $II$, we see that in both cases the number of $-+$ pairs is preserved.
\end{proof}
\begin{corollary}\label{non block off avoidance preserves two and three}
Let $DD'_{i-1}$ be the result of applying $i-1$ iterations of the algorithm in~\ref{reorder matching} to corrected columns $CC'$ and suppose that the $i^{th}$ iteration of the algorithm, which calls for some transposition $t_{ij}$, does not result from a block off avoidance. If conditions 2 and 3 hold for $DD'_{i-1}$, then conditions 2 and 3 hold for $DD'_{i}$.
\end{corollary}
\begin{lemma}\label{block off avoiding reorder step}
Let $DD'_{i-1}$ be the result of applying $i-1$ iterations of the algorithm in~\ref{reorder matching} to corrected columns $CC'$ and suppose that the $i^{th}$ iteration of the algorithm, which calls for some transposition $t_{ij}$, results from a block off avoidance. If conditions 2 and 3 hold for $DD'_{i-1}$, then conditions 2 and 3 hold for $DD'_{\hat{\jmath}}$ for some $i<\hat{\jmath}\leq j$.
\end{lemma}
\begin{proof}
The case where $i=j$ is trivial, so suppose that $j>i$. Since the algorithm chose $D_{i-1}'[j]$ from a block-off avoidance, we have that $D[i]\prec D'_{i-1}[i]\prec D'_{i-1}[j]$. There are then three places where $D[j]$ could lie within this ordering. If we have $D[j]\prec D[i]\prec D'_{i-1}[i]\prec D'_{i-1}[j]$ or $D[i]\prec D[j]\prec D'_{i-1}[i]\prec D'_{i-1}[j]$, then we again have that $int_i(D,D'_{i}), int_j(D,D'_{i})\subseteq int_i(D,D'_{i-1})\bigcup int_j(D,D'_{i-1})=\emptyset $ and so condition 2 still holds. Condition 3 then holds via similar argumentation to that in the proof of Lemma~\ref{not block off avoiding reorder step}.
\vspace{12pt} Unfortunately, if we have $D[i]\prec D'_i[i]\prec D[j]\prec D'_i[j]$, we are not guaranteed that $int_j(D,D'_{i})=\emptyset$ or the monotonicity of values in row $j$ of $DD'_{i}$. For this case, we will first show that the parity condition of $-+$ pairs still holds for $DD'_{i}$ and that the monotonicity of same sign row entries holds for all rows other than $j$. Then we use Lemma~\ref{not block off avoiding reorder step} and the fact that the transposition is avoiding a block off configuration to then show that by the end of the $j^{th}$ iteration of the algorithm, conditions 2 and 3 are restored.
\vspace{12pt} We first note that $DD'_{i-1}$ must have been blocked off at $i$ by $D'_{i-1}[i]$. If not, the $i^{th}$ iteration of the algorithm would have called for $t_{ii}$. Since $DD'_{i-1}$ was blocked off at $i$ (and therefore has an odd number of $-+$ pairs in the first $i$ rows), and condition 3 held for $DD'_{i-1}$ (so there is an even number of $-+$ pairs in the columns), there must be some $-+$ pair in some row $j'>i$. Since all of the values (up to absolute value) between $\overline{D'_{i-1}[i]}$ and $D'_{i-1}[i]$ must lie in the first $i$ rows of $DD'_{i-1}$, it must be that $\overline{n}\preceq D[j']\prec \overline{D'_{i-1}[i]}\prec D[i] \prec D'_{i-1}[i]\prec D'_{i-1}[j]\preceq D'_{i-1}[j']\preceq n$. This chain of inequalities, along with the fact that condition 3 held for $DD'_{i-1}$, gives that $int_i(D,D'_{i}) \subset int_{j'}(D,D'_{i}) =int_{j'}(D,D'_{i-1}) = \emptyset$. Furthermore, the algorithm gives that $D'_{i-1}[i]\prec D'_{i-1}[j]\prec D'_{i-1}[j']$, so it must be that $D'_{i-1}[j]>0$. Thus $D'_{i-1}[i]\prec D[j] \prec D'_{i-1}[j]$ are all positive values, and so $DD'_{i}$ fails the monotonicity of condition 3 in row $j$. Since $DD'_{i-1}$ was blocked off at $i$, it must be that $D'_{i-1}[i]>0$ as well. Thus the number of $-+$ pairs is unchanged by the transposition $t_{ij}$ and so the number of $-+$ pairs in $DD'_{i}$ remains even. Finally, since we have from above that $ \overline{D'_{i-1}[i]}\prec D[i] \prec D'_{i-1}[i]\prec D'_{i-1}[j]\preceq n$, we have the monotonicity of same sign values (if it applies) in row $i$ of $DD'_{i}$.
\vspace{12pt} Let $\hat{\jmath}=min({j,j'})$. Note that during the $i+1$ through $\hat{\jmath}-1$ iterations of the algorithm, there will never be a block off avoidance move. Indeed, it would have to be blocked off at some value greater than $D'_{i-1}[i]$, but this value is in position $j$. We further note that none of these stages of the algorithm will ever call for a transposition with position $j$. This follows from the fact that none of the pairs in these iterations are $-+$ pairs by our choice of $j'$ and that $D'_{i-1}[i]$ is the minimum positive value below position $i$. Therefore, by Lemma~\ref{not block off avoiding reorder step}, we have that $DD'_{\hat{\jmath}-1}$ still follows conditions 2 and 3 everywhere but in position $j$. We now break into two cases, depending on the value of $\hat{\jmath}$.
\vspace{12pt} Suppose that $\hat{\jmath}=j'$. Then the algorithm will call for the transposition $t_{j'j}$ and if we consider the fact that $int_{j'}(D,D'_{j'-1})=\emptyset$ and the ordering of these values given above, we see that both conditions 2 and 3 once again hold.
\vspace{12pt} Suppose that $\hat{\jmath}=j$. Then the $j^{th}$ iteration of the algorithm again calls for the transposition $t_{j'j}$ and places some $x$ in a position where $D[j]\prec x \prec D'_{i-1}[j']$ and conditions 2 and 3 are restored.
\end{proof}
We will use the following proposition to prove Proposition~\ref{SER prop}.
\begin{proposition}\label{l2r reorder follows all conditions}
A corrected matching $CC'$ satisfies conditions 1,2,3 and 4 if and only if the corresponding reordered matching $DD'$ satisfies conditions 1,2,3 and 4.
\end{proposition}
\begin{proof}
We first show that if $CC'$ follows the conditions, then so too does the corresponding pair $DD'$. Condition 1 clearly holds, as the values themselves are unaltered in the columns when converting from a corrected matching to a reordered matching. Iterations of Corollary~\ref{non block off avoidance preserves two and three} and Lemma~\ref{block off avoiding reorder step} give that conditions 2 and 3 hold for the reordered columns $DD'$. Condition 4 holds because it is equivalent to the reorder algorithm. We note here that $DD'$ is not blocked off at $k$ because condition 3 gave that there are an even number of $-+$ pairs in the two columns. The opposite implication is clear by the same reasoning.
\end{proof}
It remains to be shown that these same four conditions on two columns are exactly the conditions needed to describe split extended $KN$ columns of type $B_n$.
\begin{proposition}\label{Conditions are KN columns}
The set of reordered matchings $DD'$ following conditions 1,2,3 and 4 is in bijection with the split extended reordered $KN$ columns of type $B_n$.
\end{proposition}
\begin{proof}
Note that Proposition~\ref{l2r reorder follows all conditions} reduces the columns to be considered to just the sorted columns $CC'$. This case was shown in Theorem 4.4 of~\cite{Briggs}.
\end{proof}
\subsection{Building a segment of the QBG Path between Split, Extended columns}
\begin{lemma}\label{nec and suff of SER conditions}
The pair $DD'$ satisfies Conditions~\ref{conditions SER} if and only if there is a path $u=u_0,u_1,\hdots ,u_p=v$ in the corresponding quantum Bruhat graph such that $v[1,k]=C'$ and the edge labels form a subsequence of $\Gamma_r(k)$. Moreover, the mentioned path is unique, and for each $i= 1,\hdots,k$, we have $$C(i)=u_0(i)\preceq u_1(i)\preceq \hdots\preceq u_p(i) = C'(i).$$
\end{lemma}
\begin{proof}
The necessity of Condition 4 is given by Propositions~\ref{Reorder Nec} and~\ref{nec condition 3}. The necessity of Condition 1 is clear from the selection of roots available in $\Gamma_r(k)$. The necessity of Conditions 2 and 3 follow from the quantum Bruhat criterion of type $B_n$. Indeed Condition 2 prevents us from ever transposing over a value in positions $k+1,\hdots,n,\overline{n},\hdots, \overline{k+1}$ while Condition 3 follows from the sign restrictions of stage $IV$ moves.
\vspace{12pt} Now, if the four conditions are satisfied, then we obtain the desired path through the iteration of the $Path\_C$ algorithm (cf. Proposition~\ref{Total Path Prop} and its proof). Indeed, Condition 1 assures us that the algorithm terminates correctly even with the limited selection of transpositions in $\Gamma_r(k)$. The monotonicity of same sign pairs from Condition 3 along with Condition 2 assures us that the algorithm never selects reflections of the form $(i,m)$ or $(i,\overline{m})$ for $m>k$. The parity condition of $-+$ pairs from Condition 3 assures us that $DD'$ are not blocked off at $k$, so that there is no need for the root $(k,k+1)$, which is unavailable in $\Gamma_r(k)$. Condition 4 grants use of Lemma~\ref{no block off above ith row}, so every iteration starts off with two columns which are not blocked off.
\end{proof}
We can now prove Proposition~\ref{SER prop}.
\begin{proof}\textit{(of Proposition~\ref{SER prop})}
This follows directly from Lemma~\ref{nec and suff of SER conditions} and Proposition~\ref{Conditions are KN columns}.
\end{proof}
\section{The bijection in type $D_n$}\label{D}
We briefly outline the major differences in the type $D_n$ constructions. First, since KN columns of type $D_n$ have no relation in the ordering of $n$ and $\overline{n}$, the type $D_n$ splitting algorithm ``{\it split\_D}'' begins by converting all $(n,\overline{n})$ pairs in a given column to $0$ values, and then it continues as in type $B_n$ {\rm \cite{lecsbd}}. There is still need for the extending algorithm, and we use the same one as in type $B_n$ (``{\it extend}'').
The quantum Bruhat graph criterion in type $D_n$ differs from type $B_n$ in that we no longer have any arrows of the form $(i,\overline{\imath})$, but in return we have fewer restrictions concerning arrows of the form $(i,\overline{\jmath})$. This change requires further modifications to the path and reordering algorithms, based on the following \textit{type $D_n$ blocked off} condition.
\begin{definition} We say that columns $C = (l_1,l_2,...,l_k)$ and $C' = (r_1,r_2,...,r_k)$ are {\rm type $D_n$ blocked off at $i$ by $b=r_i$} if and only if
$C$ and $C'$ are blocked off at $i$ by $b=r_i$, or the following hold:
\begin{enumerate}
\item $ -|l_i| \leq b <0$, where $-|l_i| = b$ if and only if $l_i = \overline{b}$;
\item $\{b,b+1,\ldots,n\}\subset \{|l_1|,|l_2|,...,|l_i|\}$ and $\{b,b+1,\ldots,n\}\subset \{|r_1|,|r_2|,...,|r_i|\}$;
\item and $|\{j : 1\leq j\leq i,\ l_j>0,\ r_j<0\}|$ is odd.
\end{enumerate}
We then define $``{Path\_D}''$ and $``{ord\_D}''$ to be as in type $B_n$, but by replacing ``\textit{blocked off}'' with ``\textit{type $D_n$ blocked off}''.
\end{definition}
\begin{theorem}
The map ``$\mbox{Path\_D}\circ \mbox{ord\_D}\circ \mbox{extend}\circ \mbox{split\_D}$'' is the inverse of the type $D_n$ ``sfill\_D'' map.
\end{theorem}
\section{Introduction}
The ATLAS and CMS collaborations have performed a large number of searches for new physics during Run~I of the LHC, targeting in particular supersymmetry in analyses based on missing transverse momentum. The implications of the (so far) negative results for new physics go well beyond the interpretations given in the experimental papers. Separate, validated implementations of the analyses using public fast simulation tools
are necessary for theorists to fully exploit the potential of these searches. This will also give useful feedback to the experiments on the impact of their searches.
Recent developments of \madanalysis~\cite{Conte:2014zja,Conte:2012fm}, the framework we use for reimplementing analyses, are presented in Section~\ref{sec:MA5new}.
The public database of reimplemented LHC analyses is then introduced in Section~\ref{sec:pad}.
Finally, a summary of the
validation of one ATLAS and one CMS search for supersymmetry (SUSY) can be found in Sections~\ref{sec:atlas-susy-13-05} and~\ref{sec:cms-sus-13-011}, and conclusions are given in Section~\ref{sec:conclusions}.
\section{New developments in \madanalysis}\label{sec:MA5new}
In most experimental analyses performed at the LHC, and in particular
the searches considered in this work, a branching set of
selection criteria (``cuts'') is used to define several
different sub-analyses (``regions'') within the same analysis.
In conventional coding frameworks, multiple regions are implemented with a nesting
of conditions checking these cuts, which grows exponentially more complicated
with the number of cuts. The scope of this project has therefore motivated us to
extend the \madanalysis\ package to facilitate the handling of analyses with multiple regions,
as described in detail
in~\cite{Conte:2014zja}.
From version 1.1.10 onwards, the implementation of an analysis in the \madanalysis\ framework
consists of implementing three basic functions:
{\it i)}~\texttt{Initialize}, dedicated to the initialization of the signal regions,
histograms, cuts and any user-defined variables;
{\it ii)}~\texttt{Execute}, containing the analysis cuts and weights applied to each event; and
{\it iii)}~\texttt{Finalize}, controlling the production of the results of the analysis, {\it i.e.},
histograms and cut-flow charts.
To illustrate
the handling of multiple regions, we present a few
snippets of our implementation \cite{ma5code:cms-sus-13-011}
of the CMS search for stops in final states with one lepton~\cite{Chatrchyan:2013xna}
(see Section \ref{sec:cms-sus-13-011}).
This search comprises 16 signal regions (SRs), all of which must be declared in the
\texttt{Initialize} function.
This is done through the \texttt{AddRegionSelection} method
of the analysis manager class, of which \texttt{Manager()} is an instance provided
by default with each analysis. It takes as its argument
a string uniquely defining the SR under consideration.
For instance, two of the 16 SRs of the CMS analysis are declared as
\begin{verbatim}
Manager()->AddRegionSelection("Stop->t+neutralino,LowDeltaM,MET>150");
Manager()->AddRegionSelection("Stop->t+neutralino,LowDeltaM,MET>200");
\end{verbatim}
The \texttt{I\-ni\-ti\-a\-li\-ze}
function
should also contain the declaration of selection cuts. This
is handled by the \texttt{AddCut} method of
the analysis manager class. If a cut is common to all SRs, the
\texttt{AddCut} method takes as a single argument a string that uniquely identifies the cut.
An example of the declaration of two common cuts is
\begin{verbatim}
Manager()->AddCut("1+ candidate lepton");
Manager()->AddCut("1 signal lepton");
\end{verbatim}
If a cut is not common to all regions,
the \texttt{AddCut} method requires a second argument, either a
string or an array of strings, consisting of the names of all the regions to which
the cut applies. For example, an $E_T^{\rm miss}>150$~GeV cut that applies to four SRs
could be declared as
\begin{verbatim}
string SRForMet150Cut[] = {"Stop->b+chargino,LowDeltaM,MET>150",
"Stop->b+chargino,HighDeltaM,MET>150",
"Stop->t+neutralino,LowDeltaM,MET>150",
"Stop->t+neutralino,HighDeltaM,MET>150"};
Manager()->AddCut("MET>150GeV",SRForMet150Cut);
\end{verbatim}
Histograms are initialized in a similar fashion using the \texttt{AddHisto} method
of the manager class. A string argument is hence required
to act as a unique identifier for the histogram, provided together with its number
of bins and bounds. A further optional argument consisting
of a string or array of strings can then be used to associate it with specific
regions. The exact syntax can be found in the manual~\cite{Conte:2014zja}.
Most of the logic of the analysis is implemented in the \texttt{Execute} function.
This relies both on standard methods to declare particle objects and to compute
the observables of interest for event samples including detector
simulation~\cite{Conte:2012fm} and on the new manner in which cuts are
applied and histograms filled via the analysis manager class~\cite{Conte:2014zja}.
Below we provide a couple of examples for applying cuts and filling
histograms. After having declared and filled two vectors,
\texttt{Si\-gnal\-E\-lec\-trons} and \texttt{SignalMuons}, with objects satisfying the signal
lepton definitions used in the CMS-SUS-13-011 analysis,
we
require at least one candidate lepton with the following selection cut:
\begin{verbatim}
if( !Manager()->ApplyCut( (SignalElectrons.size()+SignalMuons.size())>0,
"1+ candidate lepton") ) return true;
\end{verbatim}
The \texttt{if(...)} syntax guarantees that a given event is discarded
as soon as all regions fail the cuts applied so far.
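To make this early-exit bookkeeping concrete, the following stand-alone C++ sketch mimics how such a manager can track which regions survive. It is our own simplified illustration, not the actual \madanalysis\ implementation: class and method names are merely modeled on the ones above, and the association of cuts with regions is passed directly to \texttt{ApplyCut} here instead of being declared in advance.

```cpp
#include <map>
#include <string>
#include <vector>

// Simplified sketch of a multi-region cut manager (illustration only,
// not the actual MadAnalysis 5 code).
class RegionManager {
  std::map<std::string, bool> alive_;  // region name -> still passing all cuts
public:
  void AddRegionSelection(const std::string& name) { alive_[name] = true; }

  // Apply a cut outcome to the given regions (empty list = all regions).
  // Returns false once no region survives, so the event can be discarded.
  bool ApplyCut(bool pass, const std::vector<std::string>& regions = {}) {
    if (!pass) {
      if (regions.empty())
        for (auto& r : alive_) r.second = false;
      else
        for (const auto& name : regions) alive_[name] = false;
    }
    for (const auto& r : alive_)
      if (r.second) return true;  // at least one region is still alive
    return false;
  }

  bool IsSurviving(const std::string& name) const { return alive_.at(name); }
};
```

With such a manager, the pattern \texttt{if(!...ApplyCut(...)) return true;} simply stops processing an event once every region has failed some cut.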
Histogramming is as easy as applying a cut. For example, as we are interested in
the transverse-momentum distribution of the leading lepton, our code contains
\begin{verbatim}
Manager()->FillHisto("pT(l)",SignalLeptons[0]->pt());
\end{verbatim}
This results in the filling of a histogram, previously declared with
the name \texttt{"pT(l)"} in the \texttt{Initialize} method, but only
when all cuts applied to the relevant regions are satisfied.
After the execution of the program, a set of {\sc Saf} files (an {\sc Xml}-inspired
format used by \madanalysis) is created. They contain general information on the analyzed events,
as well as the cut-flow tables for all SRs and the histograms.
The structure of the various {\sc Saf} files is detailed in~\cite{Conte:2014zja}.
\section{Public analysis database of LHC new physics searches} \label{sec:pad}
A public database of reimplemented analyses in the {\sc MadAnalysis~5}\ framework and using {\sc Delphes}~3~\cite{deFavereau:2013fsa} was presented in~\cite{Dumont:2014tja}.
The list of analyses presently available in the database can be found on the wiki page~\cite{ma5wiki}. Each analysis code, in the {\sc C++}\ language used in {\sc MadAnalysis~5}, is submitted to INSPIRE, hence is searchable and citeable.
The information on the number of background and observed events is required for setting limits and is provided in the form of an {\sc Xml} file that is submitted to INSPIRE together with the analysis code.
Finally, detector tunings (contained in the detector card for {\sc Delphes}) as well as detailed validation results for each analysis can be found on the wiki page.
To date, there are five SUSY analyses in the database, two from ATLAS and three from CMS.
From an event file in \texttt{StdHep} or \texttt{HepMc} format, the acceptance$\times$efficiency can be found in the output of \madanalysis\ for each SR.
The limit setting can subsequently be done under the CL$_s$ prescription with the code {\tt exclusion\_CLs.py}. It reads the cross section and the acceptance$\times$efficiency from the output of {\sc MadAnalysis~5}, while the luminosity and the required information on the signal regions are taken from the {\sc Xml} file mentioned above.
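The arithmetic behind this prescription can be sketched in a few lines. The following is our own simplified illustration for a single signal region, ignoring the systematic uncertainties that a realistic limit-setting code must take into account; the function names are hypothetical.

```cpp
#include <cmath>

// Poisson CDF P(N <= nobs | mean), via the recurrence p_k = p_{k-1} * mean / k.
double poisson_cdf(int nobs, double mean) {
  double p = std::exp(-mean), sum = p;
  for (int k = 1; k <= nobs; ++k) { p *= mean / k; sum += p; }
  return sum;
}

// Naive CLs = CL_{s+b} / CL_b for one region, with the expected signal yield
// s = cross section x (acceptance x efficiency) x integrated luminosity.
double cls(int nobs, double bkg, double sigma_pb, double acc_eff, double lumi_invpb) {
  double s = sigma_pb * acc_eff * lumi_invpb;
  return poisson_cdf(nobs, s + bkg) / poisson_cdf(nobs, bkg);
}
```

In this simplified picture, a signal point would be excluded at the 95\% confidence level when CL$_s<0.05$.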
\section{ATLAS-SUSY-2013-05: search for third-generation squarks
in final states with zero leptons and two $b$-jets}
\label{sec:atlas-susy-13-05}
In this ATLAS analysis \cite{Aad:2013ija}, stops and sbottoms
are searched for in final states with
large missing transverse momentum and two jets identified as $b$-jets. The
results are presented for an integrated luminosity of $20.1$~fb$^{-1}$ at
$\sqrt{s} = 8$ TeV. Two possible sets of SUSY mass spectra were
investigated in this analysis:
{\it i)}~sbottom $\tilde{b}_1$ pair production with $\tilde{b}_1 \rightarrow b\tilde{\chi}_1^0$, and
{\it ii)}~stop $\tilde{t}_1$ pair production with $\tilde{t}_1 \rightarrow b
\tilde{\chi}_1^\pm$, where the subsequent decay of the $\tilde{\chi}_1^\pm$ is
invisible due to a small mass splitting with the $\tilde{\chi}_1^0$.
Two sets of SRs, SRA and SRB, are defined to provide sensitivity
to respectively large and small mass splittings between the squark and the neutralino.
\begin{table*}[!t]
\begin{center}
\begin{tabular}{ l ||c|c||c|c}
& \multicolumn{2}{|c||}{$m_{\tilde b_1}=350$~GeV} &
\multicolumn{2}{c}{$m_{\tilde t_1}=500$~GeV} \\
cut & ATLAS result & {\sc MA}\,5 result & ATLAS result & {\sc MA}\,5 result \\
\hline\noalign{\smallskip}
$E^{\rm miss}_T> 80$~GeV~filter & $6221.0$ & $5963.7$ & $1329.0$ & $1117.9$\\
+ Lepton veto & $4069.0$ & $4987.9$ & $669.0$ & $932.9$ \\
+ $E^{\rm miss}_T > 250$~GeV & $757.0$ & $802.9$ & $93.0$ & $117.2$ \\
+ Jet Selection & $7.9$ & $5.4$ & $6.2$ & $5.3$ \\
+ $H_{T,3} < 50$~GeV & $5.2$ & $4.6$ & $3.0$ & $4.2$\\
\noalign{\smallskip}\hline
\end{tabular}
\end{center}
\caption{Summary of yields for SRB of ATLAS-SUSY-2013-05 corresponding to the benchmark points
$(m_{\tilde b_1}, m_{\tilde \chi^0_1}) = (350,320)$~GeV and
$(m_{\tilde t_1},m_{\tilde \chi^\pm_1}, m_{\tilde\chi^0_1})=(500,420,400)$~GeV,
as compared to official ATLAS results given in \cite{Aad:2013ija}.
An $E^{\rm miss}_T$ filter is applied at the particle level. See \cite{Aad:2013ija} for more detail.
\label{tab:atlas-13-05-cutflow-SRB}}
\end{table*}
The analysis is very well documented regarding physics, but for
recasting purposes more information than provided in~\cite{Aad:2013ija}
was needed.
This made the validation of the recast code quite difficult
in the earlier stages of the project.
Since then, fortunately, cut-flow tables were made public, as well as SUSY Les Houches Accord (SLHA) input files and the exact version of Monte Carlo tools used to generate the signal.
However, the collaboration did not provide information on trigger-only and $b$-tagging efficiencies.
\begin{figure*}[!t]
\centering
\includegraphics[width=5.2cm]{atlas-05-MCTSRA.pdf} \qquad
\includegraphics[width=5.2cm]{atlas-05-HT3SRB.pdf}
\caption{\label{fig:SRAHistos}Distributions of $m_{CT}$ for SRA and $H_{T,3}$ for SRB of ATLAS-SUSY-2013-05 without their respective cut. On the left plot, the benchmark points used are $(m_{\tilde b_1},m_{\tilde\chi_1^0})$ = $(500,1)$~GeV (in blue)
and $(m_{\tilde t_1},m_{\tilde \chi^\pm_1},m_{\tilde \chi^0_1})$ = $(500,105,100)$~GeV (in red). On the right plot, $(m_{\tilde t_1},m_{\tilde \chi^\pm_1},m_{\tilde \chi^0_1})$ = $(250,155,50)$~GeV (in blue) and $(m_{\tilde b_1},m_{\tilde\chi_1^0})$ = $(300,200)$~GeV (in red).
The solid lines correspond to our re-interpretation within \madanalysis\ and the dashed lines to the ATLAS result.}
\end{figure*}
The comparison between the official cut flows and the ones obtained within \madanalysis\ is presented
in the case of SRB in Table~\ref{tab:atlas-13-05-cutflow-SRB}. Moreover, distributions of the contransverse variable $m_{CT}$ and of $H_{T,3}$ are shown in Fig.~\ref{fig:SRAHistos}. ($H_{T,3}$ is defined as the scalar sum of the $p_T$ of all jets except the three leading ones.)
The largest discrepancy is observed in SRB, as can be seen in the distribution of $H_{T,3}$.
To investigate this issue more deeply, a more detailed cut flow around the ``Jet selection'' line in Table~\ref{tab:atlas-13-05-cutflow-SRB} would be helpful, since this cut directly impacts the $H_{T,3}$ variable.
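For concreteness, these two kinematic variables can be computed as in the following sketch (our own illustration, assuming massless jets for $m_{CT}$ and jets ordered in decreasing $p_T$; this is not the analysis code itself).

```cpp
#include <cmath>
#include <vector>

// Contransverse mass of two massless objects:
// m_CT^2 = (E_T1 + E_T2)^2 - (pT1_vec - pT2_vec)^2 = 2 pT1 pT2 (1 + cos(dphi)).
double mct(double pt1, double phi1, double pt2, double phi2) {
  return std::sqrt(2.0 * pt1 * pt2 * (1.0 + std::cos(phi1 - phi2)));
}

// H_{T,3}: scalar sum of jet pT, excluding the three leading jets
// (jets assumed sorted in decreasing pT).
double ht3(const std::vector<double>& jet_pt) {
  double sum = 0.0;
  for (std::size_t i = 3; i < jet_pt.size(); ++i) sum += jet_pt[i];
  return sum;
}
```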
Overall the agreement is quite satisfactory, considering the expected accuracy
for a fast simulation.
For SRA the agreement is very good. For SRB, the importance of the treatment of soft jets induces a sizable discrepancy with respect to the ATLAS results. Further tunings of the fast detector simulation are needed, and are currently under investigation.
However, the current results (for which detailed validation material can be found at~\cite{ma5wiki}) lead us to conclude that this implementation is validated.
The \madanalysis\ recast code is available as~\cite{ma5code:atlas-susy-2013-05}.
\section{CMS-SUS-13-011: search for stops in the single-lepton final state} \label{sec:cms-sus-13-011}
The CMS search for stops in the single lepton and missing energy, $\ell + E^{\rm miss}_T$, final state with full luminosity at
$\sqrt{s} = 8$~TeV~\cite{Chatrchyan:2013xna} has been taken as a ``template analysis'' to develop a common language and framework for the analysis implementation.
The analysis targets two possible decay modes of the stop: $\tilde{t} \to t \tilde{\chi}^{0}_1$ and
$\tilde{t} \to b \tilde{\chi}^{+}_1$.
Since the stops are pair-produced, their decays give rise to two $W$ bosons in each event, one of which is assumed to decay leptonically, while the other one is assumed to decay hadronically.
In the cut-based version of the analysis, which we consider here, two sets of signal regions with different cuts, each dedicated to one of the two decay modes, are defined. These two sets are further divided into ``low $\Delta M$'' and ``high $\Delta M$'' categories, targeting small and large mass differences with the lightest neutralino $\tilde\chi_1^0$, respectively. Finally, each of these four categories is further sub-divided using four different $E^{\rm miss}_T$ requirements. In total, 16 different, potentially overlapping SRs are defined.
Overall, this analysis is well documented.
Detailed trigger efficiencies and the identification-only efficiencies for electron and muons were provided by the CMS collaboration upon request and are now available on the analysis Twiki page~\cite{Chatrchyan:2013xna} in the section ``Additional Material to aid the Phenomenology Community with Reinterpretations of these Results''.
The $b$-tagging efficiency as a function of $p_T$ was taken from~\cite{Chatrchyan:2013fea}.
Another technical difficulty came from the isolation criteria.
Since we used a simplified isolation criterion, we applied a weighting factor of $0.885$ to the events, determined from the two cut flows (see Table~\ref{tab:cms-13-011-cutflow}).
The validation was done using the eleven benchmark points
presented in the experimental paper.
The validation process was based on (partonic) event samples, in LHE format, provided by the CMS collaboration.
Some examples of histograms reproduced for the validation are shown in Fig.~\ref{fig:kinvarsus13011}. The shapes of the distributions shown---as well as all other distributions that we obtained but do not show here---follow closely the ones from CMS, which indicates the correct implementation of the analysis and all the kinematic variables.
\begin{figure*}[!t]\centering
\includegraphics[width=5.5cm]{cms-011-MT2W.pdf}\quad
\includegraphics[width=5.5cm]{cms-011-pTleadingbjet.pdf}
\caption{Distributions of $M^W_{T2}$ (left) and of the $p_T$ of the leading $b$-tagged jet (right) after the preselection cuts of the analysis CMS-SUS-13-011. The solid lines are obtained from our re-interpretation within \madanalysis, while the dash-dotted lines correspond to the CMS results, given in Fig.~2 of~\cite{Chatrchyan:2013xna}.
} \label{fig:kinvarsus13011}
\end{figure*}
Upon our request, the CMS SUSY group furthermore provided detailed cut-flow tables, which are now also available on the Twiki page of the analysis~\cite{Chatrchyan:2013xna}.
These proved extremely useful because they allowed us to verify our implementation step by step throughout the analysis.
A comparison of our results with the official CMS ones is given in Table~\ref{tab:cms-13-011-cutflow}.
For both cases shown, CMS results are reproduced within about 20\%.
On the whole, we conclude that our implementation gives reasonably accurate results
(to the level that can be expected from fast simulation).
The \madanalysis\ code for this analysis, including extensive comments, is published as \cite{ma5code:cms-sus-13-011}.
\begin{table*}[!t]
\begin{center}
\begin{tabular}{ l ||c|c||c|c}
& \multicolumn{2}{|c||}{$m_{\tilde t}=650$~GeV} & \multicolumn{2}{c}{$m_{\tilde t}=250$~GeV} \\
cut & CMS result & {\sc MA}\,5 result & CMS result & {\sc MA}\,5 result \\
\hline\noalign{\smallskip}
$1\ell\, + \ge 4{\rm jets} + E_T^{\rm miss}>50$~GeV & $31.6\pm0.3$ & $29.0$ & $8033.0\pm38.7$ & $7365.0$ \\
+ $E_T^{\rm miss}>100$~GeV & $29.7\pm0.3$ & $27.3$ & $4059.2\pm 27.5$ & $3787.2$ \\
+ $n_b\ge1$ & $25.2\pm0.2$ & $23.8$ & $3380.1\pm25.1$ & $3166.0$ \\
+ iso-track veto & $21.0\pm0.2$ & $19.8$ & $2770.0\pm22.7$ & $2601.4$ \\
+ tau veto & $20.6\pm0.2$ & $19.4$ & $2683.1\pm22.4$ & $2557.2$ \\
+ $\Delta\phi_{\rm min}>0.8$ & $17.8\pm0.2$ & $16.7$ & $2019.1\pm19.4$ & $2021.3$ \\
+ hadronic $\chi^2<5$ & $11.9\pm0.2$ & $9.8$ & $1375.9\pm16.0$ & $1092.0$ \\
+ $M_T>120$~GeV & $9.6\pm0.1$ & $7.9$ & $355.1\pm8.1$ & $261.3$ \\
${\rm high\,} \Delta M, E^{\rm miss}_T > 300~{\rm GeV}$ & $4.2\pm0.1$ & $3.9$ & --- & ---\\
${\rm low\,} \Delta M, E^{\rm miss}_T > 150~{\rm GeV}$ & --- & --- & $124.0\pm4.8$ & $107.9$\\
\noalign{\smallskip}\hline
\end{tabular}
\end{center}
\caption{Summary of yields for the $\tilde{t} \rightarrow t \tilde{\chi}^0_1$ model for two benchmark points with
$m_{\tilde\chi^0_1}=50$~GeV, as compared to official CMS-SUS-13-011 results given in~\cite{Chatrchyan:2013xna}.
The uncertainties given for the CMS event numbers are statistical only.
See \cite{Chatrchyan:2013xna} for more details on the definition of the cuts.
\label{tab:cms-13-011-cutflow}}
\end{table*}
\section{Conclusions} \label{sec:conclusions}
We presented recent developments of \madanalysis\ that were necessary for implementing and recasting LHC new physics analyses, and discussed the validation of one ATLAS and one CMS SUSY search. A growing number of such analysis codes, including detailed validation material, is being made available in a public database; see~\cite{ma5wiki}.
\bigskip \bigskip \begin{center} {\large \bf Acknowledgements} \end{center}
I am grateful to the ATLAS and CMS collaborations for their help in validating the results. I would also like to thank my colleagues of~\cite{Conte:2014zja} and~\cite{Dumont:2014tja} for their collaboration and for inspiring discussions.
\section{Introduction}
Global existence in time for nonlinear wave equations with small
data usually requires high Sobolev regularity when they are treated
by the classical energy method (see \cite{Ch85}, \cite{Kl85} for
example). The purpose of this note is to give the sharp-regularity
global existence for the semilinear equation with a power nonlinearity
of the derivative; the counterpart for quasilinear equations or for
quadratic nonlinearities still seems out of reach.
Consider the following Cauchy problem (denote $\Box:=\partial_t^2-\Delta$
and $\partial=(\partial_t,\partial_x)$) \begin{equation}\label{SLW}\left\{\begin{array}{l} \Box u
= \sum_{|\alpha|=k} c_\alpha (\partial u)^\alpha := N(u) \\ u(0,x)=u_0\in H^s, \
\partial_t u(0,x) = u_1 \in H^{s-1} \end{array}\right. \end{equation} Let
$s_c=\frac{n+2}{2}-\frac{1}{k-1}$ be the scaling index; we have
\begin{thm}\label{fw5-thm-Glob}
Let $\|u_0\|_{H^s}+\|u_1\|_{H^{s-1}}\le \epsilon$ with $\epsilon$ small
enough, and \begin{equation}\label{sreq}\left\{\begin{array}{ll}s>s_c& {\rm if}\
k-1=\frac{4}{n-1}\vee 2\ {\rm and}\ n\neq 3\\
s\ge s_c & {\rm if}\ k-1>\frac{4}{n-1}\vee 2,
\end{array}\right.\end{equation} then the equation \eqref{SLW} has a
unique global solution in $C_t H^s$ such that $\partial u\in L_t^\infty
H^{s-1}\cap L_t^{k-1}L^\infty$. Moreover, if $k=n=3$, then the
lifespan $T_*$ of the solution with $s>2$ is at least of order
$\exp(c \epsilon^{-2})$ with $c\ll 1$.
\end{thm}
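\begin{rem}
The index $s_c$ is the standard scaling exponent; we record the routine computation for the reader's convenience. If $u$ solves \eqref{SLW}, then so does $u_\lambda(t,x)=\lambda^{\frac{2-k}{k-1}} u(\lambda t,\lambda x)$ for every $\lambda>0$, since $\Box u_\lambda=\lambda^{\frac{k}{k-1}}(\Box u)(\lambda t,\lambda x)$ and $N(u_\lambda)=\lambda^{\frac{k}{k-1}}N(u)(\lambda t,\lambda x)$ carry the same power of $\lambda$. Moreover,
$$\|u_\lambda(0,\cdot)\|_{\dot H^s}=\lambda^{\,s-\frac{n}{2}+\frac{2-k}{k-1}}\,\|u(0,\cdot)\|_{\dot H^s}\ ,$$
which is invariant precisely when $s=\frac{n}{2}-\frac{2-k}{k-1}=\frac{n+2}{2}-\frac{1}{k-1}=s_c$.
\end{rem}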
We will prove a similar result for initial data which are, in
addition, spherically symmetric. For this purpose, we introduce a
concept here. We say that the equation \eqref{SLW} is {\bf radial}
if, whenever $u(t,x)$ is a solution of the equation, then for any rotation
$S$ in $\mathbb{R}^n$, $u(t,Sx)$ is still a solution of the same equation.
For example, when $k=2$, the radial equation must take the form of
$$\Box u=c_1 (\partial_t u)^2+ c_2 |\nabla u|^2\ .$$
\begin{thm}\label{fw5-thm-RadialGlob}
Let $n\ge 2$ and $k> \frac{n+1}{n-1}\vee 2$, and consider the radial
equation, then there exists a global solution in time for $s\ge s_c$
with small radial data.
\end{thm}
\begin{rem}
The regularity requirements in Theorems \ref{fw5-thm-Glob} and
\ref{fw5-thm-RadialGlob} are essentially sharp, since for the
equation
$$\Box u= |\partial_t u|^{k-1} \partial_t u\ ,$$
it is well known that the problem is ill-posed in $H^s$ for $s<s_c$
(see Theorem 2 in \cite{FW3} for example), in the sense that, there
is a sequence of data $f_j, g_j\in C^\infty_0 (B_{R_j})$, for which
the lifespan of the solutions $u_j$ tends to zero as the data's norm
and $R_j$ goes to $0$, under the condition that the solutions obey
finite speed of propagation. Note that the initial data $f_j, g_j$
can be radial functions. Thus for such $s$, we can not hope any
existence results as in these Theorems.
\end{rem}
\begin{rem} For the case $n=k=3$, we have almost global existence in
general and global existence for the radial data. Thus a natural
question is: To what extent does the result of global existence
depend on the radial symmetry? The answer is that it is very little.
In fact, in \cite{MaNaNaOz05}, the authors show that for any small
data with additional rotation regularity, there is global existence
for the equation \eqref{SLW}.
\end{rem}
\begin{rem}
It is regrettable that such an argument cannot be applied to the more
interesting case $k=2$, since it is well known that the corresponding
$L^1 L^\infty$ Strichartz estimate does not hold in general. For
local results for semilinear and quasilinear equations, one can refer
to \cite{Ta99}, \cite{SmTa05} and references therein.
We will use the Strichartz estimates to prove the result. For the
details of the Strichartz estimates, one may consult \cite{FW2} and
references therein.
\begin{prop}[Strichartz Estimate]\label{stri}
Let $u$ be the solution of the linear wave equation and $q< \infty$,
then for $(q,n)\neq (2,3)$ \begin{equation}\label{StrichartzEstimate}\|\partial
u\|_{L^q L^{\infty}\cap L^{\infty}H^{s-1}}\le C_q \|\partial
u(0)\|_{H^{s-1}}\end{equation} with $s\ge \frac{n+2}{2}-\frac{1}{q}$ and $q>
\frac{4}{n-1}\vee 2$ or $s> \frac{n+2}{2}-\frac{1}{q}$ and $q=
\frac{4}{n-1}\vee 2$. For the case $(q,n)=(2,3)$ and $s>2$, we have
\begin{equation}\label{endptSEst}\|\partial u\|_{L^2([0,T],
L^{\infty})}+(\ln(1+T))^{1/2} \|\partial
u\|_{L^{\infty}([0,T],H^{s-1})}\le C (\ln(1+T))^{1/2} \|\partial
u(0)\|_{H^{s-1}}.\end{equation} Moreover, if $u$ is a spatially radial function,
then we have \eqref{StrichartzEstimate} with $s\ge
\frac{n+2}{2}-\frac{1}{q}$ for all $q>\frac{2}{n-1}$ and $q\ge 2.$
\end{prop}
We'll use Picard's iteration argument to give the proof. First, we
give the proof for the case $k-1\ge \frac{4}{n-1}\vee 2$ and
$(n,k)\neq (3,3)$.
Let $u^{(0)}=0$ and then define $u^{(m+1)}$ ($m\in\mathbb{N}$) to be the
solution of the problem
$$\Box u^{(m+1)}=N(u^{(m)})$$ with the given data $(u_0, u_1)$.
We'll see below that $(\partial_t u^{(m)}, \partial_x u^{(m)})$ is a Cauchy
sequence in $C_t H^{s-1}\cap L_t^{k-1}L^\infty$ with the norm
$L_t^\infty H^{s-1}\cap L_t^{k-1}L^\infty$ if
$\|u_0\|_{H^s}+\|u_1\|_{H^{s-1}}=\epsilon$ is small enough.
We claim that for any $m\in \mathbb{N}$, $u^{(m)}\in C H^s\cap C^1
H^{s-1}$ and \begin{equation}\label{inBall}\|\partial u^{(m)}\|_{L^\infty H^{s-1}\cap
L^{k-1}L^\infty}\le M \epsilon\end{equation} with $M$ large enough. In fact, it's
true for $m=0$, and we assume it's true for some $m$, then by
Proposition \ref{stri} with $q=k-1$ and $s$ as in \eqref{sreq},
$$
\begin{array}{lcl}
\|\partial u^{(m+1)}\|_{L^\infty H^{s-1}\cap L^{k-1}L^\infty} &\le&
C(\epsilon+\|N(u^{(m)})\|_{L^1 H^{s-1}})
\\
&\le &
C(\epsilon+\|\partial u^{(m)}\|_{L^{k-1}L^\infty}^{k-1} \|\partial u^{(m)}\|_{L^\infty
H^{s-1}})\\
&\le & C(\epsilon + (M\epsilon)^k)\le M \epsilon.
\end{array}
$$ Thus we get \eqref{inBall} by induction.
Now we show that $(\partial_t u^{(m)}, \partial_x u^{(m)})$ is a Cauchy
sequence in $C_t H^{s-1}\cap L_t^{k-1}L^\infty$ with norm
$L_t^\infty H^{s-1}\cap L_t^{k-1}L^\infty$. Note that for any
$m\in\mathbb{N}_+$, $u^{(m+1)}-u^{(m)}$ is the solution of equation
$$\Box (u^{(m+1)}-u^{(m)})=N(u^{(m)})-N(u^{(m-1)})$$ with the null
data. Then
$$\begin{array}{lcl}\|\partial(u^{(m+1)}-u^{(m)})\|_{L^\infty H^{s-1}\cap
L^{k-1}L^\infty}&\le& C \|N(u^{(m)})-N(u^{(m-1)})\|_{L^1 H^{s-1}}\\
&\le& C \epsilon^{k-1} \|\partial(u^{(m)}-u^{(m-1)})\|_{L^\infty H^{s-1}\cap
L^{k-1}L^\infty}\\
&\le& \frac{1}{2}\|\partial(u^{(m)}-u^{(m-1)})\|_{L^\infty H^{s-1}\cap
L^{k-1}L^\infty}.
\end{array}
$$
Thus we have
$$\|\partial(u^{(m+1)}-u^{(m)})\|_{L^\infty H^{s-1}\cap
L^{k-1}L^\infty}\le 2^{-m} \|\partial(u^{(1)}-u^{(0)})\|_{L^\infty
H^{s-1}\cap L^{k-1}L^\infty}\le 2^{-m} M \epsilon$$ by induction and
\eqref{inBall}. So
\begin{equation}\label{Cauchy}\|\partial(u^{(m)}-u^{(l)})\|_{L^\infty H^{s-1}\cap
L^{k-1}L^\infty}\le 2^{1-\max(m,l)} M \epsilon.\end{equation}
Therefore, there exist $u^i$, $i\in \{0,1,\cdots,n\}$, such that
$$\partial_{i} u^{(m)}\rightarrow u^i\ \mathrm{in}\ C H^{s-1}\cap
L^{k-1}L^\infty\ .$$ Now we define $$u(t)=u_0+\int_0^t u^0\in
CH^{s-1}.$$ Since
$$u^{(m)}(t)=u_0+\int_0^t \partial_t u^{(m)},$$
thus for any $0 < T<\infty$, $t\in [0,T],$
$$\partial_i u^{(m)}(t)=\partial_i u_0+\int_0^t \partial_i \partial_t u^{(m)}\rightarrow
\partial_i u_0+\int_0^t \partial_i u^0=\partial_i u(t)\ \mathrm{in}\ C([0,T],
H^{s-2})$$ and so $\partial_i u=u^i$,
$$\partial_{i}
u^{(m)}\rightarrow \partial_i u\ \mathrm{in}\ C H^{s-1}\cap
L^{k-1}L^\infty\ .$$ Then we can get the solution $u\in CH^s\cap
C^1 H^{s-1}$ of equation \eqref{SLW}.
For the uniqueness and continuous dependence on the initial data,
the argument is essentially the same as above. Let $\|(u_0,u_1)\|_{H^s\times
H^{s-1}}\le \epsilon$ and $\|(v_0,v_1)\|_{H^s\times H^{s-1}}\le \epsilon$.
Assume $u$ and $v$ are two solutions of \eqref{SLW} with data
$(u_0,u_1)$ and $(v_0,v_1)$ respectively, then $u-v$ is the
solution of equation
$$\Box (u-v)=N(u)-N(v)$$ with the data $(u_0-v_0,u_1-v_1)$.
$$\begin{array}{lcl}\|\partial(u-v)\|_{L^\infty H^{s-1}\cap
L^{k-1}L^\infty}&\le& C(\|(u_0-v_0,u_1-v_1)\|_{H^s\times H^{s-1}} + \|N(u)-N(v)\|_{L^1 H^{s-1}})\\
&\le& C(\|(u_0-v_0,u_1-v_1)\|_{H^s\times H^{s-1}} + \epsilon^{k-1}
\|\partial(u-v)\|_{L^\infty H^{s-1}\cap
L^{k-1}L^\infty})\\
&\le&C\|(u_0-v_0,u_1-v_1)\|_{H^s\times H^{s-1}}+
\frac{1}{2}\|\partial(u-v)\|_{L^\infty H^{s-1}\cap L^{k-1}L^\infty}.
\end{array}
$$
Thus we have \begin{equation}\label{unique}\|\partial(u-v)\|_{L^\infty H^{s-1}\cap
L^{k-1}L^\infty}\le C\|(u_0-v_0,u_1-v_1)\|_{H^s\times H^{s-1}}.\end{equation}
This completes the proof for the case $k-1\ge \frac{4}{n-1}\vee 2$
and $(n,k)\neq (3,3)$.
For the case $n=k=3$, it suffices to show instead that
\begin{equation}\label{induction} \|\partial u^{(m)}\|_{L^\infty_{[0,T]} H^{s-1}}\le
M\epsilon,\ \|\partial u^{(m)}\|_{L^{2}_{[0,T]}L^\infty}\le c\ll 1\end{equation} if
$\ln(1+T)\ll \epsilon^{-2}$. In fact, let $$A_m:=\|\partial
u^{(m)}\|_{L^2([0,T], L^{\infty})}+(\ln(1+T))^{1/2} \|\partial
u^{(m)}\|_{L^{\infty}([0,T],H^{s-1})}\ ,$$ then by inductive
assumption,
$$
\begin{array}{lcl}
A_{m+1}&\le& C \ln(1+T)^{\frac{1}{2}}
(\epsilon+\|N(u^{(m)})\|_{L^1_{[0,T]} H^{s-1}})
\\
&\le &
C\ln(1+T)^{\frac{1}{2}}(\epsilon+\|\partial u^{(m)}\|_{L^{2}_{[0,T]}L^\infty}^{2} \|\partial u^{(m)}\|_{L^\infty_{[0,T]}
H^{s-1}})\\
&\le & C \ln(1+T)^{\frac{1}{2}}(\epsilon + c^2 M\epsilon)\\
&\le& M \epsilon \ln(1+T)^{\frac{1}{2}}\ll 1.
\end{array}
$$ Thus we have \eqref{induction} for any $m$.
For the radial cases, one only needs to replace the usual Strichartz
estimate by the corresponding radial $L^{k-1} L^\infty$ estimate in
Proposition \ref{stri}.
\section{Introduction}
User attributes are included in the seminal data sets of recommender system research, e.g., MovieLens~\cite{Harper2015MovieLens}.
From the days of demographic recommender systems, mentioned in~\cite{burke2007hybrid}, attempts have been made to use user attributes to improve recommendation.
With the rise of Graph Neural Networks, interest in leveraging user attributes has been recently renewed~\cite{Huang2021Knowledge,do2022heterogeneous,Jiancan2022GCM}.
In this paper, we take a closer look at how helpful user attributes are in a conventional context-aware recommender system that makes use of user side information.
We study the impact of user attributes that go \emph{in}to a recommender system and the extent to which these attributes come \emph{out} of a recommender system, i.e., whether they strengthen the signal of user information that can be inferred from a user's recommendation list.
Our title mentions gender as a user attribute, but next to binary gender, we also investigate age, occupation and location.
We offer a broad, newly updated view on user attributes in recommendation, by reporting on a set of experiments that make three main contributions.
First, we demonstrate that user attributes are not always helpful to improve recommender prediction performance (Section~\ref{sec:recex}).
This point may not be surprising in light of the well-known disappointment of item attributes~\cite{pilaszy2009recommending}.
However, many papers studying side information combine item and user side information~\cite{dong2017hybrid, chen2018collective, zhou2021leverage}, rather than separating out user attributes as we do.
Second, we show that user attributes actually have the potential to harm recommendation when we look beyond prediction performance to metrics like coverage and diversity (Section~\ref{sec:recdiv}).
Third, we study whether user attributes \emph{survive} from the training data into the recommender system output.
We establish that there is a weak but consistent user signal in recommendation lists that can be detected by a machine learning classifier (Section~\ref{sec:recsuv}).
Interestingly, adding user side information can amplify this signal without actually helping the recommender, opening new research questions for future work.
\section{Related Work}
\label{sec:related}
In this section, we cover the related work that forms the background for each of our three contributions.
\subsection{Context-Aware Recommendation with User Side Information}
Context-aware recommenders integrate one or more of three types of side information: information related to users (e.g., age, gender), related to items (e.g., genre, price), and related to the interaction between users and items (e.g., time, location)~\cite{shi2014collaborative}.
In this paper, we focus on user side information because it is relatively less well studied than item side information and because of its potentially privacy sensitive nature, which makes it interesting and important to today's research community.
Use of user attributes in recommender systems dates at least back to demographic recommender systems~\cite{burke2007hybrid}, as previously mentioned.
Here, we briefly cover some examples of more recent collaborative filtering systems that have integrated user side information.
Variational Autoencoder approaches include~\cite{dong2017hybrid}, which stacks denoising auto-encoders (SDAE) to integrate side information into the latent factors, and~\cite{chen2018collective}, which uses a collective Variational Autoencoder (cVAE) for integrating side information for Top-N recommendation.
More recent work includes a clustering-based collaborative filtering algorithm that integrates user side information (such as age, gender and occupation) in a deep neural network~\cite{zhao2020clustering} and a Gaussian process based recommendation framework that leverages side information~\cite{zhou2021leverage}.
These approaches illustrate that researchers are interested in user attributes not just for improving cold start and sparsity, but also recommender performance across users.
For further examples of recent work, see~\cite{SI_comment2019, kulkarni2020context}.
In this work, we choose to focus on Factorization Machines (FMs)~\cite{rendle2010factorization}, classically used for context-aware recommendation.
FMs are a tried-and-true approach to context-aware recommendation, which allow easy integration of side information via extension of the user-item vector.
The advantage of FMs is that we can easily implement two recommender systems, one with and one without user attributes, and be confident that the use of the user attributes in the training data is the only difference between them.
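To make this concrete, the following minimal sketch (our own illustration, not the RankFM implementation) shows how a second-order FM scores a feature vector, using the $O(kn)$ reformulation of the pairwise term from~\cite{rendle2010factorization}:

```python
def fm_score(x, w0, w, V):
    """Second-order Factorization Machine score:
    w0 + <w, x> + sum_{i<j} <V_i, V_j> x_i x_j,
    computed with the O(k*n) reformulation instead of the
    naive O(k*n^2) double sum over feature pairs."""
    linear = w0 + sum(wi * xi for wi, xi in zip(w, x))
    k = len(V[0])  # number of latent factors
    pairwise = 0.0
    for f in range(k):
        s = sum(V[i][f] * x[i] for i in range(len(x)))
        s_sq = sum((V[i][f] * x[i]) ** 2 for i in range(len(x)))
        pairwise += 0.5 * (s * s - s_sq)
    return linear + pairwise
```

Adding a user attribute to the model amounts to extending `x` with its one-hot indicators, so the factorized pairwise term learns attribute-item interactions with no other change to the model.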
\subsection{Diversity in Recommender Systems}
Diversity in recommender systems has drawn attention in recent years~\cite{Castells2015Diversity}.
Diversity can be defined as the capacity of a recommender system algorithm to recommend varied content, e.g., recommending less popular and more niche items, while still making personalized recommendations to users.
In~\cite{MatevZ2017Diversity}, the authors provide an overview of
different definitions and measurements for diversity.
Here, we are interested in the impact of side information on the diversification of the recommendation output.
Diversification is important for recommendations to be useful to the user.
Its importance is reflected in a surge of recent work on improving diversity, such as~\cite{Tommaso2017Adaptive}, a multi-attribute diversification method, and~\cite{steenvoorden2020attribute}, an attribute-aware diversifying sequential recommender (SR).
Other work on diversity has focused on enhancing the user experience with the system, such as~\cite{Tsai2017Leveraging}, which showed the importance of diversity for user experience.
In this paper, our focus is measuring diversity rather than attempting to improve it.
\subsection{User Signals in Recommender Output}
We are interested in whether recommendation lists contain a signal of user attributes and whether this signal is strengthened when the user attribute is explicitly part of the training data.
Previous work studying a user signal in recommender output is limited.
In~\cite{calandrino2011you}, the output of a recommender system is combined with a limited number of known transactions to infer unknown transactions of a target user.
Our work is closer to~\cite{Beigi2020,zhang2021graph}, which focus on user attributes, specifically inferring the gender, age, and occupation of target users from their recommendation lists combined with additional information.
In~\cite{Beigi2020}, the additional information consists of user embeddings internal to the recommender system.
In~\cite{zhang2021graph}, the additional information is the user's original profile, which is also internal to the recommender system.
To our knowledge, we are the first to carry out user attribute inference \emph{only} on the recommendations that were produced by the system without adding internal information.
Our interest in whether information in the training data is also present in the output of the recommender is reminiscent of the idea of \emph{calibration}~\cite{Steck2018Calibrated}.
An uncalibrated recommender system has a mismatch between properties of the training data and of the output.
The properties conventionally studied in the literature, e.g., by~\cite{Steck2018Calibrated}, are item attributes.
Here we are looking at user attributes.
Like~\cite{Steck2018Calibrated} we find that consistency between the input and the output has an interesting impact above and beyond producing better recommendations in terms of prediction accuracy.
\section{Experimental Setup}
\label{sec:exp}
In this section, we first describe the data sets.
Then, we describe the recommender system algorithms and classification algorithms that we use in our experiments.
\subsection{Data Sets}
Our experiments use three publicly available data sets.
First, we use two MovieLens data sets ML100K and ML1M~\cite{Harper2015MovieLens}.
We choose ML100K and ML1M because they include demographic attributes of users such as gender, age, occupation, and zipcode, as well as the timestamps needed for our temporal splitting.
We used zipcode to generate the State attribute.
In order to convert the MovieLens data from explicit feedback to implicit feedback, we set a \textit{cutoff} of $\geq 3$, such that items with ratings $\geq 3$ are defined as relevant and the rest as non-relevant.
Then, we pre-processed the resulting implicit data such that we have at least 20 interactions per user.
The ML100K subset contains 845 users and 1574 movies with a total of 80961 interactions.
The ML1M subset contains 5755 users and 3624 movies with a total of 831745 interactions.
We also use a subset of LastFM~\cite{Thierry2011Million}, a music data set.
We use artists as the items.
We preprocessed LastFM data, retaining only users who listened to at least 20 artists and artists to which at least 10 users have listened.
The result is a subset of 836 users and 12k artists.
For each user in LastFM data, gender and country location attributes are provided.
We used the Country attribute to generate the Continent and the EU vs Rest attributes.
We choose these data sets because they contain user attributes and they are publicly available.
Table~\ref{tab:data} summarizes the statistics of the data sets.
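The conversion and filtering steps described above can be sketched as follows (an illustrative stand-alone function, not our exact preprocessing script):

```python
def to_implicit(ratings, cutoff=3, min_interactions=20):
    """Convert explicit (user, item, rating) triples to implicit feedback:
    ratings >= cutoff become relevant interactions, then users with fewer
    than min_interactions relevant interactions are dropped."""
    relevant = [(u, i) for u, i, r in ratings if r >= cutoff]
    counts = {}
    for u, _ in relevant:
        counts[u] = counts.get(u, 0) + 1
    return [(u, i) for u, i in relevant if counts[u] >= min_interactions]
```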
\begin{table*}[!h]
\centering
\caption{Statistics of the data sets used for the experiments, including user attributes}
\label{tab:data}
\begin{tabular}{llllll}
\textbf{Data set} & \textbf{\# Users} & \textbf{\# Items} & \textbf{\# Ratings} & \textbf{User attributes (\#Number of categories)} \\ \hline
MovieLens (ML100K) & 845 & 1574 & 80961 & Gender (2), Age (7), Occupation (21), State (52) \\ \hline
MovieLens (ML1M) & 5755 & 3624 & 831745 & Gender (2), Age (7), Occupation (21), State (52) \\ \hline
LastFM subset & 836 & 12155 & 501827 & Gender (2), Continent (7), EU vs Rest (2) \\ \hline
\end{tabular}%
\end{table*}
\subsection{Recommender System Algorithms}
We generated our recommendation lists using Factorization Machines~\cite{rendle2010factorization} and also include BPRMF~\cite{Rendle2009BPR} for comparison.
A Factorization Machine models pair-wise interactions with factorized parameterization and is suited to ranking problems with implicit feedback\footnote{For Factorization Machine implementation, we used RankFM toolkit: \url{https://rankfm.readthedocs.io/en/latest/}. Since RankFM does not include an implementation of hyper-parameter optimization, we implemented our own hyper-parameter optimization function by following initialization of hyper-parameters used in \url{https://github.com/lyst/lightfm}.}.
BPRMF is a matrix factorization algorithm using Bayesian personalized ranking for implicit data\footnote{For BPRMF implementation, we used Elliot Toolkit \url{https://elliot.readthedocs.io/en/latest/index.html}. We followed hyper-parameters optimization suggested in Elliot.}.
We used the RankFM implementation, including two variants of loss: Bayesian Personalized Ranking (BPR)~\cite{Rendle2014Pairwise} and Weighted Approximate-Rank Pairwise (WARP)~\cite{weston2013learning} to learn model weights via Stochastic Gradient Descent (SGD)~\cite{Rendle2014Pairwise}.
WARP loss is often described as performing
better than BPR loss~\cite{abbas2019one,Maciej2015Metadata}.
Our exploratory experiments confirmed that WARP loss was generally better than BPR loss, and we focus on WARP loss in our investigation.
User attributes (gender, age, occupation, and location) are one-hot-encoded before being used by FM.
We note that in each run we add one user attribute at a time.
In other words, we do not test combinations such as gender and location.
In this way, we can isolate the impact of the user attribute.
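The one-hot encoding of a single categorical attribute, added one at a time as described above, can be sketched as follows (an illustrative helper, not the RankFM internals):

```python
def one_hot_attribute(user_attribute):
    """One-hot encode a single categorical user attribute
    (one attribute at a time, as in our runs).
    user_attribute: dict mapping user -> category value."""
    categories = sorted(set(user_attribute.values()))
    index = {c: j for j, c in enumerate(categories)}
    return {u: [1 if index[v] == j else 0 for j in range(len(categories))]
            for u, v in user_attribute.items()}
```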
We used a temporal splitting strategy: for each user, the 10\% most recent interactions form the test set, the next 10\% form the validation set, and the remaining 80\% form the training set.
We used the validation set for tuning hyper-parameters, including batch size, learning rate (lr), user and bias regularization, and the number of latent factors.
For our Factorization Machine implementation\footnote{If accepted the code will be released on GitHub.}, we search for the best lr in $\{0.001, \dots, 0.1\}$, number of training epochs in $\{5, \dots, 500\}$, and number of latent factors in $\{5, \dots, 200\}$; we left the alpha and beta parameters at their defaults.
Finally, we used the MAP metric for optimizing hyper-parameters~\cite{shi2012TFMAP}.
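The per-user temporal split can be sketched as follows (illustrative; rounding details may differ from our exact script):

```python
def temporal_split(user_events, test_frac=0.1, val_frac=0.1):
    """Per-user temporal split: newest interactions -> test,
    next-newest -> validation, the rest -> training.
    user_events: dict mapping user -> list of (timestamp, item)."""
    train, val, test = [], [], []
    for user, events in user_events.items():
        events = sorted(events)  # oldest first
        n = len(events)
        n_test = max(1, round(n * test_frac))
        n_val = max(1, round(n * val_frac))
        test += [(user, i) for _, i in events[n - n_test:]]
        val += [(user, i) for _, i in events[n - n_test - n_val:n - n_test]]
        train += [(user, i) for _, i in events[:n - n_test - n_val]]
    return train, val, test
```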
In order to assess the quality of recommendation, we selected a set of commonly used TopN recommendation metrics, namely Precision (P@N), Recall (R@N), normalized Discounted Cumulative Gain (nDCG), and Hit Rate (HR@N).
We set the size $ N $ of recommendation list to 50.
The diversity of recommendation lists is measured with item coverage, Gini index, and Shannon entropy~\cite{Castells2015Diversity}.
Item coverage computes the proportion of items that a recommender system recommends from the entire item catalog.
The Gini index and Shannon entropy are two metrics that measure distributional inequality.
These measures take into account that an item may be recommended to only some users, in addition to the item's overall distribution and how many users it is recommended to~\cite{Castells2015Diversity}.
The entropy-based measure calculates the diversity offered by an item $i$ for a user $u$ in terms of the popularity of the item among the evaluated recommenders~\cite{Bellogin2010Heterogeneity}.
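The three measures can be sketched as follows (an illustrative implementation; exact formulations, in particular of the Gini index, vary slightly across the literature~\cite{Castells2015Diversity}):

```python
import math
from collections import Counter

def coverage_and_diversity(rec_lists, catalog_size):
    """rec_lists: one recommendation list per user.
    Returns item coverage, Shannon entropy, and Gini index of the
    item-exposure distribution. Note that conventions differ on whether
    a higher Gini value is reported as more or less diverse."""
    counts = Counter(item for lst in rec_lists for item in lst)
    total = sum(counts.values())
    coverage = len(counts) / catalog_size
    probs = sorted(c / total for c in counts.values())  # ascending
    entropy = -sum(p * math.log2(p) for p in probs)
    n = len(probs)
    # Gini of the exposure distribution over recommended items
    gini = (sum((2 * (j + 1) - n - 1) * p for j, p in enumerate(probs))
            / (n - 1)) if n > 1 else 0.0
    return coverage, entropy, gini
```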
\subsection{Classification Algorithms}
We select two machine learning algorithms: Logistic Regression (LogReg) and Random Forest (RF) because they are widely used in literature~\cite{weinsberg2012blurme,chen2014effectiveness}.
We note that in our experiments we found that the LogReg classifier achieves results close and comparable to the RF classifier, with LogReg somewhat better.
In the remainder of the paper, for space reasons, we will focus on classification results from the LogReg classifier\footnote{Full tables of both classifiers including standard deviations can be found in the online folder \href{https://surfdrive.surf.nl/files/index.php/s/2wEglnYmGBQlgx4}{\nolinkurl{All_results.com}}.}.
We compare the performance of the Logistic Regression classifier to the performance of a random classifier using the most-frequent strategy (used as a baseline).
Our classifiers take users' topN recommendation lists as the input.
We split our data using a stratified k-fold cross validation with $k=5$.
We measure the performance of classifiers using F1-score with macro-average.
We choose F1-score because user attributes in our data sets are highly imbalanced.
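For reference, macro-averaged F1 is the unweighted mean of per-class F1 scores, which is why it does not let the frequent classes dominate (a hand-rolled sketch; a library routine computes the same quantity):

```python
def f1_macro(y_true, y_pred):
    """Macro-averaged F1: unweighted mean of per-class F1 scores,
    so rare classes count as much as frequent ones."""
    labels = sorted(set(y_true) | set(y_pred))
    scores = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * precision * recall / (precision + recall)
                      if precision + recall else 0.0)
    return sum(scores) / len(scores)
```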
\section{Leveraging User Attributes in the Recommender Input}
\label{sec:recex}
Our first experiment assesses the contribution that user side information makes to recommendation prediction performance when it is added to the training data.
Recommendation results from this experiment are shown in Table~\ref{tab:RecCARS}.
We report topN recommendations measured with precision, recall, nDCG, and HR.
We compare the performance of the Factorization Machine with WARP loss with and without (`None') side information.
We report BPRMF for comparison with FM without side information and confirm that the FM delivers better performance.
We note that results of FM using BPR loss are comparable, but not included here.
In Table~\ref{tab:RecCARS}, we observe that it is possible to obtain improvements in recommendation performance when using user attributes as side information.
However, the improvements differ from one attribute type to another and from one data set to another.
For ML100K and LastFM, recommendation with side information outperforms recommendation without side information.
For ML1M, we see that only the attribute \textit{State} helps to improve recommendation performance by a very small amount.
These results demonstrate that adding user attributes can help, but is far from a fail-safe strategy for improving recommendations.
\begin{table}[!h]
\centering
\caption{TopN (N=50) recommendation performance measured in terms of Precision@50 (P), Recall@50 (R), nDCG, and HR@50 on BPRMF and FM with WARP loss.}
\label{tab:RecCARS}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{ccccccc}
\hline
\multirow{2}{*}{\textit{Data Sets}} & \multirow{2}{*}{\textbf{Algorithms}} & \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}User\\ Attributes\end{tabular}}} & \multicolumn{4}{c}{\textbf{Top-50 Recommendation}} \\ \cline{4-7}
& & & \textit{P} & \textit{R} & \textit{nDCG} & \textit{HR} \\ \hline
\multirow{6}{*}{\textbf{ML100K}} & \textit{BPRMF} & \textit{None} & 0.2438 & 0.0383 & 0.0759 & 0.7479 \\ \cline{2-7}
& \multirow{5}{*}{\textit{WARP}} & \textit{None} & 0.2877 & 0.0444 & 0.0888 & 0.7751 \\ \cline{3-7}
& & \textit{Gender} & 0.3340 & 0.0522 & 0.1042 & 0.8462 \\ \cline{3-7}
& & \textit{Age} & 0.3210 & 0.0496 & 0.1002 & 0.8497 \\ \cline{3-7}
& & \textit{Occupation} & 0.3114 & 0.0493 & 0.0980 & 0.8414 \\ \cline{3-7}
& & \textit{\begin{tabular}[c]{@{}c@{}}State\end{tabular}} & 0.3268 & 0.0509 & 0.1015 & 0.8462 \\ \specialrule{.2em}{.1em}{.1em}
\multirow{6}{*}{\textbf{ML1M}} & \textit{BPRMF} & \textit{None} & 0.1519 & 0.0345 & 0.0582 & 0.6488 \\ \cline{2-7}
& \multirow{5}{*}{\textit{WARP}} & \textit{None} & 0.2135 & 0.0425 & 0.0743 & 0.7583 \\ \cline{3-7}
& & \textit{Gender} & 0.2028 & 0.0423 & 0.0731 & 0.7498 \\ \cline{3-7}
& & \textit{Age} & 0.1956 & 0.0415 & 0.0718 & 0.7359 \\ \cline{3-7}
& & Occupation & 0.1908 & 0.0417 & 0.0715 & 0.7225 \\ \cline{3-7}
& & \textit{\begin{tabular}[c]{@{}c@{}}State\end{tabular}} & 0.2216 & 0.0434 & 0.0768 & 0.7618 \\ \specialrule{.2em}{.1em}{.1em}
\multirow{6}{*}{\textbf{LastFM}} & \textit{BPRMF} & \textit{None} & 0.2023 & 0.2209 & 0.2061 & 0.9474 \\ \cline{2-7}
& \multirow{5}{*}{\textit{WARP}} & \textit{None} & 0.2093 & 0.2035 & 0.1985 & 0.9665 \\ \cline{3-7}
& & \textit{Gender} & 0.2141 & 0.2178 & 0.2082 & 0.9677 \\ \cline{3-7}
 & & \textit{Continent} & 0.2101 & 0.2113 & 0.2015 & 0.9605 \\ \cline{3-7}
& & \textit{EU vs Rest} & 0.2154 & 0.2221 & 0.2103 & 0.9665 \\ \specialrule{.2em}{.1em}{.1em}
\end{tabular}%
}
\end{table}
\section{Diversity and Coverage of the Recommender Output}
\label{sec:recdiv}
Next, we move to investigate the impact of user side information on coverage and diversity.
Coverage is reported as the number of distinct items recommended, and diversity is measured with Shannon entropy and the Gini index (higher is more diverse).
Table~\ref{tab:div_coverage} reports the results of recommendation using FM with WARP loss with and without user attributes.
We observe that compared to the recommender without side information (`None'), most user attributes depress coverage.
The exceptions are attributes with many values, such as \textit{State} and \textit{Occupation} (in the case of ML100K).
We also observe that user attributes deteriorate diversity. (The exception is ML100K with \textit{Occupation} attribute.)
In some cases the drop is not very large, but the results support our conclusion that side information has the potential to harm recommendation.
\begin{table}[!h]
\centering
\caption{Item coverage and diversity of recommendation lists from Factorization Machine with WARP loss. }
\label{tab:div_coverage}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{ccccc}
\hline
\textbf{Data Sets} & \textit{User attributes} & \textit{\textbf{Items coverage}} & \textit{\textbf{Shannon Entropy}} & \textit{\textbf{Gini index}} \\ \hline
\multirow{5}{*}{\textbf{ML100K}} & \textit{None} & 970 & 8.9572 & 0.2453 \\ \cline{2-5}
& \textit{Gender} & 825 & 8.6667 & 0.2005 \\ \cline{2-5}
& \textit{Age} & 867 & 8.6623 & 0.1994 \\ \cline{2-5}
& \textit{Occupation} & 1008 & 8.9657 & 0.2465 \\ \cline{2-5}
& \textit{State} & 1009 & 8.8817 & 0.2321 \\ \specialrule{.2em}{.1em}{.1em}
\multirow{5}{*}{\textbf{ML1M}} & \textit{None} & 1988 & 9.3369 & 0.1361 \\ \cline{2-5}
& \textit{Gender} & 1703 & 9.1161 & 0.1159 \\ \cline{2-5}
& \textit{Age} & 1583 & 8.8275 & 0.0974 \\ \cline{2-5}
& \textit{Occupation} & 1606 & 8.9066 & 0.1017 \\ \cline{2-5}
& \textit{State} & 2002 & 9.1889 & 0.1249 \\ \specialrule{.2em}{.1em}{.1em}
\multirow{4}{*}{\textbf{LastFM}} & \textit{None} & 4508 & 10.3876 & 0.0969 \\ \cline{2-5}
& \textit{Gender} & 3866 & 10.2768 & 0.0858 \\ \cline{2-5}
& \textit{Continent} & 4010 & 10.3793 & 0.0918 \\ \cline{2-5}
& \textit{EU vs Rest} & 3044 & 9.8399 & 0.0622 \\ \specialrule{.2em}{.1em}{.1em}
\end{tabular}
}
\end{table}
\section{User Signal in the Recommender Output}
\label{sec:recsuv}
Finally, we turn to explore the user signal in the recommendation lists.
First, we will discuss our classification results.
Recall that we use a classifier to attempt to predict user attributes using the lists our recommender has output for each user.
We focus on Logistic Regression (LogReg) because it outperformed other classifiers we tested (in particular, Random Forest).
Results are shown in Table~\ref{tab:Classification}.
For comparison, we report scores of a random classifier with most frequent strategy as a baseline.
In all cases, our classifier outperforms this random baseline, which tells us that there is a user signal present in the recommender output.
Interestingly, both recommendation lists generated without user attributes (`None') and recommendation lists generated with user attributes contain at least a weak signal.
\begin{table*}[!h]
\centering
\caption{Classification results measured in terms of F1-score with macro-average. Recommendation lists are generated using FM with WARP loss. Random classifier uses most frequent strategy. The standard deviation over 5 folds is in between 0.000 and 0.0330.}
\label{tab:Classification}
\begin{tabular}{ccccccc}
\hline
\textbf{Data Sets} & \textit{\textbf{User Attributes}} & \textit{\textbf{Classification}} & \textit{Gender} & \textit{Age} & \textit{Occupation} & \textit{State} \\
\hline
\multirow{3}{*}{\textbf{ML100K}} & \multirow{2}{*}{None} & \textit{Random} & 0.4198 & 0.1657 & 0.0741 & 0.0287 \\
\cline{3-7}
 & & \textit{LogReg} & 0.4861 & 0.2416 & 0.1103 & 0.0419 \\
\cline{2-7}
& \begin{tabular}[c]{@{}c@{}}With side \\ information\end{tabular} & \textit{LogReg} & 0.5347 & 0.2219 & 0.1363 & 0.0579 \\
\specialrule{.2em}{.1em}{.1em}
\multirow{3}{*}{\textbf{ML1M}} & \multirow{2}{*}{None} & \textit{Random} & 0.4188 & 0.1820 & 0.0282 & 0.0569 \\
\cline{3-7}
& & \textit{LogReg} & 0.4958 & 0.2343 & 0.0891 & 0.0675 \\
\cline{2-7}
& \begin{tabular}[c]{@{}c@{}}With Side\\ Information\end{tabular} & \textit{LogReg} & 0.5022 & 0.2434 & 0.0866 & 0.0700 \\
\specialrule{.2em}{.1em}{.1em}
\multicolumn{3}{l}{} & \textit{Gender} & \textit{Continent} & \multicolumn{2}{c}{\textit{EU vs Rest}} \\
\hline
\multirow{3}{*}{\textbf{LastFM}} & \multirow{2}{*}{None} & \textit{Random} & 0.3666 & 0.3545 & \multicolumn{2}{c}{0.3416} \\
\cline{3-7}
& & \textit{LogReg} & 0.4942 & 0.3912 & \multicolumn{2}{c}{0.5015} \\
\cline{2-7}
& \begin{tabular}[c]{@{}c@{}}With Side\\ Information\end{tabular} & \textit{LogReg} & 0.5079 & 0.3917 & \multicolumn{2}{c}{0.5019} \\
\specialrule{.2em}{.1em}{.1em}
\end{tabular}
\end{table*}
Next, to further understand this signal, in Table~\ref{tab:RelativeChange} we provide the raw difference and the percent change in classification performance on the recommendation lists before and after side information is added. (See rows labeled `Classification'.)
We interpret a relatively high percent change to mean that the information provided by a user attribute has \emph{survived} from the training data into the output data.
In cases where the percent change is low, negative, or zero, this information has been lost.
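The two quantities reported in Table~\ref{tab:RelativeChange} are computed directly from the paired scores (a trivial but explicit sketch):

```python
def survival(score_without, score_with):
    """Raw difference and relative (percent) change between a score
    obtained without and with a user attribute in the training data."""
    raw = score_with - score_without
    change = raw / score_without if score_without else float('nan')
    return raw, change
```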
It is natural to expect that survival would depend on the type of user attributes or the number of values of a user attribute.
However, in Table~\ref{tab:RelativeChange} we see that the user attributes with the strongest survival vary across data sets.
Table~\ref{tab:RelativeChange} also includes the raw difference and percent change for recommendation results (nDCG).
We notice that survival is somewhat stronger across the board for ML100K and that this corresponds to the larger improvements in recommendation achieved by adding user attributes.
However, overall there is no indication of a clear and simple relationship between the usefulness of user attributes to a recommender system and the survival of those attributes in the recommender system output.
We discuss the implications of this finding in the final section of the paper.
\begin{table*}[!h]
\centering
\caption{Survival signal and recommendation improvement reported as absolute difference and relative change between recommendation without user side information and recommendation with user side information. Recommendation lists are generated using FM with WARP loss, and recommendation values are calculated using the nDCG metric. Negative values in $\% Change$ for recommendation mean that the user attribute does not help to improve recommendation performance but makes it worse. Negative values in $\% Change$ for classification mean that the user signal did not survive into the recommendation output. }
\label{tab:RelativeChange}
\begin{tabular}{ccccccc}
\hline
\textbf{Data Sets} & \multicolumn{2}{c}{\textit{Task}} & \textit{Gender} & \textit{Age} & \textit{Occ} & \textit{State} \\ \hline
\multirow{4}{*}{\textbf{ML100K}} & \multirow{2}{*}{Classification} & \textit{Raw difference} & 0.0486 & -0.0197 & 0.026 & 0.0160 \\ \cline{3-7}
& & \textit{\% Change} & 0.1000 & -0.0800 & 0.2400 & 0.3800 \\ \cline{2-7}
& \multirow{2}{*}{Recommendation} & \textit{Raw difference} & 0.0154 & 0.0114 & 0.0092 & 0.0127 \\ \cline{3-7}
& & \textit{\% Change} & 0.1700 & 0.1300 & 0.1000 & 0.1400 \\ \specialrule{.2em}{.1em}{.1em}
\multirow{4}{*}{\textbf{ML1M}} & \multirow{2}{*}{Classification} & \textit{Raw difference} & 0.0064 & 0.0091 & -0.0025 & 0.0025 \\ \cline{3-7}
& & \textit{\% Change} & 0.0100 & 0.0390 & -0.0300 & 0.0370 \\ \cline{2-7}
& \multirow{2}{*}{Recommendation} & \textit{Raw difference} & -0.0012 & -0.0025 & -0.0028 & 0.0025 \\ \cline{3-7}
& & \textit{\% Change} & -0.0200 & -0.0300 & -0.0400 & 0.0300 \\ \specialrule{.2em}{.1em}{.1em}
\multicolumn{3}{l}{} & \multicolumn{1}{l}{\textit{Gender}} & \multicolumn{1}{l}{\textit{Continent}} & \multicolumn{2}{l}{\textit{EU vs Rest}} \\ \hline
\multirow{4}{*}{\textbf{LastFM}} & \multirow{2}{*}{Classification} & \textit{Raw difference} & 0.0137 & 0.0004 & \multicolumn{2}{c}{0.0005} \\ \cline{3-7}
& & \textit{\% Change} & 0.0300 & 0.0000 & \multicolumn{2}{c}{0.0000} \\ \cline{2-7}
& \multirow{2}{*}{Recommendation} & \textit{Raw difference} & 0.0097 & 0.0118 & \multicolumn{2}{c}{0.0030} \\ \cline{3-7}
& & \textit{\% Change} & 0.0500 & 0.0600 & \multicolumn{2}{c}{0.0200} \\ \specialrule{.2em}{.1em}{.1em}
\end{tabular}
\end{table*}
\section{Conclusion and Future Work}
In this paper, we have studied a conventional Factorization Machine over which we exercise tight control.
We have shown that user attributes do not always help recommendation and can harm coverage and diversity.
Our results point to the need for caution with user attributes.
The survival of user information into recommender output constitutes a privacy leak of the sort that has concerned~\cite{Beigi2020,zhang2021graph}, but here measured without access to recommender-internal information.
Future work must avoid increasing the user signal in the recommendation list of a user without good cause in order to protect user privacy and respect data minimization.
Future work should also extend the possible parallel with calibration~\cite{Steck2018Calibrated}.
More research is necessary to gain insight into how measuring or manipulating the match between user attributes in the input and the output can be used to understand and improve recommender systems, also moving beyond accuracy.
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
\label{intro}
The multi-armed bandit (MAB) problem considers the strategy one must devise when playing a row of slot machines: i.e., which arms to play to maximize returns. This analogy extends to a wide range of interesting real-world challenges that require online learning while simultaneously maximizing some notion of reward.
The arm may be a medicine a doctor must prescribe to a patient, the reward being the outcome of such treatment on the patient; or the set of resources a manager needs to allocate to competing projects, with the reward being the revenue attained at the end of the month; or the ad/product/content an online recommendation algorithm must choose to display to maximize click-through rate in e-commerce. The contextual MAB setting, where at each interaction with the world side information (known as `context') is available, is a natural extension of this abstraction. The `context' is the physiology of the patient, the type of resources available for each project, or the features of the website user.
Interest in sequential decision processes has recently intensified in both academic and industrial communities, with the surge of advanced reinforcement learning techniques within the machine learning community~\cite{b-Sutton1998}. Reinforcement learning has now been successfully applied in a wide variety of domains, from hyperparameter tuning~\cite{ip-Kandasamy2018} and Monte Carlo tree search~\cite{ic-Bai2013} for complex optimization in science and engineering problems, to revenue maximization~\cite{j-Ferreira2018} and marketing solutions~\cite{j-Schwartz2017} in business and operations research. Besides, reinforcement learning is gaining popularity in e-commerce and digital services as well, improving online advertising at LinkedIn~\cite{ip-Agarwal2013}, engagement with website services at Amazon~\cite{ip-Hill2017}, recommending targeted news at Yahoo~\cite{j-Li2010}, and allowing full personalization of content and art at Netflix~\cite{Netflix2017}. The techniques used in these online success stories are grounded on statistical advances in sequential decision processes, yet raise interesting challenges that state-of-the-art MAB algorithms do not address.
The MAB setting, as it crystallizes the fundamental trade-off between exploration and exploitation in sequential decision making, has been studied throughout the 20th century, with important contributions by~\citet{j-Thompson1935} and later~\citet{j-Robbins1952}.
Over the years, several algorithms have been proposed to overcome the exploration-exploitation tradeoff in the MAB problem~\cite{b-Lattimore2019}.
The $\epsilon$-greedy approach (i.e., with probability $1-\epsilon$ to be greedy and play the arm with the best reward so far, and otherwise to pick an arm at random) has become very popular due to its simplicity, while often retaining good performance~\cite{j-Auer2002}. \citet{j-Gittins1979} formulated a more sophisticated method, based on computing the optimal strategy for certain bandit scenarios where geometrically discounted future rewards are considered. Since the exact computation of the Gittins index is complicated in practical settings, approximations have been developed as well~\cite{j-Brezzi2002}.
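A minimal sketch of the $\epsilon$-greedy rule just described (illustrative only):

```python
import random

def epsilon_greedy(estimates, epsilon=0.1, rng=random):
    """With probability epsilon explore a uniformly random arm;
    otherwise exploit the arm with the highest estimated reward."""
    if rng.random() < epsilon:
        return rng.randrange(len(estimates))
    return max(range(len(estimates)), key=lambda a: estimates[a])
```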
\citet{j-Lai1985} introduced a new class of algorithms, based on the upper confidence bound (UCB) of the expected reward of each arm, for which strong theoretical guarantees have been proven~\cite{j-Lai1987}, and many extensions proposed~\cite{ip-Garivier2011,ip-Garivier2011a}. Bayesian counterparts of UCB-type algorithms~\cite{ip-Kaufmann2012}, where quantiles of posterior distributions are used as proxies for confidence bounds, have been shown to provide a unifying framework for many UCB-based variants for distinctive bandit problems. In par with UCB, Thompson sampling~\cite{j-Thompson1935, j-Russo2018} is one of the most celebrated and studied exploration strategies in stochastic MABs, which readily fits into the Bayesian learning framework as well. Bayesian modeling of the MAB problem facilitates not only generative and interpretable modeling, but sequential and batch processing algorithm development as well.
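For Bernoulli rewards, Thompson sampling reduces to sampling from Beta posteriors and playing the argmax (a standard textbook sketch, e.g., as covered in~\cite{j-Russo2018}, not the method proposed in this paper):

```python
import random

def thompson_step(successes, failures, rng=random):
    """Beta-Bernoulli Thompson sampling: draw one sample from each arm's
    Beta(s+1, f+1) posterior and play the arm with the largest draw."""
    draws = [rng.betavariate(s + 1, f + 1)
             for s, f in zip(successes, failures)]
    return max(range(len(draws)), key=lambda a: draws[a])
```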
Both UCB and Thompson sampling based strategies have been theoretically studied, and shown to be near optimal in classic~\cite{ip-Garivier2011a, j-Agrawal2011} and linear~\cite{ic-Abbasi-yadkori2011, j-Agrawal2012,j-Agrawal2012a,j-Russo2014,j-Russo2016} bandits. However, these algorithms do not typically generalize easily to complex problems, as maintaining exact posteriors is intractable for distributions not in the exponential family~\cite{ic-Korda2013,j-Russo2018}. In general, efficient approximations to high-probability confidence sets, as well as posterior distributions, need to be designed. Developing practical MAB methods to balance exploration and exploitation in complex domains remains largely unsolved.
In an effort to extend MAB algorithms to more complex scenarios, researchers have considered other flexible reward functions and Bayesian inference.
A first step beyond the classic Bernoulli MAB setting for context-dependent binary rewards was the use of Laplace approximations for Thompson sampling~\cite{ic-Chapelle2011}, and more recently, the Polya-Gamma augmentation~\cite{ic-Dumitrascu2018}. These techniques are specifically targeted to binary rewards, modeled via the logistic function.
Recent approaches have embraced Bayesian neural networks and approximate inference to accommodate complex reward functions.
Bootstrap sampling for example has been considered for Thompson sampling in classic bandits~\cite{j-Eckles2019}, as well as for deep network based reinforcement learning~\cite{j-Osband2015}.
Variational methods, stochastic mini-batches, and Monte Carlo techniques have been studied for uncertainty estimation of reward posteriors~\cite{ip-Blundell2015, ic-Kingma2015, j-Lipton2016, ic-Osband2016, ip-Li2016}.
\citet{ip-Riquelme2018} have recently benchmarked some of these techniques, and reported that neural networks with approximate inference, even if successful for supervised learning, under-perform in the MAB setting.
In particular,~\citet{ip-Riquelme2018} emphasize the difficulty of adapting the slowly converging uncertainty estimates of neural-network-based methods to the MAB setting. In parallel, others have focused on extending Thompson sampling to complex online problems~\cite{ip-Gopalan2014} by leveraging ensemble methods~\cite{ic-Lu2017} or generalized sampling techniques~\cite{j-Li2013}. However, all these assume stationary reward functions.
Interesting work to address switching bandit problems has already been proposed, both for UCB~\cite{ip-Garivier2011} and Thompson sampling~\cite{ip-Mellor2013}. However, these are limited to specific bandit models, i.e., those with abrupt and finite number of changes in the mean rewards. Recent efforts, such as~\cite{ic-Besbes2014} and~\cite{j-Raj2017}, have studied more flexible time-varying models. The former is targeted to Bernoulli rewards, and it applies discounting to the parameters of its prior Beta distribution, which requires careful case-by-case tuning of the discount parameters for successful performance. The latter imposes a reward `variation' constraint on the evolution of the arms, as the variation of the expected rewards over the relevant time horizon is bounded by a variation budget, designed to depend on the number of bandit pulls.
On the contrary, we here propose to relax these constraints, and yet consider very flexible time-varying models. We study dynamic --- also known as restless --- bandits, where the world might evolve over time, and rewards are sequentially observed for the played arms. To devise flexible MAB algorithms that are performant, interpretable, and ultimately useful in real-life applications, we ground our research on (sequential) importance sampling-based methods, also known as sequential Monte Carlo (SMC).
Our goal is to design efficient and interpretable sequential decision algorithms by leveraging well-established techniques from statistics and probability theory.
We focus on importance sampling based Monte Carlo, which is a general technique for estimating properties of a distribution, using only samples generated from a different distribution.
SMC methods~\cite{j-Arulampalam2002,b-Doucet2001,j-Djuric2003} are used to estimate posterior densities or expectations in problems with probabilistic models that are too complex to treat analytically, and have been successful in many applications of science and engineering \cite{b-Ristic2004,j-Leeuwen2009,j-Ionides2006,j-Creal2012}.
In the approximate inference literature, Monte Carlo based methods complement the variational approach~\cite{b-Bishop2006}, recently proposed for both general reinforcement learning problems~\cite{ip-Blundell2015, j-Lipton2016}, and posterior sampling-based algorithms as well~\cite{ip-Lamprier2017,ip-Urteaga2018}. Variational inference provides a very general method for approximating generative models, but does not provide optimality guarantees.
On the contrary, SMC methods provide tight convergence guarantees under general assumptions \cite{j-Crisan2002,j-Chopin2004}.
We here consider SMC for dynamic bandits, where the world (i.e., bandit parameters) is time-varying, and rewards are sequentially observed for the played arms.
In order to compute sufficient statistics of the rewards of each arm over time, Bayesian MAB algorithms require sequential updates of their parameter posteriors.
To that end, we propose to leverage SMC to approximate each per-arm parameter posterior. These methods extend the applicability of Bayesian MAB algorithms by permitting more complex models: those for which sampling may be performed even if analytic computation of summary statistics is infeasible.
The use of SMC for Thompson sampling has been previously considered, for a probit MAB reward model by~\citet{j-Cherkassky2013}, as well as to update the posterior of the latent features in a probabilistic matrix factorization model in~\cite{ic-Kawale2015} --- where a Rao-Blackwellized particle filter that exploits the structure of the assumed model is proposed. With more ambitious goals in mind, \citet{ip-Gopalan2014} show that approximating, via sequential Monte Carlo, posterior distributions that lack an explicit closed form results in bounded Thompson sampling regret in complex online problems, such as bandit subset arm selection and job scheduling problems. We consider these efforts valuable, as they provide empirical evidence that approximate MC inference can be successfully combined with Thompson sampling.
Here, we argue that SMC-based sequentially updated random measures approximating the true per-arm parameter posteriors allow for the computation of any statistic a MAB policy might require in dynamic bandits. The proposed SMC-based MAB framework diverges from state-of-the-art MAB techniques, and provides a flexible framework for solving a rich class of MAB problems:
($i$) it leverages SMC for both posterior sampling and estimation of sufficient statistics for Bayesian MAB algorithms --- i.e., Thompson sampling and upper confidence bound-based policies;
($ii$) it addresses restless bandits via the general linear dynamical system --- with unknown parameters via Rao-Blackwellization; and
($iii$) it targets complex nonlinear reward models --- both stateless and context-dependent distributions.
Our work extends existing MAB policy algorithms beyond their original settings
by leveraging the advances in SMC methods from the approximate inference community.
We study the general linear dynamical system (which allows for application of the Kalman filter when the parameters are known), and provide the solution for the more interesting unknown parameter case (by combining Rao-Blackwellization and SMC methods).
Our \textbf{contribution} is unique to the MAB problem in that we provide an SMC-based MAB method that:
\begin{enumerate}[(i)]
\item approximates the posterior densities of interest via random measures, with high-probability convergence guarantees;
\item requires knowledge of the reward function only up to a proportionality constant, i.e., it accommodates nonlinear rewards; and
\item is applicable to time-varying parameter models, i.e., dynamic or restless bandits.
\end{enumerate}
In summary, we provide a flexible framework for solving a rich class of --- dynamic and nonlinear --- bandits.
We formally introduce the MAB problem and (sequential) Monte Carlo methods in Section~\ref{sec:problem_statement}, before providing the description of the proposed SMC-based MAB framework in Section~\ref{sec:proposed_framework}. We evaluate its performance for Thompson sampling and Bayes-UCB based policies in Section~\ref{sec:evaluation}, and conclude with promising research directions suggested by these results in Section~\ref{sec:conclusion}.
\section{Problem Statement}
\label{sec:problem_statement}
\subsection{Multi-armed bandits}
\label{ssec:problem_statement_mab}
We study the problem of maximizing the rewards resulting from sequentially chosen actions $a\in\mathcal{A}$, named \textit{arms} in the bandit literature. The reward function is stochastic, parameterized by the intrinsic properties of each arm (i.e., parameters $\theta \in \Theta$), and can potentially depend on a context $x$, e.g., $x\in \Real^{d}$.
At each round $t$, the reward $y_t$ is observed only for the chosen arm $a_t\in\mathcal{A}$ (one of $|\mathcal{A}|$ possible arms), and is independently and identically drawn from its distribution: $y_t\sim p_{a}(Y|x_t,\theta_{t,a}^*)$,\footnote{Random variables are capitalized, their realizations denoted in lower-case.} the conditional reward distribution for arm $a$, where we allow for time-varying context and parameters (note the subscript $_t$ in both). These per-arm reward distributions are parameterized by $\theta \equiv \theta_{1:t}\equiv \{\theta_{t=1}, \cdots, \theta_{t}\}$, where $\theta_t$ refers to the union of all per-arm parameters at time $t$, i.e., $\theta_t \equiv \{\theta_{a=1,t}, \cdots, \theta_{a=A,t}\}$. Note that the true reward distribution corresponds to a unique $\theta^* \in \Theta$.
This same problem formulation includes static bandits --- where parameters are constant (i.e., $\theta_{t,a}^*=\theta_a^*, \; \forall t$) --- and non-contextual bandits --- described by fixing the context to a constant value $x_t=x, \forall t$.
In the contextual MAB, one must decide at each time $t$, which arm $a_{t}$ to play based on the available context, e.g., $x_{t}\in\Real^{d_X}$. Given the true model, the optimal action is $a_t^* = \mathop{\mathrm{argmax}}_{a^\prime \in \mathcal{A}} \mu_{t,a^\prime}(x_t,\theta^*)$, where $\mu_{t,a}(x_t,\theta^*)=\eValue{Y|a,x_t,\theta^*}$ is the conditional expectation of each arm $a$, given the context at time $t$, and the true parameters $\theta^*$.
The challenge in MABs is the lack of knowledge about the reward-generating distribution, i.e., uncertainty about $\theta^*$ induces uncertainty about the true optimal action $a_t^*$. One needs to simultaneously learn the properties of the reward distribution, and sequentially decide which action to take next. MAB policies choose the next arm to play, with the goal of maximizing the expected reward, based upon the history observed so far. Previous history contains the set of contexts, played arms, and observed rewards up to time $t$, denoted as $\mathcal{H}_{1:t}=\left\{x_{1:t}, a_{1:t}, y_{1:t}\right\}$, with $x_{1:t} \equiv (x_1, \cdots , x_t)$, $a_{1:t} \equiv (a_1, \cdots , a_t)$ and $y_{1:t} \equiv (y_{1,a_1}, \cdots , y_{t,a_t})$.
We use $\pi(a)$ to denote a multi-armed bandit policy, which is in general stochastic on its choices of $a\in\mathcal{A}$. The goal of a policy is to maximize its cumulative reward, or equivalently, to minimize the cumulative regret --- the loss incurred due to not knowing the best arm $a_t^*$ at each time $t$ --- i.e., $R_T=\sum_{t=1}^T y_{t,a^*_t}-y_{t,a_t}$, where $a_t \sim \pi(a)$ denotes the arm picked by the policy. Due to the stochastic nature of the problem, we study the \emph{expected} cumulative regret at time horizon $T$ (not necessarily known a priori)
\vspace*{-1ex}
\begin{equation}
R_T=\eValue{\sum_{t=1}^T y_{t,a^*_t}-y_{t,a_t} } \; ,
\label{eq:mab_cumulative_regret}
\vspace{-1ex}
\end{equation}
where the expectation is taken over the randomness in the outcomes $Y$, the arm selection policy $\pi(\cdot)$, and the uncertainty in the true model $\theta^*$.
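To make the regret notion above concrete, the following minimal sketch computes the empirical cumulative pseudo-regret of a policy on a hypothetical static, non-contextual Bernoulli bandit; the arm means and the uniformly random policy are illustrative assumptions only, not part of the proposed framework.

```python
import numpy as np

rng = np.random.default_rng(0)

def cumulative_regret(mu, pulls):
    """Empirical cumulative pseudo-regret of a sequence of pulls, given
    (hypothetical) true per-arm expected rewards mu[a] = E[Y | a]."""
    mu = np.asarray(mu, dtype=float)
    best = mu.max()
    # Expected loss incurred at each round by not pulling the best arm.
    return np.cumsum([best - mu[a] for a in pulls])

# A uniformly random policy on an illustrative 3-armed bandit.
mu = [0.2, 0.5, 0.8]
pulls = rng.integers(0, 3, size=1000)
R = cumulative_regret(mu, pulls)
```

A random policy accumulates regret linearly in $T$; a good MAB policy concentrates its pulls on the best arm, so its cumulative regret curve flattens over time.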
When the true parameters $\theta^*$ of the arms are known, one can readily determine the optimal selection policy $a_t^* = \mathop{\mathrm{argmax}}_{a^\prime \in \mathcal{A}} \mu_{t,a^\prime}(x_t,\theta^*)$. However, when there is a lack of knowledge about the model, one needs to learn the properties of the environment (i.e., the parameters of the reward distribution) as it interacts with the world (i.e., decides which action to take next). Hence, one must take into account the uncertainty on the unknown (and possibly dynamic) parameters.
In a Bayesian approach to the MAB problem, prior knowledge on the model and parameters is incorporated into the algorithm, and as data from interactions with the environment are collected, a Bayesian algorithm updates the parameter posterior, capturing the full state of knowledge via
\begin{equation}
p(\theta_t|\mathcal{H}_{1:t}) \propto p_{a_t}(y_t|x_t,\theta_t)p(\theta_t| \mathcal{H}_{1:t-1}) \; ,
\label{eq:mab_param_posterior}
\end{equation}
where $p_{a_t}(y_t | x_t, \theta_t)$ is the likelihood of the observed reward $y_t$ after playing arm $a_t$ at time $t$.
Computation of this posterior is critical in the MAB setting, for algorithms based on both posterior sampling and confidence intervals.
For the former (e.g., Thompson sampling~\cite{j-Russo2018}), one uses $p(\theta_t|\mathcal{H}_{1:t})$ to compute the probability of an arm being optimal, i.e., $\pi(a) \sim \Prob{a=a_{t+1}^*|x_{t+1}, \theta_t, \mathcal{H}_{1:t}}$, where the uncertainty over the parameters must be accounted for. Unknown parameters are modeled as random variables with appropriate priors, and the goal is to marginalize over their posterior probability after observing history $\mathcal{H}_{1:t}$ up to time instant $t$, i.e.,
\begin{equation}
\begin{split}
\pi(a|x_{t+1},\mathcal{H}_{1:t})&=\Prob{a=a_{t+1}^*|x_{t+1},\mathcal{H}_{1:t}} = \int \Prob{a=a_{t+1}^*|x_{t+1},\theta_t,\mathcal{H}_{1:t}} p(\theta_t|\mathcal{H}_{1:t}) \dd{\theta} \\
&=\int \myind{a=\mathop{\mathrm{argmax}}_{a^\prime \in \mathcal{A}} \mu_{t+1,a^\prime}(x_{t+1},\theta_t)} p(\theta_t|\mathcal{H}_{1:t}) \dd{\theta_t} \; .
\end{split}
\label{eq:theta_unknown_pr_arm_optimal}
\end{equation}
For the latter (e.g., Bayes-UCB), $p(\theta_t|\mathcal{H}_{1:t})$ is critical to determine the distribution of the expected rewards
\begin{equation}
p(\mu_{t,a}) = \int p(\mu_{t,a}|x_t,\theta_{t}) p(\theta_t|\mathcal{H}_{1:t}) \dd{\theta_t} \;,
\label{eq:density_expected_rewards}
\end{equation}
required for computation of the expected reward quantile value of interest $q_{t,a}(\alpha_{t})$, i.e.,
\begin{equation}
\mathrm{Pr}\left[\mu_{t,a}>q_{t,a}(\alpha_{t})\right]=\alpha_{t} \; .
\label{eq:quantile_expected_rewards}
\end{equation}
Note that we are considering the case wherein $\alpha_t$ depends on time, as in \cite{ip-Kaufmann2012}.
Analytical expressions for the parameter posteriors $p(\theta_t|\mathcal{H}_{1:t})$ are available only for a few reward functions (e.g., Bernoulli and linear contextual Gaussian), but not for many other useful cases, such as logistic or categorical rewards. Furthermore, computation of the key summary statistics in Eqns.~\eqref{eq:theta_unknown_pr_arm_optimal} and \eqref{eq:quantile_expected_rewards} can be challenging for many distributions. These issues become even more pressing when dealing with dynamic parameters, i.e., in environments that evolve over time. To overcome these issues, we propose to leverage (sequential) importance sampling.
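For the Bernoulli case just mentioned, the parameter posterior is conjugate and Thompson sampling can be run exactly; the following sketch, with hypothetical arm means and flat Beta(1,1) priors, illustrates the closed-form posterior update of Eqn.~\eqref{eq:mab_param_posterior} that is unavailable in the more complex settings targeted in this work.

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true = np.array([0.3, 0.6])   # hypothetical static Bernoulli arm means
alpha = np.ones(2)                  # Beta(alpha, beta) posterior per arm,
beta = np.ones(2)                   # initialized at a flat Beta(1, 1) prior

counts = np.zeros(2)
for t in range(2000):
    # Thompson sampling: draw one parameter sample per arm from the exact
    # (conjugate) posterior, and play the arm maximizing the sampled mean.
    theta_s = rng.beta(alpha, beta)
    a = int(np.argmax(theta_s))
    y = rng.binomial(1, theta_true[a])
    alpha[a] += y                   # conjugate Beta-Bernoulli update
    beta[a] += 1 - y
    counts[a] += 1
```

The conjugate update is what the SMC machinery of the next sections replaces when no such closed form exists.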
\subsection{Sequential Importance Sampling and Sequential Monte Carlo}
\label{ssec:problem_statement_sis}
Monte Carlo (MC) methods are a family of numerical techniques based on repeated random sampling, which have been shown to be flexible enough for both numerical integration and drawing samples from probability distributions of interest~\cite{b-Liu2001}.
Importance sampling (IS) is a MC technique for estimating properties of a distribution when obtaining samples from the distribution is difficult. The basic idea of IS is to draw, from an alternative distribution, samples which are subsequently weighted to guarantee estimation accuracy (and often reduced variance). These methods are used both to approximate posterior densities, and to compute expectations in probabilistic models, i.e.,
\begin{equation}
\bar{f}=\int f(\varphi) p(\varphi) \mathrm{d}\varphi \;,
\end{equation}
when these are too complex to treat analytically.
In short, IS relies on a proposal distribution $\pi(\cdot)$, from which one draws $M$ samples $\varphi^{(m)} \sim \pi(\varphi), \; m=1, \cdots , M$, and a set of weights
\begin{equation}
\widetilde{w}^{(m)}=\frac{p(\varphi^{(m)})}{\pi(\varphi^{(m)})} \;, \quad \text{with} \quad w^{(m)}=\frac{\widetilde{w}^{(m)}}{\sum_{m=1}^M\widetilde{w}^{(m)}} \; .
\end{equation}
If the support of $\pi(\cdot)$ includes the support of the distribution of interest $p(\cdot)$, one computes the IS estimator of a test function based on the normalized weights $w^{(m)}$,
\begin{equation}
\bar{f}_M=\sum_{m=1}^M w^{(m)} f\left(\varphi^{(m)}\right) \; ,
\end{equation}
with convergence guarantees under weak assumptions~\cite{b-Liu2001}
\begin{equation}
\bar{f}_M \mathop{\longrightarrow}_{M\rightarrow \infty}^{a.s.} \bar{f} \; .
\label{eq:is_convergence}
\end{equation}
Note that IS can also be interpreted as a sampling method where the true posterior distribution is approximated by a random measure
\begin{equation}
p(\varphi) \approx p_M(\varphi) = \sum_{m=1}^M w^{(m)} \delta\left(\varphi^{(m)}-\varphi\right) \;,
\end{equation}
which leads to estimates that are nothing but the test function integrated with respect to the empirical measure
\begin{equation}
\bar{f}_M=\int f(\varphi) p_M(\varphi) \mathrm{d}\varphi = \sum_{m=1}^M f\left(\varphi^{(m)}\right) w^{(m)} \; .
\end{equation}
In many science and engineering problems, data are acquired sequentially in time, and one is interested in learning about the state of the world as observations are collected. In these circumstances, one needs to infer all the unknown quantities in an online fashion. Furthermore, it is likely that the underlying parameters evolve over time. If the dynamics are modeled with known linearity and Gaussianity assumptions, then one can analytically obtain the posterior distributions of interest in closed form, via the celebrated Kalman filter~\cite{j-Kalman1960}.
On the contrary, practical scenarios often require more lax assumptions: nonlinear functions, uncertainty on parameters, non-Gaussian distributions, etc. For these cases, sequential importance sampling (SIS), also known as sequential Monte Carlo (SMC)~\cite{b-Doucet2001} or particle filtering (PF)~\cite{ib-Djuric2010}, has been shown to be of great flexibility and value~\cite{b-Ristic2004,j-Leeuwen2009,j-Ionides2006,j-Creal2012}. These are simulation-based methods that provide a convenient solution to computing online approximations to posterior distributions.
In sequential importance sampling, one considers a proposal distribution that factorizes over time
\begin{equation}
\pi(\varphi_{0:t})=\pi(\varphi_t|\varphi_{1:t-1}) \pi(\varphi_{1:t-1})=\prod_{\tau=1}^{t} \pi(\varphi_{\tau}|\varphi_{1:\tau-1}) \pi(\varphi_0) \; ,
\end{equation}
which helps in matching the model dynamics $p(\varphi_t|\varphi_{1:t-1})$ to allow for recursive evaluation of the importance weights
\begin{equation}
w_t^{(m)} \propto \frac{p(\varphi_{t}^{(m)}|\varphi_{1:t-1}^{(m)})}{\pi(\varphi_{t}^{(m)}|\varphi_{1:t-1}^{(m)})} w_{t-1}^{(m)} \; .
\end{equation}
One problem with SIS following the above weight update scheme is that, as time evolves, the distribution of the importance weights becomes more and more skewed, resulting in few (or just one) non-zero weights.
To overcome this degeneracy, an additional selection step, known as resampling \cite{j-Li2015}, is added. In its most basic setting, one replaces the weighted empirical distribution with an equally weighted random measure at every time instant, where the number of offspring for each sample is proportional to its weight. This is known as the Sequential Importance Resampling (SIR) method \cite{j-Gordon1993}, which we rely on for our proposed framework in Section \ref{sec:proposed_framework}. We acknowledge that any of the numerous methodological improvements within the SMC literature --- such as alternative resampling mechanisms \cite{j-Li2015,j-Martino2017} or advanced SMC algorithms~\cite{ip-Merwe2001, j-Andrieu2010} --- are readily applicable to our proposed methods, and likely to have a positive performance impact on the proposed MAB.
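The basic resampling step described above can be sketched as follows; the particle values and the deliberately skewed weights are hypothetical, and only plain multinomial resampling (as in SIR) is shown, though the alternative mechanisms cited above would slot in at the same place.

```python
import numpy as np

rng = np.random.default_rng(3)

def resample(particles, weights, rng):
    """Multinomial resampling: draw M offspring indices with replacement,
    proportional to the normalized weights, and reset weights to 1/M."""
    M = len(particles)
    idx = rng.choice(M, size=M, replace=True, p=weights)
    return particles[idx], np.full(M, 1.0 / M)

# A degenerate weight distribution: most mass concentrates on few particles.
M = 1_000
particles = rng.normal(size=M)
w_tilde = np.exp(5.0 * particles)        # hypothetical skewed weights
weights = w_tilde / w_tilde.sum()

new_particles, new_weights = resample(particles, weights, rng)
```

After resampling, the equally weighted particle cloud concentrates on the high-weight region, which counteracts the degeneracy of the plain SIS weight recursion.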
\section{Proposed SMC-based MAB framework}
\label{sec:proposed_framework}
In this work, we leverage sequential Monte Carlo to compute the posteriors and sufficient statistics of interest for a rich class of MAB problems: nonstationary bandits --- modeled via the general linear dynamical system --- with complex --- both stateless and context-dependent --- reward distributions.
Given any reward function, the stochastic MAB with dynamic parameters is governed by the combination of the in-time transition distribution, and the stochastic reward function,
\begin{equation}
\begin{cases}
\theta_{t}^* \sim p(\theta_{t}^*|\theta_{t-1}^*) \; ,\\
y_t\sim p_{a_t}(Y|x_t,\theta_{t}^*) \; .
\end{cases}
\label{eq:dynamic_mab}
\end{equation}
In dynamic MAB models, the transition distribution is key to compute the necessary statistics of the unknown parameter posterior $p(\theta_t|\mathcal{H}_{1:t})$ in Eqn.~\eqref{eq:mab_param_posterior}, which is updated over time via
\begin{equation}
p(\theta_{t}|\mathcal{H}_{1:t}) \propto p_{a_t}(y_t|x_t, \theta_{t}) p(\theta_{t} | \theta_{t-1}) p(\theta_{t-1}|\mathcal{H}_{1:t-1}) \; .
\label{eq:dynamic_posterior}
\end{equation}
The challenge in these types of bandits lies in computing this posterior according to the underlying time-varying dynamics $p(\theta_{t} | \theta_{t-1})$ and reward function $p_{a_t}(Y|x_t,\theta_{t})$. In general, MABs are modeled with per-arm reward functions, each described with its own idiosyncratic parameters, i.e., $p_{a}(Y| x_t,\theta_{t}^*)=p_{a}(Y|x_t,\theta_{t,a}^*)$, that evolve independently in time: $p(\theta_{t}^*|\theta_{t-1}^*)=\prod_{a=1}^{A} p(\theta_{t,a}^*|\theta_{t-1,a}^*)$. In this work, we adhere to the same approach, and propose to leverage SMC to approximate the true posterior of each per-arm reward parameters.
We present a flexible SIR-based MAB framework that ($i$) is described in terms of generic likelihood and transition distributions, i.e., $p_a(Y|x_t,\theta_{t,a})$ and $p(\theta_{t,a}|\theta_{t-1,a})$, respectively; and ($ii$) is readily applicable for Bayesian MAB algorithms that compute test functions of per-arm parametric reward distributions, e.g., Thompson sampling and upper confidence bound-based policies.
We describe in Algorithm~\ref{alg:sir-mab} how to combine the Sequential Importance Resampling (SIR) method as in~\cite{j-Gordon1993} with Bayesian algorithms --- Thompson sampling and Bayes-UCB policies --- in dynamic bandits. The flexibility of SMC allows us to consider any likelihood function that is computable up to a proportionality constant, as well as any time-varying dynamic that can be described by a transition density from which we can draw samples.
We first describe in detail the proposed SIR-based framework in Section~\ref{ssec:sir-policies}, before describing in Section~\ref{ssec:linear_mixing_dynamics} the specific transition densities $p(\theta_{t,a}|\theta_{t-1,a})$ for the general linear model dynamics considered here, as well as different reward functions that can be readily accommodated within the proposed framework in Section~\ref{ssec:mab_reward_models}.
\begin{algorithm}
\caption{SIR for MAB}
\label{alg:sir-mab}
\begin{algorithmic}[1]
\REQUIRE $A$, $p(\theta_a)$, $p(\theta_{t,a}|\theta_{t-1,a})$, $p_a(y|x,\theta)$, $M$ (for UCB we also require $\alpha_t$)
\STATE Draw initial samples from the parameter prior
\begin{equation}
\overline{\theta}_{0,a}^{(m)} \sim p(\theta_a), \; m=1,\cdots,M, \; \forall a \in \mathcal{A} \;, \quad \text{ and } \quad w_{0,a}^{(m)}=\frac{1}{M} \; .
\nonumber
\end{equation}
\FOR{$t=0, \cdots, T-1$}
\STATE Receive context $x_{t+1}$
\FOR{$a=1, \cdots, A$}
\STATE Estimate sufficient statistics of the MAB policy for all arms, \\
\qquad given $\{w_{t,a}^{(m_{t,a})} \}$ and $\{\theta_{t,a}^{(m_{t,a})}\}$, $\forall m_{t,a}$, $\forall a\in\mathcal{A}$.\\
\small
\quad \textit{Thompson sampling:}\\
\qquad Draw a sample $s \sim \Cat{w_{t,a}^{(m_{t,a})}}$, \\
\qquad Propagate the sample parameter $\theta_{t+1,a}^{(s)}\sim p\left(\theta_{t+1,a}|\theta_{t,a}^{(s)}\right)$, \\
\qquad Set $\mu_{t+1,a}\left(x_{t+1}, \theta_{t+1,a}^{(s)}\right)=\eValue{Y|a,x_{t+1}, \theta_{t+1,a}^{(s)}}$ .\\
\quad \textit{Bayes-UCB:}\\
\qquad Draw $M$ candidate samples $m_{a}^\prime \sim \Cat{w_{t,a}^{(m_{t,a})}}$,\\
\qquad Propagate parameters $\theta_{t+1,a}^{(m_{a}^\prime)} \sim p\left(\theta_{t+1,a}|\theta_{t,a}^{(m_{a}^\prime)}\right)$, \\
\qquad Set $\mu_{t+1,a}\left(x_{t+1}, \theta_{t+1,a}^{(m_{a}^\prime)}\right)=\eValue{Y|a,x_{t+1}, \theta_{t+1,a}^{(m_{a}^\prime)}}$,\\
\qquad Estimate quantile $q_{t+1,a}(\alpha_{t+1})$ as in Eqn.~\eqref{eq:mc_quantile_value}.
\normalsize
\ENDFOR
\STATE Decide next action $a_{t+1}$ to play\\
\small
\quad \textit{Thompson sampling:} \hspace*{0.6cm} $a_{t+1}=\mathop{\mathrm{argmax}}_{a^\prime \in \mathcal{A}} \mu_{t+1,a^\prime}\left(x_{t+1}, \theta_{t+1,a^\prime}^{(s)}\right)$ \\
\quad \textit{Bayes-UCB:} \hspace*{1.8cm} $a_{t+1}=\mathop{\mathrm{argmax}}_{a^\prime \in \mathcal{A}}q_{t+1,a^\prime}(\alpha_{t+1})$
\normalsize
\STATE Observe reward $y_{t+1}$ for played arm
\STATE Update the posterior approximations $p_M(\theta_{t,a}|\mathcal{H}_{1:t})$ for all arms\\
\small
\vspace*{-2ex}
\begin{enumerate}[(a)]
\item \hspace*{-3ex}Resample $m_{t+1,a}=1,\cdots, M$ parameters $\overline{\theta}_{t,a}^{(m_{t+1,a})}=\theta_{t,a}^{(m_{t,a}^\prime)}$ per arm $a\in \mathcal{A}$, \\
\hspace*{-3ex} where $m_{t,a}^\prime$ is drawn with replacement according to the importance weights $w_{t,a}^{(m_{t,a})}$.
\vspace*{-1ex}
\item \hspace*{-3ex}Propagate resampled parameters by drawing from the transition density
\vspace*{-1ex}
\begin{equation}
\theta_{t+1,a}^{(m_{t+1,a})} \sim p\left(\theta_{t+1,a}|\overline{\theta}_{t,a}^{(m_{t+1,a})}\right) \; , \; m_{t+1,a}=1,\cdots, M, \; \forall a \in \mathcal{A} \; .
\label{eq:sir-mab-propagate}
\vspace*{-2ex}
\end{equation}
\item \hspace*{-3ex}Weight samples of the played arm $a_{t+1}$ based on the likelihood of the observed $y_{t+1}$
\vspace*{-1ex}
\begin{equation}
\widetilde{w}_{t+1,a_{t+1}}^{\left(m_{t+1,a_{t+1}}\right)} \propto p_{a_{t+1}}\left(y_{t+1}|x_{t+1},\theta_{t+1,a_{t+1}}^{\left(m_{t+1,a_{t+1}}\right)}\right) \; ,
\label{eq:sir-mab-weights}
\vspace*{-1ex}
\end{equation}
\hspace*{-3ex} and normalize the weights
\vspace*{-1ex}
\begin{equation}
w_{t+1,a_{t+1}}^{\left(m_{t+1,a_{t+1}}\right)}=\frac{\widetilde{w}_{t+1,a_{t+1}}^{\left(m_{t+1,a_{t+1}}\right)}}{\sum_{m_{t+1,a_{t+1}}=1}^M\widetilde{w}_{t+1,a_{t+1}}^{\left(m_{t+1,a_{t+1}}\right)}} \; , \; m_{t+1,a}=1,\cdots, M.
\label{eq:sir-mab-weights-norm}
\vspace*{-2ex}
\end{equation}
\end{enumerate}
\normalsize
\ENDFOR
\end{algorithmic}
\end{algorithm}
\subsection{SMC-based MAB policies}
\label{ssec:sir-policies}
We leverage SMC for both posterior sampling and the estimation of sufficient statistics in Bayesian MAB algorithms.
In the proposed SIR-based MAB framework, the fundamental operation is to sequentially update the random measure $p_M(\theta_{t,a}|\mathcal{H}_{1:t})=\sum_{m=1}^M w_{t,a}^{(m)} \delta\left(\theta_{t,a}^{(m)}-\theta_{t,a}\right)$ that approximates the true posterior $p(\theta_{t,a}|\mathcal{H}_{1:t})$, as it allows for computation of any statistic of per-arm parametric reward distributions a MAB policy might require.
Specifically, we follow the SIR method~\cite{j-Gordon1993}:
\begin{itemize}
\item The proposal distribution in Step (9.b) follows the assumed parameter dynamics, i.e., $\pi(\theta_{t,a})=p(\theta_{t,a}|\theta_{t-1,a})$;
\item Weights in Step (9.c) are updated based on the likelihood of observed rewards, i.e., $p_a(y_t|x_t,\theta_{t,a})$; and
\item The approximating random measure is resampled at every time instant in Step (9.a).
\end{itemize}
In the following, we describe how SIR can be used for both posterior sampling-based and UCB-type policies; i.e., the specific instructions to execute in Steps 5 and 7 of Algorithm~\ref{alg:sir-mab} for Thompson sampling and Bayes-UCB.
\subsubsection{SMC-based Thompson Sampling}
\label{sssec:sir-ts}
Thompson sampling is a probability matching algorithm that randomly selects an action to play according to the probability of it being optimal~\cite{j-Russo2018}. Thompson sampling has been empirically proven to perform satisfactorily and to enjoy provable optimality properties, both for problems with and without context \cite{j-Agrawal2012,j-Agrawal2012a,ic-Korda2013,j-Russo2014,j-Russo2016}. It requires computation of the optimal probability as in Eqn.~\eqref{eq:theta_unknown_pr_arm_optimal}, which is in general analytically intractable. Alternatively, Thompson sampling operates by drawing a sample parameter $\theta_t^{(s)}$ from its updated posterior $p(\theta_t|\mathcal{H}_{1:t})$, and picking the optimal arm for such sample, i.e.,
\begin{equation}
a_t=\mathop{\mathrm{argmax}}_{a^\prime \in \mathcal{A}} \mu_{t,a^\prime}^{(s)}\left(x_t,\theta_{t,a^\prime}^{(s)}\right) \; .
\label{eq:mc_expected_reward}
\end{equation}
As pointed out already, the posterior distribution $p(\theta_t|\mathcal{H}_{1:t})$ is for many cases of applied interest either analytically intractable or hard to sample from. We propose to use the SIR-based random measure $p_M(\theta_t|\mathcal{H}_{1:t})$ instead, as it provides an accurate approximation to the true posterior with high probability.
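A minimal sketch of one SMC-based Thompson sampling decision (Steps 5 and 7 of Algorithm~\ref{alg:sir-mab}) follows. It assumes, purely for illustration, a stateless bandit whose expected reward equals the parameter itself, hypothetical per-arm particle clouds, and Gaussian random-walk dynamics with a known step size.

```python
import numpy as np

rng = np.random.default_rng(4)
A, M = 3, 500
sigma = 0.01  # hypothetical random-walk standard deviation

# Per-arm weighted particle approximations of p(theta_{t,a} | H_{1:t}):
# rows index arms, columns index particles (uniform weights after resampling).
theta = np.stack([rng.normal(m, 0.05, size=M) for m in (0.2, 0.5, 0.8)])
w = np.full((A, M), 1.0 / M)

def thompson_arm(theta, w, rng):
    """One SMC-based Thompson sampling decision: per arm, draw a particle
    index from Cat(w_a), propagate it through the transition density, and
    play the argmax of the (here: identity) expected-reward map."""
    A, M = theta.shape
    mu_next = np.empty(A)
    for a in range(A):
        s = rng.choice(M, p=w[a])                        # s ~ Cat(w_a)
        mu_next[a] = theta[a, s] + sigma * rng.normal()  # propagate sample
    return int(np.argmax(mu_next))

a_next = thompson_arm(theta, w, rng)
```

Randomness in both the particle draw and the propagation step is what preserves the exploratory behavior of exact Thompson sampling under the SMC approximation.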
\subsubsection{SMC-based Bayes-UCB}
\label{sssec:sir-bucb}
Bayes-UCB \cite{ip-Kaufmann2012} is a Bayesian approach to UCB type algorithms, where quantiles are used as proxies for upper confidence bounds. \citet{ip-Kaufmann2012} has proven the asymptotic optimality of Bayes-UCB's finite-time regret bound for the Bernoulli case, and argues that it provides a unifying framework for several variants of the UCB algorithm for parametric MABs. However, its application is limited to reward models where the quantile functions are analytically tractable.
We propose instead to compute the quantile function of interest by means of the SIR approximation to the parameter posterior, where one can evaluate the expected reward at each round $t$ based on the available posterior samples, i.e., $\mu_{t,a}^{(m)}\left(x_t,\theta_{t,a}^{(m)}\right)$, $m=1,\cdots,M$. The quantile value
$
\mathrm{Pr}[\mu_{t,a}>q_{t,a}(\alpha_t)] = \alpha_{t}
$
is then computed by
\begin{equation}
q_{t,a}(\alpha_t):=\max \Big\{\mu \; \Big| \sum_{m|\mu_{t,a}^{(m)}>\mu} w_{t,a}^{(m)}\ge\alpha_t\Big\} \; .
\label{eq:mc_quantile_value}
\end{equation}
\subsection{Dynamic multi-armed bandits}
\label{ssec:linear_mixing_dynamics}
In the proposed SIR-based MAB framework, whether a Thompson sampling or UCB policy is used, the fundamental operation is to sequentially update the random measure $p_M(\theta_{t,a}|\mathcal{H}_{1:t})$ that approximates the true per-arm posterior $p(\theta_{t,a}|\mathcal{H}_{1:t})$. This allows for computation of any statistic a MAB policy might require, and extends the applicability of existing policy algorithms beyond their original assumptions, from static to time-evolving bandits. We focus here on dynamic bandits, and also illustrate how to deal with static bandits within this same framework in
\ifx\undefined the appendix \else \autoref{assec:mab_static_bandits} \fi.
In Algorithm~\ref{alg:sir-mab}, one needs to be able to draw samples from the transition density $p(\theta_{t,a}|\theta_{t-1,a})$, which will depend on the case-by-case MAB dynamics. A widely applicable model for time-evolving bandit parameters is the general linear model~\cite{b-Whittle1951, b-Box1976, b-Brockwell1991, b-Durbin2001, b-Shumway2010, b-Durbin2012}, where the parameters of the bandit $\theta \in \Real^{d_{\Theta}}$ are modeled to evolve over time according to
\begin{equation}
\theta_{t,a}=L_a \theta_{t-1,a}+\epsilon_a \;, \qquad \epsilon_a\sim\N{\epsilon_a|0, \Sigma_a} \; ,
\label{eq:linear_mixing_dynamics}
\end{equation}
where $L_a \in \Real^{d_a \times d_a}$ and $\Sigma_a \in \Real^{d_a \times d_a}$. Note that the transition density model is specified per-arm.
With known parameters, the transition distribution is Gaussian, i.e., $\theta_{t,a}\sim \N{\theta_{t,a}|L_a \theta_{t-1,a}, \Sigma_a} $.
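In this known-parameter case, propagating the particle set through Eqn.~\eqref{eq:linear_mixing_dynamics} is a single linear-Gaussian step per arm; the sketch below uses a hypothetical stable $2\times 2$ dynamics matrix and isotropic noise covariance, chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
M, d = 5_000, 2
L = np.array([[0.9, 0.0],
              [0.1, 0.8]])            # hypothetical known dynamics matrix
Sigma = 0.01 * np.eye(d)              # hypothetical known noise covariance
chol = np.linalg.cholesky(Sigma)

# Particles approximating the (single-arm) posterior p(theta_{t-1} | H).
theta = rng.normal(size=(M, d))

# One propagation step: theta_t = L theta_{t-1} + eps, eps ~ N(0, Sigma),
# applied to every particle at once.
theta_next = theta @ L.T + rng.normal(size=(M, d)) @ chol.T
```

Since $L$ here has spectral radius below one, the propagated cloud contracts, mirroring how a stable transition density concentrates the predictive posterior.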
For the more interesting case of unknown parameters, we marginalize the unknown parameters of the transition distributions above (i.e., we perform Rao-Blackwellization), in order to reduce the degeneracy and variance of the SMC estimates \cite{ip-Doucet2000,ib-Djuric2010}.
The marginalized transition density is a multivariate-t, i.e., $\theta_{t,a} \sim \T{\theta_{t,a}|\nu_{t,a}, m_{t,a}, R_{t,a}}$ with sufficient statistics as in Eqns.~\ref{eqn:dynamicone}-\ref{eqn:dynamictwo} below,\footnote{Details of the derivation are provided in
\ifx\undefined the supplementary material \else \autoref{assec:mab_linear_mixing_dynamics} \fi.} where each equation holds separately for each arm $a$ (the subscript has been suppressed for clarity of presentation, and subscript $0$ indicates assumed prior parameters):
\begin{equation}
\begin{cases}
\nu_{t}=\nu_{0}+t-d \; ,\\
m_{t}=L_{t-1} \theta_{t-1} \; , \\
R_{t} = \frac{V_{t-1}}{\nu_{t}\left(1-\theta_{t-1}^\top(U U^\top)^{-1}\theta_{t-1}\right)} \; ,\\
\end{cases}
\label{eqn:dynamicone}
\end{equation}
and
\begin{equation}
\begin{cases}
\Theta_{t_0:t_1}=[\theta_{t_0} \; \theta_{t_0+1} \cdots \theta_{t_1}] \in \Real^{d\times (t_1-t_0+1)} \; , \\
B_{t-1} = \left(\Theta_{0:t-2}\Theta_{0:t-2}^\top + B_0^{-1} \right)^{-1} \; ,\\
L_{t-1} = \left(\Theta_{1:t-1}\Theta_{0:t-2}^\top+L_0B_0^{-1}\right) B_{t-1} \; ,\\
V_{t-1}= \left(\Theta_{1:t-1}-L_{t-1} \Theta_{0:t-2}\right)\left(\Theta_{1:t-1}-L_{t-1} \Theta_{0:t-2}\right)^\top \\
\qquad \qquad + \left(L_{t-1}-L_0\right) B_0^{-1} \left(L_{t-1}-L_0\right)^\top + V_0 \; ,\\
U U^\top = \left(\theta_{t-1}\theta_{t-1}^\top+B_{t-1}^{-1}\right) \; .\\
\end{cases}
\label{eqn:dynamictwo}
\end{equation}
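For concreteness, a draw from the marginalized multivariate-t transition density can be obtained via its Gaussian scale-mixture representation. The sketch below is illustrative only: the statistics \texttt{nu\_t} and \texttt{R\_t}, the estimated mixing matrix \texttt{L\_est}, and the particle value are hypothetical placeholders, not quantities computed by Algorithm~\ref{alg:sir-mab}.

```python
import numpy as np

def sample_mvt(nu, m, R, rng):
    """Draw one sample from a multivariate Student-t T(nu, m, R) via its
    Gaussian scale-mixture form: theta = m + z * sqrt(nu/g),
    with z ~ N(0, R) and g ~ Chi2(nu)."""
    d = m.shape[0]
    g = rng.chisquare(nu)
    z = rng.multivariate_normal(np.zeros(d), R)
    return m + z * np.sqrt(nu / g)

rng = np.random.default_rng(0)
# Propagate one particle theta_{t-1} through the marginalized transition:
# location m_t = L_{t-1} theta_{t-1}, with (nu_t, R_t) as illustrative statistics.
L_est = np.array([[0.9, 0.1], [0.1, 0.9]])   # hypothetical posterior-mean mixing matrix
theta_prev = np.array([1.0, -0.5])           # previous particle (hypothetical)
nu_t, R_t = 10.0, 0.1 * np.eye(2)
theta_t = sample_mvt(nu_t, L_est @ theta_prev, R_t, rng)
```

With many such draws, the empirical mean converges to the location parameter $m_t$, matching the multivariate-t transition density.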
The predictive posterior of per-arm parameters is approximated with a mixture of the transition densities conditioned on previous samples, i.e.,
\begin{equation}
p_M(\theta_{t+1,a}|\mathcal{H}_{1:t}) = \sum_{m=1}^{M} w_{t,a}^{(m)} p(\theta_{t+1,a}|\theta_{t,a}^{(m)}) \; .
\end{equation}
These transition distributions are used when propagating per-arm parameters in Steps 5 and 7 of Algorithm~\ref{alg:sir-mab}.
The propagation of parameter samples in the SIR algorithm is fundamental for the accuracy of the sequential approximation to the posterior, and the performance of the SIR-based MAB policy as well.
\subsection{MAB reward models}
\label{ssec:mab_reward_models}
Algorithm~\ref{alg:sir-mab} is described in terms of a generic likelihood function $p_a(Y|x_t,\theta_{t,a})$, where the likelihood function must be computable up to a proportionality constant. We describe below some of the reward functions that are applicable in many MAB use-cases, where the time subscript $_t$ has been suppressed for clarity of presentation, and subscript $0$ indicates assumed prior parameters.
\subsubsection{Bernoulli rewards}
\label{sssec:bernoulli_rewards}
The Bernoulli distribution is well suited for applications with binary returns (i.e., success or failure of an action) that do not depend on a context. The rewards $y\in\{0,1\}$ of each arm are modeled as independent draws from a Bernoulli distribution with success probabilities $\theta_a$,
\begin{equation}
p_a(Y|\theta)=\Ber{Y|\theta_a}=\theta_a^{y}(1-\theta_a)^{(1-y)} \;,
\end{equation}
for which the parameter conjugate prior distribution is the Beta distribution
\begin{equation}
\begin{split}
p(\theta_a|\alpha_{0,a}, \beta_{0,a})&=\Beta{\theta_a|\alpha_{0,a}, \beta_{0,a}} =\frac{\Gamma\left(\alpha_0+\beta_0\right)}{\Gamma\left(\alpha_0\right)\Gamma\left(\beta_0\right)} \theta_a^{\alpha_0-1}(1-\theta_a)^{\beta_0-1} \; .
\end{split}
\end{equation}
After observing actions $a_{1:t}$ and rewards $y_{1:t}$, the parameter posterior follows an updated Beta distribution
\begin{equation}
\begin{split}
p(\theta_a|a_{1:t}, y_{1:t}, \alpha_{0,a}, \beta_{0,a}) &= p(\theta_a|\alpha_{t,a}, \beta_{t,a}) =\Beta{\theta_a|\alpha_{t,a}, \beta_{t,a}}\; ,
\end{split}
\end{equation}
with sequential updates
\begin{equation}
\begin{cases}
\alpha_{t,a}=\alpha_{t-1,a} + y_{t} \cdot \mathds{1}[a_t=a] \; ,\\
\beta_{t,a}=\beta_{t-1,a} + (1 - y_{t}) \cdot \mathds{1}[a_t=a] \; ,
\end{cases}
\end{equation}
or, alternatively, batch updates
\begin{equation}
\begin{cases}
\alpha_{t,a}=\alpha_{0,a} + \sum_{t|a_t=a} y_{t} \; ,\\
\beta_{t,a}=\beta_{0,a} + \sum_{t|a_t=a} (1-y_{t}) \; .
\end{cases}
\end{equation}
Since $\mu_{a}=\theta_a$ for Bernoulli rewards, the expected reward for each arm follows
\begin{equation}
p(\mu_{a}|a_{1:t},y_{1:t})=p(\theta_{a}|a_{1:t},y_{1:t})=\Beta{\theta_a|\alpha_{t,a}, \beta_{t,a}} \; ,
\end{equation}
and the quantile function is based on the Beta distribution
\begin{equation}
q_{t+1,a}(\alpha_{t+1})=Q\left(1-\alpha_{t+1}, \Beta{\theta_a|\alpha_{t,a}, \beta_{t,a}}\right) \;.
\end{equation}
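The Bernoulli case admits a fully analytical implementation of both policies. The sketch below, with illustrative prior hyperparameters and observation counts, draws a Thompson sample and a Bayes-UCB quantile per arm from the updated Beta posteriors.

```python
import numpy as np
from scipy.stats import beta

# Per-arm Beta posteriors: alpha_0 + successes, beta_0 + failures
# (illustrative counts; alpha_0 = beta_0 = 1).
alpha = np.array([1.0 + 30, 1.0 + 10])
beta_p = np.array([1.0 + 20, 1.0 + 40])

rng = np.random.default_rng(1)
# Thompson sampling: draw one expected-reward sample per arm, play the argmax.
ts_arm = int(np.argmax(rng.beta(alpha, beta_p)))
# Bayes-UCB: play the arm with the largest (1 - alpha_t) posterior quantile,
# here with alpha_t = 1/t.
t = 100
ucb_arm = int(np.argmax(beta.ppf(1.0 - 1.0 / t, alpha, beta_p)))
```

Arm 0, with the higher empirical success rate, is selected by Bayes-UCB deterministically, while the Thompson choice is a random function of the posterior draws.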
\subsubsection{Contextual linear-Gaussian rewards}
\label{sssec:linear_gaussian_rewards}
For bandits with continuous rewards, the Gaussian distribution is typically considered, where contextual dependencies can also be included. The contextual linear-Gaussian reward model is suited for these scenarios, where the expected reward of each arm is modeled as a linear combination of a $d$-dimensional context vector $x\in\Real^{d}$ and the idiosyncratic parameters of the arm $w_a\in\Real^{d}$, i.e.,
\begin{equation}
\begin{split}
p_a(Y|x,\theta)&=\N{Y|x^\top w_a, \sigma_a^2} =\frac{e^{-\frac{(y-x^\top w_a)^2}{2\sigma_a^2}}}{\sqrt{2\pi\sigma_a^2}} \; .
\end{split}
\end{equation}
We denote as $\theta\equiv\{w, \sigma\}$ the set of all the parameters.
For this reward distribution, the parameter conjugate prior distribution is the Normal Inverse Gamma distribution
\begin{equation}
\begin{split}
p(w_a, \sigma_a^2|u_{0,a}, V_{0,a}, \alpha_{0,a}, \beta_{0,a}) &= \NIG{w_a, \sigma_a^2|u_{0,a}, V_{0,a},\alpha_{0,a}, \beta_{0,a}} \\
& = \N{w_a|u_{0,a}, \sigma_a^2 V_{0,a}} \cdot \Gamma^{-1}{\sigma_a^2|\alpha_{0,a}, \beta_{0,a}} \\
& = \frac{e^{-\frac{1}{2}(w_a-u_{0,a})^\top(\sigma_a^2 V_{0,a})^{-1}(w_a-u_{0,a})}}{(2\pi)^{d/2}\sigma_a^{d} \mydet{V_{0,a}}^{1/2}} \cdot \frac{\beta_0^{\alpha_0}}{\Gamma\left(\alpha_0\right)} (\sigma_a^2)^{-\alpha_0-1}e^{-\frac{\beta_0}{\sigma_a^2}} \; .
\end{split}
\end{equation}
After observing actions $a_{1:t}$ and rewards $y_{1:t}$, the parameter posterior follows an updated NIG distribution
\begin{equation}
\begin{split}
p(w_a, \sigma_a^2|a_{1:t},y_{1:t},u_{0,a}, V_{0,a},\alpha_{0,a}, \beta_{0,a}) &= p\left(w_a, \sigma_a^2|u_{t,a}, V_{t,a},\alpha_{t,a}, \beta_{t,a}\right) \\
&=\NIG{w_a, \sigma_a^2|u_{t,a}, V_{t,a},\alpha_{t,a}, \beta_{t,a}} \; ,
\end{split}
\end{equation}
with sequential updates
\begin{equation}
\begin{cases}
V_{t,a}^{-1} = V_{t-1,a}^{-1} + x_t x_t^\top \cdot \mathds{1}[a_t=a] \; ,\\
u_{t,a}= V_{t,a} \left( V_{t-1,a}^{-1} u_{t-1,a} + x_t y_{t}\cdot \mathds{1}[a_t=a]\right) \; ,\\
\alpha_{t,a}=\alpha_{t-1,a} + \frac{\mathds{1}[a_t=a]}{2} \; ,\\
\beta_{t,a}=\beta_{t-1,a} + \frac{\mathds{1}[a_t=a](y_{t}-x_t^\top u_{t-1,a})^2}{2\left(1+x_t^\top V_{t-1,a} x_t\right)} \; ,
\end{cases}
\end{equation}
or, alternatively, batch updates
\begin{equation}
\begin{cases}
V_{t,a}^{-1}= V_{0,a}^{-1}+x_{{1:t}|t_a} x_{{1:t}|t_a}^\top \; ,\\
u_{t,a}=V_{t,a}\left(V_{0,a}^{-1}u_{0,a}+x_{{1:t}|t_a} y_{{1:t}|t_a}\right) \; ,\\
\alpha_{t,a}=\alpha_{0,a} + \frac{|t_a|}{2} \; ,\\
\beta_{t,a}=\beta_{0,a} + \frac{\left(y_{{1:t}|t_a}^\top y_{{1:t}|t_a} + u_{0,a}^\top V_{0,a}^{-1}u_{0,a} - u_{t,a}^\top V_{t,a}^{-1}u_{t,a} \right)}{2} \; ,
\end{cases}
\end{equation}
where $t_a=\{t|a_t=a\}$ indicates the set of time instances when arm $a$ is played.
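The sequential NIG recursions above can be sketched directly in code. The snippet below, with an illustrative prior ($u_0=0$, $V_0=I$, $\alpha_0=\beta_0=1$) and synthetic data, runs the one-step update for a single (played) arm; after many observations, the posterior mean $u_t$ recovers the true regression weights.

```python
import numpy as np

def nig_update(V_inv, u, a_par, b_par, x, y):
    """One sequential Normal-Inverse-Gamma posterior update for the played arm,
    following the recursions in the text (V_inv is V^{-1})."""
    V_inv_new = V_inv + np.outer(x, x)
    V_new = np.linalg.inv(V_inv_new)
    u_new = V_new @ (V_inv @ u + x * y)
    a_new = a_par + 0.5
    b_new = b_par + (y - x @ u) ** 2 / (2.0 * (1.0 + x @ np.linalg.inv(V_inv) @ x))
    return V_inv_new, u_new, a_new, b_new

# Illustrative prior and synthetic linear-Gaussian rewards.
V_inv, u, a_par, b_par = np.eye(2), np.zeros(2), 1.0, 1.0
rng = np.random.default_rng(2)
w_true = np.array([1.0, -2.0])     # hypothetical true per-arm weights
for _ in range(200):
    x = rng.normal(size=2)
    y = x @ w_true + 0.1 * rng.normal()
    V_inv, u, a_par, b_par = nig_update(V_inv, u, a_par, b_par, x, y)
```

After 200 updates, $\alpha_t = \alpha_0 + 200/2 = 101$ exactly, and $u_t$ lies close to the generating weights.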
The expected reward for each arm follows
\begin{equation}
p(\mu_{a}|x, \sigma_a^2, u_{t,a}, V_{t,a}) = \N{\mu_{a}|x^\top u_{t,a}, \; \sigma_a^2 \cdot x^\top V_{t,a} x} \; ,
\end{equation}
and the quantile function is based on this Gaussian distribution
\begin{equation}
q_{t+1,a}(\alpha_{t+1})=Q\left(1-\alpha_{t+1}, \N{\mu_{a}|x^\top u_{t,a}, \; \sigma_a^2 \cdot x^\top V_{t,a} x}\right) \;.
\end{equation}
For the more realistic scenario where the reward variance $\sigma^2_a$ is unknown, we can marginalize it and obtain
\begin{equation}
\begin{split}
&p(\mu_{a}|x, u_{t,a}, V_{t,a}, \alpha_{t,a}, \beta_{t,a}) = \T{\mu_{a}|2\alpha_{t,a}, x^\top u_{t,a}, \; \frac{\beta_{t,a}}{\alpha_{t,a}} \cdot x^\top V_{t,a} x} \\
& \qquad = \frac{\Gamma\left(\frac{2\alpha_{t,a}+1}{2}\right)}{\Gamma\left(\frac{2\alpha_{t,a}}{2}\right)\sqrt{\pi 2\alpha_{t,a} \frac{\beta_{t,a}}{\alpha_{t,a}} x^\top V_{t,a} x}} \cdot \left(1+\frac{1}{(2\alpha_{t,a})}\left(\frac{(\mu_a-x^\top u_{t,a})^2}{\frac{\beta_{t,a}}{\alpha_{t,a}} \cdot x^\top V_{t,a} x}\right)\right)^{-\frac{2\alpha_{t,a}+1}{2}} \; .
\end{split}
\end{equation}
The quantile function for this case is based on the Student's-t distribution
\begin{equation}
q_{t+1,a}(\alpha_{t+1})=Q\left(1-\alpha_{t+1}, \T{\mu_{a}| 2\alpha_{t,a}, x^\top u_{t,a}, \; \frac{\beta_{t,a}}{\alpha_{t,a}} \cdot x^\top V_{t,a} x}\right) \;.
\end{equation}
Note that one can use the above results for bandits with no context, by replacing $x=I$ and obtaining $\mu_{a}=u_{t,a}$.
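The Student's-t quantile of the marginalized expected reward is directly computable with standard statistical routines. The sketch below uses purely illustrative posterior statistics ($u_t$, $V_t$, $\alpha_t$, $\beta_t$) and a hypothetical context to compute the Bayes-UCB quantile $q_{t+1,a}(1/t)$.

```python
import numpy as np
from scipy.stats import t as student_t

# Illustrative posterior statistics for one arm, and a hypothetical context x.
x = np.array([1.0, 0.5])
u_t = np.array([0.8, -0.3])
V_t = 0.01 * np.eye(2)
alpha_t, beta_t = 50.0, 5.0

# Bayes-UCB quantile of the marginalized (Student-t) expected reward,
# with alpha_{t+1} = 1/t for t = 100.
loc = x @ u_t
scale = np.sqrt((beta_t / alpha_t) * (x @ V_t @ x))
q = student_t.ppf(1.0 - 1.0 / 100, df=2 * alpha_t, loc=loc, scale=scale)
```

The quantile exceeds the posterior-mean reward $x^\top u_t$, encoding the optimism of the UCB criterion.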
\subsubsection{Contextual categorical bandits}
\label{sssec:categorical_softmax_rewards}
For MAB problems where returns are categorical, and contextual information is available, the softmax function is a natural reward density modeling choice. Given a $d$-dimensional context vector $x\in\Real^{d}$, and per-arm parameters $\theta_a=\{\theta_{a,1}, \cdots, \theta_{a,C}\}$ for each category $c\in\{1,\cdots,C\}$, the contextual softmax reward model is
\begin{equation}
p_a(Y=c|x,\theta_a)=\frac{e^{(x^\top\theta_{a,c})}}{\sum_{c'=1}^C e^{(x^\top\theta_{a,c'})} } \; .
\label{eq:softmax_rewards}
\end{equation}
We note that categorical variables, in general, assign probabilities to an unordered set of outcomes (not necessarily numeric). However, in this work, we refer to categorical rewards where, for each categorical outcome $c\in\Natural$, there is a numeric reward $y=c$ associated with it. For this reward distribution, neither the posterior of the parameters nor the quantile function of the expected rewards $\mu_{t,a}=\sum_{c=1}^{C} c \cdot p_a(Y=c|x_t,\theta_{t,a})$ can be computed in closed form.
Note that, when returns are binary (i.e., success or failure of an action), but dependent on a $d$-dimensional context vector $x\in\Real^{d}$, the softmax function reduces to the logistic reward model:
\begin{equation}
p_a(Y|x,\theta)=\frac{e^{y\cdot(x^\top\theta_a) }}{1+e^{(x^\top\theta_a)}} \; .
\end{equation}
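Since no closed-form posterior exists for these models, the softmax likelihood is only required point-wise to weight SIR particles. The sketch below, with randomly initialized hypothetical particles, evaluates the softmax likelihood of an observed categorical reward and normalizes the resulting importance weights.

```python
import numpy as np

def softmax_lik(y, x, theta):
    """Likelihood of categorical reward y under the softmax model;
    theta holds one parameter column per category."""
    logits = x @ theta                     # shape (C,)
    p = np.exp(logits - logits.max())      # numerically stable softmax
    p /= p.sum()
    return p[y]

# SIR weighting: each particle's weight is its likelihood of the observed reward.
rng = np.random.default_rng(3)
x = np.array([1.0, -0.5])                  # hypothetical context
particles = rng.normal(size=(100, 2, 3))   # M=100 particles, d=2, C=3 categories
w = np.array([softmax_lik(1, x, th) for th in particles])
w /= w.sum()                               # normalized importance weights
```

These normalized weights are exactly what Algorithm~\ref{alg:sir-mab} needs for resampling and for forming the random-measure posterior approximation.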
\subsection{Discussion on SMC-based MAB policies}
\label{ssec:sir-discussion}
Algorithm~\ref{alg:sir-mab} presents SIR for the general MAB setting, where the likelihood function must be computable up to a proportionality constant (as in examples in Section~\ref{ssec:mab_reward_models}), and one needs to draw samples from the transition density. Specifically, in this paper, we draw samples from the transition densities $p(\theta_{t,a}|\theta_{t-1,a})$ presented in Section~\ref{ssec:linear_mixing_dynamics}.
The transition density is used to sequentially propagate parameter posteriors, and estimate its sufficient statistics:
\begin{itemize}
\item \textbf{In Step 5 of Algorithm~\ref{alg:sir-mab}}, where we estimate the predictive posterior of per-arm parameters, as a mixture of the transition densities conditioned on previous samples
\begin{equation}
p_M(\theta_{t+1,a}|\mathcal{H}_{1:t}) = \sum_{m=1}^{M} w_{t,a}^{(m)} p(\theta_{t+1,a}|\theta_{t,a}^{(m)}) \; , \quad \forall a\in \mathcal{A} \; .
\end{equation}
\item \textbf{In Step 7 of Algorithm~\ref{alg:sir-mab}}, where we propagate the sequential random measure by drawing new samples from the transition density conditioned on previous \textit{resampled} particles
\begin{equation}
\theta_{t+1,a}^{(m)} \sim p(\theta_{t+1,a}|\overline{\theta}_{t,a}^{(m)}) \; , \quad m=1,\cdots, M, \; \forall a\in \mathcal{A} \; .
\end{equation}
\end{itemize}
In both cases, one draws with replacement according to the importance weights, i.e., from a categorical distribution with per-sample probabilities $w_{t,a}^{(m)}$: $m_{t,a}^\prime \sim \Cat{w_{t,a}^{(m)}}$.
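The resampling step described above reduces, in its simplest multinomial form, to drawing particle indices from a categorical distribution over the importance weights. A minimal sketch, with deliberately degenerate illustrative weights:

```python
import numpy as np

def resample(particles, w, rng):
    """Multinomial resampling: draw particle indices with replacement
    according to the normalized importance weights."""
    idx = rng.choice(len(w), size=len(w), p=w)
    return particles[idx]

rng = np.random.default_rng(4)
particles = np.arange(5, dtype=float)
w = np.array([0.0, 0.0, 1.0, 0.0, 0.0])   # degenerate weights, for illustration
resampled = resample(particles, w, rng)    # every survivor is particle 2
```

With all mass on one particle, every resampled particle equals that value; subsequent propagation through the transition density restores sample diversity.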
Careful propagation of parameter samples is fundamental for the accuracy of the sequential approximation to the posterior, and for the performance of the proposed SMC-based MAB policy. The time-varying uncertainty of the parameter posterior encourages exploration of arms that have not been played recently, but may have evolved into new parameter spaces with exploitable reward distributions. Driven by the assumed dynamics, the posteriors of unobserved arms broaden over time, converging to the stationary distribution of the dynamic model; MAB policies are therefore more likely to explore such arms, reduce their posterior uncertainty, and in turn, update the exploration-exploitation balance.
We emphasize that the proposed SMC-based MAB method approximates each per-arm parameter posterior separately. Therefore, the dimensionality of the estimation problem depends on the size of per-arm parameters, and not on the number of bandit arms. As we approximate the per-arm posterior density at each time instant --- we compute the filtering density $p(\theta_{t,a}|\mathcal{H}_{1:t})$, for which there are strong theoretical SMC convergence guarantees \citep{j-Crisan2002,j-Chopin2004} --- there will be no particle degeneracy due to an increased number of arms. We reiterate that the resampling and propagation steps in Algorithm~\ref{alg:sir-mab} are necessary to attain accurate and non-degenerate sequential approximations to the true posterior.
Caution must be exercised when modeling the underlying dynamics of a time-varying model. On the one hand, certain parameter constraints might be necessary for the model to be wide-sense stationary~\cite{b-Box1976, b-Shumway2010, b-Durbin2012}. On the other, the impact of non-Markovian transition distributions in SMC performance must be taken into consideration --- note that the sufficient statistics in Eqns.~\ref{eqn:dynamicone}-\ref{eqn:dynamictwo} depend on the full history of the model dynamics.
Here, we have proposed to leverage the general linear model, for which it can be readily shown that (if stationarity conditions are met), the autocovariance function decays quickly, i.e., the dependence of general linear AR models on past samples decays exponentially~\cite{j-Urteaga2016,j-Urteaga2016a}. When exponential forgetting holds in the latent space --- i.e., the dependence on past samples decays exponentially, and is negligible after a certain lag --- one can establish uniform-in-time convergence of SMC methods for functions that depend only on recent states, see~\cite{j-Kantas2015} and references therein. In fact, one can establish uniform-in-time convergence results for path functionals that depend only on recent states, as the Monte-Carlo error of $p_M(\theta_{t-\tau:t}|\mathcal{H}_{1:t})$ with respect to $p(\theta_{t-\tau:t}|\mathcal{H}_{1:t})$ is uniformly bounded over time. This property of quick forgetting is the key justification for the successful performance of SMC methods for linear dynamical latent states, see~\cite{j-Urteaga2017b,j-Urteaga2016,j-Urteaga2016a}. Nevertheless, we acknowledge that any improved solution that mitigates the path-degeneracy issue can only be beneficial for our proposed method.
Note that our proposed SIR-based Thompson algorithm is similar to Thompson sampling, the difference being that a draw from the posterior distribution is replaced by a draw from the approximating SMC random measure. \citet{ip-Gopalan2014} have shown that a logarithmic regret bound holds for Thompson sampling in complex problems, for bandits with discretely-supported priors over the parameter space without additional structural properties, such as closed-form posteriors, conjugate prior structure or independence across arms. Posterior sampling in the original Thompson sampling algorithm can be replaced by procedures which provably converge to the true posterior --- e.g., the Bootstrapped Thompson sampling in~\cite{j-Osband2015} or SMC here.
One can show that the SMC posterior converges to the true posterior --- the interested reader can consult~\cite{b-Liu2001,j-Crisan2002,j-Chopin2004} and references therein. Here, we hypothesize that an SMC-based algorithm can indeed achieve bounded MAB regret, as long as the time-varying dynamics of the bandit result in a controlled number of optimal arm changes: i.e., the regret will be linear if optimal arms change rapidly, yet SMC will provide accurate enough posteriors when the optimal arm changes in a controlled manner. Ongoing work consists of a more thorough understanding of this tradeoff for both Thompson sampling and UCB-based policies, as well as providing a theoretical analysis of the dependency between optimal arm changes, posterior convergence, and regret of the proposed SMC-based framework.
\section{Evaluation}
\label{sec:evaluation}
We now empirically evaluate the proposed SIR-based MAB framework in complex bandit scenarios. Note that we have first validated the performance of Algorithm~\ref{alg:sir-mab} for Thompson sampling and Bayes-UCB in their original formulations.
Results provided in
\ifx\undefined the supplementary material \else \autoref{assec:evaluation_static_bandits} \fi
validate the proposed SIR-based method for static bandits with Bernoulli and contextual linear-Gaussian reward functions, where SIR-based algorithms perform similarly to the benchmark policies with analytical posteriors as in~\cite{ip-Kaufmann2012,ip-Garivier2011a,ic-Korda2013,j-Agrawal2012}.
Performance is satisfactory across a wide range of parameterizations and bandit sizes, as well as for new static bandit scenarios where Bayesian closed-form posterior updates are not available: i.e., see results for context-dependent binary rewards modeled by a logistic reward function~\cite{ic-Chapelle2011,j-Scott2015} in
\ifx\undefined the supplementary material \else \autoref{asssec:static_bandits_logistic_2}-\autoref{asssec:static_bandits_logistic_5} \fi.
Our proposed SIR-based Thompson sampling and Bayes-UCB approaches are readily applicable to logistic models, and they achieve the right exploration-exploitation tradeoff.
Full potential of the proposed SIR-based algorithm is harnessed when facing the most interesting and challenging bandits: those with time-evolving parameters. We now show the flexibility of the proposed method in these dynamic scenarios.
In all cases, the key performance metric is the cumulative regret defined in Eqn.~\eqref{eq:mab_cumulative_regret}, all results are averaged over 1000 realizations, and SIR-based methods are implemented with $M=1000$ samples. Figures shown here are illustrative selected examples, although drawn conclusions are based on extensive experiments with different number of bandit arms and parameterizations.
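For reference, the cumulative regret metric can be computed as the running sum of per-step gaps between the optimal and the played arm's expected reward. The sketch below uses toy arrays (hypothetical expected rewards and arm choices), assuming per-time expected rewards are available to the evaluator.

```python
import numpy as np

# Toy per-time expected rewards (rows: time, columns: arms) and played arms.
mu = np.array([[0.5, 0.9],
               [0.8, 0.2],
               [0.3, 0.4]])
played = np.array([0, 0, 1])

# Per-step regret: optimal expected reward minus played arm's expected reward.
regret = mu.max(axis=1) - mu[np.arange(len(played)), played]
cum_regret = regret.cumsum()
```

Here the policy plays suboptimally only at $t=0$, so the cumulative regret plateaus thereafter, which is the qualitative behavior sought in the experiments below.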
We first consider the contextual linear dynamic model (as formulated in Section~\ref{ssec:linear_mixing_dynamics}), because it allows us to ($i$) validate the SIR-based approximation to the optimal posterior (i.e., the Kalman filter for the linear and Gaussian case); and ($ii$) show its flexibility and robustness to more realistic and challenging MAB models (with unknown parameters, nonlinear functions, and non-Gaussian distributions).
We have evaluated different parameterizations of the model as in Eqn.~\eqref{eq:linear_mixing_dynamics}, which are described below for a two-armed contextual dynamic bandit, with the time-evolution of a realization of the expected rewards illustrated in Figure~\ref{fig:linear_mixing_dynamics}:
\begin{equation}
\text{Scenario A} \hspace*{-10ex}
\begin{split}
\begin{pmatrix}
\theta_{0,0,t}\\
\theta_{0,1,t}\\
\end{pmatrix} &= \begin{pmatrix}
0.9 & -0.1 \\
-0.1 & 0.9 \\
\end{pmatrix} \begin{pmatrix}
\theta_{0,0,t-1}\\
\theta_{0,1,t-1}\\
\end{pmatrix} + \epsilon_0 \; , \hspace*{1ex} \epsilon_0 \sim \N{\epsilon|0,0.1 \cdot\mathrm{I}} \;,\\
\begin{pmatrix}
\theta_{1,0,t}\\
\theta_{1,1,t}\\
\end{pmatrix} &= \begin{pmatrix}
0.9 & 0.1 \\
0.1 & 0.9 \\
\end{pmatrix} \begin{pmatrix}
\theta_{1,0,t-1}\\
\theta_{1,1,t-1}\\
\end{pmatrix} + \epsilon_1 \; , \hspace*{5ex} \epsilon_1 \sim \N{\epsilon|0,0.1 \cdot\mathrm{I}} \;.
\end{split}
\label{eq:linear_mixing_dynamics_a}
\end{equation}
\begin{equation}
\text{Scenario B} \hspace*{-10ex}
\begin{split}
\begin{pmatrix}
\theta_{0,0,t}\\
\theta_{0,1,t}\\
\end{pmatrix} &= \begin{pmatrix}
0.5 & 0.0 \\
0.0 & 0.5 \\
\end{pmatrix} \begin{pmatrix}
\theta_{0,0,t-1}\\
\theta_{0,1,t-1}\\
\end{pmatrix} + \epsilon_0 \; , \hspace*{1ex} \epsilon_0 \sim \N{\epsilon|0,0.1 \cdot\mathrm{I}} \;,\\
\begin{pmatrix}
\theta_{1,0,t}\\
\theta_{1,1,t}\\
\end{pmatrix} &= \begin{pmatrix}
0.9 & 0.1 \\
0.1 & 0.9 \\
\end{pmatrix} \begin{pmatrix}
\theta_{1,0,t-1}\\
\theta_{1,1,t-1}\\
\end{pmatrix} + \epsilon_1 \; , \hspace*{1ex} \epsilon_1 \sim \N{\epsilon|0,0.1 \cdot\mathrm{I}} \;.
\end{split}
\label{eq:linear_mixing_dynamics_b}
\end{equation}
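Realizations of these parameter trajectories are straightforward to simulate. The sketch below generates one realization of Scenario A's per-arm dynamics (mixing matrices and noise variance as in Eqn.~\eqref{eq:linear_mixing_dynamics_a}; the initial parameter values are an arbitrary illustrative choice).

```python
import numpy as np

# Per-arm mixing matrices of Scenario A.
rng = np.random.default_rng(5)
L = {0: np.array([[0.9, -0.1], [-0.1, 0.9]]),
     1: np.array([[0.9, 0.1], [0.1, 0.9]])}

# Simulate T steps of theta_{t,a} = L_a theta_{t-1,a} + eps_a,
# eps_a ~ N(0, 0.1 I), from an illustrative initial value.
T = 1000
theta = {a: [np.ones(2)] for a in (0, 1)}
for t in range(1, T):
    for a in (0, 1):
        eps = rng.multivariate_normal(np.zeros(2), 0.1 * np.eye(2))
        theta[a].append(L[a] @ theta[a][-1] + eps)
theta = {a: np.stack(v) for a, v in theta.items()}
```

Different noise realizations and initializations produce the diverse expected-reward trajectories (and optimal-arm swaps) illustrated in Figure~\ref{fig:linear_mixing_dynamics}.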
\begin{figure}[!h]
\centering
\begin{subfigure}[b]{0.47\textwidth}
\includegraphics[width=0.95\textwidth]{./figs/dynamic/dynamics_a}
\caption{Expected per-arm rewards over time for a contextual linear dynamic bandit with dynamics as in Eqn.~\eqref{eq:linear_mixing_dynamics_a}. Note how the optimal arm switches around $t=\{50, 500, 900\}$, and how arm 0 becomes the best for $t\geq1000$.}
\label{fig:linear_mixing_dynamics_a}
\end{subfigure}\qquad
\begin{subfigure}[b]{0.47\textwidth}
\includegraphics[width=0.95\textwidth]{./figs/dynamic/dynamics_b}
\caption{Expected per-arm rewards over time for a contextual linear dynamic bandit with dynamics as in Eqn.~\eqref{eq:linear_mixing_dynamics_b}. Note the optimal arm switches before $t=400$, and how arm 1 becomes the best for $t\geq800$.}
\label{fig:linear_mixing_dynamics_b}
\end{subfigure}
\caption{Expected per-arm rewards over time for different contextual linear dynamic bandits with dynamics as in Eqn.~\eqref{eq:linear_mixing_dynamics}.}
\label{fig:linear_mixing_dynamics}
\end{figure}
We consider this setting of special interest because the induced expected rewards change over time and so, the decision on the optimal arm swaps accordingly. We evaluate the proposed SIR-based methods for bandits with dynamics as in Eqns.~\eqref{eq:linear_mixing_dynamics_a}-\eqref{eq:linear_mixing_dynamics_b}, and both contextual linear-Gaussian and logistic rewards.
We show in Fig.~\ref{fig:dynamic_bandits_linearGaussian_dknown} that the regret of SIR-based methods, for the contextual linear-Gaussian case with known parameters, is equivalent to the optimal case (i.e., the Kalman filter). Furthermore, even for cases when the reward variance $\sigma^2$ is unknown, and thus the Gaussianity assumption needs to be dropped (instead modeling bandit rewards via Student-t distributions), SIR-based methods perform comparably well.
\begin{figure}[!h]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{./figs/dynamic/dynamics_a}
\caption{Expected per-arm rewards over time for Scenario A in Eqn.~\eqref{eq:linear_mixing_dynamics_a}.}
\label{fig:linear_mixing_dynamics_a_gaussian}
\end{subfigure}\qquad
\begin{subfigure}[b]{0.45\textwidth}
\hspace*{2ex}
\includegraphics[width=\textwidth]{./figs/dynamic/dynamics_b}
\caption{Expected per-arm rewards over time for Scenario B in Eqn.~\eqref{eq:linear_mixing_dynamics_b}.}
\label{fig:linear_mixing_dynamics_b_gaussian}
\end{subfigure}
\begin{subfigure}[b]{0.47\textwidth}
\includegraphics[width=\textwidth]{./figs/dynamic/linearGaussian/a_cumulative_regret_all_dknown_cstatic}
\caption{Scenario A with known dynamic parameters.}
\label{fig:dynamic_bandits_linearGaussian_a_cstatic_dknown}
\end{subfigure}\qquad
\begin{subfigure}[b]{0.47\textwidth}
\includegraphics[width=\textwidth]{./figs/dynamic/linearGaussian/b_cumulative_regret_all_dknown_noise_mixing_cstatic_sigma1}
\caption{Scenario B with known dynamic parameters.}
\label{fig:dynamic_bandits_linearGaussian_b_cstatic_dknown}
\end{subfigure}
\begin{subfigure}[b]{0.47\textwidth}
\includegraphics[width=\textwidth]{./figs/dynamic/linearGaussian/a_cumulative_regret_all_dunknown_cstatic}
\caption{Scenario A with unknown dynamic parameters.}
\label{fig:dynamic_bandits_linearGaussian_a_cstatic_dunknown}
\end{subfigure}\qquad
\begin{subfigure}[b]{0.47\textwidth}
\includegraphics[width=\textwidth]{./figs/dynamic/linearGaussian/b_cumulative_regret_all_dunknown_noise_mixing_cstatic_sigma1}
\caption{Scenario B with unknown dynamic parameters.}
\label{fig:dynamic_bandits_linearGaussian_b_cstatic_dunknown}
\end{subfigure}
\vspace*{-2ex}
\caption{Mean regret (standard deviation shown as shaded region) in contextual linear-Gaussian dynamic bandits, with known and unknown dynamic parameters. Notice the regret bumps when optimal arms swap, and how our proposed SIR-based methods adjust.}
\label{fig:dynamic_bandits_linearGaussian_dknown}
\end{figure}
We further evaluate in Figs.~\ref{fig:dynamic_bandits_linearGaussian_a_cstatic_dunknown} and ~\ref{fig:dynamic_bandits_linearGaussian_b_cstatic_dunknown} the most challenging contextual linear-Gaussian bandit case, where none of the parameters of the model ($L_a,\Sigma_a$) are known; i.e., one must sequentially learn the underlying dynamics, in order to make informed online decisions.
\clearpage
\begin{figure}[!h]
\begin{subfigure}[b]{0.45\textwidth}
\hspace*{2ex}
\includegraphics[width=\textwidth]{./figs/dynamic/dynamics_a}
\caption{Expected per-arm rewards over time for Scenario A in Eqn.~\eqref{eq:linear_mixing_dynamics_a}.}
\label{fig:linear_mixing_dynamics_a_logistic}
\end{subfigure}\qquad
\begin{subfigure}[b]{0.45\textwidth}
\hspace*{4ex}
\includegraphics[width=\textwidth]{./figs/dynamic/dynamics_b}
\caption{Expected per-arm rewards over time for Scenario B in Eqn.~\eqref{eq:linear_mixing_dynamics_b}.}
\label{fig:linear_mixing_dynamics_b_logistic}
\end{subfigure}
\begin{subfigure}[b]{0.47\textwidth}
\includegraphics[width=\textwidth]{./figs/dynamic/logistic/a_cumulative_regret_all_cstatic}
\caption{Scenario A with known and unknown dynamic parameters.}
\label{fig:dynamic_bandits_a_logistic_cstatic}
\end{subfigure}\qquad
\begin{subfigure}[b]{0.47\textwidth}
\includegraphics[width=\textwidth]{./figs/dynamic/logistic/b_cumulative_regret_all_noise_mixing_cstatic}
\caption{Scenario B with known and unknown dynamic parameters.}
\label{fig:dynamic_bandits_b_logistic_cstatic}
\end{subfigure}
\vspace*{-1ex}
\caption{Mean regret (standard deviation shown as shaded region) in contextual linear logistic dynamic bandits. Notice the regret bumps when optimal arms swap, and how our proposed SIR-based methods adjust.}
\label{fig:dynamic_bandits_logistic}
\end{figure}
Even if there is a regret performance loss due to the need to learn the unknown dynamic parameters, SIR-based Thompson sampling and Bayes-UCB are both able to learn their evolution and thus reach the exploitation-exploration balance. We observe noticeable increases in regret when the dynamics of the parameters swap the optimal arm. This effect is also observed for dynamic bandits with non-Gaussian rewards. We evaluate our proposed framework with logistic reward functions, under both static and random contexts: see regret performance in Fig.~\ref{fig:dynamic_bandits_logistic}.
\clearpage
We now evaluate the proposed SIR-based framework with categorical multi-armed contextual dynamic bandits, which to the best of our knowledge, have not been studied before. We again leverage the time-varying dynamics of Eqn.~\eqref{eq:linear_mixing_dynamics}, evaluated for different realizations of a diverse set of parameters.
We focus on two- and three-armed bandits, with numerical rewards $c=\{0,1,2\}$ dependent on a two-dimensional context $x_t\in\Real^2$ with time-varying parameters $\theta_{a,c,t}$ as below:
\begin{equation}
\text{Two-armed bandit} \hspace*{-20ex}
\begin{split}
\mySmallMatrix{
\theta_{0,c,0,t}\\
\theta_{0,c,1,t}\\
} &=\mySmallMatrix{
0.9 & -0.1 \\
-0.1 & 0.9 \\
}\mySmallMatrix{
\theta_{0,c,0,t-1}\\
\theta_{0,c,1,t-1}\\
} + \epsilon_{0,c}, \; \forall c \in \{0,1,2\} \; ,\\
\mySmallMatrix{
\theta_{1,c,0,t}\\
\theta_{1,c,1,t}\\
} &= \mySmallMatrix{
0.9 & 0.1 \\
0.1 & 0.9 \\
}\mySmallMatrix{
\theta_{1,c,0,t-1}\\
\theta_{1,c,1,t-1}\\
} + \epsilon_{1,c}, \; \forall c \in \{0,1,2\} \; , \\
\text{ with i.i.d. } & \epsilon_{a,c} \sim \N{\epsilon|0,0.1 \cdot\mathrm{I}} \; \forall a \in \{0,1\} \; , \forall c \in \{0,1,2\} \; .
\end{split}
\label{eq:dynamic_two_armed_bandit_dynamics}
\hspace*{-10ex}
\end{equation}
\begin{equation}
\text{Three-armed bandit} \hspace*{-20ex}
\begin{split}
\mySmallMatrix{
\theta_{0,c,0,t}\\
\theta_{0,c,1,t}\\
} &=\mySmallMatrix{
0.9 & -0.1 \\
-0.1 & 0.9 \\
}\mySmallMatrix{
\theta_{0,c,0,t-1}\\
\theta_{0,c,1,t-1}\\
} + \epsilon_{0,c}, \; \forall c \in \{0,1,2\} \; ,\\
\mySmallMatrix{
\theta_{1,c,0,t}\\
\theta_{1,c,1,t}\\
} &= \mySmallMatrix{
0.9 & 0.1 \\
0.1 & 0.9 \\
}\mySmallMatrix{
\theta_{1,c,0,t-1}\\
\theta_{1,c,1,t-1}\\
} + \epsilon_{1,c},\; \forall c \in \{0,1,2\} \; ,\\
\mySmallMatrix{
\theta_{2,c,0,t}\\
\theta_{2,c,1,t}\\
} &= \mySmallMatrix{
0.9 & 0.1 \\
0.1 & 0.9 \\
}\mySmallMatrix{
\theta_{2,c,0,t-1}\\
\theta_{2,c,1,t-1}\\
} + \epsilon_{2,c}, \; \forall c \in \{0,1,2\} \; , \\
\text{ with i.i.d. } & \epsilon_{a,c} \sim \N{\epsilon|0,0.1 \cdot\mathrm{I}} \; \forall a,c \in \{0,1,2\} \; .
\end{split}
\label{eq:dynamic_three_armed_bandit_dynamics}
\hspace*{-10ex}
\end{equation}
The parameterizations above accommodate a diverse set of expected reward dynamics, depending on parameter initializations, and realizations of the noise process. We illustrate the per-arm expected reward time-evolution for the studied two-armed bandit scenarios in Figures~\ref{fig:expected_reward_A2_C3_tmax2000_d} and \ref{fig:expected_reward_A2_C3_tmax2000_b}, and for the three-armed bandits, in Figures~\ref{fig:expected_reward_A3_C3_tmax2000_a} and \ref{fig:expected_reward_A3_C3_tmax2000_d}. We note that in all these settings, the expected rewards of each arm change over time, resulting in transient and recurrent swaps of the optimal arm.
We show the cumulative regret over time of Algorithm~\ref{alg:sir-mab} in Figures~\ref{fig:cumulative_regret_A2_C3_tmax2000_d}, \ref{fig:cumulative_regret_A2_C3_tmax2000_b}, \ref{fig:cumulative_regret_A3_C3_tmax2000_a} and \ref{fig:cumulative_regret_A3_C3_tmax2000_d}, and observe that SMC-based Thompson sampling and Bayes-UCB are both able to reach the exploitation-exploration balance (the cumulative regret plateaus after optimal arm changes).
We observe increases in cumulative regret when the parameter dynamics swap the optimal arm --- around $t\approx1000$ in Fig~\ref{fig:expected_reward_A2_C3_tmax2000_d}, $t\approx250$ and $t\approx1600$ in Fig.~\ref{fig:expected_reward_A2_C3_tmax2000_b}, $t\approx250$ in Fig.~\ref{fig:expected_reward_A3_C3_tmax2000_a}, $t\approx250$ and $t\approx1250$ in Fig.~\ref{fig:expected_reward_A3_C3_tmax2000_d} --- and we observe how the SIR-based algorithms, via their updated dynamics, are able to readjust into a new exploitation-exploration balance.
We also observe that, when expected reward changes occur later in time (e.g., $t\approx1750$ in Fig.~\ref{fig:expected_reward_A2_C3_tmax2000_b} and $t\approx1250$ in Fig.~\ref{fig:expected_reward_A3_C3_tmax2000_d}), the impact on Bayes-UCB seems to be more pronounced: the mean increases drastically, as well as the variance (after $t\approx1600$ in Fig.~\ref{fig:cumulative_regret_A2_C3_tmax2000_b}, and $t\approx1250$ in Fig.~\ref{fig:cumulative_regret_A3_C3_tmax2000_d}).
In the most interesting and challenging settings --- those with time-evolving unknown parameters --- both algorithms incur an increased regret. Since the algorithm must sequentially learn the unknown model parameters $\{L_{a,c},\Sigma_{a,c}\}$ of the transition density that SMC uses, making informed decisions becomes harder, hence the regret loss.
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.47\textwidth}
\includegraphics[width=\textwidth]{./figs/dynamic/categorical/A2_C3_tmax2000_d}
\caption{Two-armed bandit expected reward time-evolution: notice the optimal arm change around $t\approx1000$.}
\label{fig:expected_reward_A2_C3_tmax2000_d}%
\end{subfigure}\qquad
\begin{subfigure}[b]{0.47\textwidth}
\includegraphics[width=\textwidth]{./figs/dynamic/categorical/A2_C3_tmax2000_b}
\caption{Two-armed bandit expected reward time-evolution: notice the late optimal arm change around $t\approx1600$.}
\label{fig:expected_reward_A2_C3_tmax2000_b}
\end{subfigure} %
\begin{subfigure}[b]{0.47\textwidth}
\includegraphics[width=\textwidth]{./figs/dynamic/categorical/cumulative_regret_A2_C3_tmax2000_d}
\caption{Cumulative regret over time for a two-armed bandit with expected reward dynamics as above (Fig.~\ref{fig:expected_reward_A2_C3_tmax2000_d}).}
\label{fig:cumulative_regret_A2_C3_tmax2000_d}%
\end{subfigure}\qquad
\begin{subfigure}[b]{0.47\textwidth}
\includegraphics[width=\textwidth]{./figs/dynamic/categorical/cumulative_regret_A2_C3_tmax2000_b}
\caption{Cumulative regret over time for a two-armed bandit with expected reward dynamics as above (Fig.~\ref{fig:expected_reward_A2_C3_tmax2000_b}).}
\label{fig:cumulative_regret_A2_C3_tmax2000_b}%
\end{subfigure}
\vspace*{-1ex}
\caption{Expected reward time-evolution, and mean regret (standard deviation shown as shaded region) of the proposed methods, for a two-armed contextual three-categorical dynamic bandit as in Eqn.~\ref{eq:dynamic_two_armed_bandit_dynamics}.}
\label{fig:dynamic_bandits_categorical_A2_C3}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.47\textwidth}
\includegraphics[width=\textwidth]{./figs/dynamic/categorical/A3_C3_tmax2000_a}
\caption{Three-armed bandit expected reward time-evolution: notice the optimal arm change around $t\approx250$.}
\label{fig:expected_reward_A3_C3_tmax2000_a}%
\end{subfigure} \qquad
\begin{subfigure}[b]{0.47\textwidth}
\includegraphics[width=\textwidth]{./figs/dynamic/categorical/A3_C3_tmax2000_d}
\caption{Three-armed bandit expected reward time-evolution: notice the optimal arm changes around $t\approx250$ and $t\approx1250$.}
\label{fig:expected_reward_A3_C3_tmax2000_d}%
\end{subfigure}
\begin{subfigure}[b]{0.47\textwidth}
\includegraphics[width=\textwidth]{./figs/dynamic/categorical/cumulative_regret_A3_C3_tmax2000_a}
\caption{Cumulative regret over time for a three-armed bandit with expected reward dynamics as above (Fig.~\ref{fig:expected_reward_A3_C3_tmax2000_a}).}
\label{fig:cumulative_regret_A3_C3_tmax2000_a}%
\end{subfigure}\qquad
\begin{subfigure}[b]{0.47\textwidth}
\includegraphics[width=\textwidth]{./figs/dynamic/categorical/cumulative_regret_A3_C3_tmax2000_d}
\caption{Cumulative regret over time for a three-armed bandit with expected reward dynamics as above (Fig.~\ref{fig:expected_reward_A3_C3_tmax2000_d}).}
\label{fig:cumulative_regret_A3_C3_tmax2000_d}%
\end{subfigure}
\vspace*{-1ex}
\caption{Expected reward time-evolution, and mean regret (standard deviation shown as shaded region) of the proposed methods, for a three-armed contextual three-categorical dynamic bandit as in Eqn.~\ref{eq:dynamic_three_armed_bandit_dynamics}.}
\label{fig:dynamic_bandits_categorical_A3_C3}
\end{figure*}
\clearpage
Overall, the random measure approximation to the posteriors of the parameters of interest is accurate enough, allowing both studied MAB policies to dynamically find and adjust to the right exploration-exploitation tradeoff in all the studied dynamic bandits (Gaussian, logistic and categorical).
The proposed SMC-based framework estimates not only the evolving parameters $\theta_t$, but also their corresponding uncertainty. Whether because an arm's dynamics are uncertain or because the arm has not been sampled for a while, the uncertainty of its estimated SMC posterior grows; as a result, a Bayesian policy is more likely to explore that arm again.
We observe a slightly deteriorating behavior over time for Bayes-UCB in all studied cases, which we argue is due to the shrinking quantile value $\alpha_t\propto1/t$, originally proposed by \citet{ip-Kaufmann2012}. Confidence bounds of static reward models tend to shrink with more observations of the bandit; with evolving parameters, however, this assumption no longer holds.
The uncertainty of the evolving parameter posteriors (driven by the underlying dynamics of each arm) may result in broader distributions. Hence the inadequacy of a shrinking $\alpha_t$: it cannot capture the evolving uncertainty of the parameter posteriors in the long run.
More generally, the need to determine appropriate quantile values $\alpha_t$ for each reward and dynamic model is a drawback for Bayes-UCB, as its optimal value will depend on the specific combination of underlying dynamics and reward function.
On the contrary, Thompson sampling relies on samples from the posterior, which SMC methods are able to approximate accurately enough in all studied cases for it to operate successfully without any parameter tweaking.
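As a concrete illustration of the policy just described, the following is a minimal, self-contained sketch (not the paper's implementation) of SIR-based Thompson sampling for Bernoulli arms whose success probabilities may drift over time; the random-walk transition, particle count, and drift scale are arbitrary illustrative choices.

```python
import numpy as np

def sir_thompson_bernoulli(true_probs, T=500, n_particles=200, drift=0.01, seed=0):
    """Illustrative SMC (SIR)-based Thompson sampling for drifting Bernoulli arms."""
    rng = np.random.default_rng(seed)
    A = len(true_probs)
    # Each arm's posterior over its success probability is a particle cloud.
    particles = rng.uniform(0.2, 0.8, size=(A, n_particles))
    weights = np.full((A, n_particles), 1.0 / n_particles)
    pulls = np.zeros(A, dtype=int)
    total_reward = 0.0
    for t in range(T):
        # Thompson step: draw one particle per arm, play the best-looking arm.
        draws = np.array([rng.choice(particles[a], p=weights[a]) for a in range(A)])
        a = int(np.argmax(draws))
        y = float(rng.random() < true_probs[a])  # observe a Bernoulli reward
        pulls[a] += 1
        total_reward += y
        # Propagate: a random-walk transition models possible parameter drift.
        particles[a] = np.clip(
            particles[a] + drift * rng.standard_normal(n_particles), 1e-3, 1 - 1e-3
        )
        # Reweight by the Bernoulli likelihood, then resample (SIR step).
        lik = particles[a] ** y * (1.0 - particles[a]) ** (1.0 - y)
        w = weights[a] * lik
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles[a] = particles[a][idx]
        weights[a] = 1.0 / n_particles
    return pulls, total_reward
```

The resampled particle cloud concentrates around the played arm's drifting success probability, while the residual spread of the clouds for seldom-played arms keeps some exploration alive, which is the mechanism discussed above.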
\subsection{Bandits for personalized news article recommendation}
\label{ssec:logged_data_bandits}
Finally, we consider the application of the proposed SIR-based methods to the recommendation of personalized news articles, in a fashion similar to \citet{ic-Chapelle2011}. Online content recommendation is an important example of reinforcement learning, as it requires efficient balancing of the exploration-exploitation tradeoff.
We use a dataset\footnote{Available at \href{https://webscope.sandbox.yahoo.com/catalog.php?datatype=r\&did=49}{R6A - Yahoo! Front Page Today Module User Click Log Dataset.}} that contains a fraction of the user click logs for news articles displayed in the Featured Tab of the Today Module on Yahoo! Front Page during the first ten days of May 2009. The articles to be displayed were originally chosen uniformly at random from a hand-picked pool of high-quality articles; as such, the candidate pool was originally dynamic. However, we picked a subset of 20 articles shown on May 6th and collected all their logged user interactions, for a total of 500,354 events.
The goal is to choose the most interesting article for each user or, in bandit terms, to maximize the total number of clicks on the recommended articles, i.e., the average click-through rate (CTR). In the dataset, each user is associated with six features: a bias term and five features that correspond to the membership features constructed via the conjoint analysis with a bilinear model described in \cite{ip-Chu2009}.
We treat each article as an arm ($A=20$), and the logged reward is whether the article is clicked or not by the user ($y_t=\{1,0\}$). We pose the problem as a MAB with logistic rewards, so that we can account for the user features ($x_t\in \Real^6$).
One may further hypothesize that the news recommendation system should evolve over time, as the relevance of news might change during the course of the day. As a matter of fact, our proposed framework readily accommodates these assumptions.
We consider both static and dynamic bandits with logistic rewards, and implement the proposed SIR-based Thompson sampling, due to its flexibility and the lack of parameter tuning required. Summary CTR results are provided in Table \ref{tab:yahoo_logistic_crt}. Observe the flexibility of the dynamic bandit in Fig.~\ref{fig:yahoo_logistic_dynamic}, which is able to pick up the changing popularity of certain articles over time.
\begin{figure}[!hb]
\centering
\includegraphics[width=0.85\textwidth]{./figs/yahoo/yahoo_logistic_dynamic}
\vspace*{-2ex}
\caption{Empirical probability of SIR-based contextual dynamic logistic Thompson sampling policy picking each arm over time. Notice how the algorithm captures the changing popularity of articles over time.}
\label{fig:yahoo_logistic_dynamic}
\end{figure}
\vspace*{-2ex}
\input{table_yahoo_logistic}
\vspace*{-4ex}
\section{Conclusion}
\label{sec:conclusion}
We have presented a (sequential) importance sampling-based framework for the MAB problem, where we combine sequential Monte Carlo inference with state-of-the-art Bayesian MAB policies. The proposed algorithmic setting allows for interpretable modeling of complex reward functions and time-evolving bandits. The methods sequentially learn the dynamics of the bandit from online data, and are able to find the exploration-exploitation balance.
In summary, we extend the applicability of Bayesian MAB policies (Thompson sampling and Bayes-UCB in particular) by accommodating complex models of the world with SIR-based inference of the unknowns. Empirical results show good cumulative regret performance of the proposed framework in simulated challenging models (e.g., contextual categorical dynamic bandits), and practical scenarios (personalized news article recommendation) where complex models of data are required.
Important future work remains on a deeper understanding of the regret of both Thompson sampling and UCB-based policies within the proposed SMC-based framework. A theoretical analysis of the dependency between optimal arm changes, posterior convergence, and the regret of the proposed SMC-based framework remains to be developed.
\subsection{Software and Data}
The implementation of the proposed method is available in \href{https://github.com/iurteaga/bandits}{this public repository}. It contains all the software required for replication of the findings of this study.
\subsubsection*{Acknowledgments}
This research was supported in part by NSF grant SCH-1344668.
We thank Luke Bornn for bringing \cite{j-Cherkassky2013} to our attention.
\input{sis_bandits.bbl}
\ifx\undefined \end{document} \else \iftrue \fi
\clearpage
\section{Correlated electrical conductance and Raman response data}
Here are several examples of correlated electrical conductance and
Raman response from nanojunctions with OPV3 assembled on them. In the
first three examples (Figures S1-3) the electrical conductance is on
the order of 0.01~$G_{0}$ with conductance fluctuations on the order
of 0.001~$G_{0}$. The conductance fluctuations are correlated with
large changes in the Raman response measured at the nanojunction.
This is consistent with a single molecule switching between different
conformations in the nanojunction. In all of these cases, we expect
that the molecule is not neatly bridging the gap, but does play a role
in total electrical conduction. It has long been
established[S2-S4] that conduction in
such nanojunctions takes place via tunneling. Because of the
exponential dependence of tunneling on distance, the dominant volume
for current flow is of molecular scale. As argued
previously[S5], coincident conductance and Raman changes
imply (1) that at least some portion of a Raman-active molecule is
influencing the tunneling electrons; and (2) that Raman-active
molecule is a significant (in many cases dominant) contributor to the
total Raman signal.
At higher nanojunction conductances (0.5~$G_{0}$, such as those
observed in Figure S4), we observe similar correlations between
conduction and Raman response, but the conductance fluctuation
magnitude is much larger, as one might expect. At this conductance
level, conduction is likely dominated by an atomic-scale metallic
junction with transmission less than one. (One example of such a
junction would be the beginnings of a tunneling contact, with two tip
atoms slightly farther apart than their equilibrium lattice spacing.)
Again, the conduction path is highly localized, with a transverse
dimension of molecular or atomic dimensions. Still, given the
correlation between Raman response and conductance, at least one
Raman-active molecule must be significantly coupled to this current
path. We note that it is very unlikely that the unusually high
conductance of this particular junction results from many molecules in
parallel. If this were the case, the observed correlated conduction
and Raman response would imply many molecules changing their
configurations simultaneously, which is very unlikely. One would also
expect conductance fluctuations due to individual molecular motions to
be much smaller, such as those seen in nanojunctions with conductances of
0.01~$G_{0}$; however, this is not the case.
conductance nanojunctions are likely bridged by or strongly
electronically coupled to just a part of a molecule,
as the gaps are too small (based upon the conductance) to fit an
entire molecule.
\section{Junction configurations and contributions to the conductance}
Figure~\ref{fig:junctioncartoon} presents schematic examples of possible nanojunction configurations. Fig.~\ref{fig:junctioncartoon}a shows the idealized single-molecule junction that has been considered for more than fifteen years, with a single molecule neatly bridging and bound to both ends of a nanoscale interelectrode gap. In such a geometry, it is clear that interelectrode conduction would have a dominant contribution from current flow through the molecule.
However, as mentioned in the main text, this ideal configuration is unlikely in electromigrated junctions, since the electromigration process (unlike mechanical break junctions or scanning tunneling microscope junctions, for example) does not control interelectrode distance with atomic precision. Rather, the closest interelectrode distance is determined by details of individual junctions and the breaking procedure.
Fig.~\ref{fig:junctioncartoon}b-d are examples of other nanojunction geometries, in which the closest interelectrode distance is \textit{not} the length of a molecule of interest. In Fig.~\ref{fig:junctioncartoon}b and d, only a portion of the molecule of interest may be located at the point of closest interelectrode separation, while steric effects prevent the molecule from forming strong bonds with both electrodes. In Fig.~\ref{fig:junctioncartoon}c, the molecule is bound to both electrodes, but a closer point of interelectrode separation exists, and due to the exponential decay of tunneling with distance, the total conductance would have a dominant contribution from direct metal-metal tunneling. These configurations, while only schematic, show clearly that many reasonable junction arrangements are possible that can have total conductances \textit{higher} than the idealized situation in Fig.~\ref{fig:junctioncartoon}a, but nonetheless involve contributions from current flow through a molecule. An example from the literature of such a situation is thought to be presented in Tal \textit{et al.}[S6]. In that work, a benzene molecule is thought to lie down flat between two Pt electrodes in a mechanical break junction, leading to high conductance (comparable to 1 $G_{0}$), but still with molecular influence on the current.
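The exponential sensitivity of tunneling to distance invoked above can be made quantitative with the standard one-dimensional WKB estimate $G(d)\propto\exp(-2\kappa d)$, $\kappa=\sqrt{2m\phi}/\hbar$. The sketch below assumes a typical metal work-function-scale barrier of 5~eV, which is an illustrative value rather than one measured for these junctions.

```python
import math

# WKB estimate of the exponential distance dependence of vacuum tunneling,
# G(d) proportional to exp(-2 * kappa * d). The barrier height is an assumed
# typical value, not a measured parameter of these junctions.
HBAR = 1.054571817e-34   # J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # J per eV

def decay_constant(barrier_eV):
    """kappa = sqrt(2 m phi) / hbar, returned in inverse angstroms."""
    kappa_per_m = math.sqrt(2.0 * M_E * barrier_eV * EV) / HBAR
    return kappa_per_m * 1e-10  # convert 1/m -> 1/angstrom

def relative_conductance(delta_d_angstrom, barrier_eV=5.0):
    """Conductance ratio G(d + delta) / G(d) for a distance change delta."""
    return math.exp(-2.0 * decay_constant(barrier_eV) * delta_d_angstrom)
```

For a 5~eV barrier, $\kappa\approx1.1$~\AA$^{-1}$, so the conductance drops by roughly an order of magnitude per angstrom of added separation, which is why the point of closest interelectrode approach dominates the current in configurations like Fig.~\ref{fig:junctioncartoon}c.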
\section{Additional optical vibrational pumping data}
An example of optical vibrational pumping in OPV3 is shown in Figure S6. We observe several discrete spectral configurations, each with varying levels of optical pumping; the most noticeable change occurs at 150~s, when several lower-energy vibrations appear in the antiStokes signal.
\section{Additional electrical vibrational pumping data}
An additional example of electrical vibrational pumping in OPV3 is shown
in Figure S7. The two modes observed, 880~cm$^{-1}$ and 410~cm$^{-1}$,
have temperature differences of almost 150~K at $V = 0.4$~V. This
device also exhibited optical pumping of both modes, resulting in zero-bias
temperatures of 220~K and 120~K, respectively. As expected, both
modes do not show any electrical pumping until $V$ exceeds the
vibrational energy. At 280~mV a sharp change in conductance occurs and
no pumping is observed. The conductance changes again at
320~mV and pumping reappears, but with less antiStokes intensity than previously. This is strong evidence of the importance of the local environment to pumping cross-sections.
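The effective vibrational temperatures quoted here and in the main text are inferred from antiStokes/Stokes intensity ratios. A minimal sketch of that conversion follows; the mode- and junction-dependent prefactor (relative cross sections and detection efficiency) is treated as an assumed input rather than a value from the experiment.

```python
import math

# Effective vibrational temperature from the antiStokes/Stokes ratio,
# assuming I_AS / I_S = prefactor * exp(-h*c*nu / (k_B * T)). The prefactor
# lumps mode- and junction-dependent cross sections and is an assumed input.
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e10     # speed of light in cm/s, so wavenumbers in cm^-1 work
KB = 1.380649e-23     # Boltzmann constant, J/K

def mode_energy_K(nu_cm):
    """Vibrational quantum h*c*nu expressed in kelvin."""
    return H * C * nu_cm / KB

def effective_temperature(nu_cm, ratio_as_s, prefactor=1.0):
    """T_eff such that ratio_as_s = prefactor * exp(-E / (k_B * T_eff))."""
    return mode_energy_K(nu_cm) / math.log(prefactor / ratio_as_s)
```

For the 880~cm$^{-1}$ mode, the vibrational quantum corresponds to roughly 1270~K, so even a small antiStokes/Stokes ratio implies a substantial effective temperature, consistent with the values reported above.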
\section{Additional electron heating data}
An additional example of electrical heating is shown in Figure S8. This sample had more symmetric $I-V$ curves and power dissipation than the other samples presented. The effective temperature is observed to be more symmetric as well, within the size of the error bars.
\section{Raman Stark effect}
Figure S9 provides a closer look at the Raman Stark effect seen in
Figure 3B of the main text. A clear shift of 13~cm$^{-1}$ can be
observed between zero bias and 0.5~V. All vibrational modes present
experience similar shifts. These shifts are reproducible on
subsequent voltage sweeps of this junction.
\section{OPV3 Synthesis}
Tetrahydrofuran (THF) was distilled from sodium benzophenone ketyl. Triethylamine (TEA) was distilled from CaH$_2$ under N$_2$. Triethylphosphite (98\%, Aldrich), 4-aminostyrene (90\%, Aldrich), and palladium(II) acetate (98\% Aldrich) were used as received. Silica gel plates were 250 mm thick, 40 F254 grade obtained from EM Science. Silica gel was grade 60 (230 - 400 mesh) from EM Science.
Anhydrous DMF (120 mL) was placed in a 250 mL round bottom flask and two freeze-pump-thaw cycles were performed to ensure the removal of oxygen. A large screw-cap tube was charged with 2,5-didecoxy-1,4-diiodobenzene (10.4 g, 16.2 mmol),[S1] tetrabutylammonium bromide (15.7 g, 48.6 mmol), Pd(OAc)$_2$ (0.40 g, 1.62 mmol), and K$_2$CO$_3$ (4.48 g, 32.4 mmol). 4-Aminostyrene (5.00 g, 32.4 mmol) was added to the DMF and the resulting solution was cannulated into the screw-cap tube. The tube was sealed with the Teflon cap and the tube was heated to 100 $^{\circ}$C for 1 d. After cooling, the crude reaction mixture was poured into water, extracted with Et$_2$O, dried over MgSO$_4$, and then the solvent was removed under reduced pressure. The crude product was then purified by silica gel chromatography using 1:1 CH$_2$Cl$_2$ : hexanes as eluent. The solid was then recrystallized from CH$_2$Cl$_2$ : hexanes to yield 1.57 g (16\%) of an orange solid. IR: 3729, 3470, 3375, 3207, 3025, 2925, 2851, 2648, 2321, 1976, 1622, 1599, 1516, 1466, 1413, 1254, 1172, 962, 895, 812 cm$^{-1}$. $^1$H NMR (400 MHz, CD$_2$Cl$_2$) $\delta$ 7.26 (d, J = 8.3 Hz, 4H), 7.18 (d, J = 16.5 Hz, 2 H), 7.01 (2 H), 6.96 (d, J = 16.5 Hz, 2 H), 3.95 (t, J = 6.5 Hz, 4 H), 3.73 (br s, 4 H), 1.78 (m, 4 H), 1.45 (m, 4 H), 1.34 (br m, 24 H), 0.80 (m, 6 H); $^{13}$C (100 MHz, CD$_2$Cl$_2$) $\delta$ 151.35, 147.02, 128.95, 128.91, 128.18, 127.21, 120.02, 115.51, 110.58, 70.10, 32.51, 30.26, 30.19, 30.14, 30.06, 29.96, 26.88, 23.29, 14.48. HRMS calcd for C$_{42}$H$_{60}$N$_2$O$_2$: 624.4655, found: 624.4662.
The structure of the OPV3 Molecule is shown in Figure \ref{fig:OPV3}.
\begin{figure}
\begin{center}
\includegraphics[width=7.2in]{sfig1.jpg}
\end{center}
\caption{
Waterfall plot of Raman spectrum (1~s integrations) and electrical conduction measurement for an OPV3 device. The Raman response is observed to change whenever a change in conductance occurs. Colorbar indicates Raman intensity in CCD counts.
}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=7.2in]{sfig2.jpg}
\end{center}
\caption{
Waterfall plot of Raman spectrum (1~s integrations) and electrical conduction measurement for an OPV3 device. The Raman response is observed to change whenever a change in conductance occurs. The number of Raman vibrational modes observed also changes with conductance level. Colorbar indicates Raman intensity in CCD counts.
}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=7.2in]{sfig3.jpg}
\end{center}
\caption{
Waterfall plot of Raman spectrum (5~s integrations) and electrical conduction measurement for an OPV3 device. The Raman response is observed to change whenever a change in conductance occurs. Colorbar indicates Raman intensity in CCD counts.
}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=7.2in]{sfig4.jpg}
\end{center}
\caption{
Waterfall plot of Raman spectrum (1~s integrations) and electrical conduction measurement for an OPV3 device. The Raman response is observed to change whenever a change in conductance occurs. Colorbar indicates Raman intensity in CCD counts.
}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=6in]{junctioncartoon.jpg}
\end{center}
\caption{\label{fig:junctioncartoon}
Cartoon of possible junction configurations, using the OPV3 molecule as an example (saturated side chains omitted for clarity). (a) The idealized single-molecule junction, a configuration unlikely in these experiments since interelectrode separation is not precisely controlled. (b-d) Alternative junction configurations, in which interelectrode conduction would be expected to include a contribution from current interacting with the molecule, as well as direct metal-metal tunneling.
}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=7.0in]{sfig5.jpg}
\end{center}
\caption{
Raman response of an OPV3 junction as a function of time under zero bias. Blue indicates 0 counts and red indicates 100 (8000) counts for antiStokes (Stokes) sides. Integration time is 1~s. The junction switches stochastically between several stable configurations, each with characteristic spectra that exhibit strong optical pumping of different vibrational modes.
}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=7.2in]{sfig6.jpg}
\end{center}
\caption{
A) Effective vibrational temperature as a function of $V$ for two OPV3
modes: 880~cm$^{-1}$(red) and 410~cm$^{-1}$(blue). Error bars
indicate the uncertainty in inferred effective temperature due to the
statistical limitations of the antiStokes amplitude
measurements.
Inset) IV curve for this device.
B) Raman response of this device as a function of $V$. Blue indicates
10 (2500) counts and red indicates 250 (10,000) counts for antiStokes
(Stokes). The strong Stokes peak at 520~cm$^{-1}$ is from the Si substrate.
C) Sample spectra for given voltage. All antiStokes (Stokes) spectra are plotted on the same scale. Full amplitude corresponds to 1200 (18,000) counts for antiStokes (Stokes).
}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=7.2in]{sfig7.jpg}
\end{center}
\caption{
A) Effective electronic temperature (blue) and dissipated electrical power (red) for a device with very little vibrational Raman activity. Error bars are described in the text.
Inset) $I-V$ curve for this device.
B) Raman response for this device. Blue indicates 0 counts and red indicates 350 counts.
C) Sample spectra (blue) and best fit given by Equation 2 in main text (green) for a given voltage.
}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=7.2in]{sfig8.jpg}
\end{center}
\caption{
A) Raman response of the same device shown in Figure 3a in main text. A shift of about 15~cm$^{-1}$ is present for many spectral lines. Blue indicates 2500 counts and red indicates 10000 counts.
B) Zoom-in of the Raman response for the mode centered at 1523~cm$^{-1}$. Blue indicates 2500 counts and red indicates 7000 counts.
C) Sample spectra for given voltage. All spectra are plotted on the same scale with a baseline of 3000 counts subtracted. Full amplitude corresponds to 4000 counts. The peak at 1523~cm$^{-1}$ can clearly be seen systematically shifting to lower energy at higher voltages, reaching 1510~cm$^{-1}$ at 500~mV.
}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=4in]{sfig9.jpg}
\end{center}
\caption{\label{fig:OPV3}
OPV3 Molecule
}
\end{figure}
\clearpage
\noindent[S1] Zhao, Y., Shirai, Y., Slepkov, A.~D., Cheng, L., Alemany, L.~B.,
Sasaki, T., Hegmann, F.~A., Tour, J.~M. Synthesis, Spectroscopic and
Nonlinear Optical Properties of Multiple [60]Fullerene-Oligo(
\textit{p}-Phenylene Ethynylene) Hybrids. {\it Chem. Eur. J.} {\bf 11}, 3643
(2005).
\noindent[S2] Park, H. et al. Fabrication of metallic electrodes with nanometer separation by electromigration. {\it Appl. Phys. Lett.} {\bf 75}, 301-303 (1999).
\noindent[S3] Natelson, D., Yu, L. H., Ciszek, J. W., Keane, Z. K. \& Tour, J. M. Single-molecule transistors: electron transfer in the solid state. {\it Chem. Phys.} {\bf 324}, 267-275 (2006).
\noindent[S4] Ward, D.~R., Scott, G.~D., Keane, Z.~K., Halas, N.~J. \& Natelson, D. Electronic and optical properties of electromigrated molecular junctions. {\it J. Phys. Condens. Matt.} {\bf 20}, 374118 (2008).
\noindent[S5] Ward, D.~R. et al. Simultaneous measurements of electronic conduction and Raman response in molecular junctions. {\it Nano Lett.\/} {\bf 8}, 919-924 (2008).
\noindent[S6] Tal, O. et al. Molecular signature of highly conductive metal-molecule-metal junctions. {\it Phys. Rev. B} {\bf 80}, 085427 (2009).
\end{document}
\section*{Introduction}
\medbreak
Prompted by findings of Kowalevski in her analysis of gyroscopic motion, Picard initiated a study of second-order ordinary differential equations of the form
$$\frac{{\rm d}^2 w}{{\rm d} z^2} = F\Big(z, w, \frac{{\rm d} w}{{\rm d} z}\Big)$$
in which the right side is analytic in $z$ but rational in $w$ and ${\rm d} w/{\rm d} z$. Painlev\'e and his student Gambier classified all such ODEs possessing the property that their solutions have only poles among their movable singularities. Their classification resulted in 50 canonical forms: these are listed explicitly in [Ince] and each is traditionally labelled by a Roman numeral according to its position in this list. Six of these 50 equations are separated from this list, freshly labelled ${\bf PI}$ through ${\bf PVI}$ and called the {\it Painlev\'e equations}, their solutions being {\it Painlev\'e transcendents}; solutions to all 50 may be expressed in terms of solutions to these six along with solutions to `classical' ODEs (including linear equations and those that define elliptic functions).
\medbreak
In fact, one of the Painlev\'e equations is very intimately related to another in the list of 50. The second Painlev\'e equation ${\bf PII}$ has the form
$$\frac{{\rm d}^2 w}{{\rm d} z^2} = 2 w^3 + z w + \alpha$$
in which $\alpha$ is a parameter. The special case in which $\alpha = 0$ is called homogeneous; we shall label it ${\bf PII}_0$. The twentieth equation {\bf XX} in the list of 50 has the form
$$\frac{{\rm d}^2 w}{{\rm d} z^2} = \frac{1}{2 w}\Big( \frac{{\rm d} w}{{\rm d} z}\Big)^2 + 4 w^2 + 2 z w$$
in which the quotient on the right is to be understood as a limit when appropriate. It is asserted on page 337 of [Ince] that {\bf XX} is equivalent to ${\bf PII}_0$ by squaring.
\medbreak
Here we examine more closely certain aspects of the relationship between {\bf XX} and ${\bf PII}_0$. To be specific, we restrict attention to {\bf XX} and ${\bf PII}_0$ as {\it real} equations with real solutions. In this context, squares of nowhere-zero solutions to ${\bf PII}_0$ satisfy ${\bf XX}$ while positive square-roots of strictly positive solutions to {\bf XX} satisfy ${\bf PII}_0$. When solutions are allowed to acquire (isolated) zeros we find that there is a sudden change in behaviour, which we analyze in detail. Our examination brings out a significant property of equation {\bf XX}. The presence of $w$ in the denominator on the right side of {\bf XX} means that the standard (local) existence-uniqueness theorem for a second-order ODE does not apply to {\bf XX} when the initial data involve a zero of the solution. We observe that further differentiation leads to a third-order ODE in which the right side is polynomial in all variables; accordingly, the standard existence-uniqueness theorem for a {\it third}-order ODE applies to this equation. As a direct consequence, a solution to {\bf XX} that vanishes at a point is uniquely determined by the value of its {\it second} derivative there.
\medbreak
\section*{{\bf XX} and an associated third-order ODE}
\medbreak
As we mentioned in the Introduction, we shall regard {\bf XX} as a real ordinary differential equation with real solutions. Thus, we shall write this equation as
\begin{equation} \label{XX}
\overset{\mdot \mdot}{S} = \frac{\overset{\mdot}{S}^2}{2 S} + 4 S^2 + 2 t S, \tag{{$\bf XX$}}
\end{equation}
where a superior dot $^{\mdot}$ signifies the derivative and where the ratio $\overset{\mdot}{S}^2/{2 S}$ is to be understood as a limit when appropriate. It follows that the derivative of a solution vanishes wherever the solution itself vanishes: if $S(a) = 0$ then automatically $\overset{\mdot}{S}(a) = 0$ also; this has consequences, as we shall see.
\medbreak
Notice that {\bf XX} has the form
$$\overset{\mdot \mdot}{S} = F(t, S, \overset{\mdot}{S})$$
in which the right side is rational, with $S$ in the denominator. In consequence of this, the standard (local) existence-uniqueness theorem for second-order ODEs applies to {\bf XX} away from zeros: there exists a unique solution $S$ to {\bf XX} for which $S(a) \ne 0$ and $\overset{\mdot}{S}(a)$ have specified values. The standard existence-uniqueness theorem fails when the initial data involve a zero of the solution: indeed, $S(a) = 0$ entails $\overset{\mdot}{S}(a) = 0$ as noted above; were the standard theorem to apply, it would force $S$ to vanish identically on its interval domain. We analyze further the case of an isolated zero below.
\medbreak
It follows at once from {\bf XX} that each solution $S$ is thrice-differentiable away from its zeros: calculation of the third derivative starts conveniently from the reformulation
$$2 S \overset{\mdot \mdot}{S} = \overset{\mdot}{S}^2 + 8 S^3 + 4 t S^2;$$
after differentiation, $2 \overset{\mdot}{S} \overset{\mdot \mdot}{S}$ falls from each side so that
$$2 S \: \overset{\mdot \mdot \mdot}{S} = 24 S^2 \overset{\mdot}{S} + 8 t S \overset{\mdot}{S} + 4 S^2$$
and therefore
$$\overset{\mdot \mdot \mdot}{S} = 12 S \overset{\mdot}{S} + 4 t \overset{\mdot}{S} + 2 S.$$
Now let the solution $S$ to {\bf XX} have an isolated zero at $a$. The understanding that the ratio on the right side of {\bf XX} is defined as a limit ensures that $\overset{\mdot \mdot}{S}$ is continuous at $a$. As we let $(a \ne ) \; t \ra a$ in the equation
$$\overset{\mdot \mdot \mdot}{S}(t) = 12 S(t) \overset{\mdot}{S}(t) + 4 t \overset{\mdot}{S}(t) + 2 S(t)$$
both $S(t) \ra S(a) = 0$ and $\overset{\mdot}{S}(t) \ra \overset{\mdot}{S}(a) = 0$ so that $\overset{\mdot \mdot \mdot}{S}(t) \ra 0$ also. We deduce that $S$ is also thrice-differentiable at $a$ with $\overset{\mdot \mdot \mdot}{S}(a) = 0$, by an application of the mean value theorem to the continuous function $\overset{\mdot \mdot}{S}$.
\medbreak
We may record the result of our recent deliberations as the following theorem; in its statement, we assume that the zeros of $S$ are isolated.
\medbreak
\begin{theorem} \label{X'}
If $S$ is a solution to {\bf XX} then $S$ satisfies the third-order equation
\begin{equation} \label{XX'}
\overset{\mdot \mdot \mdot}{S} = 12 S \overset{\mdot}{S} + 4 t \overset{\mdot}{S} + 2 S. \tag{{$\bf XX'$}}
\end{equation}
\end{theorem}
\qed
\medbreak
Observe that equation ${\bf XX'}$ has the form
$$\overset{\mdot \mdot \mdot}{S} = G(t, S, \overset{\mdot}{S}, \overset{\mdot \mdot}{S})$$
in which the right side is polynomial in all variables (and $\overset{\mdot \mdot}{S}$ is incidentally absent). The standard (local) existence-uniqueness theorem for a third-order ODE thus applies: there exists a unique solution to ${\bf XX'}$ having specified values of $S(a), \; \overset{\mdot}{S}(a)$ and $\overset{\mdot \mdot}{S}(a)$.
\medbreak
This has an immediate application to {\bf XX} itself.
\medbreak
\begin{theorem} \label{ne}
Let $S$ be a solution to {\bf XX}. If $S$ has an isolated zero at $a$ then $\overset{\mdot \mdot}{S}(a) \ne 0.$
\end{theorem}
\begin{proof}
According to Theorem \ref{X'}, $S$ is also a solution to ${\bf XX}'$. As we have seen, if $S(a) = 0$ then $\overset{\mdot}{S}(a) = 0$ automatically. If also $\overset{\mdot \mdot}{S}(a) = 0$ then the standard uniqueness theorem for solutions to the third-order equation ${\bf XX'}$ forces $S = 0$ and so prevents the zero at $a$ from being isolated.
\end{proof}
\medbreak
\section*{{\bf XX} in relation to homogeneous PII }
\medbreak
Now we explore the relationship between ${\bf XX}$ and the homogeneous second Painlev\'e equation, which we record as
\begin{equation} \label{PII}
\overset{\mdot \mdot}{s} = 2 s^3 + t s \tag{${\bf PII}_0$}
\end{equation}
and view in real terms. Throughout what follows, the intention is that lower case $s$ should suggest a solution to ${\bf PII}_0$ while upper case $S$ should suggest a solution to ${\bf XX}$.
\medbreak
Let $s$ be a nowhere-zero solution to ${\bf PII}_0$ and define $S := s^2$. Then $\overset{\mdot}{S} = 2 s \overset{\mdot}{s}$ and
$$\overset{\mdot \mdot}{S} = 2 s \overset{\mdot \mdot}{s} + 2 \overset{\mdot}{s}^2$$
so that $\overset{\mdot}{s} = \overset{\mdot}{S}/2 s$ and
$$\overset{\mdot \mdot}{S} = 2 s (2 s^3 + t s) + 2 \Big(\frac{\overset{\mdot}{S}}{2 s} \Big)^2 = 4 s^4 + 2 t s^2 + \frac{\overset{\mdot}{S}^2}{2 s^2}$$
whence
$$\overset{\mdot \mdot}{S} = 4 S^2 + 2 t S + \frac{\overset{\mdot}{S}^2}{2 S}$$
which proves that $S$ is a solution to ${\bf XX}$. In the opposite direction, let $S$ be a strictly positive solution to ${\bf XX}$ and define $s := \sqrt S$ to be its positive square-root. A similar direct calculation using $\overset{\mdot}{S} = 2 s \overset{\mdot}{s}$ and the fact that $S$ satisfies ${\bf XX}$ shows that
$$2 s \overset{\mdot \mdot}{s} = \overset{\mdot \mdot}{S} - 2 \overset{\mdot}{s}^2 = \frac{\overset{\mdot}{S}^2}{2 S} + 4 S^2 + 2 t S - 2 \overset{\mdot}{s}^2 = 4 S^2 + 2 t S = 4 s^4 + 2 t s^2$$
so by cancellation
$$\overset{\mdot \mdot}{s} = 2 s^3 + t s$$
and $s$ is a solution to ${\bf PII}_0$.
\medbreak
\begin{theorem} \label{nozero}
If $s$ is a nowhere-zero solution to ${\bf PII}_0$ then $s^2$ is a solution to ${\bf XX}$. If $S$ is a strictly positive solution to ${\bf XX}$ then $\sqrt S$ is a solution to ${\bf PII}_0.$
\end{theorem}
\qed
\medbreak
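The correspondence of the preceding theorem lends itself to a quick numerical check: integrating ${\bf PII}_0$ and ${\bf XX}$ from matching initial data should produce trajectories related by squaring. The sketch below is illustrative only; it uses a bare fixed-step RK4 integrator, and the initial data and interval are arbitrary choices that keep $S$ strictly positive.

```python
def rk4(f, y, t, h, n):
    # classical fixed-step fourth-order Runge-Kutta integrator
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h/2, [u + h/2*k for u, k in zip(y, k1)])
        k3 = f(t + h/2, [u + h/2*k for u, k in zip(y, k2)])
        k4 = f(t + h, [u + h*k for u, k in zip(y, k3)])
        y = [u + h/6*(a + 2*b + 2*c + d)
             for u, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += h
    return y

def pii0(t, y):
    # PII_0 as a first-order system: state (s, s')
    s, sd = y
    return [sd, 2*s**3 + t*s]

def xx(t, y):
    # XX as a first-order system: state (S, S')
    S, Sd = y
    return [Sd, Sd**2/(2*S) + 4*S**2 + 2*t*S]

s0, sd0 = 1.0, 0.0                     # arbitrary data with s > 0
h, n = 1e-3, 300                       # integrate over [0, 0.3]
s_end, _ = rk4(pii0, [s0, sd0], 0.0, h, n)
S_end, _ = rk4(xx, [s0**2, 2*s0*sd0], 0.0, h, n)
print(abs(S_end - s_end**2))           # tiny: the two trajectories agree
```

The two equations are integrated independently, yet the endpoint values satisfy $S = s^2$ to within integration error, as the theorem predicts.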
The presence of zeros introduces complications. As in the previous section, we take zeros to be isolated; more precisely, we consider a function (an $s$ or an $S$ as the case may be) that is defined on an open interval $I$ and vanishes at precisely one point $a \in I$.
\medbreak
\begin{theorem} \label{square}
If $s$ satisfies ${\bf PII}_0$ on $I$ and is zero only at $a \in I$ then $s^2$ satisfies ${\bf XX}$ on $I$.
\end{theorem}
\begin{proof}
Theorem \ref{nozero} guarantees that the twice-differentiable function $S : = s^2$ satisfies ${\bf XX}$ on $I \setminus \{a\}$; we must examine its behaviour at $a$. Note that
$$4 S(a)^2 + 2 a S(a) = 0$$
and
$$\overset{\mdot \mdot}{S}(a) = 2 s(a) \overset{\mdot \mdot}{s}(a) + 2 \overset{\mdot}{s}(a)^2 = 2 \overset{\mdot}{s}(a)^2$$
because $s$ vanishes at $a$. Note further that if $I \ni t \ne a$ then
$$\frac{\overset{\mdot}{S}(t)^2}{2 S(t)} = \frac{(2 s(t) \overset{\mdot}{s}(t))^2}{2 s(t)^2} = 2 \overset{\mdot}{s}(t)^2$$
which converges to $2 \overset{\mdot}{s}(a)^2$ as $t \ra a$. We conclude that $S$ satisfies ${\bf XX}$ at $a$ too.
\end{proof}
\medbreak
Thus squaring yields no surprises. The taking of square-roots is more interesting.
\medbreak
We begin with a negative result.
\medbreak
\begin{theorem} \label{notroot}
If $S$ satisfies ${\bf XX}$ on $I$ and is strictly positive except for a zero at $a \in I$ then $\sqrt S$ does not satisfy ${\bf PII}_0$ at $a$.
\end{theorem}
\begin{proof}
We offer two proofs, based on standard uniqueness theorems: the one for ${\bf PII}_0$ and the other for ${\bf XX'}$. Let $s : = \sqrt S$. (1) Suppose that $s$ were to satisfy ${\bf PII}_0$: as $s$ is non-negative, not only $s(a) = 0$ but also $\overset{\mdot}{s}(a) = 0$; now standard uniqueness forces $s = 0$, so that the zero at $a$ is not isolated. (2) In fact, we claim that $s$ is not even twice-differentiable at $a$; for suppose it were. Again, $s(a) = 0$ and $\overset{\mdot}{s}(a) = 0$: as $S = s^2$ it follows that
$$\overset{\mdot \mdot}{S}(a) = 2 s(a) \overset{\mdot \mdot}{s}(a) + 2 \overset{\mdot}{s}(a)^2 =0;$$
as $a$ is an isolated zero, this contradicts Theorem \ref{ne}.
\end{proof}
\medbreak
Nevertheless, a solution $S$ to ${\bf XX}$ satisfying the hypotheses of this theorem {\it is} the square of a solution $s$ to ${\bf PII}_0$; it is simply the case that $s$ must change sign at $a$.
\medbreak
\begin{theorem} \label{root}
If $S$ is a solution to ${\bf XX}$ on $I$ and is strictly positive except for a zero at $a \in I$ then there exists a solution $s$ to ${\bf PII}_0$ on $I$ such that $S = s^2.$
\end{theorem}
\begin{proof}
Define $s$ on $I$ by
\begin{equation*}
s(t)=
\begin{cases}
- \sqrt{S(t)} & \text{if}\ I \ni t \leqslant a, \\
+ \sqrt{S(t)} & \text{if} \ I \ni t \geqslant a.
\end{cases}
\end{equation*}
From Theorem \ref{square} it follows that $s$ satisfies ${\bf PII}_0$ on $I \setminus \{ a \}$; we must verify that $s$ is twice-differentiable at $a$ with $\overset{\mdot \mdot}{s}(a) = 0$. First of all, note that if $I \ni t \ne a$ then $\overset{\mdot}{s}(t) = \overset{\mdot}{S}(t)/2 s(t)$ whence
$$\overset{\mdot}{s}(t)^2 = \frac{\overset{\mdot}{S}(t)^2}{4 s(t)^2} = \frac{\overset{\mdot}{S}(t)^2}{4 S(t)} = \frac{1}{2} \Big( \overset{\mdot \mdot}{S}(t) - 4 S(t)^2 - 2 t S(t) \Big)$$
and therefore
$$\lim_{t \ra a} \overset{\mdot}{s}(t)^2 = \frac{1}{2} \overset{\mdot \mdot}{S}(a)$$
because $S(a) = 0$. Next, $\overset{\mdot}{S}(a) = 0$ while Theorem \ref{ne} informs us that $\overset{\mdot \mdot}{S}(a) > 0$; as a consequence, $\overset{\mdot}{S}(t)$ changes from strictly negative to strictly positive as $t$ increases through $a$. Thus $\overset{\mdot}{s} = \overset{\mdot}{S}/2 s$ is strictly positive on each side of $a$ and so the taking of square-roots yields
$$\lim_{t \ra a} \overset{\mdot}{s}(t) = \sqrt{\frac{1}{2} \overset{\mdot \mdot}{S}(a)}.$$
An application of the mean value theorem now shows that the continuous function $s$ is continuously differentiable at $a$. Finally, as $s$ satisfies ${\bf PII}_0$ away from $a$ we deduce that
$$\lim_{t \ra a} \overset{\mdot \mdot}{s}(t) = \lim_{t \ra a} \big(2 s(t)^3 + t s(t)\big) = 0$$
and a further application of the mean value theorem to the continuous function $\overset{\mdot}{s}$ permits us to conclude that $\overset{\mdot \mdot}{s}(a)$ exists and equals $0$.
\end{proof}
\medbreak
\section*{Remarks}
\medbreak
Here, we consider matters of related interest, particularly concerning solutions to ${\bf XX}$ that are non-positive or change sign at an isolated zero.
\medbreak
\begin{theorem} \label{sigma}
If $S$ is a strictly negative solution to ${\bf XX}$ then $\sigma : = \sqrt{-S}$ is a solution to
$$\overset{\mdot \mdot}{\sigma} = t \sigma - 2 \sigma^3.$$
\end{theorem}
\begin{proof}
Direct calculation from $S = - \sigma^2$ gives $\overset{\mdot}{S} = - 2 \sigma \overset{\mdot}{\sigma}$ and $\overset{\mdot \mdot}{S} = - 2 \sigma \overset{\mdot \mdot}{\sigma} - 2 \overset{\mdot}{\sigma}^2$; cancellation of $ - 2 \sigma$ following the invocation of ${\bf XX}$ concludes the argument.
\end{proof}
\medbreak
Conversely, if $\sigma$ is a nowhere-zero solution to this differential equation, then it is readily checked that $S : = - \sigma^2$ is a strictly negative solution to ${\bf XX}$. To interpret the differential equation displayed in the theorem, notice that $\sigma$ satisfies this equation precisely when $s := i \sigma$ satisfies ${\bf PII}_0$. Of course, this interpretation is not entirely unexpected.
\medbreak
We leave the reader to contemplate the non-positive case, merely remarking that if $S$ is a solution to ${\bf XX}$ that has a single zero but is otherwise negative then $S = - \sigma^2$ for some (sign-changing) solution $\sigma$ to the differential equation of Theorem \ref{sigma}.
\medbreak
Our results on non-negative and non-positive solutions to ${\bf XX}$ are nicely complemented by the following result.
\medbreak
\begin{theorem}
A solution to ${\bf XX}$ cannot change sign at an isolated zero.
\end{theorem}
\begin{proof}
According to Theorem \ref{ne}, if the solution $S$ to ${\bf XX}$ has an isolated zero at $a$ then either $\overset{\mdot \mdot}{S}(a) > 0$ (in which case $S$ is strictly positive on each side of $a$) or $\overset{\mdot \mdot}{S}(a) < 0$ (in which case $S$ is strictly negative on each side of $a$).
\end{proof}
\medbreak
By sharp contrast, a solution to ${\bf PII}_0$ {\it must} change sign at an isolated zero, as noted in the first proof of Theorem \ref{notroot}; of course, this circumstance also bears on Theorem \ref{square} and Theorem \ref{root}.
\end{document}
\section{Introduction}
Pooling has been an essential component of modern machine learning, allowing pertinent local information to be propagated to global intermediate feature sets or final discriminators.
The shape of the pooling operation is typically determined by hand, setting the size of a convolutional filter and the number of pooling steps before an output layer.
This process is difficult to optimize for graph neural networks~\cite{Bronstein_2017}, since neighbourhoods of nodes may vary in size and meaning depending on the problem at hand.
In the area of message passing neural networks~\cite{gilmer2017neural} there are recent advancements in learned pooling techniques on graphs~\cite{diehl2019edge}, and there is small but steady progress on using pooling to alter input graph structures.
This text describes a new pooling architecture using dynamic graph convolutions~\cite{wang2018dynamic} and clustering algorithms to learn an optimized representation and corresponding graph for pooling.
The model used in the following text is implemented in Pytorch Geometric~\cite{Fey/Lenssen/2019} (PyG).
This architecture was derived in the context of hadron\footnote{Particles that are bound together by the strong force.} energy regression in High Energy Physics (HEP), where graph neural networks are beginning to solve difficult clustering problems~\cite{gravnet, NEURIPS2019} in novel ways.
The objective of that problem is to determine the original energy of a particle incident upon a device called a ``sampling calorimeter.''
Within the calorimeter, which is made of dense material like lead or steel interspersed with lighter material, incident particles above a threshold energy will produce pairs of particles by nuclear interaction, creating a ``shower'' of particles.
At a number of fixed depths within the calorimeter, scintillation or ionization signals are recorded as a proxy for the number of produced particles.
These estimates of multiplicity can be used to infer the originating particle's energy.
The energy deposition patterns of hadrons are known to have a high degree of local fluctuations in particle multiplicity during the shower's evolution.
This means that throughout the shower there are randomly located regions that require different treatment from more homogeneous ones, and so there exists an optimal, dynamically determined clustering of each hadron shower's data for best estimating the energy.
A human-designed algorithm called ``software compensation''~\cite{Tran_2017} has been developed to solve this problem as well.
It is already based on the principle of reducing an objective function to generate learned weights that determine shower energies.
However, there is a significant amount of specific tuning that needs to be done to make the algorithm function for varying detector designs, and its domain of applicability remains well within HEP.
Using the technique described in this paper, the entire algorithm is now learned, rather than a specific part of a correction, and the algorithm can dynamically adapt to the topology of a given hadron shower.
Using a machine learning algorithm for this task mitigates the need for manual specialization, and affords the possibility of investigating applications of the technique to tasks rather different from calorimetry, with more widely available datasets.
Benchmarks for estimation and classification will accordingly be demonstrated on more classic machine learning tasks.
The advantages of this dynamic reduction architecture are:
\begin{itemize}
\item Representation spaces with good performance are very small.
\item No prior graph structure is necessary, and if one is provided it can be altered by the pooling layers, since the graph pooling structure is learned.
\item Without a prior graph structure, data for training and inference need very little preprocessing beyond normalization and stacking.
\end{itemize}
\section{Related Work}
The architecture proposed here is similar in outcome to the techniques proposed in~\cite{diehl2019edge, lee2019selfattention}, but radically different in implementation.
Our focus is on learning latent representations that optimize the pooling performance of an unsupervised clustering algorithm.
In particular, previous works in graph learning~\cite{monti2016geometric,fey2017splinecnn} demonstrate that controlling the behavior of an unsupervised algorithm can help in learning concise representations quickly.
However, those works kept aspects of the original structure of the data and only generated the messages to pass within that structure.
This text expands on both aspects by combining dynamic graph convolutions~\cite{wang2018dynamic} with control of a clustering algorithm in the latent space, producing a dynamically learned, optimized pooling.
\section{Dynamic Reduction Network}
\begin{figure}[!hbt]
\centering
\includegraphics[width=0.8\textwidth]{drn_flow.png}
\caption{The basic workflow of the Dynamic Reduction Network, details given below.}
\label{fig:drn_flow}
\end{figure}
All clustering algorithms can be treated as an indexing of nodes, so any clustering algorithm can be used in the latent space to make this demonstration.
The unsupervised clustering algorithm is treated as a black box, and the supervised task is to optimize its performance for the task at hand.
Using a dynamic graph convolutional approach, it is possible to learn the parameters that maximize the impact of the clustering on reducing information for further processing, and hence this network is called a dynamic reduction network (DRN).
We have chosen to use a readily available greedy popularity based clustering algorithm~\cite{10.5555/2340646.2340660} as an initial demonstrator.
Other clustering algorithms will be tested and compared once their GPU implementations are made available in PyG in the future.
The default model used for the MNIST superpixels~\cite{monti2016geometric} classification task in this paper is composed as follows:
\begin{enumerate}
\item The input data are normalized by fixed values such that they largely occupy the range $[0,1]$; outliers are allowed.
\item A multilayer perceptron (MLP) encoding the normalized input data to a latent space of dimension N is applied to all input nodes. The default depth is 3 layers, with an intermediate layer half the size of the final output.
\item The latent space data are processed by a dynamic graph convolution layer, i.e. neighbours are found in the latent space rather than the original representation. The internal messages are created using an MLP with three layers, starting from 2N, then 1.5N, and outputting a message of width N. The update function can be summation, maximum, or average.
\item The resulting latent nearest neighbours graph is then weighted by distance and clustered using a greedy clustering algorithm, pooling the node features by taking the maximum of the clustered data.
\item The reduced latent data are passed through another dynamic convolutional filter, and clustered again with the same algorithm.
\item The results of the second learned pooling step are then globally max pooled and passed through an MLP decoder to produce the output logits.
\end{enumerate}
\noindent
This process is summarized in Figure~\ref{fig:drn_flow}.
Alterations to this model used for various test are described later in the text.
The depths of the MLPs in the various encoding, message passing, and decoding steps are parameterized.
The number of nearest neighbours, k, is a hyper-parameter of the model and may require tuning to a given task, depending on the relational data that is needed to make a prediction.
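For concreteness, the reduction step can be sketched in NumPy. The sketch below is deliberately simplified and untrained: random weights stand in for the learned encoder, a max over latent-space neighbours stands in for the message-passing MLP, and a toy nearest-neighbour merge stands in for the greedy popularity-based clustering of~\cite{10.5555/2340646.2340660}. The function names and the merge rule are ours, not the paper's implementation; the point is only to show how the graph is rebuilt in the latent space and reduced at each stage.

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_graph(z, k):
    # build a k-nearest-neighbour graph in the latent space
    d = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return np.argsort(d, axis=1)[:, :k]      # neighbours, nearest first

def greedy_cluster(nbrs):
    # toy stand-in for greedy popularity clustering: each unmatched node
    # merges with its nearest still-unmatched neighbour
    n = len(nbrs)
    label = -np.ones(n, dtype=int)
    c = 0
    for i in range(n):
        if label[i] >= 0:
            continue
        label[i] = c
        for j in nbrs[i]:
            if label[j] < 0:
                label[j] = c
                break
        c += 1
    return label, c

def reduction_step(z, k):
    nbrs = knn_graph(z, k)
    z = np.maximum(z, z[nbrs].max(axis=1))   # max-aggregated "messages"
    label, c = greedy_cluster(nbrs)
    # pool node features by cluster (max), shrinking the point cloud
    return np.stack([z[label == j].max(axis=0) for j in range(c)])

x = rng.normal(size=(75, 3))                 # 75 superpixels: (x, y, intensity)
z = np.tanh(x @ rng.normal(size=(3, 20)))    # stand-in for the encoder MLP
z1 = reduction_step(z, k=4)                  # first learned-pooling stage
z2 = reduction_step(z1, k=4)                 # second stage on the reduced graph
out = z2.max(axis=0)                         # global max pool, ready for a decoder
print(z.shape, z1.shape, z2.shape, out.shape)
```

Each stage rebuilds the neighbour graph from the current latent positions and then shrinks the point cloud, so the pooling pattern depends on the data rather than on any fixed input graph.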
\section{Results}
This model was tested on the MNIST ``superpixels'' dataset with 75 superpixels\footnote{This model was developed using private data of the CMS Collaboration, which cannot be published here.}.
The superpixels dataset is a downsampled and aggregated form of the full MNIST dataset.
Previous graph models are shown in Refs.~\cite{monti2016geometric, fey2017splinecnn}.
A result of a scan in the hidden dimension (the ``width'') of the network is shown in Figure~\ref{fig:MNISTSP_perf}.
During training and evaluation, both the original graph structure and all unfilled pixels are dropped; the data are ``zero-suppressed''.
Each superpixel has a pair of coordinates defining a centroid, and an intensity.
The performance using a width of 20 channels with $\mathrm{k} = 4$ is a factor of two better on 75 superpixels than the reference models in~\cite{monti2016geometric, fey2017splinecnn}, and approaches the performance of models trained on full MNIST at a width of 256 channels.
The data passed to the model consists only of the centroid x and y, and the gray-scale of that pixel.
In Figure~\ref{fig:MNISTSP_perf} each DRN is trained for 400 epochs on an NVIDIA Tesla V100 with the AdamW optimizer~\cite{loshchilov2017decoupled}, using one-cycle cosine annealing (on the learning rate only) with a starting value of 0.001.
The weight decay is held constant at a value of 0.001.
Best performance is often achieved quite early, so further tuning of model training and optimization is possible, and volatile GPU usage is low.
There is significant room for improving the training and evaluation time performance of this model.
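The learning-rate schedule named above can be sketched as follows. We read ``one-cycle cosine annealing with a starting value of 0.001'' as a single cosine half-cycle decaying from 0.001 to zero over the 400 epochs; the exact schedule used in training may differ, so this is an illustrative assumption rather than the paper's implementation.

```python
import math

def one_cycle_cosine(epoch, total_epochs, lr0=1e-3):
    # single cosine half-cycle: starts at lr0 and anneals smoothly to zero
    return lr0 * 0.5 * (1.0 + math.cos(math.pi * epoch / total_epochs))

schedule = [one_cycle_cosine(e, 400) for e in range(401)]
print(schedule[0], schedule[200], schedule[400])
```

The rate is highest at the start, halves mid-training, and reaches zero at the final epoch, so late epochs make only small parameter updates.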
\begin{figure}[!hbt]
\centering
\begin{tabular}{c|c|c|c|c}
Model & No. of & Best achieved & Epoch of best & Performance\\
variant & parameters & performance & performance & at 400 epochs \\ \hline
DRN20 & 5123 & \emph{0.9761} & 180 & 0.9731 \\
DRN32 & 12797 & 0.9806 & 205 & 0.9792 \\
DRN64 & 50157 & 0.9872 & 347 & 0.9866 \\
DRN128 & 198605 & 0.9891 & 232 & 0.9884 \\
DRN256 & 790413 & $\mathbf{0.9905}$ & 217 & 0.9899 \\
MoNet & -- & 0.9111 & -- & -- \\
SplineCNN & 63786 & 0.9522 & 40 & -- \\
\end{tabular}
\caption{Performance for a scan in dynamic reduction network width of the architecture described above on the MNIST ``superpixels'' dataset~\cite{monti2016geometric} with 75 superpixels, and two established models. DRN$N$ denotes a dynamic reduction network with hidden dimension $N$. Comparison to similar models on the superpixels dataset is made. All superpixel graph data have been dropped and the network is allowed to learn an entirely new representation based on the pixel data. For all DRN models, $k = 4$. The performance at a hidden dimension size of 20 is particularly interesting given the modest gains in performance from wider networks. `--' indicates unknown data.}
\label{fig:MNISTSP_perf}
\end{figure}
The superpixels dataset is known to yield poor performance in CNN-based networks, and graph networks have been proposed in previous work to improve upon this.
The performance demonstrated in Figure~\ref{fig:MNISTSP_perf} shows clearly that further improvements on very under-sampled data are achievable and this model sets new performance benchmarks on these datasets, especially in terms of model size.
\section{Conclusions}
This text introduces the Dynamic Reduction Network as a new tool in High Energy Particle physics for processing sampled data from a highly varying multi-dimensional image.
This is accomplished by designing a network that can learn effective pooling strategies by manipulating an unsupervised algorithm in a high-dimensional latent space.
To demonstrate the efficacy of this network, a performance benchmark on an undersampled MNIST dataset indicates that this new architecture outperforms previous graph based architectures, even for very small numbers of parameters.
This outcome suggests a powerful new technique for approaching classification and regression problems in both computer vision and high energy physics.
\section{Acknowledgements}
This research was supported in part by the Office of Science, Office of
High Energy Physics, of the US Department of Energy under Contract No. DE-AC02-07CH11359 through FNAL LDRD-2019-017.
Many thanks to Matthias Fey and Song Han for quick consultation on this model.
Thanks as well to Nhan Tran and Salvatore Rappoccio for proofreading assistance.
\bibliographystyle{unsrt}
\section{Introduction}
The deflection of light rays by massive objects results in magnification of distant light sources when a massive body passes close to the line joining light source and observer. For a simple point-source-point-lens system, the magnification over time (the ``light curve'') has a smooth symmetric form, whereas a binary lensing system may produce significant deviations from the simple light curve, although these are typically of short duration. A comparison of such light curves is shown in Fig.~\ref{curves}. These were generated using one of the models mentioned in \citet{wf}, by sampling the number of light rays passing through a narrow strip of the magnification map.
\begin{figure}
\vspace{1cm}
\caption{Typical light curves for the point-source-point-lens model (left), and for a binary system (right). The horizontal axis ``Time'' corresponds to the distance that the observer has travelled across the magnification map. Intensity is relative to the un-lensed intensity of the background star.}
\includegraphics[height=4cm]{fig1a}
\includegraphics[height=4cm]{fig1b}
\label{curves}
\end{figure}
This phenomenon, known as gravitational lensing, is used by astrophysicists in identifying characteristics of the lensing object. Such an approach is useful in searching for dark matter, as suggested by \citet{pac}. The first exo-planet discovered using this approach was found in 2003 (see \citet{bond}), with several more discovered since that time. The presence of such planets in the lensing system can cause caustics in the magnification map. Such caustics are described by \citet{wam}. Various techniques can be used to model such caustic patterns. The simplest of these is to deflect the light ray as it crosses the lensing plane (this is the plane containing the lensing object, normal to the line joining the source and observer). The light path is then considered as two straight lines with an abrupt change of trajectory at the lensing plane. For a description of this method, see for example \citet{sch}. The amount of deflection in such a model is given by the Einstein deflection angle. As the deflection involved is very small (which means that the photon passes through areas of weak gravitational fields only), such a ``first order'' approach is a very accurate approximation.
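The smooth point-source-point-lens curve (left panel of Fig.~\ref{curves}) can be reproduced from the standard point-lens magnification formula $A(u) = (u^2+2)/(u\sqrt{u^2+4})$, where $u$ is the lens-source separation in units of the Einstein radius. The sketch below is illustrative; the event parameters are arbitrary choices, not fits to any data.

```python
import math

def magnification(u):
    # standard point-source-point-lens magnification; u is the lens-source
    # separation in units of the Einstein radius
    return (u*u + 2.0) / (u * math.sqrt(u*u + 4.0))

def light_curve(u0, t0, tE, times):
    # source in uniform straight-line motion past the lens:
    # u(t) = sqrt(u0^2 + ((t - t0)/tE)^2)
    return [magnification(math.hypot(u0, (t - t0)/tE)) for t in times]

# illustrative parameters: minimum impact 0.3 Einstein radii at t = 0
curve = light_curve(u0=0.3, t0=0.0, tE=20.0, times=range(-50, 51))
print(max(curve))   # smooth symmetric peak, roughly 3.4 for u0 = 0.3
```

The resulting curve is smooth and symmetric about the time of closest approach, exactly the form shown in the left panel; a binary lens adds short-lived caustic-crossing features on top of this baseline.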
Recently, \citet*{wfj} undertook a new approach, in which they used the Schwarzschild metric to derive kinematic type laws for the propagation of light rays through a lensing system. They found that the acceleration vectors thus derived gave results in close agreement with those obtained using the simpler model described above. Later, \citet{wf} considered a linearized approximation, in which the light rays were assumed to deflect only slightly from an otherwise straight-line path. They showed that their linearized equations were capable of an exact closed-form solution which agreed well with the fully non-linear simulations.
In the present paper, the approach of \citet{wfj} is generalized to include the effects of relativistic frame dragging due to rotation of the lensing object, as described by the Kerr metric. A kinematic description is given in Section 2. It is found that converting to Cartesian co-ordinates simplifies the description of the light paths, by removing all acceleration terms at zeroth order. The non-rotating (Schwarzschild) case is examined in Section 3, and rotation effects, which become significant at second order in the Schwarzschild radius, are considered in Section 4. Application to delay of pulses in a binary pulsar model is presented in Section 5, and concluding remarks are given in Section 6.
\section{Light Rays in a Kerr System}
The Kerr metric describes spacetime outside an uncharged point mass, rotating or otherwise. The Schwarzschild solution is contained as a special case wherein the mass has no angular momentum. Such a solution has spherical symmetry, whereas for a rotating body, the system is axi-symmetric only. For any light path other than one confined to the equatorial plane, a fully three dimensional description of the path is required. This is different from the Schwarzschild case, where any path is confined to a plane, and can thus be treated as a two dimensional problem. We therefore begin with the Kerr metric given in Boyer-Lindquist coordinates (the conversion is described in Section \ref{acccomp} below), as written in Chandrasekhar's thorough mathematical treatment of black holes (\citet{cha})
\begin{eqnarray}
\mathrm{d} s^2&=&\frac{\Delta}{\rho^2}[\mathrm{d} t-(a\sin^2 \theta)\mathrm{d}\phi]^2-\frac{\sin^2 \theta}{\rho^2}[(r^2+a^2)\mathrm{d}\phi-a\mathrm{d}t]^2-\frac{\rho^2}{\Delta}(\mathrm{d}r)^2-\rho^2(\mathrm{d}\theta)^2.
\end{eqnarray}
From this metric, the equations of motion can be derived. In this paper, we are interested in the paths of light rays, so we consider the null geodesics $ds=0$ for a Kerr spacetime (\citet{cha}, pp. 346-7):
\begin{eqnarray}
\rho^4 \dot{r}^2&=&r^4+(a^2-L^2-Q)r^2+r_s r (Q+(L-a)^2)-a^2 Q \label{rdot} \\
\rho^4 \dot{\theta}^2&=&Q+a^2\cos^2{\theta}-L^2 \cot^2{\theta}\label{thetadot} \\
\rho^2 \dot{\phi}&=&\frac{1}{\Delta}(r_s a r+\frac{(\rho^2-r_s r)L}{\sin^2{\theta}})\label{phidot}\\
\rho^2 \dot{t}&=&\frac{1}{\Delta}((r^2+a^2)^2-r_s a r L).\label{tdot}
\end{eqnarray}
Here, the Schwarzschild radius is $r_{s}=2MG/c^{2}$, $t$ is the time coordinate in the reference frame of the mass, $a=J/Mc$ is the angular momentum term, and the dot indicates differentiation by a parameter, which we will call $\tau'$. The other symbols are defined as: $\rho^{2}=r^2 + a^2 \cos^{2}\theta$; $\Delta=r^2+a^2-r_s r$; $M$ is the mass of the body; and $J$ is the angular momentum of the body. We are using geometrized units, that is, $c=G=1$. Finally, $L$ and $Q$ are constants of the motion, related closely to the angular momentum of the particle. The first of these, $L$, comes from the first integral of the Euler-Lagrange equation for $\dot \phi$, and the second, $Q$, is Carter's constant, which is derived from the separation of the Hamilton-Jacobi equation for geodesic motion (\citet{cha}, p. 342).
\subsection{Acceleration Components} \label{acccomp}
Solving equations (\ref{rdot}) and (\ref{thetadot}) for $\dot{r}$ and $\dot{\theta}$ introduces square roots, for which the sign ($\pm$) is ambiguous (that is, either sign may be chosen). Additionally, we found that numerical integrators such as the Runge-Kutta method find singular solutions such as closed orbits when integrating these equations, and so do not always find the path of unbound photons. To remove these difficulties, we will take derivatives, producing acceleration components which have a simpler form than the first derivatives. As the parameterisation is arbitrary, for simplicity we first re-parameterise in order to remove the $\rho^2$ terms at the beginning of each equation. We choose a parameter $\tau$ such that $r^2 \frac{d}{d\tau}=\rho^2\frac{d}{d\tau'}$. This has the result that each instance of $\rho$ on the left of the geodesic equations above becomes $r$. Re-using the dot-notation for $d/d\tau$ and differentiating gives the following equations:
\begin{eqnarray}
\ddot{r}&=&\frac{L^2+Q-a^2}{r^3}-\frac{3 r_s}{2 r^4}(Q+(L-a)^2)+\frac{2 a^2 Q}{r^5} \label{rdd} \\
\ddot{\theta}&=&\frac{\cos{\theta}}{\sin^3\theta}(L^2-a^2\sin^4\theta)-\frac{2 \dot{r}\dot{\theta}}{r} \label{thetadd} \\
\ddot{\phi}&=&\frac{a \dot{r}}{r^2\Delta^2}(r_s a^2 - r_s r^2 + a L (2 r -r_s)) -
\frac{ 2 L \cos{\theta}}{r^2\sin^3{\theta}} \dot\theta - \frac{2\dot{r}\dot\phi}{r}
\label{phidd}
\end{eqnarray}
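Equation (\ref{rdd}) can be cross-checked numerically. After the re-parameterisation, equation (\ref{rdot}) reads $\dot r^2 = R(r)/r^4$, with $R$ the quartic on its right side, so that $\ddot r = \tfrac{1}{2}\,\mathrm{d}(R(r)/r^4)/\mathrm{d}r$. The sketch below compares a central finite difference of this expression against the closed form of (\ref{rdd}); the parameter values are arbitrary samples.

```python
r_s, a, L, Q = 2.0, 0.7, 3.0, 1.5    # arbitrary sample values

def rdot2(r):
    # \dot r^2 from the re-parameterised eq. (rdot): R(r) / r^4
    R = r**4 + (a*a - L*L - Q)*r*r + r_s*r*(Q + (L - a)**2) - a*a*Q
    return R / r**4

def rddot(r):
    # closed form of eq. (rdd)
    return ((L*L + Q - a*a)/r**3
            - 1.5*r_s*(Q + (L - a)**2)/r**4
            + 2.0*a*a*Q/r**5)

r, h = 10.0, 1e-5
# (1/2) d(rdot2)/dr by central difference
fd = (rdot2(r + h) - rdot2(r - h)) / (4.0*h)
print(abs(fd - rddot(r)))            # small: the two expressions agree
```

This confirms that differentiating the first-integral form reproduces the stated acceleration component, term by term, without any sign ambiguity from square roots.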
In order to describe the path of a particle through a system consisting of more than a single body at the origin, it is convenient to express the acceleration components in Cartesian co-ordinates. The conversion is given by the following substitutions (\citet{cha}, pp. 306-7):
\begin{eqnarray}
x&=&(r \cos\widetilde{\varphi}+a \sin\widetilde{\varphi})\sin\theta \nonumber \\
y&=&(r \sin\widetilde{\varphi}-a \cos\widetilde{\varphi})\sin\theta \nonumber \\
z&=&r \cos\theta
\label{xyz}\end{eqnarray}
where $\dot{\widetilde{\varphi}}=\dot\phi-a \dot{r}/ \Delta$. These equations provide an implicit definition of $r$ as:
\begin{eqnarray}
r^4-r^2(x^2+y^2+z^2-a^2)-a^2 z^2=0 \nonumber
\end{eqnarray}
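Since this relation is a quadratic in $r^2$, the positive root can be computed in closed form. The sketch below does so and verifies the round trip through the conversion equations (\ref{xyz}) at an arbitrary sample point; the helper name is ours, for illustration.

```python
import math

def kerr_radius(x, y, z, a):
    # positive root of r^4 - r^2 (x^2 + y^2 + z^2 - a^2) - a^2 z^2 = 0,
    # treated as a quadratic in r^2
    p = x*x + y*y + z*z - a*a
    return math.sqrt(0.5 * (p + math.sqrt(p*p + 4.0*a*a*z*z)))

# with a = 0 this reduces to the ordinary spherical radius
print(kerr_radius(3.0, 4.0, 0.0, 0.0))   # 5.0

# round trip: map (r, theta, phi~) to (x, y, z) and recover r
r, th, ph, a = 2.0, 1.0, 0.7, 0.3        # arbitrary sample point
x = (r*math.cos(ph) + a*math.sin(ph)) * math.sin(th)
y = (r*math.sin(ph) - a*math.cos(ph)) * math.sin(th)
z = r*math.cos(th)
print(kerr_radius(x, y, z, a))           # 2.0, up to rounding
```

The round trip is exact because $x^2 + y^2 = (r^2 + a^2)\sin^2\theta$ and $z^2 = r^2\cos^2\theta$ together make the discriminant a perfect square, $(r^2 + a^2\cos^2\theta)^2$.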
Notice that if there is no rotation, that is, $a=0$, then this degenerates to a conversion from spherical co-ordinates, as expected. Differentiating the first equation in (\ref{xyz}) twice gives the following expression for $\ddot x$:
\begin{eqnarray}
\ddot x=\frac{1}{a^2+r^2}\biggl[(x+\frac{r_s a y}{\Delta})(r\ddot{r}+\frac{a^2-r^2}{a^2+r^2}\dot r^2)+ r \dot r \big(\dot x+\frac{r_s a \dot y}{\Delta}-\frac{r_s a (2 r-r_s)y \dot r}{\Delta^2}\big)\biggl]-\dot y \dot\phi-y \ddot\phi -\frac{x
\dot\theta^2}{\sin^2\theta}+\frac{\dot x \dot\theta +x \ddot\theta}{\tan \theta}.
\label{xdd}
\end{eqnarray}
In this equation, $\dot r$, $\dot \theta$ and $\dot \phi$ are obtained from the conversion equations (\ref{xyz}) by differentiation. A similar approach for $y$ and $z$ will give expressions for $\ddot y$ and $\ddot z$ respectively. Substituting in equations (\ref{rdd})-(\ref{phidd}) for $\ddot r$, $\ddot \theta$ and $\ddot \phi$ and simplifying leads to a system of the form
\begin{eqnarray}
\ddot x=\frac{-3 r_s x (L^2+Q)}{2 r^5}+a F_x(x,y,z,\dot x,\dot y,\dot z) \nonumber \\
\ddot y=\frac{-3 r_s y (L^2+Q)}{2 r^5}+a F_y(x,y,z,\dot x,\dot y,\dot z) \nonumber \\
\ddot z=\frac{-3 r_s z (L^2+Q)}{2 r^5}+a F_z(x,y,z,\dot x,\dot y,\dot z)
\label{xyzdd2}
\end{eqnarray}
The constant $a$ has a valid range from $-r_s/2$ to $r_s/2$. It is therefore reasonable to say that the angular momentum term $a$ is of the same order of magnitude as the Schwarzschild radius $r_s$. It may then be said that because the functions $F_x, F_y$ and $F_z$ are of order $r_s$, the first term in each of the equations in (\ref{xyzdd2}) is of first order, and the remainder is second order and higher. The full acceleration components in equation (\ref{xyzdd2}) are given in the appendix.
\section{Schwarzschild Acceleration in Cartesian Co-ordinates} \label{secschw}
We can see that for the non-rotating (Schwarzschild) case, that is, $a=0$, we obtain the elegant result:
\begin{eqnarray}
\mathbf{\ddot{r}}=\frac{-3 r_s (L^2+Q)}{2 r^5}\mathbf{r}
\label{schw1}
\end{eqnarray}
where $\mathbf{r}=[x,y,z]$ is the position vector, and $r=\lvert \lvert \mathbf{r} \rvert \rvert=\sqrt{x^2+y^2+z^2}$ is its Euclidean distance from the origin. From the non-rotating ($a=0$) versions of equations (\ref{thetadot}) and (\ref{phidot}) and the conversion equations (\ref{xyz}), we can write
\begin{eqnarray}
L&=&x \dot y-y \dot x \nonumber \\
Q&=&(x \dot z-z \dot x)^2+(z \dot y-y \dot z)^2.
\label{LQ}
\end{eqnarray}
We can now say that $L^2+Q$ is the square of the impact parameter, which is the perpendicular distance of the initial (straight-line) path of the photon from the point lens. Equation (\ref{schw1}) is presented in a form similar to the standard Newtonian gravitational equation
\begin{eqnarray}
\mathbf{\ddot{r}}=\frac{-r_s}{2 r^3}\mathbf{r}. \nonumber
\end{eqnarray}
However, it should be noted that the parameter in equation (\ref{schw1}) differs in that it includes the time dilation factor, that is, $\dot t=r/(r-r_s) $. It will be helpful to explore the Schwarzschild solution in this coordinate system before continuing on to the more general Kerr solution. Expanding equation (\ref{schw1}) into the three components gives the equality:
\begin{eqnarray}
\frac{\ddot x}{x}=\frac{\ddot y}{y}=\frac{\ddot z}{z}, \nonumber
\end{eqnarray}
which can be integrated to give the angular momentum conservation equations, analogously with classical mechanics:
\begin{eqnarray}
x \dot y-y \dot x&=&L_z \nonumber \\
x \dot z-z \dot x&=&L_y \nonumber \\
y \dot z-z \dot y&=&L_x. \nonumber
\end{eqnarray}
In these equations the constants $L_x$, $L_y$ and $L_z$ are the three components of angular momentum. From equation (\ref{phidot}), we can identify $L$ with $L_z$. Taking the inner product of equation (\ref{schw1}) with $\bf \dot r$ and integrating gives
\begin{eqnarray}
\lvert \lvert \mathbf{\dot r} \rvert \rvert^2=1+\frac{r_s(L^2+Q)}{r^3}, \nonumber
\end{eqnarray}
after the integration constant has been determined by the boundary condition $\lvert \lvert \mathbf{\dot r} \rvert \rvert \rightarrow 1$ as $r \rightarrow \infty$. Further use of the identity (\ref{LQ}) enables this to be expressed in the final form
\begin{eqnarray}
(x \dot x+y \dot y+z \dot z)^2=x^2+y^2+z^2-(L_x^2+L_y^2+L_z^2)+\frac{r_s}{\sqrt{(x^2+y^2+z^2)}}(L_x^2+L_y^2+L_z^2).
\label{schw3}
\end{eqnarray}
Equation (\ref{schw3}) permits us to identify $Q$ with $L_x^2+L_y^2$ and we arrive back at the non-rotating version of (\ref{rdd}). We have identified $Q$ and $L$ in the non-rotating case with the angular momentum of the particle. In the rotating case, we will see that while there are conserved quantities, $Q$ and $L$, they are not identical with $L_x^2+L_y^2$ and $L_z$ above. Due to the spherical symmetry of the Schwarzschild system, $L$ and $Q$ only appear in the form $L^2+Q$. For readability in the Schwarzschild analysis to follow, it is convenient to introduce the non-negative constant $K=L^2+Q$.
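The identifications above imply $L^2+Q=\lvert \lvert \mathbf{r} \times \mathbf{\dot r} \rvert \rvert^2$, which is easy to verify numerically. The following short Python sketch (illustrative only; the function names and test values are ours, not part of the paper) checks the identity for an arbitrary state:

```python
def L_and_Q(pos, vel):
    # The conserved quantities of equation (LQ) in the non-rotating case
    x, y, z = pos
    vx, vy, vz = vel
    L = x*vy - y*vx
    Q = (x*vz - z*vx)**2 + (z*vy - y*vz)**2
    return L, Q

def cross_norm_sq(pos, vel):
    # Squared norm of r x v, the squared impact parameter of a unit-speed ray
    x, y, z = pos
    vx, vy, vz = vel
    cx = y*vz - z*vy
    cy = z*vx - x*vz
    cz = x*vy - y*vx
    return cx*cx + cy*cy + cz*cz

pos, vel = (3.0, -2.0, 5.0), (0.6, 0.64, 0.48)   # arbitrary illustrative state
L, Q = L_and_Q(pos, vel)
```

Here $Q$ collects the $x$ and $y$ components of $\mathbf{r} \times \mathbf{\dot r}$ and $L$ is its $z$ component, so the sum of squares reproduces the full cross-product norm.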
\subsection{Linearized Schwarzschild Expansion} \label{secschw1}
We can approximate the path taken by photons in the Schwarzschild system, using the expansions:
\begin{eqnarray}
x&=&X_0+r_s X_1+r_s^2 X_2+O(r_s^3) \nonumber \\
y&=&Y_0+r_s Y_1+r_s^2 Y_2+O(r_s^3) \nonumber \\
z&=&Z_0+r_s Z_1+r_s^2 Z_2+O(r_s^3)
\label{xyz1}
\end{eqnarray}
where $r_s$ is considered small, relative to the distance of closest approach. Matching terms of corresponding order in $r_s$ will give the zeroth, first and second order solutions. Differentiating the first equation in (\ref{xyz1}) twice and equating with the $x$-component of equation (\ref{schw1}) yields:
\begin{eqnarray}
\ddot X_0+r_s \ddot X_1=\frac{-3 r_s x K}{2 r^5}+O(r_s^2), \nonumber
\end{eqnarray}
where instances of $x$, $y$ and $z$ on the right-hand side must also be expanded. Matching up the zeroth-order terms gives $\ddot X_0=0$ (and similarly $\ddot Y_0=0$ and $\ddot Z_0=0$). Integrating twice gives us the zeroth order solution
\begin{eqnarray}
X_0&=&C_1 \tau+C_2 \nonumber \\
Y_0&=&C_3 \tau+C_4 \nonumber \\
Z_0&=&C_5 \tau+C_6
\label{xyz0}
\end{eqnarray}
for some constants of integration $C_1$ to $C_6$. As is expected, this solution describes a straight line. In order to solve the first-order and second-order equations, it will be necessary to expand $r$ and $K=L^2+Q$ in powers of $r_s$ using equation (\ref{xyz1}). We write $r=R_0+r_s R_1+r_s^2 R_2+O(r_s^3)$ and then the zeroth order term for $r$ is given by
\begin{eqnarray}
R_0^2&=&X_0^2+Y_0^2+Z_0^2 \nonumber \\
&=&A \tau^2+2 B \tau+C \nonumber
\end{eqnarray}
where we have introduced three constants for readability:
\begin{eqnarray}
A&=&C_1^2+C_3^2+C_5^2 \nonumber \\
B&=&C_1 C_2+C_3 C_4+C_5 C_6 \nonumber \\
C&=&C_2^2+C_4^2+C_6^2. \label{abc}
\end{eqnarray}
However, we note that in the zeroth order solution, the speed of the photon ($=\sqrt{C_1^2+C_3^2+C_5^2}$) is $1$, so that $A=1$. The zeroth order term for $K$ is
\begin{eqnarray}
K_0&=&(X_0 \dot Y_0-Y_0 \dot X_0)^2+(X_0 \dot Z_0-Z_0 \dot X_0)^2+(Z_0 \dot Y_0-Y_0 \dot Z_0)^2 \nonumber \\
&=&C-B^2. \nonumber
\end{eqnarray}
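The value $K_0=C-B^2$ is independent of $\tau$, as it must be for a conserved quantity. A quick numerical check (an illustrative Python sketch with arbitrary constants, not part of the derivation) compares $C-B^2$ against the cross-product expression for $K_0$ evaluated along the straight zeroth-order path:

```python
# Direction cosines (C1, C3, C5) of a straight-line ray, normalised so that
# A = C1^2 + C3^2 + C5^2 = 1, and an arbitrary starting point (C2, C4, C6)
C1, C3, C5 = 0.6, 0.64, 0.48
C2, C4, C6 = 3.0, -2.0, 5.0

B = C1*C2 + C3*C4 + C5*C6
C = C2*C2 + C4*C4 + C6*C6
K0 = C - B*B

def K0_direct(tau):
    # (X0 Y0' - Y0 X0')^2 + (X0 Z0' - Z0 X0')^2 + (Z0 Y0' - Y0 Z0')^2
    X0, Y0, Z0 = C1*tau + C2, C3*tau + C4, C5*tau + C6
    return (X0*C3 - Y0*C1)**2 + (X0*C5 - Z0*C1)**2 + (Z0*C3 - Y0*C5)**2
```

Evaluating `K0_direct` at any two values of $\tau$ returns the same number, equal to $C-B^2$.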
Terms of first order in the small parameter $r_s$ are now equated and we obtain
\begin{eqnarray}
\ddot X_1=\frac{-3 X_0 K_0}{2 R_0^5}. \nonumber
\end{eqnarray}
We can now use the substitution $\tau+B=\sqrt{K_0} \tan \gamma$ and integrate twice. This gives the first-order corrections to the light paths
\begin{eqnarray}
X_1&=&\frac{X_0}{2 R_0}-\frac{R_0}{K_0}(C_2-B C_1)+C_{11} \tau + C_{21} \nonumber \\
Y_1&=&\frac{Y_0}{2 R_0}-\frac{R_0}{K_0}(C_4-B C_3)+C_{31} \tau + C_{41} \nonumber \\
Z_1&=&\frac{Z_0}{2 R_0}-\frac{R_0}{K_0}(C_6-B C_5)+C_{51} \tau + C_{61}
\label{XYZ1}
\end{eqnarray}
Consequently, the first-order velocity components are:
\begin{eqnarray}
\dot X_1&=&\frac{C_1}{2 R_0}-\frac{X_0 (\tau+B)}{2 R_0^3}-\frac{\tau+B}{R_0 K_0}(C_2-B C_1)+C_{11} \nonumber \\
\dot Y_1&=&\frac{C_3}{2 R_0}-\frac{Y_0 (\tau+B)}{2 R_0^3}-\frac{\tau+B}{R_0 K_0}(C_4-B C_3)+C_{31} \nonumber \\
\dot Z_1&=&\frac{C_5}{2 R_0}-\frac{Z_0 (\tau+B)}{2 R_0^3}-\frac{\tau+B}{R_0 K_0}(C_6-B C_5)+C_{51}
\label{XYZ1d}
\end{eqnarray}
Choosing the initial position for the light ray gives us the three constants $C_2, C_4, C_6$. We then specify the initial angle of the ray by choosing two of $\dot x$, $\dot y$ and $\dot z$, and the third of these can be identified using the geodesic equations (\ref{rdot}), (\ref{thetadot}), and (\ref{phidot}) to determine the speed:
\begin{eqnarray}
\dot x^2+\dot y^2+\dot z^2=\dot r^2+r^2 \sin^2 \theta \dot \phi^2+r^2 \dot \theta^2=1+r_s K/r^3. \label{speed}
\end{eqnarray}
This gives the constants $C_1$, $C_3$ and $C_5$. We can then solve for $C_{11}$ to $C_{61}$ in the same way using the equations in (\ref{XYZ1}) and (\ref{XYZ1d}) and the speed equation (\ref{speed}). We now have complete path equations for the first order approximation. Converting the velocity given by equations (\ref{rdot})-(\ref{phidot}) (with $a=0$) to Cartesian co-ordinates gives a constraint on the constants of integration which will be useful later:
\begin{eqnarray}
C_1 C_{11}+C_3 C_{31}+C_5 C_{51}=0. \nonumber
\end{eqnarray}
\subsection{Application: Magnification map - binary system}
We are now in a position to determine the caustic map due to photons travelling through a system consisting of one or more non-rotating masses, either by tracing their paths using forward integration of equation (\ref{schw1}) or by solving the first order equations as above. In either case, we determine the initial conditions for each light ray. With the lens placed at the origin, we place the source of the rays on the $x$-axis, at $(x_{source},0,0)$. This source is taken to emit light isotropically, so the rays are spread evenly over the azimuthal angle $\phi_s$ from $0$ to $2 \pi$ and the inclination angle $\theta_s$ from $-\pi/2$ to $\pi/2$. To save on computational time, we will only include the small subset of these rays that will pass near to the planet. For each ray, $dy/dx=\tan \phi_s$ and $dz/dx=\tan \theta_s \sec \phi_s$. This gives us five of the six initial values for the ray. The speed of the photon is then determined by substituting the five chosen initial values into the expression for $K$ and into the speed equation (\ref{speed}), which yields an equation for $\dot x$:
\begin{eqnarray}
\dot x^2=\left(1+ \bigg[\left( \frac{dy}{dx}\right)^2+\left(\frac{dz}{dx}\right)^2\bigg]\bigg[ 1-\frac{r_s}{ \left|x_{source}\right| }\bigg] \right)^{-1} \nonumber
\end{eqnarray}
We can then say that $\dot y=\dot x (dy/dx)$ and $\dot z=\dot x (dz/dx)$. Having the six initial values ($x,y,z,\dot x,\dot y,\dot z$), forward integration can now be used to solve the system of six first order equations
\begin{equation}
\frac{d}{d\tau}
\left[\begin{array}{c}
x\\
y\\
z\\
\dot x\\
\dot y\\
\dot z\\
\end{array}
\right]=\left[\begin{array}{c}
\dot x\\
\dot y\\
\dot z\\
\ddot x\\
\ddot y\\
\ddot z
\nonumber
\end{array}\right]
\end{equation}
where the acceleration components $\ddot x, \ddot y$ and $\ddot z$ are given by equation (\ref{schw1}). The integration is stopped once the value of $x$ corresponds to that of the observer, and a point is then plotted at ($y,z$).
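The forward integration just described can be sketched in a few lines. The following Python fragment (an illustrative sketch with our own function names, not the code used to generate the figures) integrates the six first-order equations with a classical fourth-order Runge-Kutta step, using equation (\ref{schw1}) for the acceleration and computing $K=L^2+Q$ once from the initial conditions:

```python
import math

def K_conserved(pos, vel):
    # K = L^2 + Q of equation (LQ), evaluated once from the initial conditions
    x, y, z = pos
    vx, vy, vz = vel
    L = x*vy - y*vx
    Q = (x*vz - z*vx)**2 + (z*vy - y*vz)**2
    return L*L + Q

def deriv(s, r_s, K):
    # Right-hand side of the six first-order equations; the acceleration is
    # equation (schw1): d^2 r / dtau^2 = -3 r_s K r / (2 r^5)
    x, y, z, vx, vy, vz = s
    r = math.sqrt(x*x + y*y + z*z)
    c = -1.5*r_s*K/r**5
    return [vx, vy, vz, c*x, c*y, c*z]

def trace(pos, vel, r_s, tau_max, n):
    # Classical fourth-order Runge-Kutta steps in the affine parameter tau
    K = K_conserved(pos, vel)
    s = list(pos) + list(vel)
    h = tau_max/n
    for _ in range(n):
        k1 = deriv(s, r_s, K)
        k2 = deriv([a + 0.5*h*d for a, d in zip(s, k1)], r_s, K)
        k3 = deriv([a + 0.5*h*d for a, d in zip(s, k2)], r_s, K)
        k4 = deriv([a + h*d for a, d in zip(s, k3)], r_s, K)
        s = [a + h*(p + 2*q + 2*u + v)/6.0
             for a, p, q, u, v in zip(s, k1, k2, k3, k4)]
    return s

# A ray passing a lens of Schwarzschild radius 0.02 at impact parameter 10
r_s, b = 0.02, 10.0
final = trace((-1000.0, b, 0.0), (1.0, 0.0, 0.0), r_s, 2000.0, 20000)
bend = abs(math.atan2(final[4], final[3]))
```

Because the force is central, $\mathbf{r} \times \mathbf{\dot r}$ (and hence $K$) is conserved along the integrated path, and the accumulated bending for a ray of impact parameter $b$ approaches $2r_s/b$; both properties provide convenient checks on the integrator.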
Unsurprisingly, tracing such paths through a system consisting of a central mass and a single planet produces the same diamond caustic pattern, similar to that seen in the top part of Fig. \ref{kerr1} in Section 4, which was described by \citet{wam}, and was also plotted previously using 2-dimensional polar co-ordinates in \citet{wfj} and \citet{wf}. Interestingly, the computations were slightly quicker with this new Cartesian system, as it was not necessary to rotate each ray into the $x,y$ or $r,\phi$ plane, and also because the zeroth order terms of $\ddot x$, $\ddot y$ and $\ddot z$ in equation (\ref{schw1}) are now all zero (whereas those of $\ddot r$ and $\ddot \phi$ are not). This leaves only small acceleration terms which the numerical integration routine can process more rapidly. However, any simple forward integration method, including this one, is still slower than more elaborate methods, such as the semi-analytical method of \citet{dex} or the numerical methods of \citet{rb}, which increase computational efficiency through the use of elliptic integrals and clever changes of variables.
Similarly, to obtain the first order solution (\ref{XYZ1}) calculated above, we again use the five given initial conditions and the speed equation to give ($x,y,z,\dot x,\dot y,\dot z$). These are used with the zeroth order equations (\ref{xyz0}) and their derivatives to derive the constants $C_1-C_6$ and then with the first order equations (\ref{XYZ1}) and (\ref{XYZ1d}) to derive $C_{11}-C_{61}$. The position of the ray ($y,z$) at the observer's plane can then be directly calculated from the zeroth and first order equations, by solving for $\tau$ at the value of $x$ corresponding to the observer, and then applying that value of $\tau$ to the equations for $y$ and $z$. This gives a caustic map indistinguishable from that obtained using forward integration, but in a much shorter time, approaching the speed of the thin lens formula $\delta=r_s/b$.
\subsection{Application: Total Deflection Angle - first order approximation} \label{sectiondeflect1}
The well known total deflection for a light ray passing near to a spherically symmetric mass can now easily be estimated to first order in $r_s$. Due to the spherical symmetry of the space-time around the non-rotating mass, we can choose a ray confined to the equatorial plane, without loss of generality. At $\tau=0$, let the ray cross the $y$-axis parallel to the $x$-axis, at some value $y_i$, as shown in Fig. \ref{fig2}. Solving for the speed of the particle at $\tau=0$, (where $\dot y=0$), it can be seen that $\dot x^2=1+r_s K/r^3$. Also, at that point, $x=0$ and $y=y_i$. It is straightforward to solve for the zeroth-order constants and obtain $C_1=1$, $C_2=0$, $C_3=0$, $C_4=y_i$. The first-order constants can then be calculated to give $C_{11}=0$, $C_{21}=0$, $C_{31}=0$, $C_{41}=1/2$. Having the full first-order path equations, the total deflection is given by the difference in $\arctan (\dot y/\dot x)$ as $\tau \rightarrow \infty$ and $\arctan (\dot y/\dot x)$ as $\tau \rightarrow -\infty$. This gives the result $2r_s/y_i+O(r_s^2)$, which is consistent with the well known Einstein deflection. In this case, $y_i$ is the point of closest approach (often referred to as $r_0$), and also the zeroth-order approximation to the impact parameter, often referred to as $b$. Thus to first-order in $r_s$, $2r_s/y_i=2r_s/b$.
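For this configuration the first-order velocity components reduce to two short expressions, which can be evaluated directly. The Python sketch below (illustrative only; it hard-codes the constants just derived) recovers the Einstein deflection from the difference of the asymptotic slopes:

```python
import math

def first_order_velocity(tau, r_s, y_i):
    # Velocity components to first order in r_s for the ray of Fig. 2:
    # C1 = 1, C2 = 0, C3 = 0, C4 = y_i, hence B = 0 and K0 = y_i**2, with
    # first-order constants C11 = C21 = C31 = 0 and C41 = 1/2
    R0 = math.sqrt(tau*tau + y_i*y_i)
    dX1 = 1.0/(2.0*R0) - tau*tau/(2.0*R0**3)    # x-component of (XYZ1d)
    dY1 = -y_i*tau/(2.0*R0**3) - tau/(R0*y_i)   # y-component of (XYZ1d)
    return 1.0 + r_s*dX1, r_s*dY1

r_s, y_i = 0.01, 5.0
vx_out, vy_out = first_order_velocity(1.0e7, r_s, y_i)
vx_in, vy_in = first_order_velocity(-1.0e7, r_s, y_i)
deflection = math.atan2(vy_in, vx_in) - math.atan2(vy_out, vx_out)
```

At large $|\tau|$ the transverse velocity tends to $\mp r_s/y_i$, so the computed deflection approaches $2r_s/y_i$.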
\begin{figure}
\vspace{1cm}
\caption{Approximating deflection and delay to the light path near a massive object, located at the origin. For ease of calculations, the light path is chosen so that the ray is horizontal as it crosses the $y$-axis. For a non-rotating mass, the path is left-right symmetric.}
\includegraphics[width=15cm]{fig2} \\
\label{fig2}
\end{figure}
\subsection{Application: Travel Time Delay - first order approximation} \label{delaysec1}
Using the first-order equations again, it is a simple matter to compute the travel-time for a photon from any initial point and time ($x_i, y_i, \tau_i$) to any other point and time ($x_f, y_f, \tau_f$). For ease of computation, and without loss of generality, we may use the same arrangement, and therefore the same constants, as described in the angle calculation illustrated in Fig. \ref{fig2} (Section \ref{sectiondeflect1}). In order to measure the travel-time to a point of given radius $r_f$, we solve for $\tau_f$ by means of the path equations with the constraint $x_f^2+y_f^2=r_f^2$. This will simplify the calculation of the travel time delay for a light ray passing close to the sun. This delay has been calculated to first order previously, and will serve as a check on this new method. We note that $y_i$ is the closest approach to the sun, which is usually designated $r_0$. Then, at the final point, $\tau=\tau_f$, so $X_0=\tau_f$ and $Y_0=r_0$, so that at that point, $R_0=\sqrt{\tau_f^2+r_0^2}$. The first order terms are
\begin{eqnarray}
X_1&=&\frac{\tau_f}{2\sqrt{\tau_f^2+r_0^2}} \nonumber \\
Y_1&=&\frac{r_0}{2\sqrt{\tau_f^2+r_0^2}}-\frac{\sqrt{\tau_f^2+r_0^2}}{r_0}+\frac{1}{2}. \nonumber
\end{eqnarray}
To obtain the first-order delay term, we solve for $\tau_f$, and then convert to co-ordinate time $t$ by equation (\ref{tdot}), giving
\begin{eqnarray}
r_f^2&=&x_f^2+y_f^2 \nonumber \\
&=&\bigg[\tau_f+r_s\frac{\tau_f}{2\sqrt{\tau_f^2+r_0^2}}\bigg]^2+\bigg[r_0+r_s\bigg(\frac{r_0}{2\sqrt{\tau_f^2+r_0^2}}-\frac{\sqrt{\tau_f^2+r_0^2}}{r_0}+\frac{1}{2}\bigg)\bigg]^2+O(r_s^2) \nonumber \\
&=&\tau_f^2+r_0^2+r_s(r_0-\sqrt{\tau_f^2+r_0^2})+O(r_s^2). \label{delaycalc1}
\end{eqnarray}
This is a quadratic equation in $\sqrt{\tau_f^2+r_0^2}$. After solving, we see that
\begin{eqnarray}
\tau_f=\pm\sqrt{r_f^2-r_0^2}(1+\frac{r_s}{2(r_f+r_0)})+O(r_s^2).\nonumber
\end{eqnarray}
Solving equation (\ref{tdot}) to first order and integrating gives $t=\tau+r_s \ln((\tau+R_0)/r_0)+O(r_s^2)$, the constant of integration being determined by letting $t=0$ when $\tau=0$. Substituting this into equation (\ref{delaycalc1}) gives the total travel time
\begin{eqnarray}
t_f&=&\pm\bigg(\sqrt{r_f^2-r_0^2}+\frac{r_s}{2}\sqrt{\frac{r_f-r_0}{r_f+r_0}}+r_s \ln{\frac{r_f+\sqrt{r_f^2-r_0^2}}{r_0}}\bigg)+O(r_s^2). \label{traveltime1}
\end{eqnarray}
The first term on the right hand side of equation (\ref{traveltime1}) is the straight-line time, and the rest constitutes the delay. This delay is in complete agreement with the well known first order delay (for example, see \citet{wei} p.202).
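The expressions for $\tau_f$ and $t_f$ can be checked for internal consistency: substituting $\tau_f$ back into the first-order path of Fig. \ref{fig2} must reproduce the target radius $r_f$ up to $O(r_s^2)$. A Python sketch of this check (illustrative values and function names are ours):

```python
import math

def tau_final(r_f, r_0, r_s):
    # Affine parameter at radius r_f, to first order in r_s
    return math.sqrt(r_f*r_f - r_0*r_0)*(1.0 + r_s/(2.0*(r_f + r_0)))

def travel_time(r_f, r_0, r_s):
    # Equation (traveltime1): co-ordinate time from closest approach r_0 to r_f
    s = math.sqrt(r_f*r_f - r_0*r_0)
    return (s + 0.5*r_s*math.sqrt((r_f - r_0)/(r_f + r_0))
            + r_s*math.log((r_f + s)/r_0))

# Substituting tau_f into the first-order path of Fig. 2
# (x = tau + r_s X_1, y = r_0 + r_s Y_1) must reproduce r_f up to O(r_s^2)
r_s, r_0, r_f = 1.0e-4, 100.0, 1.0e4
tf = tau_final(r_f, r_0, r_s)
R0 = math.sqrt(tf*tf + r_0*r_0)
x_f = tf + r_s*tf/(2.0*R0)
y_f = r_0 + r_s*(r_0/(2.0*R0) - R0/r_0 + 0.5)
residual = abs(math.sqrt(x_f*x_f + y_f*y_f) - r_f)
```

With these values the residual is many orders of magnitude below the first-order term $r_s(r_0-\sqrt{\tau_f^2+r_0^2})$, while an error in the first-order correction to $\tau_f$ would show up at order $r_s$.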
\subsection{Second Order Schwarzschild Expansion} \label{secschw2}
Frame dragging effects due to rotation do not occur at first order, so it will be necessary to consider the Kerr metric equations at second order. Before doing so, it will be worth identifying the second order expansion of the Schwarzschild system. The advantage of this approach is that we can follow the same procedure as above while dealing with fewer terms than in the full rotational model.
The second-order terms $X_2$, $Y_2$ and $Z_2$ in the expansion (\ref{xyz1}) are now considered. First it is necessary to expand $r=\sqrt{x^2+y^2+z^2}$ and the constant $K$ to first order in $r_s$, that is, $r=R_0+r_s R_1+O(r_s^2)$ and $K=K_0+r_s K_1+O(r_s^2)$. From Section \ref{secschw1}, it is straightforward to establish that
\begin{eqnarray}
R_1&=&-\frac{1}{2}+\frac{1}{R_0}(B_{R} \tau+C_{R}) \nonumber \\
K_1&=&2(C_R-B_R B) \nonumber
\label{r1eq2}
\end{eqnarray}
where we have introduced two more constants, $B_R$ and $C_R$ for readability. These are named according to their similarity with the constants $B$ and $C$ in equations (\ref{abc}). They are
\begin{eqnarray}
B_{R}&=&C_1 C_{21}+C_2 C_{11}+C_3 C_{41}+C_4 C_{31}+C_5 C_{61}+C_6 C_{51} \nonumber \\
C_{R}&=&C_2 C_{21}+C_4 C_{41}+C_6 C_{61}. \nonumber
\end{eqnarray}
In a manner similar to the first order expansion of section \ref{secschw1}, we can now expand $\ddot x$ to second order, and equation (\ref{schw1}) yields
\begin{eqnarray}
\ddot X_0+r_s \ddot X_1+r_s^2 \ddot X_2=\frac{-3 r_s (X_0+r_s X_1)(K_0+r_s K_1)}{2 (R_0+r_s R_1)^5}+O(r_s^3). \nonumber
\end{eqnarray}
Expanding and matching terms with coefficient $r_s^2$ gives:
\begin{eqnarray}
\ddot X_2&=&\frac{-3 K_0}{2 R_0^5}\big(X_1-5 X_0 R_1/R_0+ K_1 X_0/K_0\big) \nonumber\\
&=&\frac{-3 K_0}{2 R_0^5}\bigg(3 \frac{X_0}{R_0}-\frac{R_0}{K_0}(C_2-B C_1)+C_{11} \tau + C_{21} -5\frac{X_0}{R_0^2} (B_{R} \tau+C_{R})+\frac{K_1}{K_0}X_0\bigg). \nonumber
\end{eqnarray}
Integrating twice gives the equation for $X_2$:
\begin{eqnarray}
X_2 = \frac{X_0}{R_0}F_1 +\frac{C_1}{\sqrt{K_0}}F_2+\frac{C_1 B-C_2}{K_0}F_3 -\frac{C_{21} - C_{11} B}{K_0}R_0+\frac{C_{11}\tau+C_{21}}{2R_0}+C_{12} \tau+C_{22}.
\label{x2int}
\end{eqnarray}
The intermediary functions $F_1$, $F_2$ and $F_3$ are given by:
\begin{eqnarray}
F_1&=&\frac{9}{16 R_0}-\frac{B_{R} \tau+C_{R}}{2 R_0^2} \nonumber\\
F_2&=&\frac{B_{R} R_0}{\sqrt{K_0}}+\frac{9}{16} \arctan{\frac{\tau + B}{\sqrt{K_0}}} \nonumber\\
F_3&=&2 R_0\frac{B_{R} B-C_{R}}{K_0}+\frac{B_{R} \tau+C_{R}}{R_0}+\frac{15}{16}\frac{\tau+B}{\sqrt{K_0}} \arctan{\frac{\tau + B}{\sqrt{K_0}}} \nonumber
\end{eqnarray}
Due to the spherical symmetry of the Schwarzschild space-time, the equations for $Y_2$ and $Z_2$ have a similar form:
\begin{eqnarray}
Y_2&=&\frac{Y_0}{R_0}F_1 +\frac{C_3}{\sqrt{K_0}}F_2+\frac{C_3 B-C_4}{K_0}F_3 -\frac{C_{41} - C_{31} B}{K_0}R_0+\frac{C_{31}\tau+C_{41}}{2R_0}+C_{32} \tau+C_{42} \nonumber\\
Z_2&=&\frac{Z_0}{R_0}F_1 +\frac{C_5}{\sqrt{K_0}}F_2+\frac{C_5 B-C_6}{K_0}F_3 -\frac{C_{61} - C_{51} B}{K_0}R_0+\frac{C_{51}\tau+C_{61}}{2R_0}+C_{52} \tau+C_{62}
\label{yz2int}
\end{eqnarray}
As in Section \ref{secschw1}, we can identify the constants, $C_{12}$, $C_{32}$ and $C_{52}$ by solving for $\dot X_2$, $\dot Y_2$ and $\dot Z_2$ at $\tau=0$, and likewise to determine $C_{22}$, $C_{42}$ and $C_{62}$ we solve for $X_2$, $Y_2$ and $Z_2$ at $\tau=0$. We can now compare the paths taken by light rays as calculated using the following three methods:
(i) forward integration of equation (\ref{schw1});
(ii) zeroth-order path equations (\ref{xyz0}) with first-order corrections (\ref{XYZ1}); and
(iii) zeroth-order path equations (\ref{xyz0}) with first-order corrections (\ref{XYZ1}) and second-order corrections (\ref{x2int}) and (\ref{yz2int}).
As the paths within the Schwarzschild system are contained within a plane, we can compare our different solutions in two dimensions without loss of generality. This is illustrated in Fig. \ref{schw_cf} with $r_s=0.2$, where the three different methods have been applied to five rays originating from (0,-10) with different starting angles, and each being deflected by the mass at the origin. The three methods agree well in the weak gravity regime at the top of the diagram, and the second order solution does not diverge much from the exact solution until the deflection becomes quite large, that is, for rays passing close to the mass. In each case, the second order paths approximate the paths obtained by forward integration more closely than do the first order paths. In particular, for selected points, if we call the difference in deflection angle between the numerically calculated path and the first-order path $\delta_1$, and the difference in deflection angle between the numerically calculated path and the second-order path $\delta_2$, we find that $\delta_2 \approx \delta_1^2$. That is, the errors are found to scale as $r_s$ and $r_s^2$ respectively, as expected.
\begin{figure}
\vspace{1cm}
\caption{Comparison of first and second order path approximations against numerical integration of the full acceleration vector. Rays originate at (-10,0) and are deflected by the mass at (0,0) of Schwarzschild radius $0.2$. Five different initial trajectories are chosen, each of which is computed using the three different methods. In each case, the deflection is greatest with the forward integration of the acceleration vector, and least with the first order path equations.}
\includegraphics[width=16cm]{fig3}
\label{schw_cf}
\end{figure}
\subsection{Application: Total Deflection angle - second order approximation}
Following the earlier procedure for the first order approximation of the total deflection angle in Section \ref{sectiondeflect1}, we can now easily determine the second order correction. Of the second order constants, only $C_{32}$ will appear in this calculation, and by noting that $\dot y=0$ at $\tau=0$, its value is found to be $C_{32}=0$. The deflection angle is again given by the difference in $\arctan (\dot y/\dot x)$ as $\tau \rightarrow \infty$ and $ \arctan (\dot y/\dot x)$ as $\tau \rightarrow -\infty$. In the system described in Section \ref{sectiondeflect1}, and represented in Fig. \ref{fig2}, this is approximated by
\begin{eqnarray}
\Delta\Phi&=&2 \frac{\dot y}{\dot x}\bigg|_{\tau \rightarrow \infty}=2\frac{\dot Y_0+r_s\dot Y_1+r_s^2\dot Y_2}{\dot X_0+r_s\dot X_1+r_s^2\dot X_2}\bigg|_{\tau \rightarrow \infty}+O(r_s^3). \nonumber
\end{eqnarray}
In the system under consideration, $\dot X_0=1$, $\dot Y_0=0$ and $\dot X_1 \rightarrow 0$ as $\tau \rightarrow \pm \infty $ so that
\begin{eqnarray}
\Delta\Phi&=&2(r_s\dot Y_1+r_s^2\dot Y_2)|_{\tau \rightarrow \infty}+O(r_s^3) \nonumber \\
&=&\frac{2r_s}{C_4}\bigg[1-\frac{r_s}{2C_4}+\frac{15\pi}{32}\frac{r_s}{C_4}\bigg]+O(r_s^3) \nonumber \\
&=&\frac{2r_s}{r_0}\bigg[1-\frac{r_s}{2r_0}+\frac{15\pi}{32}\frac{r_s}{r_0}\bigg]+O(r_s^3) \nonumber \\
&=&\frac{2r_s}{b}\bigg[1+\frac{15\pi}{32}\frac{r_s}{b}\bigg]+O(r_s^3) \nonumber
\end{eqnarray}
where $b=\sqrt{K_0+r_s K_1}+O(r_s^2)=C_4+r_s/2+O(r_s^2)$ is the impact parameter. This deflection to second order is found to be in complete agreement with that calculated by \citet{fish}.
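To give a sense of scale, the second-order term can be evaluated for a ray grazing the sun (illustrative values $r_s \approx 2.95$ km and $b \approx 696000$ km; the Python sketch below is ours):

```python
import math

ARCSEC = math.pi/(180.0*3600.0)   # radians per arcsecond

def deflection_first(b, r_s):
    return 2.0*r_s/b

def deflection_second(b, r_s):
    # (2 r_s/b)(1 + (15 pi/32)(r_s/b)), with b the impact parameter
    return (2.0*r_s/b)*(1.0 + (15.0*math.pi/32.0)*(r_s/b))

# Illustrative solar values: r_s about 2.95 km, grazing impact parameter 696000 km
r_s, b = 2.95, 696000.0
total_arcsec = deflection_second(b, r_s)/ARCSEC
extra_microarcsec = (deflection_second(b, r_s) - deflection_first(b, r_s))/ARCSEC*1.0e6
```

The familiar $1.75$ arcsecond deflection acquires a second-order contribution of roughly $11$ micro-arcseconds.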
\subsection{Application: Travel Time Delay - second order approximation}
As for the first-order delay calculation, we can calculate the time for the ray to go from $r_0$ to $r_f$ by solving
\begin{eqnarray}
r_f^2&=&x_f^2+y_f^2 \nonumber \\
&=& X_0^2+Y_0^2+2r_s(X_0 X_1+Y_0 Y_1)+r_s^2(X_1^2+Y_1^2+2X_0 X_2+2Y_0 Y_2) \nonumber
\label{delaycalc2}
\end{eqnarray}
for the time parameter $\tau_f$ at the final point. The initial point allows us to calculate the second order constants as $C_{12}=-1/(2r_0^2)$, $C_{22}=0$, $C_{32}=0$, $C_{42}=-9/(16r_0)$. Solving for $\tau_f$ as in Section \ref{delaysec1}, but including terms to second order in $r_s$ gives
\begin{eqnarray}
\tau_f=\sqrt{r_f^2-r_0^2}+\frac{r_s}{2}\sqrt{\frac{r_f-r_0}{r_f+r_0}}+\frac{3 r_s^2}{8 r_0} \arctan \frac{\sqrt{r_f^2-r_0^2}}{r_0}-\frac{r_s^2}{8(r_f+r_0)}\sqrt{\frac{r_f-r_0}{r_f+r_0}}+O(r_s^3). \label{tau2}
\end{eqnarray}
Converting from $\tau$ to $t$ by integrating equation (\ref{tdot}), but this time solved to second order, results in
\begin{eqnarray}
t=\tau+r_s \ln \frac{\tau+\sqrt{\tau^2+r_0^2}}{r_0}+\frac{r_s^2}{2r_0} \bigg(3\arctan\frac{\tau}{r_0}-\frac{\tau}{\sqrt{\tau^2+r_0^2}}\bigg)+O(r_s^3).
\end{eqnarray}
This allows the second order approximation of travel time delay to be written as
\begin{eqnarray}
\Delta T&=&\frac{r_s}{2}\sqrt{\frac{r_f-r_0}{r_f+r_0}}+r_s \ln{\frac{r_f+\sqrt{r_f^2-r_0^2}}{r_0}}+r_s^2\bigg(\frac{15}{8 r_0} \arctan \frac{ \sqrt{r_f^2-r_0^2}}{r_0}-\sqrt{ \frac{r_f-r_0}{r_f+r_0}}\bigg(\frac{1}{2 r_0}+\frac{1}{8(r_f+r_0)}\bigg)\bigg).
\end{eqnarray}
In order to check this result, we may compare it to the delay ($\Delta t$) calculated numerically to high precision using Gaussian quadrature with the formula given by \citet{wfj}. For a ray starting at earth orbit, grazing the sun ($r_0=696000$km and $r_s=2.95$km) and reaching earth-orbit again, the travel time delay is calculated accurately for a range of orbital distances. In Fig. \ref{delayfig1} the delay is shown, along with the residuals from the first order and second order approaches. While the first order approximation has a relative error (that is, $(\Delta t-\Delta T)/\Delta t$) of approximately $r_s/r_0\approx10^{-6}$, the second order approximation has a relative error of approximately $(r_s/r_0)^2\approx10^{-11}$. Distance is shown in astronomical units (`AU'), and time in micro-seconds (`$\mu$s').
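The size of the second-order correction relative to the first-order delay is easy to estimate directly from the formula. A Python sketch (illustrative, in geometric units with lengths in km so that the delay is a light-travel distance; function names are ours):

```python
import math

def delay_first(r_f, r_0, r_s):
    # First-order delay terms of equation (traveltime1); geometric units
    # (lengths in km, delay in km of light travel)
    s = math.sqrt(r_f*r_f - r_0*r_0)
    return 0.5*r_s*math.sqrt((r_f - r_0)/(r_f + r_0)) + r_s*math.log((r_f + s)/r_0)

def delay_second(r_f, r_0, r_s):
    # Adds the r_s^2 bracket of the second-order expression for Delta T
    s = math.sqrt(r_f*r_f - r_0*r_0)
    root = math.sqrt((r_f - r_0)/(r_f + r_0))
    bracket = (15.0/(8.0*r_0))*math.atan(s/r_0) - root*(0.5/r_0 + 1.0/(8.0*(r_f + r_0)))
    return delay_first(r_f, r_0, r_s) + r_s*r_s*bracket

# Illustrative solar-grazing values: r_0 = 696000 km, r_f = 1 au, r_s = 2.95 km
r_0, r_f, r_s = 696000.0, 1.496e8, 2.95
d1 = delay_first(r_f, r_0, r_s)
d2 = delay_second(r_f, r_0, r_s)
```

For these values the second-order term changes the delay by a relative amount of order $r_s/r_0 \approx 10^{-6}$, consistent with the residuals shown in Fig. \ref{delayfig1}.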
\begin{figure}
\begin{flushleft}
\vspace{1cm}
\caption{The top picture shows the Shapiro delay (``the delay'') as calculated using Gaussian quadrature. The middle picture shows the difference between the delay and the delay calculated using the first order approximation, and the lower picture shows the difference between the delay and that calculated using the second order approach. The vertical scale is in micro-seconds, the horizontal scale is in astronomical units (AU).}
\includegraphics[width=\textwidth]{fig4a} \\
\includegraphics[width=\textwidth]{fig4b} \\
\includegraphics[width=\textwidth]{fig4c}
\label{delayfig1}
\end{flushleft}
\end{figure}
\section{Rotating lens}
Having explored the Schwarzschild solution in the Cartesian co-ordinate system, we are ready to move on to the rotating (Kerr) case. We may start by adding the rotational terms of the acceleration equations (\ref{xyzdd2a}) which are given in the appendix. These equations can be solved numerically using forward integration to produce a magnification map at the plane containing the observer. As the rotational terms are at second order and greater, the light rays must pass very close to the massive object to make a noticeable change to the trajectory. This is illustrated here by placing the light source close behind the massive lens. In order to observe the change in the pattern, the light source and planet have been placed approximately $3 r_s$ away from the black hole, which is clearly not a tenable position for any massive object, but is chosen only to highlight the effect of rotation on the caustic pattern. The top picture in Fig. \ref{kerr1} shows the normal diamond caustic without rotation as described by \citet{wam}, and calculated here using the numerical procedure described in \citet{wfj}. This was generated using almost 15,000 simulated light rays in a numerical integration of equation (\ref{schw1}). The lower figure uses the same procedure, but with the addition of the rotational terms. While the diamond caustic pattern is still recognizable, it has clearly undergone a twisting, with the bottom of the shape pushed further over to the right side of the diagram.
\begin{figure}
\vspace{1cm}
\caption{Caustic patterns due to central mass and single planet, using forward numerical integration of the full equations (\ref{schw1}). The top figure has a non-rotating central body, whereas in the bottom diagram the central body is rotating maximally (that is, with $a=r_s/2$). The light source is located on the $x$-axis at $-3 \times 10^{-6}$. The primary mass is at the origin with Schwarzschild radius $r_s=9.9 \times 10^{-7}$, and a planet is located on the $z$-axis at $3.3 \times 10^{-6}$ having $r_s=10^{-8}$. The observer's plane is located at $x=8000$.}
\includegraphics[width=15cm]{fig5a} \\
\includegraphics[width=15cm]{fig5b}
\label{kerr1}
\end{figure}
\subsection{Second order Kerr expansion}
For a black hole, physically sensible values for the rotational constant $a$ lie between $-r_s/2$ and $+r_s/2$. Therefore it is reasonable to consider $a$ to be of order $r_s$, that is, $a=\alpha r_s$ where $\alpha$ is a constant between $-1/2$ and $1/2$. In the appendix, equation (\ref{xyzdd2a}) has been approximated to second order, resulting in equations (\ref{2ndorderfull}). Expanding the first of these equations using the expansions (\ref{xyz1}) yields
\begin{eqnarray}
\ddot X_0+r_s \ddot X_1+r_s^2 \ddot X_2&=&\frac{-3 r_s (X_0+r_s X_1)(K_0+r_s K_1)}{2 (R_0+r_s R_1)^5}\nonumber \\
&+&r_s a \bigg( \frac{\dot Y_0}{R_0^3}+3(Y_0 \dot Z_0-Z_0 \dot Y_0)\frac{Z_0}{R_0^5}+\frac{Y_0}{R_0^4}+2\dot Y_0 \frac{\dot R_0}{R_0^3}-4Y_0\frac{\dot R_0^2}{R_0^4} \bigg) \nonumber \\
&+&a^2 \frac{2\dot X_0 Z_0}{R_0^5}(2Z_0 \dot R_0-R_0\dot Z_0) + O(r_s^3).\nonumber
\end{eqnarray}
In these equations, it can be seen that the first term on the right hand side is the Schwarzschild acceleration discussed in some detail in Section \ref{secschw}, which we have already integrated to obtain second-order path equations. It therefore remains to integrate the remaining two terms and to add them to the second-order Schwarzschild solution. The integration is straightforward, and following the same procedure for $y$ and $z$, we arrive at the second-order path equations.
\begin{eqnarray}
x&=& C_1 \tau+C_2+r_s X_1+r_s^2 X_2+r_s a \bigg(L_{x0} F_{RS} +\frac{C_3 R_0}{K_0} - \frac{Y_0}{2 R_0^2}\bigg)-a^2 C_1 F_{A} + O(r_s^3)\nonumber \\
y&=& C_3 \tau+C_4+r_s Y_1+r_s^2 Y_2+r_s a \bigg(L_{y0} F_{RS} -\frac{C_1 R_0}{K_0} + \frac{X_0}{2 R_0^2}\bigg)-a^2 C_3 F_{A} + O(r_s^3)\nonumber \\
z&=& C_5 \tau+C_6+r_s Z_1+r_s^2 Z_2+r_s a L_{z0} F_{RS} -a^2 C_5 F_{A} + O(r_s^3)\nonumber
\end{eqnarray}
in which $X_1, Y_1, Z_1$ and $X_2, Y_2, Z_2$ are described in Sections \ref{secschw1} and \ref{secschw2}. The remaining terms are
\begin{eqnarray}
F_{RS}&=&2 R_0 \frac{C_6 - B C_5}{K_0^2} - \frac{Z_0}{R_0 K_0} \nonumber \\
F_A&=&\frac{Q_0 (\tau + B)-2 C_5 K_0 Z_0}{2 R_0^2 K_0} + \frac{Q_0}{2 K_0^{3/2}} \arctan \frac{\tau+B}{\sqrt{K_0}} \nonumber \\
L_{x0}&=&C_4 C_5-C_3 C_6 \nonumber \\
L_{y0}&=&C_2 C_5-C_1 C_6 \nonumber \\
L_{z0}&=&C_2 C_3-C_1 C_4 \nonumber \\
Q_{0}&=&L_{x0}^2+L_{y0}^2. \nonumber
\end{eqnarray}
In order to estimate travel-time delays, as above we write co-ordinate time $t$ as a function of $\tau$. Expanding $\dot t$ to second order in $r_s$, and integrating yields
\begin{eqnarray}
t= \tau+r_s \ln \bigg(\frac{\tau+B+R_0}{B+\sqrt{C}}\bigg)+\frac{\frac{3}{2}r_s^2+a^2}{\sqrt{K_0}}\arctan\frac{\tau\sqrt{K_0}}{B \tau+C}-r_s \tau \frac{r_s (B_R B - C_R)+ a L_{z0}}{K_0 R_0}+O(r_s^3). \nonumber
\end{eqnarray}
Again, the constants of integration have been determined by setting $t=0$ when $\tau=0$.
\subsection{Second order expansion - equatorial case}\label{sec42}
It is clear that in the equatorial case ($z=\dot z=0$, which also means that $C_5=C_6=Q_0=L_{x0}=L_{y0}=0$ and $K_0=L_{z0}^2$) the above equations simplify to
\begin{eqnarray}
x&=& C_1 \tau+C_2+r_s X_1+r_s^2 X_2+r_s a \bigg(\frac{C_3 R_0}{L_{z0}^2} - \frac{Y_0}{2 R_0^2}\bigg) + O(r_s^3)\nonumber \\
y&=& C_3 \tau+C_4+r_s Y_1+r_s^2 Y_2-r_s a \bigg(\frac{C_1 R_0}{L_{z0}^2} - \frac{X_0}{2 R_0^2}\bigg )+ O(r_s^3)\nonumber
\end{eqnarray}
Interestingly, while terms involving $r_s^2$, $r_s a$ and $a^2$ are all of second order in the expansion parameter, and terms with coefficients $r_s^2$ and $r_s a$ appear in these equatorial equations, there are no such terms with coefficient $a^2$.
\subsection{Application: Total Deflection angle - second order equatorial Kerr approximation}
We can now add the second order term due to rotation to the earlier total deflection angle calculation. As for the earlier scenario (see Fig. \ref{fig2}), $C_1=1$ and $C_3=0$ so that as $\tau$ goes to $\pm \infty$, $R_0 \rightarrow \infty$, and so $\dot y \rightarrow r_s \dot{Y_1}+r_s^2 \dot{Y_2}-r_s a \tau/(R_0 L_{z0}^2)$. Then the deflection becomes
\begin{eqnarray}
\Delta\Phi&=&2\bigg(r_s\dot Y_1+r_s^2\dot Y_2-r_s a \frac{1}{L_{z0}^2}\bigg)\bigg|_{\tau \rightarrow \infty}+O(r_s^3) \nonumber \\
&=&\frac{2r_s}{r_0}\bigg(1+\frac{r_s}{2r_0}+\frac{15\pi}{32}\frac{r_s}{r_0}-\frac{a}{r_0}\bigg)+O(r_s^3) \nonumber \\
&=&\frac{2r_s}{b}\bigg(1+\frac{15\pi}{32}\frac{r_s}{b}-\frac{a}{b}\bigg)+O(r_s^3). \nonumber
\end{eqnarray}
This deflection is found to be in complete agreement with that calculated by \citet{ede}.
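This closed-form result is easy to check numerically. The short sketch below (Python, geometric units with $G=c=1$; the solar values are standard approximations, not taken from this paper) evaluates the expression above and recovers the classic $\approx 1.75$ arcsec solar-limb deflection in the non-rotating limit.

```python
import math

def kerr_deflection(r_s, a, b):
    # Second-order equatorial deflection angle from the text,
    # Delta Phi = (2 r_s / b) (1 + (15 pi / 32)(r_s / b) - a / b),
    # in geometric units (G = c = 1); b is the impact parameter.
    return 2.0 * r_s / b * (1.0 + 15.0 * math.pi / 32.0 * r_s / b - a / b)

# Non-rotating solar-limb check (standard approximate values, in km):
r_s_sun = 2.95          # Schwarzschild radius of the Sun
b_sun = 6.96e5          # solar radius as impact parameter
arcsec = kerr_deflection(r_s_sun, 0.0, b_sun) * 206265.0  # rad -> arcsec
```

With the sign convention above, prograde spin ($a>0$) enters with a minus sign and slightly weakens the bending, while the $r_s^2$ term strengthens it.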
\subsection{Application: Travel Time Delay - second order equatorial Kerr approximation}
As before, but including the rotational components of $x$ and $y$, the travel time for the ray to go from $r_0$ to $r_f$ can be calculated by solving
\begin{eqnarray}
r_f^2&=&x_f^2+y_f^2 \nonumber \\
&=& X_0^2+Y_0^2+2r_s(X_0 X_1+Y_0 Y_1)+r_s^2(X_1^2+Y_1^2+2X_0 X_2+2Y_0 Y_2)+r_s a \bigg(X_0(\frac{C_3 R_0}{L_{z0}^2} - \frac{Y_0}{2 R_0^2})-Y_0(\frac{C_1 R_0}{L_{z0}^2} - \frac{X_0}{2 R_0^2})\bigg)+O(r_s^3) \nonumber \\
&=& X_0^2+Y_0^2+2r_s(X_0 X_1+Y_0 Y_1)+r_s^2(X_1^2+Y_1^2+2X_0 X_2+2Y_0 Y_2)+r_s a \frac{R_0}{L_{z0}}+O(r_s^3)
\label{delaycalc2a}
\end{eqnarray}
for the overall time $\tau_f$. As previously, the initial point allows us to calculate the second order constants. With rotation these constants become $C_{12}=-1/(2r_0^2)$, $C_{22}=a/(2r_s r_0)$, $C_{32}=-a/(2r_s r_0^2)$, $C_{42}=a/(r_s r_0)-9/(16r_0)$. The solution for $\tau_f$ now includes a rotational term (dependent on $a$), and becomes
\begin{eqnarray}
\tau_f=\sqrt{r_f^2-r_0^2}+\frac{r_s}{2}\sqrt{\frac{r_f-r_0}{r_f+r_0}}+\frac{3 r_s^2}{8 r_0} \arctan \frac{\sqrt{r_f^2-r_0^2}}{r_0}-\frac{r_s^2}{8(r_f+r_0)}\sqrt{\frac{r_f-r_0}{r_f+r_0}}+\frac{r_s a}{r_0}\sqrt{\frac{r_f-r_0}{r_f+r_0}}+O(r_s^3). \nonumber
\end{eqnarray}
The conversion from $\tau$ to $t$ also now includes rotational terms
\begin{eqnarray}
t=\tau+r_s \ln \frac{\tau+\sqrt{\tau^2+r_0^2}}{r_0}+\frac{3r_s^2+2a^2}{2r_0} \arctan\frac{\tau}{r_0}+r_s \tau\frac{2a-r_s}{r_0\sqrt{\tau^2+r_0^2}}+O(r_s^3). \nonumber
\end{eqnarray}
This allows us to write the second order approximation of travel time delay as
\begin{eqnarray}
\Delta T&=&\frac{r_s}{2}\sqrt{\frac{r_f-r_0}{r_f+r_0}}+r_s \ln{\frac{r_f+\sqrt{r_f^2-r_0^2}}{r_0}}+\bigg(\frac{15 r_s^2}{8 r_0}+\frac{a^2}{r_0}\bigg) \arctan \frac{ \sqrt{r_f^2-r_0^2}}{r_0}+r_s\sqrt{ \frac{r_f-r_0}{r_f+r_0}}\bigg(\frac{a}{r_f}+\frac{4 a-r_s}{2 r_0}-\frac{r_s}{8(r_f+ r_0)}\bigg). \nonumber
\end{eqnarray}
This delay is the same for a ray travelling from perihelion $r_0$ to $r_f$ on the right ($t$ positive) as for a ray travelling from $r_f$ on the left ($t$ negative) to $r_0$. So the total delay for a ray passing the massive object at the origin is twice the amount $\Delta T$ stated above. It can be seen in this example that if $a$ is positive (that is, the mass has anti-clockwise angular momentum), the motion of the particle is opposite to the frame-dragging effects, and the travel time delay is increased. Conversely, if $a$ is negative the travel time delay is decreased. \citet{dym} has calculated the delay to second order in the limit $r_f \gg r_0$. The delay given here in the last equation is in agreement with Dymnikova's result in the same limit, but it also gives the second order delay for all values of $r_f$.
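The delay formula above is likewise straightforward to evaluate. The sketch below (Python, geometric units with $G=c=1$; the sample radii are illustrative only) transcribes $\Delta T$ term by term and confirms the sign behaviour just described: positive $a$ lengthens the delay and negative $a$ shortens it.

```python
import math

def kerr_time_delay(r_s, a, r0, rf):
    # Second-order equatorial travel-time delay from perihelion r0 out to
    # rf, transcribed term by term from the expression in the text
    # (geometric units, G = c = 1).
    root = math.sqrt((rf - r0) / (rf + r0))
    chord = math.sqrt(rf * rf - r0 * r0)
    return (r_s / 2.0 * root
            + r_s * math.log((rf + chord) / r0)
            + (15.0 * r_s**2 / (8.0 * r0) + a**2 / r0) * math.atan(chord / r0)
            + r_s * root * (a / rf + (4.0 * a - r_s) / (2.0 * r0)
                            - r_s / (8.0 * (rf + r0))))
```

For small $|a|$ the terms linear in $a$ dominate the $a^2/r_0$ term, so reversing the sense of rotation reverses the sign of the extra delay.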
\section{Modelling delay for a pulsar in a binary system}
The regularity of pulses from a millisecond pulsar provides an interesting possibility for observing the effect of rotation on the travel time of the light pulses. A system such as the double pulsar binary system J0737-3039 described by \citet{bur} is a natural candidate for observing the delay due to a rapidly rotating massive object. We will construct a simpler mathematical model by replacing one of the pulsars in that system with a black hole (with rotation also in the same plane as the orbit and observer) so that there is confidence in using the Kerr metric equations. Thus we consider here a binary system consisting of a millisecond pulsar and a rotating black hole with the orbital plane aligned so that the observer and the two bodies are within the same plane. We also ignore any atmospheric or magnetospheric interference which may introduce complications in measurements in the real system mentioned above. Finally we will ignore the modulation of the pulse timing due to the spinning of the pulsar. This last effect is expected to be small for a millisecond pulsar with an orbital period of hours or days, such as we are considering.
Having designed this system with the orbital plane and the observer in the equatorial plane of the black hole, we can use the simpler two dimensional equatorial equations of section \ref{sec42} to describe the paths of light rays from the pulsar to the observer. This is for simplicity and clarity only; another arrangement using the full three dimensional equations is only slightly more difficult to describe and to code.
In order to determine the delay of pulses due to the rotating black hole, we will send light rays back from the observer past the black hole using forward integration of the equatorial equations, stopping the integration procedure when the rays meet the orbital distance of the pulsar. As the time coordinate has been reversed, note that this also entails reversing the direction of spin of the black hole (which is decided arbitrarily in this model, but should be considered when using data from a real system).
Figure \ref{pulsar1} shows the last section of rays as they reach the circular orbit of the pulsar. Due to the large difference between the vertical and horizontal scales, and because only a small section of the orbit is shown, the endpoints of the rays appear to be in a line, but they do in fact form a circular arc. All distances shown are in light-seconds and times for the delays are in seconds. The Schwarzschild radius ($r_s$) of the black hole is $2 \times 10^{-4}$ light-seconds, equivalent to approximately $20$ solar masses. The delay increases in an almost linear relationship with the mass of the lens, so that a black hole of $10$ solar masses would have approximately half the delay times of those shown in Fig. \ref{pulsar2}. In a different study of travel time delay in a binary pulsar system, \citet{lag} note that in order for the binary system to have sufficient longevity for a reasonable chance of observation, there are limitations on the proximity of the pulsar to the black hole, with approximately $5$ solar radii being a near-optimum compromise between longevity and magnitude of the delay effect. We therefore place the circular orbit at $11.6$ light-seconds, approximately $5$ solar radii. This orbit induces a delay term in the straight-line time from $-11.6$ seconds when the pulsar is closest to the observer, to $+11.6$ seconds when it is furthest from the observer. However, in the present study, we are interested in the additional asymmetric delay due to rotation. The orbital delay is symmetric about the point of superior conjunction, or occultation, which occurs when the lensing body is directly between the pulsar and the observer, and is the point of interest for the present study. For this reason, the orbital delay will be ignored here.
\begin{figure}
\vspace{1cm}
\caption{Rays from observer to pulsar orbit at radius of $11.6$ light-seconds, using forward integration past a maximally rotating central body ($r_s=2\times10^{-4}$). Only the final sections of the rays are shown. Rays passing very close to the mass, and experiencing large deflection, are not shown here because the images produced by such rays are extremely faint.}
\includegraphics[width=\textwidth]{fig6}
\label{pulsar1}
\end{figure}
Along with the equatorial equations, we also integrate Equation (\ref{tdot}) to keep track of the time coordinate. Comparing the time taken with the time light would take to travel in a straight line from pulsar to observer without any lensing object gives the delay, shown in Fig. \ref{pulsar2}. The solid curve represents the delay due to a rotating black hole, and was calculated using forward integration of the equatorial equations (\ref{eqnequat1}) given in the appendix. This delay is plotted against the angle from superior conjunction. That is, a zero angle represents the system with the black hole in between the pulsar and observer, while all three are in a line. The dashed line represents the delay due to a non-rotating black hole, calculated using the Schwarzschild acceleration, equation (\ref{schw1}). The part of the delay due to rotation is small, and so to highlight the difference between the delays shown, a magnified portion near the intersection is shown in the lower figure.
\begin{figure}
\vspace{1cm}
\caption{Delay due to black hole. Four curves are present for rays passing on each side of the black hole, and in the cases where the black hole is rotating (solid lines) or not rotating (dashed lines). The change in delay due to rotation is small so the curves in each pair are very close together. The lower figure shows a magnification of the central section, which allows the different delays to be seen.}
\includegraphics[width=\textwidth]{fig7a}
\includegraphics[width=\textwidth]{fig7b}
\label{pulsar2}
\end{figure}
Finally the delay due to maximal rotation of the black hole is subtracted from the delay due to a black hole of the same mass without rotation. The difference between these delays gives the delay due solely to rotation, which provides a small asymmetry in the delay curve in the rotational case. This difference is shown in Figure \ref{pulsar3}. The upper and lower lines represent the delay difference for the two different images of the source, one passing to the left of the mass, the other to the right. One of these images is dominant prior to conjunction, and the other image becomes dominant after conjunction.
\begin{figure}
\vspace{1cm}
\caption{Difference in travel-time delay between rotating and non-rotating cases, plotted against orbital position of the pulsar. Each curve corresponds to rays passing the black hole on either the left (opposed to the black hole's rotation) or right side (aligned to the rotation).}
\includegraphics[width=\textwidth]{fig8}
\label{pulsar3}
\end{figure}
The delays due to lensing in this system are of the order of $10^{-3}$ seconds, while the asymmetric part of the delay that is due to rotation is of the order of $10^{-6}$ seconds. Assuming other effects on travel time can be adequately accounted for, these rotational delays may be measurable, and could possibly be used to estimate the angular momentum of the lens.
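These orders of magnitude can be sanity-checked against the second-order equatorial delay formula derived above. The sketch below uses the stated $r_s = 2\times 10^{-4}$ light-seconds, maximal spin $a = r_s/2$, the $11.6$ light-second orbital radius, and an assumed ray perihelion of $0.01$ light-seconds (a hypothetical value; the actual perihelia come from the ray integration), keeping only the dominant logarithmic term and the leading spin-dependent terms.

```python
import math

r_s = 2e-4        # Schwarzschild radius of the ~20 solar mass lens (light-s)
a = r_s / 2.0     # maximal Kerr spin parameter
rf = 11.6         # pulsar orbital radius (light-seconds)
r0 = 0.01         # assumed ray perihelion (hypothetical value)

chord = math.sqrt(rf**2 - r0**2)

# Dominant (logarithmic) part of the lensing delay, from the second-order
# delay formula derived earlier:
lensing = r_s * math.log((rf + chord) / r0)

# Leading spin-dependent terms of the same formula:
rotational = (a**2 / r0) * math.atan(chord / r0) \
    + r_s * math.sqrt((rf - r0) / (rf + r0)) * (a / rf + 2.0 * a / r0)
```

This reproduces a lensing delay of order $10^{-3}$\,s and a rotational contribution of order $10^{-6}$\,s, consistent with the figures quoted above.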
\section{Conclusion and Discussion}
In this paper, we have presented Cartesian acceleration components for photons using the Kerr metric. While these components are somewhat complicated, they allow easy modelling of systems in Cartesian coordinates, with the advantage that the components are normally very small. This allows for rapid numerical integration, and a new result (the caustic shape due to a binary system with rotating mass) has been presented. In order to approximate the light paths near a rotating black hole, we have built up the approximations in stages, beginning with the zeroth order and first order expansions, followed by the second order Schwarzschild expansion and finally the second order Kerr expansion. At each point, the versatility of this approach has been demonstrated by the ease of calculating deflection and travel time delay, which are found to match previously calculated amounts. In addition, a new formula for delay due to spinning black holes was presented. It may prove possible in some practical astrophysical circumstances to measure directly the delay due to rotation of the lensing object, and so to infer the angular momentum, using the formulae presented here. This awaits future experimental observation.
\section{Introduction}
\section{Probes of dark energy and dark matter}
Dark energy affects the cosmic history of the Hubble expansion $H(z)$ as
well as the cosmic history of mass clustering. If combined, different
types of probes of the expansion history and structure history can
lead to percent level precision in dark energy parameters. This is
because each probe depends on the other cosmological parameters or
errors in different ways. These probes range from cosmic shear, baryon
acoustic oscillations, supernovae, and cluster counting -- all as a
function of redshift $z$. Using the CMB as normalization, the
combination of these probes will yield the needed precision to
distinguish between models of dark energy (Zhan 2006).
Next generation surveys will measure positions, colors, and shapes of
distant galaxies over such a large volume that the resulting
stochastic (random) errors will be very small. It is necessary to
control and reduce the systematic errors to even lower levels. There
are two primary systematic errors which can influence the data:
Photometric Redshift errors, and Weak Lens Shear errors. The work to
date has employed highly idealized data models. Here we describe some
of the image processing challenges associated with reconstruction of
the galaxy images from many dithered exposures.
With its capability to go deep, wide, and fast, the LSST will yield
continuous overlapping images of 20,000 - 25,000 square degrees of
sky. The baseline exposure time is 15 seconds, and each ``visit'' to
a single patch of sky will consist of two such exposures separated
by a 2 sec readout with the shutter closed. In order to meet the
science goals, six bandpasses ($u, g, r, i, z,$ and $y$) covering
the wavelength range 320-1050 nm will be used. The system is
designed and will be engineered to yield exquisite astrometric and
photometric accuracy and superb image quality. The telescope and
camera optics and the detector combine to deliver 80\% energy
within a 0.2 arcsecond pixel over the full 10 square degree field
and full wavelength range. This LSST survey will take ten years to
complete. In a ten-year survey, the LSST will make more than five
million exposures. In current simulations, the sky is tiled with a
fixed spherical covering of circular fields. This overlap leads to
a significant fraction of area which is observed twice as
frequently as the average. In practice, the position of each visit
will be varied continuously across the sky to average out this
extra exposure.
How is the precision of shear measurements of distant galaxies in weak
lensing tomography affected by ground-based seeing? Galaxy shape
measurement depends on three parameters: galaxy size, delivered PSF,
and limiting surface brightness. New ground-based telescopes are
routinely delivering 0.4-0.7 arcsec FWHM imaging without adaptive
optics. Clearly there are unique advantages in space for UV or IR
imaging. Galaxies at 25 mag have mean half-light radius $\sim 0.4$ arcsec
and FWHM $\sim 0.8$ arcsec. Angular sizes of galaxies change with
redshift due to a number of effects including the cosmological
angle-redshift relation, luminosity evolution, and surface brightness
dimming. The net effect is a plateau over a range of $z$, out to $z=3$
(Cameron and Driver 2007). At the low surface brightness reached in
hundreds of LSST exposures, typical galaxies at redshift $z<3$ can be
resolved sufficiently to measure their ellipticity. This is shown in
Figure 1. One must convolve with the PSF and ask if the ellipticity
can be measured. Galaxies have a large intrinsic ellipticity (rms
$\sim 0.3$), and it is most important to have many of them in order to
average down the shot noise of this intrinsic ellipticity. At 28-29
mag per sq. arcsec ground based seeing is sufficient to measure the
large ellipticities of 40-50 galaxies per square arcminute to the
required $z<3$ redshift limit for tomography. However, it is crucial
that shape systematics are minimized.
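The statistical side of this argument is easy to quantify. Assuming an intrinsic ellipticity rms of $0.3$ and $40$ usable galaxies per square arcminute (the figures quoted above), a minimal sketch of the shear shot noise per sky cell is:

```python
import math

def shear_shot_noise(sigma_e, n_per_arcmin2, area_arcmin2):
    # Statistical shear error from averaging N intrinsic ellipticities:
    # sigma_gamma = sigma_e / sqrt(N).
    n = n_per_arcmin2 * area_arcmin2
    return sigma_e / math.sqrt(n)

per_arcmin = shear_shot_noise(0.3, 40.0, 1.0)    # one 1' x 1' cell
per_deg = shear_shot_noise(0.3, 40.0, 3600.0)    # one square degree
```

This gives roughly $5\%$ shear noise per square arcminute and well under $0.1\%$ per square degree, which is why controlling shape systematics, rather than statistics, sets the floor for a survey of this depth.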
\begin{figure}
\center
\includegraphics[width=0.7\textwidth]{O5-3_1}
\caption{Galaxy surface brightness vs radius (arcsec) in one redshift bin
from z = 2 - 3 for $23 < i < 25$ AB mag. This plot is from HST
ACS imaging and ground based spectroscopy. At the 28 $i$ and 29 $r$
mag/sq.arcsec limit of the LSST survey most galaxies at $z<3$ are
sufficiently resolved in 0.6 arcsec FWHM seeing to reconstruct their
ellipticity. (courtesy H. Ferguson)}
\label{fig:O5.3_1}
\end{figure}
\section{Weak lens shear measurement}
Background galaxies are mapped to new positions on the sky by
intervening mass concentrations, shearing their images
tangentially. First detected in 2000 (Wittman, et al. 2000), the full
3-D cosmic mass distribution creates statistical correlations in
galaxy shapes called ``cosmic shear.'' Systematic errors in either
redshifts or shear affect the precision obtainable for dark energy
parameters and are classified as either multiplicative or additive.
There is some level of self-calibration, especially for multiplicative
errors, i.e. the level of error can be obtained from the data and
marginalized over without severely compromising the cosmological
constraints. Additive errors do not have this property.
Multiplicative errors are also known as shear calibration errors, and
arise from the convolution of a galaxy's true shape with the isotropic
part of the point-spread function (PSF), which dilutes the shear by some factor
which depends on the relative angular sizes of the galaxy and the PSF.
Therefore multiplicative errors will be a function of redshift (more
distant galaxies appear smaller) and of position on the sky (the
ground-based PSF depends on the atmosphere). Additive errors, or
spurious shear, arise from the anisotropic part of the PSF and are
position-dependent but not redshift dependent, except perhaps
indirectly, if the PSF is a function of source color.
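A toy shear estimator makes the distinction concrete. In the sketch below (all numbers are illustrative, not survey values), each observed ellipticity is $(1+m)\gamma + c$ plus intrinsic shape noise; averaging many galaxies suppresses the shape noise but leaves the multiplicative factor $m$ scaling the signal and the additive offset $c$ intact.

```python
import random

def mean_ellipticity(gamma, m, c, n_gal, sigma_e=0.3, seed=0):
    # Toy model: e_obs = (1 + m) * gamma + c + intrinsic shape noise.
    # The average over many galaxies estimates (1 + m) * gamma + c.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_gal):
        total += (1.0 + m) * gamma + c + rng.gauss(0.0, sigma_e)
    return total / n_gal

est = mean_ellipticity(gamma=0.02, m=0.05, c=0.003, n_gal=200000)
```

The additive term survives even where the true shear vanishes, which is one way to see why it lacks the self-calibration property noted above.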
\section{Shift-and-stare imaging}
If a large number of exposures are taken of a field on the sky it is
possible in principle to separate spatial defects on the imager from
the true scene on the sky. Shift-and-stare imaging was developed in
the early days of CCD imagers for this purpose (Tyson 1986). There
are a variety of algorithms for recombining the sub-images in this
data cube into a master co-added image. The original technique used
median averaging of each pixel at a fixed sky location through the registered stack
of sub-images, but care must be taken not to introduce
correlations. Using sinc interpolation rather than simple weighted
neighbor pixel interpolation one can decorrelate noise on adjacent
pixels in the co-added image, making it possible to estimate
statistical significance. Shift-and-stare is the method of choice
currently in all wide field deep imaging. However, it probably has
outlived its usefulness. While it is convenient from a storage and
computation point of view to compress the data cube to a single
co-added image, important information is lost particularly if image
quality or effective exposure varies between sub-images.
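The defect-rejection property of the median stack can be demonstrated in a few lines. The example below is deliberately artificial: five registered measurements of one sky pixel, one of which carries a detector defect.

```python
from statistics import median, mean

# Five registered sub-exposures of the same sky pixel; the true sky
# value is about 10 counts, and one exposure carries a detector defect.
pixel_stack = [10.2, 9.8, 10.1, 9.9, 1000.0]

coadd_median = median(pixel_stack)  # robust to the single outlier
coadd_mean = mean(pixel_stack)      # dragged far from the sky value
```

In practice the sub-images must first be registered, and as noted above, sinc interpolation helps keep the noise on adjacent pixels of the co-added image decorrelated.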
\begin{figure}[t]
\center
\includegraphics[width=0.7\textwidth]{O5-3_2}
\caption{Shift-and-stare: Multiple exposures disregistered on the sky
contain information about the objects as well as information about
defects on the imager. Stars and galaxies are disregistered between
exposures, but systematic errors in the CCD are registered in each
frame. Processing with a 'superflat' can remove most of the CCD
based defects, and then registering the stars and co-adding generates
a deep defect-free image. Subtle problems can occur if the PSF is
different in each image.}
\label{fig:O5.3_2}
\end{figure}
\section{Reconstructing galaxy images: co-addition vs Multi-Fit}
Several algorithms have been suggested to beat down PSF systematics
using multiple exposures of the same field. The naive use of such
data would be to construct a single stacked image with higher
signal-to-noise, and then measure the shear correlation function by
averaging over all pairs of galaxies. This requires pixel
interpolation, which can lead to systematics and correlated noise.
Generally the stack algorithms combine sub-images with different PSF,
and information is lost. Moreover, there is generally a discontinuous
PSF change in the stack image at locations of edges of the
sub-images. This creates PSF variations on the stack image that are
hard to model. As a result the stack method does not provide the
desired accuracy for image analysis algorithms which are sensitive to
spatial variations in PSF.
We propose analyzing the full ``data cube'' by fitting, for each galaxy,
a single model which, when convolved with the $N$ different PSFs, best
matches the $N$ measurements of that galaxy (the Multi-Fit method). This
means that PSF misestimation, which is strongly decorrelated from
image to image, behaves more like a random error for that galaxy,
rather than a systematic error. LSST will have hundreds of dithered
images per filter band per sky patch, and there will be about 2000
overlapping (dithered) 10 square degree sky patches per bandpass. It
is desirable to use all the information in those overlapping data
cubes.
The best current methods reach a shear calibration accurate to 1\%. In
principle LSST can do 20 times better because LSST will have hundreds
of exposures, each with an independent shear calibration. Current
shear analysis operates on the co-added deep image. A new method,
Multi-Fit, does a superior job of estimating the true shear of a
galaxy by fitting a model to all images of it in the stack of $N$
exposures.
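The factor of 20 follows from averaging independent calibrations. Taking $N = 400$ exposures as a round number consistent with ``hundreds'' (the exact count is not specified here), independent per-exposure calibration errors of $1\%$ average down as $1/\sqrt{N}$:

```python
import math

per_exposure_error = 0.01   # 1% shear calibration per exposure
n_exposures = 400           # "hundreds" of exposures; 400 is illustrative

combined_error = per_exposure_error / math.sqrt(n_exposures)
improvement = per_exposure_error / combined_error   # = sqrt(N) = 20
```

That is a net calibration error of $0.05\%$, provided the per-exposure errors really are independent, which is what fitting each exposure separately aims to achieve.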
\subsection{Multi-Fit}
We describe a method for fitting the shapes of galaxies that have been
imaged in multiple exposures. Instead of the traditional approach of
co-adding many exposures to produce a single image for measurement,
this method simultaneously analyzes all individual exposures to
determine the galaxy shape and size that best fits all images of a
single galaxy in a noise-weighted fashion. This process effectively
uses knowledge about the PSF of individual exposures, taking advantage
of the detailed information present in highly resolved images, while
still extracting the limited information available in images with
poorer resolution. A PSF map is made for each image, by fitting all
the stars. The simultaneous fit is performed using a maximum
likelihood technique that combines the likelihoods calculated from
each individual exposure. First, a parameterized model for a galaxy
radial light profile is chosen. The model is convolved with each of
the PSF models measured from the individual exposures. The final,
convolved light distributions are compared to the data pixels for the
galaxy images on each individual exposure to determine a
likelihood. The fitting procedure adjusts the parameters of the input
model until the likelihood is maximized, resulting in a best-fit model
of actual galaxy shape prior to the effects of PSF smearing.
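A minimal one-dimensional caricature of this fit, using Gaussian profiles so that convolution simply adds widths in quadrature (everything below is hypothetical; the real procedure fits pixel-level models against measured PSF maps), looks like:

```python
import math

def multifit_size(obs_widths, psf_sigmas, obs_err=0.05):
    # Jointly fit one pre-seeing galaxy width sigma_g to all exposures:
    # in exposure i the model width is sqrt(sigma_g^2 + psf_i^2).
    # Maximising the Gaussian likelihood = minimising chi^2; a simple
    # grid search stands in for a real optimiser.
    best, best_chi2 = None, float("inf")
    for k in range(50, 500):
        sigma_g = k * 0.01
        chi2 = sum(((w - math.sqrt(sigma_g**2 + p**2)) / obs_err) ** 2
                   for w, p in zip(obs_widths, psf_sigmas))
        if chi2 < best_chi2:
            best, best_chi2 = sigma_g, chi2
    return best

# Four exposures of one galaxy (true sigma_g = 2.0) under varying seeing:
psfs = [1.0, 1.5, 2.5, 3.0]
widths = [math.sqrt(2.0**2 + p**2) for p in psfs]  # noiseless for clarity
fitted = multifit_size(widths, psfs)
```

Each exposure contributes its own PSF to the joint chi-square, so a sharp-seeing exposure constrains the pre-seeing size tightly while a poor-seeing one still adds information, and a PSF error in any single exposure perturbs the joint fit only weakly.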
There are several advantages to using a procedure that fits multiple
exposures. First, errors that are made in PSF estimation in each
exposure are treated as random errors, and these errors are propagated
into the statistical error calculated during the fitting
process. Thus, these errors are determined directly for each
individual galaxy, rather than being an unknown systematic error.
Compared to interpolating PSF estimation on a co-added image, this
also reduces any spatial correlation introduced by PSF mis-estimation
in a given region of sky. A second advantage of this method is that
the PSF interpolation is done on each separate exposure, where the PSF
is expected to vary smoothly. Other methods interpolate on a co-added
image, which has been made using many exposures that have been
dithered relative to each other. The spatial variation of the PSF on a
co-added image is not smooth near the boundaries of the underlying
chips, making accurate interpolation more difficult.
Another advantage, specific to any technique that uses fitting, is
that prior information can be directly incorporated into the fit. The
choice of an underlying galaxy shape profile is one such piece of
information. Parameters of the galaxy-model or the PSF-model can be
constrained with additional terms in the likelihood calculation. For
example, if the PSF determination is uncertain, those uncertainties
can be used in the fit and directly propagated into the final
measurement error. Priors based on the high S/N features of an object
in the stacked deep image are useful. The centroid of objects is taken
from the stacked image in our tests shown below and is not allowed to
vary from sub-image to sub-image in the data cube.
The following plots illustrate how well the ellipticity of galaxies of
different magnitudes and sizes can be measured. Below a pre-seeing
size of 0.5 pixels, fitting becomes unstable due to the small
size. Above a FWHM of 10 pixels, a minimum error is reached for a
fixed magnitude. A joint fit to the size and magnitude dependence of
the error, between 0.5 and 10 pixels, gives the expected statistical
dependence based on signal-to-noise, thus demonstrating the extreme
robustness of this technique.
\begin{figure}
\center
\includegraphics[width=0.45\textwidth]{O5-3_3a}
\includegraphics[width=0.45\textwidth]{O5-3_3b}
\caption{Dependence of galaxy shape measurement error on magnitude
(left) and galaxy size (right) in a simulated exposure cube. The
dotted horizontal line indicates the level of shape noise - the
intrinsic distribution of galaxy shapes. The magnitude variation is
due to the higher signal-to-noise measurement possible with brighter
objects. The variation of error with size shows that larger objects
are measured better up to a magnitude-dependent noise floor. The
vertical line is the level below which many current methods become
unstable - when the observed area is 1.25x the PSF area.}
\label{fig:O5.3_3}
\end{figure}
The statistical figure of merit for a weak-lensing survey is the
effective number of galaxies for which shapes have been measured. By
measuring ever smaller and fainter galaxies, a survey can dramatically
increase galaxy sample size, but at the cost of using noisier
measurements. There is a trade-off between increased shot noise plus
lower systematic shear error at the bright end and decreased shot
noise (due to the large number of galaxies) and susceptibility of PSF
systematics at the faint end. Many current methods for shape
measurement become unusable when observed objects have sizes close to
the PSF size. Often, galaxies observed to be less than $\sim 1.25$
times the area of the PSF are discarded. With fitting techniques, this
limit can be reduced and galaxies can be measured almost down to the
size of the PSF. The variance of a shape measurement decreases as the
square-root of the pre-seeing area for small galaxies. Since the
number of galaxies increases with the decreasing angular size, the
rapid increase in sample size can compensate for increased
noise. Consequently, the effective number of galaxies of a survey can
be substantially increased by recovering barely resolved galaxies. The
following figure depicts the relative increase in a survey's effective
sample size as galaxies less than 1.25 times the PSF area are
included.
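One common way to make the effective sample size precise (a standard weak-lensing convention, not a definition given in the text) is to weight each galaxy by the ratio of shape-noise variance to total variance:

```python
def effective_number(meas_errors, sigma_sn=0.3):
    # n_eff = sum_i sigma_SN^2 / (sigma_SN^2 + sigma_i^2): a perfectly
    # measured galaxy counts as 1, a noisy one as a fraction of 1.
    v = sigma_sn ** 2
    return sum(v / (v + s ** 2) for s in meas_errors)

# Three hypothetical galaxies: well measured, marginal, barely resolved.
n_eff = effective_number([0.0, 0.3, 0.6])
```

A perfectly measured galaxy counts as one, a galaxy with measurement noise equal to the shape noise counts as one half, and so on; recovering many barely resolved galaxies therefore adds fractional counts that can sum to a substantial gain.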
This algorithm uses all information in the images, weights
better-seeing images appropriately, and handles image boundaries. PSF
on a stacked image changes abruptly at a sub-image boundary. Each
sub-image PSF has less structure than the stacked image PSF, and this
approach thus transforms some systematics into random errors.
\begin{figure}
\center
\includegraphics[width=0.5\textwidth]{O5-3_4}
\caption{Effective galaxy sample size relative to a sample with a
cutoff of observed galaxy size 1.25 times the PSF size (2.35 on the
x-axis, in these units). At low redshift, there are few galaxies
present to add to the sample, while at higher redshift, there is
some gain in measuring galaxies down to a cutoff of $\sim 1$.}
\label{fig:O5.3_4}
\end{figure}
\subsection{Computational challenges and R\&D to be done}
Currently, galaxies and PSFs are modeled as sums of Gaussians, so
convolutions are fast. Real galaxies are not Gaussian, and an upgrade
to more realistic models has begun. The current algorithm
requires 1 sec per galaxy for a data cube of 20 images, with no speed
optimization yet, on a 2 GHz desktop. For the 5 million images LSST
will obtain, a rough extrapolation of the existing Multi-Fit runs
suggests over $10^{22}$ floating point operations. This is
competitive with the computational requirements for the LSST image
differencing transient pipeline. The new code is being written in C++
and Python. It will be necessary to quantify the improvement of
Multi-Fit over stacking for various science cases (weak
lens shear, photometry). It will be particularly useful to extend
fitting to include other quantities: magnitudes, colors, etc., or to
use them as priors for single-band galaxy reconstruction. Finally, we
will pursue speed optimization and extensive Monte Carlo tests. We
propose to use Multi-Fit in full shift-and-stare Monte Carlo
simulations of LSST sky tiling operations including PSF systematics.
\section{Introduction}\label{sec:introduction}
As the backbone of transportation systems in megacities, metro systems play a critical role in meeting the increasing demand for urban mobility. For example, the Beijing metro---a network consisting of 22 lines and 391 stations---had an average daily ridership of more than 10 million as of the end of 2019 \citep{beijingwiki}. To better satisfy the massive passenger demand, numerous measures have been taken to maximize the operational capacity, such as reducing peak-hour headway, increasing train speed, and removing seats for more standing space. In addition to these engineering practices, recent research also shows increasing interest in developing optimization strategies for the operation of large-scale metro systems, such as designing better timetables and schedules \citep{niu2013optimizing,sun2014demand,yin2016energy}, synchronizing different lines to reduce transfer time \citep{kang2015case}, and integrating the metro network with the bus network to minimize the impact of service disruptions \citep{jin2014enhancing,jin2015optimizing}.
Despite the tremendous efforts in increasing operational capacity, some metro systems are still operated in an oversaturated condition \citep{shi2019cooperative}, simply because even the optimized capacity cannot absorb the surge in passenger demand. As a result, it is common to see overcrowded platforms with left-behind passengers who have to wait for more than one train to get on board during peak hours \citep{zhu2018inferring}. In these extreme scenarios, safety measures need to be taken to prevent platform overcrowding and operational risks and to ensure the smooth operation of the system \citep{xu2019passenger}. A common flow-control measure is \textit{out-of-station} queueing---passengers are compelled to queue outside a metro station before entering it. It is reported that the out-of-station waiting time at a few metro stations in Beijing, Guangzhou, and Shenzhen can exceed ten minutes at peak hours \citep{Shenzhen_queue_2021, beijing_queue_2019}.
Quantifying passenger waiting time in metro systems is crucial for evaluating service quality/performance and understanding passengers' choice behavior. In terms of the economic evaluation of public transport services, waiting time is also a critical component in assessing the social benefit/cost of different planning and operation strategies. In the literature, many methods have been developed to estimate the in-station waiting time or transfer time of metro systems using individual-based smart card transactions \citep{sun2012using,sun2012rail,sun2015integrated,zhu2018inferring,qu2020estimating}. However, although out-of-station waiting may account for a substantial proportion of overall travel time, and the experience is far more unpleasant than waiting inside the station (e.g., under bad weather \citep{zhang2021outdoor}), it has received little attention in the research. This is primarily because out-of-station waiting time cannot be inferred directly from smart card transactions, since out-of-station waiting happens before a passenger taps into a metro station. Although one can conduct field surveys to measure out-of-station waiting, the survey approach is very time-consuming and cannot collect data continuously for long-term monitoring.
The goal of this research is to develop a data-driven method for quantifying out-of-station waiting time using smart card data. To address the aforementioned challenges, we propose an accessible and accurate method by combining the smart card data from both bus and metro systems. Our key idea is to consider those passengers who transferred from a nearby bus stop to the metro station as a proxy, whose transfer time can be estimated as the time interval from the first tapping-out on the bus to the next tapping-in at the metro station. In detail, we first identify these transfer passengers using multi-source data. Next, the time interval between the bus tap-out and the metro tap-in is used to estimate the out-of-station queueing time. To handle the noise in the data and to extend the estimation to all passengers, we assume the latent true out-of-station waiting time is a continuous function of time and estimate it with a Gaussian Process regression with a Student-$t$ likelihood. Moreover, the estimated out-of-station waiting time is used to build queueing diagrams for further analysis. We present a case study for the Tiantongyuan North station of Beijing metro. We find the maximum out-of-station waiting time is around 15 minutes, and the maximum queue length can reach 3000 passengers.
To the best of our knowledge, this is the first quantitative study for out-of-station waiting time estimation. The contribution of this paper is two-fold. First, we propose an innovative approach that uses multi-source data and Gaussian Process regression to estimate the metro out-of-station waiting time. Our method is well-founded and can be used for long-term monitoring. Second, we show by real-world data that out-of-station waiting is a non-negligible part of the total travel time for oversaturated metro stations; more attention should be paid to this underestimated phenomenon.
The rest of the paper is organized as follows. Section~\ref{sec:literature} reviews relevant works and presents the research gap. Section~\ref{sec:background} introduces the background and the problem. Section~\ref{sec:model} presents the modeling framework for out-of-station waiting time estimation. In Section~\ref{sec:casestudy}, we present a case study of the Tiantongyuan North metro station in Beijing. Next, we discuss potential solutions for the out-of-station waiting in Section~\ref{sec:remedies}. Finally, Section~\ref{sec:conclusion} summarizes the paper and provides future research directions.
\section{Literature review}
\label{sec:literature}
Most modern metro systems adopt a fare gantry-based smart card system, which generates a continuous flow of transactions registering when and where passengers start their trips \citep{pelletier2011smart}. Given the rich information collected, smart card data has been widely used in understanding individual travel behavior and enhancing the planning and operation of metro systems \citep[e.g., ][]{niu2013optimizing, sun2014demand, jin2014enhancing, kang2015case, jin2015optimizing, yin2016energy}. In the following, we review the application of smart card data in estimating waiting time and inferring route choices in metro systems.
The waiting time of a metro system is a crucial indicator of transit service quality, and it is also a key determinant of passenger route choice behavior \citep{wardman2004public}. Many methods have been developed to estimate waiting times from smart card data. Typically, these methods decompose the time interval between tapping-in (at the origin) and tapping-out (at the destination) into waiting time, onboard time, and transfer time using certain regression techniques and side information (e.g., train timetables). Meanwhile, these methods usually also output the route choice of each trip. For example, \citet{kusakabe2010estimation} combined smart card data with train timetables to estimate which train is boarded by each individual traveler. \citet{sun2012using} proposed a linear regression model to decompose travel time and applied this model to estimate the spatiotemporal loading profile of trains. \citet{sun2012rail} used smart card data to study travel time reliability and proposed a probabilistic mixture model to infer passenger route choice. \citet{sun2015integrated} developed a probabilistic generative model of trip time observations characterizing both the randomness of link travel time and route choice behavior. This model can be used as a passenger flow assignment framework for service planning and operation. \citet{zhao2016estimation} proposed a probabilistic model to assign each passenger to specific trains. \citet{krishnakumari2020estimation} developed a linear regression method that estimates the delay at each metro station, link, and transfer.
The waiting time and route choice in an oversaturated metro system are more complex. For instance, passengers may travel backward to an uncrowded station to find a seat and then travel forward. \citet{tirachini2016valuation} investigated this interesting backward traveling phenomenon and estimated the disutility of sitting and standing (and also level of crowdedness) in metro trains. Besides, passengers often have to wait for multiple trains to get on board. \citet{zhu2018inferring, ma2019estimation} developed data-driven methods to estimate the number of left-behind passengers in metro systems. \citet{qu2020estimating} also studied the waiting time of left-behind passengers; they found passengers' waiting time in peak hours is much longer than the metro headway. \citet{mo2020capacity} proposed a performance monitoring framework that incorporates the number of left-behind passengers.
The aforementioned studies have proposed various methods to estimate the waiting time, transfer time, or route choice from smart card data. However, all these methods assume that a trip starts when a passenger taps in and finishes when he/she taps out; out-of-station waiting thus leaves no trace in metro smart card data. As a result, these methods cannot quantify out-of-station waiting time, which could be a substantial component of total travel time at oversaturated stations with flow-control measures. In this paper, we combine the smart card data from bus and metro systems to infer the out-of-station waiting time in the metro system. This study is closely related to the work of \citet{sun2015characterizing}, which models passenger transfer time using smart card transactions from both bus and metro services.
\section{Background}\label{sec:background}
Beijing Metro is one of the busiest metro systems in the world. During rush hours, the ridership at a few stations is so high that passengers have to queue for quite a long time outside the station before entering it (see Fig.~\ref{fig:waitinline}). For example, the Tiantongyuan area of Beijing is one of the largest residential hubs in China, with a total population of 700,000 in 2019 \citep{tiantongyuanwiki}. There are three metro stations, Tiantongyuan North (TTY-N), Tiantongyuan (TTY), and Tiantongyuan South (TTY-S), in this area. Due to the large number of commuting passengers, all three stations are oversaturated during morning peak hours on weekdays. In this paper, we use the TTY-N station as an example to demonstrate our proposed approach to quantifying out-of-station waiting time. The location of the TTY-N station is shown in Fig.~\ref{fig:location}. Because the TTY-N station is the northern terminus of Metro Line 5, the boarding rate in the morning peak is controlled to alleviate the overcrowdedness on the platform and to prevent the service at downstream stations from being overwhelmed; this is also one of the reasons for the out-of-station queueing. Without this flow-control intervention, the trains would be fully loaded at departure, leaving no capacity for passengers waiting at the subsequent/downstream stations.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.6\textwidth]{Waitinginline.jpg}
\caption{The queue outside the Tiantongyuan metro station. Photo taken at 8:14 on Thursday, October 31, 2019.}\label{fig:waitinline}
\end{center}
\end{figure}
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.9\textwidth]{Locationandwalk.pdf}
\caption{The location of the Tiantongyuan North metro station and a nearby bus stop.}\label{fig:location}
\end{center}
\end{figure}
The public transit system in Beijing uses a distance-based fare scheme. Therefore, passengers need to tap their smart cards or tickets when getting on and off a bus and when entering and leaving a metro station. Useful information in smart card data includes anonymous IDs, origin/destination, and timestamps of tapping-in/-out. For bus trips, the transactions also record the ID of the bus. Note that user IDs are consistent in both metro and bus systems, so that we can link trips from both systems for a particular user. Next we show how to estimate the out-of-station waiting time by combining the smart card data from both bus and metro systems.
As illustrated in Fig.~\ref{fig:transfer_trip}, we separate all the incoming passengers at a metro station into two groups: (G1) direct passengers who do not have a previous bus transfer and (G2) transfer passengers coming from a nearby bus stop. For a direct passenger $i$, we only know the tap-in time $t_{\text{in},i}$ at the metro fare gantry, but we have no information about the out-of-station queueing. For a transfer passenger $i$, we know the metro tap-in time $t_{\text{in},i}$, the bus tap-out time $t_{\text{out},i}$, and the transfer duration $d_{\text{transfer},i} = t_{\text{in},i} - t_{\text{out},i}$ (we use $t$ for a timestamp and $d$ for a time duration/interval). In addition, we can estimate the out-of-station waiting duration for a passenger in G2 by subtracting the walking time $d_{\text{walk},i}$ between the bus stop and the metro station from the transfer duration.
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.5]{figure_demo2.pdf}
\caption{An illustration of out-of-station queueing of a metro station. G1 means direct passengers without a previous bus trip. G2 represents transfer passengers coming from a previous bus trip. Red numbers to the left side give the bus tap-out time $t_\text{out}$, and blue numbers to the right side give the tap-in time $t_\text{in}$ of the metro station. We estimate the out-of-station queueing time for both G1 and G2 based on the $t_\text{in}$ and $t_\text{out}$ of G2.}
\label{fig:transfer_trip}
\end{figure}
Typically, G2 only accounts for a small percentage of the total boarding passengers. Therefore, we regard G2 as a small sample set drawn from all passengers and use it to estimate the out-of-station waiting profile for all boarding passengers. In doing so, we need to (1) accurately estimate the out-of-station waiting time for all boarding passengers and (2) develop a method to analyze the queueing profile. We introduce the methodologies for these tasks in Section~\ref{sec:model}.
\section{Modeling framework}
\label{sec:model}
This section details the methods for profiling the out-of-station waiting time using smart card data. First, in Section~\ref{sec:GP}, we illustrate the impact of noise in the data and propose a Gaussian Process regression for out-of-station waiting time estimation. Then, in Section~\ref{sec:queue}, we introduce the idea of using a queueing diagram to analyze the out-of-station waiting.
\subsection{Gaussian Process for waiting time estimation}\label{sec:GP}
The out-of-station waiting time of a passenger $i$ in G2 can be roughly estimated by $d_{\text{transfer},i} - d_{\text{walk},i}$, and we refer to it as the \textit{observed} waiting time for simplicity. Fig.~\ref{fig:observation} shows the observed waiting time at different metro tap-in times, where we regard the walking time $d_\text{walk}$ as a constant and determine it by the median value of all $d_\text{transfer}$ during 12:00-4:00 pm (assuming no out-of-station waiting at off-peak hours). We can see the observed waiting time is much higher in the morning peak. However, there is substantial noise in the observed waiting time: even within a short period of time, there are significant discrepancies between observations. Sources of noise include different walking speeds, unsynchronized clocks between smart card readers, intermediate activities, and the fact that some passengers may ``tap out'' before the bus arrives at the stop to speed up alighting. Because of this noise, a well-founded method is required for out-of-station waiting time estimation.
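As a concrete sketch of this preprocessing step (an illustrative helper with hypothetical names, assuming timestamps in seconds since midnight and index-aligned records), the constant walking time and the observed waiting times could be computed as follows:

```python
import statistics

def observed_waiting(bus_tap_out, metro_tap_in):
    """Observed out-of-station waiting times for G2 passengers.

    bus_tap_out / metro_tap_in: per-passenger timestamps (seconds since
    midnight), aligned by index. Illustrative sketch, not the paper's code.
    """
    d_transfer = [t_in - t_out for t_out, t_in in zip(bus_tap_out, metro_tap_in)]
    # Walking time d_walk: median transfer duration in the off-peak window
    # 12:00-16:00, when no out-of-station waiting is assumed.
    off_peak = [d for d, t_in in zip(d_transfer, metro_tap_in)
                if 12 * 3600 <= t_in < 16 * 3600]
    d_walk = statistics.median(off_peak)
    return [d - d_walk for d in d_transfer]
```

Using the median (rather than the mean) of the off-peak transfer durations already gives some robustness against the occasional long off-peak transfer.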
\begin{figure}[!ht]
\begin{center}
\includegraphics[]{observation.pdf}
\caption{The observed out-of-station waiting time (i.e., $d_\text{transfer} - d_\text{walk}$) at different metro tap-in times in a workday.}\label{fig:observation}
\end{center}
\end{figure}
We use a Gaussian Process (GP) regression \citep{williams2006gaussian} to estimate the out-of-station waiting time. A GP is a non-parametric Bayesian model that defines a distribution over functions. Gaussian processes are very flexible and can approximate complex functions using various kinds of kernels and likelihoods. Most importantly, a GP is a probabilistic approach that also gives confidence intervals for the estimated values. We refer readers to \citet{williams2006gaussian} for more information about Gaussian Processes.
Let $y\left(t\right)$ be the observed waiting time for a passenger with metro tap-in time $t$. For ease of description, the time $t$ hereafter refers to the metro tap-in time if not otherwise specified. The observed waiting time can be decomposed into a latent ``true'' waiting time $f(t)$ and a noise term $\varepsilon$:
\begin{equation}\label{eq:observation}
y\left( t \right) =f\left(t \right)+\varepsilon.
\end{equation}
We assume the ``true'' out-of-station waiting time $f\left(t\right)$ is a continuous function of time. We need to infer the latent ``true'' waiting time given the observed waiting time. In doing so, we place a GP prior on $f\left(t\right)$, meaning the function's values $f\left(\mathbf{t}\right)=\left[f\left(t_1\right), f\left(t_2\right),\cdots ,f\left(t_n\right)\right]^\top$ for any finite collection of inputs $\mathbf{t}=\left[t_1, t_2, \cdots, t_n \right]^\top$ have a joint multivariate Gaussian distribution. We write
\begin{equation}\label{eq:GP}
f\left( t \right) \sim \mathcal{GP}\left(\mu\left( t \right), k\left(t, t^{\prime}\right) \right),
\end{equation}
where $\mu\left(t \right)$ is the mean function and $k\left(t, t^{\prime}\right)$ is the covariance/kernel function. By convention, the mean function is set to zero, i.e., $\mu\left(t\right)=0$. For the covariance function, we choose the commonly used squared-exponential kernel
\begin{equation}\label{eq:kernel}
k\left(t, t^{\prime} \mid \ell, \lambda^2 \right)=\lambda^{2} \exp \left(-\frac{\left(t-t^{\prime}\right)^{2}}{2 \ell^{2}}\right),
\end{equation}
where the length scale $\ell$ and the variance $\lambda^2$ are two hyperparameters that should be calibrated by data. The covariance in Eq.~\eqref{eq:kernel} is larger for two closer $t$ and $t^{\prime}$, indicating that passengers who enter the metro station at closer times are more likely to have similar waiting times. We can see a GP is fully specified by the mean and covariance functions and does not impose any assumption on the form of the function $f\left(t\right)$.
When using an i.i.d.\ Gaussian distribution for the noise term $\varepsilon$, the posterior of the latent variable $f(\mathbf{t})$ can be solved analytically. However, this convenient approach is very sensitive to outliers and is not appropriate for our data. To make a robust estimation for the ``true'' waiting time, we assume the noise is a zero-mean i.i.d.\ Student-$t$ distribution with a long-tail probability density function \citep{jylanki2011robust}:
\begin{equation}\label{eq:t}
p\left( \varepsilon \mid \nu, \sigma \right) =\frac{\Gamma\left(\frac{\nu+1}{2}\right)}{\Gamma\left(\frac{\nu}{2}\right) \sqrt{\pi \nu }\sigma}\left(1+\frac{\varepsilon^2}{\nu\sigma^2}\right)^{-\frac{\nu+1}{2}},
\end{equation}
where $\nu$ is the degrees of freedom and $\sigma$ the scale parameter \citep{gelman2013bayesian}.
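The practical motivation for this choice is robustness: the heavy tails of the Student-$t$ density penalize large residuals far less than a Gaussian, so outlying observations barely pull the fit. A quick self-contained check (implementing the density above directly; the parameter values $\nu=4$, $\sigma=1$ are purely illustrative) makes this concrete:

```python
import math

def student_t_logpdf(x, nu, sigma):
    """Log of the zero-mean Student-t density in Eq. (4)."""
    return (math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)
            - 0.5 * math.log(math.pi * nu) - math.log(sigma)
            - (nu + 1) / 2 * math.log1p(x ** 2 / (nu * sigma ** 2)))

def gaussian_logpdf(x, sigma):
    """Log of the zero-mean Gaussian density, for comparison."""
    return -0.5 * math.log(2 * math.pi) - math.log(sigma) - x ** 2 / (2 * sigma ** 2)

# Log-density gap at a 6-sigma residual: the Student-t (nu = 4) assigns such
# an outlier vastly more probability density than the Gaussian does.
gap = student_t_logpdf(6.0, nu=4, sigma=1.0) - gaussian_logpdf(6.0, sigma=1.0)
```

Near zero the two densities are comparable, but at six scale units the Student-$t$ log-density exceeds the Gaussian one by more than ten, i.e., the outlier is over $e^{10}$ times more plausible under the heavy-tailed model.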
Eq.~\eqref{eq:observation}--\eqref{eq:t} describe the GP regression with a Student-$t$ likelihood. Given a set of observed waiting times $\mathbf{y}$ at metro tap-in times $\mathbf{t}$, the four hyperparameters $\theta=\lbrace \ell, \lambda^2, \nu, \sigma\rbrace$ can be optimized by maximizing the log marginal likelihood
\begin{equation}
\log p(\mathbf{y} | \mathbf{t}, \theta) = \log\int p(\mathbf{y} \mid \mathbf{f}) p(\mathbf{f} \mid \mathbf{t}, \theta) d\mathbf{f}.
\end{equation}
The log marginal likelihood cannot be obtained explicitly when the noise follows a Student-$t$ distribution. Therefore, approximate inference methods \citep{neal1997monte, vanhatalo2009gaussian, jylanki2011robust} were developed to fit the hyperparameters. We use the Laplace approximation \citep{vanhatalo2009gaussian} as implemented in GPy \citep{gpy2014} for approximate inference. Next, for new passengers with metro tap-in times $\mathbf{t}_{*}$, we can calculate the posterior distribution $p\left(\mathbf{f}_{*} \mid \mathbf{y}, \mathbf{t}, \mathbf{t}_{*} \right)$ of their ``true'' out-of-station waiting time. The posterior distribution is Gaussian but is also solved approximately \citep{jylanki2011robust}. We use the posterior mean as a point estimate $\hat{f}(\mathbf{t}_{*})$ of the out-of-station waiting time, referred to as the \textit{estimated} waiting time in the following.
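For intuition, the sketch below implements the analytically tractable special case of this model in plain numpy: the squared-exponential kernel of Eq.~\eqref{eq:kernel} combined with i.i.d.\ Gaussian noise, for which the posterior mean has a closed form. The paper's actual estimation replaces the Gaussian noise with the Student-$t$ likelihood and fits it via the Laplace approximation in GPy; all names and hyperparameter values here are illustrative.

```python
import numpy as np

def se_kernel(t1, t2, length, var):
    """Squared-exponential kernel k(t, t') of Eq. (3)."""
    diff = t1[:, None] - t2[None, :]
    return var * np.exp(-diff ** 2 / (2 * length ** 2))

def gp_posterior_mean(t_train, y_train, t_test, length, var, noise_var):
    """Closed-form GP posterior mean under Gaussian observation noise.

    With a Student-t likelihood (as in the paper) this posterior is no
    longer analytic, and approximate inference (e.g., Laplace) is needed.
    """
    K = se_kernel(t_train, t_train, length, var) + noise_var * np.eye(len(t_train))
    K_star = se_kernel(t_test, t_train, length, var)
    # Posterior mean: K_* (K + noise I)^{-1} y
    return K_star @ np.linalg.solve(K, y_train)
```

On densely sampled smooth data the posterior mean essentially interpolates the underlying function, which is the behavior exploited here to recover a smooth waiting-time curve from noisy per-passenger observations.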
\subsection{Queueing diagram}\label{sec:queue}
The out-of-station waiting phenomenon at a metro station is a queueing process with varying arrival rate and service rate. To better analyze the reason and the impact of the out-of-station queue, we further establish a queueing diagram based on the estimated waiting time, as illustrated in the virtual example of Fig.~\ref{fig:queueing}.
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.8]{Totalwaitingtime.pdf}
\caption{Establishing a queueing diagram by the estimated out-of-station waiting time -- a virtual example. (a) The queueing diagram. (b) The estimated out-of-station waiting time (only shows 10\% of passengers for clarity).}
\label{fig:queueing}
\end{figure}
We use the virtual example in Fig.~\ref{fig:queueing} to illustrate how to establish a queueing diagram from the estimated out-of-station waiting time. Fig.~\ref{fig:queueing} (a) shows the queueing diagram, where the departure curve indicates the service rate at the metro gantries. Because the smart card data in Beijing contain passengers' tap-in times, the departure curve is directly reconstructed from passengers' metro tap-in records. The passenger arrival curve is not directly available from the data but can be inferred from the metro tap-in time and the out-of-station waiting duration. For example, the point $B$ in Fig.~\ref{fig:queueing} represents a passenger who tapped into the metro station at $t_B$; we can estimate his/her out-of-station waiting duration $\hat{f}(t_B)$ by the GP model described in Section~\ref{sec:GP}, as shown in Fig.~\ref{fig:queueing} (b). The segment $|AB|$ in the queueing diagram represents the out-of-station waiting duration; we can thus calculate the arrival time for this passenger (point $A$) by $t_B - \hat{f}(t_B)$. The arrival curve can then be obtained by connecting the estimated arrival times of all passengers. Note that we use the estimated waiting time instead of the observed waiting time for both G1 and G2 to avoid the impact of noise in the data.
The queueing diagram provides much more information than just a visualization. For example, the slope of the departure curve represents the service rate at the metro entrance, and the slope of the arrival curve represents the arrival rate. The horizontal distance (e.g., $|AB|$) and the vertical distance (e.g., $|AC|$) between the two curves represent the waiting time and the queue length, respectively. Moreover, the area between the two curves represents the total waiting time of all passengers. Next, we build queueing diagrams to analyze the out-of-station queueing at the TTY-N station.
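Given the estimated waits $\hat{f}$, reconstructing the diagram's headline statistics takes only a few lines. The sketch below (an illustrative helper, any consistent time unit) shifts each tap-in time backward by the estimated wait to obtain the arrival curve, then reads off the total waiting time (area between the curves) and the maximum queue length (largest vertical gap):

```python
import bisect

def queueing_stats(tap_in_times, est_waits):
    """Total waiting time and maximum queue length from a queueing diagram.

    tap_in_times: metro tap-in times (the departure curve).
    est_waits: estimated out-of-station waits, aligned by passenger.
    """
    # Arrival time of each passenger: point A = t_B - f_hat(t_B).
    arrivals = sorted(t - w for t, w in zip(tap_in_times, est_waits))
    departures = sorted(tap_in_times)
    total_wait = sum(est_waits)  # area between arrival and departure curves
    max_queue = 0
    for n_arrived, t in enumerate(arrivals, start=1):
        # Queue length at time t: arrivals so far minus departures so far.
        n_departed = bisect.bisect_right(departures, t)
        max_queue = max(max_queue, n_arrived - n_departed)
    return total_wait, max_queue
```

Since the queue length only increases at arrival instants, checking the vertical gap at each arrival time is sufficient to find its maximum.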
\section{Results}
\label{sec:casestudy}
In this section, we present the results for the out-of-station queueing at the TTY-N metro station. Firstly, the data and the demand pattern at the TTY-N station are introduced in Section~\ref{sec:case_demand}. Next, Section~\ref{sec:case_waiting} exhibits the out-of-station waiting time estimated by GP regressions. Finally, we analyze the queueing process in the morning peak and discuss possible solutions in Section~\ref{sec:case_queue}.
\subsection{Data description}\label{sec:case_demand}
Based on the available data, we select a five-day period from August 3rd to 7th, 2015, to analyze the out-of-station queueing at the TTY-N metro station. This five-day period reflects a typical weekday demand pattern. We have full smart card data for the TTY-N station in this period; the tap-in/out information for all metro passengers, including those who use tickets, is registered in the data. Besides, we also have smart card data from three different bus routes that pass through the TTY-N bus stop (next to the TTY-N metro station). The walking time from the bus stop to the metro station is the same for the three bus routes. The whole analysis is based on the tap-outs at the TTY-N bus stop and the tap-ins at the TTY-N metro station.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.8]{demand.pdf}
\caption{The number of boarding passengers per hour at the TTY-N station (Monday, August 3rd, 2015).}
\label{fig:Subwaydemand}
\end{figure}
If we detect that the same smart card ID has an immediate metro trip after a bus trip, and the interval between the bus tap-out and the metro tap-in is within 30 minutes, we regard this ID as a passenger in G2. After separating passengers into G1 and G2, the boarding demand of G1 and G2 at the TTY-N station on a typical weekday is shown in Fig.~\ref{fig:Subwaydemand}. We can see the TTY-N station serves more than ten thousand passengers per hour in the morning peak (7-8 am), and the demand is very low in the afternoon and evening, showing that TTY-N is a typical residential-type station. On the other hand, the number of passengers in G2 is much smaller than that in G1, because only a small portion of passengers transfer from the bus stop. We use G2 as a small sample drawn from all passengers to recover the out-of-station waiting profile at the TTY-N station.
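The matching rule above can be sketched as follows (pure Python with a hypothetical record layout; the real pipeline additionally restricts matches to the TTY-N bus stop and metro station):

```python
def identify_g2(bus_tap_outs, metro_tap_ins, window=30 * 60):
    """Label metro tap-ins as G2 (bus-to-metro transfer) trips.

    bus_tap_outs / metro_tap_ins: dicts mapping card ID to a sorted list of
    tap-out / tap-in timestamps (seconds). A tap-in is G2 if the same card
    tapped out of a bus at most `window` seconds earlier.
    """
    g2 = []
    for card, tap_ins in metro_tap_ins.items():
        tap_outs = bus_tap_outs.get(card, [])
        for t_in in tap_ins:
            # Most recent bus tap-out at or before this metro tap-in.
            earlier = [t for t in tap_outs if t <= t_in]
            if earlier and t_in - earlier[-1] <= window:
                g2.append((card, earlier[-1], t_in))
    return g2
```

Tap-ins without a matching record form G1; pairing each G2 tap-in with its preceding bus tap-out directly yields the transfer durations used in Section~\ref{sec:GP}.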
\subsection{Estimated out-of-station waiting}\label{sec:case_waiting}
Because the TTY-N station has a low boarding demand in off-peak hours, we can safely assume there is no out-of-station waiting at off-peak hours. Therefore, we regard the walking time as a constant and determine it by the median value of all $d_{\mathrm{transfer}}$ during 12:00-4:00 pm. Next, we can calculate the observed waiting time, as shown in the black points in Fig.~\ref{fig:GP_result}. We can see the noise is very high for the observed waiting time.
\begin{figure}[!t]
\begin{center}
\includegraphics[]{Figure_1.pdf}
\caption{The observed waiting time, the GP posterior mean, and the 95\% confidence interval of the GP posterior mean.}\label{fig:GP_result}
\end{center}
\end{figure}
Next, we fit a GP for each day. Fig.~\ref{fig:GP_result} shows the posterior mean and confidence interval of the estimated out-of-station waiting time. We can see the GP with a Student-$t$ likelihood is robust to the noise, and the estimated waiting time makes intuitive sense. Despite the presence of outliers, the estimated waiting time in general wiggles around zero from 5:00 to 7:00 am and after 9:00 am, which is consistent with the real-life situation. We also see the confidence interval is larger in periods with fewer observations. The estimated waiting time on Friday night is unusually high, which could be caused by too few G2 passengers during that period. We expect the estimated waiting time for morning peaks to be more reliable because of the larger number of observations. All five weekdays have significant out-of-station waiting from around 7:00 to 9:00 am. The maximum waiting time is around 15 minutes from Monday to Thursday, and the waiting time on Friday is relatively shorter. The quantitative results for the waiting time are shown in Tab.~\ref{tab:result}.
\subsection{Queueing analysis}\label{sec:case_queue}
This section analyzes the out-of-station waiting by queueing diagrams. We first set negative values in the estimated waiting time to zero. Following the illustration in Fig.~\ref{fig:queueing}, we next establish queueing diagrams for the five weekdays, as shown in Fig.~\ref{fig:case_queue}.
The upper half of Fig.~\ref{fig:case_queue} shows the cumulative arrival curve and departure curve at the TTY-N metro station. Note that the ``departure'' means entering the metro gantry rather than boarding a train. The arrival/service rate at time $t$ is estimated by the average arrival/service rate in $[t-5\mathrm{min}, t+5\mathrm{min}]$, as shown in the lower half of Fig.~\ref{fig:case_queue}. For all five days, we can see the arrival rates are larger than the service rates from around 7:00 to 7:50 am, and queues are therefore formed. The maximum arrival rate is often more than 300 people/min, while the maximum service rate is only around 200 people/min. The queue lengths start to decrease after around 7:50 am and the queue dissipates at around 9:00 am.
\begin{figure}[!t]
\begin{center}
\includegraphics[]{Figure_2.pdf}
\caption{The queueing process in the morning peak.}\label{fig:case_queue}
\end{center}
\end{figure}
\begin{table}[htbp]
\centering\small
\caption{Quantifying the out-of-station queueing from 7:00 to 9:00 am.}
\begin{tabular}{lrrrrrr}
\toprule
& Monday & Tuesday & Wednesday & Thursday & Friday & Average \\
\midrule
Total number of passengers & 22740 & 23193 & 22964 & 23241 & 22524 & 22932 \\
Maximum waiting time (min) & 15.1 & 15.0 & 15.3 & 13.5 & 11.1 & 14.0 \\
Total waiting time (hour) & 3665 & 2984 & 3240 & 3006 & 2194 & 3018 \\
Average out-of-station waiting time (min) & 9.7 & 7.7 & 8.5 & 7.8 & 5.8 & 7.9 \\
Maximum queue length (people) & 3191 & 3188 & 3251 & 2893 & 2310 & 2967 \\
Maximum arrival rate (people/min) & 295 & 384 & 319 & 302 & 280 & 316 \\
Maximum service rate (people/min) & 212 & 220 & 219 & 220 & 215 & 217 \\
The time with the longest queue & 7:48 & 7:43 & 7:44 & 7:52 & 7:49 & 7:47 \\
\bottomrule
\end{tabular}%
\label{tab:result}%
\end{table}%
Tab.~\ref{tab:result} summarizes major indices of the out-of-station queueing. We can see the out-of-station waiting greatly impacts passengers' travel. The maximum arrival rate is 50\% larger than the maximum service rate. When the queue is at its maximum length, over three thousand passengers are waiting, and it takes around fifteen minutes for a passenger to enter the station. On average, every passenger waits about eight minutes outside the station in the morning peak. Considering the number of passengers, the total out-of-station waiting time exceeds three thousand hours per day at the TTY-N station; this queueing represents a substantial loss of time and efficiency.
\section{Potential solutions for out-of-station queueing} \label{sec:remedies}
Queueing outside of metro stations has a substantial negative impact on passenger travel experience. In this section, we discuss existing and potential solutions to this problem. The fundamental reason for the out-of-station waiting is the mismatch between demand and supply. The commuting demand is rooted in the urban structure and can hardly be changed; many such congestion problems should be avoided at the initial urban planning stage. However, we can still manage the demand from a temporal aspect \citep{halvorsen2019demand}. For example, providing reduced-rate fares for off-peak trips can flatten the peak-hour demand (e.g., shift peak-hour trips to pre-peak and after-peak hours). Temporally differentiated fare schemes have been studied in many works \citep{yang2018managing, lu2020managing, li2018modeling, adnan2020examining}. A few real-world practices show that properly designed off-peak discounts can help reduce metro crowding \citep{halvorsen2016reducing, greene2018bart}. Based on the queueing analysis in Section~\ref{sec:case_queue}, a potential solution is to design a fare scheme for the TTY-N metro station that reduces the boarding demand from 7:00 to 8:00 am. Overall, a temporally differentiated fare scheme is a potential solution, although its effect is hard to evaluate in advance.
Beijing Metro has made considerable efforts on the supply side. In fact, the minimum headway of most metro lines in Beijing has been reduced to less than two minutes to increase network capacity. Moreover, the original TTY-N metro station has been integrated into the Tiantongyuan North Transportation Hub since October 13, 2019. The TTY-N Transportation Hub integrates metro line 5, coaches, buses, and a P+R parking lot. There is a large-scale waiting lobby in the hub, and passengers no longer need to queue in the open air, which is helpful under bad weather conditions. New transportation facilities are also under construction or planning. For example, the Beijing metro line 13A, which is expected to be completed in 2023 \citep{NDRC2019}, can significantly relieve the commuting pressure of the Tiantongyuan area.
Reducing perceived waiting time can also improve the level of service. For example, improving the waiting environment by providing shelters \citep{fan2016waiting} and improving the thermal environment \citep{zhang2021outdoor} can significantly reduce the perceived waiting time. Besides, it has been shown that providing real-time information can reduce the anxiety for uncertainty and the perceived waiting time \citep{watkins2011my, brakewood2014experiment}. Therefore, providing real-time queueing information has the potential to improve transit services \citep{brakewood2015impact}.
Finally, using other transportation modes to share the metro's demand is also a solution. Normally, the bicycle is not an ideal substitution for the metro considering its short travel distance. However, there are always special cases. In 2019, a 6.5 km elevated bicycle-only path was built in Beijing to share the extremely high commuting demand between Huilongguan and Shangdi, where Huilongguan is another high-density residential area suffering from out-of-station queueing. It is reported that cycling between Huilongguan and Shangdi takes around 30 minutes, while it could take more than 40 minutes to commute by metro in the rush hour \citep{bike_only_2019}.
\section{Concluding Remarks}\label{sec:conclusion}
This paper proposes a data-driven method to estimate the waiting time outside of an oversaturated metro station due to flow-control measures. To the best of our knowledge, this paper presents the first quantitative study to measure passengers' out-of-station waiting time. By combining smart card data from the metro and bus systems, we use transfer passengers as a proxy to quantify the queueing time outside of a metro station. A probabilistic approach based on Gaussian Process regression is developed to infer the out-of-station waiting time for all passengers. Besides, we propose to analyze the queueing process by a queueing diagram. In the TTY-N metro station case study, results show our method is robust to the noise in the data and provides a reliable estimation of out-of-station waiting time. We find the out-of-station waiting can be a big burden---more than 15 minutes of waiting time---for passengers at oversaturated metro stations. Our results could help transit agencies better understand service performance. In addition, the accurate estimation of out-of-station waiting can also be used to evaluate user utility and social cost, which could be further used to support decision making, such as designing better flow-control strategies.
A limitation of this work is the lack of validation by a field survey: we can only access smart card data from 2015, and the boarding demand at the TTY-N station has changed drastically since the completion of the TTY-N Transportation Hub and the outbreak of COVID-19. Nevertheless, the GP regression produces reasonable confidence intervals, and we believe that our estimation is a solid reference.
There are several directions for future research. First, the performance of metro systems can be re-evaluated by taking out-of-station waiting into account, especially for megacities like Beijing. Our case study shows that passengers at Tiantongyuan North suffer while downstream passengers benefit from the flow-control measures. A critical research question is to balance this trade-off and design optimized flow-control strategies based on passenger flow assignment when demand exceeds network capacity. Second, because waiting in the open air is more vulnerable to extreme weather, it is important to quantify the disutility of out-of-station waiting time \citep{tirachini2016valuation, zhang2021outdoor}. Furthermore, how the waiting time affects mode choice is also worthy of investigation \citep{sun2012rail}. Finally, an interesting and important research direction is to develop time- and station-dependent transit fare schemes to flatten peak hour demand and thus reduce the mismatch between demand and supply \citep{yang2018managing, lu2020managing, li2018modeling, adnan2020examining}.
\section*{Acknowledgement}
This research is supported by the Fonds de Recherche du Qu\'{e}bec - Soci\'{e}t\'{e} et Culture (FRQSC) under the NSFC-FRQSC Research Program on Smart Cities and Big Data and the Canada Foundation for Innovation (CFI) John R. Evans Leaders Fund.
\bibliographystyle{elsarticle-harv}
\section*{Acknowledgments}
We are grateful to P.~Draper, J.~Feng, P.~Kant, and S.~Profumo for
collaboration relevant to this work, and K.~Matchev for useful input
in preparing this manuscript. DS is supported in part by
U.S.~Department of Energy grant DE--FG02--92ER40701 and by the Gordon
and Betty Moore Foundation through Grant No.~776 to the Caltech Moore
Center for Theoretical Cosmology and Physics.
\section{Introduction}
Neutrinos are known to be massive and have mixing. Among the various ways to understand neutrino masses and mixing, discrete flavour symmetries are widely used in describing the mixing structure as well as offering testable predictions~\cite{King:2015aea,Petcov:2014laa}. $S_4$ is one of the popular groups that are adopted as symmetries of the flavour sector in the literature. It has been used to reproduce the tribimaximal (TB) mixing~\cite{Lam:2008rs,Bazzocchi:2008ej,Ding:2009iy,Ishimori:2010xk,Zhao:2011pv}, bimaximal (BM) mixing~\cite{Altarelli:2009gn,Toorop:2010yh,Meloni:2011fx} and trimaximal (TM) mixing~\cite{King:2011zj,Krishnan:2012me,Varzielas:2012pa,Luhn:2013vna}, and is combined with CP symmetry to give realistic mixing predictions~\cite{Ding:2013hpa,Feruglio:2013hia,Li:2014eia}.
In the meantime, phenomenological observations can give us hints of underlying structure, and are used as starting points for further model building. The above mentioned constant mixing patterns inspire many discrete flavour symmetry models.
The self-complementarity relation (SC) of lepton mixing~\cite{Zhang:2012xu,Zheng:2011uz} was observed in 2012 in light of the relatively large value of $\theta_{13}$ measured by reactor neutrino experiments~\cite{An:2012eh,Ahn:2012nd}. It correlates the three lepton mixing angles in a simple way: the sum of the two relatively small mixing angles equals the large one, which in turn equals $45^\circ$~\footnote{In Ref.~\cite{Zhang:2012xu}, this is the self-complementarity relation of the first kind. There are two other kinds, in which the sum of the two smaller angles equals only the third angle, or only $45^\circ$.}. There are several phenomenological investigations related to this relation~\cite{Zhang:2012xu,Zhang:2012pv,Qu:2013aca,Ke:2014hxa}, but a model realization of it is still missing. The present work is a first attempt in this direction. We intentionally keep the model minimal, leaving extensions to the quark sector, to supersymmetry and/or to GUTs for future studies.
In this work, we first construct a self-complementary mixing pattern from the self-complementarity relation and $\delta_{\rm CP}=-\frac{\pi}{2}$. We then build a neutrino mass model in which an $S_4$ flavour symmetry dictates the mass matrix structure given by the self-complementary mixing. We perform a numerical study of the model's parameter space and discuss the model's predictions for neutrino masses and mixing in the end.
This paper is organized as follows. In Section~\ref{sec:sc}, we introduce the self-complementary mixing and give the structure of the Majorana neutrino mass matrix dictated by it. The explicit model building is presented in Section~\ref{sec:model}, where the mass matrix structure is reproduced. We then perform a numerical analysis of the model's phenomenology in Section~\ref{sec:ph}, and in Section~\ref{sec:sum} we summarize and conclude.
\section{General approach and preparations}\label{sec:sc}
In this section we first construct a mixing pattern featuring the self-complementarity relation, and then use this mixing pattern to obtain the neutrino mass matrix. In the next section we will reproduce this neutrino mass matrix structure in the context of a model. This is in the same spirit as many models in the literature.
\subsection{Self-complementary mixing}
We start with the self-complementarity relation of the first kind~\cite{Zhang:2012xu}, i.e.,
\begin{align}
\theta_{12}+\theta_{13}=\theta_{23}=45^\circ,
\end{align}
where $\theta_{ij}$ are the lepton mixing angles in the standard parametrization. Compared with the latest global fit result in Table~\ref{tab:glb}, we see that $\theta_{12}+\theta_{13}=45^\circ$ works marginally at the $3\sigma$ edge, and $\theta_{23}=45^\circ$ is included at $1\sigma$ only in the normal ordering. Nevertheless, there are several possibilities to account for the deviation from the exact SC relation: the SC relation may hold at a high energy scale (e.g. a seesaw scale) and receive corrections from renormalization group (RG) running effects; it may receive corrections from higher order operators; or it may hold only in the neutrino sector and receive corrections from the charged lepton sector. A quantitative examination is model dependent, as we will soon see in the case of our model. We therefore conclude that the comparison with current data does not undermine the motivation for the present work.
\begin{table}
\centering
\begin{tabular}{l|c|c}
\toprule
 & bfp$\pm 1\sigma$ & $3\sigma$ range\\
\hline
$\theta_{12}~~[^\circ]$&$33.48^{+0.78}_{-0.75}$&$31.29-35.91$\\
$\theta_{23}~~[^\circ]$&$42.3^{+3.0}_{-1.6}$~(N),~$49.5^{+1.5}_{-2.2}$~(I)&$38.2-53.3$~(N),~$38.6-53.3$~(I)\\
$\theta_{13}~~[^\circ]$&$8.50^{+0.20}_{-0.21}$~(N),~$8.21^{+0.20}_{-0.21}$~(I)&$7.85-9.10$~(N),~$7.87-9.11$~(I)\\
$\delta_{\rm CP}~[^\circ]$&$306^{+39}_{-70}$~(N),~$254^{+63}_{-62}$~(I)&$0-360$\\
\bottomrule
\end{tabular}\caption{Global fit result for the mixing parameters~\cite{Gonzalez-Garcia:2014bfa}. N (I) stands for normal (inverted) ordering.}\label{tab:glb}
\end{table}
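As a quick numerical cross-check of the statement above, one can add up the angles quoted in the table (normal ordering); the values are copied from the table and the sums are simple arithmetic:

```python
th12_bf, th13_bf = 33.48, 8.50   # best-fit angles in degrees (normal ordering)
th12_hi, th13_hi = 35.91, 9.10   # 3-sigma upper ends (normal ordering)

print(round(th12_bf + th13_bf, 2))   # -> 41.98, about 3 degrees below 45
print(round(th12_hi + th13_hi, 2))   # -> 45.01, so 45 degrees sits right at the 3-sigma edge
```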
An additional ingredient we use for constructing the self-complementary mixing is $\delta_{\rm CP}=-\frac{\pi}{2}$. We use it for two reasons: firstly, it is the value of $\delta_{\rm CP}$ indicated by T2K~\cite{Abe:2013hdq} and NO$\nu$A~\cite{Bian:2015opa}, and it lies within the $1\sigma$ range of the global fit (Table~\ref{tab:glb}); secondly, it is special in the sense that it maximizes the contribution of $\delta_{\rm CP}$ to the Jarlskog invariant. Applying the SC relation together with $\delta_{\rm CP}=-\frac{\pi}{2}$ to the standard parametrization, we directly obtain the self-complementary mixing as
\begin{align}
U_{\rm SC}=\left(
\begin{array}{ccc}
\cos \left(\frac{\pi }{4}-\theta_{13}\right) \cos \theta_{13} & \sin \left(\frac{\pi }{4}-\theta_{13}\right) \cos \theta_{13}& i \sin \theta_{13} \\
\frac{i\cos \left(\frac{\pi }{4}-\theta_{13}\right) \sin\theta_{13}-\sin \left(\frac{\pi }{4}-\theta_{13}\right)}{\sqrt{2}} & \frac{\cos \left(\frac{\pi }{4}-\theta_{13}\right)+i \sin\left(\frac{\pi }{4}-\theta_{13}\right) \sin \theta_{13}}{\sqrt{2}}& \frac{\cos \theta_{13}}{\sqrt{2}} \\
\frac{\sin \left(\frac{\pi }{4}-\theta_{13}\right)+i \cos \left(\frac{\pi }{4}-\theta_{13}\right) \sin \theta_{13}}{\sqrt{2}} & \frac{i \sin \left(\frac{\pi }{4}-\theta_{13}\right) \sin \theta_{13}-\cos \left(\frac{\pi }{4}-\theta_{13}\right)}{\sqrt{2}} & \frac{\cos\theta_{13}}{\sqrt{2}} \\
\end{array}
\right),\label{eq:Usc}
\end{align}
and the full lepton mixing matrix when neutrinos are Majorana particles is $U_{\rm PMNS}=U_{\rm SC}\,P$, with $P={\rm Diag}\{e^{-i\alpha_1/2},e^{-i\alpha_2/2},1\}$.
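As a consistency check, the matrix in Eq.~(\ref{eq:Usc}) can be reproduced numerically by inserting $\theta_{12}=45^\circ-\theta_{13}$, $\theta_{23}=45^\circ$ and $\delta=-\frac{\pi}{2}$ into the standard parametrization; the sketch below uses an illustrative $\theta_{13}=8.5^\circ$ and checks unitarity, the $|U_{\mu i}|=|U_{\tau i}|$ property discussed further down, and the $(1,3)$ entry $i\sin\theta_{13}$:

```python
import numpy as np

def pmns(th12, th13, th23, delta):
    """Standard-parametrization lepton mixing matrix (Majorana phases omitted)."""
    s12, c12 = np.sin(th12), np.cos(th12)
    s13, c13 = np.sin(th13), np.cos(th13)
    s23, c23 = np.sin(th23), np.cos(th23)
    e = np.exp(1j * delta)
    return np.array([
        [c12 * c13, s12 * c13, s13 * e.conj()],
        [-s12 * c23 - c12 * s23 * s13 * e,
          c12 * c23 - s12 * s23 * s13 * e, s23 * c13],
        [ s12 * s23 - c12 * c23 * s13 * e,
         -c12 * s23 - s12 * c23 * s13 * e, c23 * c13]])

th13 = np.deg2rad(8.5)                                    # illustrative value
U_sc = pmns(np.pi / 4 - th13, th13, np.pi / 4, -np.pi / 2)

assert np.allclose(U_sc @ U_sc.conj().T, np.eye(3))       # unitarity
assert np.allclose(np.abs(U_sc[1]), np.abs(U_sc[2]))      # |U_{mu i}| = |U_{tau i}|
assert np.isclose(U_sc[0, 2], 1j * np.sin(th13))          # (1,3) entry = i sin(th13)
```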
Here we comment on why we have to construct the mass matrix perturbatively to realize a self-complementary mixing. As can be seen from Eq.~(\ref{eq:Usc}), the SC mixing satisfies $|U_{\rm SC}|_{{\rm \mu} i}=|U_{\rm SC}|_{\tau i}$, which is the prediction of $\mu$-$\tau$ exchange symmetry for the mixing~\cite{Zhang:2014rua}. One might guess that constructing the mass matrix directly from Eq.~(\ref{eq:Usc}) would yield a mass matrix with $\mu$-$\tau$ exchange symmetry. Since $|U_{\rm SC}|_{\mu i}=|U_{\rm SC}|_{\tau i}$ is necessary but not sufficient for $\mu$-$\tau$ exchange symmetry, this guess is wrong. The realistic situation, however, is similar: a mass matrix built from Eq.~(\ref{eq:Usc}) that is at the same time simple enough for model building reflects only two of the inputs, $\theta_{23}=45^\circ$ and $\delta_{\rm CP}=-\frac{\pi}{2}$. That is to say, the other ingredient of the SC mixing, $\theta_{12}+\theta_{13}=45^\circ$, is obscured in the direct construction. This ingredient provides a substructure of the mass matrix given by $\theta_{23}=45^\circ$ and $\delta_{\rm CP}=-\frac{\pi}{2}$. To expose this substructure, we have to construct the mass matrix perturbatively.
There is a constant mixing pattern featuring both the SC relation and a maximal CP-violating phase~\cite{Qu:2013aca}. For the reason stated above, we do not use it for the SC model building here. There are also $A_4$ models featuring $\theta_{23}=45^\circ$ and $\delta_{\rm CP}=-\frac{\pi}{2}$~\cite{He:2015afa}.
\subsection{Mass matrix structure inspired from SC mixing}
By identifying
\begin{align}
\sin\theta_{13}&=\lambda,\\
\cos\theta_{13}&\cong 1-\frac{1}{2}\lambda^2,
\end{align}
we get the expansion of $U_{\rm SC}$ in powers of $\lambda$,
\begin{align}
U_{\rm SC}&\equiv U_{\lambda^0}+\lambda U_{\lambda^1}+\lambda^2 U_{\lambda^2}+...\nonumber\\
&= \left(
\begin{array}{ccc}
\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0 \\
-\frac{1}{2} &\frac{1}{2} &\frac{1}{\sqrt{2}} \\
\frac{1}{2} &-\frac{1}{2} & \frac{1}{\sqrt{2}}\\
\end{array}\right) +\lambda\left(
\begin{array}{ccc}
\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & i \\
\frac{1}{2}+\frac{i}{2} &\frac{1}{2}+\frac{i}{2} &0 \\
-\frac{1}{2}+\frac{i}{2} &-\frac{1}{2}+\frac{i}{2} & 0\\
\end{array}\right)+\lambda^2\left(
\begin{array}{ccc}
-\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 0 \\
\frac{1}{4}+\frac{i}{2} &-\frac{1}{4}-\frac{i}{2} &-\frac{1}{2\sqrt{2}} \\
-\frac{1}{4}+\frac{i}{2} &\frac{1}{4}-\frac{i}{2} & -\frac{1}{2\sqrt{2}}\\
\end{array}\right)~~~\nonumber\\
&+...~ .~~~\label{eq:Usc_exp}
\end{align}
We use the expansion of $ U_{\rm SC}$ to get the Majorana mass matrix expanded in $\lambda$,
\begin{align}
\hat{m}_\nu&= U_{\rm SC}^* \hat{m}^d U_{\rm SC}^\dagger\nonumber\\
&=( U_{\lambda^0}+\lambda U_{\lambda^1}+\lambda^2 U_{\lambda^2}+...)^* \hat{m}^d ( U_{\lambda^0}+\lambda U_{\lambda^1}+\lambda^2 U_{\lambda^2}+...)^\dagger\nonumber\\
&\equiv \hat{m}_0 + \lambda \hat{m}_1 + \lambda^2 \hat{m}_2 +...,
\end{align}
where $\hat{m}^d={\rm diag}\{m_1,m_2,m_3\}$. We use a hat above $m$ to distinguish the matrices $\hat{m}_i$ from the eigenvalues $m_i$. The mass matrices at the first few orders of $\lambda$ read
\begin{align}
\hat{m}_0&= U_{\lambda^0}^* \hat{m}^d U_{\lambda^0}^\dagger;\\
\hat{m}_1&= U_{\lambda^0}^* \hat{m}^d U_{\lambda^1}^\dagger+U_{\lambda^1}^* \hat{m}^d U_{\lambda^0}^\dagger;\\
\hat{m}_2&= U_{\lambda^0}^* \hat{m}^d U_{\lambda^2}^\dagger+U_{\lambda^1}^* \hat{m}^d U_{\lambda^1}^\dagger+U_{\lambda^2}^* \hat{m}^d U_{\lambda^0}^\dagger;\\
...\nonumber
\end{align}
Using the $U_{\lambda^i}$ in Eq.~(\ref{eq:Usc_exp}), the mass matrix can be constructed accordingly.
At the leading order, we get
\begin{align}
\hat{m}_0 =\left(
\begin{array}{ccc}
\frac{1}{2}(m_1+m_2) & \frac{1}{2 \sqrt{2}}(m_2-m_1) & \frac{1}{2 \sqrt{2}} (m_1-m_2)\\
\frac{1}{2 \sqrt{2}} (m_2-m_1)& \frac{1}{4} (m_1+m_2+2 m_3) & \frac{1}{4} (-m_1-m_2+2 m_3) \\
\frac{1}{2 \sqrt{2}}(m_1-m_2) & \frac{1}{4} (-m_1-m_2+2 m_3) & \frac{1}{4} (m_1+m_2+2 m_3) \\
\end{array}
\right),
\end{align}
which is of the form
\begin{align}
\hat{m}_0\sim \left(
\begin{array}{ccc}
x & y & -y \\
y & z & z-x \\
-y & z-x & z\\
\end{array}\right),\label{eq:m0}
\end{align}
where $x$, $y$ and $z$ are in general complex, their phases coming solely from the Majorana phases that we associate with the eigenvalues $m_i$ in $\hat{m}^d$. Such a mass matrix can be diagonalized by
\begin{align}
U_0=\left(
\begin{array}{ccc}
\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 0 \\
-\frac{1}{2} & -\frac{1}{2} &\frac{1}{\sqrt{2}} \\
\frac{1}{2} &\frac{1}{2} & \frac{1}{\sqrt{2}}\\
\end{array}\right),
\end{align}
which is the BM mixing \footnote{A diagonal matrix $P={\rm Diag}\{1,-1,1\}$ needs to be multiplied on the left of $U_0$ to make it identical to the BM mixing.}. This is in fact the main reason for our choice of $S_4$ as the flavour group.
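The claim that $U_0$ diagonalizes $\hat{m}_0$ can be verified numerically: in the Majorana convention $\hat{m}_0=U_0^*\,\hat{m}^d\,U_0^\dagger$, one has $U_0^T\hat{m}_0 U_0=\hat{m}^d$ (here $U_0$ is real). The mass values below are illustrative only:

```python
import numpy as np

s = 1 / np.sqrt(2)
U0 = np.array([[   s,   -s, 0],
               [-0.5, -0.5, s],
               [ 0.5,  0.5, s]])                 # the BM-like matrix U_0

m1, m2, m3 = 0.02, 0.03, 0.05                    # illustrative masses (eV)
q = 2 * np.sqrt(2)
m0 = np.array([[(m1 + m2) / 2,  (m2 - m1) / q,           (m1 - m2) / q],
               [(m2 - m1) / q,  (m1 + m2 + 2 * m3) / 4, (-m1 - m2 + 2 * m3) / 4],
               [(m1 - m2) / q, (-m1 - m2 + 2 * m3) / 4,  (m1 + m2 + 2 * m3) / 4]])

# Majorana convention m0 = U0* diag(m) U0^dagger  <=>  U0^T m0 U0 = diag(m)
assert np.allclose(U0.T @ m0 @ U0, np.diag([m1, m2, m3]))
```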
At $\mathcal{O}(\lambda)$, we find $\hat{m}_1$ is of the following form
\begin{align}
\hat{m}_1= a\left(
\begin{array}{ccc}
2 & 0 & 0 \\
0 & -1+i & 1 \\
0 & 1 & -1-i\\
\end{array}\right)+
b\left(
\begin{array}{ccc}
0 & i & i \\
i & 0 & 0 \\
i & 0 & 0\\
\end{array}\right),\label{eq:m1}\\
a=\frac{1}{2}(m_1-m_2),\quad b=-\frac{1}{2\sqrt{2}}(m_1+m_2+2m_3),~\label{eq:ab}
\end{align}
where $a$ and $b$ are complex numbers.
At $\mathcal{O}(\lambda^2)$, we find $\hat{m}_2$ is of the following form
\begin{align}
\hat{m}_2= c\left(
\begin{array}{ccc}
2 & 0 & 0 \\
0 & 1 & 1 \\
0 & 1 & 1\\
\end{array}\right)+
d\left(
\begin{array}{ccc}
0 & 1 & -1 \\
1 & 0 & 0 \\
-1 & 0 & 0\\
\end{array}\right)+
e\left(
\begin{array}{ccc}
0 & i & i \\
i & 0 & 0 \\
i & 0 & 0\\
\end{array}\right),\label{eq:m2}\\
c=-\frac{1}{4}(m_1+m_2+2m_3),\quad d=\frac{5}{4\sqrt{2}}(m_1-m_2),\quad e=-\frac{1}{\sqrt{2}}(m_1-m_2)~\label{eq:cde},
\end{align}
where $c$, $d$ and $e$ are complex as well.
Since $\lambda=\sin\theta_{13}\simeq0.15$, we stop at $\mathcal{O}(\lambda^2)$, so a deviation from the exact SC mixing at the percent level is expected. In the next section, we build a model that reproduces the neutrino mass matrix structure to this order.
\section{Model construction}\label{sec:model}
In this section we build a model that reproduces the neutrino mass matrix structure inspired by the SC mixing. As a first attempt, we keep the model non-supersymmetric and leave the discussion of the quark sector to future work. We adopt an $S_4$ flavour symmetry. The standard model lepton doublets are assigned to the 3 representation of $S_4$. We extend the standard model by adding three right-handed neutrinos, which explain the smallness of the light neutrino masses through the type-I seesaw mechanism~\cite{seesawI}. The right-handed neutrinos are gauge singlets but transform in the 3 representation of $S_4$. Through the spontaneous breaking of the $S_4$ symmetry, which occurs when the flavons acquire their vacuum expectation values (vevs), we obtain the structure of the Majorana mass matrix that results in the desired neutrino mass matrix structure after applying the seesaw.
We first list the field representations in $S_4$ and the charges under the additional symmetries of our model in Table~\ref{tab:assignment}. The Higgs is a singlet of $S_4$ and is not charged under any of the additional symmetries, so it is omitted from the table. We have flavons in the 1, 2 and 3 representations of $S_4$. The $\rm U(1)$ charges are arranged such that no new terms show up at the orders discussed; explicitly, $x\neq m\neq n\neq z$.
\begin{table}
\centering\caption{Field representations in $S_4$ and charges under additional symmetries of the model}\label{tab:assignment}
\resizebox*{\textwidth}{!}{%
\begin{tabular}{l|ccccccccccccccccccc}
\toprule
& L & $e_R$ & $\mu_R$ & $\tau_R$ & N & $\phi_e$ & $\phi_\mu$ & $\phi_\tau$ & $\theta$ & $\xi_1$ & $\phi_1$ & $\psi_1$ & $\phi_{21}$ & $\phi_{22}$ & $\phi_{23}$ & $\phi_{31}$ & $\phi_{32}$ & $\psi_3$ & $\xi_3$\\
\hline
$S_4$ & 3 & 1 & 1 & 1 & 3 & 3 & 3 & 3 & 1 & 1 & 3 & 2 & 3 & 3 & 3 & 3 & 3 & 2 & 1\\
$\rm U(1)$ & -x & z & m & n & x & $\frac{1}{2}(x-z)$ & x-m & x-n & 0 &-2x & -2x & -2x & -x & -x & -x & $-\frac{2}{3}x$ & $-\frac{2}{3}x$ & y & -2x-2y\\
$\rm U(1)_{\rm FN}$ & 0 & 2 & 1 & 0 & 0 & 0 & 0& 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
$\mathbb{Z}_2$ & 0 & 0 & 0 & 0 & 0 & 0 & 0& 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
$\mathbb{Z}_2$ & 0 & 0 & 0 & 0 & 0 & 0 & 0& 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\
$\mathbb{Z}_2$ & 0 & 0 & 0 & 0 & 0 & 0 & 0& 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\
$\mathbb{Z}_3$ & 0 & 0 & 0 & 0 & 0 & 0 & 0& 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 2 & 0 & 0\\
\bottomrule
\end{tabular}}%
\end{table}
The effective Lagrangian reads
\begin{align}
-\mathcal{L}&= y_e\bar{L}\tilde{H}e_{\rm R} \left(\frac{\phi_e}{\Lambda}\right)^2\left(\frac{\theta}{\Lambda}\right)^2
+y_\mu\bar{L}\tilde{H}\mu_{\rm R} \left(\frac{\phi_\mu}{\Lambda}\right)\left(\frac{\theta}{\Lambda}\right)
+y_\tau\bar{L}\tilde{H}\tau_{\rm R} \left(\frac{\phi_\tau}{\Lambda}\right)
+y_\nu\bar{L}HN\nonumber\\
&+\frac{y_{11}}{\Lambda}(NN)_1\xi_1+\frac{y_{12}}{\Lambda}(NN)_2\psi_1+\frac{y_{13}}{\Lambda}(NN)_3\phi_1\nonumber\\
&+\frac{y_{21}}{\Lambda^2}(N\phi_{21})_3(N\phi_{21})_3
+\frac{y_{22}}{\Lambda^2}(NN)_3(\phi_{22}\phi_{22})_3
+\frac{y_{23}}{\Lambda^2}(NN)_3(\phi_{23}\phi_{23})_3\nonumber\\
&+\frac{y_{31}}{\Lambda^3}(N\psi_3)_3(N\psi_3)_3\xi_3
+\frac{y_{32}}{\Lambda^3}\left((NN)_3\phi_{31}\right)_1(\phi_{31}\phi_{31})_1
+\frac{y_{33}}{\Lambda^3}\left((NN)_3\phi_{32}\right)_1(\phi_{32}\phi_{32})_1,~~~~~~~
\end{align}
where the flavons coupling to the charged leptons are labeled with flavour indices, while the flavons coupling to neutrinos are labeled with the numbers 1, 2, 3; besides, ``$\phi$'' stands for a triplet flavon of $S_4$, ``$\psi$'' for a doublet and ``$\xi$'' for a singlet. In the neutrino sector, we denote the $S_4$ contraction with a subscript outside the parenthesis. Since there is no ambiguity in the $S_4$ contraction in the charged lepton sector, the contraction there is not written explicitly. The couplings $y_{ij}$ in the Majorana neutrino mass term are of mass dimension $1$ and are at the scale of the heavy neutrino masses (we use ``$y$'' instead of ``$m$'' to avoid confusion with the various $m$'s in this model); $\Lambda$ denotes the cutoff scale of the theory.
In such a model setup, the structure of the neutrino mass matrix that concerns us comes solely from the $S_4$-breaking flavon vevs. The additional symmetries are used to forbid unwanted terms in the Lagrangian. The $\mathbb{Z}_n$ symmetries are needed to distinguish the copies of flavons in the same $S_4$ representation. The potential Goldstone boson coming from the spontaneous breaking of the $\rm U(1)$ symmetry may be gauged away by adding more particles, which is beyond the scope of the current work. It is also possible to use additional cyclic symmetries instead of the $\rm U(1)$ symmetry to complete the same construction.
\subsection{The charged lepton sector}
In the charged lepton sector, we adopt the same idea as in Ref.~\cite{Altarelli:2009gn} that a $\rm U(1)_{\rm FN}$ symmetry~\cite{Froggatt:1978nt} is used in combination with the other additional symmetries, to generate the hierarchical masses of the charged leptons. When the flavons acquire the following vevs\footnote{An illustrative example of how we get the flavon vev structure is shown in Appendix~\ref{sec:vev}},
\begin{align}
\langle\phi_e\rangle=v_{\phi_e}(0,0,1),\quad\langle\phi_\mu\rangle=v_{\phi_\mu}(0,0,1),\quad\langle\phi_\tau\rangle=v_{\phi_\tau}(0,1,0),
\end{align}
the charged lepton mass matrix reads
\begin{align}
\hat{m}_l=\left(\begin{array}{ccc}
\frac{y_e}{\Lambda^4}v v_{\theta}^2 v_{\phi_e}^2 & 0 & 0\\
0 & \frac{y_\mu}{\Lambda^2}v v_\theta v_{\phi_\mu} & 0\\
0 & 0 & \frac{y_\tau}{\Lambda}v v_{\phi_\tau}\\
\end{array}\right).
\end{align}
As a rough estimate, we require that all the charged lepton Yukawa couplings are of the same magnitude, $y_e\sim y_\mu \sim y_\tau$, and that the flavon vevs are of the same magnitude, $v_{\phi_e} \sim v_{\phi_\mu} \equiv v_{\phi_l}$. Comparing with $\frac{m_e}{m_\mu}|_{\rm expt}\simeq 0.005$ and $\frac{m_\mu}{m_\tau}|_{\rm expt}\simeq 0.06$, we get $\frac{v_{\phi_l}}{\Lambda}\sim 0.08$ and $\frac{v_\theta}{\Lambda}\sim 0.06$, the same as in Ref.~\cite{Altarelli:2009gn}.
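The order-of-magnitude estimate above can be reproduced in a few lines, assuming equal Yukawas so that $m_\mu/m_\tau \sim v_\theta/\Lambda$ and $m_e/m_\mu \sim (v_\theta/\Lambda)(v_{\phi_l}/\Lambda)$:

```python
# Experimental charged-lepton mass ratios quoted in the text
r_em = 0.005                  # m_e / m_mu
r_mt = 0.06                   # m_mu / m_tau

v_theta = r_mt                # m_mu/m_tau ~ v_theta / Lambda
v_phi_l = r_em / v_theta      # m_e/m_mu ~ (v_theta/Lambda)(v_phi_l/Lambda)
print(round(v_theta, 2), round(v_phi_l, 2))   # -> 0.06 0.08
```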
For the field assignments in Table~\ref{tab:assignment}, the charged lepton sector and the neutrino sector are well separated. Moreover, since the right-handed charged leptons are all singlets of $S_4$, the resulting charged lepton mass matrix is always diagonal (even when higher order operators enter). As a result, there are no corrections to the lepton mixing matrix from the charged lepton sector, and we focus on the neutrino part of the model from now on.
\subsection{The neutrino sector}
The Dirac neutrino mass matrix coming from the neutrino Yukawa coupling reads
\begin{align}
\hat{m}_{\rm D}=y_\nu v\left(\begin{array}{ccc}
1 & 0 & 0\\
0 & 0 & 1\\
0 & 1 & 0\\
\end{array}\right).
\end{align}
We construct the Majorana neutrino mass matrix order by order. At leading order, the flavon vevs are
\begin{align}
\langle\xi_1\rangle=v_{\xi_1},\quad\langle\phi_1\rangle=v_{\phi_1}(0,1,1),\quad\langle\psi_1\rangle=v_{\psi_1}(1,\frac{1}{3}).
\end{align}
The resulting Majorana neutrino mass matrix when the flavons acquire their vevs is
\begin{align}
\hat{m}_{\rm LO}=\frac{y_{11}}{\Lambda}v_{\xi_1}\left(\begin{array}{ccc}
1& 0 & 0\\
0 & 0 & 1\\
0 & 1& 0\\
\end{array}\right)+\frac{y_{12}}{\Lambda}v_{\psi_1}\left(\begin{array}{ccc}
1& 0 & 0\\
0 & \frac{\sqrt{3}}{6} & -\frac{1}{2}\\
0 & -\frac{1}{2}& \frac{\sqrt{3}}{6}\\
\end{array}\right)+\frac{y_{13}}{\Lambda}v_{\phi_1}\left(\begin{array}{ccc}
0& -1 &1\\
-1 & 0 & 0\\
1 & 0& 0\\
\end{array}\right),
\end{align}
which resembles the structure of $\hat{m}_0$ in Eq.~(\ref{eq:m0}). The singlet flavon $\xi_1$ is necessary to avoid $m_3=0$ when making direct comparison with $\hat{m}_0$.
At next-to-leading order, the flavon vevs are
\begin{align}
\langle\phi_{21}\rangle=v_{\phi_{21}}(0,1,1),\quad \langle\phi_{22}\rangle=v_{\phi_{22}}(0,1,0),\quad\langle\phi_{23}\rangle=v_{\phi_{23}}(1,1,1),
\end{align}
and the resulting mass matrix is
\begin{align}
\hat{m}_{\rm NLO}&=\frac{y_{21}}{\Lambda^2}v_{\phi_{21}}^2
\left(\begin{array}{ccc}
-2 & 0 & 0\\
0 & 1 &-1\\
0 & -1& 1\\
\end{array}\right)
+\frac{y_{22}}{\Lambda^2}v_{\phi_{22}}^2
\left(\begin{array}{ccc}
0 & 0 & 0\\
0 & 1 & 0\\
0 & 0& -1\\
\end{array}\right) \nonumber\\
&+\frac{y_{23}}{\Lambda^2}v_{\phi_{23}}^2
\left(\begin{array}{ccc}
0 & -2 & -2\\
-2 & 0 & 0\\
-2 & 0 & 0\\
\end{array}\right).
\end{align}
Here we use two terms to reproduce the ``$a$'' term in Eq.~(\ref{eq:m1}). Since we adopt a real representation of $S_4$ as in Ref.~\cite{Altarelli:2009gn}, we absorb the imaginary unit into the coefficient of the constant matrix. We do the same for the ``$b$'' term and the other terms whenever necessary.
At next-to-next-to-leading order, the flavon vevs are
\begin{align}
\langle\xi_3\rangle=v_{\xi_3},\quad\langle\psi_3\rangle=v_{\psi_3}(-\sqrt{3},1),\quad\langle\phi_{31}\rangle=v_{\phi_{31}}(0,1,1),\quad\langle\phi_{32}\rangle=v_{\phi_{32}}(0,-1,1).
\end{align}
The resulting mass matrix is
\begin{align}
\hat{m}_{\rm NNLO}=\frac{y_{31}}{\Lambda^3}v_{\psi_3}^2v_{\xi_3}
\left(\begin{array}{ccc}
3 & 0 & 0\\
0 & \frac{3}{2} & \frac{3}{2}\\
0 & \frac{3}{2} & \frac{3}{2}\\
\end{array}\right)
+\frac{y_{32}}{\Lambda^3}v_{\phi_{31}}^3
\left(\begin{array}{ccc}
0 & -2 & 2\\
-2 & 0 & 0 \\
2 & 0 & 0\\
\end{array}\right)
+\frac{y_{33}}{\Lambda^3}v_{\phi_{32}}^3
\left(\begin{array}{ccc}
0 & -2 & -2\\
-2 & 0 & 0 \\
-2 & 0 & 0\\
\end{array}\right),
\end{align}
which resembles the structure of $\hat{m}_2$ in Eq.~(\ref{eq:m2}).
Due to the strict constraints imposed by all the symmetries of the model, we find no higher order terms contributing to the Majorana neutrino mass matrix up to mass dimension $10$.
The effective light neutrino mass matrix is given by seesaw as
\begin{align}
\hat{m}_{\nu_{\rm model}}=\hat{m}_{\rm D}(\hat{m}_{\rm LO}+\hat{m}_{\rm NLO}+\hat{m}_{\rm NNLO}+...)^{-1}\hat{m}_{\rm D}^T\equiv \hat{m}_{\rm D}\hat{m}_{\rm R}^{-1}\hat{m}_{\rm D}^T,
\end{align}
where we define $\hat{m}_{\rm R}=\hat{m}_{\rm LO}+\hat{m}_{\rm NLO}+\hat{m}_{\rm NNLO}+...$ as the heavy neutrino mass matrix. By the above construction, $\hat{m}_{\rm R}$ resembles the structure inspired by the SC mixing to order $\lambda^2$.
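As an illustration of the scales involved, the type-I seesaw formula can be evaluated with the model's $\hat{m}_{\rm D}$, $y_\nu=0.1$ and a seesaw scale of $10^{13}$ GeV (the values used in the scan of the next section). The entries of the heavy matrix below are purely illustrative placeholders, loosely patterned on the $x$, $y$, $z$ structure of Eq.~(\ref{eq:m0}), and $v=174$ GeV is assumed for the Higgs vev:

```python
import numpy as np

y_nu, v = 0.1, 174.0                        # Yukawa and assumed Higgs vev (GeV)
mD = y_nu * v * np.array([[1, 0, 0],
                          [0, 0, 1],
                          [0, 1, 0]], dtype=float)

M = 1e13                                    # seesaw scale (GeV)
# illustrative heavy matrix with the leading-order x, y, z pattern
x, y, z = 1.0, 0.1, 0.6
mR = M * np.array([[ x,     y,    -y],
                   [ y,     z, z - x],
                   [-y, z - x,     z]])

m_nu = mD @ np.linalg.inv(mR) @ mD.T        # type-I seesaw
m_nu_eV = np.abs(m_nu) * 1e9                # GeV -> eV
print(m_nu_eV.max())                        # sub-eV light-neutrino scale
```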
The parameters in $\hat{m}_{\rm R}$ can be simplified by noticing that the coefficients $a$, $b$, $c$, $d$, $e$ are all of the form $m_1+m_2+2m_3$ or $m_1-m_2$. By forcing $\hat{m}_{\nu_{\rm model}}$ into the same form as the matrix constructed from the SC mixing to $\mathcal{O}(\lambda^2)$ (which also means the same form as $\hat{m}_{\rm R}$), we arrive at five independent real parameters in $\hat{m}_{\rm R}$: $\tilde{v}_{\xi_1}$, $\tilde{v}_{\psi_1}$ and their phase $\gamma$; $\tilde{v}_{\phi_1}$ and its phase $\rho$. For a detailed description of the parameter simplification, see Appendix~\ref{apd:para}.
Although our approach is in the same spirit as the usual model building of this kind, there is a difference: we do not require that $\hat{m}_{\nu_{\rm model}}$ be diagonalized by the SC mixing. In other words, the SC mixing is not exact in our construction, as mentioned at the end of the last section. This fact motivates a detailed scan of the parameter space to check its validity, which we perform in the next section.
\section{Phenomenology}\label{sec:ph}
In this section we perform a numerical scan of the parameter space of the model. The seesaw scale is set at $10^{13}$ GeV, and $y_\nu$ is fixed to $0.1$. We scan over five real parameters: $\tilde{v}_{\xi_1}$, $\tilde{v}_{\psi_1}$ and their phase $\gamma$; $\tilde{v}_{\phi_1}$ and its phase $\rho$. Since the model is constructed at a high energy scale, we use the REAP package~\cite{Antusch:2005gp} to evolve the mixing parameters from the seesaw scale down to low energy for comparison with the oscillation observables.
\begin{figure}
\centering
\includegraphics[scale=0.49]{th13th12.eps}
\includegraphics[scale=0.49]{th13th23.eps}
\includegraphics[scale=0.49]{th13d.eps}
\includegraphics[scale=0.49]{th13al1.eps}
\includegraphics[scale=0.49]{th13al2.eps}
\includegraphics[scale=0.475]{al2al1.eps}
\caption{
Numerical parameter space scan result. For observables (and also $\delta$), the plots are framed in their $3\sigma$ ranges, and the colored bands mark the $1\sigma$ ranges. The ranges are taken from Ref.~\cite{Gonzalez-Garcia:2014bfa}. Blue (red) points are given by the model in agreement with 3$\sigma$ (1$\sigma$) ranges of the low energy neutrino masses and mixing parameters.
}
\label{fig:mixing_parameters}
\end{figure}
In Figure~\ref{fig:mixing_parameters} we show the result of the numerical parameter space scan for neutrino masses in the normal ordering. For the low energy oscillation observables (and also $\delta$), the plots are framed in their $3\sigma$ ranges, and the colored bands mark the $1\sigma$ ranges. The ranges are taken from Ref.~\cite{Gonzalez-Garcia:2014bfa}. We find that a region in the parameter space gives predictions of the mixing parameters within their $3\sigma$ ranges.
In the first two plots of Figure~\ref{fig:mixing_parameters}, we see deviations from the self-complementarity relation. This is understandable: different sets of parameters agree with the SC relation at different levels, and, more importantly, the RG effects on the three mixing angles are different~\cite{Antusch:2003kp}. We prefer to scan the parameter space for points compatible with all the low energy constraints rather than with the SC relation, for obvious reasons: on one hand, this is what we built the model for; on the other hand, reproducing the $\hat{m}_\nu$ structure guarantees an approximate realization of the SC relation. The numerical result clearly carries the features of the SC mixing, as seen in $\theta_{23}$ and $\delta_{\rm CP}$.
Given the numerical result as shown in Figure~\ref{fig:mixing_parameters}, we are ready to discuss the predictions of the model. In the following we give model predictions using the $3\sigma$ low energy constraints.
The CP violating phases are predicted as
\begin{align}
\delta_{\rm CP} \in[256.05^\circ,283.69^\circ];\\
\alpha_1 \in [133.82^\circ,231.05^\circ];\\
\alpha_2 \in [0.019^\circ,115.62^\circ] \cup [249.32^\circ,359.39^\circ].
\end{align}
The rephasing invariant in oscillation, i.e., the Jarlskog invariant~\cite{Jarlskog:1985ht}, is
\begin{align}
J_{\rm CP}=\frac{1}{8}\cos\theta_{13}\sin2\theta_{12}\sin2\theta_{23}\sin2\theta_{13}\sin\delta \in [0.029,0.035].
\end{align}
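With the SC-mixing inputs ($\theta_{12}=45^\circ-\theta_{13}$, $\theta_{23}=45^\circ$, $\delta=-\frac{\pi}{2}$) and an illustrative $\theta_{13}=8.5^\circ$, the magnitude of the Jarlskog invariant lands at the upper end of the quoted range:

```python
import numpy as np

def jarlskog(th12, th13, th23, delta):
    # J = (1/8) cos(th13) sin(2 th12) sin(2 th23) sin(2 th13) sin(delta)
    return (np.cos(th13) * np.sin(2 * th12) * np.sin(2 * th23)
            * np.sin(2 * th13) * np.sin(delta)) / 8

th13 = np.deg2rad(8.5)                        # illustrative value
J = jarlskog(np.pi / 4 - th13, th13, np.pi / 4, -np.pi / 2)
print(round(abs(J), 3))                       # -> 0.035
```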
The sum of the neutrino masses is
\begin{align}
\sum_j m_j \in [0.060,0.067] ~\text{eV} \;,
\end{align}
which lies safely within the cosmological limit $\sum_j m_j < 0.23 \text{ eV}$ \cite{Ade:2013zuv}.
The kinematic mass $m_{\beta}$ as measured
in the KATRIN experiment~\cite{Drexlin:2005zt} is
\begin{align}
m_{\beta}=\sqrt{m_{1}^{2}c_{12}^{2}c_{13}^{2}+m_{2}^{2}s_{12}^{2}c_{13}^{2}+m_{3}^{2}s_{13}^{2}} \in [0.009,0.010] ~\text{eV},
\label{eq:katrin}
\end{align}
which is below the KATRIN sensitivity of $m_{\beta} \sim 0.2$~eV.
The effective Majorana mass in neutrinoless double beta decay is
\begin{align}
|\langle m_{\rm ee}\rangle|=|m_1c_{12}^2c_{13}^2e^{-i\alpha_1}+m_2s_{12}^2c_{13}^2e^{-i\alpha_2}+m_3s_{13}^2e^{-2i\delta}|\in [0.0004,0.006]~\text{eV}.
\end{align}
We show in Figure~\ref{fig:mee_plot} this prediction in comparison with current limits given by oscillation experiments, neutrinoless double beta decay experiments, cosmology and also the KATRIN experiment.
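The $|\langle m_{\rm ee}\rangle|$ formula can be evaluated with illustrative inputs: the lightest mass set to zero (normal ordering), splittings of the order of the global-fit values, SC-mixing angles, and sample phases within the predicted ranges. None of these numbers come from the actual model scan:

```python
import numpy as np

# illustrative normal-ordering spectrum with m1 = 0
dm21, dm31 = 7.5e-5, 2.45e-3                     # mass splittings in eV^2
m1, m2, m3 = 0.0, np.sqrt(dm21), np.sqrt(dm31)

th12, th13 = np.deg2rad(36.5), np.deg2rad(8.5)   # SC-mixing angles
c12, s12 = np.cos(th12), np.sin(th12)
c13, s13 = np.cos(th13), np.sin(th13)
a1, a2, d = np.deg2rad(180.0), 0.0, np.deg2rad(270.0)   # sample phases

mee = abs(m1 * c12**2 * c13**2 * np.exp(-1j * a1)
          + m2 * s12**2 * c13**2 * np.exp(-1j * a2)
          + m3 * s13**2 * np.exp(-2j * d))
print(mee)    # a few times 10^-3 eV, inside the quoted range
```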
\begin{figure}
\centering
\includegraphics[scale=0.7]{meePlot.eps}
\caption{
Prediction for the effective Majorana neutrino mass $|\langle m_{\rm ee}\rangle|$ in neutrinoless double beta decay
experiments as a function of the lightest neutrino mass $m_{\rm min}$. The blue (red) points are given by the model respecting the 3$\sigma$ (1$\sigma$) low energy constraints. The light blue (pink) region is obtained from the 3$\sigma$ ranges of the low energy
neutrino masses and mixings in case of normal (inverted) ordering. The light grey region is the upper limit on $|\langle m_{\rm ee}\rangle|$ given by the EXO-200~\cite{Auger:2012ar}, KamLAND-Zen~\cite{Gando:2012zm}, and GERDA experiments~\cite{Agostini:2013mzu}. The vertical black dashed line is the Planck limit~\cite{Ade:2013zuv}, and the vertical red dot-dashed line represents the limit on $m_{\rm min}$ ($\sim0.2$ eV) obtained from KATRIN sensitivity~\cite{Drexlin:2005zt}.
}
\label{fig:mee_plot}
\end{figure}
Note that the above results are obtained when the light neutrino masses are in the normal ordering. In the inverted ordering case, after a similar parameter space scan, we cannot find viable points given the $3\sigma$ constraints on the observables. We can get some insight into this issue by fitting the model's predictions for $\{\theta_{12}, \theta_{13}, \theta_{23}, \delta_{\rm CP}, \Delta m_{21}^2, \Delta m_{32}^2\}$ to their global fit values. In the inverted ordering case, we get $\chi^2_{\rm min}/{\rm NDF} \simeq 12/1$, indicating that the model is not a suitable description of the data in this case. This is understandable if we look again at $\theta_{23}$ in Table~\ref{tab:glb}, whose best-fit value in the inverted ordering is far from our input $\theta_{23}=45^\circ$. We therefore conclude that the model cannot give realistic predictions in the inverted ordering of neutrino masses.
\section{Summary and conclusion}\label{sec:sum}
In this paper, we construct a self-complementary mixing pattern from the self-complementarity relation and $\delta_{\rm CP}=-\frac{\pi}{2}$. A Majorana neutrino mass matrix is obtained from the perturbative SC mixing to $\mathcal{O}(\lambda^2)$ ($\lambda=\sin\theta_{13}$). We then build an $S_4$ flavour model to reproduce the structure of the Majorana mass matrix. The model gives predictions on neutrino masses and mixing that are compatible with the results of oscillation experiments in the normal ordering of light neutrino masses, and it also makes definite predictions for the quantities that have not been measured yet.
We argue that the SC mixing has to be realized perturbatively in a model, as it contains a substructure of the $\mu-\tau$ symmetric mixing. Using the expanded SC mixing, the Majorana mass matrix is constructed order by order. Although the SC mixing reduces to BM mixing at leading order, the model is not merely another BM $S_4$ model, because the carefully arranged higher order terms are as important as the leading order in revealing the full structure dictated by the SC mixing. This becomes clearer when we look at the neutrino sector of the model, which contains many parameters at first sight, but whose detailed structure leaves most of them correlated.
It is not surprising that the model gives realistic predictions on neutrino masses and mixing parameters. The SC mixing is within the $3\sigma$ ranges of the experimental data. The charged lepton sector is separated from the neutrino sector by the U(1) symmetry, which means no corrections from the charged lepton sector to the lepton mixing. At the same time, the symmetries of the model forbid higher order terms up to mass dimension 10, so the neutrino mass matrix structure is also safe from higher order corrections. Allowing for deviations from the exact SC mixing and taking into account the RG effects, we find a region in the parameter space that is compatible with all the low energy observables only in the case of normal ordering.
The model also gives predictions on the quantities that have not yet been observed. For example, the Dirac CP violating phase is predicted to be in the range $[256.05^\circ,283.69^\circ]$, and the Majorana phases are $\alpha_1 \in [133.82^\circ,231.05^\circ]$ and $\alpha_2 \in [0.019^\circ,115.62^\circ] \cup [249.32^\circ,359.39^\circ]$. The lightest neutrino mass is $m_1\in [0.003,0.006]$ eV. The effective Majorana neutrino mass in neutrinoless double beta decay is $|\langle m_{\rm ee}\rangle| \in [0.0004,0.006]$ eV. These quantities may be measured in future experiments.
In sum, the $S_4$ model we built is carefully controlled at the percent level to render the mass matrix structure dictated by the SC mixing. As a result of this control, few free parameters are left in the model. A numerical study of the parameter space shows that the model gives realistic predictions on neutrino masses and mixings, and it can be tested in future experiments.
\vspace{.5cm}
{\large \bf{Acknowledgement}}
The author would like to thank J. Gehrlein and M. Spinrath for sharing the code generating the $|\langle m_{\rm ee}\rangle|$
vs. $m_{\rm min}$ plot, G.S. Li, I. Girardi and J.P. Dai for discussions on the numerical method, and Prof. X.G. He for his hospitality at SJTU, where this work was done. This work is supported in part by the Shanghai Laboratory for Particle
Physics and Cosmology under Grant No. 11DZ2260700.
\section{Introduction}
\label{sec1}
Although in nature granular materials are usually immersed in a gas or liquid phase (like the air, for instance), the influence of the latter on the transport properties of solid particles is generally neglected in most theoretical and computational studies. However, high-velocity, gas-solid flows occur in a wide range of practical applications (like circulating fluidized beds, for instance) and hence, the impact of the gas phase on grains should be accounted for in many circumstances. An example corresponds to species segregation problems where several works \cite{MLNJ01,NSK03,SSK04,WZXS08,CPSK10,PGM14} have shown that the presence of the interstitial fluid may significantly change the segregation phase-diagrams obtained in previous studies for (dry) granular flows (namely, when the role of the gas phase is neglected).
At a kinetic theory level, the description of such multiphase flows is quite intricate since the system involves two different phases (solid particles and interstitial fluid) and hence, one would need to solve a set of two coupled kinetic equations for each one of the velocity distribution functions of the different phases. On the other hand, in order to gain some insight into this complex problem, most of the models proposed in the literature for gas-solid flows have considered a single kinetic equation for the solid particles where the effect of the surrounding fluid on them is taken into account through an effective external force $\mathbf{F}_\text{fluid}$ \cite{models}.
A simple and realistic way of modeling the fluid-solid interaction force $\mathbf{F}_\text{fluid}$ is by means of a viscous drag force given by
\beq
\label{1.1}
\mathbf{F}_{\text{fluid}}=-m \gamma (\mathbf{v}-\mathbf{U}_g),
\eeq
where $m$ and $\mathbf{v}$ are the mass and the velocity of the particles, respectively, $\gamma$ is the friction coefficient (assumed to be proportional to the gas viscosity $\mu_g$), and $\mathbf{U}_g$ is the (known) mean velocity of the gas phase. The model defined in Eq.\ \eqref{1.1} has been recently considered in different papers to study the shear rheology of frictional hard-sphere suspensions \cite{H13,SMMD13,WGZS14,MSMD15}. In addition, model \eqref{1.1} can be seen as a simplified version of the more general particle acceleration model proposed in Ref.\ \cite{GTSH12} where the effect of the gas phase is not only accounted for by the drag force \eqref{1.1} but also by means of a Langevin-like term. This latter term takes into account the added effects coming from neighboring particles and can be neglected when the mean velocity of the solid particles follows the mean flow velocity of the gas ($\mathbf{U} \simeq \mathbf{U}_g$). Here, $\mathbf{U}$ [defined below in Eq.\ \eqref{2.5}] denotes the mean flow velocity of the solid particles. Thus, the results derived from this simple version of the model can be considered of practical interest to analyze linear transport in dilute gas-solid flows when the mean flow velocity of the solid and gas phases are practically the same (like, for instance, in the simple or uniform shear flow (USF) state \cite{TK95,SMTK96,ChVG15}).
An interesting problem is to assess the impact of the interstitial fluid on the transport properties of solid particles under USF. As usual, solid particles are modeled as a gas of inelastic smooth hard spheres with a constant coefficient of restitution $0<\al\leq 1$. The USF state is likely the simplest flow problem since the only nonzero hydrodynamic gradient is $\partial U_x/\partial y \equiv a$, where $a$ is the \emph{constant} shear rate. Due to its simplicity, this state has been widely studied in the past for dry elastic \cite{GS03} and inelastic \cite{C90,G03} gases as an ideal testing ground to shed light on the response of the system to large shear rates. Years ago, two independent papers \cite{L06,G06} analyzed momentum and heat transport around USF for a dry \emph{dilute} granular gas in spatially inhomogeneous states close to the USF. The heat and momentum fluxes were determined to first order in the deviations of the hydrodynamic field gradients from their values in the reference USF state. Given that the granular gas is strongly sheared, the corresponding transport coefficients are nonlinear functions of both the shear rate and the coefficient of restitution $\al$. This is one of the main novelties of these constitutive equations. On the other hand, in order to get explicit results and due to the mathematical difficulties involved in the general non-stationary problem, a particular class of perturbations was considered to obtain the generalized transport coefficients under steady state conditions. Given that the (scaled) shear rate $a^*\equiv a/\nu$ [$\nu$ is a collision frequency for hard spheres, see Eq.\ \eqref{2.24}] and $\al$ are coupled in the steady state, the generalized transport coefficients are \emph{only} functions of the coefficient of restitution $\al$.
The aim of this paper is to study transport around USF in dilute granular suspensions. As said before, the starting point is the inelastic Boltzmann equation \cite{BP04,P15} with the presence of the viscous drag force \eqref{1.1}. As in Refs.\ \cite{L06,G06}, the Boltzmann equation is solved by means of a Chapman-Enskog-like expansion \cite{CC70} around the reference USF distribution $f^{(0)}$. Since the latter applies for arbitrary values of the shear rate $a$, the successive approximations $f^{(k)}$ in the perturbation expansion retain all the hydrodynamic orders in $a$. Consequently, the problem deals with two kinds of spatial gradients: \emph{small} gradients due to perturbations of the USF and arbitrary shear rates due to the background shear flow. As in Refs.\ \cite{L06,G06}, the study here is restricted to first order in the spatial gradients in the density, temperature, and flow velocity. The question arises then as to whether, and if so to what extent, the conclusions drawn from Refs.\ \cite{L06,G06} may be altered when the new ingredient associated with the presence of the gas phase is accounted for in the theory.
In the first-order approximation, the momentum transport is characterized by the viscosity tensor $\eta_{ijk\ell}$ while the heat flux is characterized by the thermal conductivity tensor $\kappa_{ij}$ and the Dufour-like tensor $\mu_{ij}$. As in the case of dry granular gases, to get explicit analytical results, the steady state conditions are considered and hence, the (scaled) friction coefficient $\gamma^*$ (which characterizes the amplitude of the drag force) is given in terms of the (independent) relevant parameters $a^*$ and $\al$. This contrasts with the results offered in Refs.\ \cite{L06,G06} since the transport coefficients are now explicitly obtained as nonlinear functions of both the shear rate and the coefficient of restitution.
For ordinary fluids (elastic collisions), several previous works studied the shear-rate dependence of the thermal conductivity tensor under shear flow. Thus, Evans \cite{E91} derived years ago a Green-Kubo formula for the thermal conductivity in a strongly shearing fluid. As in the equilibrium case, the thermal conductivity of a shearing steady state is expressed in terms of fluctuations in the steady heat flux. This formula was subsequently employed to calculate the shear-rate dependence of the thermal conductivity of a Lennard-Jones fluid via nonequilibrium molecular dynamics simulation methods \cite{DE93}. In the context of kinetic theory, an explicit expression of the thermal conductivity tensor was derived \cite{G93,G95} by solving the Boltzmann equation by means of an expansion around the shear flow state. These analytical results were shown to compare qualitatively well with the computer simulations performed in Ref.\ \cite{DE93}. It must be noted that the calculations carried out in Refs.\ \cite{G93,G95} slightly differ from the ones carried out in this paper since the former require an additional external force to reach a steady state with constant pressure and linear shear field. Apart from these papers, a more recent paper \cite{SA14} for dry granular gases has determined the thermal conductivity tensor via an expansion around an anisotropic Gaussian distribution function. The authors derived a generalized Fourier law for the granular heat flux where the thermal conductivity is characterized by an anisotropic second rank tensor. A comparison between the results obtained here and those reported before for ordinary \cite{G93,G95} and granular \cite{SA14} sheared gases will be made in Sec.\ \ref{sec7}.
The plan of the paper is as follows. In Sec.\ \ref{sec2} the Boltzmann kinetic equation is introduced and its corresponding balance equations derived. Section \ref{sec3} deals with the relevant results derived in the (unperturbed) USF problem by solving the Boltzmann equation by means of Grad's moment method \cite{G49}. In Sec.\ \ref{sec4} the problem we are interested in is described and the set of coupled linear equations defining the generalized coefficients $\eta_{ijk\ell}$, $\kappa_{ij}$, and $\mu_{ij}$ are provided. Explicit expressions for these shear-rate dependent transport coefficients are then obtained in Sec.\ \ref{sec5} by employing a kinetic model of the Boltzmann equation. The details of the calculations are displayed along several Appendices. The shear-rate dependence of some transport coefficients is illustrated in Sec.\ \ref{sec6} for different values of the coefficient of restitution. Finally, in Sec.\ \ref{sec7} the paper is closed with some concluding remarks.
\section{Boltzmann kinetic equation for monodisperse granular suspensions}
\label{sec2}
We consider a granular suspension of solid particles of mass $m$ and diameter $\sigma$ immersed in a gas of viscosity $\mu_g$. Under rapid flow conditions, particles are modeled as a gas of smooth hard spheres or disks with inelastic collisions. The inelasticity of collisions is characterized by a \emph{constant} (positive) coefficient of normal restitution $\al \leq 1$. As said in the Introduction, a simple and usual way of modeling the effect of the interstitial gas on the dynamic properties of the solid particles is through the presence of nonconservative external forces. These forces are incorporated into the corresponding Boltzmann kinetic equation of the solid particles. Thus, in the low-density regime, the one-particle velocity distribution function $f(\mathbf{r}, \mathbf{v},t)$ of grains obeys the kinetic equation \cite{BP04}
\beq
\label{2.1}
\frac{\partial f}{\partial t}+{\bf v}\cdot \nabla f+\frac{\partial}{\partial \mathbf{v}}\cdot \left(\frac{\mathbf{F}_{\text{fluid}}}{m}f\right)=J[\mathbf{v}|f,f],
\eeq
where the Boltzmann collision operator $J\left[{\bf v}|f,f\right]$ is given by
\beqa
\label{2.2}
J\left[{\bf v}_{1}|f,f\right]&=&\sigma^{d-1}\int \dd {\bf v}
_{2}\int \dd \widehat{\boldsymbol{\sigma}}\,\Theta (\widehat{{\boldsymbol {\sigma }}}
\cdot {\bf g}_{12})(\widehat{\boldsymbol {\sigma }}\cdot {\bf g}_{12})\nonumber\\
& & \times \left[\alpha^{-2}f({\bf v}_1')f({\bf v}_2')-f({\bf v}_1)f({\bf v}_2)\right].
\eeqa
Here, $d$ is the dimensionality of the system ($d=2$ for disks and $d=3$ for spheres), $\boldsymbol
{\sigma}=\sigma \widehat{\boldsymbol {\sigma}}$, $\widehat{\boldsymbol {\sigma}}$ being a unit vector pointing in the direction from the center of particle $1$ to the center of particle $2$, $\Theta $ is the Heaviside step function, and ${\bf g}_{12}={\bf v}_{1}-{\bf v}_{2}$ is the relative velocity. The primes on the velocities in Eq.\ \eqref{2.2} denote the initial
values $\{{\bf v}_{1}^{\prime}, {\bf v}_{2}^{\prime }\}$ that lead to
$\{{\bf v}_{1},{\bf v}_{2}\}$ following a binary collision:
\begin{equation}
{\bf v}_{1,2}^{\prime}={\bf v}_{1,2}\mp\frac{1}{2}\left( 1+\alpha^{-1}\right)
(\widehat{{\boldsymbol {\sigma }}}\cdot {\bf g}_{12})\widehat{{\boldsymbol {\sigma }}}.
\label{2.3}
\end{equation}
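As a quick numerical aside (not part of the original text), the restituting collision rule \eqref{2.3} can be checked directly: it must rescale the normal component of the relative velocity by $-\alpha^{-1}$ while conserving total momentum. A minimal Python sketch with illustrative values:

```python
import math

alpha = 0.9  # coefficient of restitution (illustrative value)

def restituting(v1, v2, sigma_hat, alpha):
    """Initial velocities (v1', v2') of Eq. (2.3) that lead to (v1, v2)."""
    g_n = sum((a - b) * s for a, b, s in zip(v1, v2, sigma_hat))
    fac = 0.5 * (1.0 + 1.0 / alpha) * g_n
    v1p = [a - fac * s for a, s in zip(v1, sigma_hat)]
    v2p = [b + fac * s for b, s in zip(v2, sigma_hat)]
    return v1p, v2p

v1, v2 = [1.0, 0.2, -0.3], [-0.5, 0.1, 0.4]
sig = [s / math.sqrt(3.0) for s in (1.0, 1.0, 1.0)]  # unit vector

v1p, v2p = restituting(v1, v2, sig, alpha)
gn = sum((a - b) * s for a, b, s in zip(v1, v2, sig))
gnp = sum((a - b) * s for a, b, s in zip(v1p, v2p, sig))
# normal relative velocity is rescaled by -1/alpha
print(gnp, -gn / alpha)
# total momentum is unchanged by the collision rule
print([a + b for a, b in zip(v1, v2)], [a + b for a, b in zip(v1p, v2p)])
```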
As mentioned in the Introduction, a simplest way of modeling the fluid-solid interaction force $\mathbf{F}_{\text{fluid}}$ is through the drag force \eqref{1.1} where
\begin{equation}
\label{2.5}
{\bf U}(\mathbf{r},t)=\frac{1}{n(\mathbf{r},t)}\int \;\dd{\bf v} \; {\bf v}\; f(\mathbf{r},\mathbf{v},t)
\end{equation}
is the mean flow velocity of the solid particles, and
\beq
\label{2.6}
n(\mathbf{r},t)=\int\; \dd \mathbf{v}\; f(\mathbf{r},\mathbf{v},t)
\eeq
is the number density of particles. Thus, according to Eqs.\ \eqref{1.1} and \eqref{2.1}, the Boltzmann equation becomes
\beq
\label{2.4}
\frac{\partial f}{\partial t}+{\bf v}\cdot \nabla f-\gamma\Delta \mathbf{U}\cdot \frac{\partial f}{\partial \mathbf{V}}-\gamma\frac{\partial}{\partial \mathbf{V}}
\cdot \mathbf{V} f=J[\mathbf{v}|f,f],
\eeq
where $\Delta \mathbf{U}=\mathbf{U}-\mathbf{U}_g$, and ${\bf V}={\bf v}-{\bf U}$ is the peculiar velocity. Note that in the case of very dilute suspensions, $\gamma$ is assumed to be a constant \cite{K90,KS99,WKL03}.
The macroscopic balance equations for the densities of mass, momentum and energy can be obtained by multiplying Eq.\ \eqref{2.4} by $1$, $m \mathbf{V}$, and $\frac{1}{2}m V^2$, respectively, and integrating over velocity. The result is
\beq
\label{2.7}
D_t n+n \nabla \cdot \mathbf{U}=0,
\eeq
\beq
\label{2.8}
D_t \mathbf{U}+(m n)^{-1} \nabla \cdot \mathsf{P}=-\gamma\Delta \mathbf{U},
\eeq
\beq
\label{2.9}
D_tT+\frac{2}{d n}\left( \nabla \cdot \mathbf{q}+\mathsf{P}:\nabla \mathbf{U} \right)=-2T\gamma-T \zeta.
\eeq
Here, $D_t\equiv \partial_t+\mathbf{v}\cdot \nabla$ is the material derivative,
\beq
\label{2.9.1}
T(\mathbf{r},t)=\frac{m}{d n(\mathbf{r},t)}\int\; \dd \mathbf{v}\; V^2\; f(\mathbf{r},\mathbf{v},t),
\eeq
is the \emph{granular} temperature,
\beq
\label{2.10}
P_{ij}(\mathbf{r},t)=m\int\; \dd \mathbf{v} V_i V_j f(\mathbf{r},\mathbf{v}, t),
\eeq
is the pressure tensor,
\beq
\label{2.11}
\mathbf{q}(\mathbf{r},t)=\frac{m}{2}\int\; \dd \mathbf{v} V^2 \mathbf{V} f(\mathbf{r},\mathbf{v}, t),
\eeq
is the heat flux, and
\beq
\label{2.12}
\zeta(\mathbf{r},t)=-\frac{m}{d n(\mathbf{r},t) T(\mathbf{r},t)}\int\; \dd \mathbf{v} V^2 J[\mathbf{v}|f,f]
\eeq
is the cooling rate characterizing the rate of energy dissipated due to collisions.
Notice that the interaction of solid particles with the gas phase is modeled solely by the friction term \eqref{1.1} since the term accounting for the momentum transferred from the gas (bath) to the granular particles (which is modeled by a stochastic force) has been neglected for the sake of simplicity. This stochastic force contributes to the Boltzmann equation \eqref{2.4} with a Langevin-like term of the form $-\frac{1}{2}\xi \partial^2 f/\partial V^2$, where $\xi$ is the strength of the noise term. As said in Sec.\ \ref{sec1}, this stochastic term was considered in the complete suspension model proposed in Ref.\ \cite{GTSH12}. For elastic collisions and zero shear rate, the inclusion of the above stochastic term yields the balance equation $\partial_t T=-2T \gamma+m\xi$ and so, the Boltzmann equation \eqref{2.4} admits a stable steady equilibrium state. Indeed, it is precisely the condition of admitting an equilibrium state that gives rise to a fluctuation-dissipation theorem \cite{M89} fixing the strength of the noise term [i.e., $\xi=2\gamma T/m$, where $T$ is the steady equilibrium temperature]. The omission of this Langevin-like term could be justified if the bath temperature is very low compared to the granular temperature or if the mean flow velocities of solid and gas phases are quite similar \cite{GTSH12}.
On the other hand, in spite of the absence of the Langevin-like term, the Boltzmann equation \eqref{2.4} still admits a simple solution in the homogeneous state (zero shear rate) for elastic collisions ($\al=1$). This solution is given by a time-dependent Maxwellian distribution. For homogeneous states, Eq.\ \eqref{2.4} becomes
\beq
\label{2.12.1}
\frac{\partial f}{\partial t}-\gamma\frac{\partial}{\partial \mathbf{v}}
\cdot \mathbf{v} f=J[\mathbf{v}|f,f],
\eeq
where an appropriate selection of the frame of reference where the mean flow velocity vanishes ($\mathbf{U}=\mathbf{U}_g=\mathbf{0}$) has been chosen. The only relevant balance equation is that of the temperature \eqref{2.9} which reads
\beq
\label{2.12.2}
\frac{\partial \ln T}{\partial t}=-2 \gamma.
\eeq
Since $\gamma \equiv \text{const.}$, then the solution to Eq.\ \eqref{2.12.2} is simply
\beq
\label{2.12.3}
T(t)=T(0) e^{-2\gamma t},
\eeq
where $T(0)$ is the initial temperature. Under these conditions, it is easy to see that the Boltzmann equation \eqref{2.12.1} has the solution \cite{GSB90,PG14}
\beq
\label{2.12.4}
f_0(\mathbf{v},t)=n \left(\frac{m}{2\pi T(t)}\right)^{d/2} \exp \left(-\frac{m v^2}{2 T(t)}\right),
\eeq
where $T(t)$ is given by \eqref{2.12.3}. An H-theorem has been also proved \cite{GSB90} for the distribution $f_0$ in the sense that, starting from any initial condition and in the presence of the viscous drag force $\gamma \mathbf{v}$, the velocity distribution function $f(\mathbf{r}, \mathbf{v}, t)$ reaches in the long time limit the Maxwellian form \eqref{2.12.4} with a time-dependent temperature.
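The exponential decay \eqref{2.12.3} can be checked against a direct numerical integration of Eq.\ \eqref{2.12.2}. The following Python sketch (illustrative parameter values, not part of the original analysis) does this with an explicit Euler scheme:

```python
import math

gamma, T0 = 0.5, 1.0          # constant friction coefficient and initial temperature
t_end, n = 2.0, 100_000
dt = t_end / n

# explicit Euler integration of dT/dt = -2*gamma*T, Eq. (2.12.2)
T = T0
for _ in range(n):
    T += -2.0 * gamma * T * dt

T_exact = T0 * math.exp(-2.0 * gamma * t_end)   # Eq. (2.12.3)
print(T, T_exact)
```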
Before closing this Section, it is interesting to remark on the situations in which the suspension model \eqref{2.4} is expected to provide reliable predictions. As has been previously discussed in several papers \cite{models,TK95,SMTK96,WKL03}, since the form of the Boltzmann collision operator \eqref{2.2} is the same as for a dry granular gas, one expects that the model \eqref{2.4} is appropriate for problems where the stresses applied by the gas phase on particles have only a weak influence on the dynamics of grains. This necessarily requires that the mean-free time between collisions is much shorter than the viscous relaxation time due to the viscous drag force. For other kinds of systems (e.g., glass beads in liquid water), one should take into account the influence of the interstitial fluid on the Boltzmann collision operator.
\section{Simple shear flow problem in dilute granular suspensions}
\label{sec3}
We assume now that the suspension is in \emph{steady} USF. This state is macroscopically defined by a constant density $n$ and temperature $T$ and the mean velocity $\mathbf{U}$ is
\beq
\label{2.15}
U_i=a_{ij}r_j, \quad a_{ij}=a\delta_{ix}\delta_{jy},
\eeq
where $a$ is the constant shear rate. In addition, as usual in uniformly sheared suspensions \cite{TK95,SMTK96,ChVG15}, the average velocity of particles follows the velocity of the gas phase and so, $\mathbf{U}=\mathbf{U}_g$. In this case, $\Delta {\bf U}=\textbf{0}$ and the Boltzmann equation \eqref{2.4} becomes
\beq
\label{2.16}
-aV_y\frac{\partial f}{\partial V_x}-\gamma\frac{\partial}{\partial
{\bf V}}\cdot {\bf V} f =J[\mathbf{V}|f,f].
\eeq
Upon writing Eq.\ \eqref{2.16} use has been made of the fact that the USF state becomes spatially uniform when one expresses the Boltzmann equation in terms of the peculiar velocity $V_i=v_i-a_{ij}r_j$ \cite{DSBR86}. In the USF problem, the heat flux vanishes and the only relevant balance equation is that of the temperature \eqref{2.9}. In the steady state and for the geometry of the USF, Eq.\ \eqref{2.9} reads
\beq
\label{2.17}
\frac{2}{d n} P_{xy} a=-2T\gamma-\zeta T.
\eeq
Equation \eqref{2.17} implies that the viscous heating term ($-aP_{xy}>0$) is exactly canceled by the cooling terms arising from viscous friction ($\gamma T$) and collisional dissipation ($\zeta T$). Thus, in stationary conditions, for a given value of $\gamma$, the (steady) temperature is a function of the shear rate $a$ and the coefficient of restitution $\al$. Equivalently, one might choose $\gamma$ and $\al$ as independent parameters instead of $a$ and $\al$. This was the choice made in Refs.\ \cite{TK95,SMTK96,ChVG15}. Since we are mainly interested here in obtaining the shear-rate dependence of the transport coefficients, the former choice will be considered in this paper. A remarkable point is that a steady state is still possible for suspensions when the collisions are elastic ($\al=1$ and so, $\zeta=0$) provided $\gamma=-P_{xy} a/(d p)$, where $p=n T$ is the hydrostatic pressure.
The USF state is non-Newtonian. This can be characterized by generalized transport coefficients measuring the departure of transport coefficients from their Navier-Stokes forms. Thus, one can define a non-Newtonian shear viscosity coefficient $\eta(\al, a)$ by
\beq
\label{2.18.0}
P_{xy}=-\eta(\al,a) a.
\eeq
Moreover, while $P_{xx}=P_{yy}=P_{zz}$ in the Navier-Stokes domain, normal stress differences are present in the USF state.
The elements of the pressure tensor $P_{ij}$ can be obtained by multiplying both sides of Eq.\ \eqref{2.16} by $mV_iV_j$ and integrating over velocity. The result is
\beq
\label{2.18}
a_{ik}P_{kj}+a_{jk}P_{ki}+2\gamma P_{ij}=\Lambda_{ij},
\eeq
where
\beq
\label{2.19}
\Lambda_{ij}\equiv \int\; \dd\mathbf{v}\; m V_iV_j J[\mathbf{V}|f,f].
\eeq
So far, the hierarchy \eqref{2.18} is still exact. However, the exact expression of the collision integral $\Lambda_{ij}$ is not known (even for elastic collisions). A good estimate of $\Lambda_{ij}$ can be obtained by using Grad's approximation to $f$ \cite{G49}, namely,
\beq
\label{2.20}
f(\mathbf{V})\to f_\text{M}(\mathbf{V}) \left(1 +\frac{m}{2nT^2}V_iV_j \Pi_{ij}\right),
\eeq
where
\begin{equation}
\label{2.21}
f_\text{M}(\mathbf{V})=n\left(\frac{m}{2\pi T}\right)^{d/2}e^{-mV^2/2T}
\end{equation}
is the local equilibrium distribution function, and
\begin{equation}
\label{2.22}
\Pi_{ij}=P_{ij}-p\delta_{ij}
\end{equation}
is the traceless part of the pressure tensor. When Eq.\ \eqref{2.20} is substituted into the definition of $\Lambda_{ij}$ and nonlinear terms in $\Pi_{ij}$ are neglected, one gets the result \cite{G02}
\beq
\label{2.23}
\Lambda_{ij}=-\nu \left(\beta \Pi_{ij}+\zeta^* P_{ij}\right),
\eeq
where
\beq
\label{2.24}
\nu=\frac{8}{d+2}\frac{\pi^{(d-1)/2}}{\Gamma\left(\frac{d}{2}\right)}n\sigma^{d-1}\sqrt{\frac{T}{m}}
\eeq
is an effective collision frequency,
\beq
\label{2.25}
\zeta^*=\frac{\zeta}{\nu}=\frac{d+2}{4d}\left(1-\alpha^2\right)
\end{equation}
is the dimensionless cooling rate evaluated in the local equilibrium approximation and
\beq
\label{2.26}
\beta=\frac{1+\al}{2}\left[1-\frac{d-1}{2d}(1-\al)\right].
\eeq
As will be shown below, the determination of the collisional moment $\Lambda_{ij}$ by considering only linear terms yields $P_{xx}\neq P_{yy}$ but $P_{yy}=P_{zz}$. This latter identity disagrees with computer simulation results \cite{TK95,SMTK96,ChVG15}. The evaluation of $\Lambda_{ij}$ by retaining all the quadratic terms in the pressure tensor $P_{ij}$ has been recently carried out in Ref.\ \cite{ChVG15}. As expected, the addition of these nonlinear terms allows one to evaluate the normal stress differences in the plane normal to the laminar flow (e.g., $P_{yy}-P_{zz}$). However, given that this difference is quite small, the expression \eqref{2.23} can be considered a reliable approximation. Apart from its simplicity, the linear Grad solution is also essentially motivated by the desire of analytic expressions that show in a clean way the shear-rate dependence of the rheological properties.
Once the collisional moment $\Lambda_{ij}$ is known, the set of coupled equations for $P_{ij}$ can be easily solved. In terms of the reduced shear rate $a^*= a/\nu$ and the coefficient of restitution $\al$, the expressions for the (scaled) elements $P_{ij}^*=P_{ij}/p$ are
\beq
\label{2.27}
P_{yy}^*=P_{zz}^*=\frac{1}{1+2\chi}, \quad P_{xx}^*=d-(d-1)P_{yy}^*,
\eeq
\beq
\label{2.28}
P_{xy}^*=-\frac{\widetilde{a}}{(1+2\chi)^2},
\eeq
where $\widetilde{a}= a^*/\beta$, and $\chi$ is the real root of the cubic equation
\beq
\label{2.29}
\widetilde{a}^2=d \chi(1+2\chi)^2,
\eeq
namely,
\beq
\label{2.30}
\chi(\widetilde{a})=\frac{2}{3}\sinh^2\left[\frac{1}{6}\cosh^{-1}\left(1+\frac{27}{d}\widetilde{a}^2\right)\right].
\eeq
The (scaled) friction coefficient $\gamma^*=\gamma/\nu$ is defined as
\beq
\label{2.31}
\gamma^*=\beta\chi-\frac{1}{2}\zeta^*.
\eeq
In the case of elastic collisions ($\al=1$), Eqs.\ \eqref{2.27}--\eqref{2.31} agree with those obtained \cite{GS03} for a thermostatted dilute gas under USF. Moreover, the analytical results given by Eqs.\ \eqref{2.27}--\eqref{2.31} compare quite well with Monte Carlo simulations of the Boltzmann equation \cite{ChVG15}, even for strong inelasticity.
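As a quick numerical sanity check (not part of the original derivation), the following Python sketch verifies that the closed form \eqref{2.30} indeed solves the cubic equation \eqref{2.29} for several shear rates and both dimensionalities:

```python
import math

def chi(a_tilde, d):
    """Real root of a_tilde**2 = d*chi*(1+2*chi)**2, Eq. (2.30)."""
    return (2.0 / 3.0) * math.sinh(
        math.acosh(1.0 + 27.0 * a_tilde**2 / d) / 6.0) ** 2

for d in (2, 3):
    for a_tilde in (0.1, 0.5, 1.0, 5.0):
        x = chi(a_tilde, d)
        # plug the closed form back into the cubic equation (2.29)
        print(d * x * (1.0 + 2.0 * x) ** 2, a_tilde**2)
```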
\begin{figure}
\includegraphics[width=0.7 \columnwidth,angle=0]{fig1.eps}
\caption{(color online) Dependence of the threshold shear rate $a_{\text{th}}^*$ on the coefficient of restitution $\al$. The dashed line corresponds to a two-dimensional system ($d=2$) while the solid line refers to a three-dimensional ($d=3$) system. Points above the curves correspond to physical solutions $(\gamma^* \geq 0)$ while points below the curves refer to unphysical solutions $(\gamma^*<0$).
\label{fig1}}
\end{figure}
Since $\gamma^* \geq 0$, then necessarily $2\beta \chi-\zeta^* \geq 0$, according to Eq.\ \eqref{2.31}. This means that, at a given value of the coefficient of restitution, there is a threshold value of the (scaled) shear rate $a_{\text{th}}^*$ such that the steady state condition \eqref{2.17} admits a physical solution for $a^* \geq a_{\text{th}}^*$. This physical solution yields a positive granular temperature and is related to what Sangani \emph{et al.} \cite{SMTK96} call \emph{ignited} state. The value of $a_{\text{th}}^*$ is determined from the condition
\beq
\label{2.32}
2\beta \chi=\zeta^*.
\eeq
In particular, for elastic collisions, $\zeta^*=0$ and so, $a_{\text{th}}^*=0$. However, for inelastic collisions, $\zeta^*\neq 0$, and $a_{\text{th}}^*>0$. Thus, the rheological properties are only well-defined for shear rates beyond the nonvanishing $a_{\text{th}}^*$ in the case of granular suspensions ($\al \neq 1$). The $\al$-dependence of $a_{\text{th}}^*$ is plotted in Fig.\ \ref{fig1} for $d=2$ and $d=3$. For strong inelasticity, the curves highlight that the granular suspension is in general beyond the Navier-Stokes domain (non-Newtonian regime) since the (reduced) threshold shear rate $a_{\text{th}}^*$ is not small in general. Thus, for instance $a_{\text{th}}^*\simeq 0.512$ at $\al=0.8$ in the physical three-dimensional case.
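The threshold condition \eqref{2.32} can be evaluated explicitly by combining Eqs.\ \eqref{2.25}, \eqref{2.26}, and \eqref{2.29}. The following Python sketch (an illustrative check, not part of the paper's analysis) reproduces the three-dimensional threshold values quoted above and in the caption of Fig.\ \ref{fig2}:

```python
import math

d = 3  # dimensionality (spheres)

def beta(al):        # Eq. (2.26)
    return 0.5 * (1.0 + al) * (1.0 - (d - 1.0) / (2.0 * d) * (1.0 - al))

def zeta_star(al):   # Eq. (2.25)
    return (d + 2.0) / (4.0 * d) * (1.0 - al**2)

def a_th(al):
    """Threshold shear rate: 2*beta*chi = zeta*, then the cubic (2.29)."""
    chi = zeta_star(al) / (2.0 * beta(al))
    a_tilde = math.sqrt(d * chi * (1.0 + 2.0 * chi) ** 2)
    return beta(al) * a_tilde   # a* = beta * a_tilde

print(a_th(0.8), a_th(0.9), a_th(1.0))
```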
\begin{figure}
\includegraphics[width=0.7 \columnwidth,angle=0]{fig2.eps}
\caption{(color online) Shear-rate dependence of the (scaled) generalized shear viscosity $\eta^*(\al,a^*)/\eta^*(\al, 0)$ for $d=3$ and three different values of the coefficient of restitution $\al$: $\al=1$ (solid line), $\al=0.9$ (dashed line), and $\al=0.8$ (dash-dotted line). Note that $a_{\text{th}}^*\simeq 0.359$ and $a_{\text{th}}^*\simeq 0.512$ for $\al=0.9$ and $\al=0.8$, respectively.
\label{fig2}}
\end{figure}
The fact that for granular suspensions ($\al \neq 1$) a steady state is only possible for sufficiently high shear rates can be easily understood from a physical point of view. For $\gamma^*=0$, the balance equation \eqref{2.32} establishes an intrinsic connection between the shear field [through the nonlinear function $\chi(\al,a^*)$] and the collisional dissipation [through the cooling rate $\zeta^*(\al)$] in the system. Thus, the magnitude of the (scaled) shear rate $a^*$ is set by the coefficient of restitution $\al$. Since $\zeta^* \propto 1-\al^2$, then the cooling rate increases with inelasticity. Moreover, the rheological function $\chi$ increases with increasing $a^*$. Consequently, one needs to consider higher values of $a^*$ as $\al$ decreases to verify the condition \eqref{2.32} and achieve a steady state.
The (reduced) nonlinear shear viscosity $\eta^*=\eta/\eta_0$ can be easily identified from Eqs.\ \eqref{2.18.0} and \eqref{2.28}. Here, $\eta_0=p/\nu$ is the Navier-Stokes shear viscosity of an ordinary (elastic) gas of hard spheres. The expression of $\eta^*$ is given by
\beq
\label{2.33}
\eta^*(\al,a^*)=\frac{1}{\beta(1+2\chi)^2}.
\eeq
Since $\chi \sim a^{*2/3}$ for very large shear rates, then $\eta^* \sim a^{*-4/3}$ and goes to zero in the limit $a^*\to \infty$. To illustrate the shear-rate dependence of $\eta^*$, Fig.\ \ref{fig2} shows the ratio $\eta^*(\al,a^*)/\eta^*(\al,0)$ versus $a^*$ for $d=3$ and three different values of the coefficient of restitution $\al$. As mentioned before, although $\eta^*$ is formally well defined for shear rates smaller than the threshold value $a_{\text{th}}^*$, no steady state exists there except for elastic collisions, so the curves in Fig.\ \ref{fig2} start from the point $a^*=a_{\text{th}}^*$ for $\al \neq 1$. It appears that shear thinning (viscosity decreases with increasing shear rate) is always present, regardless of the value of the coefficient of restitution. We also observe that, at a given value of $a^*$, inelasticity inhibits the momentum transport. However, the influence of inelasticity on the (scaled) shear viscosity is not quantitatively significant.
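A short numerical sketch (illustrative, built from the expressions above rather than taken from the paper) confirms both the shear-thinning behavior and the asymptotic scaling $\eta^*\sim a^{*-4/3}$:

```python
import math

d = 3  # dimensionality (spheres)

def chi(a_tilde):  # Eq. (2.30)
    return (2.0 / 3.0) * math.sinh(
        math.acosh(1.0 + 27.0 * a_tilde**2 / d) / 6.0) ** 2

def eta_star(al, a_star):
    """Eq. (2.33): eta* = 1/(beta*(1+2*chi)^2), with a_tilde = a*/beta."""
    b = 0.5 * (1.0 + al) * (1.0 - (d - 1.0) / (2.0 * d) * (1.0 - al))
    return 1.0 / (b * (1.0 + 2.0 * chi(a_star / b)) ** 2)

# shear thinning: eta* decreases with a* for each coefficient of restitution
for al in (1.0, 0.9, 0.8):
    vals = [eta_star(al, a) for a in (0.6, 1.0, 2.0, 5.0)]
    print(al, vals)

# asymptotics: eta* ~ a*^(-4/3), so doubling a* multiplies eta* by ~2^(-4/3)
r = eta_star(1.0, 2.0e4) / eta_star(1.0, 1.0e4)
print(r, 2.0 ** (-4.0 / 3.0))
```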
\section{Transport coefficients for states close to USF}
\label{sec4}
Let us assume that we perturb the USF by small spatial gradients. This will give rise to new contributions to the momentum and heat fluxes that can be characterized by generalized transport coefficients. Since the system is strongly sheared, the corresponding transport coefficients are highly nonlinear functions of the shear rate. The evaluation of these coefficients is the main objective of the present paper.
As in previous papers \cite{L06,G06,G07}, in order to analyze this problem one has to start from the Boltzmann
equation \eqref{2.4} with a general time and space dependence.
First, it is convenient to continue using the relative velocity ${\bf V}={\bf v}-{\bf U}_0$, where
${\bf U}_0={\sf a}\cdot {\bf r}$ is
the flow velocity of the {\em unperturbed} USF state. As said before, the
only nonzero element of the tensor $\mathsf{a}$ is
$a_{ij}=a\delta_{ix}\delta_{jy}$. On the other hand, in the {\em perturbed} state the true velocity ${\bf U}$ is in general
different from ${\bf U}_0$ since ${\bf U}={\bf U}_0+\delta {\bf U}$, $\delta {\bf U}$ being a small perturbation to
${\bf U}_0$. As a consequence, the true peculiar velocity is now ${\bf c}\equiv {\bf v}-{\bf U}={\bf V}-\delta{\bf U}$. In addition, for the sake of simplicity, we also assume that the interstitial gas is not perturbed and hence, $\mathbf{U}_g=\mathbf{U}_0$. Thus, in the Lagrangian frame moving with velocity $\mathbf{U}_0$, the convective operator $\mathbf{v}\cdot \nabla$ can be written as
\beq
\label{3.0}
\mathbf{v}\cdot \nabla f=\left(\mathbf{V}+\mathbf{U}_0\right)\cdot \nabla f=
-aV_y\frac{\partial f}{\partial V_x}+\left(\mathbf{V}+\mathbf{U}_0\right)\cdot \mathbf{\nabla}f,
\eeq
where the derivative $\nabla f$ is taken now at constant $\mathbf{V}$. In this case,
the Boltzmann equation \eqref{2.4} reads
\beq
\label{3.1}
\partial_{t}f-aV_y\frac{\partial f}{\partial V_x}+\left(\mathbf{V}+\mathbf{U}_0\right)\cdot \mathbf{\nabla}f
-\gamma \frac{\partial}{\partial{\bf V}}\cdot {\bf V} f
=J\left[{\bf v}|f,f\right].
\end{equation}
The corresponding macroscopic balance
equations associated with this disturbed USF state follow from the general equations (\ref{2.9})--(\ref{2.11})
when one takes into account that ${\bf U}={\bf U}_0+\delta {\bf U}$. The result is
\begin{equation}
\label{3.2}
\partial_tn+{\bf U}_0\cdot \nabla n=-\nabla \cdot (n\delta {\bf U}),
\end{equation}
\begin{equation}
\label{3.3}
\partial_t\delta {\bf U}+{\sf a}\cdot \delta {\bf U}+({\bf U}_0+\delta {\bf U})\cdot \nabla \delta {\bf U}
=-\gamma\delta \mathbf{U}-(mn)^{-1}\nabla \cdot {\sf P},
\end{equation}
\beqa
\label{3.4}
& & \frac{d}{2}n\partial_tT+\frac{d}{2}n({\bf U}_0+\delta {\bf U})\cdot \nabla T+aP_{xy}+\nabla
\cdot {\bf q}\nonumber\\
& & +{\sf P}:\nabla \delta {\bf U} =-\frac{d}{2}p\left(2\gamma+\zeta\right),
\eeqa
where the pressure tensor ${\sf P}$, the heat flux ${\bf q}$ and
the cooling rate $\zeta$ are defined by Eqs.\ (\ref{2.10})--(\ref{2.12}), respectively, with the replacement
${\bf V}\rightarrow {\bf c}$.
Since we are interested here in states close to the USF state, it is assumed that the deviations from the USF state are small and hence, the spatial gradients of $n$, $\delta \mathbf{U}$, and $T$ are small. In this case, Eq.\ \eqref{3.1} can be solved by means of a generalization of the conventional Chapman-Enskog method \cite{CC70}, where the velocity distribution function is expanded around a \emph{local} shear flow reference state in terms of the small spatial gradients of the hydrodynamic fields relative to those of USF. This type of Chapman-Enskog-like expansion has been carried out for elastic gases to obtain the set of shear-rate dependent transport coefficients \cite{GS03,LD97} in a thermostatted shear flow problem and it has also been employed in the context of dry granular gases \cite{L06,G06,G07}.
The Chapman-Enskog method assumes the existence of a \emph{normal} solution in which all space and time dependence of the distribution function occurs through a functional dependence of the hydrodynamic fields
\beq
\label{3.4.1}
A(\mathbf{r},t)\equiv \left\{n(\mathbf{r},t), \delta \mathbf{U}(\mathbf{r},t), T(\mathbf{r},t)\right\}.
\eeq
This solution expresses the fact that the space dependence of the shear flow is absorbed in $\mathbf{V}$ and the remaining space and time dependence is through a functional dependence on the fields $A(\mathbf{r},t)$. As in the conventional Chapman-Enskog method, this functional dependence can be made local by an expansion of $f$ in powers of spatial gradients:
\begin{equation}
\label{3.5}
f({\bf r}, {\bf V},t)=f^{(0)}(A({\bf r}, t), {\bf V})+ f^{(1)}(A({\bf r}, t), {\bf V})+\cdots,
\end{equation}
where the reference zeroth-order distribution function corresponds
to the USF distribution function but taking into account the local
dependence of the density and temperature and the change ${\bf V}\rightarrow {\bf V}-\delta{\bf U}({\bf r}, t)$. The successive approximations $f^{(k)}$ are of order $k$ in the gradients of $n$, $T$, and
$\delta {\bf U}$ but retain all the orders in the shear rate $a$.
This is the main feature of this expansion. In addition, as in previous works \cite{GFHY16}, since the friction coefficient $\gamma$ does not induce any flux in the system, it is then assumed to be of at least zeroth order in the gradients. In this paper, only the first-order approximation will be considered.
The expansion (\ref{3.5}) yields the corresponding expansion for the fluxes and the cooling rate when
one substitutes (\ref{3.5}) into their definitions (\ref{2.10})--(\ref{2.12}):
\begin{equation}
\label{3.6}
{\sf P}={\sf P}^{(0)}+{\sf P}^{(1)}+\cdots, \quad {\bf
q}={\bf q}^{(0)}+{\bf q}^{(1)}+\cdots,
\eeq
\beq
\label{3.6.1}
\zeta=\zeta^{(0)}+\zeta^{(1)}+\cdots.
\end{equation}
Finally, as in the usual Chapman-Enskog method, the time derivative is also expanded as
\begin{equation}
\label{3.7}
\partial_t=\partial_t^{(0)}+\partial_t^{(1)}+\partial_t^{(2)}+\cdots,
\end{equation}
where the action of each operator $\partial_t^{(k)}$ is obtained from the hydrodynamic equations
(\ref{3.2})--(\ref{3.4}). These results provide the basis for generating the Chapman-Enskog
solution to the inelastic Boltzmann equation (\ref{3.1}).
\subsection{Zeroth-order approximation}
Substituting the expansions (\ref{3.5})--(\ref{3.7}) into Eq.\ (\ref{3.1}), the kinetic equation
for $f^{(0)}$ is given by
\begin{equation}
\label{3.8}
\partial_t^{(0)}f^{(0)}-aV_y\frac{\partial}{\partial V_x}f^{(0)}
-\gamma \frac{\partial}{\partial{\bf V}}\cdot {\bf V} f^{(0)}
=J[{\bf V}|f^{(0)},f^{(0)}].
\end{equation}
To lowest order in the expansion the conservation laws are
\begin{equation}
\label{3.10}
\partial_t^{(0)}n=0,\quad \partial_t^{(0)}T=-\left(\frac{2}{dn}a P_{xy}^{(0)}+2T\gamma+T\zeta^{(0)}\right),
\end{equation}
\begin{equation}
\label{3.11}
\partial_t^{(0)}\delta U_i=-a_{ij} \delta U_j-\gamma\delta U_i.
\end{equation}
As discussed in previous works \cite{L06,G06,G07}, for given values of $a$, $\gamma$ and $\alpha$, the steady
state condition (\ref{2.17}) establishes a mapping between the
density and temperature so that every density corresponds to one
and only one temperature. Since the density $n({\bf r}, t)$ and
temperature $T({\bf r}, t)$ are specified separately in the {\em
local} USF state, the viscous heating only partially compensates
for the collisional cooling and the viscous friction dissipation, so that $\partial_t^{(0)} T \neq 0$.
Consequently, the zeroth-order distribution $f^{(0)}$ depends on
time through its dependence on the temperature and the (dimensionless) parameters $a^*$, $\gamma^*$, and $\al$ must be considered as independent parameters for general infinitesimal perturbations around the USF state. The fact that the temperature must be considered as a time-dependent parameter has been already accounted for in previous perturbation solutions around driven non-steady states \cite{GMT13,GChV13}.
Since $f^{(0)}$ is a normal solution, then
\begin{eqnarray}
\label{3.12}
\partial_t^{(0)}f^{(0)}&=&\frac{\partial f^{(0)}}{\partial
n}\partial_t^{(0)} n+\frac{\partial f^{(0)}}{\partial
T}\partial_t^{(0)} T+\frac{\partial f^{(0)}}{\partial \delta
U_i}\partial_t^{(0)} \delta U_i\nonumber\\
&=&-\left(\frac{2}{d n}a
P_{xy}^{(0)}+2T\gamma+T\zeta^{(0)}\right)\frac{\partial f^{(0)}}{\partial T}
\nonumber\\
& &-\left(a_{ij}\delta U_j
+\gamma\delta U_i \right)\frac{\partial f^{(0)}}{\partial \delta U_i}\nonumber\\
&=&-\left(\frac{2}{d n}a
P_{xy}^{(0)}+2T\gamma+T\zeta^{(0)}\right)\frac{\partial f^{(0)}}{\partial T}\nonumber\\
& & +\left(a_{ij}\delta U_j
+\gamma\delta U_i \right)\frac{\partial f^{(0)}}{\partial c_i}.
\end{eqnarray}
Upon deriving the last step in Eq.\ \eqref{3.12} use has been made of the fact that $f^{(0)}$ depends on $\delta {\bf U}$ only through the peculiar velocity ${\bf c}$. Substituting Eq.\ \eqref{3.12} into Eq.\ \eqref{3.8} yields the following kinetic equation for $f^{(0)}$:
\beqa
\label{3.13}
& & -\left(\frac{2}{d n}a
P_{xy}^{(0)}+2T\gamma+T\zeta^{(0)}\right)\frac{\partial f^{(0)}}{\partial T}
-ac_y\frac{\partial f^{(0)}}{\partial c_x}\nonumber\\
& &-\gamma \frac{\partial}{\partial{\bf c}}\cdot {\bf c} f^{(0)}
=J[{\bf V}|f^{(0)},f^{(0)}].
\eeqa
The zeroth-order solution leads to ${\bf q}^{(0)}={\bf 0}$ by symmetry. The closed set of equations defining the zeroth-order pressure tensor $\mathsf{P}^{(0)}$ can be obtained from Eq.\ \eqref{3.13} by taking into account Eq.\ \eqref{2.23}. The result is
\beqa
\label{3.14}
& &
-\left(\frac{2}{d n}a
P_{xy}^{(0)}+2T\gamma+T\zeta^{(0)}\right)\frac{\partial P_{ij}^{(0)}}{\partial T} +
a_{ik}P_{jk}^{(0)}\nonumber\\
& & +a_{jk}P_{ik}^{(0)}+2\gamma P_{ij}^{(0)}=
-\nu\left[\beta \left(P_{ij}^{(0)}-p\delta_{ij}\right)+\zeta_0^*P_{ij}^{(0)}\right],
\nonumber\\
\eeqa
where $\zeta_0^*\equiv \zeta^{(0)}/\nu$ is defined by Eq.\ \eqref{2.25}.
The steady state solution of Eq.\ (\ref{3.14}) is given by Eqs.\ (\ref{2.27})--
(\ref{2.29}). However, for non-steady conditions, in general Eqs.\ \eqref{3.14} must be solved numerically to get
the dependence of the zeroth-order pressure tensor $P_{ij}^{(0)}(T)$ on temperature. In the hydrodynamic regime, it is expected that $P_{ij}^{(0)}$ adopts the form
\beq
\label{3.15}
P_{ij}^{(0)}=p P_{ij}^*(\gamma^*,a^*),
\eeq
where the temperature dependence of the (dimensionless) pressure tensor $P_{ij}^*$ is through its dependence on $\gamma^*$ and $a^*$. Since $\gamma^*\propto T^{-1/2}$ and $a^*\propto T^{-1/2}$, then
\beq
\label{3.16}
T\partial_T P_{ij}^{(0)}=P_{ij}^{(0)}-\frac{1}{2}p\left(\gamma^*\frac{\partial P_{ij}^*}{\partial \gamma^*}
+a^*\frac{\partial P_{ij}^*}{\partial a^*}\right).
\eeq
As will be shown below, to determine the generalized transport coefficients in the steady state one needs to know the derivatives $\partial_{\gamma^*}P_{ij}^*$ and $\partial_{a^*}P_{ij}^*$ in this state. These derivatives are evaluated in Appendix \ref{appA}. In what follows, $P_{ij}^{(0)}(T)$ will be considered a known function of $T$.
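Relation \eqref{3.16} is a direct consequence of the chain rule, since $p=nT$ while both $\gamma^*$ and $a^*$ scale as $T^{-1/2}$ at fixed density. As a consistency check (not part of the derivation), the identity can be verified symbolically; the particular test function $P_{ij}^*(\gamma^*,a^*)$ chosen below is arbitrary, the identity holding for any smooth function:

```python
import sympy as sp

n, T, c1, c2 = sp.symbols('n T c1 c2', positive=True)
gs, as_ = sp.symbols('gs as_', positive=True)      # stand for gamma*, a*

# at fixed density, both gamma* and a* scale as T**(-1/2)
gamma_star, a_star = c1 / sp.sqrt(T), c2 / sp.sqrt(T)

# arbitrary smooth test function P*(gamma*, a*)
Pst = gs ** 3 * as_ + sp.sin(gs * as_)

p = n * T
P0 = p * Pst.subs({gs: gamma_star, as_: a_star})   # Eq. (3.15)

lhs = T * sp.diff(P0, T)
rhs = P0 - sp.Rational(1, 2) * p * (
    gs * sp.diff(Pst, gs) + as_ * sp.diff(Pst, as_)
).subs({gs: gamma_star, as_: a_star})
print(sp.simplify(lhs - rhs))  # 0
```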
\begin{figure}
\includegraphics[width=0.7 \columnwidth,angle=0]{fig2.1.eps}
\caption{(color online) Shear-rate dependence of the derivatives of the pressure tensor with respect to $a^*$ and $\gamma^*$ in the steady state for $d=3$ and $\al=1$. The lines (a), (b), and (c) correspond to $\partial_{a^*} P_{xy}^*=\partial_{\gamma^*} P_{xy}^*$, $\partial_{a^*} P_{yy}^*$, and $\partial_{\gamma^*} P_{yy}^*$, respectively.
\label{fig2.1}}
\end{figure}
The shear-rate dependence of the derivatives of the (reduced) pressure tensor with respect to $a^*$ and $\gamma^*$ in the steady state is illustrated in Fig.\ \ref{fig2.1} for a three-dimensional suspension with elastic collisions ($\al=1$). Although we could not prove the identity $\partial_{a^*} P_{xy}^*=\partial_{\gamma^*} P_{xy}^*$ analytically, the numerical results systematically support it. Since the magnitude of these derivatives is in general not small, their influence on transport cannot in principle be neglected.
\subsection{First-order approximation}
The first order approximation is worked out in Appendix \ref{appB}. Only the final results are given here. The velocity distribution function $f^{(1)}$ is
\begin{equation}
\label{3.17}
f^{(1)}={\bf X}_{n}\cdot \nabla n+ {\bf X}_{T}\cdot \nabla T+{\sf X}_{u}:\nabla \delta {\bf U},
\end{equation}
where the vectors ${\bf X}_{n}$ and ${\bf X}_{T}$ and the tensor ${\sf X}_{u}$ are the solutions of the following set of coupled linear integral equations:
\beqa
\label{3.18}
& & -\left(\frac{2}{d p}a
P_{xy}^{(0)}+2\gamma+\zeta^{(0)}\right)T \partial_T X_{n,i}- a
c_y\frac{\partial X_{n,i}}{\partial c_x}
\nonumber\\
& &
-\gamma \frac{\partial}{\partial{\bf c}}\cdot \left({\bf c}
X_{n,i}\right)+{\cal L}X_{n,i}+\frac{T}{n}\left[\frac{2a}{d p}(1-n\partial_n)
P_{xy}^{(0)}\right.\nonumber\\
& &\left. -\zeta^{(0)}\right]X_{T,i}
=Y_{n,i},
\eeqa
\beqa
\label{3.19}
& & -\left(\frac{2}{d p}a
P_{xy}^{(0)}+2\gamma+\zeta^{(0)}\right)T \partial_T X_{T,i}+
\left[\frac{2a}{d n}(\partial_T P_{xy}^{(0)})\right.\nonumber\\
& & \left.+2\gamma+\frac{3}{2}\zeta^{(0)}\right]X_{T,i}- a
c_y\frac{\partial X_{T,i}}{\partial c_x}-\gamma \frac{\partial}{\partial{\bf c}}\cdot \left({\bf c}
X_{T,i}\right)
\nonumber\\
& &
+{\cal L}X_{T,i}=Y_{T,i},
\eeqa
\beqa
\label{3.20}
& & -\left(\frac{2}{d p}a
P_{xy}^{(0)}+2\gamma+\zeta^{(0)}\right)T \partial_T X_{u,k\ell}
- a c_y\frac{\partial X_{u,k\ell}}{\partial c_x}\nonumber\\
& & -\gamma \frac{\partial}{\partial{\bf c}}\cdot \left({\bf c}
X_{u,k\ell}\right)+{\cal L}X_{u,k\ell}
-a\delta_{ky}X_{u,x\ell}\nonumber\\
& &-\gamma X_{u,k\ell}-\zeta_{u,k\ell}T\partial_T
f^{(0)}=Y_{u,k\ell},
\eeqa
where ${\bf Y}_n({\bf c})$, ${\bf Y}_T({\bf c})$, and ${\sf Y}_u({\bf c})$ are defined by Eqs.\ (\ref{b9})--(\ref{b11}),
respectively, and $\zeta_{u,k\ell}$ is defined by Eq.\ (\ref{b12}). An approximate expression of $\zeta_{u,k\ell}$ is given by Eq.\ (\ref{b13}). In addition, ${\cal L}$ is the linearized Boltzmann collision operator around the USF state, namely,
\begin{equation}
\label{3.21}
{\cal L}X \equiv -\left(J[f^{(0)},X]+J[X,f^{(0)}]\right).
\end{equation}
Note that, due to the presence of $P_{xy}^{(1)}$ in Eq.\ \eqref{b5}, the unknown coefficients $\eta_{xyk\ell}$ appear in the quantity $Y_{u,k\ell}$ of Eq.\ \eqref{3.20}. In the particular case of $\gamma^*=0$, Eqs.\ \eqref{3.18}--\eqref{3.20} are consistent with the results derived in Ref.\ \cite{G06} for dry granular gases.
With the distribution $f^{(1)}$ determined by Eq.\ \eqref{3.17}, the first-order corrections to the fluxes are given by
\begin{equation}
\label{3.22}
P_{ij}^{(1)}=-\eta_{ijk\ell} \frac{\partial \delta U_k}
{\partial r_{\ell}},
\end{equation}
\begin{equation}
\label{3.23}
q_i^{(1)}=-\kappa_{ij}\frac{\partial T}{\partial r_j}-
\mu_{ij}\frac{\partial n}{\partial r_j},
\end{equation}
where
\begin{equation}
\label{3.24}
\eta_{ijk\ell}=-\int\; \dd {\bf c}\, mc_ic_j X_{u,k\ell}({\bf c}),
\end{equation}
\begin{equation}
\label{3.25}
\kappa_{ij}=-\int\; \dd {\bf c}\, \frac{m}{2}c^2c_i X_{T,j}({\bf c}),
\end{equation}
\begin{equation}
\label{3.26}
\mu_{ij}=-\int\; \dd {\bf c}\, \frac{m}{2}c^2c_i X_{n,j}({\bf c}).
\end{equation}
Upon writing Eqs.\ \eqref{3.22}--\eqref{3.26} use has been made of
the symmetry properties of $X_{n,i}$, $X_{T,i}$, and $X_{u,ij}$.
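The transport coefficients \eqref{3.24}--\eqref{3.26} are velocity moments of the unknown functions $X$. As an illustration of how such moments can be evaluated numerically once the $X$'s are known, the following sketch computes an integral with the structure of Eq.\ \eqref{3.25} by Gauss--Hermite quadrature for a hypothetical trial function $X(\mathbf{c})=c_z f_\text{M}(\mathbf{c})$, chosen only because the resulting moment is known in closed form for $d=3$, namely $(5/2)nT^2/m$ (the minus sign of the definition is omitted here):

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

n0, T, m = 1.0, 1.5, 2.0                 # arbitrary units
vth = np.sqrt(2.0 * T / m)               # thermal speed

x, w = hermgauss(20)                     # 1D Gauss-Hermite nodes/weights
X, Y, Z = np.meshgrid(x, x, x, indexing='ij')
W = (w[:, None, None] * w[None, :, None] * w[None, None, :]) / np.pi ** 1.5

cx, cy, cz = vth * X, vth * Y, vth * Z
c2 = cx ** 2 + cy ** 2 + cz ** 2

# trial X(c) = c_z f_M(c); the Maxwellian weight is absorbed in W
moment = n0 * np.sum(W * 0.5 * m * c2 * cz ** 2)
exact = 2.5 * n0 * T ** 2 / m            # (5/2) n T^2 / m
print(moment, exact)
```

The quadrature is exact here because the integrand is a low-degree polynomial times the Maxwellian weight.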
In the absence of the gas phase ($\gamma^*=0$), and for $a^*=0$ and $\al=1$, the conventional Navier-Stokes constitutive equations for ordinary gases are recovered, namely,
\beq
\label{3.26.1}
\eta_{ijk\ell}\to \eta_0 \left(\delta_{ik}\delta_{j\ell}+\delta_{jk}\delta_{i\ell}-\frac{2}{d}\delta_{ij}\delta_{k\ell}\right),
\eeq
\beq
\label{3.26.2}
\kappa_{ij}\to \kappa_0 \delta_{ij}, \quad \mu_{ij} \to 0.
\eeq
Here, $\eta_0=p/\nu$ and $\kappa_0=[d(d+2)/2(d-1)]\eta_0/m$ are the expressions of the shear viscosity and thermal conductivity coefficients, respectively, of an ordinary gas of hard disks ($d=2$) or hard spheres ($d=3$) \cite{CC70}. In the absence of shear rate, the expressions of the Navier-Stokes coefficients of a granular suspension have been recently derived in Ref.\ \cite{GFHY16}.
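As a consistency check, the tensorial structure of the Navier-Stokes form of $\eta_{ijk\ell}$ given above can be verified numerically. The sketch below (with $\eta_0$ set to unity) confirms that the tensor is symmetric and traceless in the first pair of indices and that its contraction with a velocity gradient yields $-2\eta_0$ times the traceless symmetrized gradient:

```python
import numpy as np

d = 3
eta0 = 1.0
delta = np.eye(d)

# Navier-Stokes limit of the viscosity tensor
eta = eta0 * (np.einsum('ik,jl->ijkl', delta, delta)
              + np.einsum('jk,il->ijkl', delta, delta)
              - (2.0 / d) * np.einsum('ij,kl->ijkl', delta, delta))

# symmetric and traceless in the first pair of indices
assert np.allclose(eta, eta.transpose(1, 0, 2, 3))
assert np.allclose(np.einsum('iikl->kl', eta), 0.0)

# contraction with an arbitrary velocity gradient g gives
# P^(1) = -2*eta0*D, with D the traceless symmetrized gradient
g = np.random.default_rng(0).normal(size=(d, d))
D = 0.5 * (g + g.T) - (np.trace(g) / d) * delta
P1 = -np.einsum('ijkl,kl->ij', eta, g)
assert np.allclose(P1, -2.0 * eta0 * D)
print("Navier-Stokes limit checks passed")
```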
In general, the set of {\em generalized} transport coefficients
$\eta_{ijk\ell}$, $\kappa_{ij}$, and $\mu_{ij}$ are nonlinear
functions of the coefficient of restitution $\alpha$, the
reduced shear rate $a^*$ and the reduced friction coefficient $\gamma^*$. The anisotropy induced in the system by
the shear flow gives rise to new transport coefficients,
reflecting broken symmetry. Since $P_{ij}^{(1)}$ is a symmetric and traceless tensor, then the viscosity tensor $\eta_{ijk\ell}$ is symmetric and traceless in $ij$, namely,
\beq
\label{3.26.3}
\eta_{ijk\ell}=\eta_{jik\ell}\neq \eta_{ij\ell k}, \quad \eta_{xxk\ell}+\eta_{yyk\ell}+\eta_{zzk\ell}+\cdots=0.
\eeq
The heat flux is expressed in terms of a thermal conductivity tensor $\kappa_{ij}$ and a Dufour-like
tensor $\mu_{ij}$. While the diagonal elements of both tensors can be interpreted as generalizations of the Navier-Stokes transport coefficients, the off-diagonal elements $\kappa_{xy}$, $\kappa_{yx}$, $\mu_{xy}$, and $\mu_{yx}$ are generalizations of Burnett coefficients that, for small shear rates, are proportional to $a^*$. In addition, because of symmetry reasons, the off-diagonal elements $xz$, $zx$, $yz$, and $zy$ of the tensors $\kappa_{ij}$ and $\mu_{ij}$ are identically zero. This is consistent with Eqs.\ \eqref{3.18} and \eqref{3.19}. The above behavior implies that if the thermal gradient is parallel to the $z$ axis ($\nabla T \parallel \widehat{\boldsymbol{z}}$), then $\mathbf{q}^{(1)} \parallel \widehat{\boldsymbol{z}}$, while if $\nabla T \perp \widehat{\boldsymbol{z}}$, then $\mathbf{q}^{(1)} \perp \widehat{\boldsymbol{z}}$. Similarly, many of the elements of the viscosity tensor $\eta_{ijk\ell}$ are zero. For instance, if the only nonzero velocity gradient is $\partial \delta U_x/\partial z$, then $P_{ij}^{(1)}=P_{xz}^{(1)}(\delta_{ix}\delta_{jz}+\delta_{jx}\delta_{iz})$.
\subsection{Steady state conditions}
As in the case of dry granular gases ($\gamma^*=0$), the evaluation of the transport coefficients $\eta_{ijk\ell}$, $\kappa_{ij}$ and $\mu_{ij}$ for general unsteady conditions is quite intricate. This is due essentially to the fact that the temperature dependence of the velocity moments of the distribution $f^{(0)}$ must be numerically determined. Thus, since we want to get analytical expressions for those coefficients, the present study is limited to steady state conditions. This means that the relation \eqref{2.17} is considered at the end of the calculations. In this state, the (scaled) shear rate $a^*$ is coupled to the (reduced) friction coefficient $\gamma^*$ and the coefficient of restitution $\alpha$ so that, only two of the three parameters are independent. Here, as alluded to in Sec.\ \ref{sec2}, $a^*$ and $\alpha$ are chosen as the independent (input) parameters of the problem. This allows us to independently assess the influence of shearing and inelasticity on momentum and heat transport. This contrasts with the analysis of dry granular gases \cite{G06} where both $a^*$ and $\al$ are considered as dependent parameters in the steady state.
Since the relation \eqref{2.17} holds in the steady state, the first term on the left hand side of the integral equations \ \eqref{3.18}--\eqref{3.20} vanishes. In this case, these equations become
\beqa
\label{3.28}
& & - a
c_y\frac{\partial X_{n,i}}{\partial c_x}-\gamma \frac{\partial}{\partial{\bf c}}\cdot \left({\bf c}
X_{n,i}\right)+{\cal L}X_{n,i}
\nonumber\\
& &
+\frac{T}{n}\left[\frac{2a}{d p}(1-n\partial_n)
P_{xy}^{(0)}-\zeta^{(0)}\right]X_{T,i}
=Y_{n,i},
\eeqa
\beqa
\label{3.29}
& &
\left[\frac{2a}{d n}(\partial_T P_{xy}^{(0)})+2\gamma+\frac{3}{2}\zeta^{(0)}\right]X_{T,i}- a
c_y\frac{\partial X_{T,i}}{\partial c_x}\nonumber\\
& & -\gamma \frac{\partial}{\partial{\bf c}}\cdot \left({\bf c}
X_{T,i}\right)
+{\cal L}X_{T,i}=Y_{T,i},
\eeqa
\beqa
\label{3.30}
& &
- a c_y\frac{\partial X_{u,k\ell}}{\partial c_x}-\gamma \frac{\partial}{\partial{\bf c}}\cdot \left({\bf c}
X_{u,k\ell}\right)+{\cal L}X_{u,k\ell}
-a\delta_{ky}X_{u,x\ell}\nonumber\\
& &-\gamma X_{u,k\ell}-\zeta_{u,k\ell}T\partial_T
f^{(0)}=Y_{u,k\ell}.
\eeqa
In Eqs.\ \eqref{3.28}--\eqref{3.30} it is understood that all the quantities are evaluated in the steady state. Moreover, the dependence of $P_{ij}^{(0)}$ on the temperature $T$ is given by Eq.\ \eqref{3.16} while the dependence of $P_{ij}^{(0)}$ on the density can be written as
\beq
\label{3.27}
n\partial_n P_{ij}^{(0)}=P_{ij}^{(0)}-p\left(\gamma^*\frac{\partial P_{ij}^*}{\partial \gamma^*}
+a^*\frac{\partial P_{ij}^*}{\partial a^*}\right).
\eeq
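As with Eq.\ \eqref{3.16}, the density relation \eqref{3.27} follows from the chain rule, since at fixed temperature both $\gamma^*=\gamma/\nu$ and $a^*=a/\nu$ scale as $n^{-1}$ through $\nu\propto nT^{1/2}$. A symbolic spot check with an arbitrary smooth test function $P_{ij}^*(\gamma^*,a^*)$:

```python
import sympy as sp

n, T, g0, a0 = sp.symbols('n T g0 a0', positive=True)
gs, as_ = sp.symbols('gs as_', positive=True)      # stand for gamma*, a*

# nu ~ n*sqrt(T), so gamma* = gamma/nu and a* = a/nu scale as 1/n at fixed T
nu = n * sp.sqrt(T)
gamma_star, a_star = g0 / nu, a0 / nu

# arbitrary smooth test function P*(gamma*, a*)
Pst = gs ** 3 * as_ + sp.sin(gs * as_)

p = n * T
P0 = p * Pst.subs({gs: gamma_star, as_: a_star})

lhs = n * sp.diff(P0, n)
rhs = P0 - p * (
    gs * sp.diff(Pst, gs) + as_ * sp.diff(Pst, as_)
).subs({gs: gamma_star, as_: a_star})
print(sp.simplify(lhs - rhs))  # 0
```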
\section{Results from a BGK-like kinetic model}
\label{sec5}
Needless to say, obtaining the explicit form of the generalized transport coefficients $\eta_{ijk\ell}$, $\kappa_{ij}$, and $\mu_{ij}$ requires solving the integral equations \eqref{3.28}--\eqref{3.30}. Apart from the mathematical
difficulties embodied in the Boltzmann collision operator ${\cal L}$, it is quite apparent that the fourth-degree velocity moments of the zeroth-order distribution $f^{(0)}$ are also needed to determine the heat flux transport coefficients $\mu_{ij}$ and $\kappa_{ij}$. Although these moments could in principle be determined from Grad's moment method by including them in the trial distribution \eqref{2.20}, their evaluation would be an intricate task.
A possible alternative could be the use of the so-called inelastic Maxwell models \cite{BCG00,BK00,EB02}, i.e., models for which the collision rate is independent of the relative velocity of the two colliding particles. The use of these models allows one to obtain the velocity moments of the Boltzmann collision operator without the explicit knowledge of the velocity distribution function. This was the route followed in Ref.\ \cite{G07} to determine the shear-rate dependent transport coefficients in a dry granular sheared gas. However, apart from the difficulties associated with the evaluation of the fourth-degree moments and their derivatives, the results obtained for inelastic Maxwell models \cite{G07} show significant discrepancies from those obtained for inelastic hard spheres \cite{G06}.
Therefore, as in the previous study carried out for dry granular gases \cite{G06}, a kinetic model of the Boltzmann equation is considered to achieve explicit results. As for elastic collisions, the idea is to replace the true Boltzmann collision operator with a simpler, more tractable operator that retains the most relevant physical properties of the Boltzmann operator. Here, we consider a kinetic model \cite{BDS99} based on the well-known Bhatnagar-Gross-Krook (BGK) model \cite{GS03} for
ordinary gases where the operator $J[f,f]$ is \cite{note}
\begin{equation}
\label{4.1}
J[f,f]\to -\beta \nu (f-f_\text{M})+\frac{\zeta}{2}\frac{\partial}
{\partial {\bf c}}\cdot \left({\bf c}f\right).
\end{equation}
Here, $\nu$ is the effective collision frequency defined by Eq.\ \eqref{2.24}, $f_\text{M}(\mathbf{c})$ is the Maxwellian distribution \eqref{2.21}, $\beta$ is given by Eq.\ \eqref{2.26} and $\zeta$ is the cooling rate. It is easy to see that the BGK model yields the same expressions for the pressure tensor in the steady USF state as those derived from Grad's method [Eqs.\ \eqref{2.27}--\eqref{2.29}]. Moreover, the fourth-degree velocity moments obtained from the BGK model compare quite well with Monte Carlo simulations \cite{ChVG15,AS05} of the Boltzmann equation. This confirms again the reliability of kinetic models to evaluate the velocity moments of the true Boltzmann equation \cite{GS03}.
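As an illustration of the structure of the model \eqref{4.1}, the following one-dimensional sketch (with arbitrary parameter values and a simple finite-difference discretization of the drag term) verifies numerically that the operator conserves the number density: the relaxation term vanishes upon integration when $f$ and $f_\text{M}$ carry the same density, and the drag term is a total derivative in velocity space:

```python
import numpy as np

# 1D sketch of the BGK-type operator of Eq. (4.1):
#   J[f] -> -beta*nu*(f - f_M) + (zeta/2) d/dc (c f)
beta, nu, zeta, n0, T, m = 1.2, 1.0, 0.3, 1.0, 1.0, 1.0  # illustrative

c = np.linspace(-8.0, 8.0, 2001)
dc = c[1] - c[0]
fM = n0 * np.sqrt(m / (2.0 * np.pi * T)) * np.exp(-m * c ** 2 / (2.0 * T))

# a non-Maxwellian distribution normalized to the same density n0
f = fM * (1.0 + 0.2 * c * np.exp(-c ** 2))
f *= n0 / (np.sum(f) * dc)

J = -beta * nu * (f - fM) + 0.5 * zeta * np.gradient(c * f, dc)

# density production: the velocity integral of J vanishes
dn_dt = np.sum(J) * dc
print(dn_dt)  # ~ 0
```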
In the perturbed USF problem, Eqs.\ \eqref{3.28}--\eqref{3.30} still apply with the replacements
\begin{equation}
\label{4.2}
{\cal L}X\to \nu \beta X-\frac{\zeta^{(0)}}{2}
\frac{\partial}{\partial {\bf c}}\cdot \left({\bf c}X\right),
\end{equation}
in the case of $X_{n,i}$ and $X_{T,i}$ and
\begin{equation}
\label{4.3}
{\cal L}X_{ij}\to \nu \beta X_{ij}-\frac{\zeta^{(0)}}{2}
\frac{\partial}{\partial {\bf c}}\cdot \left({\bf c}X_{ij}\right)
-\frac{\zeta_{u,ij}}{2}\frac{\partial}{\partial {\bf c}}\cdot
\left({\bf c}f^{(0)}\right),
\end{equation}
in the case of $X_{u,ij}$. In the above equations, $\zeta^{(0)}$
is the zeroth-order approximation to $\zeta$ which is given by
Eq.\ (\ref{2.25}). With the changes (\ref{4.2}) and
(\ref{4.3}) all the generalized transport coefficients can be
easily evaluated from Eqs.\ \eqref{3.28}--\eqref{3.30}. Details of these calculations are given in Appendix
\ref{appC}.
\section{Shear-rate dependence of the generalized transport coefficients}
\label{sec6}
\begin{figure}
\includegraphics[width=0.7 \columnwidth,angle=0]{fig5.1.eps}
\caption{Shear-rate dependence of the (reduced) generalized transport coefficients $\eta_{xzxz}^*$ (a) and $\eta_{yzzx}^*$ (b) for a three-dimensional ($d=3$) granular suspension with two different values of the coefficient of restitution $\al$: $\al=1$ (solid lines) and $\al=0.8$ (dashed lines).
\label{fig5.1}}
\end{figure}
\begin{figure}
\includegraphics[width=0.7 \columnwidth,angle=0]{fig5.eps}
\caption{Shear-rate dependence of the (reduced) generalized transport coefficients $\eta_{yyxy}^*$ (a), $\eta_{xyxy}^*$ (b), and $\eta_{xxxy}^*$ (c) for a three-dimensional ($d=3$) granular suspension with two different values of the coefficient of restitution $\al$: $\al=1$ (solid lines) and $\al=0.8$ (dashed lines).
\label{fig5}}
\end{figure}
The general results derived in the previous sections clearly show that the dependence of the generalized transport coefficients on both $a^*$ and $\al$ is quite complex. Since the main goal of the present paper is to assess the shear-rate dependence of $\eta_{ijk\ell}$, $\kappa_{ij}$, and $\mu_{ij}$ for given values of $\al$, we illustrate here this dependence for some relevant elements of the above tensors for two different values of $\al$: $\al=1$ (ordinary suspensions) and $\al=0.8$ (granular suspensions). Moreover, a three-dimensional system ($d=3$) is considered in all the plots and hence, $a_\text{th}^*\simeq 0.512$ for $\al=0.8$.
To analyze the shear-rate dependence of the transport coefficients, it is convenient first to introduce the dimensionless coefficients $\eta_{ijk\ell}^*\equiv \eta_{ijk\ell}/\eta_0$, $\kappa_{ij}^*\equiv \kappa_{ij}/\kappa_0$, and
$\mu_{ij}^*\equiv n \mu_{ij}/T\kappa_0$. Here, $\eta_0=p/\nu$ and $\kappa_0=((d+2)/2)n T/(m \nu)$ are the elastic values of the shear viscosity and thermal conductivity coefficients, respectively, for a dilute gas given by the BGK kinetic model.
\subsection{Viscosity tensor}
The (reduced) elements of the viscosity tensor $\eta_{ijk\ell}^*$ are determined by solving the set of algebraic equations \eqref{c17}. There are in principle two classes of terms \cite{GS11}. Class I is made of those coefficients $\eta_{ijk\ell}^*$ with $(k,\ell)=\left\{(xx), (xy), (yx), (yy), (zz)\right\}$. The complementary class II is constituted by coefficients with $(k,\ell)=\left\{(xz), (yz), (zx), (zy)\right\}$. Of course, class II (as well as the elements $\eta_{ijzz}^*$ of class I) is meaningless in the two-dimensional case ($d=2$).
A careful analysis of the set of algebraic equations shows that the coefficients of the form $\eta_{xzk\ell}^*$ and $\eta_{yzk\ell}^*$ vanish in class I. In addition, the coefficients of the form $\eta_{xxk\ell}^*$ of class II include the first-order contribution to the cooling rate $\zeta_{u,ij}$. However, they obey a set of \emph{homogeneous} algebraic equations whose solution is the trivial one for arbitrary values of $a^*$. A similar behavior is expected for the coefficients of the form $\eta_{xyk\ell}^*$, $\eta_{yyk\ell}^*$, and $\eta_{zzk\ell}^*$. Thus, one can conclude that all the above elements of class II vanish.
The remaining elements of class II are independent of the derivatives $\partial_{a^*}P_{ij}^*$ and $\partial_{\gamma^*}P_{ij}^*$. Some of them are given by
\beq
\label{7.1}
\eta_{xzxz}^*=\eta_{yzyz}^*=\eta_{yzzy}^*=\frac{1+2\chi}{1-\widetilde{\gamma}+2\chi}\eta^*, \quad \eta_{yzxz}^*=0,
\eeq
\beq
\label{7.2}
\eta_{yzzx}^*=\frac{1+2\chi}{1-\widetilde{\gamma}+2\chi}\frac{P_{xy}^*}{P_{yy}^*}\eta^*,
\eeq
where the nonzero elements of the pressure tensor $P_{yy}^*$ and $P_{xy}^*$ are defined by Eqs.\ \eqref{2.27} and \eqref{2.28}, respectively, and the nonlinear shear viscosity $\eta^*$ is defined by Eq.\ \eqref{2.33}. The expressions of the remaining elements of class II can be obtained from Eqs.\ \eqref{c14} and \eqref{c17}. Their forms are very long and will be omitted here. Figure \ref{fig5.1} shows the shear-rate dependence of two elements of class II ($\eta_{xzxz}^*$ and $\eta_{yzzx}^*$) for $\al=1$ and 0.8. These two coefficients measure the presence of non-zero values of $P_{xz}$ and $P_{yz}$ due to perturbations of the form $\partial \delta U_x/\partial z$ and $\partial \delta U_z/\partial x$, respectively. It is quite apparent that, at a given value of $\al$, the largest impact of the shear rate on momentum transport occurs on $P_{xz}$. We also observe that $\eta_{xzxz}^*$ exhibits a shear-thinning effect more pronounced than that of the nonlinear shear viscosity $\eta^*$, as expected from Eq.\ \eqref{7.1}. In addition, the influence of collisional dissipation is very tiny in both generalized transport coefficients.
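For reference, the class II elements \eqref{7.1} and \eqref{7.2} are simple enough to be coded directly. In the sketch below the inputs ($\chi$, $\widetilde{\gamma}$, $\eta^*$, $P_{xy}^*$, $P_{yy}^*$) must be supplied from the USF solution; the values used here are placeholders serving only to check the limit $\widetilde{\gamma}\to 0$, where the prefactor reduces to unity and $\eta^*_{xzxz}\to\eta^*$:

```python
import numpy as np

def class_II_elements(chi, gamma_t, eta_star, Pxy_star, Pyy_star):
    """Class-II viscosity elements of Eqs. (7.1) and (7.2); gamma_t
    denotes gamma-tilde. All inputs must come from the USF solution."""
    fac = (1.0 + 2.0 * chi) / (1.0 - gamma_t + 2.0 * chi)
    eta_xzxz = fac * eta_star                 # = eta*_yzyz = eta*_yzzy
    eta_yzzx = fac * (Pxy_star / Pyy_star) * eta_star
    return eta_xzxz, eta_yzzx

# placeholder inputs; for gamma-tilde = 0, eta*_xzxz reduces to eta*
e1, e2 = class_II_elements(chi=0.5, gamma_t=0.0, eta_star=0.7,
                           Pxy_star=-0.4, Pyy_star=0.8)
print(e1, e2)
```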
Finally, the expressions for the non-zero elements of class I contain the derivatives $\partial_{a^*}P_{ij}^*$ and $\partial_{\gamma^*}P_{ij}^*$. Those expressions are much more involved than those of class II. In order to illustrate their shear-rate dependence, we consider here the set of coefficients $\left\{\eta_{xxxy}^*, \eta_{xyxy}^*, \eta_{yyxy}^*, \eta_{zzxy}^*\right\}$. Note that $\eta_{xxxy}^*=-(\eta_{yyxy}^*+\eta_{zzxy}^*)$. In addition, the algebraic equations defining those coefficients show that $\eta_{yyxy}^*=\eta_{zzxy}^*$. This result is a consequence of the linear version of Grad's moment method, which yields $P_{yy}^*=P_{zz}^*$. As said before, recent Monte Carlo simulations of granular suspensions \cite{ChVG15} have shown that the second normal stress difference is different from zero, although its value is very small. The shear-rate dependence of the elements $\eta_{ijxy}^*$ is plotted in Fig.\ \ref{fig5}. The coefficients $\eta_{xyxy}^*$ and $\eta_{yyxy}^*$ measure the deviations of $P_{xy}$ and $P_{yy}$, respectively, from their \emph{unperturbed} USF values due to perturbations of the form $\partial \delta U_x/\partial y$. While the coefficient $\eta_{xyxy}^*$ decreases in general with $a^*$ (except for high shear rates), the coefficient $\eta_{yyxy}^*$ clearly exhibits a non-monotonic shear-rate dependence, regardless of the value of the coefficient of restitution. It is also interesting to note that $\eta_{xxxy}^*$ is always negative.
\begin{figure}
\includegraphics[width=0.7 \columnwidth,angle=0]{fig3.eps}
\caption{Shear-rate dependence of the (reduced) diagonal element $\kappa_{zz}^*$ of the thermal conductivity tensor for a three-dimensional ($d=3$) granular suspension with two different values of the coefficient of restitution $\al$: $\al=1$ (solid line) and $\al=0.8$ (dashed line).
\label{fig3}}
\end{figure}
\begin{figure}
\includegraphics[width=0.7 \columnwidth,angle=0]{fig4.eps}
\caption{Shear-rate dependence of the (reduced) diagonal element $\mu_{zz}^*$ of the Dufour-like tensor for a three-dimensional ($d=3$) granular suspension with two different values of the coefficient of restitution $\al$: $\al=1$ (solid line) and $\al=0.8$ (dashed line).
\label{fig4}}
\end{figure}
\begin{figure}
\includegraphics[width=0.7 \columnwidth,angle=0]{fig6.eps}
\caption{Shear-rate dependence of the (reduced) off-diagonal element $-\kappa_{xy}^*$ of the thermal conductivity tensor for a three-dimensional ($d=3$) granular suspension with two different values of the coefficient of restitution $\al$: $\al=1$ (solid line) and $\al=0.8$ (dashed line).
\label{fig6}}
\end{figure}
\begin{figure}
\includegraphics[width=0.7 \columnwidth,angle=0]{fig7.eps}
\caption{Shear-rate dependence of the (reduced) off-diagonal element $-\mu_{xy}^*$ of the Dufour-like tensor for a three-dimensional ($d=3$) granular suspension with two different values of the coefficient of restitution $\al$: $\al=1$ (solid line) and $\al=0.8$ (dashed line).
\label{fig7}}
\end{figure}
\subsection{Thermal conductivity and Dufour-like tensors}
The evaluation of the heat flux transport coefficients $\Delta_{ij}\equiv \left\{\kappa_{ij}^*, \mu_{ij}^*\right\}$ is much more involved than that of the shear viscosity tensor $\eta_{ijk\ell}^*$. As Eqs.\ \eqref{c12} and \eqref{c13} show, in the steady state the set of transport coefficients $\Delta_{ij}$ also depends on the derivatives of the fourth-degree moments of the USF with respect to $\gamma^*$ and $a^*$. The evaluation of these derivatives is in general quite a tedious task that can be accomplished by following the steps devised in Appendix \ref{appA} \cite{code}.
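In practice, once the steady-state moments are available as functions of $a^*$ and $\gamma^*$ (for instance, from the algebraic equations worked out in Appendix \ref{appA}), the derivatives entering the transport coefficients can be approximated by central finite differences. The sketch below is illustrative only: \texttt{P\_xy} is a toy stand-in closure, not the actual solution of the Grad/BGK equations.

```python
import numpy as np

def central_diff(f, x, h=1e-6):
    """Second-order central difference f'(x) ~ [f(x+h) - f(x-h)] / (2h);
    the step h balances truncation against round-off error."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def P_xy(a_star, gamma_star=0.1):
    """Toy stand-in for a (reduced) USF moment P*_xy(a*, gamma*);
    the true moment solves the algebraic Grad/BGK equations."""
    return -a_star / (1.0 + gamma_star + a_star**2)

# derivative with respect to the reduced shear rate at a* = 0.5
dP_da = central_diff(P_xy, 0.5)
```

The same routine, applied to the true moments, yields the derivatives $\partial_{a^*}P_{ij}^*$ and (with the roles of the arguments exchanged) $\partial_{\gamma^*}P_{ij}^*$.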
As mentioned before, we have $\Delta_{xz}=\Delta_{zx}=\Delta_{yz}=\Delta_{zy}=0$ according to the linear shear flow \eqref{2.15}. Therefore, there are five nonzero elements of the (scaled) tensors $\Delta_{ij}$: the three diagonal elements ($\Delta_{xx}$, $\Delta_{yy}$, and $\Delta_{zz}$) and the two off-diagonal elements ($\Delta_{xy}$ and $\Delta_{yx}$). The algebraic equations \eqref{c15} and \eqref{c16} also show that the anisotropy induced by the shear flow yields the properties $\Delta_{xx}\neq \Delta_{yy}\neq \Delta_{zz}$ and $\Delta_{xy} \neq \Delta_{yx}$.
To illustrate the shear-rate dependence of the coefficients $\Delta_{ij}$, we consider here the elements $\Delta_{zz}\equiv \left\{\kappa_{zz}^*, \mu_{zz}^*\right\}$ and $\Delta_{xy}\equiv \left\{\kappa_{xy}^*, \mu_{xy}^*\right\}$. The first set of coefficients measures the heat flux along the direction orthogonal to the shearing plane. The second set of coefficients provides information on cross-effects in the thermal conduction since $\kappa_{xy}^*$ and $\mu_{xy}^*$ measure the transport of energy parallel to the flow direction due to a thermal gradient along the velocity gradient. Figures \ref{fig3}, \ref{fig4}, \ref{fig6}, and \ref{fig7} show the generalized coefficients $\kappa_{zz}^*$, $\mu_{zz}^*$, $\kappa_{xy}^*$, and $\mu_{xy}^*$ versus $a^*$ for $\al=1$ and 0.8. We observe first that the deviations of these coefficients with respect to their equilibrium values are significant, regardless of the collisional dissipation. This means that the impact of shear flow on heat transport is in general significant in a region of shear rates where shear thinning is quite important (see Fig.\ \ref{fig2}). Regarding the diagonal element $\kappa_{zz}^*$, it is quite apparent from Fig.\ \ref{fig3} that this coefficient decreases with $a^*$ in the region of shear rates considered. A similar behavior is found in Fig.\ \ref{fig4} for $\mu_{zz}^*$ when the collisions are inelastic ($\al \neq 1$). On the other hand, for elastic collisions, $\mu_{zz}^*$ first increases with $a^*$ for small shear rates and then it decreases with the shear rate. In any case, for elastic collisions, the magnitude of $\mu_{zz}^*$ is much smaller than that of $\kappa_{zz}^*$. Thus, for practical purposes, one can neglect the contribution to the heat flux coming from the term proportional to the density gradient when the collisions are elastic.
In accordance with the above results, we conclude that in general the shear flow inhibits the transport of energy along the direction orthogonal to the velocity gradient (vorticity direction). With respect to the influence of $\al$ on both generalized coefficients, it appears that the effect of inelasticity is more important in the case of $\mu_{zz}^*$ than in the case of $\kappa_{zz}^*$.
The absolute values of the off-diagonal elements $\kappa_{xy}^*$ and $\mu_{xy}^*$ are plotted in Figs.\ \ref{fig6} and \ref{fig7}, respectively. As said before, these coefficients measure cross effects in the energy transport. This cross coupling does not appear in the linear regime since the first-order contribution to the heat flux $q_x^{(1)}$ is at least of Burnett order (i.e., proportional to $a^* \partial_x T$). It is quite apparent that the element $\kappa_{xy}^*$ is negative and its magnitude presents a non-monotonic dependence on $a^*$ since it increases first with the shear rate (in the region of small shear rates), reaches a maximum and then decreases with increasing $a^*$. This behavior is much more evident in the case of elastic collisions. Regarding the coefficient $\mu_{xy}^*$, we observe that it is always negative for granular suspensions ($\al \neq 1$) and its magnitude is very small for elastic collisions. Recall that the coefficient $\mu_{ij}^*$ vanishes when $\al=1$ for vanishing shear rates. As in the case of the diagonal elements, the effect of inelasticity on heat transport is more noticeable for $\mu_{xy}^*$ than for $\kappa_{xy}^*$.
Finally, it is important to remark that the qualitative shear-rate dependence of $\kappa_{zz}^*$ and $\kappa_{xy}^*$ obtained here for elastic collisions (ordinary fluids) agrees with the one observed years ago by Daivis and Evans \cite{DE93} in molecular dynamics simulations of a thermostatted shear-flow state.
\section{Concluding remarks}
\label{sec7}
The influence of the gas phase on the transport properties of solid particles under USF has been studied in this paper. In the low-density regime, a viscous drag force term has been incorporated into the Boltzmann kinetic equation to account for the effect of the interstitial fluid on the dynamics of grains. The physical situation is such that the granular suspension is in a state that deviates from the USF by small spatial gradients. Since the system is subjected to a strong shear flow and is not restricted to nearly elastic spheres, the corresponding transport coefficients characterizing momentum and heat transport are nonlinear functions of both the shear rate and the coefficient of restitution. The explicit determination of the above coefficients has been the main objective of the present contribution. The search for such expressions has been prompted by previous results \cite{L06,G06} obtained for \emph{dry} granular gases (i.e., in the absence of the viscous drag force). Here, the problem is revisited by considering the effect of the gas phase on transport properties.
Assuming that the USF state is slightly perturbed, the Boltzmann equation \eqref{2.4} has been solved by means of a Chapman-Enskog-like expansion. The new feature of this expansion is that the (local) shear flow distribution is employed as the reference state instead of the usual (local) equilibrium distribution \cite{CC70} or the (local) homogeneous cooling state \cite{BDKS98,GD99}. As already noted in previous works \cite{L06,G06,GMT13,GChV13}, since the zeroth-order derivative $\partial_t^{(0)} T$ is in general different from zero, the reference base state is not stationary. This fact introduces technical difficulties in the implementation of the perturbation scheme. Thus, in order to get explicit results, the steady-state condition \eqref{2.17} is considered at the end of the calculations. In this state, the (reduced) shear rate $a^*$ and the coefficient of restitution $\al$ are coupled to the (scaled) friction coefficient $\gamma^*$, so that the former two are the relevant parameters of the problem.
To first order of the expansion, the momentum and heat fluxes are given by Eqs.\ \eqref{3.22} and \eqref{3.23}, respectively, where the generalized transport coefficients $\eta_{ijk\ell}$, $\kappa_{ij}$, and $\mu_{ij}$ are defined in terms of the solutions of the set of coupled integral equations \eqref{3.28}--\eqref{3.30}. However, since the solution of the above integral equations is in general quite a complex problem, the BGK-like kinetic model \eqref{4.1} has been employed to obtain the explicit shear-rate dependence of the above set of transport coefficients. Although the kinetic model \eqref{4.1} can be considered as a crude representation of the true Boltzmann equation, it gives the same results for the rheological properties as those derived from the Boltzmann equation by means of Grad's moment method. Given that those theoretical predictions compare quite well with Monte Carlo simulations \cite{ChVG15}, it is expected that the results provided by the kinetic model are accurate even for conditions of practical interest, such as strong dissipation and/or large shear rates.
As expected, there are many new transport coefficients in comparison to the case of states close to equilibrium (for ordinary gases) or states near the homogeneous cooling state (for dry granular gases). Here, for the sake of illustration, the shear-rate dependence of some relevant elements of the viscosity tensor, the thermal conductivity tensor, and the Dufour-like tensor has been studied. More specifically, Fig.\ \ref{fig5.1} shows the (reduced) elements $\eta_{xzxz}^*$ and $\eta_{yzzx}^*$, Fig.\ \ref{fig5} shows the elements $\eta_{ijxy}^*$, Figs.\ \ref{fig3} and \ref{fig4} show the diagonal elements $\kappa_{zz}^*$ and $\mu_{zz}^*$, respectively, and Figs.\ \ref{fig6} and \ref{fig7} show the off-diagonal elements $\kappa_{xy}^*$ and $\mu_{xy}^*$, respectively. It is apparent that in general the deviation of these coefficients from their equilibrium values (i.e., for $a^*=0$ and $\al=1$) is quite significant. In addition, the influence of collisional dissipation on transport is much more significant for the heat flux transport coefficients than for the coefficients associated with the pressure tensor.
\begin{figure}
\includegraphics[width=0.7 \columnwidth,angle=0]{fig8.eps}
\caption{Shear-rate dependence of the (reduced) element $\widetilde{\kappa}_{zz}\equiv \kappa_{zz}^*-\mu_{zz}^*$ for a three-dimensional ($d=3$) ordinary fluid ($\al=1$). The solid line corresponds to the results obtained here, the dashed line refers to the results derived from the Boltzmann equation in Ref.\ \cite{G95} for Maxwell molecules, and the dash--dotted line corresponds to the results obtained in Ref.\ \cite{G93} from the BGK equation for Maxwell molecules.
\label{fig8}}
\end{figure}
As said in the Introduction, for ordinary fluids ($\al=1$), the thermal conductivity tensor of a thermostatted shear-flow state was determined years ago from the BGK \cite{G93} and Boltzmann \cite{G95} kinetic equations. The physical situation corresponds to a perturbed \emph{steady} USF state with $\delta \mathbf{U}=\mathbf{0}$, $p=n T\equiv \text{const.}$ and $\nabla T \neq 0$. Under these conditions, one needs to add an external field that exactly compensates for the increase or decrease of momentum due to the term $\nabla \cdot \mathsf{P}$ \cite{GS03}. The addition of this external field affects the value of the thermal conductivity tensor and hence, the situation studied in Refs.\ \cite{G93,G95} slightly differs from the one analyzed in the present paper. On the other hand, in order to make a comparison with these previous results \cite{G93,G95}, one considers particular perturbations such that $\nabla p=0$ and so, $\nabla \ln n=-\nabla \ln T$. Therefore,
the heat flux \eqref{3.23} obeys the generalized Fourier's law
\beq
\label{6.1}
q_i^{(1)}=-\kappa_0 \widetilde{\kappa}_{ij} \partial_j T, \quad \widetilde{\kappa}_{ij}=\kappa_{ij}^*-\mu_{ij}^*.
\eeq
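The anisotropy encoded in \eqref{6.1} admits a quick numerical illustration. With a hypothetical set of tensor values (the actual shear-rate dependent elements are the ones discussed above), a thermal gradient purely along the gradient direction $y$ produces a heat-flux component along the flow direction $x$ through the off-diagonal element:

```python
import numpy as np

kappa0 = 1.0                     # thermal conductivity unit (illustrative)
# hypothetical values of the (reduced) tensor kt_ij = kappa*_ij - mu*_ij;
# note kt_xy != kt_yx: the shear flow breaks the symmetry of the tensor
kt = np.array([[1.2, -0.35],
               [-0.15, 0.8]])
grad_T = np.array([0.0, 1.0])    # thermal gradient along the y direction

q = -kappa0 * kt @ grad_T        # generalized Fourier law q_i = -k0 kt_ij d_j T
# q[0] != 0: a cross heat flux along the flow direction x appears even
# though the temperature gradient has no x component
```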
Figure \ref{fig8} shows the transport coefficient $\widetilde{\kappa}_{zz}\equiv \kappa_{zz}^*-\mu_{zz}^*$ versus $a^*$ for $\al=1$. We observe that the previous predictions made for thermostatted shear flow states from the BGK \cite{G93} and Boltzmann \cite{G95} equations compare qualitatively well with the results obtained here for arbitrary perturbations. However, at a more quantitative level, it seems that the impact of shear flow on energy transport is more significant in the situation analyzed in this paper than in those studied in Refs.\ \cite{G93,G95}.
\begin{figure}
\includegraphics[width=0.7 \columnwidth,angle=0]{fig9.eps}
\caption{Plot of the (reduced) elements $\kappa_{yy}^*$ (a) and $-\kappa_{xy}^*$ (b) as a function of the coefficient of restitution $\al$ for a two-dimensional dry granular gas ($\gamma^*=0$). The solid and dashed lines are the results derived in this paper and in Ref.\ \cite{SA14}, respectively.
\label{fig9}}
\end{figure}
In the case of dry granular gases ($\gamma^*=0$ but $\al\neq 1$), Saha and Alam \cite{SA14} have determined the heat flux of a two-dimensional granular gas under USF. The results were obtained by solving the Boltzmann equation by means of a perturbation expansion around an anisotropic Gaussian distribution. This distribution was employed years ago by Jenkins and Richman \cite{JR88} to obtain the rheological properties of USF via Grad's moment method. The corresponding constitutive relation for the heat flux derived in Ref.\ \cite{SA14} can be written as
\beq
\label{6.2}
q_i^{(1)}=-\kappa_{ij} \partial_j T-\Xi_{ij}\partial_j \Pi_{ij},
\eeq
where $\Pi_{ij}$ is the deviatoric or traceless part of the pressure tensor defined by Eq.\ \eqref{2.22}. In Eq.\ \eqref{6.2}, $\kappa_{ij}$ is identified as the thermal conductivity tensor and $\Xi_{ij}$ is a tensor quantifying the contribution to the heat flux coming from the gradient of the deviatoric stress $\Pi_{ij}$. As expected, the tensors $\kappa_{ij}$ and $\Xi_{ij}$ are nonlinear functions of the coefficient of restitution $\al$. At first sight, Eq.\ \eqref{6.2} disagrees with the constitutive relation \eqref{3.23} derived here for the heat flux. On the other hand, in an attempt to make a comparison with the theoretical results obtained in Ref.\ \cite{SA14} for the thermal conductivity tensor, Fig.\ \ref{fig9} shows the dependence of the (reduced) coefficients $\kappa_{yy}^*$ and $\kappa_{xy}^*$ on $\al$ for a dry two-dimensional granular gas. Notice that, in order to get analytical expressions, the theoretical results of Ref.\ \cite{SA14} plotted in Fig.\ \ref{fig9} were derived by considering terms up to super-Burnett order (i.e., third order in the shear rate). We observe that the $\al$-dependence of the diagonal element $\kappa_{yy}^*$ is qualitatively different from the one predicted in Ref.\ \cite{SA14}: while in the latter theory $\kappa_{yy}^*$ decreases with increasing inelasticity, the opposite happens here. Although better qualitative agreement is found for the magnitude of $\kappa_{xy}^*$, both theoretical results exhibit significant quantitative discrepancies for strong inelasticity. The differences between both theories at the level of the thermal conductivity tensor $\kappa_{ij}$ could be in part due to the different form of the constitutive relation for the heat flux derived in Ref.\ \cite{SA14}.
In addition, while the results of the latter work were obtained by solving the Boltzmann equation up to super-Burnett order, the theoretical predictions made in the present paper are based on an exact solution of the BGK-like kinetic model. It would be convenient to perform computer simulations for $\kappa_{ij}$ to check the reliability of the above theories for strong inelasticities.
The explicit results reported in this paper can be useful for studying different problems. First, as done in Ref.\ \cite{G06}, an important application is to perform a stability analysis of the hydrodynamic equations with respect to the USF state. This analysis will allow us to identify the conditions for stability in terms of both the shear rate and the coefficient of restitution. Another interesting and challenging problem is to extend the present results by considering the general Langevin-like model proposed in Ref.\ \cite{GTSH12}. This will allow us to provide additional refinements of the predictions obtained here so that a closer comparison with direct numerical simulations of granular suspensions could be performed. Finally, it would also be relevant to extend the analysis made here for a monodisperse granular suspension to the intriguing and important subject of polydisperse suspensions. A good starting point for this achievement could be the suspension model introduced in Ref.\ \cite{KG13}. Work along the above lines will be carried out in the near future.
\acknowledgments
I am grateful to Dr. Mariano L\'opez de Haro for a critical reading of the manuscript. The present research has been supported by the Spanish Government through grant No. FIS2016-76359-P, partially financed by FEDER funds and by the Junta de Extremadura (Spain) through Grant No. GR15104.
\section{Introduction}
Recently, many papers have studied the influence of the Robin condition
on the spectrum of the Laplacian. In planar domains, the papers
\cite{GS, LP, Pan} and references therein contain asymptotics of the
principal eigenvalue. The tunneling effect for planar domains with
corners is discussed in the paper \cite{HP}. In higher dimensions,
the low-lying eigenvalues are studied in \cite{PanP}, where the
effect of the boundary mean curvature is made precise. Trace
semi-classical asymptotics are obtained in \cite{FG}. In all the
aforementioned papers, there is no magnetic field and the function
in the boundary condition is assumed to be smooth. The new issue
addressed in this paper is that we include a magnetic field and we
do not assume smoothness of the boundary function in \eqref{eq:bc}
below. The discussion in this paper is limited to planar domains.
Extensions to higher dimensions do not seem trivial; \cite{N}
contains results for the Neumann condition in 3D domains.
Let $\Omega\subset\mathbb{R}^{2}$ be an open domain with a compact
boundary $\Gamma=\partial\Omega$ of class $C^3$. We suppose that the
boundary $\partial\Omega$ consists of a finite number of connected
components. The domain $\Omega$ is allowed to be an {\it interior}
or {\it exterior} domain. By smoothness of the boundary
$\partial\Omega$, we can define the unit outward normal vector $\nu$
of $\partial\Omega$.
The magnetic field is defined via a vector field (magnetic
potential). Let $A\in C^{2}(\overline\Omega;\mathbb{R}^{2})$. The magnetic
field is
\begin{equation}\label{eq:mf}
B:= {\rm curl}\, A\,.
\end{equation}
Consider a function $\gamma\in L^3(\partial\Omega)$, a number
$\alpha\geq 1/2$ and a parameter $h>0$. The parameter $h$ is called
the {\it semi-classical} parameter and we shall be concerned with the
asymptotic limit of various quantities when the semi-classical
parameter tends to $0$.
The self-adjoint magnetic Schr\"odinger operator
\begin{equation}\label{Shr-op-Gen}
\mathcal{P}^{\alpha,\gamma}_{h,\Omega}=(-ih\nabla+A)^{2},
\end{equation}
with a boundary condition of the third type (Robin condition)
\begin{equation}\label{eq:bc}
\nu\cdot(-ih\nabla+A)u+h^\alpha\gamma\,u=0\quad{\rm on}~\partial\Omega\,,
\end{equation}
can be defined by Friedrichs' theorem via the closed
semi-bounded quadratic form,
\begin{equation}\label{QF-Gen}
\mathcal{Q}^{\alpha,\gamma}_{h,\Omega}(u):=\norm{(-ih\nabla+A)u}^{2}_{L^{2}(\Omega)}+h^{1+\alpha}\Int{\partial\Omega}{}\gamma(x)|u(x)|^{2}dx\,.
\end{equation}
The assumption $\gamma\in L^3(\partial\Omega)$ ensures that the
quadratic form in \eqref{QF-Gen} is semi-bounded. Since this does
not follow in a straightforward manner, we will recall the main
points of the classical proof in the appendix.
As revealed by \eqref{eq:bc} and \eqref{QF-Gen}, the role of
the parameter $\alpha$ is to control the strength of the boundary
condition. Formally, we shall deal with the boundary term in
\eqref{QF-Gen} as a surface electric potential. This analogy is
already observed in \cite{FL}.
The quantity
\begin{equation}\label{eq:infb}
b=\inf_{x\in\overline\Omega}B(x)\,,
\end{equation}
is critical in the analysis of the spectrum of the operator
$\mathcal{P}^{\alpha,\gamma}_{h,\Omega}$. If the domain $\Omega$ is
an exterior domain, i.e. the complement of a bounded subset, then
the operator $\mathcal{P}^{\alpha,\gamma}_{h,\Omega}$ has an
essential spectrum. In this case, the spectrum below $bh$ is
discrete, see e.g. \cite{KP}. When the domain $\Omega$ is an
interior domain, i.e. bounded, then by Sobolev embedding, the
operator $\mathcal{P}^{\alpha,\gamma}_{h,\Omega}$ has compact
resolvent and its spectrum is purely discrete. If
$\sigma(\mathcal{P}^{\alpha,\gamma}_{h,\Omega})\cap(-\infty,bh)\not=\emptyset$,
then,
$$\sigma(\mathcal{P}^{\alpha,\gamma}_{h,\Omega})\cap(-\infty,bh)=\{e_1(h),e_2(h),\cdots\}\,,$$
where the terms of the sequence $(e_j(h))$ are eigenvalues of the
operator $\mathcal{P}^{\alpha,\gamma}_{h,\Omega}$ listed in
increasing order and by counting the multiplicity.
Let $\lambda\leq b$. According to the aforementioned discussion, we
can introduce the two quantities,
\begin{align}
&E(\lambda;h,\gamma,\alpha)={\rm tr}\Big(\mathcal{P}^{\alpha,\gamma}_{h,\Omega}-\lambda h\Big)_{-}=\sum_j\Big(e_j(h)-\lambda h\Big)_-\,,\label{eq:en}\\
&N(\lambda;h,\gamma,\alpha)={\rm tr}\Big(\mathbf 1_{(-\infty,\lambda h)}\left(\mathcal{P}^{\alpha,\gamma}_{h,\Omega}\right)\Big)\,.\label{eq:nb}
\end{align}
Notice that $E(\lambda;h,\gamma,\alpha)$ is the sum of the absolute
values of the {\it negative} eigenvalues of
$\mathcal{P}^{\alpha,\gamma}_{h,\Omega}-\lambda h$ counting
multiplicities, while the number of these eigenvalues is
$N(\lambda;h,\gamma,\alpha)$. In physics, $E(\lambda;h,\gamma,\alpha)$ can
be interpreted as the energy of non-interacting fermionic particles
in $\Omega$ at chemical potential $\lambda h$ \cite{FG}.
The Lieb-Thirring inequality will ensure that the sum
$E(\lambda;h,\gamma,\alpha)$ is finite for all $\lambda\leq b$. This
will be discussed further in Section~\ref{sec:lb}. Concerning the
number of eigenvalues, $N(\lambda;h,\gamma,\alpha)$ is finite for
all $\lambda<b$. Actually, this energy level is strictly lower than
the bottom of the essential spectrum. For exterior domains, we may
have that the eigenvalues accumulate near $bh$, i.e.
$N(b;h,\gamma,\alpha)=\infty$. In fact, it is proved that this
is the case when the magnetic field $B(x)$ is constant, see
\cite{CKP}.
The behavior of the two quantities in \eqref{eq:en} and
\eqref{eq:nb} in the semiclassical regime, i.e. when the
semiclassical parameter $h$ goes to $0$, is studied for the Neumann
problem in \cite{Fr} and \cite{Fo-Ka}. The Neumann problem
corresponds to $\gamma$ being identically $0$ in \eqref{eq:bc}.
When the magnetic field $B(x)=b$ is constant, then the results in
\cite{Fr} and \cite{Fo-Ka} assert that, if $h\to0_+$, then,
\begin{align}
&N(\lambda;h,\gamma=0,\alpha)= h^{-1/2}\,c_1(\lambda)+h^{-1/2}\,o(1)\,,\label{eq:nbFr}\\
&E(\lambda;h,\gamma=0,\alpha)=h^{1/2}\,c_2(\lambda)+h^{1/2}\,o(1)\,.\label{eq:nbFK}
\end{align}
The formula in \eqref{eq:nbFr} is valid for all $\lambda<b$ while
that in \eqref{eq:nbFK} is valid for all $\lambda\leq b$. It is
pointed out in \cite{Fo-Ka} that the formulas in \eqref{eq:nbFr} and
\eqref{eq:nbFK} are equivalent when $\lambda<b$.
The quantities $c_1(\lambda)$ and $c_2(\lambda)$ are defined by
explicit expressions involving spectral quantities for a harmonic
oscillator on the semi-axis. In \cite{Ka4}, an analogue of
\eqref{eq:nbFr} valid for a general function $\gamma\in
C^\infty(\partial\Omega)$ and constant magnetic fields is derived.
The key issue in \cite{Ka4} was the analysis of a modified harmonic
oscillator on the semi-axis and a standard approximation of the
function $\gamma$ by a constant. The smoothness of the function
$\gamma$ justifies the approximation of $\gamma$ by a constant
value as long as the approximation is done in a small domain.
In this paper, we aim to obtain analogues of \eqref{eq:nbFr} and
\eqref{eq:nbFK} under the relaxed assumptions that the magnetic
field is variable and the function $\gamma$ is no more smooth but
simply in $L^3(\partial\Omega)$. (This is the assumption needed to
define the self-adjoint operator in \eqref{Shr-op-Gen}). Also, we
add to the results of \cite{Ka4} by establishing a formula for
$E(\lambda;h,\gamma,\alpha)$ valid in the extended range
$\lambda\in(-\infty,b]$.
The approach we follow is by carrying out a reduction to a thin
boundary layer. This is easy to do. After localization in the thin
boundary layer, we localize in small sub-domains of the boundary
layer. In each small sub-domain, the operator is reduced to a one
defined with a constant magnetic field and a constant $\gamma$. The
reduced operator is defined in the half-plane. The reduction to a
constant magnetic field is quiet standard as in \cite{Fr} and
\cite{Fo-Ka}. The non-trivial point is to reduce to a constant
$\gamma$ since the smoothness of $\gamma$ is dropped. We do this by
dealing with $\gamma$ as being a {\it surface} electric potential.
With this point of view, we borrow the methods in \cite{LSY} that
allow to approximate a non-smooth electrical potential by a smooth
one, and then one passes from the smooth potential to the constant
potential in the standard manner. Many error terms arise here; these
are controlled by various Lieb-Thirring inequalities, notably the
ones in \cite{Er-So, Sob, LW} and a remarkable inequality obtained
in \cite{Fo-Ka} valid in the torus.
We proceed to the statement of the main result of this paper. We
will need some notation regarding a harmonic oscillator in the
semi-axis. For $(\gamma,\xi)\in \mathbb{R}^{2}$, we denote by
\begin{equation}\label{Op-h-g-x}
\mathfrak{h}[\gamma,\xi]= -\partial_{t}^{2}+(t-\xi)^{2}\quad {\rm in}\quad L^{2}(\mathbb{R}_{+}),
\end{equation}
the self-adjoint differential operator in $L^{2}(\mathbb{R}_+)$ associated
with the boundary condition $u^{\prime}(0)=\gamma u(0)$.
The increasing sequence of eigenvalues of $\mathfrak{h}[\gamma,\xi]$ is
$\{\mu_{j}(\gamma,\xi)\}_{j}$. By Sturm-Liouville theory, these
eigenvalues are known to be simple and smooth functions of $\gamma$
and $\xi$. These facts will be recalled precisely in a
separate section.
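The eigenvalues $\mu_j(\gamma,\xi)$ have no closed form, but they are easy to approximate numerically. The sketch below is one possible scheme, under the assumption that the half-line can be truncated at a moderate $t=L$ (the eigenfunctions decay): a P1 finite-element discretization of the quadratic form, which keeps the matrices symmetric and lets the Robin term enter only through the $(0,0)$ entry. For $\gamma=0$ and $\xi=0$ the operator reduces to the half harmonic oscillator with a Neumann condition, whose eigenvalues $1,5,9,\dots$ serve as a check.

```python
import numpy as np
from scipy.linalg import eigh

def robin_oscillator_eigs(gamma, xi, L=10.0, N=1000, k=3):
    """Lowest k eigenvalues mu_j(gamma, xi) of -d^2/dt^2 + (t - xi)^2
    on (0, L), with u'(0) = gamma*u(0) and u(L) = 0."""
    h = L / N
    t = h * np.arange(N)              # nodes t_0 = 0, ..., t_{N-1}; u(L) = 0 eliminated
    # stiffness matrix of int |u'|^2 dt
    K = (np.diag(np.full(N, 2.0)) - np.diag(np.ones(N - 1), 1)
         - np.diag(np.ones(N - 1), -1)) / h
    K[0, 0] = 1.0 / h                 # natural (free) end at t = 0
    # lumped mass matrix of int |u|^2 dt, half weight at the boundary node
    m = np.full(N, h)
    m[0] = h / 2.0
    K += np.diag(((t - xi) ** 2) * m) # potential term int (t - xi)^2 |u|^2 dt
    K[0, 0] += gamma                  # Robin boundary term gamma |u(0)|^2
    return eigh(K, np.diag(m), eigvals_only=True)[:k]
```

For instance, minimizing $\mu_1(0,\xi)$ over a grid of $\xi$ values recovers the de Gennes constant $\Theta_0\approx 0.59$ that governs the Neumann problem.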
In the following, $(x)_{-}=\max(-x,0)$ and $(x)_{+}=\max(x,0)$ denote the negative, respectively positive, part of a number $x\in\mathbb{R}$.
Our main result is
\begin{theorem}\label{thm:KN}
Suppose that the magnetic field satisfies,
$$b=\inf_{x\in\overline\Omega} B(x)>0\,.$$
Let $\lambda\leq b$, $\alpha\geq 1/2$ and $\gamma\in
L^3(\partial\Omega)$. There holds:
\begin{itemize}
\item If $\alpha>1/2$, then,
\[
\lim_{h\to0_+} \Big(h^{-1/2}\,E(\lambda;h,\gamma,\alpha)\Big)=\frac{1}{2\pi}\int_{\partial\Omega}\int_{\mathbb{R}}B(x)^{3/2}\Big(\mu_{1}(0,\xi)-\frac{\lambda}{B(x)}\Big)_{-}d\xi ds(x)\,.
\]
\item
If $\alpha=1/2$ and $\gamma\in L^\infty(\partial\Omega)$, then,
\begin{align*}
&\lim_{h\to0_+}
\Big(h^{-1/2}\,E(\lambda;h,\gamma,\alpha)\Big)\\
&\qquad\qquad=\frac{1}{2\pi}\sum_{p=1}^{\infty}\int_{\partial\Omega}\int_{\mathbb{R}}B(x)^{3/2}\Big(\mu_{p}\left(B(x)^{-1/2}\gamma(x),
\xi\right)-\frac{\lambda}{B(x)}\Big)_{-}d\xi ds(x).
\end{align*}
\end{itemize}
Here $ds(x)$ denotes integration with respect to arc-length along
the boundary $\partial\Omega$, and $E(\lambda;h,\gamma,\alpha)$ is
introduced in \eqref{eq:en}.
\end{theorem}
The results in Theorem~\ref{thm:KN} display the strength of the
boundary condition in \eqref{eq:bc}. We observe that the influence
of the Robin condition is not strong when $\alpha>\frac12$, since
the leading behavior of $E(\lambda;h,\gamma,\alpha)$ is essentially
the same as that for the Neumann condition (i.e. $\gamma=0$).
The sum
\begin{equation}\label{eq:sum}
\sum_{p=1}^{\infty}\int_{\mathbb{R}}\Big(\mu_{p}\left(\gamma,
\xi\right)-1\Big)_{-}d\xi\end{equation} is actually a sum of a
finite number of terms (for every fixed $\gamma$). The expression in
\eqref{eq:sum} is a continuous function of $\gamma$. This will be
proved in a separate section of this paper. Thus, we observe that
the terms appearing in Theorem~\ref{thm:KN} are well defined.
Due to the implicit nature of the quantity in \eqref{eq:sum}, it
seems hard to prove that the functional
$$\mathcal F(\gamma)=\sum_{p=1}^{\infty}\int_{\partial\Omega}\int_{\mathbb{R}}B(x)^{3/2}\Big(\mu_{p}\left(B(x)^{-1/2}\gamma(x),
\xi\right)-\frac{\lambda}{B(x)}\Big)_{-}d\xi ds(x)$$ is continuous
in $L^1(\partial\Omega)$. If this continuity is true, then the
result in Theorem~\ref{thm:KN} continues to hold under the relaxed
assumption that $\alpha=\frac12$ and $\gamma\in
L^3(\partial\Omega)$. This will be clear in the proof we give of
Theorem~\ref{thm:KN}.
The methods we use do not allow us to obtain versions of
Theorem~\ref{thm:KN} valid for $\alpha<\frac12$. In this specific
regime, the sign of the function $\gamma$ will play a significant
role, as one can observe from the results for the first eigenvalue in
\cite{Ka1}. The results in \cite{Ka1} suggest that the localization
to the boundary is very strong when $\alpha<\frac12$ and $\gamma$ is
negative. When $\alpha<\frac12$ and $\gamma>0$, then the effect of
the boundary is weak, and the situation is closer to the Dirichlet
boundary condition, for which the methods in \cite{CFFH} are
relevant.
Differentiation of the formulas in Theorem~\ref{thm:KN} with respect
to $\lambda h$ yields a formula for the number of eigenvalues. See
\cite{Fo-Ka, N} for a precise statement of this technique. The
formulas for the number of eigenvalues are collected in:
\begin{corollary}\label{cor:KN}
Let $\lambda<b$. Under the assumptions of Theorem~\ref{thm:KN},
there holds:
\begin{itemize}
\item If $\alpha>1/2$, then
\begin{equation}\label{FA}
\lim_{h\to0}\Big(h^{1/2}\, N(\lambda;h,\gamma,\alpha)\Big)=\frac{1}{2\pi}\iint_{
\{(x,\xi)\in
\partial\Omega \times\mathbb{R}~:~B(x)\mu_1(0,\xi)<\lambda\}} B(x)^{1/2}d\xi ds(x)\,.
\end{equation}
\item If $\alpha=1/2$, then
\begin{equation}\label{SA}
\begin{aligned}
&\lim_{h\to0}\Big(h^{1/2}\,
N(\lambda;h,\gamma,\alpha)\Big)\\
&\qquad=\frac{1}{2\pi}\sum_{p=1}^\infty
\iint_{
\{(x,\xi)\in\partial\Omega\times\mathbb{R}~:~B(x)\mu_p\left(B(x)^{-1/2}\gamma(x),\xi\right)<\lambda\}}
B(x)^{1/2}d\xi ds(x)\,.
\end{aligned}
\end{equation}
\end{itemize}
Here, $N(\lambda;h,\gamma,\alpha)$ is the number of eigenvalues
below $\lambda h$, introduced in \eqref{eq:nb}.
\end{corollary}
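The differentiation behind Corollary~\ref{cor:KN} rests on the elementary identity, valid for almost every $\lambda$,
\beq
\frac{\partial}{\partial \lambda}\Big(\mu_{p}\big(B(x)^{-1/2}\gamma(x),\xi\big)-\frac{\lambda}{B(x)}\Big)_{-}
=\frac{1}{B(x)}\,\mathbf 1_{\left\{B(x)\mu_{p}\left(B(x)^{-1/2}\gamma(x),\xi\right)<\lambda\right\}}\,,
\eeq
so that differentiating the right-hand sides of the formulas in Theorem~\ref{thm:KN} under the integral sign turns the factor $B(x)^{3/2}\big(\mu_p-\lambda/B(x)\big)_-$ into $B(x)^{1/2}\mathbf 1_{\{B(x)\mu_p<\lambda\}}$, which is precisely the integrand appearing in \eqref{FA} and \eqref{SA}.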
The proof of Corollary~\ref{cor:KN} is sketched below in Section~\ref{Sec:7}. We mention that a formula for the number of eigenvalues below the energy value
$\lambda=b$ is not available yet, even for the case of Neumann
boundary condition, i.e. $\gamma=0$. By way of illustration,
we include the following simple result in the case of Neumann
boundary condition and a square domain.
\begin{thm}\label{thm:SQ}
Suppose that the domain $\Omega$ is a square, the magnetic field is
constant, ${\rm curl}\, A=b$, and that $\gamma=0$ in \eqref{eq:bc}. As
$h\to 0_+$, there holds,
\begin{equation}\label{eq:nb-s}
\limsup_{h\to0_+}\Big(h\,N(bh)\Big)=\frac{b|\Omega|}{2\pi}\,.\end{equation}
Here,
$$N(bh)=N(b;h,\gamma=0,\alpha=1)$$
is as introduced in \eqref{eq:nb}.
\end{thm}
In \cite{KK}, it is proved that the formula for the energy in
Theorem~\ref{thm:KN} is still valid when the domain $\Omega$ is a
square and $\gamma=0$. This points to an interesting observation:
the energy
$$\sum_{j}(e_j(h)-bh)_-$$
is localized near the boundary, while the leading order expression
of the number of the eigenvalues below $bh$ is determined by the
bulk. The proofs we give of Theorems~\ref{thm:KN} and \ref{thm:SQ}
suggest that the eigenvalues strictly below $bh$ are
associated with eigenfunctions concentrated near the boundary. A
mathematically rigorous explanation of this point is still missing
in the literature. Helpful information might be obtained by
computing the second correction term in \eqref{eq:nb-s}, expected to
be a boundary term. Toward that end, the methods in \cite{CFFH} should
prove useful.
If one considers the Dirichlet realization of the operator
$P^D=(-ih\nabla+A)^2$, then the number $ N(bh)$ is equal to $0$. If
$bh$ is an eigenvalue of $P^D$, then the corresponding ground state
can be extended by $0$ to all of $\mathbb{R}^2$. The min-max principle
then yields that this extended function is an eigenfunction of the
Landau Hamiltonian in $\mathbb{R}^2$ with constant magnetic field $bh$. This
violates the description of the eigenfunctions of the lowest
eigenspace of the Landau Hamiltonian with a constant magnetic field,
since this space can not have compactly supported functions. That
way we see that the lowest eigenvalue of $P^D$ is strictly larger
than $bh$.
\begin{rem}
A key ingredient in the proof of Theorem~\ref{thm:SQ} is to compare
with a model Schr\"odinger operator with (magnetic) periodic
conditions. The advantage of this model operator is that its first
eigenvalue is known together with its multiplicity.
\end{rem}
\begin{rem}
We list some interesting open problems in connection with
Theorem~\ref{thm:SQ}:
\begin{itemize}
\item Investigating the asymptotics in Theorem~\ref{thm:SQ} for
general domains.
\item Determining whether the result in Theorem~\ref{thm:SQ} remains valid with
$\liminf$ replacing $\limsup$.
\item Studying the number $\mathsf n(bh)$ of eigenvalues of $P_{h,b,\Omega}$ in the interval
$(-\infty,bh)$. This question is related to the existence of a
non-zero function $u$ solving the problem:
$$P_{h,b,\Omega}u=bhu{\rm ~in~}\Omega\quad{\rm and}\quad
\nu\cdot(h\nabla-iA_{0})u=0{\rm ~on~}\partial\Omega\,.$$
\end{itemize}
\end{rem}
\section{Preliminaries}
\subsection{Variational principles}
In this section, we recall methods used in \cite{LSY} to establish
upper and lower bounds on sums of eigenvalues.
\begin{lem}{\label{lem-VP-2}}
Let $\mathcal{H}$ be a semi-bounded self-adjoint operator on $L^{2}(\mathbb{R}^{3})$ satisfying
\begin{equation}\label{hypI}
\inf{\rm Spec}_{\rm ess}(\mathcal{H})\geq 0\,.
\end{equation}
Let $\{\nu_{j}\}_{j=1}^{\infty}$ be the sequence of negative
eigenvalues of $\mathcal H$ counting multiplicities. We have,
\begin{equation}\label{eq-var-2}
-\sum_{j=1}^{\infty}(\nu_{j})_{-}
=\inf\Sum{j=1}{N}\big\langle \psi_{j},\mathcal{H}\psi_{j}\big\rangle,
\end{equation}
where the infimum is taken over all $N\in\mathbb{N}$ and all orthonormal families $\{\psi_{1},\psi_{2},\cdots,\psi_{N}\}\subset \mathcal{D}(\mathcal{H})$.
\end{lem}
The next lemma states another variational principle. It is used in several papers, e.g. \cite{LSY}.
\begin{lem}\label{lem-VP-3}
Let $\mathcal{H}$ be a self-adjoint semi-bounded operator satisfying the hypothesis \eqref{hypI}.
Suppose in addition that $(\mathcal{H})_{-}$ is trace class.
For any orthogonal projection $\gamma$ with range belonging to the domain of $\mathcal{H}$ and such that $\mathcal{H}\gamma$ is trace class, we have,
\begin{equation}\label{eq-var-3}
-\sum_{j=1}^{\infty}(\nu_{j})_{-} \leq {\rm tr}(\mathcal{H}\gamma)\,.
\end{equation}
\end{lem}
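For orientation, we note (this reformulation is not stated in \cite{LSY}, but is immediate) that equality in Lemma~\ref{lem-VP-3} is attained by the spectral projection: if $\gamma={\bf 1}_{(-\infty,0)}(\mathcal{H})$, assumed here to be of finite rank for simplicity, then
\[
{\rm tr}(\mathcal{H}\gamma)=\sum_{j=1}^{\infty}\nu_{j}=-\sum_{j=1}^{\infty}(\nu_{j})_{-}\,.
\]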
\subsection{Existence of discrete spectrum of $\mathcal{P}_{h,\Omega}^{\alpha,\gamma}$}\label{Ess-spec}
If the domain $\Omega$ is bounded, it follows from the compact
embedding of $\mathcal{D}(\mathcal{Q}^{\alpha,\gamma}_{h,\Omega})$
into $L^{2}(\Omega)$ that $\mathcal{P}_{h}$ has compact resolvent.
Hence the spectrum is purely discrete and consists of a sequence of
eigenvalues accumulating at infinity.
In the case of exterior domains, the operator ${\mathcal{P}}_{h}$
can have essential spectrum. In particular, we have the inequality
\begin{equation}\label{est-HM}
\int_{\Omega}|(-ih\nabla +{ A})u|^{2}dx\geq h\int_{\Omega}{B}(x)|u|^{2}dx, \quad \forall\,u\in C^{\infty}_{0}(\Omega).
\end{equation}
Using then a magnetic version of Persson's Lemma (see \cite{Bo1,Per}), we get that
\[
\inf{\rm Spec}_{\rm ess}\mathcal{P}_{h}\geq h b.
\]
This is the reason behind considering the sum of eigenvalues that are below $bh$.
\subsection{Lifting with respect to the dimension}
Let $d\in \mathbb{N}$, and let
\[
A(x)=(a_{1}(x),a_2(x),\cdots,a_{d+1}(x))^{T},
\]
be a magnetic vector potential with real-valued entries in $L^2_{\rm loc}(\mathbb{R}_{+}^{d+1})$.
We introduce the operator $H_{d}(\gamma)$ defined via the quadratic form
\begin{equation}
h_{d}(\gamma)[u]= \iint_{\mathbb{R}^{d+1}_{+}}|(-i\nabla+A)u(x)|^2dx-\int_{\mathbb{R}^d}{\gamma}(y)|u(y,0)|^2dy.
\end{equation}
Here and in the sequel $\mathbb{R}^{d}_{+}=\mathbb{R}^{d-1}\times \mathbb{R}_{+}$.
We are going to show the following theorem following a strategy used
in \cite[Theorem~3.2]{LW} to generalize a Lieb-Thirring type
inequality to the case with magnetic field.
\begin{theorem}\label{Lb-thm}
Let $d\geq 1$, $A\in L^{2}_{\rm
loc}(\overline{\mathbb{R}^{d+1}_{+}},\mathbb{R}^{d+1})$ and $\gamma\in
L^{2\alpha+d}(\mathbb{R}^{d})$. If $\alpha\geq 1/2$, then
\begin{equation}\label{LT-Gamma}
{\rm tr}[H_{d}(\gamma)]^{\alpha}_{-}\leq2 L_{\alpha,d}^{\rm cl} \int_{\mathbb{R}^{d}}\gamma_{+}^{2\alpha+d}dx,
\end{equation}
where $L_{\alpha,d}^{\rm cl}$ is defined by
\[
L_{\alpha,d}^{\rm cl}=\dfrac{\Gamma(\alpha+1)}{2^d \pi^{d/2}\Gamma(1+\alpha+d/2)}\,.
\]
\end{theorem}
\begin{proof}
We shall prove \eqref{LT-Gamma} by induction over $d$. Notice that the operator is well-defined for $d=0$ and $\gamma$ a real number. In this case we have $H_{0}(\gamma)=(-i\partial_{y}+a(y))^2$ with the boundary condition $u^{\prime}(0)=-\gamma u(0)$, and one can check that, for $\gamma>0$, this operator has exactly one negative eigenvalue, namely $-\gamma_{+}^2$, associated with the eigenfunction $e^{-i \int_{0}^{y} a(\tau)d\tau}e^{-{\gamma_{+}}y}$, while for $\gamma\leq 0$ it has no negative spectrum. Hence
\[
{\rm tr}_{L^2(\mathbb{R}_{+})}[H_0(\gamma)]^{\alpha}_{-}=(\gamma_{+}^2)^{\alpha}
\]
which is the analogue of \eqref{LT-Gamma} for $d=0$.
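For the reader's convenience, the base case can be verified directly once the potential $a$ is removed by the unitary factor $e^{-i\int_{0}^{y}a(\tau)d\tau}$, which reduces the problem to $a\equiv 0$. For $\gamma>0$, the function $u(y)=e^{-\gamma y}$ satisfies
\[
-u^{\prime\prime}(y)=-\gamma^{2}u(y)\quad{\rm in~}\mathbb{R}_{+}\,,\qquad u^{\prime}(0)=-\gamma u(0)\,,
\]
so $-\gamma_{+}^{2}=-\gamma^{2}$ is indeed an eigenvalue, while for $\gamma\leq 0$ the quadratic form is non-negative and both sides of the $d=0$ analogue of \eqref{LT-Gamma} vanish.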
Now fix $d\geq 1$ and suppose that the assertion is already proved for all smaller dimensions. We write $x=(x_1,x^{\prime})$ when $x_1\in\mathbb{R}$ and $x^{\prime}\in \mathbb{R}^{d-1}$ and note that
\[
H_{d}(\gamma)\geq (-i\partial_{x_1}+a_{1}(x))^2\otimes 1_{L^2(\mathbb{R}^{d}_{+})}-[H_{d-1}(\gamma(x_1,\cdot))]_{-}
\]
We now choose the gauge function
\[
\phi(x)=\int_{0}^{x_1}a_{1}(\tau,x_2,\cdots,x_{d+1})d\tau\,,
\]
and set $\widetilde{u}(x)=e^{-i\phi}u(x)$ for all $u\in \mathcal{D}(H_{d}(\gamma))$. Then
\[
\big\langle H_d(\gamma)u,u\big\rangle_{L^2(\mathbb{R}^{d+1}_{+})}\geq \int _{\mathbb{R}^{d+1}_{+}}|\partial_{x_1}\widetilde u|^2dx-\int_{\mathbb{R}}\big\langle e^{-i\phi}[H_{d-1}(\gamma(x_1,\cdot))]_{-}e^{i\phi}\widetilde u, \widetilde u\big\rangle_{L^2(\mathbb{R}^{d}_{+})}dx_{1}\,.
\]
So by the variational principle
\begin{equation*}
{\rm tr}_{L^2(\mathbb{R}^{d+1}_{+})}[H_d(\gamma)]^\alpha_{-}\leq {\rm tr}_{L^2(\mathbb{R})}\left[-\partial_{x_1}^2\otimes 1_{L^2(\mathbb{R}^{d}_{+})}-e^{-i\phi}[H_{d-1}(\gamma(x_1,\cdot))]_{-}e^{i\phi}\right]^{\alpha}_{-}
\end{equation*}
and the operator-valued Lieb-Thirring inequality \cite[Corollary~3.5]{HLW}, it follows that
\begin{multline}
{\rm tr}_{L^2(\mathbb{R})}\left[-\partial_{x_1}^2\otimes 1_{L^2(\mathbb{R}^{d}_{+})}-e^{-i\phi}[H_{d-1}(\gamma(x_1,\cdot))]_{-}e^{i\phi}\right]^{\alpha}_{-}\\
\leq 2L_{\alpha,1}^{\rm cl} \int_{\mathbb{R}}{\rm tr}_{L^2(\mathbb{R}^{d}_{+})}[H_{d-1}(\gamma(x_1,\cdot))]^{\alpha+1/2}_{-}dx_1.
\end{multline}
By induction hypothesis, the right hand side is bounded above by
\[
2L_{\alpha,1}^{\rm cl} L_{\alpha+1/2,d-1}^{\rm cl}\int_{\mathbb{R}}\int_{\mathbb{R}^{d-1}}\gamma_{+}^{d+2\alpha}dx^{\prime}dx_1=2 L_{\alpha,d}^{\rm cl}\int_{\mathbb{R}^{d}}\gamma_{+}^{d+2\alpha}dx,
\]
which establishes the assertion for dimension $d$ and completes the
proof of Theorem~\ref{Lb-thm}.
\end{proof}
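The last step uses the identity $L_{\alpha,1}^{\rm cl}\,L_{\alpha+1/2,d-1}^{\rm cl}=L_{\alpha,d}^{\rm cl}$, which can be checked directly from the definition of the semiclassical constants:
\[
L_{\alpha,1}^{\rm cl}\,L_{\alpha+1/2,d-1}^{\rm cl}
=\dfrac{\Gamma(\alpha+1)}{2\pi^{1/2}\,\Gamma(\alpha+\frac{3}{2})}\cdot
\dfrac{\Gamma(\alpha+\frac{3}{2})}{2^{d-1}\pi^{(d-1)/2}\,\Gamma(1+\alpha+\frac{d}{2})}
=\dfrac{\Gamma(\alpha+1)}{2^{d}\pi^{d/2}\,\Gamma(1+\alpha+\frac{d}{2})}
=L_{\alpha,d}^{\rm cl}\,.
\]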
\subsection{Rough energy bound for the cylinder} In this
section, we recall a remarkable inequality for the Schr\"{o}dinger
operator
\begin{equation}
\mathcal{P}_{h,{b},S,T}=(-ih\nabla+ {b}{\bf A}_{0})^{2}\qquad {\rm in}\qquad L^{2}\Big([0,S]\times(0,h^{1/2}T)\Big)\,.
\end{equation}
Here ${S}$, $T$ and $b$ are positive parameters. The magnetic
potential ${\bf A}_{0}$ is
$${\bf A}_0(s,t)=(-t,0)\,.$$
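Note, as a direct check, that ${\bf A}_{0}$ generates a constant unit magnetic field, since
\[
\curl {\bf A}_{0}=\partial_{s}({\bf A}_{0})_{2}-\partial_{t}({\bf A}_{0})_{1}=0-(-1)=1\,,
\]
so that the potential $b\,{\bf A}_{0}$ corresponds to the constant magnetic field $b$.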
Functions in the domain of the
operator $\mathcal{P}_{h,{b},S,T}$ satisfy the periodic condition
\[
u(0,\cdot)= u(S,\cdot)\qquad {\rm on}\quad (0,h^{1/2}T),
\]
the Neumann condition at $t=0$,
$$\partial_{t}u=0 \quad{\rm on }\quad t=0\,,$$
and the Dirichlet condition at $t=h^{1/2}T$.
In this particular case of a bounded domain, the operator has
compact resolvent and the spectrum consists of an increasing
sequence of eigenvalues $(e_{j})_{j\geq 1}$ tending to $+\infty$. We
define the associated energy as follows,
\begin{equation}\label{energy-Gen}
\mathcal{E}(\lambda,{b},S,T)=\Sum{j}{}\big(h{b}(1+\lambda)-e_{j}\big)_{+}\,.
\end{equation}
In \cite{Fo-Ka}, the energy in \eqref{energy-Gen} is controlled by
the product $ST$. We recall this estimate in the next lemma.
\begin{lem}\label{energy}
There exist positive constants $T_{0}$ and $\lambda_{0}$ such that, for
all $S>0$, $b>0$, $T\geq \sqrt{b}T_{0}$ and $\lambda\in(0,\lambda_{0})$, we
have,
\[
\mathcal{E}(\lambda,b,S,T)\leq C(1+\lambda)hb\left(\dfrac{ST}{\pi h}+1\right).
\]
\end{lem}
\subsection{Boundary coordinates}\label{Sec:BC}
The aim of this section is to define a new system of coordinates
near the boundary which allows us to approximate the magnetic
potential locally near the boundary by a new one corresponding to a
constant magnetic field. These coordinates are used in \cite{HM}.
Let $\Omega$ be a smooth, simply connected domain in $\mathbb{R}^{2}$.
Suppose that the boundary $\partial\Omega$ is $C^{4}$-smooth. Let
furthermore,
\[
\mathbb{R}/(|\partial \Omega |\mathbb{Z})\ni s\mapsto M(s)\in\partial\Omega
\]
be a parametrization of $\partial\Omega$. The unit tangent vector of $\partial\Omega$ at the point $M(s)$ of the boundary is given by
\[
T(s):= M^{\prime}(s).
\]
We define the scalar curvature $k(s)$ by the following identity
\[
T^{\prime}(s)=k(s)\nu(s),
\]
where $\nu(s)$ is the unit vector, normal to the boundary, pointing outward at the point $M(s)$.
We choose the orientation of the parametrization $M$ to be counterclockwise, so
\[
\det(T(s),\nu(s))=1, \qquad \forall s\in \mathbb{R}/(|\partial \Omega |\mathbb{Z}) .
\]
For all $\delta>0$, we define
\[
\mathcal{V}_{\delta}= \{x\in\Omega~:~{\rm dist} (x,\partial\Omega)<\delta\}\,.
\]
Let $t_0>0$. The map $\Phi=\Phi_{t_0}$ is defined as follows~:
\begin{equation}
\Phi:\mathbb{R}/(|\partial \Omega |\mathbb{Z})\times(0,t_{0})\ni (s,t)\mapsto x= M(s)-t\nu(s)\in \mathcal{V}_{t_{0}}.
\end{equation}
By smoothness of the boundary $\partial\Omega$, we may select $t_0$
sufficiently small so that $\Phi$ is invertible. Thus, for all
$x\in\mathcal{V}_{t_{0}}$, one can write
\begin{equation}\label{BC}
x\mapsto \Phi^{-1}(x):=(s(x),t(x))\in \mathbb{R}/(|\partial \Omega |\mathbb{Z})\times (0,t_{0}),
\end{equation}
where $t(x)={\rm dist}(x,\partial\Omega)$ and $s(x)\in\mathbb{R}/(|\partial \Omega |\mathbb{Z})$ is associated with the point $M(s(x))\in\partial\Omega$ such that ${\rm dist}(x,\partial\Omega)= |x-M(s(x))|$.
The determinant of the Jacobian of the transformation $\Phi$ is
\[
a(s,t)=1-tk(s).
\]
For all $u\in L^{2}(\mathcal V_{t_0})$, we define the function
\begin{equation}\label{tilde}
\widetilde u(s,t):= u(\Phi(s,t)).
\end{equation}
If $A=(A_{1},A_{2})$ is a vector field in $\mathcal{V}_{t_{0}}$, we
define the associated vector potential in the $(s,t)$-coordinates by
\begin{equation}
\begin{aligned}\label{mf-nc}
\widetilde A_{1}(s,t)&=(1-tk(s))\, A(\Phi(s,t))\cdot M^{\prime}(s),\\
\widetilde A_{2}(s,t)&= A(\Phi(s,t))\cdot\nu(s).
\end{aligned}
\end{equation}
The new magnetic potential $\widetilde A$ satisfies,
\begin{equation}\label{Mf-dim2}
\Big[\dfrac{\partial \widetilde A_{2}}{\partial s}(s,t)- \dfrac{\partial \widetilde A_{1}}{\partial t}(s,t)\Big]ds\wedge dt= (1-tk(s))\widetilde B(s,t)\,ds\wedge dt\,,\qquad \widetilde B(s,t):=B(\Phi(s,t))\,.
\end{equation}
For all $u\in H^{1}_{A}(\mathcal{V}_{t_{0}})$, we have, with $\widetilde
u=u\circ \Phi$,
\begin{equation}
\int_{\mathcal{V}_{t_{0}}}|(-i\nabla +A)u|^{2}dx= \int_{0}^{|\partial\Omega|}\int_{0}^{t_0}\Big
[(1-tk(s))^{-2}|(-i\partial_{s}+\widetilde A_{1})\widetilde u|^{2} +
|(-i\partial_{t}+\widetilde A_{2})\widetilde
u|^{2}\Big](1-tk(s))dsdt\,,
\end{equation}
and
\begin{equation}\label{unw}
\int_{\mathcal{V}_{t_{0}}}|u|^{2}dx= \int_{0}^{|\partial\Omega|}\int_{0}^{t_0}|\widetilde u(s,t)|^{2}(1-tk(s))dsdt.
\end{equation}
In the next proposition, a gauge transformation is constructed
such that the magnetic potential in the new coordinates can be
approximated, up to a small error, by one corresponding to a
constant magnetic field. The proof is given in
\cite[Appendix~F]{FH-b}.
\begin{proposition}\label{prop:gauge}
Let $A \in C^{2}(\overline{\Omega},\mathbb{R}^{2})$. There exists a constant $C>0$ such that, for all $S\in(0,|\partial\Omega|)$ and $S_{0}\in[0,S]$, there exists a gauge function $\phi \in C^{2}([0,S]\times[0,t_0])$ such that $\overline{A}:=\widetilde{A}(s,t)-\nabla_{(s,t)} \phi$, with $\widetilde{A}$ as defined in \eqref{mf-nc}, satisfies
\begin{equation}\label{Gg-dim2}
\overline{A}(s,t)=\begin{pmatrix}
\overline{A}_{1}(s,t)\\
\overline{A}_{2}(s,t)
\end{pmatrix} = \begin{pmatrix}
- B_{0}t+ \beta(s,t)\\
0\\
\end{pmatrix}, \qquad (s,t)\in[0,S]\times [0,t_{0}],
\end{equation}
where $ B_{0}:=\widetilde B (S_{0},0) $ and for any $0< T\leq t_{0}$, we have
\begin{equation}\label{bnd-beta}
\sup_{(s,t)\in[0,S]\times[0,T]}|\beta(s,t)|\leq C(S^{2}+T^{2}).
\end{equation}
\end{proposition}
We shall frequently make use of the following standard lemma, taken from~\cite[Lemma~3.5]{Fr}.
\begin{lem}\label{Lem-apqf}
There exists a constant $C>0$ and for all $S_{1}\in[0,|\partial\Omega|)$, $S_2\in (S_1,|\partial\Omega|)$, there exists a function $\phi\in C^2([S_1,S_2]\times[0,t_0];\mathbb{R})$ such that, for all
\[
S_0\in[S_1,S_2],\quad \mathcal{T}\in(0,t_0),\quad \varepsilon\in [C\mathcal{T},Ct_0],
\]
and for all $u\in H^{1}_{A}(\Omega)$ satisfying
\[
{\rm supp}~\widetilde{u}\subset [S_1,S_2]\times[0,\mathcal{T}],
\]
one has the following estimate,
\begin{multline}
\left| \int_{\Omega}|(-ih\nabla+A)u|^2dx-\int_{\mathbb{R}^2_{+}}|(-ih\nabla+\widetilde{B}{\bf A}_{0})e^{i\phi/h}\widetilde u|^2 dsdt \right|\\
\leq \int_{\mathbb{R}^2_{+}}\Big( \varepsilon |(-ih\nabla+\widetilde{B}{\bf A}_{0})e^{i\phi/h}\widetilde u|^2+C\varepsilon^{-1} (S^2+\mathcal{T}^2)^2|\widetilde u|^2 \Big)dsdt.
\end{multline}
Here, $\mathbb{R}^2_{+}=\mathbb{R}\times\mathbb{R}_{+}$, $S=S_2-S_1$, $\widetilde{B}=\widetilde{B}(S_0,0)$, and the function $\widetilde{u}$ associated with $u$ via \eqref{tilde} is extended by $0$ to $\mathbb{R}^2_{+}\setminus {\rm supp}~\widetilde{u}$.
\end{lem}
\section{A family of one-dimensional differential operators}
We are concerned in this section with the analysis of a family of ordinary differential operators with Robin boundary condition. For $\xi\in\mathbb{R}$, we consider the operator $\mathfrak{h}[\gamma,\xi]$ in $L^{2}(\mathbb{R}_{+})$ defined by
\begin{equation}
\mathfrak{h}[\gamma,\xi]:= -\dfrac{d^{2}}{dt^{2}}+(t-\xi)^{2},\qquad \mathcal{D}(\mathfrak{h}[\gamma,\xi])= \{u\in B^{2}(\mathbb{R}_{+})~:~u^{\prime}(0)=\gamma u(0)\}.
\end{equation}
Here, for a given $k\in\mathbb{N}$, the space $B^{k}(\mathbb{R}_{+})$ is defined as~:
\begin{equation}\label{spaceBk}
B^{k}(\mathbb{R}_{+})= \{u\in L^{2}(\mathbb{R}_{+})~:~ t^{p}u^{(q)}(t)\in L^{2}(\mathbb{R}_{+}),\quad \forall p,q \quad{\rm s.t.}\quad p+q\leq k\},
\end{equation}
where $u^{(q)}$ denote the distributional derivative of order $q$ of $u$.
The operator $\mathfrak{h}[\gamma,\xi]$ is associated with the closed quadratic form
\begin{equation}
B^{1}(\mathbb{R}_{+})\ni u\mapsto \mathfrak{q}[\gamma,\xi](u):= \int_{0}^{\infty}\big(|u^{\prime}(t)|^{2}+|(t-\xi)u(t)|^{2}\big)dt+\gamma|u(0)|^{2},
\end{equation}
where $B^{1}(\mathbb{R}_{+})$ is defined in \eqref{spaceBk}.
It is easy to see that $\mathfrak{h}[\gamma,\xi]$ has compact resolvent since the embedding of $B^{1}(\mathbb{R}_{+})$ into $L^{2}(\mathbb{R}_{+})$ is compact. Hence the spectrum of $\mathfrak{h}[\gamma,\xi]$ is purely discrete and consists of an increasing sequence of eigenvalues $\{\mu_{j}(\gamma,\xi)\}_{j=1}^{\infty}$.
The lowest eigenvalue of $\mathfrak{h}[\gamma,\xi]$ is defined via the min-max principle by~:
\[
\mu_{1}(\gamma,\xi)=\inf_{u\in B^{1}(\mathbb{R}_{+}),\, u\neq 0}\dfrac{\mathfrak{q}[\gamma,\xi](u)}{\norm{u}^{2}_{L^{2}(\mathbb{R}_{+})}}.
\]
It follows from standard Sturm-Liouville theory that all the
eigenvalues $\mu_j(\gamma,\xi)$ are simple and that the ground state
associated with $\mu_1(\gamma,\xi)$ can be chosen positive. Details are given in \cite{DaHe}.
We define the functions~:
\[
\Theta(\gamma):=\inf_{\xi\in\mathbb{R}}\mu_{1}(\gamma,\xi)\,,
\]
and
\[
\Theta_{j}(\gamma):=\inf_{\xi\in\mathbb{R}}\mu_{j}(\gamma,\xi) \quad(j\geq2).
\]
When $\gamma=0$, we shall write,
\begin{align}
\mathfrak{h}[\xi]:=\mathfrak{h}[0,\xi],\quad& \mu_{j}(\xi):=\mu_{j}(0,\xi),\quad\forall j\in\mathbb{N}\,,\\
\Theta_{0}:=\Theta(0),\quad&\xi_{0}:=\text{the unique minimizer of }\xi\mapsto\mu_{1}(\xi)\,.
\end{align}
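We recall, for orientation, two classical facts about the constant $\Theta_{0}$, often called the de~Gennes constant (see e.g. \cite{DaHe}):
\[
\tfrac{1}{2}<\Theta_{0}<1\qquad{\rm and}\qquad \Theta_{0}=\xi_{0}^{2}\,,
\]
the second identity being a consequence of the Feynman-Hellmann relation $\partial_{\xi}\mu_{1}(\xi)\big|_{\xi=\xi_{0}}=0$; numerically, $\Theta_{0}\approx 0.59$.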
The result in the next lemma is proved in \cite{Fr}.
\begin{lem}\label{mu2}
For all $\xi\in\mathbb{R}$, we have
$$\mu_{2}(\xi)>1\,.$$
\end{lem}
Next we collect results proved in \cite{Ka4}.
\begin{lem}\label{lem:Ka}
The following statements hold true.
\begin{enumerate}
\item For all $\gamma\in\mathbb{R}$, we have,
$$\Theta_{2}(\gamma)>\Theta(\gamma).$$
\item For every $j\in\mathbb N$, the function $\xi\mapsto\mu_{j}(\gamma,\xi)$ is continuous
and satisfies
\begin{enumerate}
\item $\displaystyle\lim_{\xi\rightarrow-\infty}\mu_{j}(\gamma,\xi)=\infty$\,;
\item$\displaystyle\lim_{\xi\rightarrow\infty}\mu_{j}(\gamma,\xi)=2j+1$\,.
\end{enumerate}
\item Let $\gamma\in(-\infty,0)$ and $j\in\mathbb{N}$. Then
$\Theta_{j}(\gamma)<2j+1$ and for all
$b_{0}\in(\Theta_{j}(\gamma),2j+1)$, the equation
$\mu_{j}(\gamma,\xi)=b_{0}$ has exactly two solutions
$\xi_{j,-}(\gamma,b_{0})$ and $\xi_{j,+}(\gamma,b_{0})$.
Moreover,
\[
\{\xi\in\mathbb{R}~:~\mu_{j}(\gamma,\xi)<b_{0}\}=(\xi_{j,-}(\gamma,b_{0}),\xi_{j,+}(\gamma,b_{0}))\,.
\]
\item Let \[
U_{j}=\{(\gamma,b)\in\mathbb{R}^{2}~:~\Theta_{j}(\gamma)<b<2j+1\}\,.
\]
The functions
\[
U_{j}\ni(\gamma,b)\mapsto \xi_{j,\pm}(\gamma,b)
\]
admit continuous extensions
\[
\mathbb{R}\times(-\infty,2j+1)\ni(\gamma,b)\mapsto \overline\xi_{j,\pm}(\gamma,b).
\]
\end{enumerate}
\end{lem}
For later use, we include the following lemma.
\begin{lem}\label{Lem:|u0|^2}
Let $\gamma\in\mathbb{R}$, and let $u_{j,\gamma}(\cdot;\xi)$ be the
normalized eigenfunction associated to the eigenvalue
$\mu_{j}(\gamma,\xi)$. It holds true that
\[
|u_{j,\gamma}(0;\xi)|^2\leq C(\mu_{j}(\gamma,\xi)+(\gamma^2+1)).
\]
\begin{proof}
Due to the density of $C^{\infty}_{0}(\overline{\mathbb{R}_{+}})$ in $H^{1}(\mathbb{R}_{+})$, we have for any function $u\in H^{1}(\mathbb{R}_{+})$,
\begin{equation}
|u(0)|^2=-2\,{\rm Re}\int_{0}^{\infty}u^{\prime}(\eta)\overline{u(\eta)}\,d\eta.
\end{equation}
The Cauchy-Schwarz inequality gives that, for any $\alpha>0$,
\begin{equation}\label{CSh}
|u(0)|^2\leq 2\|u^{\prime}\|_{L^2(\mathbb{R}_{+})}\|u\|_{L^2(\mathbb{R}_{+})}\leq \alpha\|u^{\prime}\|_{L^2(\mathbb{R}_{+})}^2+\alpha^{-1}\|u\|_{L^2(\mathbb{R}_{+})}^2.
\end{equation}
Assume $\gamma<0$ and choose $\alpha=-1/(2\gamma)$; it follows that
\begin{equation}\label{Bdry-term}
\gamma|u(0)|^2\geq -\frac{1}{2}\|u^{\prime}\|^2-2\gamma^2\norm{u}^2.
\end{equation}
Notice that for $u:=u_{j,\gamma}(\cdot,\xi)$, we have
\[
\|u_{j,\gamma}^{\prime}\|^2+\norm{(t-\xi)u_{j,\gamma}}^2+\gamma|u_{j,\gamma}(0;\xi)|^2=\mu_{j}(\gamma,\xi).
\]
Combining this identity with \eqref{Bdry-term} applied to $u:=u_{j,\gamma}(\cdot,\xi)$, and dropping the non-negative term $\norm{(t-\xi)u_{j,\gamma}}^2$, we obtain
\begin{equation}\label{Eq-mu-uprime}
\mu_{j}(\gamma,\xi)\geq \frac{1}{2}\|u_{j,\gamma}^{\prime}\|^2-2\gamma^2.
\end{equation}
Note also that the inequality in \eqref{Eq-mu-uprime} is evidently
true for $\gamma\geq0$.
We infer from \eqref{CSh}, applied with $\alpha=2$ and $\norm{u_{j,\gamma}}=1$, that
\[
|u_{j,\gamma}(0;\xi)|^2\leq 2\|u^{\prime}_{j,\gamma}\|^2 +2.
\]
Now we use the inequality in \eqref{Eq-mu-uprime} and the assumption
that $u_{j,\gamma}$ is normalized in $L^2$ to deduce
\[
|u_{j,\gamma}(0;\xi)|^2\leq 4\mu_{j}(\gamma,\xi) +(8\gamma^2+2)\,.
\]
\end{proof}
\end{lem}
In the next lemma, using the analysis in \cite[Theorem~2.6.2]{Ka},
we establish uniform decay estimates on the eigenfunctions
$u_{j,\gamma}$.
\begin{lem}
Let $\epsilon\in(0,1)$ and $K>0$. There exists a constant
$C_{\epsilon,K}>0$ such that, if $|\xi|\leq K$ and
$\mu_{j}(\gamma,\xi)\leq 1$, then,
\begin{equation}\label{Decay}
\norm{e^{\epsilon(t-\xi)^2/2}u_{j,\gamma}(\cdot,\xi)}_{H^{1}(\{t\in\mathbb{R}_{+}~:~t-\xi\geq C_{\epsilon,K}\})}\leq C_{\epsilon,K}(1+\gamma_{-}+\gamma_{-}^2).
\end{equation}
\end{lem}
\begin{proof}
Let $\Phi:\mathbb{R}_{+}\to\mathbb{R}$ be a Lipschitz function in $\mathbb{R}_{+}$ such that
$\Phi'$ is compactly supported and $e^\Phi u_{j,\gamma}\in
L^2(\mathbb{R}_+)$. For all $\phi\in \mathcal{D}(\mathfrak{h}[\gamma,\xi])$,
we have the following identity:
\[
\langle\mathfrak{h}[\gamma,\xi]\phi,e^{\Phi}\phi\rangle_{L^2(\mathbb{R}_{+})}=\norm{(e^\Phi\phi)^\prime}^2_{L^2(\mathbb{R}_{+})}+\norm{(t-\xi)e^\Phi\phi}^2_{L^2(\mathbb{R}_{+})}+\gamma|e^{\Phi(0)}\phi(0)|^2-\norm{\Phi^{\prime} e^\Phi\phi}_{L^2(\mathbb{R}_{+})}^2.
\]
Substituting $\phi=u_{j,\gamma}(\cdot;\xi)$, we obtain
\begin{multline}\label{Eq:Dec1}
\norm{(e^\Phi u_{j,\gamma}(\cdot;\xi))^\prime}_{L^2(\mathbb{R}_{+})}^2+\norm{(t-\xi)e^\Phi u_{j,\gamma}(\cdot;\xi)}_{L^2(\mathbb{R}_{+})}^2+ \gamma|e^{\Phi(0)}u_{j,\gamma}(0;\xi) |^2\\
=\mu_{j}(\gamma,\xi)\norm{e^{\Phi}u_{j,\gamma}(\cdot;\xi)}_{L^2(\mathbb{R}_{+})}^2+\norm{\Phi^{\prime} e^\Phi u_{j,\gamma}(\cdot;\xi)}_{L^2(\mathbb{R}_{+})}^2.
\end{multline}
Using \eqref{CSh} and \eqref{Eq-mu-uprime} together with $\mu_{j}(\gamma,\xi)\leq 1$ and $\norm{u_{j,\gamma}}=1$, we deduce that
\[
|u_{j,\gamma}(0;\xi)|^2\leq 2\|u_{j,\gamma}^{\prime}\|\leq 2\sqrt{2+4\gamma_{-}^{2}}\leq 8\sqrt{1+\gamma_{-}^2}\,.
\]
Let us observe that
\[
\gamma|u_{j,\gamma}(0;\xi)|^2\geq -8\gamma_{-}\sqrt{1+\gamma_{-}^2}\,.
\]
Inserting this into \eqref{Eq:Dec1}, and again using that $\mu_{j}(\gamma,\xi)\leq 1$, it follows that
\begin{multline}\label{Eq:Dec2}
\norm{(e^\Phi u_{j,\gamma}(\cdot;\xi))^\prime}_{L^2(\mathbb{R}_{+})}^2+\norm{(t-\xi)e^\Phi u_{j,\gamma}(\cdot;\xi)}_{L^2(\mathbb{R}_{+})}^2\\
\leq\norm{e^{\Phi}u_{j,\gamma}(\cdot;\xi)}_{L^2(\mathbb{R}_{+})}^2+\norm{\Phi^{\prime} e^\Phi u_{j,\gamma}(\cdot;\xi)}_{L^2(\mathbb{R}_{+})}^2+ 8\gamma_{-}\sqrt{1+\gamma_{-}^2}e^{2\Phi(0)}.
\end{multline}
Let $N\in\mathbb N$ be sufficiently large. We choose the function
$\Phi$ to be
\[
\Phi:=\Phi_N=\left\{
\begin{array}{ll}
\epsilon \dfrac{(t-\xi)^2}{2}&{\rm if ~}t-\xi< N\,,\\
\epsilon \dfrac{N^2}{2}&{\rm if~}t-\xi\geq N \,.\end{array}
\right.\]
Inserting this choice of $\Phi$, for which $\Phi_{N}^{\prime}(t)=\epsilon(t-\xi){\bf 1}_{\{t-\xi<N\}}$, into \eqref{Eq:Dec2}, we find
\begin{equation}\label{Eq:Dec3}
\int_{\mathbb{R}_{+}}\Big[{(e^\Phi u_{j,\gamma}(\cdot;\xi))^\prime}^2+\big[(1-\epsilon^2)(t-\xi)^2-1 \big]|e^\Phi u_{j,\gamma}(\cdot;\xi)|^2 \Big]dt
\leq 8\gamma_{-}\sqrt{1+\gamma_{-}^2}e^{\epsilon K^2/2}.
\end{equation}
This gives
\begin{equation}\label{Eq:Dec4}
\int_{N\geq (t-\xi)\geq a_{\epsilon}}\Big[|{(e^\Phi u_{j,\gamma}(\cdot;\xi))^\prime}|^2+ |{e^\Phi u_{j,\gamma}(\cdot;\xi)}|^2\Big]dt
\leq 8\gamma_{-}\sqrt{1+\gamma_{-}^2}e^{\epsilon K^2/2}+e^{\epsilon a_{\epsilon}^2},
\end{equation}
with $a_{\epsilon}= \sqrt{\frac{2}{1-\epsilon^2}}\,$. Now choose
$C_{\epsilon,K}=\max \{a_{\epsilon},16e^{\epsilon
K^2/2},2e^{\epsilon a^2_{\epsilon}}\}$. That way, we can rewrite
\eqref{Eq:Dec4} as follows,
\begin{equation}\label{Eq:Dec5}
\int_{N\geq (t-\xi)\geq C_{\epsilon,K}}\Big[|{(e^\Phi u_{j,\gamma}(\cdot;\xi))^\prime}|^2+ |{e^\Phi u_{j,\gamma}(\cdot;\xi)}|^2\Big]dt
\leq C_{\epsilon,K}(1+\gamma_{-}+\gamma_{-}^2).
\end{equation}
The estimate in \eqref{Eq:Dec5} is true for all $N> C_{\epsilon,K}$.
Sending $N$ to $\infty$ and using monotone convergence, we get the
estimate in \eqref{Decay}.
\end{proof}
The rest of this section is devoted to an analysis of the term in
\eqref{eq:sum}.
\begin{lem}\label{lem-sum-j}
Let $M>0$. There exist constants $j_0\in\mathbb N$ and $C>0$ such
that, for all $\gamma\in(-M,M)$, we have,
\[
\sum_{j=2}^{\infty}\int_{\mathbb{R}}(\mu_{j}(\gamma,\xi)-1)_{-}d\xi
=\sum_{j=2}^{j_{0}}\int_{\mathbb{R}}(\mu_{j}(\gamma,\xi)-1)_{-}d\xi\leq C.
\]
\end{lem}
\begin{proof}
Let us observe that for all $j\geq2$,
\[
\{\xi\in\mathbb{R}~:~\mu_{j}(\gamma,\xi)\leq 1\}\subset\{\xi\in\mathbb{R}~:~\mu_{2}(\gamma,\xi)\leq 1\}\,,
\]
and for all $\gamma\in (-M,M)$, using the monotonicity of
$\eta\mapsto\mu_{2}(\eta,\xi)$, we have,
\[
\{\xi\in\mathbb{R}~:~\mu_{j}(\gamma,\xi)\leq 1\}\subset\{\xi\in\mathbb{R}~:~\mu_{2}(-M,\xi)\leq 1\}.
\]
According to Lemma~\ref{lem:Ka}, there exists a constant $\ell>0$
such that
\[
\{\xi\in\mathbb{R}~:~\mu_{2}(-M,\xi)\leq 1\}\subset[-\ell,\ell].
\]
We introduce constants $(\xi_{j}(M))_{j\geq 2}\subset[-\ell, \ell]$ by
\[
\mu_{j}(-M,\xi_{j}(M))=\min_{\xi\in[-\ell,\ell]}\mu_{j}(-M,\xi).
\]
Arguing as in the proof of \cite[Lemma~2.5]{Ka4}, we get,
\begin{equation}\label{mu-j-gamma}
\lim_{j\rightarrow\infty}\mu_{j}(-M,\xi_{j}(M))=\infty\,.
\end{equation}
Consequently, we may find $j_{0}\geq 2$ depending solely on $M$ such that
\[
\mu_{j}(-M,\xi_{j}(M))>1, \qquad (j> j_{0}).
\]
It follows that, for all $j> j_{0}$, $\xi\in[-\ell,\ell]$ and
$\gamma\in(-M,M)$,
\[
\mu_{j}(\gamma,\xi)\geq \mu_{j}(-M,\xi)\geq \mu_{j}(-M,\xi_{j}(M))>1.
\]
The result of Lemma~\ref{lem-sum-j} now follows: the terms with
$j> j_{0}$ vanish, while each of the finitely many remaining terms is
bounded by $2\ell(1+2M^{2})$, since the integrand
$(\mu_{j}(\gamma,\xi)-1)_{-}$ vanishes outside $[-\ell,\ell]$ and
satisfies $(\mu_{j}(\gamma,\xi)-1)_{-}\leq 1+2M^{2}$ thanks to
\eqref{Eq-mu-uprime}.
\end{proof}
Again, the proof of \cite[Lemma~2.5]{Ka4} allows us to obtain:
\begin{lem}\label{lem-lim-mu-j}
For all $M>0$, there holds,
\[
\lim_{j\rightarrow\infty}\left(\inf_{\xi\in\mathbb{R}} \mu_{j}(-M,\xi)\right)=\infty.
\]
\end{lem}
\begin{proof}
It has been established in \cite{DaHe} that there exists a sequence
$(\xi_{j}(M))_{j\in\mathbb{N}}$ such that, for all $j$,
\[
\inf_{\xi\in\mathbb{R}}\mu_{j}(-M,\xi)=\mu_{j}(-M,\xi_{j}(M))\,.
\]
Let us show that
\begin{equation}\label{lim-mu-j-gen}
\lim_{j\rightarrow\infty}\mu_{j}(-M,\xi_{j}(M))=\infty\,.
\end{equation}
Suppose that \eqref{lim-mu-j-gen} were false. Then we can find a
constant $\mathcal{M}$ and a subsequence $j_{n}$ such that
\begin{equation}\label{xi-j-M}
\inf_{\xi\in\mathbb{R}}\mu_{j_{n}}(-M,\xi)=\mu_{j_{n}}(-M,\xi_{j_{n}}(M))\leq \mathcal{M},\qquad\forall~n\in\mathbb{N}.
\end{equation}
If $(\xi_{j_{n}}(M))_{n}$ is unbounded, we may find a subsequence,
denoted again by $(\xi_{j_{n}}(M))_{n}$, such that
$$\displaystyle{\lim_{n\rightarrow\infty}}\xi_{j_{n}}(M)=\infty.$$
Fix $j_{0}\in\mathbb{N}$ and let us observe that for all $j_{n}\geq
j_{0}$,
\begin{equation}\label{inq-egn}
\mu_{j_{n}}(-M,\xi_{j_{n}}(M))\geq\mu_{j_{0}}(-M,\xi_{j_{n}}(M))\,.
\end{equation}
On account of Lemma~\ref{lem:Ka}, we know that $\displaystyle{\lim_{\xi\rightarrow\infty}}\mu_{j_{0}}(-M,\xi)=2j_{0}+1$. Therefore, passing to the limit $n\rightarrow\infty$ in \eqref{inq-egn}, we obtain
\[
\liminf_{n\rightarrow\infty}\mu_{j_{n}}(-M,\xi_{j_{n}}(M))\geq 2j_{0}+1.
\]
Letting $j_{0}\rightarrow\infty$, we conclude,
\[
\liminf_{n\rightarrow\infty}\mu_{j_{n}}(-M,\xi_{j_{n}}(M))=\infty,
\]
which contradicts \eqref{xi-j-M}.
Now, if $(\xi_{j_n}(M))_{n}$ is bounded, we follow the proof of
Lemma~2.5 in \cite{Ka4} and establish that
$$\lim_{n\to\infty}\mu_{j_{n}}(-M,\xi_{j_{n}}(M))=\infty\,,$$
which contradicts \eqref{xi-j-M}.
\end{proof}
\begin{lem}\label{cont-int}
The function
\[
\mathcal{I}:\mathbb{R}\ni\gamma\mapsto\Sum{j=2}{\infty}\Int{\mathbb{R}}{}( \mu_{j}(\gamma,\xi)-1)_{-}d\xi\,,
\]
is locally uniformly continuous.
\end{lem}
\begin{proof}
Let $m>0$. It is sufficient to establish,
\begin{equation}
\left(\sup_{|\gamma|\leq m} \left|\mathcal{I}(\gamma+\tau)-\mathcal{I}(\gamma)\right| \right)\rightarrow 0 \text{ as } \tau\rightarrow 0.
\end{equation}
Let $\tau_{1}\in (0,1)$. By monotonicity, it follows that for all
$\tau\in[-\tau_{1},\tau_{1}]$ and $j\geq2$,
\[
\{\xi\in\mathbb{R}~:~\mu_{j}(\gamma+\tau,\xi)\leq 1\}\subset\{\xi\in\mathbb{R}~:~ \mu_{2}(-m-\tau_{1},\xi)\leq 1 \}\,.
\]
We may find a constant $M>0$ depending only on $m$ such that
\begin{equation}
\forall \tau\in[-\tau_{1},\tau_{1}],\qquad\forall ~j\geq 2\,,\qquad
\{\xi\in\mathbb{R}~:~\mu_{j}(-m-\tau_{1},\xi)\leq 1 \}\subset[-M,M]\,.
\end{equation}
Let $\xi_{j}(M)$ be as in the proof of Lemma~\ref{lem-sum-j}, so that
\[
\forall~\xi\in[-M, M],\qquad\forall~\tau\in[-\tau_{1},\tau_{1}],\qquad\mu_{j}(\gamma+\tau,\xi)\geq\mu_{j}(-m-\tau_{1},\xi_{j}(M))\,.
\]
Arguing as in Lemma~2.5 in \cite{Ka4} and Lemma~\ref{lem-lim-mu-j}, we get~:
\[
\lim_{j\rightarrow\infty}\mu_{j}(-m-\tau_{1},\xi_{j}(M))=\infty\,.
\]
Hence, we may find $j_{0}\geq 2$ depending solely on $m$ such that,
for all $j\geq j_0$,
\[
\mu_{j}(-m-\tau_{1},\xi_{j}(M))> 1\,,
\]
and consequently, for all $|\tau|\leq \tau_{1}$, we have,
\[
\Sum{j=2}{\infty}\Int{\mathbb{R}}{}(\mu_{j}(\gamma+\tau,\xi)-1)_{-}d\xi=\Sum{j=2}{j_{0}}\Int{\mathbb{R}}{}(\mu_{j}(\gamma+\tau,\xi)-1)_{-}d\xi.
\]
Therefore, we deal with a sum of $j_{0}$ terms with $j_{0}$
independent from $\tau$ and $\gamma$. So given
$k\in\{2,\cdots,j_{0}\}$ and setting
$\mathcal{I}_{k}(\gamma)=\Int{\mathbb{R}}{}\big(\mu_{k}(\gamma,\xi)-1\big)_{-}d\xi$, it is sufficient to show that
\begin{equation}
\lim_{\underset{|\tau|\leq \tau_{1}}{\tau\rightarrow 0}} \left(\sup_{|\gamma|\leq m} \big|\mathcal{I}_{k}(\gamma+\tau)-\mathcal{I}_{k}(\gamma)\big| \right)=0\,.
\end{equation}
Since the function $\gamma\mapsto\mu_{k}(\gamma,\xi)$ is continuous, the above limit
follows by dominated convergence.
\end{proof}
The next theorem is taken from \cite[Theorem~2.4.8]{Ka}.
\begin{thm}
There exist constants $C>0$ and $\eta>0$ such that, for all $\gamma\in\mathbb{R}$ and $\xi\in(\eta,+\infty)$, we have~:
\begin{equation}\label{int-mu-1-finite}
\big|\mu_{1}(\gamma,\xi)-1\big|\leq C(1+|\gamma|)\xi\exp{(-\xi^{2})}.
\end{equation}
\end{thm}
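For later use, we note that \eqref{int-mu-1-finite} makes the contribution of large $\xi$ explicitly summable, since
\[
\int_{\eta}^{\infty}\xi e^{-\xi^{2}}d\xi=\frac{1}{2}e^{-\eta^{2}}\,,
\]
so $\xi\mapsto|\mu_{1}(\gamma,\xi)-1|$ is integrable at $+\infty$, uniformly for $\gamma$ in bounded sets.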
Let us introduce the function
\begin{equation}\label{eq:fctsum}
\mathcal{J}:\mathbb{R}\ni\gamma\mapsto\Sum{j=1}{\infty}\Int{\mathbb{R}}{}(\mu_{j}(\gamma,\xi)-1)_{-}d\xi\,.
\end{equation}
\begin{lem}\label{lim-int}
Let $\{\gamma_{h}\}_{h}$ be a real-sequence such that \(
\displaystyle{\lim_{h\rightarrow0}}\gamma_{h}=\gamma\in\mathbb{R} \). There
holds,
\[
\lim_{h\rightarrow 0}\mathcal{J}(\gamma_{h})=\mathcal{J}(\gamma).
\]
\end{lem}
\begin{proof}
We write,
\begin{multline}\label{dec-sum}
\left|\mathcal{J}(\gamma_{h})-\mathcal{J}(\gamma)\right|
\leq\Big|\Int{\mathbb{R}}{}(\mu_{1}(\gamma_{h},\xi)-1)_{-}d\xi-\Int{\mathbb{R}}{}(\mu_{1}(\gamma,\xi)-1)_{-}d\xi\Big|
+| \mathcal{I}(\gamma_{h})-\mathcal{I}(\gamma)|\,.
\end{multline}
We treat the first term on the right-hand side of \eqref{dec-sum}
using the inequality \eqref{int-mu-1-finite}. That way, for every
$\epsilon>0$, there exists $h_0>0$ such that for all $h\in(0,h_0]$,
\[
\left|\Int{\mathbb{R}}{}(\mu_{1}(\gamma_{h},\xi)-1)_{-}d\xi\right|\leq \Int{\mathbb{R}}{}|\mu_{1}(\gamma_{h},\xi)-1|d\xi\leq \int_{\mathbb{R}}g(\xi)d\xi\,,
\]
with
\[
g(\xi)= C(1+|\gamma|+\epsilon)|\xi| e^{-\xi^{2}/2}\in L^{1}(\mathbb{R}).
\]
By continuity of the function $\gamma\mapsto \mu_{1}(\gamma,\xi)$
and dominated convergence, it follows that
\[
\Int{\mathbb{R}}{}(\mu_{1}(\gamma_{h},\xi)-1)_{-}d\xi\rightarrow \Int{\mathbb{R}}{}(\mu_{1}(\gamma,\xi)-1)_{-}d\xi
\]
as $h\rightarrow 0$.
The second term in \eqref{dec-sum} converges to $0$ by Lemma~\ref{cont-int}.
\end{proof}
\section{Eigenprojectors}
Recall that $\mathbb{R}^2_{+}=\mathbb{R}\times\mathbb{R}_{+}$. Consider $h,b>0$ and the magnetic potential
\begin{equation}\label{A0}
\mathbb{R}^{2}_{+}\ni (s,t)\mapsto {\bf A}_{0}(s,t)=(-t,0).
\end{equation}
In this section, we construct projectors on the (generalized)
eigenfunctions of the operator
\begin{equation}\label{Op-hs}
\mathcal{P}^{\alpha,\gamma}_{h,b,\mathbb{R}^{2}_{+}}= (-ih\nabla+b{\bf
A_{0}})^{2} \text{ in } L^{2}(\mathbb{R}^{2}_{+})\,,
\end{equation}
whose domain is
\[
\mathcal{D}(\mathcal{P}^{\alpha,\gamma}_{h,b,\mathbb{R}^{2}_{+}})
= \big\{ u\in L^2(\mathbb{R}^2_{+})~:~ (-ih\nabla+b{\bf A_{0}})^{j}u\in L^2(\mathbb{R}^{2}_{+}),\quad j=1,2, \quad \partial_{t}u=\gamma u\quad {\rm on}\quad t=0\big\}\,.
\]
Consider an orthonormal family $(u_{j,\gamma}(\cdot;\xi))_{j=1}^{\infty}$ of real-valued eigenfunctions of the operator $\mathfrak{h}[\gamma,\xi]$
introduced in \eqref{Op-h-g-x}, i.e.
\begin{equation}\label{Egv-HO}
\left\{
\begin{array}{ll}
-u_{j,\gamma}^{\prime\prime}(t;\xi)+(t-\xi)^{2}u_{j,\gamma}(t;\xi)=\mu_{j}(\gamma,\xi)u_{j,\gamma}(t;\xi),\quad\text{ in }\mathbb{R}_{+},&\\
u_{j,\gamma}^{\prime}(0;\xi)=\gamma u_{j,\gamma}(0;\xi),&\\
\Int{\mathbb{R}_{+}}{}u_{j,\gamma}(t;\xi)^{2}dt=1.&
\end{array}
\right.
\end{equation}
Let $u\in \mathcal{D}(\mathcal{P}^{\alpha,\gamma}_{1,1,\mathbb{R}^{2}_{+}})$. Performing a Fourier transformation with respect to $s$, we
observe the formal relation,
\begin{equation}
\mathcal{P}^{\alpha,\gamma}_{1,1,\mathbb{R}^{2}_{+}}u =(2\pi)^{-1} \mathcal{F}_{\xi\rightarrow s}^{-1}\big(-\partial_{t}^2+(t-\xi)^2\big)\mathcal{F}_{s\rightarrow \xi}u\,.
\end{equation}
By the spectral theorem, we have
\[
\mathfrak{h}[\gamma,\xi]= \sum_{j=1}^{\infty} \mu_{j}(\gamma,\xi)\big\langle \cdot, u_{j,\gamma}(\cdot;\xi)\big\rangle_{L^2(\mathbb{R}_{+})}\, u_{j,\gamma}(\cdot;\xi)\,,
\]
and consequently,
\[
\mathcal{P}^{\alpha,\gamma}_{1,1,\mathbb{R}^{2}_{+}}u
=(2\pi)^{-1}\sum_{j=1}^{\infty} \mathcal{F}_{\xi\rightarrow s}^{-1}\Big(\mu_{j}(\gamma,\xi)\big\langle\mathcal{F}_{s\rightarrow \xi}u , u_{j,\gamma}(\cdot;\xi)\big\rangle_{L^2(\mathbb{R}_{+})}\, u_{j,\gamma}(\cdot;\xi)\Big)\,.
\]
That way, for every
$u\in
\mathcal{D}(\mathcal{P}^{\alpha,\gamma}_{1,1,\mathbb{R}^{2}_{+}})$, we have,
\begin{equation}
\big\langle \mathcal{P}^{\alpha,\gamma}_{1,1,\mathbb{R}^{2}_{+}}u,u \big\rangle_{L^2(\mathbb{R}^2_{+})}=
(2\pi)^{-1} \int_{\mathbb{R}}\sum_{j=1}^{\infty}\mu_{j}(\gamma,\xi)\Big | \big\langle\mathcal{F}_{s\rightarrow \xi}u , u_{j,\gamma}(\cdot;\xi)\big\rangle_{L^2(\mathbb{R}_{+})}\Big|^2 d\xi\,.
\end{equation}
For every $j\in\mathbb N$ and $\xi\in\mathbb{R}$, we introduce the
eigenprojector $\Pi_j(\gamma,\xi)$ defined by the corresponding
bilinear form,
\[
\big\langle\Pi_{j}(\gamma,\xi)u,v\big\rangle_{L^2(\mathbb{R}^2_{+})}= \big \langle\mathcal{F}_{s\rightarrow \xi}u , u_{j,\gamma}(\cdot;\xi)\big\rangle_{L^2(\mathbb{R}_{+})} \big\langle u_{j,\gamma}(\cdot;\xi),\mathcal{F}_{s\rightarrow \xi}v\big \rangle_{L^2(\mathbb{R}_{+})}\,.
\]
Through explicit calculations, it is easy to prove the following lemma.
\begin{lem}\label{Eq:spec-id}
Let $u,v\in L^2(\mathbb{R}^2_{+})$. We have
\begin{equation}\label{Eq1}
(2\pi)^{-1}\int_{\mathbb{R}}\sum_{j=1}^{\infty} \big\langle \Pi_{j}(\gamma,\xi)u,v \big\rangle_{L^2(\mathbb{R}^2_{+})} d\xi=\big\langle u,v\big\rangle_{L^2(\mathbb{R}^2_{+})}.
\end{equation}
If in addition $u \in
\mathcal{D}(\mathcal{P}^{\alpha,\gamma}_{1,1,\mathbb{R}^{2}_{+}})$, then,
\begin{equation}\label{Eq2}
\big\langle u, \mathcal{P}^{\alpha,\gamma}_{1,1,\mathbb{R}^{2}_{+}}u \big\rangle_{L^2(\mathbb{R}^2_{+})} =
(2\pi)^{-1}\sum_{j=1}^{\infty} \int_{\mathbb{R}} \mu_{j} (\gamma,\xi)\big \langle \Pi_{j}(\gamma,\xi)u,u \big\rangle_{L^2(\mathbb{R}^2_{+})} d\xi\,.
\end{equation}
\end{lem}
Let us introduce the unitary operator,
$$
U_{h,b}:L^{2}(\mathbb{R}^{2}_{+})\ni\varphi\mapsto U_{h,b}\varphi\in L^{2}(\mathbb{R}^{2}_{+}),
$$
such that, for all $x=(x_{1},x_{2}) \in\mathbb{R}^{2}_{+}$,
$$
(U_{h,b}\varphi)(x)=\sqrt{b/h}\,\varphi(\sqrt{b/h}\,x).
$$
Furthermore, we introduce the family of projectors,
\begin{equation}\label{U-pro}
\Pi_{j}(h,b;\gamma,\xi)=U_{h,b}\Pi_{j}(\gamma_{h,b},\xi)U_{h,b}^{-1}\,,
\end{equation}
with
\begin{equation}\label{gamma-h-b}
\gamma_{h,b}=h^{\alpha-1/2}b^{-1/2}\gamma.
\end{equation}
It is easy to check that
\begin{equation}\label{App-res}
U_{h,b}^{-1}\mathcal{P}^{\alpha,\gamma}_{h,b,\mathbb{R}^{2}_{+}}U_{h,b}=hb\mathcal{P}^{\alpha,\gamma_{h,b}}_{1,1,\mathbb{R}^{2}_{+}}\,.
\end{equation}
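The identity \eqref{App-res} can be checked directly on smooth functions; here is a minimal verification for the magnetic part (the boundary parameter then rescales according to \eqref{gamma-h-b}). Writing $c=\sqrt{b/h}$ and using that ${\bf A}_{0}$ is linear, so that ${\bf A}_{0}(x)=c^{-1}{\bf A}_{0}(cx)$, we compute
\[
\big((-ih\nabla+b{\bf A}_{0})U_{h,b}\varphi\big)(x)= c\Big(-ihc(\nabla\varphi)(cx)+bc^{-1}{\bf A}_{0}(cx)\varphi(cx)\Big)=\sqrt{hb}\,\big(U_{h,b}(-i\nabla+{\bf A}_{0})\varphi\big)(x)\,,
\]
since $hc=bc^{-1}=\sqrt{hb}$; applying this identity twice yields \eqref{App-res}.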
That way, we infer from Lemma~\ref{Eq:spec-id}:
\begin{lem}\label{Egpro}
Let $u,v\in L^2(\mathbb{R}^2_{+})$. We have
\begin{equation}\label{Eq1-h}
(2\pi)^{-1}\int_{\mathbb{R}}\sum_{j=1}^{\infty} \big\langle \Pi_{j}(h,b;\gamma,\xi)u,v \big\rangle_{L^2(\mathbb{R}^2_{+})} d\xi=\langle u,v\big \rangle_{L^2(\mathbb{R}^2_{+})}.
\end{equation}
If in addition $u \in
\mathcal{D}(\mathcal{P}^{\alpha,\gamma}_{h,b,\mathbb{R}^{2}_{+}})$, then,
\begin{equation}\label{Eq2-h}
\big\langle u, \mathcal{P}^{\alpha,\gamma}_{h,b,\mathbb{R}^{2}_{+}}u \big\rangle =(2\pi)^{-1}hb\sum_{j=1}^{\infty} \int_{\mathbb{R}} \mu_{j} (h^{\alpha-1/2}b^{-1/2}\gamma,\xi) \big\langle \Pi_{j}(h,b;\gamma,\xi)u,u \big \rangle_{L^2(\mathbb{R}^2_{+})} d\xi\,.
\end{equation}
\end{lem}
\section{Lower bound}\label{sec:lb}
In this section, we determine a lower bound of the trace
$-E(\lambda;h,\gamma,\alpha)$ consistent with the asymptotics
displayed in Theorem~\ref{thm:KN}.
Arguing as in \cite[Sec.~5.1]{Fo-Ka}, it follows from the
Lieb-Thirring inequality that the trace
$-E(\lambda;h,\gamma,\alpha)$ is finite.
\subsection{Decomposition of the energy}
Consider a partition of unity of $\mathbb{R}$,
\begin{equation}
\chi_{1}^{2}+\chi_{2}^{2}=1,\qquad {\rm supp}~\chi_{1}\subset (-\infty,1),\qquad {\rm supp}~\chi_{2}\subset \Big[\frac{1}{2},\infty\Big).
\end{equation}
We set for $k=1,2$, $x\in\mathbb{R}^{2}$,
\begin{equation}\label{Def-zetak}
\zeta_{k}(x)=\chi_{k}(t(x)),\qquad t(x)=
\begin{cases}
{\rm dist}(x,\partial\Omega)&{\rm if}\quad x\in\Omega\\
-{\rm dist}(x,\partial\Omega)&\text{ otherwise.}
\end{cases}
\end{equation}
Let $\delta:=\delta(h)\in(0,1)$ be a small parameter to be chosen
later. For $k=1,2$, we put,
\begin{equation}\label{Def-zetakh}
\zeta_{k,h}(x)=\chi_{k}\Big(\dfrac{t(x)}{\delta(h)}\Big),\quad (x\in\overline{\Omega})\,,
\end{equation}
where $t(\cdot)$ is the signed distance introduced in \eqref{Def-zetak}.
Let $\{g_{j}\}_{j}$ be any orthonormal system in
$\mathcal{D}(\mathcal{P}^{\alpha,\gamma}_{h,\Omega})$. We aim to
prove a uniform lower bound of the following quantity,
\[
\sum_{j=1}^{N}(\mathcal{Q}_{h,\Omega}^{\alpha,\gamma}(g_{j})-\lambda h ).
\]
Thanks to the variational principle in Lemma~\ref{lem-VP-2}, this
will give us a lower bound of the trace
$-E(\lambda;h,\gamma,\alpha)$.
The IMS localisation formula yields
\begin{equation}\label{IMS-2}
\sum_{j=1}^{N}(\mathcal{Q}_{h,\Omega}^{\alpha,\gamma}(g_{j})-\lambda h )=\sum_{j=1}^{N}\sum_{k=1}^{2}\Big( \mathcal{Q}_{h,\Omega}^{\alpha,\gamma}(\zeta_{k,h}g_{j})- \int_{\Omega}(\mathcal{V}_{h}+\lambda h)|{\zeta_{k,h}g_{j}}|^{2}dx \Big),\quad \mathcal{V}_{h}:= \sum_{k=1}^{2}|\nabla \zeta_{k,h}|^{2}.
\end{equation}
\subsection{The bulk term}
We will prove that the bulk term in \eqref{IMS-2} corresponding to
$k=2$ is an error term, i.e. of the order $o(h^{1/2})$. Thanks to
the variational principle in Lemma~\ref{lem-VP-3}, we have,
\begin{equation}\label{eq:blk}
\sum_{j=1}^{N}\Big( \mathcal{Q}_{h,\Omega}^{\alpha,\gamma}(\zeta_{2,h}g_{j})-
\int_{\Omega}(\mathcal{V}_{h}+\lambda h)|{\zeta_{2,h}g_{j}}|^{2} dx\Big)\geq
{\rm Tr}\Big(\big[\widetilde{\mathcal{P}}_{h}-(Bh+\mathcal{V}_{h})\big]{\bf 1}_{(-\infty,0)}\big(\widetilde{\mathcal{P}}_{h}-(Bh+\mathcal{V}_{h})\big)\Big),
\end{equation}
where
$\widetilde{\mathcal{P}}_{h}=(-ih\nabla+A)^2$
is the operator acting in $L^{2}(\mathbb{R}^{2})$. The trace on the right
side in \eqref{eq:blk} can be controlled using the Lieb-Thirring
inequality. The details are given in \cite[Sec.~5.2]{Fo-Ka}. That
way, we get,
\begin{equation}\label{eq:lb-blk}
\begin{aligned}
&\sum_{j=1}^{N}\Big( \mathcal{Q}_{h,\Omega}^{\alpha,\gamma}(\zeta_{2,h}g_{j})- \int_{\Omega}(\mathcal{V}_{h}+\lambda h)|{\zeta_{2,h}g_{j}}|^{2}dx\Big)\\
&\qquad\geq -C h^{2} \left( \int_{\mathbb{R}^{2}}\Big(\norm{h^{-1}B}_{L^{\infty}} (-h^{-2}\mathcal{V}_{h})_{-} + (-h^{-2}\mathcal{V}_{h})_{-}^{2}\Big)dx \right)\\
&\qquad\geq -C \Big(\dfrac{h}{\delta(h)}\big(1+\dfrac{h}{\delta(h)^{2}}\big)\Big).
\end{aligned}
\end{equation}
Therefore, we get,
\begin{equation}\label{Err-bulk}
\sum_{j=1}^{N}(\mathcal{Q}_{h,\Omega}^{\alpha,\gamma}(g_{j})-\lambda h )\geq \sum_{j=1}^{N}\Big( \mathcal{Q}_{h,\Omega}^{\alpha,\gamma}(\zeta_{1,h}g_{j})- \int_{\Omega}(\mathcal{V}_{h}+\lambda h)|{\zeta_{1,h}g_{j}}|^{2}dx\Big)- C\Big(\dfrac{h}{\delta(h)}\big(1+\dfrac{h}{\delta(h)^{2}}\big)\Big).
\end{equation}
Later on, we shall choose $\delta(h)$ in a manner that the first term (boundary term) on the right hand side above is the dominant term.
\subsection{The boundary term}
Here we handle the term corresponding to $k=1$ in \eqref{IMS-2}. By
assumption, $\partial\Omega$ has a finite number of connected
components. For simplicity of the presentation, we will perform the
computations in the case where $\partial\Omega$ has one connected
component. In the general case, we work on each connected component
independently and then sum the resulting lower bounds.
Let us introduce a positive, smooth function $\psi\in L^{2}(\mathbb{R})$,
supported in $(0,1)$ with the property that
\[
\int_{\mathbb{R}}\psi^{2}(s)ds=1.
\]
Recall the boundary coordinates $(s,t)$ introduced in \eqref{BC}. We
put
\begin{equation}\label{Def-psih}
\psi_{h}(x;\sigma)= \dfrac{1}{\delta(h)^{1/2}}\psi\left( \dfrac{s(x)-\sigma}{\delta(h)}\right)\,,\quad(\sigma \in \mathbb{R}).
\end{equation}
Using again the IMS decomposition formula, we write,
\begin{multline}\label{eq:bnd}
\sum_{j=1}^{N}\Big(\mathcal{Q}_{h,\Omega}^{\alpha,\gamma}(\zeta_{1,h}g_{j})-\int_{\Omega}(\lambda
h +\mathcal{V}_{h}) |{\zeta_{1,h}g_{j}}|^{2}dx\Big) \\
= \sum_{j=1}^{N}\int_{\mathbb{R}}\Big(
\mathcal{Q}_{h,\Omega}^{\alpha,\gamma}(\psi_{h} (x;\sigma)\zeta_{1,h}g_{j})-
\int_{\Omega}(\lambda h +\mathcal{W}_{h}) |\psi_{h}(x;\sigma)\zeta_{1,h}g_{j}|^{2}dx
\Big)d\sigma,
\end{multline}
where
\begin{equation}
\mathcal{W}_{h}= \mathcal{V}_{h}+ h^{2}\int_{\mathbb{R}}|\nabla \psi_{h}(x,\sigma)|^{2}d\sigma.
\end{equation}
Let us denote by (here $\Phi=\Phi_{t_{0}}$ is the coordinate change \eqref{BC}, valid near the boundary)
\begin{equation}\label{Asigma}
v_{j,h}(x;\sigma):= \psi_{h} (x;\sigma)\zeta_{1,h}(x)g_{j}(x), \qquad B_{\sigma}=B(\Phi(\sigma,0))\,,\quad A_{\sigma}(s,t)=B_{\sigma}{\bf A}_{0}(s,t)=(-B_{\sigma}t,0),
\end{equation}
where ${\bf A}_0$ is the magnetic potential introduced in \eqref{A0}.
From Lemma~\ref{Lem-apqf}, we infer that for all $\varepsilon\in(0,1)$,
\begin{multline}\label{f-est}
\int_{\Omega}\big|(-ih\nabla + A) v_{j,h}(x;\sigma)\big|^{2}dx\\ \geq
(1-\varepsilon) \int_{\mathbb{R}^{2}_{+}}\big|(-ih\nabla +A_{\sigma})\widetilde
v_{j,h,\sigma}|^{2}dsdt-C\varepsilon^{-1}\delta(h)^{4}
\int_{\mathbb{R}^{2}_{+}}|\widetilde v_{j,h,\sigma}|^{2}dsdt.
\end{multline}
Here, the function $\widetilde v_{j,h,\sigma}$ is defined by the
coordinate transformation as follows
\[
\widetilde v_{j,h,\sigma}(s,t)= e^{i\phi_\sigma(s,t)/h}\,\widetilde{v}_{j,h}(\Phi(s,t);\sigma)\,,
\]
where, for a function $u$, $\widetilde{u}$ is associated to $u$ by means of \eqref{tilde} and $\phi_{\sigma}$ is the phase factor from Lemma~\ref{prop:gauge}.
Combining the foregoing estimates yields
\begin{multline}\label{N-q-f'}
\int_{\Omega}|(-ih\nabla + A) v_{j,h}(x;\sigma)|^{2}dx- \int_{\Omega}(\lambda h + \mathcal{W}_{h})|v_{j,h}(x;\sigma)|^{2}dx\\
\geq (1-\varepsilon) \int_{\mathbb{R}^{2}_{+}}|(-ih\nabla +A_{\sigma})\widetilde v_{j,h,\sigma}|^{2}dsdt
- \Big(\lambda h(1+C\delta(h))+ C\varepsilon^{-1}\delta(h)^{4} \Big) \norm{\widetilde v_{j,h,\sigma}}^{2}_{L^{2}(\mathbb{R}^{2}_{+})} \\
- (1+C\delta(h)) \int \widetilde {\mathcal{W}}_{h}|\widetilde v_{j,h,\sigma}|^2dsdt\,.
\end{multline}
Consequently,
\begin{multline}\label{N-q-f}
\mathcal{Q}_{h,\Omega}^{\alpha,\gamma}(v_{j,h}(x;\sigma))- \int_{\Omega}(\lambda h + \mathcal{W}_{h})|v_{j,h}(x;\sigma)|^{2}dx\\
\geq (1-\varepsilon)\int_{\mathbb{R}^{2}_{+}}|(-ih\nabla +A_{\sigma})\widetilde v_{j,h,\sigma}|^{2}dsdt
+h^{1+\alpha}\int_{\mathbb{R}}\gamma(s)|\widetilde v_{j,h,\sigma}(s,0)|^{2}ds\\
- \Big(\lambda h(1+C\delta(h))+ C\varepsilon^{-1}\delta(h)^{4} \Big) \norm{\widetilde v_{j,h,\sigma}}^{2}_{L^{2}(\mathbb{R}^{2}_{+})}
- (1+C\delta(h)) \int \widetilde {\mathcal{W}}_{h}|\widetilde v_{j,h,\sigma}|^2dsdt.
\end{multline}
The function $\gamma$ defined on $\partial\Omega$ can be viewed as a
function of the boundary variable $s\in(0,|\partial\Omega|)$. We
extend $\gamma$ by $0$ to a function in $L^3(\mathbb{R})$.
Hereafter, we distinguish between the easy case when
$\alpha>\frac12$ and the harder case when $\alpha=\frac12$.
\subsection*{The regime $\alpha>\frac12$}
Let $\eta>0$. Thanks to \eqref{N-q-f}, we have the obvious
decomposition,
\begin{multline}\label{dec-eta0}
\sum_{j=1}^{N}\bigg\{\mathcal{Q}_{h,\Omega}^{\alpha,\gamma}(v_{j,h}(x;\sigma))- \int_{\Omega}(\lambda h + \mathcal{W}_{h})|v_{j,h}(x;\sigma)|^{2}dx\bigg\}\\
\geq \sum_{j=1}^{N}\bigg[ (1-\eta)(1-\varepsilon)\int_{\mathbb{R}^{2}_{+}}|(-ih\nabla +A_{\sigma})\widetilde v_{j,h,\sigma}|^{2}dsdt
\\- \Big(\lambda h(1+C\delta(h))+ C\varepsilon^{-1}\delta(h)^{4} \Big) \norm{\widetilde v_{j,h,\sigma}}^{2}_{L^{2}(\mathbb{R}^{2}_{+})}
- (1+C\delta(h)) \int \widetilde {\mathcal{W}}_{h}|\widetilde v_{j,h,\sigma}|^2dsdt \bigg]\\+\eta(1-\varepsilon)
R_{h,\alpha,\eta,\sigma}(\widetilde
v_{j,h,\sigma})\,,
\end{multline}
where
\begin{multline}
R_{h,\alpha,\eta,\sigma}(\widetilde v_{j,h,\sigma})=
\sum_{j=1}^{N}\bigg[\int_{\mathbb{R}^{2}_{+}}|(-ih\nabla
+A_{\sigma})\widetilde v_{j,h,\sigma}|^{2}dsdt +
\eta^{-1}h^{1+\alpha}\int_{\mathbb{R}}
\dfrac{\gamma(s)}{1-\varepsilon}|\widetilde
v_{j,h,\sigma}(s,0)|^{2}ds\bigg].
\end{multline}
Furthermore, we define the operator $\widetilde\Gamma$ on $L^2([0,\delta(h)]\times(0,\delta(h)))$,
\[
\widetilde \Gamma f=\sum_{j=1}^{N}\langle f, \widetilde v_{j,h,\sigma}\rangle_{L^{2}\big([0,\delta(h)]\times(0,\delta(h))\big)} \widetilde v_{j,h,\sigma},
\]
which satisfies $0\leq \widetilde\Gamma\leq C\delta(h)^{-1}$ (in the sense of quadratic forms).
Set $\gamma_{h,b,\eta,\varepsilon}= \frac {h^{\alpha-1/2}B_{\sigma}^{-1/2}\gamma}{\eta(1-\varepsilon)}$. Thanks to the variational principle in Lemma~\ref{lem-VP-3}, we may write,
\begin{equation}\label{Eq-FErr0a}
\begin{aligned} R_{h,\alpha,\eta
,\sigma}(\widetilde v_{j,h,\sigma})&= {\rm tr}\Big[ \mathcal{P}^{\alpha,\gamma/(\eta(1-\varepsilon))}_{h,B_{\sigma},\mathbb{R}^{2}_{+}}\widetilde\Gamma \Big]
\geq -C\delta(h)^{-1}{\rm tr}\Big[ \mathcal{P}^{\alpha,\gamma/(\eta(1-\varepsilon))}_{h,B_{\sigma},\mathbb{R}^{2}_{+}}\Big]_{-}\\
&\geq -C\delta(h)^{-1}hB_{\sigma}{\rm tr}\Big[ \mathcal{P}^{\alpha,\gamma_{h,b,\eta,\varepsilon}}_{1,1,\mathbb{R}^{2}_{+}}\Big]_{-}.
\end{aligned}
\end{equation}
Here the operator $ \mathcal{P}^{\alpha,\gamma_{h,b,\eta,\varepsilon}}_{1,1,\mathbb{R}^{2}_{+}}$ has been introduced in \eqref{Op-hs} and identified with the operator $H_{1}(-\gamma_{h,b,\eta,\varepsilon})$ defined in Theorem~\ref{Lb-thm}. Thus, it follows from Theorem~\ref{Lb-thm} (with $\alpha=1$) that
\begin{equation}\label{Eq-FErr0}
\begin{aligned} R_{h,\alpha,\eta,\sigma}(\widetilde v_{j,h,\sigma})
&\geq -C B_{\sigma}^{-1/2}h\delta^{-1}h^{3(\alpha-\frac12)}(1-\varepsilon)^{-3}\eta^{-3}\int_{\mathbb{R}}|\gamma(s)|^3ds \\
&\geq -C B_{\sigma}^{-1/2}h\delta^{-1}h^{3(\alpha-\frac12)}(1-\varepsilon)^{-3}\eta^{-3}\|\gamma\|_3^3\,.
\end{aligned}
\end{equation}
Integrating \eqref{Eq-FErr0} with respect to
$\sigma\in(-\delta(h),|\partial\Omega|)$, we conclude that,
\begin{equation}\label{Error-energy0}
\begin{aligned}
\eta(1-\varepsilon) \int R_{h,\alpha,\eta,\sigma}(\widetilde v_{j,h,\sigma})d\sigma&\geq -C B_{\sigma}^{-1/2}h\delta^{-1}h^{3(\alpha-\frac12)}
(1-\varepsilon)^{-2}\eta^{-2}\|\gamma\|_3^3\\
&=\mathcal{O}\big(h^{3(\alpha-\frac12)}\big)\,\eta^{-2}\delta^{-1}h\|\gamma\|_3^3.
\end{aligned}
\end{equation}
Selecting $\delta=h^{3/8}$, $\varepsilon=h^{1/4}$ and
$\eta=h^{1/32}$, we get that the error terms in \eqref{eq:lb-blk}
and \eqref{Error-energy0} are of the order $o(h^{1/2})$: indeed,
$h/\delta=h^{5/8}$ and
$\eta^{-2}\delta^{-1}h\,h^{3(\alpha-\frac12)}=h^{\frac{9}{16}+3(\alpha-\frac12)}$,
with $3(\alpha-\frac12)>0$. Also, by
\cite[Proof of (5.26)]{Fo-Ka}, we have,
\begin{multline*}
\int_{\mathbb{R}}\sum_{j=1}^{N}\bigg\{\mathcal{Q}_{h,\Omega}^{\alpha,\gamma}(v_{j,h}(x;\sigma))-
\int_{\Omega}(\lambda h +
\mathcal{W}_{h})|v_{j,h}(x;\sigma)|^{2}dx\bigg\}d\sigma\\
\geq
-\frac{h^{1/2}}{2\pi}\int_{\partial\Omega}\int_{\mathbb{R}}B(x)^{3/2}\Big(\mu_{1}(0,\xi)-\frac{\lambda}{B(x)}\Big)_{-}d\xi
ds(x)-h^{1/2}o(1)\,.
\end{multline*}
Thus, we infer from \eqref{dec-eta0}, \eqref{eq:lb-blk} and
\eqref{IMS-2} that
\begin{equation}\label{lb-thm0}
-E(\lambda;h,\gamma,\alpha)\geq
-\frac{h^{1/2}}{2\pi}\int_{\partial\Omega}\int_{\mathbb{R}}B(x)^{3/2}\Big(\mu_{1}(0,\xi)-\frac{\lambda}{B(x)}\Big)_{-}d\xi
ds(x)-h^{1/2}o(1)\,.
\end{equation}
\subsection*{The regime $\alpha=\frac12$}
The calculations here are longer than in the case
$\alpha>\frac12$. In the rest of this section, $\alpha=\frac12$. Let
$a>0$ and consider
\begin{equation}\label{gamma:a}
\gamma_{a}=j_{a}\ast \gamma
\end{equation}
where
\[
j_{a}(s)= C_* a^{-1}j\Big(\frac{s}{a}\Big), \qquad j(s)=e^{-s^2}\,.
\]
Here $C_*$ is a normalization constant such that
$\displaystyle\int_{\mathbb{R}}j(s)\,ds=1$. By \cite[Theorem~2.16]{LL}, we
know that $\gamma_{a}\in C^{\infty}(\mathbb{R})$ and, as $a\to0$,
\[
\gamma_{a}\rightarrow \gamma,\quad{\rm in~}L^3(\mathbb{R})\,.
\]
By smoothness of $\gamma_a$, we have,
\begin{equation}\label{gamma-gamma:a}
|\gamma_{a}(s)-\gamma_{a}(\sigma)|\leq C a^{-2}|s-\sigma|\leq C a^{-2}\delta(h),
\end{equation}
valid on the support of the function $ v_{j,h,\sigma}$.
Also, we have the obvious decomposition,
\begin{equation}
\int_{\mathbb{R}}\gamma(s)|\widetilde v_{j,h,\sigma}(s,0)|^{2}ds
= \int_{\mathbb{R}}\gamma_{a}(s)|\widetilde v_{j,h,\sigma}(s,0)|^{2}ds+ \int_{\mathbb{R}}(\gamma(s)-\gamma_{a}(s))|\widetilde v_{j,h,\sigma}(s,0)|^{2}ds\,.
\end{equation}
Implementing the aforementioned estimates in \eqref{N-q-f}, we
obtain,
\begin{multline}\label{N-q-f''}
\mathcal{Q}_{h,\Omega}^{\alpha,\gamma}(v_{j,h}(x;\sigma))- \int_{\Omega}(\lambda h + \mathcal{W}_{h})|v_{j,h}(x;\sigma)|^{2}dx\\
\geq (1-\varepsilon)\int_{\mathbb{R}^{2}_{+}}|(-ih\nabla +A_{\sigma})\widetilde v_{j,h,\sigma}|^{2}dsdt
+h^{3/2}\int_{\mathbb{R}}(\gamma_{a}(\sigma)-Ca^{-2}\delta(h))|\widetilde v_{j,h,\sigma}(s,0)|^{2}ds\\
+h^{3/2}\int_{\mathbb{R}}(\gamma(s)-\gamma_{a}(s))|\widetilde v_{j,h,\sigma}(s,0)|^{2}ds
- \Big(\lambda h(1+C\delta(h))+ C\varepsilon^{-1}\delta(h)^{4} \Big) \norm{\widetilde v_{j,h,\sigma}}^{2}_{L^{2}(\mathbb{R}^{2}_{+})}\\
- (1+C\delta(h)) \int \widetilde {\mathcal{W}}_{h}|\widetilde v_{j,h,\sigma}|^2dsdt.
\end{multline}
Let $\eta>0$ and
\begin{equation}\label{eq:new-gamma}
{\gamma}_{a,\sigma}=\dfrac{\gamma_{a}(\sigma)-Ca^{-2}\delta(h)}{(1-\varepsilon)(1-\eta)}\,.
\end{equation}
We can rewrite \eqref{N-q-f''} in the alternative form,
\begin{multline}\label{dec-eta}
\sum_{j=1}^{N}\bigg\{\mathcal{Q}_{h,\Omega}^{\alpha,\gamma}(v_{j,h}(x;\sigma))- \int_{\Omega}(\lambda h + \mathcal{W}_{h})|v_{j,h}(x;\sigma)|^{2}dx\bigg\}\\
\geq (1-\eta)(1-\varepsilon) \sum_{j=1}^{N}\bigg[\int_{\mathbb{R}^{2}_{+}}|(-ih\nabla +A_{\sigma})\widetilde v_{j,h,\sigma}|^{2}dsdt\\
+h^{3/2}\int_{\mathbb{R}}{\gamma}_{a,\sigma}|\widetilde
v_{j,h,\sigma}(s,0)|^{2}ds- {\lambda h} \norm{\widetilde v_{j,h,\sigma}}^{2}_{L^{2}(\mathbb{R}^{2}_{+})}\bigg]
\\+\eta(1-\varepsilon)(1-\eta_0)
R^{(1)}_{h,\eta,\sigma}(\widetilde
v_{j,h,\sigma})+\eta\eta_0(1-\varepsilon)
R^{(2)}_{h,\eta,\sigma,a}(\widetilde v_{j,h,\sigma})\,,
\end{multline}
where $\eta_0\in(0,1/2)$,
\begin{align*}
R^{(1)}_{h,\eta,\sigma}(\widetilde
v_{j,h,\sigma})=&\sum_{j=1}^{N}\bigg[\int_{\mathbb{R}^{2}_{+}}|(-ih\nabla
+A_{\sigma})\widetilde v_{j,h,\sigma}|^{2}dsdt-\lambda h \norm{\widetilde v_{j,h,\sigma}}^{2}_{L^{2}(\mathbb{R}^{2}_{+})}
\\
&+\left\{\lambda h\left(1-\frac1{1-\eta_0}\right)+\frac{\lambda h}{1-\eta_0}\left(1-\frac{1}{1-\varepsilon}\right)\right.\\
&\qquad\qquad\left.-\dfrac{2\eta^{-1}\lambda
hC\delta(h)+2C\eta^{-1}\varepsilon^{-1}\delta(h)^{4}}{(1-\varepsilon)(1-\eta_0)}\right\}
\norm{\widetilde v_{j,h,\sigma}}^{2}_{L^{2}(\mathbb{R}^{2}_{+})} \\
&-\dfrac{2\eta^{-1}(1+C\delta(h))}{(1-\varepsilon)(1-\eta_0)} \int \widetilde
{\mathcal{W}}_{h}|\widetilde v_{j,h,\sigma}|^2dsdt \bigg],
\end{align*}
and
\begin{multline}
R_{h,\eta,\sigma,a}^{(2)}(\widetilde v_{j,h,\sigma})=
\sum_{j=1}^{N}\bigg[\int_{\mathbb{R}^{2}_{+}}|(-ih\nabla
+A_{\sigma})\widetilde v_{j,h,\sigma}|^{2}dsdt +
2\eta_0^{-1}\eta^{-1}h^{3/2}\int_{\mathbb{R}}
\dfrac{\gamma(s)-\gamma_{a}(s)}{1-\varepsilon}|\widetilde
v_{j,h,\sigma}(s,0)|^{2}ds\bigg].
\end{multline}
The parameter $\eta_0$ will be selected sufficiently small but
fixed. Let us define the density matrix
\[
\widetilde \Gamma f=\sum_{j=1}^{N}\langle f, \widetilde v_{j,h,\sigma}\rangle_{L^{2}\big([0,\delta(h)]\times(0,\delta(h))\big)} \widetilde v_{j,h,\sigma},
\]
which satisfies $0\leq \widetilde\Gamma\leq C\delta(h)^{-1}$. Set
$\gamma_{\rm error}=2\eta_0^{-1}\eta^{-1}
\dfrac{\gamma(s)-\gamma_{a}(s)}{1-\varepsilon}$. Thanks to the
variational principle in Lemma~\ref{lem-VP-3} and the Lieb-Thirring
inequality in Theorem~\ref{Lb-thm}, we may write,
\begin{equation}\label{Eq-FErr}
\begin{aligned} R_{h,\eta,\sigma,a}^{(2)}(\widetilde v_{j,h,\sigma})&= {\rm tr}\Big[ \mathcal{P}^{\alpha,\gamma_{\rm error}}_{h,B_{\sigma},\mathbb{R}^{2}_{+}}\widetilde\Gamma \Big]
\geq -C\delta(h)^{-1}{\rm tr}\Big[ \mathcal{P}^{\alpha,\gamma_{\rm error}}_{h,B_{\sigma},\mathbb{R}^{2}_{+}}\Big]_{-}\\
&\geq -C B_{\sigma}^{-1/2}h\delta^{-1}(1-\varepsilon)^{-3}\eta_0^{-3}\eta^{-3}\int_{\mathbb{R}}|\gamma(s)-\gamma_{a}(s)|^3ds \\
&\geq -C B_{\sigma}^{-1/2}h\delta^{-1}(1-\varepsilon)^{-3}\eta_0^{-3}\eta^{-3}\|\gamma-\gamma_a\|_3^3\,.
\end{aligned}
\end{equation}
Let us make the following choice of the parameter $\delta$ and
$\varepsilon$,
\begin{equation}\label{eq:delta-eta}
\delta=\eta^{-3/4}h^{1/2},\qquad \varepsilon=h^{1/4}\,.
\end{equation}
Integrating \eqref{Eq-FErr} with respect to
$\sigma\in(-\delta(h),|\partial\Omega|)$, we conclude that,
\begin{equation}\label{Error-energy}
\begin{aligned}
\eta_0\eta(1-\varepsilon) \int R_{h,\eta,\sigma,a}^{(2)}(\widetilde v_{j,h,\sigma})d\sigma&\geq -C B_{\sigma}^{-1/2}h\delta^{-1}
(1-\varepsilon)^{-2}\eta_0^{-2}\eta^{-2}\|\gamma-\gamma_a\|_3^3\\
&=\mathcal{O}\big(\,\eta_0^{-5/4}\eta^{-5/4}h^{1/2}\,\big)\|\gamma-\gamma_a\|_3^3.
\end{aligned}
\end{equation}
We estimate $R_{h,\eta,\sigma}^{(1)}(\widetilde v_{j,h,\sigma})$ using the
variational principle in Lemma~\ref{lem-VP-3} and the rough bound in
the cylinder in Lemma~\ref{energy}. Indeed, we have
\begin{multline*}
\Bigg\|\dfrac{2\eta^{-1}(1+C\delta(h))\widetilde{\mathcal{W}}_{h}}{(1-\varepsilon)(1-\eta_0)}-
\lambda
h\left\{\left(1-\frac1{1-\eta_0}\right)-\frac{1}{1-\eta_0}\left(1-\frac{1}{1-\varepsilon}\right)\right\}\\
+\dfrac{2C\eta^{-1}\lambda
h\delta(h)+C\varepsilon^{-1}\delta(h)^{4}}{(1-\varepsilon)(1-\eta_0)}
\Bigg\|_{L^{\infty}} \leq \vartheta B_{\sigma}h\,,
\end{multline*}
where $\vartheta=\mathcal O(\eta)+\mathcal O(\eta_0)+o(1)$. We may
select $\eta$ and $\eta_0$ sufficiently small such that
$\vartheta<\lambda_0$, where $\lambda_0$ is the constant in
Lemma~\ref{energy}. That way, we may apply Lemma~\ref{energy}.
First, we write by the variational principle,
\begin{equation}
R_{h,\eta,\sigma}^{(1)}(\widetilde v_{j,h,\sigma}) \geq {\rm Tr}\Big[\Big(
\mathcal{P}_{h,B_{\sigma},\mathbb{R}^{2}_{+}}^{\alpha,0} -
B_{\sigma}h(1+\vartheta)\Big)\widetilde\Gamma \Big] \geq -C
\delta(h)^{-1}\mathcal{E}(\vartheta,B_{\sigma},
\delta(h),\delta(h))\,.
\end{equation}
Applying Lemma \ref{energy} and integrating with respect to $\sigma\in(-\delta(h), |\partial\Omega|)$, we arrive at
\begin{equation}\label{Error-energy*}
\eta(1-\varepsilon) \int R_{h,\eta,\sigma}^{(1)}(\widetilde v_{j,h,\sigma})d\sigma\geq -C \eta\delta(h)= -C\eta^{1/4}\, h^{1/2}.
\end{equation}
Collecting the estimates in \eqref{Error-energy},
\eqref{Error-energy*} and \eqref{dec-eta}, we get,
\begin{multline}\label{eq:lb-con}
\sum_{j=1}^{N}\bigg\{\mathcal{Q}_{h,\Omega}^{\alpha,\gamma}(v_{j,h}(x;\sigma))- \int_{\Omega}(\lambda h + \mathcal{W}_{h})|v_{j,h}(x;\sigma)|^{2}dx\bigg\}\\
\geq (1-\eta)(1-\varepsilon)\sum_{j=1}^{N}\bigg[ \int_{\mathbb{R}^{2}_{+}}|(-ih\nabla +A_{\sigma})\widetilde v_{j,h,\sigma}|^{2}dsdt\\
+h^{3/2}\int_{\mathbb{R}}{\gamma}_{a,\sigma}|\widetilde
v_{j,h,\sigma}(s,0)|^{2}ds- \lambda h \norm{\widetilde v_{j,h,\sigma}}^{2}_{L^{2}(\mathbb{R}^{2}_{+})} \bigg]
-C\Big(\,\eta^{-5/4}\|\gamma-\gamma_a\|_3^3+\eta^{1/4}\,\Big)
h^{1/2}.
\end{multline}
The constant $C$ in the remainder term depends on the fixed
parameter $\eta_0$, but is independent of the other parameters. Notice
that the choice of $\delta$ and $\varepsilon$ in
\eqref{eq:delta-eta} makes the error in \eqref{eq:lb-blk} of the
order $\mathcal O(\sqrt{\eta}\,h^{1/2})$. Thus, collecting
\eqref{eq:lb-con}, \eqref{eq:lb-blk} and \eqref{IMS-2}, we get by
the variational principle in Lemma~\ref{lem-VP-2},
\begin{equation}\label{eq:lb-con'}
\begin{aligned}
-E(\lambda;h,\gamma,\alpha)\geq& (1-\eta)(1-\varepsilon)
\sum_{j=1}^{N}\int_{\mathbb{R}}\left[ \mathcal{Q}_{h,B_{\sigma},\mathbb{R}^{2}_{+}}^{\alpha ,{\gamma}_{a,\sigma}}(\widetilde v_{j,h,\sigma})- \lambda h \norm{\widetilde v_{j,h,\sigma}}^{2}_{L^{2}(\mathbb{R}^{2}_{+})} \right]d\sigma\\
&-C\Big(\,\eta^{-5/4}\|\gamma-\gamma_a\|_3^3+\eta^{1/4}\,\Big)
h^{1/2}.
\end{aligned}
\end{equation}
Here $\mathcal{Q}_{h,B_{\sigma},\mathbb{R}^{2}_{+}}^{\alpha,{\gamma}_{a,\sigma}}$ is the quadratic form associated to the operator in \eqref{Op-hs}.
\subsection{The leading order term}
Here we continue to handle the case $\alpha=\frac12$. We are going
to estimate the leading term in \eqref{eq:lb-con'}, i.e.
\[
\sum_{j=1}^{N}\int_{\mathbb{R}}\left[ \mathcal{Q}_{h,B_{\sigma},\mathbb{R}^{2}_{+}}^{\alpha, {\gamma}_{a,\sigma}}(\widetilde v_{j,h,\sigma})- \lambda h \norm{\widetilde v_{j,h,\sigma}}^{2}_{L^{2}(\mathbb{R}^{2}_{+})} \right]d\sigma.
\]
Here $\gamma_{a,\sigma}$ is the constant introduced in
\eqref{eq:new-gamma}. Let
$$\widetilde\gamma_{h,\sigma}=h^{\alpha-1/2}B_\sigma^{-1/2}\gamma_{a,\sigma}=B_\sigma^{-1/2}\gamma_{a,\sigma}\,.$$
Recall the definition of the eigenprojector
$\Pi_p(h,B_\sigma;\widetilde\gamma_{h,\sigma},\xi)$ in
\eqref{U-pro}.
By Lemma~\ref{Egpro}, we have,
\begin{equation}
2\pi\Sum{j=1}{N}\mathcal{Q}^{\alpha,\gamma_{a,\sigma}} _{h,B_{\sigma},\mathbb{R}^{2}_{+}}(\widetilde v_{j,h,\sigma})
=hB_{\sigma}\Sum{j=1}{N}\Sum{p=1}{\infty}\int_{\mathbb{R}}\mu_{p}(\widetilde\gamma_{h,\sigma}\,,\,\xi)
\Big \langle \Pi_{p}(h,B_\sigma;\widetilde\gamma_{h,{\sigma}},\xi) \widetilde v_{j,h,\sigma},
\widetilde v_{j,h,\sigma}\Big\rangle d\xi\,.
\end{equation}
Thus,
\begin{multline}\label{Eq-eig}
2\pi \sum_{j=1}^{N}\Big\{
\mathcal{Q}^{\alpha,\gamma_{a,\sigma}}
_{h,B_{\sigma},\mathbb{R}^{2}_{+}}(\widetilde v_{j,h,\sigma})
-\lambda h \int_{\mathbb{R}^{2}_{+}}|\widetilde v_{j,h,\sigma}|^{2}dsdt \Big\}\\
\geq
-hB_{\sigma}\sum_{p=1}^{\infty}\int_{\mathbb{R}}\Big(\mu_{p}(\widetilde\gamma_{h,\sigma}\,,\,\xi)-\frac{\lambda}{B_{\sigma}}\Big)_{-}
\sum_{j=1}^{N}\Big\langle
\Pi_{p}(h,B_\sigma;\widetilde\gamma_{h,{\sigma}},\xi) \widetilde
v_{j,h,\sigma}, \widetilde v_{j,h,\sigma}\Big\rangle d\xi.
\end{multline}
From the definition of
$\Pi_{p}(h,B_\sigma;\widetilde\gamma_{h,{\sigma}}\,,\,\xi)$ and the identity \eqref{unw}, it follows that
\begin{equation}\label{Eq-OS}
\begin{aligned}
\Big\langle &\Pi_{p}(h,B_\sigma;\widetilde\gamma_{h,{\sigma}},\xi) \widetilde v_{j,h,\sigma}, \widetilde v_{j,h,\sigma}\Big\rangle_{L^2(\mathbb{R}^2_{+})}\\
&=\frac{B_{\sigma}}{h}
\Big|\big\langle \widetilde v_{j,h,\sigma},e^{-is\xi (h^{-1}B_{\sigma})^{1/2} }u_{p,\widetilde\gamma_{h,{\sigma}}}((h^{-1}B_{\sigma})^{1/2}t;\xi ) \big\rangle_{L^2(\mathbb{R}^2_{+})}\Big|^{2}\\
&\leq (1+C\delta(h))\frac{B_{\sigma}}{h}
\Big|\Big\langle g_{j}, \zeta_{1,h}\psi_{h}(x;\sigma)
U_{\Phi}^{-1}\Big(e^{-is\xi (h^{-1}B_{\sigma})^{1/2} }u_{p,\widetilde\gamma_{h,{\sigma}}}((h^{-1}B_{\sigma})^{1/2}t;\xi )\Big) \Big\rangle_{L^{2}(\Omega)}\Big|^{2}\,,
\end{aligned}
\end{equation}
where the transformation $U^{-1}_{\Phi}:\widetilde u\mapsto u$ is
associated with the coordinate transform $\Phi_{t_{0}}$ introduced
in \eqref{BC}. Next, since $\{g_{j}\}_{j=1}^{N}$ is an orthonormal
system in $L^{2}(\Omega)$, we have
\begin{multline}\label{Eq-OS_1}
\sum_{j=1}^{N}\Big|\Big\langle g_{j}, \zeta_{1,h}\psi_{h}(x;\sigma) U_{\Phi}^{-1}\Big(e^{-is\xi (h^{-1}B_{\sigma})^{1/2} }u_{p,\widetilde\gamma_{h,{\sigma}}}((h^{-1}B_{\sigma})^{1/2}t;\xi )\Big) \Big\rangle_{L^{2}(\Omega)}\Big|^{2}\\
\leq \norm{\zeta_{1,h}\psi_{h}(x;\sigma) U_{\Phi}^{-1}\Big(e^{-is\xi
(h^{-1}B_{\sigma})^{1/2}
}u_{p,\widetilde\gamma_{h,{\sigma}}}((h^{-1}B_{\sigma})^{1/2}t;\xi
)\Big)}^{2}_{L^{2}(\Omega)}\,.
\end{multline}
Putting \eqref{Eq-OS} and \eqref{Eq-OS_1} together, we get
\begin{equation}\label{eq:Pi}
\begin{aligned}
0\leq
&\sum_{j=1}^{N}\Big\langle \Pi_{p}(h,B_\sigma;\widetilde\gamma_{h,{\sigma}},\xi)\widetilde v_{j,h,\sigma}, \widetilde v_{j,h,\sigma}\Big\rangle_{L^{2}(\mathbb{R}^{2}_{+})} \\
&\leq (1+C\delta(h))
\frac{B_{\sigma}}{h}\norm{\zeta_{1,h}\psi_{h}(\sigma)
U_{\Phi}^{-1}\Big(e^{-is\xi (h^{-1}B_{\sigma})^{1/2} }u_{p,\widetilde\gamma_{h,{\sigma}}}((h^{-1}B_{\sigma})^{1/2}t;\xi )\Big)}^{2}_{L^{2}(\Omega)}\\
&\leq (1+C\delta(h))\dfrac{B_{\sigma}}{h}
\Int{\mathbb{R}^{2}_{+}}{} (1-tk(s))
|\psi_{h}(s;\sigma)|^{2}|\zeta_{1,h}(t)|^{2}|u_{p,\widetilde\gamma_{h,{\sigma}}}(h^{-1/2} B_{\sigma}^{1/2}t;\xi)|^{2}dsdt\\
&\leq (1+C\delta(h))\dfrac{B_{\sigma}}{h}
\Int{\mathbb{R}}{} |\psi_{h}(s;\sigma)|^{2}ds \Int{\mathbb{R}_{+}}{}|u_{p,\widetilde\gamma_{h,{\sigma}}}(h^{-1/2} B_{\sigma}^{1/2}t;\xi)|^{2} dt\\
&= (1+C\delta(h))h^{-1/2} B_{\sigma}^{1/2}.
\end{aligned}
\end{equation}
Inserting this into \eqref{Eq-eig}, we find
\begin{multline}\label{Eq-eig-2}
2\pi \sum_{j=1}^{N}\Big\{ \mathcal{Q}^{\alpha,\gamma_{a,\sigma}} _{h,B_{\sigma},\mathbb{R}^{2}_{+}}(\widetilde v_{j,h,\sigma})
-\lambda h \int_{\mathbb{R}^{2}_{+}}|\widetilde v_{j,h,\sigma}|^{2}dsdt \Big\}\\
\geq
-h^{1/2}B_{\sigma}^{3/2}\sum_{p=1}^{\infty}\int_{\mathbb{R}}\Big(\mu_{p}(\widetilde\gamma_{h,{\sigma}}\,,\,\xi)-\frac{\lambda}{B_{\sigma}}\Big)_{-}d\xi-\mathcal
O(h^{1/2}\delta(h)).
\end{multline}
Fixing $a$ and $\eta$, we have,
$\widetilde{\gamma}_{h,\sigma}\rightarrow
B_{\sigma}^{-1/2}\dfrac{\gamma_a(\sigma)}{1-\eta}$ as $h\rightarrow 0$. It results
from Lemma~\ref{lim-int} that, as $h\to0$,
\begin{equation}
\sum_{p=1}^{\infty}\int_{\mathbb{R}}\Big(\mu_{p}(\widetilde\gamma_{h,{\sigma}}\,,\,\xi)-\frac{\lambda}{B_{\sigma}}\Big)_{-}d\xi\to
\sum_{p=1}^{\infty}\int_{\mathbb{R}}\Big(\mu_{p}\left(B_{\sigma}^{-1/2}\frac{\gamma_a(\sigma)}{1-\eta} ,\xi\right)
-\frac{\lambda}{B_{\sigma}}\Big)_{-}d\xi.
\end{equation}
Since the function $\gamma_a$ is smooth and bounded (for every
fixed $a$), then by dominated convergence,
$$\int_{0}^{|\partial\Omega|}
\sum_{p=1}^{\infty}\int_{\mathbb{R}}\Big(\mu_{p}(\widetilde\gamma_{h,{\sigma}}\,,\,\xi)-\frac{\lambda}{B_{\sigma}}\Big)_{-}d\xi\,d\sigma
\to \int_{0}^{|\partial\Omega|}
\sum_{p=1}^{\infty}\int_{\mathbb{R}}\Big(\mu_{p}\left(B_{\sigma}^{-1/2}\frac{\gamma_a(\sigma)}{1-\eta} ,\xi\right)
-\frac{\lambda}{B_{\sigma}}\Big)_{-}d\xi d\sigma.$$
Inserting this and \eqref{Eq-eig-2} into \eqref{eq:lb-con'}, we get,
\begin{equation}\label{Eq-eig-3}
\begin{aligned}
\liminf_{h\to0}&\Big(-2\pi h^{-1/2}
E(\lambda;h,\gamma,\alpha)\Big)\\
&\geq
(1-\eta)\sum_{p=1}^{\infty}\int_{\partial\Omega}\int_{\mathbb{R}}B(x)^{3/2}\Big(\mu_{p}\left(B(x)^{-1/2}\frac{\gamma_a(x)}{1-\eta},\xi\right)-\frac{\lambda}{B(x)}\Big)_{-}d\xi
ds(x)\\
&\quad-C\Big(\,\eta^{-5/4}\|\gamma-\gamma_a\|_3^2+\eta^{1/4}\,\Big).
\end{aligned}
\end{equation}
Taking successively $\liminf_{a\to0_+}$ then
$\liminf_{\eta\to0_+}$, we arrive at,
\begin{multline}\label{Eq-eig-4}
\liminf_{h\to0}\Big(-2\pi h^{-1/2}
E(\lambda;h,\gamma,\alpha)\Big)\geq\\
\liminf_{\eta\to0_+}\left\{\liminf_{a\to0_+}
\sum_{p=1}^{\infty}\int_{\partial\Omega}\int_{\mathbb{R}}B(x)^{3/2}\Big(\mu_{p}\left(B(x)^{-1/2}\frac{\gamma_a(x)}{1-\eta},\xi\right)-\frac{\lambda}{B(x)}\Big)_{-}d\xi
ds(x)\right\}\,.
\end{multline}
If $\gamma\in L^\infty(\partial\Omega)$, then
$\|\gamma_a\|_\infty\leq \|\gamma\|_\infty$ and by dominated
convergence, the right side in \eqref{Eq-eig-4} is
$$\sum_{p=1}^{\infty}\int_{\partial\Omega}\int_{\mathbb{R}}B(x)^{3/2}\Big(\mu_{p}\left(B(x)^{-1/2}\gamma(x),\xi\right)-\frac{\lambda}{B(x)}\Big)_{-}d\xi
ds(x)\,.$$ Therefore, when $\gamma\in L^\infty(\partial\Omega)$ and $\alpha=\frac12$, we have the lower bound,
\begin{equation}\label{lb-thm2}
\liminf_{h\to0}\Big(-2\pi h^{-1/2}
E(\lambda;h,\gamma,\alpha)\Big)\geq\sum_{p=1}^{\infty}\int_{\partial\Omega}\int_{\mathbb{R}}B(x)^{3/2}\Big(\mu_{p}\left(B(x)^{-1/2}\gamma(x),\xi\right)-\frac{\lambda}{B(x)}\Big)_{-}d\xi
ds(x)\,.
\end{equation}
\section{Upper bound}
Let $\sigma\in[0,|\partial\Omega|)$, $\phi=\phi_{\sigma}$ be the
gauge from Proposition~\ref{prop:gauge}, $\zeta_{1,h}$ and
$\psi_{h}$ the functions from \eqref{Def-zetakh} and
\eqref{Def-psih} respectively. Let furthermore $\Phi=\Phi_{t_0}$ be
the coordinate transformation near the boundary given in \eqref{BC},
$B_{\sigma}=B(\Phi(\sigma,0))$ and $\breve{\gamma}_{h,\sigma}$ the
number introduced below in \eqref{def:gamma-tilde}.
Let $\xi\in \mathbb{R}$. If $\alpha=1/2$, we define the function
\begin{align*}
\widetilde{f}_{p,1/2}((s,t);h,\sigma,\xi)
:=B_{\sigma}^{1/4}h^{-1/4}
e^{-i\xi s\sqrt{B_{\sigma}/h}}
u_{p,\breve{\gamma}_{h,\sigma}}\big( B_{\sigma}^{1/2}h^{-1/2}t;\xi\big)e^{-i\phi_{\sigma}/h}\psi_{h}(s;\sigma)\zeta_{1,h}(t)\,,
\end{align*}
where $u_{p,\breve\gamma_{h,\sigma}}(\cdot;\xi)$ is the function from \eqref{Egv-HO}, and if $\alpha>1/2$, we define
\begin{align*}
\widetilde{f}_{p,\alpha}((s,t);h,\sigma,\xi):=B_{\sigma}^{1/4}h^{-1/4}e^{-i\xi s\sqrt{B_{\sigma}/h}}u_{p,0}\big( B_{\sigma}^{1/2}h^{-1/2}t;\xi\big)e^{-i\phi_{\sigma}/h}\psi_{h}(s;\sigma)\zeta_{1,h}(t)\,.
\end{align*}
Recall the coordinate transformation $\Phi$ valid in a
neighborhood of the point $x$ (see Subsection~\ref{Sec:BC}), and let
$x=\Phi^{-1}(y)$. We define $f_{p}(x;h,\sigma,\xi):={ \widetilde
f_{p}}((s,t);h,\sigma,\xi)$ by means of \eqref{tilde}. Let $K>0$. If
$\alpha=1/2$, we set,
\begin{equation}\label{eq:M}
M_{1/2}(h,\sigma,\xi,p,K)={{\bf 1}}_{\{(\sigma,\xi,p)\in[0,|\partial\Omega|)\times\mathbb{R}\times\mathbb{N}~:~ \frac{\lambda}{B_{\sigma}}-\mu_{p}(\breve\gamma_{h,\sigma},\xi)\geq 0,~|\xi|\leq K\}},
\end{equation}
and if $\alpha>1/2$,
\begin{equation}\label{eq:M;alpha}
M_{\alpha}(h,\sigma,\xi,p,K)={{\bf 1}}_{\{(\sigma,\xi,p)\in[0,|\partial\Omega|)\times\mathbb{R}\times\mathbb{N}~:~ p=1,\quad\frac{\lambda}{B_{\sigma}}-\mu_{1}(0,\xi)\geq 0,~|\xi|\leq K\}}\,.
\end{equation}
Since the calculations for the regimes $\alpha=1/2$ and
$\alpha>1/2$ will be performed independently, we simplify the
notation and drop the subscript $\alpha$ in the calculations below,
writing $M$ and $f_{p}$ instead of $M_{\alpha}$ and $f_{p,\alpha}$.
Let $f\in L^{2}(\Omega)$. We introduce
\begin{equation}\label{eq:Gamma}
(\Gamma f )(x)=(2\pi)^{-1}h^{-1/2}\iint B_{\sigma}^{1/2}\sum_{p=1}^{\infty}M(h,\sigma,\xi,p,K)\langle f_{p}(\cdot; h,\sigma,\xi), f \rangle f_{p}(x;h,\sigma,\xi)d\sigma d\xi\,.
\end{equation}
In Lemma~\ref{Pro-dm} below, we will prove that $\Gamma$ satisfies
the density matrix condition, namely $0\leq \Gamma\leq
1+o(1)$. By the variational principle in Lemma~\ref{lem-VP-3}, an
upper bound of the sum of eigenvalues of
$\mathcal{P}_{h,\Omega}^{\alpha,\gamma}$ below $\lambda h$ follows
if we can prove an upper bound on
\begin{multline}\label{Eq:deftr}
{\rm tr}\big[(\mathcal{P}_{h,\Omega}^{\alpha,\gamma}-\lambda h){\Gamma}\big]\\
=(2\pi)^{-1}h^{-1/2}\iint\sum_{p=1}^{\infty}B_{\sigma}^{1/2}M(h,\sigma,\xi,p,K)\Big(\mathcal{Q}_{h,\Omega}^{\alpha,\gamma}(
f_{p}(x;h,\sigma,\xi))-\lambda h\norm{
f_{p}(x;h,\sigma,\xi)}^{2}\Big)d\sigma d\xi\,,
\end{multline}
where $\mathcal{Q}_{h,\Omega}^{\alpha,\gamma}$ is the quadratic form introduced in \eqref{QF-Gen}.
We will then estimate the quantity in \eqref{Eq:deftr} in the cases
$\alpha=1/2$ and $\alpha>1/2$ independently.
\subsection*{The regime $\alpha>1/2$}
In this subsection, we suppose that $\alpha>1/2$. We see in
\eqref{eq:M;alpha} that the definition of $M$ involves the first
eigenvalue $\mu_1(\cdot,\cdot)$ only. Consequently, the summation in
the definition of $\Gamma$ is restricted to the first term
corresponding to $p=1$. We observe that
\begin{equation}\label{Eq:QFalpha>1/2}
\mathcal{Q}_{h,\Omega}^{\alpha,\gamma}(f_{1}(x;h,\sigma,\xi))
=\mathcal{Q}_{h,\Omega}^{\alpha,0}(f_{1}(x;h,\sigma,\xi))+h^{1+\alpha}\int_{\mathbb{R}}\gamma(s)|\widetilde f_{1}((s,0);h,\sigma,\xi)|^2 ds.\\
\end{equation}
Easy computations lead to
\[
\int_{\mathbb{R}}\gamma(s)|\widetilde f_{1}((s,0);h,\sigma,\xi)|^2 ds\leq B_{\sigma}^{1/2}h^{-1/2}|u_{1,0}(0,\xi)|^{2}\int_{\mathbb{R}} (\gamma(s))_{+}|\psi_{h}(s;\sigma)|^{2}ds\,.
\]
Inserting this into \eqref{Eq:QFalpha>1/2}, we obtain
\begin{multline}
\mathcal{Q}_{h,\Omega}^{\alpha,\gamma}(f_{1}(x;h,\sigma,\xi))
\leq \mathcal{Q}_{h,\Omega}^{\alpha,0}(f_{1}(x;h,\sigma,\xi))+B_{\sigma}^{1/2}h^{\alpha+1/2}|u_{1,0}(0,\xi)|^{2}\int_{\mathbb{R}} (\gamma(s))_{+}|\psi_{h}(s;\sigma)|^{2}ds.\\
\end{multline}
Now, we compute,
\begin{equation}\label{Eq:qf>1/2}
\begin{aligned}
&{\rm tr}[(\mathcal{P}_{h,\Omega}^{\alpha,\gamma}-\lambda h)\Gamma]\\
&\quad
= \iint (2\pi)^{-1}B_{\sigma}^{1/2}h^{-1/2}M(h,\sigma,\xi,p=1,K)\Big\{\mathcal{Q}_{h,\Omega}^{\alpha,\gamma}(f_{1}(x;h,\sigma,\xi))
-\lambda h\norm{f_{1}(x;h,\sigma,\xi)}^{2} \Big\}{d\sigma d\xi}\\
&\quad\leq \Int{-K}{K}\Int{0}{|\partial\Omega|} (2\pi)^{-1}B_{\sigma}^{1/2}h^{-1/2}
\Big\{\mathcal{Q}_{h,\Omega}^{\alpha,0}(f_{1}(x;h,\sigma,\xi)) -\lambda h \norm{f_{1}(x;h,\sigma,\xi)}^{2}\\
&\qquad\qquad\qquad+ h^{\alpha+1/2}B_{\sigma}^{1/2}|u_{1,0}\big(0;\xi\big)|^2 \int_{\mathbb{R}}(\gamma(s))_{+}|\psi_{h}(s;\sigma)|^2ds \Big\}{d\sigma d\xi}\\
&\quad\leq
\Int{-K}{K}\Int{0}{|\partial\Omega|} (2\pi)^{-1}B_{\sigma}^{1/2}h^{-1/2}\Big\{\mathcal{Q}_{h,\Omega}^{\alpha,0}(f_{1}(x;h,\sigma,\xi))
-\lambda h \norm{f_{1}(x;h,\sigma,\xi)}^{2} \Big\}{d\sigma d\xi}\\
&\qquad\qquad\qquad + (2\pi)^{-1} h^{\alpha}\norm{B}_{L^{\infty}(\partial\Omega)}\int_{-K}^{K}|u_{1,0}\big(0;\xi\big)|^2 \int_{\mathbb{R}}\int_{0}^{|\partial\Omega|}(\gamma(s))_{+}|\psi_{h}(s;\sigma)|^2ds {d\sigma d\xi}\\
\end{aligned}
\end{equation}
Using that
$\int_{0}^{|\partial\Omega|}\psi_{h}^{2}(s;\sigma)d\sigma=1$ and
taking into account the regularity of the function
$\xi\mapsto|u_{1,0}(0,\xi)|^2$, the second term on the right-hand
side of \eqref{Eq:qf>1/2} is estimated from above by
\[
(2\pi)^{-1} h^{\alpha}2K\norm{B}_{L^{\infty}(\partial\Omega)}\sup_{\xi\in[-K,K]}|u_{1,0}\big(0;\xi\big)|^2 \norm{\gamma}_{L^1(\partial\Omega)}\,,
\]
which is $o(h^{1/2})$ for fixed $K$. Also, by
\cite[Proof of (5.37)]{Fo-Ka}, the first term on the right-hand side of \eqref{Eq:qf>1/2} is bounded from above by,
\begin{equation*}
-\frac{h^{1/2}}{2\pi}\int_{\partial\Omega}\int_{-K}^{K}B(x)^{3/2}\Big(\mu_{1}(0,\xi)-\frac{\lambda}{B(x)}\Big)_{-}d\xi
ds(x)-h^{1/2}o(1)\,.
\end{equation*}
Thus, taking the successive limits $\limsup_{h\rightarrow 0^{+}}$
and $\lim_{K\rightarrow\infty}$, we obtain,
\begin{equation}\label{lb-thm0}
\limsup_{h\rightarrow 0}\Big(-h^{-1/2}E(\lambda;h,\gamma,\alpha)\Big)\leq
-\frac{1}{2\pi}\int_{\partial\Omega}\int_{\mathbb{R}}B(x)^{3/2}\Big(\mu_{1}(0,\xi)-\frac{\lambda}{B(x)}\Big)_{-}d\xi
ds(x)\,,
\end{equation}
which gives the desired upper bound when $\alpha>1/2$.
\subsection*{The regime $\alpha=1/2$}
In this subsection, we treat the harder case $\alpha=1/2$. Here,
the definition of $M=M_{1/2}$ in \eqref{eq:M} involves the quantity,
\begin{equation}\label{def:gamma-tilde}
\breve{\gamma}_{h,\sigma}= \dfrac{B_{\sigma}^{-1/2}( \gamma_{a}(\sigma)+C a^{-2}\delta(h))}{1+\varepsilon}\,.
\end{equation}
In the definition of $\breve{\gamma}_{h,\sigma}$, $a\in(0,1)$ and
$\varepsilon\in(0,1)$ are fixed parameters, and $\gamma_{a}$ is the
function introduced in \eqref{gamma:a}. Recall that, as $a\to0_+$,
$\gamma_a\to\gamma$ in $L^3(\partial\Omega)$.
We start by computing, for all $p\geq 1$,
\begin{equation}\label{norm:fj}
\begin{aligned}
\int_{\Omega}|f_{p}(x;h,\sigma,\xi)|^2dx
&=\int_0^{\infty}\int_0^{|\partial\Omega|}|\widetilde{f}_{p}((s,t);h,\sigma,\xi)|^2(1-tk(s))dsdt\\
&\quad\leq (1+\delta(h)\norm{k}_{\infty}) B_{\sigma}^{1/2}h^{-1/2}\\
&\qquad\times
\int_0^{\delta(h)}\int_0^{|\partial\Omega|}|u_{p,\breve\gamma_{h,\sigma}}(B_{\sigma}^{1/2}h^{-1/2}t;\xi)|^2|\zeta_{1,h}(t)|^2|\psi_{h}(s;\sigma)|^2dsdt\\
&\quad\leq (1+\delta(h)\norm{k}_{\infty}) B_{\sigma}^{1/2}h^{-1/2}
\int_0^{\delta(h)}|u_{p,\breve\gamma_{h,\sigma}}(B_{\sigma}^{1/2}h^{-1/2}t;\xi)|^2dt\\
&\quad\leq (1+\delta(h)\norm{k}_{\infty}),
\end{aligned}
\end{equation}
where we have used that the functions $\psi_{h}$ and
$u_{p,\breve\gamma_{h,\sigma}}$ are normalized in $s$ and $t$
respectively.
Again the normalization of $\psi_{h}$ implies that
\begin{equation}\label{norm:fj:0}
\begin{aligned}
\int_{\mathbb{R}}|\widetilde{f}_{p}((s,0);h,\sigma,\xi)|^2ds&\quad
= B_{\sigma}^{1/2}h^{-1/2}|u_{p,\breve\gamma_{h,\sigma}}(0;\xi)|^2|\zeta_{1,h}(0)|^2
\int_0^{|\partial\Omega|}|\psi_{h}(s,\sigma)|^2ds\\
&\quad\leq B_{\sigma}^{1/2}h^{-1/2}|u_{p,\breve\gamma_{h,\sigma}}(0;\xi)|^2
\int_0^{|\partial\Omega|}|\psi_{h}(s;\sigma)|^2ds\\
&\quad= B_{\sigma}^{1/2}h^{-1/2}|u_{p,\breve\gamma_{h,\sigma}}(0;\xi)|^2.
\end{aligned}
\end{equation}
We also compute
\begin{equation}\label{norm-fk-lb-0}
\begin{aligned}
\int_{\Omega}|f_{p}(x;h,\sigma,\xi)|^2dx
&=\iint|\widetilde{f}_{p}((s,t);h,\sigma,\xi)|^2(1-tk(s))dsdt\\
&\geq (1-\delta(h)\norm{k}_{\infty}) B_{\sigma}^{1/2}h^{-1/2}\times\\
&\int_{0}^{\delta(h)}\int_{0}^{|\partial\Omega|}|u_{p,\breve{\gamma}_{h,\sigma}}(B_{\sigma}^{1/2}h^{-1/2}t;\xi)|^2|\zeta_{1,h}(t)|^2\left|\psi_{h}(s;\sigma)\right|^2dsdt\\
&= (1-\delta(h)\norm{k}_{\infty})B_{\sigma}^{1/2}h^{-1/2}\int_{\mathbb{R}_{+}}|u_{p,\breve\gamma_{h,\sigma}}(B_{\sigma}^{1/2}h^{-1/2}t;\xi)|^2|\zeta_{1,h}(t)|^2dt.
\end{aligned}
\end{equation}
Let us write the last integral as
\begin{multline}
\int_{\mathbb{R}_{+}}|u_{p,\breve\gamma_{h,\sigma}}(B_{\sigma}^{1/2}h^{-1/2}t;\xi)|^2|\zeta_{1,h}(t)|^2dt
=\int_{\mathbb{R}_{+}}|u_{p,\breve\gamma_{h,\sigma}}(B_{\sigma}^{1/2}h^{-1/2}t;\xi)|^2dt\\
+\int_{\mathbb{R}_{+}}|u_{p,\breve\gamma_{h,\sigma}}(B_{\sigma}^{1/2}h^{-1/2}t;\xi)|^2(|\zeta_{1,h}(t)|^2-1)dt\\
=B_{\sigma}^{-1/2}h^{1/2}+ \int_{t\geq
\delta(h)/2}|u_{p,\breve\gamma_{h,\sigma}}(B_{\sigma}^{1/2}h^{-1/2}t;\xi)|^2(|\zeta_{1,h}(t)|^2-1)dt.
\end{multline}
Taking into account the support of $\zeta_{1,h}$, we can write,
\begin{multline}\label{Ag-est}
\int_{\mathbb{R}_{+}}|u_{p,\breve\gamma_{h,\sigma}}(B_{\sigma}^{1/2}h^{-1/2}t;\xi)|^2|\zeta_{1,h}(t)|^2dt\geq B_{\sigma}^{-1/2}h^{1/2}
- \int_{t\geq \delta(h)/2}|u_{p,\breve\gamma_{h,\sigma}}(B_{\sigma}^{1/2}h^{-1/2}t;\xi)|^2dt\\
= B_{\sigma}^{-1/2}h^{1/2}- \int_{t\geq
\delta(h)/2}e^{-\epsilon(B_{\sigma}^{1/2}h^{-1/2}t-\xi)^2/2}e^{\epsilon(B_{\sigma}^{1/2}h^{-1/2}t-\xi)^2/2}
|u_{p,\breve\gamma_{h,\sigma}}(B_{\sigma}^{1/2}h^{-1/2}t;\xi)|^2dt\,.
\end{multline}
In view of the support of $M=M_{1/2}$, we see that $|\xi|\leq K$ and
\[
(B_{\sigma}^{1/2}h^{-1/2}t-\xi)^{2}\geq \big(b^{1/2}h^{-1/2}\frac{\delta(h)}{2}-\xi\big)^{2}\geq \frac{1}{8} bh^{-1}\delta(h)^{2}-2K^2.
\]
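The last inequality follows from the elementary bound
\[
(x-y)^{2}=x^{2}-2xy+y^{2}\geq \frac{x^{2}}{2}-y^{2}\,,\qquad x,y\in\mathbb{R}\,,
\]
applied with $x=b^{1/2}h^{-1/2}\frac{\delta(h)}{2}$ and $y=\xi$, together with $-\xi^{2}\geq-K^{2}\geq-2K^{2}$.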
Implementing this into \eqref{Ag-est} and using the exponential
decay given in \eqref{Decay}, we find that
\begin{multline}\label{Ag-est-1}
\int_{\mathbb{R}_{+}}|u_{p,\breve\gamma_{h,\sigma}}(B_{\sigma}^{1/2}h^{-1/2}t;\xi)|^2|\zeta_{1,h}(t)|^2dt\\
\geq B_{\sigma}^{-1/2}h^{1/2}\Big(1-C_{\epsilon,K}e^{-\epsilon(\frac{1}{8} bh^{-1}\delta(h)^{2}-2K^2)/2}(1+a^{-2}\delta(h)+(a^{-2}\delta(h))^2)\Big).
\end{multline}
In the last step we have used that $\gamma\in L^{\infty}$ together with the definition of $\breve\gamma_{h,\sigma}$ in \eqref{def:gamma-tilde}.
Inserting this into \eqref{norm-fk-lb-0}, we finally obtain
\begin{equation}\label{norm-fk-lb}
\int_{\Omega}|f_{p}(x;h,\sigma,\xi)|^2dx\geq
(1-\delta(h)\norm{k}_{\infty})\big(1-C_{\epsilon,K} e^{-\epsilon(\frac{1}{8} bh^{-1}\delta(h)^{2}-2K^2)/2}(1+a^{-2}\delta(h)+(a^{-2}\delta(h))^2)\big).
\end{equation}
Next we estimate the quadratic form. By Lemma~\ref{Lem-apqf},
we have for all $\varepsilon>0$,
\begin{equation}\label{eq:ub-e}
\begin{aligned}
&\mathcal{Q}_{h,\Omega}^{\alpha,\gamma}(f_{p}(x;h,\sigma,\xi))\\
&\quad
=\int_{\Omega}|(-ih\nabla+A)f_{p}(x;h,\sigma,\xi)|^2dx+h^{3/2}\int_{\partial\Omega}\gamma(x)|f_{p}(x;h,\sigma,\xi)|^2 ds(x)\\
&\quad\leq (1+\varepsilon)\int_{\mathbb{R}^2_{+}}|(-ih\nabla+A_{\sigma})e^{i\phi_{\sigma}/h}\widetilde{f}_{p}((s,t);h,\sigma,\xi)|^2 dsdt\\
&\qquad+ C \varepsilon^{-1}\delta(h)^4\int_{\mathbb{R}^2_{+}}|\widetilde f_{p}((s,t);h,\sigma,\xi)|^2 dsdt +h^{3/2}\int_{\mathbb{R}}\gamma(s)|\widetilde f_{p}((s,0);h,\sigma,\xi)|^2 ds\,.\\
\end{aligned}
\end{equation}
Writing $\gamma=\gamma_{a}+(\gamma-\gamma_{a})$, it follows that
\begin{equation}\label{QF-1}
\begin{aligned}
&\mathcal{Q}_{h,\Omega}^{\alpha,\gamma}(f_{p}(x;h,\sigma,\xi))\\
&\quad \leq (1+\varepsilon)B_{\sigma}^{1/2}h^{-1/2}\int \psi_{h}^2(s,\sigma)
\big| (-ih \nabla+A_{\sigma})e^{-i\xi s\sqrt{B_{\sigma}/h}}u_{p,\breve\gamma_{h,\sigma}}\big( B_{\sigma}^{1/2}h^{-1/2}t;\xi\big)\big|^2 dsdt\\
&\qquad+\Big((1+\varepsilon)Ch^2\delta(h)^{-2}+C \varepsilon^{-1}\delta(h)^4\Big) \int_{\mathbb{R}^2_{+}}|\widetilde f_{p}((s,t);h,\sigma,\xi)|^2 dsdt\\
&\qquad+h^{3/2}\int_{\mathbb{R}}\gamma_{a}(s)|\widetilde f_{p}((s,0);h,\sigma,\xi)|^2 ds\\
&\qquad+hB_{\sigma}^{1/2}|u_{p,\breve\gamma_{h,\sigma}}\big(0;\xi\big)|^2 |\zeta_{1,h}(0)|^2\int_{\mathbb{R}}(\gamma(s)-\gamma_{a}(s))|\psi_{h}(s;\sigma)|^2ds,
\end{aligned}
\end{equation}
where $A_{\sigma}$ is defined in \eqref{Asigma}. Plugging
\eqref{norm:fj} and \eqref{norm:fj:0} into \eqref{QF-1}, and using
\eqref{gamma-gamma:a}, we find
\begin{equation}\label{Eqt:Qfj}
\begin{aligned}
&\mathcal{Q}_{h,\Omega}^{\alpha,\gamma}(f_{p}(x;h,\sigma,\xi))\\
&\quad \leq
(1+\varepsilon)\Big\{B_{\sigma}^{1/2}h^{-1/2}\int_{\mathbb{R}^{2}_{+}}
\Big| (-ih \nabla+A_{\sigma})e^{-i\xi s\sqrt{B_{\sigma}/h}}u_{p,\breve\gamma_{h,\sigma}}\big( B_{\sigma}^{1/2}h^{-1/2}t;\xi\big)\Big|^2\psi_{h}^2(s,\sigma) dsdt \\
&\qquad
+hB_{\sigma}\breve\gamma_{h,\sigma}|u_{p,\breve\gamma_{h,\sigma}}\big( 0;\xi\big)|^2 \Big\}\\
&\qquad+{(1+\norm{k}_{\infty}\delta(h))\Big((1+\varepsilon)Ch^2\delta(h)^{-2}+C \varepsilon^{-1}\delta(h)^4\Big) } \\
&\qquad+ hB_{\sigma}^{1/2}\big|u_{p,\breve\gamma_{h,\sigma}}\big(0;\xi\big)\big|^2 \int_{\mathbb{R}}(\gamma(s)-\gamma_{a}(s))_{+}|\psi_{h}(s;\sigma)|^2ds \\
&\qquad \leq (1+\varepsilon)hB_{\sigma}\mu_{p}(\breve\gamma_{h,\sigma},\xi)
+{(1+\norm{k}_{\infty}\delta(h))\Big((1+\varepsilon)Ch^2\delta(h)^{-2}+C \varepsilon^{-1}\delta(h)^4\Big) }\\
&\qquad+ hB_{\sigma}^{1/2}|u_{p,\breve\gamma_{h,\sigma}}\big(0;\xi\big)|^2 \int_{\mathbb{R}}(\gamma(s)-\gamma_{a}(s))_{+}|\psi_{h}(s;\sigma)|^2ds \,.
\end{aligned}
\end{equation}
Using Lemma~\ref{lem-lim-mu-j} and the fact that $\gamma\in
L^{\infty}$, we infer that the number of indices $p$ appearing in
the support of $M(h,\sigma,\xi,p,K)$ is finite. More precisely,
there exists a constant $p_0\in\mathbb N$ such that, for all
$\sigma>0$, $a\in(0,1)$ and $K>0$, the function $M$ in \eqref{eq:M}
vanishes for all $p> p_0$.
Now, we collect \eqref{Eq:deftr}, \eqref{norm:fj:0},
\eqref{norm-fk-lb} and \eqref{Eqt:Qfj} to obtain
\begin{equation}\label{Eq:tr:alpha=1/2;}
\begin{aligned}
{\rm tr}[(\mathcal{P}_{h,\Omega}^{\alpha,\gamma}-\lambda h)\Gamma]
&\leq (2\pi)^{-1}h^{-1/2}\iint B_{\sigma}^{1/2}\sum_{p=1}^{\infty} \Big\{\mathcal{Q}_{h,\Omega}^{\alpha,\gamma}(f_{p}(x;h,\sigma,\xi))-\lambda h\norm{f_{p}(x;h,\sigma,\xi)}^2 \Big\}d\xi d\sigma\\
&\leq \Sum{p=1}{p_0}\iint (2\pi)^{-1}B_{\sigma}^{1/2}h^{-1/2}M(h,\sigma,\xi,p,K)\Big\{(1+\varepsilon)hB_{\sigma}\mu_{p}(\breve\gamma_{h,\sigma},\xi)\\
&-\lambda h (1-\norm{k}_{\infty}\delta(h))\big(1-C_{\epsilon,K} e^{-\epsilon(\frac{1}{8} bh^{-1}\delta(h)^{2}-2K^2)/2}(1+a^{-2}\delta(h)+(a^{-2}\delta(h))^2)\big)\\
&\qquad\qquad +(1+\norm{k}_{\infty}\delta(h))\big((1+\varepsilon)C{h^{2}}{\delta(h)^{-2}}+C\varepsilon^{-1}\delta(h)^{4}\big)\\
&\qquad\qquad + hB_{\sigma}^{1/2}|u_{p,\breve\gamma_{h,\sigma}}\big(0;\xi\big)|^2 \int_{\mathbb{R}}(\gamma(s)-\gamma_{a}(s))_{+}|\psi_{h}(s;\sigma)|^2ds \Big\}{d\sigma d\xi}\,.
\end{aligned}
\end{equation}
We may arrange the terms in \eqref{Eq:tr:alpha=1/2;} to obtain,
\begin{equation}\label{Eq:tr:alpha=1/2}
\begin{aligned}
&{\rm tr}[(\mathcal{P}_{h,\Omega}^{\alpha,\gamma}-\lambda h)\Gamma]
\quad\leq -\Sum{p=1}{p_0}
\Int{-K}{K}\Int{0}{|\partial\Omega|} (2\pi)^{-1}B_{\sigma}^{1/2}h^{-1/2}\Big(hB_{\sigma}\mu_{p}(\breve\gamma_{h,\sigma},\xi)-\lambda h\Big)_{-} {d\sigma d\xi}\\
&\qquad + R_1+R_2+R_3\,,
\end{aligned}
\end{equation}
where
\begin{multline}
R_1=
p_{0}\,h|\partial\Omega|\,2K{\norm{B}}^{3/2}_{L^{\infty}(\partial\Omega)}(2\pi)^{-1}h^{-1/2}\times\\
\Big(\varepsilon+\norm{k}_{\infty}\delta(h)+ C_{\epsilon,K} e^{-\epsilon(\frac{1}{8} bh^{-1}\delta(h)^{2}-2K^2)/2}(1+a^{-2}\delta(h)+(a^{-2}\delta(h))^2)\\-\norm{k}_{\infty}\delta(h) C_{\epsilon,K} e^{-\epsilon(\frac{1}{8} bh^{-1}\delta(h)^{2}-2K^2)/2}(1+a^{-2}\delta(h)+(a^{-2}\delta(h))^2)\Big)\,,\label{eq:R1}
\end{multline}
\begin{equation}
R_2=p_{0}\,|\partial\Omega|\,2K{\norm{B}}^{1/2}_{L^{\infty}(\partial\Omega)}(2\pi)^{-1}h^{-1/2}(1+\norm{k}_{\infty}\delta(h))
\Big(C(1+\varepsilon){h^{2}}{\delta(h)^{-2}}+C\varepsilon^{-1}\delta(h)^{4} \Big)
\,,\label{eq:R2}
\end{equation}
and
\begin{equation}\label{eq:R3}
R_3=
(2\pi)^{-1} h^{1/2}{\norm{B}}_{L^{\infty}(\partial\Omega)}\sum_{p=1}^{p_0}
\int_{-K}^{K}\int_{0}^{|\partial\Omega|}\int_{\mathbb{R}}(\gamma(s)-\gamma_{a}(s))_{+}|\psi_{h}(s;\sigma)|^2|u_{p,\breve\gamma_{h,\sigma}}(0;\xi)|^2 ds d\sigma d\xi\,.
\end{equation}
Choosing $\delta(h)=h^{3/8}$ and $\varepsilon=h^{1/4}$, we see that,
for fixed $a$ and $K$,
\begin{equation}\label{eq:R1+R2}
R_1+R_2=o(h^{1/2})\,,
\end{equation} and
$$|R_3|\leq (2\pi)^{-1} 2K h^{1/2}
{\norm{B}}_{L^{\infty}(\partial\Omega)}\norm{\gamma-\gamma_{a}}_{L^{1}(\partial\Omega)}\sum_{p=1}^{p_0}\sup_{\xi\in[-K,K]}|u_{p,\breve\gamma_{h,\sigma}}(0;\xi)|^2\,.$$
The term $|u_{p,\breve\gamma_{h,\sigma}}(0;\xi)|^2$ is controlled by
the estimate in Lemma~\ref{Lem:|u0|^2}. Taking into account the
condition of the support of $M=M_{1/2}$ in \eqref{eq:M}, we observe
that,
$$|u_{p,\breve\gamma_{h,\sigma}}(0;\xi)|^2\leq C\Big(\mu_p(\breve\gamma_{h,\sigma};\xi)+\breve\gamma_{h,\sigma}^2 +1 \Big)\leq C(2+\breve\gamma_{h,\sigma}^2).$$
It follows from the definition of $\breve\gamma_{h,\sigma}$ in
\eqref{def:gamma-tilde} that, when $h$ and $\sigma$ vary and $K$,
$a$ and $p$ remain fixed,
$$\sup_{\xi\in[-K,K]}|u_{p,\breve\gamma_{h,\sigma}}(0;\xi)|^2\leq C\left(1+(a^{-2}\delta(h))^2\right)\,,$$
thereby giving us that,
\begin{equation}\label{eq:R3'}
|R_3|\leq
2CKh^{1/2}\left(1+(a^{-2}\delta(h))^2\right)\|\gamma-\gamma_a\|_{L^1(\partial\Omega)}\,,
\end{equation}
as long as $a$ and $K$ remain fixed.
Now, we insert \eqref{eq:R1+R2} and \eqref{eq:R3'} into
\eqref{Eq:tr:alpha=1/2}. Thanks to Lemma~\ref{Pro-dm} below, we may
apply the variational principle in Lemma~\ref{lem-VP-3}. That way,
we infer from \eqref{Eq:tr:alpha=1/2},
\begin{equation}\label{Eq:tr:alpha=1/2'}
\begin{aligned}
&-E(\lambda;h,\gamma,1/2)
\leq (1+\norm{k}_{\infty}\delta(h))^{-1} {\rm tr}[(\mathcal{P}_{h,\Omega}^{\alpha,\gamma}-\lambda h)\Gamma]
\\
&\qquad\leq - (1+\norm{k}_{\infty}\delta(h))^{-1}\Sum{p=1}{p_0}\Int{-K}{K}\Int{0}{|\partial\Omega|} (2\pi)^{-1}B_{\sigma}^{3/2}h^{1/2}\Big(\mu_{p}(\breve\gamma_{h,\sigma},\xi)-\frac{\lambda}{B_{\sigma}} \Big)_{-} {d\sigma d\xi}\\
&\qquad\qquad +o(h^{1/2})+2CKh^{1/2}\left(1+(a^{-2}\delta(h))^2\right)
\norm{\gamma-\gamma_{a}}_{L^{1}(\partial\Omega)}\,.
\end{aligned}
\end{equation}
Since ${\breve\gamma}_{h,\sigma}\rightarrow B_{\sigma}^{-1/2}
{\gamma_a(\sigma)}$ as $h\rightarrow 0$, and
${\breve\gamma}_{h,\sigma}$ remains bounded for fixed $a$, it
results from Lemma~\ref{lim-int} and dominated convergence that, as
$h\to0_+$,
\begin{equation}
\sum_{p=1}^{p_0}\int_{\mathbb{R}}\Big(\mu_{p}(\breve\gamma_{h,{\sigma}}\,,\,\xi)-\frac{\lambda}{B_{\sigma}}\Big)_{-}d\xi\to
\sum_{p=1}^{p_0}\int_{\mathbb{R}}\Big(\mu_{p}\big(B_{\sigma}^{-1/2}{\gamma_a(\sigma)},\xi\big)
-\frac{\lambda}{B_{\sigma}}\Big)_{-}d\xi.
\end{equation}
Since the function $\gamma_a$ is smooth and bounded (for every
fixed $a$), then by dominated convergence,
$$\int_{0}^{|\partial\Omega|}
\sum_{p=1}^{p_0}\int_{\mathbb{R}}\Big(\mu_{p}(\breve\gamma_{h,{\sigma}}\,,\,\xi)-\frac{\lambda}{B_{\sigma}}\Big)_{-}d\xi\,d\sigma
\to \int_{0}^{|\partial\Omega|}
\sum_{p=1}^{p_0}\int_{\mathbb{R}}\Big(\mu_{p}\big(B_{\sigma}^{-1/2}{\gamma_a(\sigma)}
,\xi\big) -\frac{\lambda}{B_{\sigma}}\Big)_{-}d\xi d\sigma.$$ Taking
$\limsup_{h\to 0}$ on both sides in \eqref{Eq:tr:alpha=1/2'}, it
follows that,
\begin{multline*}
\limsup_{h\to 0}\Big(-h^{-1/2}E(\lambda;h,\gamma,1/2)\Big)\\
\leq -\Sum{p=1}{p_0}\Int{-K}{K}\int_{\partial\Omega} (2\pi)^{-1}B(x)^{3/2}\Big(\mu_{p}(B(x)^{-1/2}{ \gamma_{a}(x)},\xi)-\dfrac{\lambda}{B(x)} \Big)_{-} { d\xi ds(x)} \,.
\end{multline*}
Now, we take the successive limits, $\limsup_{a\rightarrow 0_{+}}$
and $\lim_{K\rightarrow\infty}$ to obtain,
\begin{multline}\label{Eq:ub}
\limsup_{h\to 0}\Big(-h^{-1/2}E(\lambda;h,\gamma,1/2)\Big) \\
\leq -\lim_{K\rightarrow\infty}\Bigg\{\liminf_{a\rightarrow 0_{+}}\frac{1}{2\pi}\Sum{p=1}{p_0}\Int{-K}{K}\int_{\partial\Omega} B(x)^{3/2}\Big(\mu_{p}(B(x)^{-1/2}{ \gamma_{a}(x)},\xi)-\frac{\lambda}{B(x)}\Big)_{-} {d\xi ds(x)} \Bigg\}\,.
\end{multline}
Since $\gamma\in L^{\infty}(\partial\Omega)$, then $\|\gamma_{a}\|_{\infty}\leq \|\gamma\|_{\infty}$ and by dominated convergence, the right-hand side in \eqref{Eq:ub} is
\[
-\dfrac{1}{2\pi}\Sum{p=1}{p_0}\Int{-\infty}{\infty}\int_{\partial\Omega}B(x)^{3/2}\left(\mu_{p}(B(x)^{-1/2}{ \gamma(x)},\xi)-\frac{\lambda}{B(x)} \right)_{-} { d\xi ds(x)}\,.
\]
This finishes the proof of the upper bound in Theorem~\ref{thm:KN}.
It remains to verify that the density matrix $\Gamma$ satisfies the
necessary properties to apply the variational principle in
Lemma~\ref{lem-VP-3}. That is contained in
\begin{lem}\label{Pro-dm}
There exists a constant $C>0$ such that,
\begin{equation}\label{DM-cond}
\forall~f\in L^2(\Omega)\,,\quad
0\leq \langle \Gamma f, f\rangle_{L^{2}(\Omega)} \leq (1+\norm{k}_{\infty}\delta(h))\norm{f}^{2}_{L^{2}(\Omega)},
\end{equation}
where $\Gamma$ is as in \eqref{eq:Gamma}.
\end{lem}
\begin{proof}
Let $f\in L^{2}(\Omega)$. Due to the support of $\Gamma$ (in particular $\zeta_{1,h}$), we may suppose that ${\rm supp}~{f}\subset\{ x\in\overline{\Omega}~:~{\rm dist}(x,\partial\Omega)\leq \delta(h)\}.$
We compute,
\begin{multline}\label{eq1-g-gamma-g}
\langle f, \Gamma f\rangle_{L^{2}(\Omega)}\\
=({2\pi})^{-1}h^{-1/2}\Sum{p=1}{\infty}\iint M(h,\sigma,\xi,p,K)B_{\sigma}^{1/2}\left|\iint \overline{\widetilde f(s,t)}{\widetilde f}_{p}((s,t); h,\sigma,\xi)(1-tk(s)) ds dt \right|^{2}{d\sigma
d\xi}\,.
\end{multline}
We obtain an upper bound by replacing $M(h,\sigma,\xi,p,K)$ by $1$ in the above expression.
Define
\[
G(s,t)=\overline{\widetilde f(s,t)}\psi_{h}(s;\sigma)\zeta_{1,h}(t)e^{-i\phi_{\sigma}/h}(1-tk(s))\,.
\]
Using the Cauchy-Schwarz inequality and the fact that the family
$\big(u_{p,\breve\gamma_{h,\sigma}}(\cdot,\xi)\big)_{p\geq1}$ in the case $\alpha=1/2$
(or $\big(u_{p,0}(\cdot,\xi)\big)_{p\geq1}$ in the case $\alpha>1/2$) is an orthonormal
basis of $L^{2}(\mathbb{R}_{+})$ for all $\xi$, we get,
\begin{equation*}
\sum_{p=1}^{\infty}\left|\iint \overline{\widetilde f(s,t)}\widetilde f_{p}(s,t;h,\sigma,\xi)(1-tk(s))dsdt\, \right|^{2}\leq 2\pi\Int{t>0}{}\left|(\mathcal{F}_{s\rightarrow\xi}G)({B_{\sigma}^{1/2}h^{-1/2}}\xi)\right|^{2}dt\,.
\end{equation*}
Here, $\mathcal{F}_{s\rightarrow\xi}$ denotes the Fourier transform with respect to the variable $s$.
Integrating with respect to $\xi$ and using the Plancherel identity, we
find that,
\begin{align*}
&2\pi\Int{\mathbb{R}}{}\Int{t\in \mathbb{R}_{+}}{}\left|(\mathcal{F}_{s\rightarrow\xi}G)(B_{\sigma}^{1/2}h^{-1/2}\xi)\right|^{2}dtd\xi=
2\pi h^{1/2}B_{\sigma}^{-1/2}\Int{\mathbb{R}^{2}_{+}}{}|G(s,t)|^{2}dsdt.
\end{align*}
Consequently,
\begin{equation}\label{eq2-g-gamma-g}
\langle f, \Gamma f\rangle_{L^2(\Omega)}\leq \Int{0}{|\partial\Omega|}\int_{\mathbb{R}^{2}_{+}}|\widetilde f(s,t)|^{2}(1-tk(s))^{2}\psi_{h}^{2}(s;\sigma)\zeta_{1,h}^{2}(t)dsdtd\sigma.
\end{equation}
Carrying out the $\sigma$-integration first and using the normalization of $\psi_{h}$, we obtain
\begin{align*}
\langle f, \Gamma f\rangle_{L^{2}(\Omega)}&\leq \Int{\mathbb{R}^{2}_{+}}{}|\widetilde f(s,t)|^{2}(1-tk(s))^{2}\zeta_{1,h}^{2}(t)dsdt\\
&\leq (1+\delta(h)\norm{k}_{\infty})\Int{\Omega}{}|f(x)|^{2}dx.
\end{align*}
This finishes the proof of \eqref{DM-cond}.
\end{proof}
\section{Proof of Corollary~\ref{cor:KN}}\label{Sec:7}
We will prove the second assertion in \eqref{SA}. The first
assertion in \eqref{FA} can be proven similarly. Define
\begin{equation}\label{Def:fp}
f_{p}(x,\lambda):=\int_{\mathbb{R}}B(x)^{3/2}\Big(\mu_{p}\left(B(x)^{-1/2}\gamma(x),
\xi\right)-\frac{\lambda}{B(x)}\Big)_{-}d\xi\,.
\end{equation}
We start by computing the left and right derivatives of the
function $\lambda\mapsto f_{p}(x,\lambda)$. We find
\begin{equation}\label{right-der}
\dfrac{\partial f_{p}}{\partial\lambda_{+}}(x,\lambda)=\int_{
\{\xi\in\mathbb{R}~:~B(x)\mu_p\left(B(x)^{-1/2}\gamma(x),\xi\right)\leq\lambda\}}
B(x)^{1/2}d\xi\,,
\end{equation}
and
\begin{equation}
\dfrac{\partial f_{p}}{\partial\lambda_{-}}(x,\lambda)=\int_{
\{\xi\in\mathbb{R}~:~B(x)\mu_p\left(B(x)^{-1/2}\gamma(x),\xi\right)<\lambda\}}
B(x)^{1/2}d\xi.
\end{equation}
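Indeed, since $(t)_{-}=\max(-t,0)$, the integrand in \eqref{Def:fp} can be rewritten as
\[
B(x)^{3/2}\Big(\mu_{p}\big(B(x)^{-1/2}\gamma(x),\xi\big)-\frac{\lambda}{B(x)}\Big)_{-}
=B(x)^{1/2}\Big(\lambda-B(x)\mu_{p}\big(B(x)^{-1/2}\gamma(x),\xi\big)\Big)_{+}\,,
\]
and, for fixed $c\in\mathbb{R}$, the function $\lambda\mapsto(\lambda-c)_{+}$ has right derivative ${\bf 1}_{\{c\leq\lambda\}}(\lambda)$ and left derivative ${\bf 1}_{\{c<\lambda\}}(\lambda)$; differentiating under the integral sign yields the two formulas above.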
In view of Lemma~\ref{lem:Ka}, the equation
\[
\mu_{p}\big(B(x)^{-1/2}\gamma(x),\xi\big)={\lambda}B(x)^{-1},
\]
has exactly two solutions
\[
\xi_{p,\pm}:=\xi_{p,\pm}(\gamma^{\prime}_{x},\lambda^{\prime}_{x});\quad \gamma^{\prime}_{x}:=B(x)^{-1/2}\gamma(x)\quad{\rm and}\quad \lambda^{\prime}_{x}:=\lambda B(x)^{-1}.
\]
Since the set $\{\xi_{p,+},\xi_{p,-}\}$ has Lebesgue measure zero, the left and right derivatives coincide and we can write
\begin{equation}\label{Def:derfp}
\dfrac{\partial f_{p}}{\partial\lambda_{\pm}}(x,\lambda)=\int_{
\{\xi\in\mathbb{R}~:~B(x)\mu_p\left(B(x)^{-1/2}\gamma(x),\xi\right)\leq\lambda\}}
B(x)^{1/2}d\xi.
\end{equation}
Let $\varepsilon>0$. By the variational principle in
Lemma~\ref{lem-VP-3}, we have
\begin{equation}{\label{First-eq}}
E(\lambda+\varepsilon;h,\gamma,\alpha) -E(\lambda;h,\gamma,\alpha) \geq \varepsilon h {N}(\lambda;h,\gamma,\alpha).
\end{equation}
On the other hand, it follows from Theorem~\ref{thm:KN} that
\begin{equation}\label{F:thKN}
E(\lambda;h,\gamma,\alpha)
=\frac{h^{1/2}}{2\pi}\sum_{p=1}^{\infty}\int_{\partial\Omega}f_{p}(x,\lambda)ds(x)+h^{1/2}o(1).
\end{equation}
In light of Lemma~\ref{lem-lim-mu-j}, the sum on the right hand side of \eqref{F:thKN} is actually a sum of a finite number of terms. Thus $\sum_{p=1}^{\infty}$ can be replaced by $\sum_{p=1}^{p_0}$.
Inserting \eqref{F:thKN} into \eqref{First-eq}, then taking $\limsup_{h\rightarrow 0_{+}}$, we get
\begin{equation}
\limsup_{h\rightarrow 0_{+}}h^{1/2} {N}(\lambda;h,\gamma,\alpha)\leq\dfrac{1}{2\pi} \sum_{p=1}^{p_0}\int_{\partial\Omega}\dfrac{f_{p}(x,\lambda+\varepsilon)-f_{p}(x,\lambda)}{\varepsilon} ds(x).
\end{equation}
Taking the limit $\varepsilon\rightarrow 0_{+}$ and using dominated
convergence, we deduce that
\begin{equation}\label{u-b-n}
\limsup_{h\rightarrow 0_{+}} h^{1/2}{N}(\lambda;h,\gamma,\alpha)\leq \dfrac{1}{2\pi} \sum_{p=1}^{p_0}\int_{\partial\Omega}\dfrac{\partial f_p}{\partial\lambda_+}(x,\lambda)ds(x).
\end{equation}
Replacing $\varepsilon$ by $-\varepsilon$ in \eqref{First-eq} and following the same arguments that led to \eqref{u-b-n}, we find
\begin{equation}\label{l-b-n}
\liminf_{h\rightarrow 0_{+}} h^{1/2}{N}(\lambda;h,\gamma,\alpha)\geq \dfrac{1}{2\pi} \sum_{p=1}^{p_0}\int_{\partial\Omega}\dfrac{\partial f_p}{\partial\lambda_-}(x,\lambda)ds(x).
\end{equation}
By combining \eqref{u-b-n} and \eqref{l-b-n}, we obtain
\begin{equation}\label{F-eq-nb}
\lim_{h\rightarrow 0_{+}} h^{1/2}{N}(\lambda;h,\gamma,\alpha)= \dfrac{1}{2\pi} \sum_{p=1}^{p_0}\int_{\partial\Omega}\dfrac{\partial f_p}{\partial\lambda_\pm}(x,\lambda)ds(x)\,.
\end{equation}
Now, in light of \eqref{Def:derfp}, we finally get that
\begin{multline}\label{F-eq-nb'}
\lim_{h\rightarrow 0_{+}} h^{1/2}{N}(\lambda;h,\gamma,\alpha)
=\frac{1}{2\pi}\sum_{p=1}^\infty
\iint_{
\{(x,\xi)\in\partial\Omega\times\mathbb{R}~:~B(x)\mu_p\left(B(x)^{-1/2}\gamma(x),\xi\right)<\lambda\}}
B(x)^{1/2}d\xi ds(x)\,.
\end{multline}
This finishes the proof of \eqref{SA}.
\section{Proof of Theorem~\ref{thm:SQ}}
We will apply a simple scaling argument to pass from the
semi-classical limit to the large-area limit. Let $T$ be a positive number
and $\Omega_T=(0,T)\,\times\,(0,T)$. Define the operator,
$$P_{\Omega_T}=-(\nabla-i\mathbf {A}_0)^2\quad {\rm in}~L^2(\Omega_T)\,.$$
Functions in the domain of $P_{\Omega_T}$ satisfy the Neumann condition
$\nu\cdot(\nabla-i\mathbf {A}_0)u=0$ on the smooth parts of the boundary of
$\Omega_T$. We assume that the vector field $\mathbf {A}_0$ is given by
\begin{equation}\label{eq:vf}
\mathbf {A}_0(x_1,x_2)=(-x_2,0)\,,\quad\Big((x_1,x_2)\in\mathbb R^2\Big)\,.
\end{equation}
The operator $P_{\Omega_T}$ has compact resolvent and its spectrum
consists of an increasing sequence of eigenvalues $(e_j)_{j\geq1}$
converging to $\infty$. Note that the terms of the sequence $(e_j)$
are listed counting multiplicities. Given $\lambda\geq 0$, the
number of eigenvalues below $1+\lambda$ is finite. Denote by
\begin{equation}\label{nb-torus}
\mathcal N(\lambda,T)={\rm Card}\,\{j~:~e_j\leq 1+\lambda\}\,.
\end{equation}
By a scaling argument, Theorem~\ref{thm:SQ} follows from:
\begin{thm}\label{thm:TDL}
There exists a positive number $\delta$ such that,
$$\limsup_{T\to\infty}\frac{\mathcal
N(\lambda,T)}{T^2}=\frac1{2\pi}\,,\quad(\lambda\in[0,\delta])\,.$$
\end{thm}
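Let us record the scaling identity behind this reduction; the precise choice of the dilation parameter $\ell$ in terms of the parameters of Theorem~\ref{thm:SQ} is the one dictated by its statement. Since $\mathbf A_0$ is linear, for every $\ell>0$ the substitution $v(y):=u(\ell y)$ satisfies
\[
\big(\nabla_y-i\ell^{2}\mathbf A_0(y)\big)v(y)=\ell\,\big[(\nabla-i\mathbf A_0)u\big](\ell y)\,,
\]
so the Neumann realization of $-(\nabla-i\ell^{2}\mathbf A_0)^{2}$ on $\Omega_{T/\ell}$ is unitarily equivalent to $\ell^{2}P_{\Omega_T}$; in particular, its eigenvalues are $(\ell^{2}e_{j})_{j\geq1}$ and the corresponding eigenvalue counting functions coincide after rescaling the spectral parameter.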
\subsection{Preliminaries}\label{sec:p}
\subsubsection{Variational min-max principle}\label{sec:vp}
We shall need the following version of the variational min-max
principle.
\begin{thm}\label{thm:vp}
Let $A$ be a self-adjoint operator in a Hilbert space $H$. Suppose
that $A$ is semi-bounded (i.e. bounded from below) and has compact
resolvent. The terms of the sequence of eigenvalues of $A$ counting
multiplicities are given by,
$$\mu_n=\inf\Big\{\max_{\substack{\phi\in M\\\|\phi\|_H=1}}\langle
A\phi,\phi\rangle_H~:~M\subset D(A)\,,~{\rm dim}\,M=n\Big\}\,.$$
\end{thm}
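Although the theorem is stated for operators with compact resolvent, its finite-dimensional analogue (for Hermitian matrices) is easy to check numerically: the infimum is attained on the span of the first $n$ eigenvectors, and any other $n$-dimensional subspace gives a larger maximum. The following NumPy sketch, with an arbitrary symmetric matrix, is purely illustrative and not part of the argument:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random real symmetric (hence self-adjoint) 6x6 matrix.
B = rng.standard_normal((6, 6))
A = (B + B.T) / 2
eigvals, eigvecs = np.linalg.eigh(A)  # eigenvalues in ascending order

def max_rayleigh(A, M):
    """Max of <A phi, phi> over unit vectors phi in the span of the columns of M."""
    Q, _ = np.linalg.qr(M)            # orthonormal basis of the subspace
    return np.linalg.eigvalsh(Q.T @ A @ Q)[-1]

for n in range(1, 7):
    # The span of the first n eigenvectors attains the infimum ...
    attained = max_rayleigh(A, eigvecs[:, :n])
    assert np.isclose(attained, eigvals[n - 1])
    # ... while any other n-dimensional subspace gives a value >= mu_n.
    random_subspace = rng.standard_normal((6, n))
    assert max_rayleigh(A, random_subspace) >= eigvals[n - 1] - 1e-12
```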
\subsubsection{Rough bound for the operator
$P_{\Omega_T}$}\label{sec:torus}
Let $S$ and $T$ be positive numbers, and
$\Omega_{S,T}=(0,S)\times(0,T)$. Consider the operator,
$$P_{\Omega_{S,T}}=-(\nabla-i\mathbf {A}_0)^2\quad{\rm
in}~L^2(\Omega_{S,T})\,.$$ A function $u(x_1,x_2)$ in the domain of
$P_{\Omega_{S,T}}$ satisfies Neumann condition at $x_2=0$, Dirichlet
condition at $x_2=T$, and periodic conditions at $x_1\in\{0,S\}$.
Define,
$$\mathcal N(\lambda;S,T)={\rm tr}\Big(\mathbf
1_{(-\infty,1+\lambda]}(P_{\Omega_{S,T}})\Big)\,.$$
Along the proof of Lemma~3.1 in \cite{Fo-Ka}, a useful rough bound on
$\mathcal N(\lambda;S,T)$ is given. We recall this bound below.
\begin{lem}\label{roughestimate'}
There exist positive constants $C$, $T_0$ and $\lambda_0$ such that,
for all $T\geq T_0$, $\lambda\in[0,\lambda_0]$ and $S>0$, we have,
\begin{equation}\label{eq-cyl:nb}
\mathcal N(\lambda;S,T)\leq CST \,.
\end{equation}
\end{lem}
\subsubsection{The Dirichlet operator in a
square}\label{sec:Dirchlet}
Recall the magnetic potential $\mathbf {A}_0$ in \eqref{eq:vf}. Consider a
positive real number $R$ and the operator
$P^D_{\Omega_R}=-(\nabla-i\mathbf {A}_0)^2$ in the square
$\Omega_R=(0,R)\times(0,R)$ and with Dirichlet boundary conditions.
If $\Lambda\in\mathbb{R}$, we define the functions,
\begin{equation}\label{eq-nub}
\nu_{b}(\Lambda)=\frac{1}{2\pi}{\rm Card}\,\left\{n\in\mathbb{N}~:~2n-1\leq\Lambda\right\}\,,
\end{equation}
and
\begin{equation}\label{eq-ND}
N\left(\Lambda ,P^D_{\Omega_R}\right)={\rm tr}\Big(\mathbf 1_{(-\infty,\Lambda ]}(P^D_{\Omega_R})\Big)\,.
\end{equation}
The next two-sided estimate on the eigenvalue counting function
of the operator $P^D_{\Omega_R}$ is proved in \cite[Thm.~3.1]{CdV}.
\begin{lem}\label{lem-CdV}
There exists a constant $C>0$ such that, for all $\Lambda\in\mathbb{R}$,
$R>0$ and $A\in(0,R/2)$, the following two-sided estimate holds
true,
$$
(R-A)^2\nu_{b}\left(\Lambda-\frac{C}{A^2}\right)\leq
N\left(\Lambda ,P^D_{\Omega_R}\right)\leq R^2\nu_{b}(\Lambda)\,.
$$
In particular, if $\Lambda<3$, then
$$N\left(\Lambda ,P^D_{\Omega_R}\right)\leq
\frac{R^2}{2\pi}\,.$$
\end{lem}
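The function $\nu_b$ simply counts the Landau levels $2n-1$, $n\geq1$, lying below $\Lambda$, normalized by $2\pi$. A small illustrative check (not part of the proof) of this definition and of the last bound of the lemma — for $\Lambda<3$ only the lowest level can contribute:

```python
import math

def nu_b(Lam):
    """(1/2pi) * Card{ n >= 1 : 2n - 1 <= Lam }, the integrated density of
    states of the Landau levels 2n-1 at unit field."""
    if Lam < 1:
        return 0.0
    # n ranges over 1 <= n <= (Lam + 1)/2
    return math.floor((Lam + 1) / 2) / (2 * math.pi)

# For Lam < 3 only the lowest Landau level (2n-1 = 1) contributes,
# so R^2 * nu_b(Lam) <= R^2 / (2 pi), matching the last bound of the lemma.
for Lam in [0.5, 1.0, 2.0, 2.99]:
    assert nu_b(Lam) <= 1 / (2 * math.pi)
assert nu_b(3.0) == 2 / (2 * math.pi)   # at Lam = 3 the second level enters
```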
\subsubsection{The periodic operator}\label{sec:per}
Consider a positive number $R$, the square
$\Omega_R=(0,R)\times(0,R)$ and the function space,
\begin{equation}\label{eq:ER}
E_R=\{u\in H^1_{\rm loc}(\mathbb{R}^2)~:~u(x_1+R,x_2)=u(x_1,x_2)~\&~u(x_1,x_2+R)=e^{-iRx_1}u(x_1,x_2)\,\}\,.
\end{equation}
Recall the magnetic potential $\mathbf {A}_0$ in \eqref{eq:vf}. If $u\in
E_R$, then $|u|$ and $|(\nabla-i\mathbf {A}_0)u|$ are periodic with respect
to the lattice generated by $\Omega_R$. Consider the self-adjoint
operator
$$P^{\rm per}_{\Omega_R}=-(\nabla-i\mathbf {A}_0)^2\quad{\rm in}\quad
L^2(\Omega_R)\,,$$ whose domain is that defined by the Friedrichs'
extension associated with the quadratic form,
$$E_R\ni f\mapsto
\int_{\Omega_R}|(\nabla-i\mathbf {A}_0)u|^2\,dx\,.$$ Denote by $(\mu_j)$ the
sequence of distinct eigenvalues of the operator $P^{\rm
per}_{\Omega_R}$. Let us recall the following classical results (see
\cite[Proposition~2.9]{FK3D}). These results are valid under the
assumption that $R^2/(2\pi)$ is a positive integer.
\begin{itemize}
\item The first eigenvalue of $P^{\rm per}_{\Omega_R}$ is $\mu_1(P^{\rm per}_{\Omega_R})=1$ and the second eigenvalue $\mu_2(P^{\rm per}_{\Omega_R})\geq3$.
\item The dimension of the eigenspace ${\rm Ker}(P^{\rm
per}_{\Omega_R}-{\rm Id})$ is $R^2/(2\pi)$\,.
\end{itemize}
As a consequence, we may state the following lemma.
\begin{lem}\label{lem:per}
Suppose that $R^2\in 2\pi\mathbb N$. If $0\leq\lambda<2$ and $ N_{\rm
per}(\lambda,R)={\rm tr}\big(\mathbf
1_{(-\infty,1+\lambda]}(P^{\rm per}_{\Omega_R})\big)$, then,
$$ N_{\rm
per}(\lambda,R)=\frac{R^2}{2\pi}\,.$$
\end{lem}
\subsubsection{The operator in a sector}\label{sec:sec}
Recall the magnetic potential $\mathbf {A}_0$ in \eqref{eq:vf}. Consider
the operator,
\begin{equation}\label{eq:sec}
P_{\Omega_{R,\pi/2}}=-(\nabla-i\mathbf {A}_0)^2\quad {\rm in}~L^2(\Omega_{R,\pi/2})\,,
\end{equation}
where $\Omega_{R,\pi/2}=\{(r\cos\theta,r\sin\theta)~:~0\leq
r<R\,,~0<\theta<\pi/2\,\}$. Functions in the domain of
$P_{\Omega_{R,\pi/2}}$ satisfy Neumann condition on $\theta=0$ and
$\theta=\pi/2$, and Dirichlet condition on $r=R$.
The operator $P_{\Omega_{R,\pi/2}}$ has compact resolvent and its
spectrum consists of an increasing sequence of eigenvalues $(\zeta_j)$
counting multiplicities. We introduce,
\begin{equation}\label{eq:nb-sec}
\mathcal N_{\rm sec}(\lambda,R)={\rm Card}\{j~:~\zeta_j\leq 1+\lambda\}\,.
\end{equation}
A useful rough bound on $\mathcal N_{\rm sec}(\lambda,R)$ is proved
in \cite{KK}. We recall this bound in the next lemma.
\begin{lem}\label{lem:sec}
There exist positive constants $C$, $R_0$ and $\lambda_1$ such that,
for all $R\geq R_0$ and $\lambda\in[0,\lambda_1]$, we have,
$$\mathcal N_{\rm sec}(\lambda,R)\leq C(R^2+1)\,.$$
\end{lem}
\subsection{Proof of Theorem~\ref{thm:TDL}}\label{sec:TDL} Throughout
this section, the following convention will be used. If $P$ is a
self-adjoint operator and $\Lambda<\inf\sigma_{\rm ess}(P)$, denote
by
$$ N(\Lambda, P)={\rm tr}\Big(\mathbf
1_{(-\infty,\Lambda]}(P)\Big)\,.$$
Recall the operator $P_{\Omega_T}$ and the number $\mathcal
N(\lambda, T)=N\big(1+\lambda,P_{\Omega_T}\big)$ introduced in
\eqref{nb-torus}.
We start by the observation:
\begin{lem}\label{lem:TDL-per}
Let $T_n=\sqrt{2\pi\, n}$, $n\in\mathbb N$. For all
$\lambda\in[0,2)$, there holds,
$$\frac{\mathcal N(\lambda,T_n)}{T_n^2}\geq \frac1{2\pi}\,.$$
\end{lem}
\begin{proof}
Recall the operator $P^{\rm per}_{\Omega_R}$ introduced in
Sec.~\ref{sec:per} together with the number $N_{\rm per}(\lambda,R)$
in Lemma~\ref{lem:per}. Notice that functions in the form domain of
$P^{\rm per}_{\Omega_R}$ are in $H^1(\Omega_R)$ and consequently in
the form domain of $P_{\Omega_R}$. The variational min-max principle
(Theorem~\ref{thm:vp}) then tells us that the eigenvalues of $P^{\rm
per}_{\Omega_R}$ are larger than the corresponding ones of
$P_{\Omega_R}$. Consequently (we use $R=T_n$),
$$\mathcal N(\lambda,T_n)\geq N_{\rm
per}(\lambda,T_n)\,.$$ Notice that $T_n^2\in2\pi\mathbb N$. Consequently, when $\lambda\in[0,2)$, it results from Lemma~\ref{lem:per} that
$$N_{\rm
per}(\lambda,T_n)=\frac{T_n^2}{2\pi}\,.$$ This proves Lemma~\ref{lem:TDL-per}.
\end{proof}
\begin{lem}\label{lem:subseq}
There exist positive constants $\delta\in(0,1)$, $T_1$ and $C$ such
that, for all $\lambda\in[0,\delta]$ and $T\geq T_1$, there holds,
$$\mathcal N(\lambda,T)\leq \frac{T^2}{2\pi}+CT\,.$$
\end{lem}
\begin{proof}
Consider a number $L\in(0,T)$. We cover the square
$\Omega_T=(0,T)\times(0,T)$ by sets $U$, $V_j$ and $U_j$,
$j\in\{1,2,3,4\}$ defined as follows:
\begin{align*}
&U=\left(\frac{L}2,T-\frac{L}2\right)\times \left(\frac{L}2,T-\frac{L}2\right)\,,\\
&U_1=\left(\frac{L}2,T-\frac{L}2\right)\times[0,L)\,,& U_2&=[0,L)\times \left(\frac{L}2,T-\frac{L}2\right)\,,\\
&U_3=\left(\frac{L}2,T-\frac{L}2\right)\times(T-L,T]\,,& U_4&=(T-L,T]\times\left(\frac{L}2,T-\frac{L}2\right)\,,\\
&V_1=[0,L)\times[0,L)\,,& V_2&=(T-L,T]\times [0,L)\,,\\
&V_3=[0,L)\times (T-L,T]\,,& V_4&=(T-L,T]\times(T-L,T]\,.
\end{align*}
Let $P_{V_j}$ and $P_{U_j}$ be self-adjoint realizations of the
operator $-(\nabla-i\mathbf {A}_0)^2$ in $L^2(V_j)$ and $L^2(U_j)$
respectively and defined as follows. For every $j$ and
$\Omega\in\{V_j,U_j\}$, functions in the domain of $P_{\Omega}$
satisfy Neumann condition on the common smooth
boundary of $\Omega$ and $\Omega_T$ and Dirichlet condition
elsewhere.
Notice that the operators $P_{V_j}$, $j\in\{1,2,3,4\}$, are unitarily
equivalent and have the same spectra. Also, it results from the
variational min-max principle that the spectrum of $P_{V_j}$ is
below that of the operator $P_{\Omega_{2L,\pi/2}}$ introduced in
Sec.~\ref{sec:sec} thereby obtaining,
$$\mathcal N(\lambda,P_{V_j})\leq \mathcal N_{\rm sec}(\lambda,2L)\,.$$
The operators $P_{U_j}$, $j\in\{1,2,3,4\}$, are unitarily equivalent
as well, and (recall the operator $P_{\Omega_{S,T}}$ introduced in
Sec.~\ref{sec:torus}),
$$\sigma(P_{U_j})=\sigma(P_{\Omega_{T-L,L}})\,,\quad(j\in\{1,2,3,4\})\,.$$
Consider a partition of unity
$$\sum_{j=1}^4\chi_j^2+\sum_{j=1}^4\varphi_j^2+f^2=1\quad{\rm
in~}\overline{\Omega_T}\,,$$ such that
$$\sum_{j=1}^4\left(|\nabla\chi_j|^2+|\nabla\varphi_j|^2\right)+|\nabla
f|^2\leq \frac{C}{L^2}\,,$$
$$
{\rm supp}\,\chi_j\subset V_j\,,\quad {\rm supp}\,\varphi_j\subset
U_j\,,\quad{\rm supp}\,f\subset U\,,$$
and $C$ is a universal constant.
Using the IMS decomposition formula, we may write for any function
$u$ in the form domain of $P_{\Omega_T}$,
\begin{align*}
q(u)&=q(fu)
+\sum_{j=1}^4q(\chi_ju)+
\sum_{j=1}^4q(\varphi_ju)
-\int_\Omega\left(|\nabla f|^2+\sum_{j=1}^4(|\nabla \chi_j|^2+|\nabla \varphi_j|^2)\right)|u|^2\,dx\\
&\geq q(fu)
+\sum_{j=1}^4q(\chi_ju)+
\sum_{j=1}^4q(\varphi_ju)-\frac{C}{L^2}\int_\Omega|u|^2\,dx\,,
\end{align*}
where the quadratic form $q$ is defined by,
$$q(v)=\displaystyle\int_\Omega|(\nabla-i\mathbf {A}_0)v|^2\,dx\,.$$
As has been proven in \cite{CdV}, it results from the variational
min-max principle (Theorem~\ref{thm:vp}):
\begin{equation}\label{eq:TDL-p}
\mathcal{N}(\lambda,P_{\Omega_T})\leq N(1+\lambda+\frac{C}{L^2},
P^D_{\Omega_{T-L}})+\sum_{j=1}^4 N(1+\lambda+\frac{C}{L^2},
P_{V_j})+\sum_{j=1}^4N(1+\lambda+\frac{C}{L^2},
P_{U_j})\,.\end{equation} Recall that the operator
$P^D_{\Omega_{R}}$ (with $R=T-L$) has been introduced in
Sec.~\ref{sec:Dirchlet}. Let
$\lambda_2=\frac12\min(\lambda_0,\lambda_1,1)$ where $\lambda_0$ and
$\lambda_1$ are as introduced in Lemmas~\ref{roughestimate'} and
\ref{lem:sec}. Select $L$ such that,
$$L\geq T_0\quad{\rm and}~ \lambda+\frac{C}{L^2}<\min(\lambda_0,\lambda_1,1)\,,\quad
(\lambda\in[0,\lambda_2])\,,$$ where $T_0$ is as in Lemma~\ref{roughestimate'}.
Consequently, it follows from
Lemma~\ref{lem-CdV} that,
$$N(1+\lambda+\frac{C}{L^2},
P^D_{\Omega_{T-L}})\leq \frac{(T-L)^2}{2\pi}\,.$$ Also, as pointed out
earlier and using Lemmas~\ref{roughestimate'} and \ref{lem:sec}, we
get for $\lambda\in[0,\lambda_2]$ and sufficiently large $T$,
\begin{align*}
&N(1+\lambda+\frac{C}{L^2},
P_{V_j})\leq\mathcal N_{\rm sec}(\lambda+\frac{C}{L^2},2L)\leq C(4L^2+1)\,,\\
&N(1+\lambda+\frac{C}{L^2},
P_{U_j})\leq \mathcal N(\lambda;T-L,L)\leq C(T-L)L\,.
\end{align*}
By substituting the above upper bounds into \eqref{eq:TDL-p}, we get
the upper bound in Lemma~\ref{lem:subseq}.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:TDL}]
Let $\lambda\in[0,\delta]$ with $\delta$ as in
Lemma~\ref{lem:subseq}. In light of Lemma~\ref{lem:TDL-per}, we get
$$\limsup_{T\to\infty}\frac{\mathcal N(\lambda,T)}{T^2}\geq
\frac1{2\pi}\,.$$ On the other hand, Lemma~\ref{lem:subseq} tells us that,
$$\limsup_{T\to\infty}\frac{\mathcal N(\lambda,T)}{T^2}\leq
\frac1{2\pi}\,.$$
Combining the two inequalities proves Theorem~\ref{thm:TDL}.
\end{proof}
\section{Supplementary information}
\subsection{The model}
We treat the cavity mode in second quantization, with $\hat n=\hat a^{\dagger}\hat a$ the photon number operator, $\hat a$ the bosonic operator destroying a quantum at energy $\hbar\omega_c$ and $\hat a^{\dagger}$ its adjoint. The resonator is driven by a classical field at frequency $\omega_p$, and we report the dynamics in the reference frame rotating with $\omega_p$. The ions behave as classically polarizable particles, which is valid when the atomic transition which couples with the cavity mode is far detuned from the fields, so that the detuning $\Delta_0=\omega_p-\omega_{\rm el}$ is the largest frequency of the problem. Note that since $|\Delta_0|\gg |\Delta_c|$, in the parameter regimes we consider, $\Delta_0\simeq \omega_c-\omega_{\rm el}$.
The coherent dynamics of $N$ ions and cavity field mode is governed by the Hamilton operator ($\hbar=1$)
\begin{equation}
\hat H=\sum_j\frac{\hat p_j^2}{2m}+V_{\rm ion}-\Delta_c\hat n + U_0\hat n \sum_j \cos^2k\hat x_j -{\rm i} (\eta\hat a -\eta^*\hat a^{\dagger})\,.
\end{equation}
The effect of cavity losses and damping on the motion is introduced by means of Heisenberg-Langevin (HL) equation. For the moment we assume that the only incoherent effects are losses of the cavity field at rate $\kappa$, so that the HL equation for the cavity field reads
\begin{equation}
\label{eq:a:0}
\dot {\hat a}=\frac{1}{\rm i}[\hat a,\hat H]-\kappa \hat a+\sqrt{2\kappa}\hat a_{\rm in}(t)\,.
\end{equation}
We perform the study in the semiclassical limit, assuming that the fluctuations about the mean values of field and atomic variables are sufficiently small to justify the treatment. To this aim, we decompose the operators as a sum of mean values and fluctuations according to the prescription
\begin{equation}
\label{eq:mean:delta}
\begin{array}{l}
\hat a = \bar a + \delta \hat a \,,\\
\hat x_j = \bar x_j + \delta \hat x_j \,,\\
\hat p_j = \bar p_j + \delta \hat p_j \,,\end{array}
\end{equation}
where $\langle \hat a \rangle= \bar a$, $\langle\hat x_j \rangle= \bar x_j $, and $\langle \hat p_j \rangle= \bar p_j$, while the expectation value of the fluctuations $\delta \hat a, \delta \hat x_j, \delta \hat p_j$ vanishes. \\
\begin{figure}
\includegraphics[width=3.2in]{omega_v_C.png}
\caption{(a) The phonon gap plotted as a function of $C$ with $\eta=100\kappa$ (blue line) and $\eta=150\kappa$ (orange line). The vanishing of the phonon gap happens at the sliding to pinned transition. (b) The phonon gap as a function of $\eta/\kappa$ for $C=2.4$ showing bistable states where a sliding (black line) and pinned (red dashed line) phase co-exist. The parameters are the same as those in Fig.2 in the main article.}
\label{phonons}
\end{figure}
{\it Mean values.} The mean values satisfy the equations of motion
\begin{align}
& \frac{\partial~}{\partial t} \bar a = ( {\rm i} \Delta_{\rm eff} - \kappa) \bar a + \eta \,, \label{eq:a eq}\\
& \frac{\partial~}{\partial t} \bar x_j = \frac{\bar p_j}{m} \,, \label{eq:r eq} \\
& \frac{\partial~}{\partial t} \bar p_j = - \partial_j V_{\rm ions} - U_0 \bar n \partial_j\cos^2(k\bar x_j) \,, \label{eq:p eq}
\end{align}
with $\bar n=|\bar a|^2$ and $\partial_j=\partial/\partial x_j$ the gradient with respect to the spatial coordinates of the $j$-th particle (evaluated at the equilibrium positions $\bar x_1,\ldots, \bar x_N$), while
\begin{equation}
\label{Delta:eff}
\Delta_{\rm eff}=\Delta_c-U_0NB_N\,.
\end{equation}
where $B_N=\frac{1}{N}\sum_{\ell=1}^{N}\cos^2(k\bar x_\ell)$.
In order to determine the classical equilibrium values we require that the quantities $\bar a$, $\bar x_j$ and $\bar p_j$ correspond to stationary solutions of the dynamical equations, namely $\partial_t \bar a=0$, $ \partial_t \bar x_j =0$, and $\partial_t \bar p_j =0$.
From \eqref{eq:a eq} we obtain
\begin{equation}
\label{bar:a}
\bar a=\frac{\eta}{\kappa-{\rm i}\Delta_{\rm eff}}
\end{equation}
and with no loss of generality, we choose the phase of $\eta$ such that $\bar{a}$ is real.
Setting Eq.~(\ref{eq:r eq}) to zero gives $\bar{p}_j=0$. Substituting the value of $\bar n=|\bar a|^2$ into Eq. (\ref{eq:p eq}), one finds that it can be cast in the form
\begin{equation}
\frac{\partial~}{\partial t} \bar{p}_j = - \partial_j V_{\rm ions} - \partial_jV_{\rm cav}\,,
\end{equation}
with
\begin{equation}
\partial_jV_{\rm cav}=-2\frac{\eta^2}{\kappa^2} \frac{U_0k\sin(k\bar x_j)\cos(k\bar x_j)}{1+(\Delta_c-U_0\sum_\ell\cos^2(k\bar x_\ell))^2/\kappa^2}
\end{equation}
which gives
\begin{equation} \label{eq:eff opt potential}
V_{\rm cav} = \frac{ |\eta|^2}{\kappa} \arctan \left(-\frac{\Delta_{\rm eff}}{\kappa} \right)\,.
\end{equation}
The equilibrium positions of the ions are then found by minimizing the total potential $V=V_{\rm ion}+V_{\rm cav}$ such that the forces due to the Coulomb repulsion, the confining potential and the cavity field are balanced.
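For concreteness, the stationary relations above can be evaluated numerically. The NumPy sketch below uses placeholder parameter values and ion positions (not those of the paper's figures); it computes $\Delta_{\rm eff}$, the mean field $\bar a$ of Eq.~\eqref{bar:a}, and the effective potential of Eq.~\eqref{eq:eff opt potential}:

```python
import numpy as np

# Illustrative parameters in arbitrary units (placeholders, not fit to the paper).
kappa, Delta_c, U0, eta, k = 1.0, -2.0, 0.1, 5.0, 2 * np.pi

def effective_detuning(x, Delta_c, U0, k):
    """Delta_eff = Delta_c - U0 * sum_l cos^2(k x_l)."""
    return Delta_c - U0 * np.sum(np.cos(k * x) ** 2)

def mean_field(x):
    """Stationary cavity amplitude  abar = eta / (kappa - i Delta_eff)."""
    return eta / (kappa - 1j * effective_detuning(x, Delta_c, U0, k))

def V_cav(x):
    """Effective cavity potential  (eta^2/kappa) * arctan(-Delta_eff/kappa)."""
    return (abs(eta) ** 2 / kappa) * np.arctan(
        -effective_detuning(x, Delta_c, U0, k) / kappa)

x = np.array([0.1, 0.35, 0.6])          # example ion positions
n_bar = abs(mean_field(x)) ** 2         # mean photon number
# Consistency check: |abar|^2 = eta^2 / (kappa^2 + Delta_eff^2).
d = effective_detuning(x, Delta_c, U0, k)
assert np.isclose(n_bar, eta ** 2 / (kappa ** 2 + d ** 2))
```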
{\it Phonon gap.} We determine the phonon gap by evaluating the dispersion relation of the ions' vibrations in the total potential $V$ (thus discarding the fluctuations of the cavity field but taking into account its mean effect). The phonon gap is plotted in Fig.~\ref{phonons}(a) for $N=11$ ions as a function of $C$. As noted in Ref.~\cite{Mata}, the phonon gap does not vanish in the sliding phase as it does in the FK model, due to the confining harmonic potential; however, as the depth of the cavity potential is increased, the phonon gap can fluctuate in the sliding phase. The transition from sliding to pinned can be visualized by the phonon gap vanishing at a critical cavity depth, after which the gap increases monotonically \cite{PeyrardAubry}. It should be noted that for an even number of ions a similar behaviour is observed; however, the phonon gap may not vanish around the transition point. This is a finite-size effect.
The phonon gap across the bistable region is plotted in Fig.~\ref{phonons}(b) as a function of $\eta$ in units of $\kappa$. The black line shows an extended sliding phase due to the ions in the chain becoming stable as a result of the cavity back-action flattening the shape of the cavity potential. At the critical point the phonon gap increases abruptly, signifying a drastic jump from the sliding to the pinned phase. The red dashed line shows the other stable configuration for the same parameters; however, in this case the sliding phase is shorter, as the ions move to the pinned phase at a lower critical value of the cavity depth.
{\it Fluctuations.} The coupled dynamics of the quantum fluctuations of field and motion are governed by the HL equations \cite{Collett}, which are found substituting the decomposition Eq.~\eqref{eq:mean:delta} into Eq.~\eqref{eq:a:0} and into the Heisenberg equations of motion for the center-of-mass variables, and using that the mean values are the stationary solutions. The equations read
\begin{align}
\label{eq:a fluct}
\delta\dot{\hat a} &= ({\rm i} \Delta_{\rm eff} - \kappa) \delta \hat a - {\rm i}U_0 \bar a \sum_\ell (\delta \hat x_\ell \partial_\ell)\cos^2(k\bar x_\ell) + \sqrt{2\kappa}\, \hat a_{\rm in}\,, \\
\label{eq:r fluct}
\delta\dot{\hat x}_j&= \frac{\delta \hat p_j}{m}\,, \phantom{\sum_k}\\
\label{eq:p fluct}
\delta\dot{\hat p}_j &= - \sum_\ell(\delta \hat x_\ell \partial_\ell) \left( \partial_j V_{\rm ions} + U_0 \bar n\partial_j\cos^2(k\bar x_j) \right) \nonumber \\
& \quad \, - U_0\left({\bar a}^* \delta \hat a + \bar a \delta \hat a^\dagger\right) \partial_j\cos^2(k\bar x_j) \,
\end{align}
where the derivatives in the expressions above are evaluated at the equilibrium positions.
For convenience we introduce the normal modes of the crystal, that characterize the dynamics of the ions when the coupling with the quantum fluctuations of the cavity field can be neglected:
\begin{equation}
\delta \hat x_j = \sum_n M_{jn} \sqrt{\frac{\hbar}{m\omega_n}} \hat q_n \, ,
\end{equation}
with $M_{jn}$ the element of the orthogonal matrix relating the local coordinates $\delta \hat x_j$ with the normal-mode coordinates,
that diagonalize Eqs. (\ref{eq:r fluct})-(\ref{eq:p fluct}) when the cavity fluctuations $\delta \hat a$ are set to zero;
the variables $\hat q_n$ are the dimensionless position coordinates of the normal modes.
We denote by $b_n$ and $b_n^\dagger$ the bosonic operators annihilating and creating, respectively, a phonon of the
normal mode at frequency $\omega_n$. They are defined through the equations $\hat q_n=(\hat b_n+\hat b_n^\dagger)/\sqrt{2}$ and
$p_n= {\rm i}(\hat b_n^\dagger-\hat b_n)/\sqrt{2}$, and the dynamical equations for the fluctuations then take the form:
\begin{align}
\label{eq:a}
\!\!\!& \delta \dot{\hat a} = ({\rm i} \Delta_{\rm eff} - \kappa) \delta \hat a - {\rm i} \bar a \sum_n c_n ( \hat b_n + \hat b_n^\dagger) + \sqrt{2\kappa}\, \hat a_{\rm in}\,, \\
\label{eq:b}
\!\!\!& \dot{\hat b}_n= - ({\rm i} \omega_n + \Gamma_n) \hat b_n - {\rm i} \bar a c_n (\delta \hat a + \delta \hat a^\dagger) + \sqrt{2\Gamma_n} \, \hat b_{{\rm in}, n}\,,
\end{align}
which also includes the coupling of mode $n$ to a reservoir at rate $\Gamma_n$. The corresponding Langevin force is described by the input noise operator $\hat b_{{\rm in}, n}$, with $\langle \hat b_{{\rm in},n} \rangle = 0$ and
\begin{equation} \label{eq:b-input}
\langle \hat b_{{\rm in}, n}^\dagger(t') \, \hat b_{{\rm in}, n'} (t'') \rangle = \bar N_n \, \delta_{n n'} \, \delta(t'-t'')\,.
\end{equation}
with $\bar N_n = \bar N (\omega_n)$ the mean excitation number of an oscillator of frequency $\omega_n$ at the temperature of the considered environment.
The coefficients $c_n$ in Eq. \eqref{eq:a}-\eqref{eq:b} read
\begin{equation}
c_n = \sqrt{\frac{\hbar}{2m\omega_n}} U_0\sum_j M_{jn} \partial_j \cos^2(k\bar x_j)\,, \label{eq:cavity-motion-coupling}
\end{equation}
where the derivatives are evaluated at the equilibrium positions $x_j$.
{\it Stability diagrams and cavity cooling}
The stability of the system is dependent on the equations of motion coupling the cavity and motional fluctuations which can be written in the compact form
\begin{equation}
\frac{d\overrightarrow{X}}{dt}=A\overrightarrow{X}+\overrightarrow{X}_{in}(t)
\end{equation}
where the matrix $A$ collects the coefficients of Eq.~\eqref{eq:a} and Eq.~\eqref{eq:b} \cite{Cormick}.
For the system to be stable the real parts of the eigenvalues of $A$ must be negative, otherwise at least one
mode of the chain will be heated. It can be shown that the stability condition is $\Delta_{\rm eff}<0$. The stable regions of our parameter space are plotted in Fig.~\ref{stab} for zero and finite $\Delta_c$, with $\Gamma_n = 0 \, \forall\, n$. In the extended stable region produced by the detuning, the chain is cooled by the cavity to sufficiently low temperatures (a minimum of around $125\,\mu$K for $\Delta_c=-10\kappa$), as shown in Fig.~\ref{temp}. The cavity-cooled region may be enlarged by selecting a larger $|\Delta_c|$, and the transition may be observed as a function of $\eta$ by choosing a specific $C$, thereby experiencing only a small increase in temperature.
In order to stabilize the rest of the unstable region, all the modes can be coupled to an environment that damps the excitations (this coupling is denoted by a finite $\Gamma_n$). To stabilize the entirety of the unstable regions shown here, we must choose the coupling to be $\Gamma_n=0.1\kappa$, with the temperature of the external bath being $T_{\rm ext}=100\,\mu$K.
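For a single vibrational mode, the drift matrix $A$ can be written down explicitly in the basis $(\delta\hat a,\delta\hat a^\dagger,\hat b_n,\hat b_n^\dagger)$ from Eqs.~\eqref{eq:a}-\eqref{eq:b}, and stability checked by inspecting the real parts of its eigenvalues. The sketch below uses placeholder parameters (with $g=\bar a c_n$); it illustrates the sign of $\Delta_{\rm eff}$ as the stability criterion but does not reproduce the diagrams of Fig.~\ref{stab}:

```python
import numpy as np

def drift_matrix(Delta, kappa, omega, Gamma, g):
    """Drift matrix of the linearized equations for a single mode n,
    in the basis (delta a, delta a^dagger, b, b^dagger); g = abar * c_n."""
    return np.array([
        [1j * Delta - kappa, 0.0, -1j * g, -1j * g],
        [0.0, -1j * Delta - kappa, 1j * g, 1j * g],
        [-1j * g, -1j * g, -(1j * omega + Gamma), 0.0],
        [1j * g, 1j * g, 0.0, 1j * omega - Gamma],
    ])

def stable(Delta, kappa=1.0, omega=1.0, Gamma=0.0, g=0.1):
    """Stable iff all eigenvalues of A have negative real part."""
    A = drift_matrix(Delta, kappa, omega, Gamma, g)
    return np.linalg.eigvals(A).real.max() < 0

assert stable(-1.0)          # Delta_eff < 0: cavity cooling, stable
assert not stable(+1.0)      # Delta_eff > 0: a mode is anti-damped (Gamma_n = 0)
```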
\begin{figure}
\includegraphics[width=80mm]{Stability_plotsX.png}
\caption{Unstable (yellow) and stable (blue) regions of the system for (a) $\Delta_c=0$ and (b) $\Delta_c=-2\kappa$. For the system to be stable the eigenvalues of the matrix $A$ (see Eq.~$42$ in \cite{Cormick}) must have negative real parts. If this is not the case then there will be at least one mode which is heated by the cavity. }
\label{stab}
\end{figure}
\begin{figure}
\subfigure{\includegraphics[width=41mm]{DCm2_Temp_PRLXX.png}}
\subfigure{\includegraphics[width=41mm]{DCm10_Temp_PRL2.png}}
\caption{Average chain temperature in Kelvin versus $C$ and $\eta$ in units of $\kappa$. The chain is cooled in the colored area by the cavity, which is reached by suitably modifying the detuning, shown for (a) $\Delta_c=-2\kappa$ and (b) $\Delta_c=-10\kappa$. The solid red line is the symmetry breaking transition, and the white area is the region of the parameter space which experiences heating from the cavity.}
\label{temp}
\end{figure}
\subsection{Evaluation of the restoring force}
To discern the sliding and pinned phases we calculate the restoring force required to return the central ion back to the maximum of the cavity field. This is analogous to the de-pinning force of the Frenkel-Kontorova model: in the sliding phase, with the symmetry of the system intact, the restoring force is zero, whereas in the pinned phase the restoring force becomes finite, realizing the growth of static friction in the system. A finite force is applied to all the ions equally, which may be physically implemented by tilting the lattice potential \cite{Tosatti}. The sum of the total forces acting on all the ions must vanish to ensure a stable solution, and these forces are given by
\begin{equation}
F=F_{\rm ion}+F_{\rm cav}+F_{T}=0
\end{equation}
where $F_{\rm ion}=-\frac{\partial V_{\rm ion}}{\partial x}$, $F_{\rm cav}=-\frac{\partial V_{\rm cav}}{\partial x}$, and $F_{T}$ is the finite force applied to the chain. The critical value of the restoring force is then taken as the finite force required to move the central ion back to the maximum of the cavity field, i.e.\ such that $\cos^2(k x_0)=1$, at which point $F_{\rm res}=F_{T}$. The restoring force is shown in Fig.~\ref{resF} as a function of $C$ for different pump strengths $\eta/\kappa$. Before the transition the restoring force is zero; after the transition it becomes finite, highlighting the broken symmetry of the system.
\begin{figure}
\includegraphics[width=3.2in]{Fres2.png}
\caption{Restoring force in units of $m\omega^2L$ versus $C$, where $L=(q^2/4\pi\epsilon_0 m \omega^2)^{1/3}$ and is of the order of the interparticle distance at the chain center $d$. Three different pump strengths are shown (from left to right): $\eta=100\kappa$ (blue), $\eta=105\kappa$ (red) and $\eta=110\kappa$ (yellow)
(the critical value of $C$ becomes smaller with increasing $\eta$). The symmetry breaking transition is also shown as a black dashed line for each case. The parameters are the same as those in Fig.~2 in the main article.}
\label{resF}
\end{figure}
\subsection{Spectrum at the cavity output}
We report here the analytical results, which were obtained in Ref.~\cite{Cormick}. The spectrum is calculated after evaluating the Fourier transform of the linearized HL equations for the fluctuations, using $\hat a=\bar a+\delta \hat a$. The quantum component of the spectrum reads
\begin{equation}
S(\nu) =\frac{\langle \delta \hat{\tilde{a}} (\nu)^\dagger \delta\hat{\tilde{a}} (\nu) \rangle}{\bar a^2} \,,
\end{equation}
where $\delta\hat{\tilde{a}} (\nu)$ is the Fourier transform of $\delta \hat a$, $\delta \hat{\tilde{a}} (\nu)^\dagger$ is the Hermitian conjugate of $\delta \hat{\tilde{a}} (\nu)$, and we omit the Rayleigh peak at $\nu=0$, i.e., $\omega=\omega_p$, which corresponds to the classical part. After some algebra, one finds the expressions
\begin{eqnarray}
S(\nu)&=&S_0(\nu)\Bigg(\frac{4\kappa|\theta(\nu)|^2 \bar{a}^2}{\kappa^2+(\nu-\Delta_{\rm eff})^2}\nonumber \\
&&+\sum_n c_n^2\Gamma_n^2\left(\frac{\bar{N}_n}{\Gamma_n^2+(\omega_n-\nu)^2}+\frac{\bar{N}_n+1}{\Gamma_n^2+(\omega_n+\nu)^2}\right)\Bigg)\nonumber\\
\end{eqnarray}
where the first term is due to coupling of the quantum vacuum with the crystal vibrations, with $\theta(\nu)=\sum_n \frac{c_n^2 \omega_n}{\omega_n^2+(\Gamma_n-i\nu)^2}$,
and the second term is due to thermal noise coupling to the modes. The prefactor is given by
\begin{equation}
S_0(\nu)=\frac{2}{\kappa^2+(\nu+\Delta_{\rm eff})^2}\left\vert 1+\frac{4 \theta(\nu)\Delta_{\rm eff}\bar{a}^2}{(\kappa-i\nu)^2+\Delta_{\rm eff}^2}\right\vert^{-2}.
\end{equation}
These are the formulas we employed to evaluate Fig.~3.
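For reference, the expressions above can be transcribed directly into code. The following sketch evaluates $S(\nu)$ for a single vibrational mode with placeholder parameters (not those of Fig.~3) and verifies that the resulting spectrum is non-negative:

```python
import numpy as np

# Single vibrational mode; all numerical values are placeholders.
kappa, Delta_eff, abar = 1.0, -2.0, 3.0
omega_n, Gamma_n, c_n, N_n = 1.5, 0.05, 0.2, 0.1

def theta(nu):
    """theta(nu) = c_n^2 omega_n / (omega_n^2 + (Gamma_n - i nu)^2), one mode."""
    return c_n**2 * omega_n / (omega_n**2 + (Gamma_n - 1j * nu)**2)

def S0(nu):
    """Prefactor of the spectrum."""
    pref = 2.0 / (kappa**2 + (nu + Delta_eff)**2)
    back = 1.0 + 4.0 * theta(nu) * Delta_eff * abar**2 / ((kappa - 1j * nu)**2
                                                          + Delta_eff**2)
    return pref * abs(back)**(-2)

def S(nu):
    """Quantum component of the output spectrum: vacuum + thermal terms."""
    vac = 4.0 * kappa * abs(theta(nu))**2 * abar**2 / (kappa**2
                                                       + (nu - Delta_eff)**2)
    th = c_n**2 * Gamma_n**2 * (N_n / (Gamma_n**2 + (omega_n - nu)**2)
                                + (N_n + 1) / (Gamma_n**2 + (omega_n + nu)**2))
    return S0(nu) * (vac + th)

nus = np.linspace(-4, 4, 801)
vals = np.array([S(nu) for nu in nus])
assert np.all(vals >= 0)            # the spectrum is non-negative
```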
\subsection{Independent Phase Shift-based Data Augmentation}
The first algorithm is based on the observation that in wireless systems, the clocks of different APs suffer from phase noise and drift that is independent between APs and independent of the phase of the UE.\footnote{Note that further phase variation/drift could arise from small scale variation in the environment or variation of the capturing devices, though such changes might be correlated.} This \emph{independent phase shift} inspires us to generate augmented CSI data by having each AP add an independent phase shift to each recorded measurement signal coming from the UE. Note, however, that this phase shift is the same over the different subcarriers on one AP, since it arises from the same physical source.
For example, when looking at only one AP, we generate a random phase $\theta \sim \mathcal{U}[0, 2\pi]$, which is then added to all signals measured by the $k^{\text{th}}$ AP of interest, corresponding to $M \times N_{\mathrm{RX}}$ different complex channel responses, by multiplying $e^{j\theta}$ with each channel response $h_{k,j}(f_m)$. As a result, a total of $N_{\mathrm{AP}}$ random phases are generated at each augmentation step, and these phases enter, as complex exponentials, the associated channel responses.
\begin{algorithm}
\KwIn{$\mathcal{D} = \{(\boldsymbol{x}_i, \boldsymbol{y}_i)\}_{i = 1}^N$, $N^{\star}$, $N, N_{\mathrm{AP}}, N_{\mathrm{RX}}, M$}
\KwOut{$\mathcal{D}^{\star} = \{(\boldsymbol{x}_i, {\boldsymbol{y}_i})\}_{i = 1}^{N^{\star}}$}
$i \gets 1$ \\
$j \gets 1$\\
$\mathcal{D}^{\star} \gets \mathcal{D}$\\
\While{$j + N \leq N^{\star}$}{
$\boldsymbol{\theta} \gets \mathcal{U}[0,2\pi]^{N_{\mathrm{AP}}}$\\
${\boldsymbol{x}_{i}^{\star}} \gets \boldsymbol{x}_i \otimes e^{j\boldsymbol{\theta}}$\\
$\mathcal{D}^{\star} \gets \mathcal{D}^{\star} \cup (\boldsymbol{x}_{i}^{\star}, \boldsymbol{y}_{i})$\\
$j \gets j + 1$ \\
$i \gets i + 1$\\
\If{$N \leq i$}{
$i \gets 1$
}
}
\caption{Independent Phase Shift-based Data Augmentation}
\label{algo1}
\end{algorithm}
Algorithm \ref{algo1} presents the augmentation procedure based on the independent phase shift discussed above. The operator $\otimes$ corresponds to the tensor product described earlier: each element of the random phase vector $e^{j\boldsymbol{\theta}}$ multiplies the corresponding AP's channel response, which is an $M \times N_{\mathrm{RX}}$ matrix.
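As a concrete illustration, Algorithm \ref{algo1} can be sketched in a few lines of NumPy. The array layout (samples $\times$ APs $\times$ subcarriers $\times$ RX antennas), the function name, and the label handling via a cycling index are assumptions made for this sketch, not the paper's actual implementation:

```python
import numpy as np

def phase_shift_augment(X, n_target, rng=None):
    """Algorithm 1 sketch: independent per-AP phase shifts.

    X        : complex CSI array of shape (N, N_AP, M, N_RX)
    n_target : desired total number of samples N* (>= N)
    Returns the augmented array of shape (n_target, N_AP, M, N_RX) and the
    cycling index used to repeat the labels y_i.
    """
    rng = np.random.default_rng(rng)
    N, n_ap, M, n_rx = X.shape
    n_extra = n_target - N
    idx = np.arange(n_extra) % N                      # cycle through the set
    theta = rng.uniform(0.0, 2 * np.pi, size=(n_extra, n_ap))
    # Same phase for all subcarriers/RX chains of one AP: broadcast over (M, N_RX).
    X_new = X[idx] * np.exp(1j * theta)[:, :, None, None]
    return np.concatenate([X, X_new], axis=0), idx

# Example: amplitudes are unchanged, only phases rotate.
rng = np.random.default_rng(1)
X = rng.standard_normal((4, 3, 8, 2)) + 1j * rng.standard_normal((4, 3, 8, 2))
X_aug, idx = phase_shift_augment(X, 10)
assert X_aug.shape == (10, 3, 8, 2)
assert np.allclose(np.abs(X_aug[4:]), np.abs(X[idx]))
```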
\subsection{Data Augmentation with Random Amplitude} \label{sec:algo2}
In the first algorithm, we tried to mimic the phase drift appearing in most wireless systems. Here, we propose an augmentation algorithm that emulates potential amplifier fluctuations, which can result, e.g., from temperature drift of the amplifiers. To leverage this phenomenon, we uniformly generate an amplitude from the interval $[-P^{\star}, P^{\star}]$ dB for each anchor point, where $P^{\star}$ is a user-defined parameter (alternative statistics of the fluctuations, possibly based on measurements of typical devices, can be used instead). Then, this amplitude is added (on a dB scale) to all signals measured by that anchor point, similar to the procedure in Algorithm \ref{algo1}. Note that by adding these random fluctuations, we do not mimic the fading but the actual fluctuations caused by the measurement device. Furthermore, this fluctuation is also fundamentally different from random noise injection. Algorithm \ref{algo2} provides a detailed description of the procedure.
\begin{algorithm}
\KwIn{$\mathcal{D} = \{(\boldsymbol{x}_i, \boldsymbol{y}_i)\}_{i = 1}^N$, $N^{\star}$, $N, N_{\mathrm{AP}}, N_{\mathrm{RX}}, M$}
\KwOut{$\mathcal{D}^{\star} = \{(\boldsymbol{x}_i, \boldsymbol{y}_i)\}_{i = 1}^{N^{\star}}$}
$i \gets 1$ \\
$j \gets 1$\\
$\mathcal{D}^{\star} \gets \mathcal{D}$\\
\While{$j + N \leq N^{\star}$}{
$\boldsymbol{P} \gets \mathcal{U}[-P^{\star},P^{\star}]^{N_{\mathrm{AP}}}$\\
$\boldsymbol{x}_{i}^{\star} \gets \boldsymbol{x}_i \otimes 10^{\boldsymbol{P}/20}$\\
$\mathcal{D}^{\star} \gets \mathcal{D}^{\star} \cup (\boldsymbol{x}_{i}^{\star}, \boldsymbol{y}_{i})$\\
$j \gets j + 1$ \\
$i \gets i + 1$\\
\If{$N \leq i$}{
$i \gets 1$
}
}
\caption{Random Amplitude-based Data Augmentation}
\label{algo2}
\end{algorithm}
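A corresponding NumPy sketch of Algorithm \ref{algo2} is given below. As before, the array layout and helper names are illustrative assumptions; the dB-to-linear conversion $10^{P/20}$ reflects that the drawn fluctuation acts on the signal amplitude:

```python
import numpy as np

def amplitude_augment(X, n_target, P_star, rng=None):
    """Algorithm 2 sketch: per-AP amplitude fluctuations of +/- P_star dB.

    X        : complex CSI array of shape (N, N_AP, M, N_RX)
    n_target : desired total number of samples N* (>= N)
    """
    rng = np.random.default_rng(rng)
    N, n_ap, _, _ = X.shape
    n_extra = n_target - N
    idx = np.arange(n_extra) % N                      # cycle through the set
    P_db = rng.uniform(-P_star, P_star, size=(n_extra, n_ap))
    gain = 10.0 ** (P_db / 20.0)                      # dB -> linear amplitude
    # Same gain for all subcarriers/RX chains of one AP.
    X_new = X[idx] * gain[:, :, None, None]
    return np.concatenate([X, X_new], axis=0)

# Example: gains of the augmented samples stay within +/- P_star dB.
X = np.ones((4, 3, 8, 2), dtype=complex)
X_aug = amplitude_augment(X, 9, P_star=3.0, rng=0)
assert X_aug.shape == (9, 3, 8, 2)
a = np.abs(X_aug[4:])
assert np.all(a >= 10 ** (-3 / 20) - 1e-12) and np.all(a <= 10 ** (3 / 20) + 1e-12)
```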
\begin{comment}
\subsection{\tred{Correlation Based Augmentation}}
The second algorithm is based on creating different realizations of the small-scale fading that are consistent with a measured transfer function. In other words, we create channel realizations that will occur somewhere in the close vicinity (typically $10-40 \lambda$) of the measurement point, and assign them the same location label as the actually measured point (this labeling will be discussed further below). The creation of the different small-scale realizations is based on the Wide-Sense Stationary Uncorrelated Scattering (WSSUS) assumption, and the fact that frequency- and spatial variations of the channel gain are caused by the same physical effect, namely the superposition of the different multi-path components \cite{prof_molisch}.
Under the WSSUS assumption, the frequency normalized correlation function for $k^{\text{th}}$ anchor point and UE position $i$ can be written as:
\begin{align}
R_k(f_m, f_n; i) &= \frac{\mathbb{E}_{f}[|h_{k}(f_m ;i )h_{k}^{*}(f_n;i)|]}{\mathbb{E}_{f}[|h_{k}(f;i)h_{k}^*(f;i)|]}\label{eq 1}\\
&= \frac{\mathbb{E}_f[|h_{k}(f;i)h_{k}^*(f + \Delta f; i)|]}{\mathbb{E}_f[|h_{k}(f; i)h_{k}^*(f;i)|]} \label{eq 2} \\
&\triangleq R_k((m - n)\Delta f;i) \label{eq 3}
\end{align}
where $R_k(\cdot, \cdot, ; i)$ is the autocorrelation function of the random process $h_{k}(f;i)$. Eq. (\ref{eq 3}) follows from the WSSUS assumption since the autocorrelation function does not depend on individual $f_m, f_n$ but the difference $(m-n)\Delta f$, where $\Delta f $ is the frequency spacing between consecutive subcarries. The data augmentation is based on generating more $h_{k}(f; i)$ that obey the autocorrelation function given in Eq. (\ref{eq 1}). Note that we dropped $j$ from the previous channel model description as we deal with only one RX per AP.
Since an observation of the ensemble is not available, we approximate the expectation in \ref{eq 3} by measurements we have. We denote the estimate as $\hat{R}_k(n\Delta f; i)$ for and it is given as
\begin{equation}
\hat{R}_k(n\Delta f;i) = \frac{\Tilde{R}_k(n\Delta f;i)}{\frac{1}{M}\sum_{m = 1}^M |h_k(f_m;i)h_k^*(f_m;i)|}
\end{equation}
where $\Tilde{R}_k(n\Delta f; i)$ is defined as follows:
\begin{equation}\label{eq4}
\Tilde{R}_k(n\Delta f;i) \triangleq \frac{1}{\Tilde{M_n}}\sum_{m = 1}^{\Tilde{M_n}} |h_{k}(f_m)h^*_{k}(f_m + n\Delta f)|
\end{equation}
where $\Tilde{M_n}$ denotes the index of the largest subcarrier frequency that such that $f_{\Tilde{M_n}} + n\Delta f < f_M$ and $n \in \{0, 1, \dots M - 1\}$ Note that we only consider $n < \frac{M}{2}$ as correlation values for larger $n \Delta f$ may yield large approximation error due to lack of enough samples.
Then we define the normalized correlation matrix for $k^{\text{th}}$ AP and $i^{th}$ UE location as $ \Sigma_{k,i} &\triangleq \mathbb{E}[\boldsymbol{h}_{k,i}\boldsymbol{h}^{\dagger}_{k,i}]$, which yields the following:
\begin{align}
\Sigma_{k,i} =
\begin{bsmallmatrix}
R_{k}(0) & R_{k}(\Delta f)& \cdots & R_{k}((M-1)\Delta f) \\
R_{k}(-\Delta f) & R_{k}(0) & \cdots & R_{k}((M - 2)\Delta f) \\
\vdots & \vdots & \ddots & \vdots \\
R_{k}(-(M - 1) \Delta f)& R_{k}(-(M-2)\Delta f)& \cdots & R_{k}(0)
\end{bsmallmatrix}
\end{align}
where $\boldsymbol{h}_{k,i} \in \mathbb{C}^{M}$ is the channel response vector for corresponding AP and UE location and $\dagger$ corresponds to Hermitian operation.
As we assumed a WSSUS model, each realization of $h_{k}(f_m;i)$ obeys the above correlation estimations. Thus, we can create transfer functions as realizations of $\boldsymbol{h}_{k, i} \sim \mathcal{CN}(0, \Sigma_{k,i})$ where $\Sigma_{k,i}$ is the covariance matrix. Off-diagonal elements of $\Sigma_{k,i}$ is estimated through Eq. (\ref{eq4}) and diagonal elements are simply 1. Moreover, as we only estimate a portion of correlations, others are set to zero and $R_k(\Delta f; i) = R_k(-\Delta f; i)$ could be used to construct the matrix. Since the dimension of the generated vector is larger than $\Tilde{M}$, elements that cannot be directly computed are set to zero. This implies that the correlation function decays to zero for large subcarrier spacings, which is physically reasonable.
To generate correlated random variables, first uncorrelated complex Gaussian random vector $\boldsymbol{z}$ is generated. Then, we get $C = \Sigma\Sigma^{\dagger}$, where this decomposition can be made with a choice of a generic algorithm such as Cholesky. Afterwards, we get $\boldsymbol{h} = C\boldsymbol{z}$ as correlated random variables, where their covariance matrix obeys Eq. (\ref{eq 1}).
Algorithm (\ref{algo2}) summarizes the procedure, which produces fresh samples for the location $i$ with the same label $\boldsymbol{y}_i$.
\begin{algorithm}
\KwIn{$\mathcal{D} = \{(\boldsymbol{x}_i, \boldsymbol{y}_i\}_{i = 1}^N$, $N^{\star}, N_{\mathrm{AP}}, N_{\mathrm{RX}}, M$}
\KwOut{$\mathcal{D}^{\star} = \{(\boldsymbol{x}_i, \boldsymbol{y}_i)\}_{i = 1}^{N^{\star}}$}
$i \gets 1$ \\
$j \gets 1$ \\
$\mathcal{D}^{\star} \gets \mathcal{D}$\\
\While{$j + N \leq N^{\star}$}{
\For{$k = 1 ;\ k \leq N_{\mathrm{AP}} ;\ k = k + 1$}{
$Compute \,\, \Sigma_{k,i}$ \\
$C_{k,i} = \Sigma_{k, i}^{1/2}$ \\
$\boldsymbol{z} \gets \mathcal{CN}(0, 1)^{M}$ \\
$\boldsymbol{h}_{k}(f;i) \gets C_{k,i}\boldsymbol{z}$ \\
$\boldsymbol{x}^{\star} \gets [\boldsymbol{x}^{\star}, \; \boldsymbol{h}_{k}(f;i)]$ \\ \label{stacking}
}
$\mathcal{D}^{\star} \gets \mathcal{D}^{\star} \cup (\boldsymbol{x}^{\star}, \boldsymbol{y}_i)$\\
$j \gets j + M$ \\
$i \gets i + 1$ \\
\If{$N \leq i$}{
$i \gets 1$
}
}
\EndWhile
\caption{Correlation Based Augmentation Algorithm}
\label{algo2}
\end{algorithm}
where $\hat{R}_m^i$ and $C_m^i$ are computed in the mentioned way for each AP $m$ with respect to measurement data for the sample $i$. In Line (\ref{stacking}), the channel responses are stacked together, which creates $M$ samples for each step.
The algorithm above describes the approach when each AP has only a single antenna, and the APs are widely spaced apart, which makes $N_{\mathrm{RX}} = 1$. This is the reason why we omit the term $j$ in channel response subscript. For the case of multiple antennas, a generalization to a space-frequency autocorrelation function is required; this situation will be handled in our future work.
An important caveat for this algorithm is that it trades off the better trainability and increased robustness created by the data augmentation with the spatial resolution. Specifically, we assign all channel realizations (which represent a {\em region} of stationarity, typically of a size $10-20\lambda$), a {\em single} location label (that of a measured point). This causes a loss in spatial resolution. The algorithm is thus mainly beneficial in situations where only a small number of training data is available, since in that case the benefits of augmentation outweigh the loss of resolution.
\end{comment}
\subsection{Deep Learning}
\begin{figure}[]
\centering
\includegraphics[width=.65\linewidth]{figures/Fig_NN.pdf}
\caption{{Illustration of a fully connected feedforward neural network.}}
\label{neural_figure}
\end{figure}
Within the area of data-driven solution methods, the most widely applied approach is supervised learning, which works as follows.
Let $\mathcal{X} \subseteq \mathbb{R}^d$ be the input feature domain and $\mathcal{Y} \subseteq \mathbb{R}^m$ the label domain. We assume that there is a joint probability distribution $\mathcal{P}$ with a cumulative distribution function $F_\mathcal{P}: \mathcal{X} \times \mathcal{Y} \rightarrow [0, 1]$ governing both input and label domains.
The main objective of ML models is to find the best mapping $f: \mathcal{X} \rightarrow \mathcal{Y}$ from a hypothesis class $\mathcal{F}$ such that
\begin{equation}\label{test}
\argmin_{f \in \mathcal{F}} \mathbb{E}_{\boldsymbol{x}, \boldsymbol{y} \sim \mathcal{P}}[\mathcal{L} (f(\boldsymbol{x}), \boldsymbol{y})]
\end{equation}
where $\mathcal{L}: \mathcal{Y}\times \mathcal{Y} \rightarrow \mathbb{R}$ is a loss function, which measures the error between the estimated label $f (\boldsymbol{x}) = \hat{\boldsymbol{y}}$ and the true label $\boldsymbol{y}$. Moreover, we assume that there is a mapping $f^{\star}: \mathcal{X} \rightarrow \mathcal{Y}$ such that $f^{\star}(\boldsymbol{x}) = \boldsymbol{y}$ for all $\boldsymbol{x} \in \mathcal{X}$ and $\boldsymbol{y}\in\mathcal{Y}$. Then, the corresponding expectation in Eq. (\ref{test}), called the true risk $\mathcal{R}(f)$ of the model $f$, becomes 0. In our problem setup, we assume that such a mapping indeed exists.
Unfortunately, the true distribution $\mathcal{P}$ is not known in practice. Thus, different approaches, such as empirical risk minimization (ERM), are used to tackle this problem:
\begin{equation} \label{erm}
\hat{f} \triangleq \argmin_{f \in \mathcal{F}} \frac{1}{N}\sum_{i = 1}^N\mathcal{L}(f(\boldsymbol{x}_i), \boldsymbol{y}_i)
\end{equation}
Eq. (\ref{erm}) can be referred to as the training procedure, $\mathcal{D} \triangleq \{(\boldsymbol{x}_i, \boldsymbol{y}_i)\}_{i = 1}^N$ is the training dataset, and $\hat{f}$ is the ERM solution.
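A toy illustration of the ERM objective in Eq. (\ref{erm}), with squared error as the loss $\mathcal{L}$ and made-up data (the features, labels, and candidate model below are purely hypothetical):

```python
import numpy as np

def empirical_risk(f, X, Y):
    """Average squared-error loss of model f over a finite dataset,
    i.e., the ERM objective evaluated at f."""
    preds = np.array([f(x) for x in X])
    return float(np.mean(np.sum((preds - Y) ** 2, axis=-1)))

# hypothetical 1-D features mapped to 2-D "coordinate" labels
X = np.array([[0.0], [1.0], [2.0]])
Y = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
f_hat = lambda x: np.concatenate([x, x])  # a candidate model from the hypothesis class
risk = empirical_risk(f_hat, X, Y)
```

The ERM solution $\hat{f}$ is whichever candidate in the hypothesis class minimizes this sample average; here the candidate fits the data exactly, so its empirical risk is zero.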
There are also different enhancements for finding $\hat{f}$, such as regularized loss minimization, which aims to control the variance of the estimator and prevent over-fitting \cite{rlm}. Further methods and analysis can be found in \cite{deep_book}.
Neural networks are one paradigm for approximating $\hat{f}$. They consist of cascaded activation functions with corresponding weight vectors.
Following the notation in Fig. (\ref{neural_figure}), $\phi$ is the activation function, usually non-linear, and $\boldsymbol{W}$ and $\boldsymbol{V}$ are weight matrices. In this example there is only one hidden layer and the neural network is fully connected. We call a neural network a deep fully connected feedforward neural network if it has more than two hidden layers and all units of consecutive layers are connected to each other. The non-linearity of the activation functions, such as the rectified linear unit (ReLU), makes multi-layer feedforward neural networks universal approximators \cite{hornik}. This work uses such fully connected feedforward neural networks in all examples; other deep neural network architectures, such as CNNs, RNNs, and LSTMs, can be found in \cite{deep_book}.
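The one-hidden-layer forward pass of Fig. (\ref{neural_figure}) can be sketched as follows ($\boldsymbol{W}$, $\boldsymbol{V}$ are the weight matrices, $\phi$ the activation; the dimensions are toy values, not those used in the experiments):

```python
import numpy as np

def relu(z):
    """phi: the non-linear activation (rectified linear unit)."""
    return np.maximum(z, 0.0)

def forward(x, W, V, phi=relu):
    """One-hidden-layer fully connected network: y_hat = V phi(W x)."""
    return V @ phi(W @ x)

rng = np.random.default_rng(1)
d, hidden, out = 8, 16, 2             # input dim, hidden units, output dim
W = rng.standard_normal((hidden, d))  # input-to-hidden weights
V = rng.standard_normal((out, hidden))  # hidden-to-output weights
y_hat = forward(rng.standard_normal(d), W, V)
```

A deep network simply cascades more such layers; training then amounts to adjusting $\boldsymbol{W}$ and $\boldsymbol{V}$ to minimize the empirical risk.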
\subsection{Data Augmentation}
As mentioned in the previous section, supervised learning problems use a dataset $\mathcal{D} = \{(\boldsymbol{x}_i, \boldsymbol{y}_i)\}_{i = 1}^N$, where $N$ is the number of data points and $(\boldsymbol{x}_i, \boldsymbol{y}_i)$ is the input feature-label tuple for data point $i$. The aim of augmentation is to find an operator $\mathcal{T}: \mathcal{X} \rightarrow \mathcal{X}$ to which the labels of the dataset $\mathcal{D}$ are invariant. Formally, if there exists a mapping $f: \mathcal{X} \rightarrow \mathcal{Y}$, then $f(\boldsymbol{x}) = f(\mathcal{T}(\boldsymbol{x}))$. In other words, if we apply the augmentation operator to an input feature $\boldsymbol{x}$, the corresponding label $\boldsymbol{y}$ remains the same. A well-known example in image classification is the rotation or translation of images. Our aim is to find such operators $\mathcal{T}$ for our indoor localization problem setup and its corresponding dataset.
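For the CSI setting considered here, one candidate label-preserving operator $\mathcal{T}$ is an independent per-AP phase rotation, the idea behind Algorithm \ref{algo1}. A sketch (tensor shape and label values are hypothetical):

```python
import numpy as np

def phase_augment(x, rng):
    """A label-preserving operator T: rotate each AP's CSI by an
    independent random phase in [0, 2*pi); the location label of the
    augmented sample is left unchanged."""
    n_ap = x.shape[-1]
    phases = np.exp(1j * rng.uniform(0.0, 2 * np.pi, size=n_ap))
    return x * phases                     # broadcast over the AP axis

rng = np.random.default_rng(2)
x = rng.standard_normal((234, 4, 4)) + 1j * rng.standard_normal((234, 4, 4))
y = np.array([3.2, 1.7])                  # made-up UE coordinate label
x_aug, y_aug = phase_augment(x, rng), y   # T(x) keeps the same label y
```

The rotation changes only the common phase per AP, so the CSI magnitudes, and hence the location information, are preserved, exactly the invariance property required of $\mathcal{T}$.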
Fig. (\ref{aug_figure}) shows the data augmentation procedure and how it connects to the neural network training. Note that data augmentation deals only with training data. Throughout the experiments we separate the training and test sets at the beginning, and apply augmentation techniques solely to the training set.
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{figures/Fig_DataAugmentation.pdf}
\caption{{Illustration of data augmentation process.}}
\label{aug_figure}
\end{figure}
\subsection{Indoor Localization}
We first briefly summarize the methods, the features they use, and available ML solutions to the indoor localization problem. The \emph{ML-based} methods considered in this paper use direct coordinates as their labels and different types of input features, such as RSSI or CSI, to find an appropriate matching.
Input features of these methods can be RSSI, CSI, CSI amplitude, CSI phase, or pre-processed features such as Angle of Arrival (AoA) images created from CSI data to feed CNN-based localization algorithms. These inputs can be acquired via different wireless technologies such as Bluetooth, WiFi, LTE, etc., and are used with different deep learning architectures such as CNNs and ResNets as supervised learning solutions \cite{survey}. There are also unsupervised and semi-supervised learning solutions, which are out of the scope of this paper. Further and more detailed information can be found in the survey paper \cite{survey}.
\subsection{System Model}
Assume there are $N_{\mathrm{AP}}$ wireless access points (APs) and each AP has $N_{\mathrm{RX}}$ antennas.
The system employs orthogonal frequency-division multiplexing (OFDM) with $M$ subcarriers.
Without loss of generality, here the localization operates based on uplink transmission.
Let $r_{k, j}$ be the received signal at the $k^{\text{th}}$ AP's $j^{\text{th}}$ antenna, where $k \in \{1,2, \dots N_{\mathrm{AP}}\}$, $j \in \{1,2, \dots N_{\mathrm{RX}}\}$.
Further, let the transmitted signal from position $i$ at subcarrier frequency $f_m$ be $s_i(f_m)$, where $i \in \{1,2, \dots N\}$ and $m \in \{1,2, \dots, M\}$, and let $h_{k,j}(f_m)$ be the channel frequency response at the $m^{\text{th}}$ subcarrier with respect to the $k^{\text{th}}$ anchor point's $j^{\text{th}}$ antenna. Then,
\begin{equation}
r_{k,j}(f_m; i) = h_{k,j}(f_m;i)s_i(f_m) + w_{k,j}
\end{equation}
where $w_{k,j} \sim \mathcal{CN}(0, \sigma^2_{k, j})$, i.e., the noise samples are i.i.d. zero-mean circularly symmetric complex Gaussian with variance $\sigma_{k,j}^2$.
Moreover, the channel response for an environment with $L$ multi-path components (MPC) is:
\begin{equation}
h_{k, j}(f_m) \triangleq \sum_{l = 1}^L \alpha_l a_{k,j}(\phi_l, \theta_l, f_m)e^{-j2\pi f_m\tau_l}
\end{equation}
where $a_{k,j}(\phi_l, \theta_l, f_m)$ is the antenna pattern of the $j^{\rm th}$ element with respect to azimuth angle $\phi_l$ and elevation angle $\theta_l$, and the $l^{\rm th}$ MPC has a complex amplitude gain $\alpha_l$. The complex exponential $e^{-j2\pi f_m \tau_l}$ characterizes the delay $\tau_l$ in the frequency domain.
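The channel model above can be sketched numerically as follows (an isotropic antenna pattern $a_{k,j} \equiv 1$ is assumed for simplicity, and the MPC gains, delays, and subcarrier grid are made-up values):

```python
import numpy as np

def channel_response(freqs, alphas, taus):
    """h(f_m) = sum_l alpha_l * exp(-j*2*pi*f_m*tau_l), i.e., the
    superposition of L multi-path components, with an isotropic
    antenna pattern a(.) = 1 assumed for simplicity."""
    f = np.asarray(freqs)[:, None]                   # shape (M, 1)
    return np.sum(alphas * np.exp(-2j * np.pi * f * taus), axis=1)

freqs = 5.18e9 + 312.5e3 * np.arange(234)            # hypothetical M = 234 grid
alphas = np.array([1.0, 0.4 * np.exp(1j * 0.7)])     # two MPC complex gains (made up)
taus = np.array([30e-9, 80e-9])                      # two MPC delays in seconds
h = channel_response(freqs, alphas, taus)
```

The superposition of the two delayed exponentials produces the frequency-selective fading pattern across subcarriers; with a single MPC the magnitude response is flat.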
\subsection{DL-based Indoor Localization}
We use feedforward neural networks for supervised learning as follows. Let $\mathcal{D} = \{(\boldsymbol{x}_i, \boldsymbol{y}_i)\}_{i = 1}^N$ be the dataset consisting of $N$ measurements (input features).
For each measurement $\boldsymbol{x}_i$, the corresponding label $\boldsymbol{y}_i$ consists of information about the location of the UE, e.g., the coordinates of the UE location or the corresponding fingerprint.
The input feature $\boldsymbol{x}_i \in \mathbb{R}^d$ is a real vector, which consists of the CSI of the measurement, in which the dimension $d$ depends on several factors, such as the number of anchor points, the number of receiver antennas, the number of subcarriers, etc.
Note that the complex-valued CSI data can be split into real and imaginary parts, which are then concatenated to form real-valued tensors for a neural network.
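A minimal sketch of this real/imaginary split (the tensor shape $M \times N_{\mathrm{RX}} \times N_{\mathrm{AP}}$ below uses the values of the dataset considered later, purely for illustration):

```python
import numpy as np

def csi_to_features(h):
    """Flatten a complex CSI tensor into a real-valued feature vector
    by concatenating its real and imaginary parts."""
    flat = h.reshape(-1)
    return np.concatenate([flat.real, flat.imag])

rng = np.random.default_rng(3)
# hypothetical CSI tensor: (M subcarriers, N_RX antennas, N_AP anchor points)
h = rng.standard_normal((234, 4, 4)) + 1j * rng.standard_normal((234, 4, 4))
x = csi_to_features(h)          # d = 2 * 234 * 4 * 4 real-valued entries
```

The resulting real vector is what the input dimension $d$ of the network refers to.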
In more involved localization algorithms, the input feature $\boldsymbol{x}_i$ could be 2-D data including, e.g., angle-of-arrival (AOA) information.
Then, depending on whether it is a fingerprinting or a direct-coordinate application, the label $\boldsymbol{y}_i \in \mathbb{R}^n$ can be a scalar fingerprint index or a 2-D coordinate.
Accordingly, the neural network can be trained to handle localization as a classification or a regression problem, respectively; in our examples we use fully connected feedforward neural networks solving a regression problem.
We are now ready to formulate the problem of data augmentation for the ML solution of the indoor localization problem. Let $\mathcal{F}$ be the hypothesis class and $\mathcal{A}$ be the algorithm, namely the deep learning training procedure, when it is fed the dataset $\mathcal{D}$. Then, let $f: \mathcal{X} \rightarrow \mathcal{Y}$ be the model output by algorithm $\mathcal{A}$, where $f \in \mathcal{F}$. We assume that the input features are channel response tensors $\boldsymbol{x} \in \mathcal{X} \subseteq \mathbb{C}^{M \times N_{\mathrm{RX}} \times N_{\mathrm{AP}}}$. The label $\boldsymbol{y} \in \mathcal{Y} \subseteq \mathbb{R}^2$ is the coordinates of the UE.
The problem this paper considers is finding an augmentation operator $\mathcal{T}: \mathcal{X} \rightarrow \mathcal{X}$ such that $\mathcal{R}(f^{\star}) \leq \mathcal{R}(f)$, where $f^{\star}$ is the output of algorithm $\mathcal{A}$ fed with the dataset $\mathcal{D}^{\star}$, which is produced by applying the operator $\mathcal{T}$ to the dataset $\mathcal{D}$.
\begin{comment}
\tredd{[JH: $\mathcal{R}(\cdot)$ seems has not been introduced; Is $\mathcal{A}$ representing a data augmentation method? what here algorithm $\mathcal{A}$ means?; in Algorithm and the section, I have changed $x^{\star}$ with $x_{i}^{\star}$, please confirm it.]}
\end{comment}
\subsection{Dataset}
\textbf{WILD Dataset}.\quad
The WILD dataset contains measurements from two different environments separately. The first one is an NLOS environment over 1500 sq. ft. with $N_{\mathrm{AP}}= 4$ APs and the second one is a 500 sq. ft. LOS environment with $N_{\mathrm{AP}} = 3$ APs, where each AP has $N_{\mathrm{RX}} = 4$ RX antennas. The dataset is based on a WiFi system with $M = 234$ subcarriers. Moreover, $N = 51613$ data points for the NLOS environment and $N = 56395$ for the LOS environment are labeled. The dataset presents the measurements as a complex 4-D tensor, i.e. $N \times M \times N_{\mathrm{RX}} \times N_{\mathrm{AP}}$. The corresponding labels for UE coordinates are given as $(x,y) \in \mathbb{R}^2$.
\begin{figure}[h!]
\centering
\subfloat[{LOS} \label{fig:LOS}]{\includegraphics[width=.45\columnwidth]{figures/LOS.jpg}} \
\subfloat[{NLOS} \label{fig:NLOS}]{\includegraphics[width=.53\columnwidth]{figures/NLOS.jpg}}
\caption{{Measurement scenario of original dataset (WILD \cite{wild}).}}
\label{fig:WILD}
\end{figure}
With the original dataset, we demonstrate the effect of our proposed augmentation method for three different sizes of the original dataset: \textit{i)} \emph{small} ($4$K samples), \textit{ii)} \emph{medium} ($20$K samples), and \textit{iii)} \emph{large} ($40$K samples).
Here, the small data regime refers to a dataset size that is small compared to usual deep learning applications. The medium case aims to show the transition from the low- to the high-data regime. Finally, we are interested in how much performance gain remains in the high-data regime, where we actually have enough data to perform reasonably well.
\begin{table}[h!]
\centering
\caption{{Measurement setup/scenario for original dataset (WILD \cite{wild}).}}
\resizebox{\columnwidth}{!}{\begin{minipage}[t]{0.9\columnwidth}
\centering
\input{tables/parameter_dataset}
\label{table_Paramter}
\end{minipage}}
\end{table}
\subsection{Impacts of Independent Phase Shift-based Data Augmentation (Algorithm \ref{algo1})}
To evaluate the performance of the independent phase shift as an augmentation method, we used a feedforward neural network with 3 hidden layers, each consisting of 256 neurons with ReLU activation functions, trained for 300 epochs. The augmentation method is applied to the NLOS and LOS datasets separately.
Each numerical column of Tables \ref{algo_1_small}, \ref{algo_1_medium}, and \ref{algo_1_large} gives the test set performance in terms of MSE in meters. \emph{No Augmentation} refers to training with the original data only, which in the low-data case means that we train with only 4000 samples. The other columns refer to training with a dataset that is augmented by the given multiple of the original size. For example, the low data regime with $\times 6$ means training with 24000 samples, where 20000 samples are generated with the augmentation algorithm and 4000 samples are the original data.
In all cases, data augmentation is very effective. First, compare the improvement of the accuracy for a given number of measured labeled data points; here, augmentation provides an MSE up to 3.25 times better. Of particular practical importance is the reduction of the MSE in the NLOS low-data case, where augmentation allows us to realize reasonable accuracy ($1.5$ m) instead of the more than $5$ m without augmentation. Moreover, we improve the localization performance even when the measured dataset is very large, reducing the MSE from $0.8$ to $0.4$ m in NLOS.
Another practically relevant question is: how much can we reduce the size of the {\em measured and labeled} training set without losing accuracy? We thus performed an additional experiment in which we augmented the low data regime to create a set as large as the high-data case, in other words, $\times 10$ the initial dataset size. We obtain an MSE of 0.823344 m in the NLOS case and 0.316802 m in the LOS case, which are similar to the \emph{No Augmentation} cases of the high data regime. These results show that we may actually achieve the same performance with only 10\% of the required measurement/labeling effort (remember that the high data regime has 40000 samples, whereas the low data regime has only 4000!).
\begin{table}[t!]
\centering
\caption{{Impact of Algorithm 1 for \emph{small} original dataset. MSE score comparison with respect to augmentation size.}}
\resizebox{1.\columnwidth}{!}{
\begin{minipage}[h]{.9\columnwidth}
\centering
\label{algo_1_small}
\input{tables/algo1_small_los_and_nlos}
\end{minipage}
}
\end{table}
\begin{table}[t!]
\centering
\caption{{Impact of Algorithm 1 for \emph{medium} sized original dataset. MSE score comparison with respect to augmentation size.}}
\resizebox{1.\columnwidth}{!}{
\begin{minipage}[h]{.9\columnwidth}
\centering
\label{algo_1_medium}
\input{tables/algo_1_medium_los_and_nlos}
\end{minipage}
}
\end{table}
\begin{table}[t!]
\centering
\caption{{Impact of Algorithm 1 for \emph{large} original dataset. MSE score comparison with respect to augmentation size.}}
\resizebox{1.\columnwidth}{!}{
\begin{minipage}[h]{.9\columnwidth}
\centering
\label{algo_1_large}
\input{tables/algo1_large_los_and_nlos}
\end{minipage}
\vspace{-3em}
}
\end{table}
\subsection{Impacts of Random Amplitude-based Data Augmentation (Algorithm \ref{algo2})}
For the experiments with the random-amplitude augmentation, we used 75 to 150 epochs; otherwise, the neural network architecture is kept the same as in the previous experiment. We used $P^{\star} = 1.5$ dB in the small and medium data cases, but $P^{\star} = 0.75$ dB in the large-data case (see below for the motivation).
We first notice that Algorithm \ref{algo2} does not provide a performance improvement for the large dataset, and can actually degrade performance; this is due to overfitting in the rich data set. The problem would be even more pronounced if we used $P^{\star}=1.5$ in the large-data case as well, which is why we adopted the smaller value of $0.75$.
However, for the medium and small datasets, Algorithm \ref{algo2} is still useful. Tables \ref{algo_2_small} and \ref{algo_2_medium} demonstrate significant performance improvements: up to a factor-2 reduction of the MSE in the NLOS/small-dataset case. In LOS environments, the performance improvement is less pronounced.
\begin{table}[h!]
\centering
\caption{{Impact of Algorithm 2 for \emph{small} original dataset. MSE score comparison with respect to augmentation size.}}
\resizebox{1.\columnwidth}{!}{
\begin{minipage}[h]{.9\columnwidth}
\centering
\label{algo_2_small}
\input{tables/algo_2_small_los_and_nlos}
\end{minipage}
}
\end{table}
\begin{table}[h!]
\centering
\caption{{Impact of Algorithm 2 for \emph{medium} sized original dataset. MSE score comparison with respect to augmentation size.}}
\resizebox{1.\columnwidth}{!}{
\begin{minipage}[h]{.9\columnwidth}
\centering
\label{algo_2_medium}
\input{tables/algo_2_medium_los_and_nlos}
\end{minipage}
}
\end{table}
\begin{table}[h!]
\centering
\caption{{Impact of Algorithm 2 for \emph{large} original dataset. MSE score comparison with respect to augmentation size.}}
\resizebox{1.\columnwidth}{!}{
\begin{minipage}[h]{.9\columnwidth}
\centering
\label{algo_2_large}
\input{tables/algo_2_large_los_and_nlos}
\end{minipage}
}
\end{table}
As a baseline, we also implemented random noise injection, which adds to each data point a realization of a zero-mean, unit-variance circularly symmetric complex Gaussian random variable. However, its MSE reduction is between $0$ and $50\%$, i.e., significantly less than the factor-3 reduction our method achieves.
\section{Introduction} \label{intro}
\input{chapters/introduction.tex}
\section{Background} \label{background}
\input{chapters/background.tex}
\section{CSI-based Indoor Localization} \label{sys_model}
\input{chapters/model.tex}
\section{Data Augmentation Methods for CSI-based Indoor Localization} \label{algos}
\input{chapters/algorithms.tex}
\section{Numerical Evaluation} \label{num_results}
\input{chapters/numerical.tex}
\section{Conclusion} \label{conclusion}
\input{chapters/conclusion.tex}
\vspace{-.6em}
\section*{ACKNOWLEDGEMENT}
\input{chapters/ack.tex}
\vspace{-.6em}
\bibliographystyle{IEEEtran}